Combined Feed
https://feeds.library.caltech.edu/people/Katz-J-N/combined.rss
A Caltech Library Repository Feed
Docs: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Published: Sat, 13 Apr 2024 01:25:16 +0000

Government Partisanship, Labor Organization, and Macroeconomic Performance: A Corrigendum
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455643
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503); R. Michael Alvarez (ORCID 0000-0002-8113-4451); Geoffrey Garrett; Peter Lange
Year: 1993
DOI: 10.2307/2938825
Alvarez, Garrett and Lange (1991) used cross-national panel data on the Organization for Economic Cooperation and Development nations to show that countries with left governments and encompassing labor movements enjoyed superior economic performance. Here we show that the standard errors reported in that article are incorrect. Reestimation of the model using ordinary least squares and robust standard errors upholds the major finding of Alvarez, Garrett and Lange regarding the political and institutional causes of economic growth but leaves the findings for unemployment and inflation open to question. We show that the model used by Alvarez, Garrett and Lange, feasible generalized least squares, cannot produce standard errors when the number of countries analyzed exceeds the length of the time period under analysis. Also, we argue that ordinary least squares with robust standard errors is superior to feasible generalized least squares for typical cross-national panel studies.
https://authors.library.caltech.edu/records/j4fy0-4nm32

What To Do (and Not To Do) with Time-Series Cross-Section Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455208
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 1995
DOI: 10.2307/2082979
We examine some issues in the estimation of time-series cross-section models, calling into
question the conclusions of many published studies, particularly in the field of comparative
political economy. We show that the generalized least squares approach of Parks produces
standard errors that lead to extreme overconfidence, often underestimating variability by 50% or
more. We also provide an alternative estimator of the standard errors that is correct when the error
structures show complications found in this type of model. Monte Carlo analysis shows that these
"panel-corrected standard errors" perform well. The utility of our approach is demonstrated via a
reanalysis of one "social democratic corporatist" model.
https://authors.library.caltech.edu/records/5q5x8-v1537
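
The "panel-corrected standard errors" referred to above replace the usual OLS covariance matrix with a sandwich estimator built from the contemporaneous covariance of the OLS residuals. A sketch of the standard formula for a balanced panel with N units and T periods (paraphrased for illustration, not quoted from the article):

```latex
% PCSE sandwich estimator for pooled OLS on balanced TSCS data,
% with observations stacked unit by unit and \hat{e}_{it} the OLS residuals.
\hat{\Sigma}_{ij} = \frac{1}{T}\sum_{t=1}^{T} \hat{e}_{it}\,\hat{e}_{jt},
\qquad
\widehat{\operatorname{Cov}}(\hat{\beta})
  = (X'X)^{-1}\, X' \bigl(\hat{\Sigma} \otimes I_T\bigr)\, X\, (X'X)^{-1}.
```

The reported panel-corrected standard errors are the square roots of the diagonal of this matrix.
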
Nuisance vs. Substance: Specifying and Estimating Time-Series-Cross-Section Models
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455306
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 1996
DOI: 10.1093/pan/6.1.1
In a previous article we showed that ordinary least squares with panel
corrected standard errors is superior to the Parks generalized least
squares approach to the estimation of time-series-cross-section models.
In this article we compare our proposed method with another leading
technique, Kmenta's "cross-sectionally heteroskedastic and timewise autocorrelated"
model. This estimator uses generalized least squares to
correct for both panel heteroskedasticity and temporally correlated errors.
We argue that it is best to model dynamics via a lagged dependent
variable rather than via serially correlated errors. The lagged dependent
variable approach makes it easier for researchers to examine dynamics
and allows for natural generalizations in a manner that the serially correlated
errors approach does not. We also show that the generalized
least squares correction for panel heteroskedasticity is, in general, no
improvement over ordinary least squares and is, in the presence of parameter
heterogeneity, inferior to it. In the conclusion we present a
unified method for analyzing time-series-cross-section data.
https://authors.library.caltech.edu/records/p7s9k-tw938

Careerism, Committee Assignments and the Electoral Connection
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456710
Authors: Jonathan N. Katz (ORCID 0000-0002-5287-3503); Brian R. Sala
Year: 1996
DOI: 10.2307/2082795
Most scholars agree that members of Congress are strongly motivated by their desire for reelection. This assumption implies that members of Congress adopt institutions, rules, and norms of behavior in part to serve their electoral interests. Direct tests of the electoral connection are rare, however, because
significant, exogenous changes in the electoral environment are difficult to identify. We develop and test an electoral rationale for the norm of committee assignment "property rights." We examine committee tenure patterns before and after a major, exogenous change in the electoral system-the states' rapid adoption of Australian ballot laws in the early 1890s. The ballot changes, we argue, induced new "personal vote" electoral incentives which contributed to the adoption of "modern" congressional institutions such as property rights to committee assignments. We demonstrate a marked increase in assignment stability after 1892, by which time
a majority of states had put the new ballot laws into force, and earlier than previous studies have suggested.
https://authors.library.caltech.edu/records/fj5hy-vm845

Why Did the Incumbency Advantage in U.S. House Elections Grow?
https://authors.library.caltech.edu/records/gehhp-j3h73
Authors: Gary W. Cox; Jonathan N. Katz
Year: 1996
Theory: A simple rational entry argument suggests that the value of incumbency
consists not just of a direct effect, reflecting the value of resources (such as staff)
attached to legislative office, but also of an indirect effect, reflecting the fact that
stronger challengers are less likely to contest incumbent-held seats. The indirect
effect is the product of a scare-off effect (the ability of incumbents to scare off
high-quality challengers) and a quality effect (reflecting how much electoral advantage
a party accrues when it has an experienced rather than an inexperienced
candidate).
Hypothesis: The growth of the overall incumbency advantage was driven principally
by increases in the quality effect.
Methods: We use a simple two-equation model, estimated by ordinary least-squares
regression, to analyze U.S. House election data from 1948 to 1990.
Results: Most of the increase in the incumbency advantage, at least down to 1980,
came through increases in the quality effect (i.e., the advantage to the incumbent
party of having a low-quality challenger). This suggests that the task for those
wishing to explain the growth in the vote-denominated incumbency advantage is
to explain why the quality effect grew. It also suggests that resource-based explanations
of the growth in the incumbency advantage cannot provide a full explanation.
https://authors.library.caltech.edu/records/gehhp-j3h73

Why Did the Incumbency Advantage in U.S. House Elections Grow?
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455847
Authors: Gary W. Cox; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 1996
DOI: 10.2307/2111633
Theory: A simple rational entry argument suggests that the value of incumbency
consists not just of a direct effect, reflecting the value of resources (such as staff)
attached to legislative office, but also of an indirect effect, reflecting the fact that
stronger challengers are less likely to contest incumbent-held seats. The indirect
effect is the product of a scare-off effect (the ability of incumbents to scare off
high-quality challengers) and a quality effect (reflecting how much electoral advantage
a party accrues when it has an experienced rather than an inexperienced
candidate).
Hypothesis: The growth of the overall incumbency advantage was driven principally
by increases in the quality effect.
Methods: We use a simple two-equation model, estimated by ordinary least-squares
regression, to analyze U.S. House election data from 1948 to 1990.
Results: Most of the increase in the incumbency advantage, at least down to 1980,
came through increases in the quality effect (i.e., the advantage to the incumbent
party of having a low-quality challenger). This suggests that the task for those
wishing to explain the growth in the vote-denominated incumbency advantage is
to explain why the quality effect grew. It also suggests that resource-based explanations
of the growth in the incumbency advantage cannot provide a full explanation.
https://authors.library.caltech.edu/records/epbx6-d8f84

Taking Time Seriously: Time-Series-Cross-Section Analysis with a Binary Dependent Variable
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455747
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Richard Tucker
Year: 1998
DOI: 10.2307/2991857
Researchers typically analyze time-series-cross-section data with a binary dependent
variable (BTSCS) using ordinary logit or probit. However, BTSCS observations are
likely to violate the independence assumption of the ordinary logit or probit statistical
model. It is well known that if the observations are temporally related, the results of
an ordinary logit or probit analysis may be misleading. In this paper, we provide a simple
diagnostic for temporal dependence and a simple remedy. Our remedy is based on the
idea that BTSCS data are identical to grouped duration data. This remedy does not require
the BTSCS analyst to acquire any further methodological skills, and it can be easily
implemented in any standard statistical software package. While our approach is suitable
for any type of BTSCS data, we provide examples and applications from the field of
International Relations, where BTSCS data are frequently used. We use our methodology
to reassess Oneal and Russett's (1997) findings regarding the relationship between economic
interdependence, democracy, and peace. Our analyses show that (1) their finding
that economic interdependence is associated with peace is an artifact of their failure to
account for temporal dependence yet (2) their finding that democracy inhibits conflict is
upheld even taking duration dependence into account.
https://authors.library.caltech.edu/records/cah4v-26y02
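
A minimal sketch of the remedy described in this abstract (not the authors' replication code): treat the binary TSCS data as grouped duration data by adding a counter of periods since the last event and including a smooth function of it in an otherwise ordinary logit. The article uses natural cubic splines of the counter; the cubic polynomial below is a simplified stand-in, and the toy data and variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# Toy dyad-year data: 50 dyads observed over 40 years each.
df = pd.DataFrame({
    "dyad": np.repeat(np.arange(50), 40),
    "year": np.tile(np.arange(1951, 1991), 50),
    "democracy": rng.normal(size=2000),
    "conflict": rng.binomial(1, 0.05, size=2000),
})

# Counter of years since the last conflict within each dyad ("peace years").
df = df.sort_values(["dyad", "year"]).reset_index(drop=True)
peace = np.zeros(len(df), dtype=int)
count, prev = 0, None
for i, (dyad, event) in enumerate(zip(df["dyad"], df["conflict"])):
    if dyad != prev:
        count = 0
    peace[i] = count
    count = 0 if event == 1 else count + 1
    prev = dyad
df["peace_yrs"] = peace

# Ordinary logit (ignores temporal dependence) vs. logit with duration terms.
naive = smf.logit("conflict ~ democracy", data=df).fit(disp=0)
bkt = smf.logit("conflict ~ democracy + peace_yrs + I(peace_yrs**2) + I(peace_yrs**3)",
                data=df).fit(disp=0)
print(naive.params, bkt.params, sep="\n")
```
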
A Statistical Model for Multiparty Electoral Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456629
Authors: Jonathan N. Katz (ORCID 0000-0002-5287-3503); Gary King (ORCID 0000-0002-5327-7631)
Year: 1999
DOI: 10.2307/2585758
We propose a comprehensive statistical model for analyzing multiparty, district-level elections. This
model, which provides a tool for comparative politics research analogous to that which regression
analysis provides in the American two-party context, can be used to explain or predict how
geographic distributions of electoral results depend upon economic conditions, neighborhood ethnic
compositions, campaign spending, and other features of the election campaign or aggregate areas. We also
provide new graphical representations for data exploration, model evaluation, and substantive interpretation.
We illustrate the use of this model by attempting to resolve a controversy over the size of and trend in the
electoral advantage of incumbency in Britain. Contrary to previous analyses, all based on measures now
known to be biased, we demonstrate that the advantage is small but meaningful, varies substantially across
the parties, and is not growing. Finally, we show how to estimate the party from which each party's advantage
is predominantly drawn.
https://authors.library.caltech.edu/records/5bxcp-g0m93

The Reapportionment Revolution and Bias in U.S. Congressional Elections
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455941
Authors: Gary W. Cox; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 1999
DOI: 10.2307/2991836
We develop a formal model of the redistricting process that highlights the importance of two factors: first, partisan or bipartisan control of the redistricting process; second, the nature of the reversionary outcome, should the state legislature and governor fail to agree on a new districting plan. Using this model, we predict the levels of partisan bias and responsiveness that should be observed under districting plans adopted under various constellations of partisan control of state government and reversionary outcomes, testing our predictions on postwar (1946-70) U.S. House electoral data. We find strong evidence that both partisan control and reversionary outcomes systematically affect the nature of a redistricting plan and the subsequent elections held under it. Further, we show that the well-known disappearance circa 1966 of what had been a long-time pro-Republican bias of about 6 percent in nonsouthern congressional elections can be explained largely by the changing composition of northern districting plans.
https://authors.library.caltech.edu/records/xfkyd-mfy23

Throwing Out the Baby With the Bath Water: A Comment on Green, Kim and Yoon
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456912
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2001
DOI: 10.1162/00208180151140658
Donald P. Green, Soo Yeon Kim, and David H. Yoon contribute to the literature on
estimating pooled time-series cross-section models in international relations (IR).
They argue that such models should be estimated with fixed effects when such
effects are statistically necessary. While we obviously have no disagreement that
sometimes fixed effects are appropriate, we show here that they are pernicious for
IR time-series cross-section models with a binary dependent variable and that they
are often problematic for IR models with a continuous dependent variable. In the
binary case, this perniciousness is the result of many pairs of nations always being
scored zero and hence having no impact on the parameter estimates; for example,
many dyads never come into conflict. In the continuous case, fixed effects are
problematic in the presence of the temporally stable regressors that are common in IR
applications, such as the dyadic democracy measures used by Green, Kim, and
Yoon.
https://authors.library.caltech.edu/records/s2jwx-yw458

Poststratification Without Population Level Information on the Poststratifying Variable With Application to Political Polling
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456999
Authors: Cavan Reilly; Andrew Gelman; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2001
DOI: 10.1198/016214501750332640
We investigate the construction of more precise estimates of a collection of population means using information about a related variable in the context of repeated sample surveys. The method is illustrated using poll results concerning presidential approval rating (our related variable is political party identification). We use poststratification to construct these improved estimates, but because we do not have population level information on the poststratifying variable, we construct a model for the manner in which the poststratifier develops over time. In this manner, we obtain more precise estimates without making possibly untenable assumptions about the dynamics of our variable of interest, the presidential approval rating.
https://authors.library.caltech.edu/records/4tcxs-nen52

Throwing Out the Baby with the Bath Water: A Comment on Green, Kim, and Yoon
https://resolver.caltech.edu/CaltechAUTHORS:20170809-142415851
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2001
DOI: 10.1162/00208180151140658
Donald P. Green, Soo Yeon Kim, and David H. Yoon argue that many findings in quantitative international relations that use the dyad-year design are flawed. In particular, they argue that the effect of democracy on both trade and conflict has been vastly overstated, that researchers have ignored unobserved heterogeneity between the various dyads, and that heterogeneity can be best modeled by "fixed effects," that is, a model that includes a separate dummy for each dyad.
We argue that the use of fixed effects is almost always a bad idea for dyad-year data with a binary dependent variable like conflict. This is because conflict is a rare event, and the inclusion of fixed effects requires us to not analyze dyads that never conflict. Thus while the 90 percent of dyads that never conflict are more likely to be democratic, the use of fixed effects gives democracy no credit for the lack of conflict in these dyads. Green, Kim, and Yoon's fixed-effects logit can tell us little, if anything, about the pacific effects of democracy.
Their analysis of the impact of democracy on trade is also flawed. The inclusion of fixed effects almost always masks the impact of slowly changing independent variables; the democracy score is such a variable. Thus it is no surprise that the inclusion of dyadic dummy variables in their model completely masks the relationship between democracy and trade. We show that their preferred fixed-effects specification does not outperform a model with no effects (when that model is correctly specified in other ways). Thus there is no need to include the masking fixed effects, and so Green, Kim, and Yoon's findings do not overturn previous work that found that democracy enhanced trade.
We agree with Green, Kim, and Yoon that modeling heterogeneity in time-series cross-section data is important. We mention a number of alternatives to their fixed-effects approach, none of which would have the pernicious consequences of using dyadic dummies in their two reanalyses.
https://authors.library.caltech.edu/records/1e6q1-xq633

Elbridge Gerry's Salamander: The Electoral Consequences of the Reapportionment Revolution
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456025
Authors: Gary W. Cox; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2002
[Preface] Elbridge Gerry was governor of Massachusetts from 1810 to 1812. During his term, his party produced an artful electoral map intended to maximize the number of seats it could eke out of its expected vote share. Contemporary observers latched onto one district in particular, in the shape of a salamander, and pronounced it a Gerry-mander. This book is about a unique episode in the long history of American gerrymandering: the Supreme Court's landmark reapportionment decisions in the early 1960s and their electoral consequences. The dramatis personae of our story are the state politicians who drew congressional district lines, the judges on the courts supervising their handiwork, and the candidates competing for congressional office. The plot of our story concerns the strategic adaptation of these actors to the new electoral playing field created by the Court's decisions.
https://authors.library.caltech.edu/records/zdrny-x0596

A Fast, Easy, and Efficient Estimator for Multiparty Electoral Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456464
Authors: James Honaker; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Gary King (ORCID 0000-0002-5327-7631)
Year: 2002
DOI: 10.1093/pan/10.1.84
Katz and King have previously developed a model for predicting or explaining aggregate electoral results in multiparty democracies. Their model is, in principle, analogous to what least-squares regression provides American political researchers in that two-party system. Katz and King applied their model to three-party elections in England and revealed a variety of new features of incumbency advantage and sources of party support. Although the mathematics of their statistical model covers any number of political parties, it is computationally demanding, and hence slow and numerically imprecise, with more than three parties. In this paper we produce an approximate method that works in practice with many parties without making too many theoretical compromises. Our approach is to treat the problem as one of missing data. This allows us to use a modification of the fast EMis algorithm of King, Honaker, Joseph, and Scheve and to provide easy-to-use software, while retaining the attractive features of the Katz and King model, such as the t distribution and explicit models for uncontested seats.
https://authors.library.caltech.edu/records/ecgvw-4ng13

The Mathematics and Statistics of Voting Power
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456380
Authors: Andrew Gelman; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Francis Tuerlinckx
Year: 2002
In an election, voting power—the probability that a single vote is
decisive—is affected by the rule for aggregating votes into a single outcome.
Voting power is important for studying political representation, fairness and
strategy, and has been much discussed in political science. Although power
indexes are often considered as mathematical definitions, they ultimately
depend on statistical models of voting. Mathematical calculations of voting
power usually have been performed under the model that votes are decided
by coin flips. This simple model has interesting implications for weighted
elections, two-stage elections (such as the U.S. Electoral College) and
coalition structures. We discuss empirical failings of the coin-flip model of
voting and consider, first, the implications for voting power and, second,
ways in which votes could be modeled more realistically. Under the random
voting model, the standard deviation of the average of n votes is proportional
to 1/√n, but under more general models, this variance can have the form cn^(−α) or √(a − b log n). Voting power calculations under more realistic models
present research challenges in modeling and computation.
https://authors.library.caltech.edu/records/zsz33-m4n06
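
A small illustration of the coin-flip (random voting) model discussed in this abstract, not code from the paper: with n voters each voting independently with probability 1/2, the chance that one vote is decisive is the chance that the other n − 1 votes split evenly, which falls off roughly like 1/√n.

```python
import numpy as np
from scipy.stats import binom

for n in (101, 1_001, 10_001, 100_001):
    # Exact probability that the remaining n-1 coin-flip votes are tied.
    p_decisive = binom.pmf((n - 1) // 2, n - 1, 0.5)
    approx = np.sqrt(2 / (np.pi * n))      # the usual 1/sqrt(n) approximation
    print(f"n={n:>7,}  P(decisive)={p_decisive:.6f}  sqrt(2/(pi*n))={approx:.6f}")
```
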
Standard Voting Power Indexes Don't Work: An Empirical Analysis
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456292
Authors: Andrew Gelman; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Joseph Bafumi
Year: 2004
DOI: 10.1017/S0007123404000237
Voting power indexes such as that of Banzhaf are derived, explicitly or implicitly, from the assumption that
all votes are equally likely (i.e., random voting). That assumption implies that the probability of a vote being
decisive in a jurisdiction with n voters is proportional to 1/√n. In this article the authors show how this
hypothesis has been empirically tested and rejected using data from various US and European elections. They
find that the probability of a decisive vote is approximately proportional to 1/n. The random voting model (and,
more generally, the square-root rule) overestimates the probability of close elections in larger jurisdictions.
As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than
they really are. The most important political implication of their result is that proportionally weighted voting
systems (that is, each jurisdiction gets a number of votes proportional to n) are basically fair. This contradicts
the claim in the voting power literature that weights should be approximately proportional to √n.
https://authors.library.caltech.edu/records/d0nze-0ss25

Indecision Theory: Weight of Evidence and Voting Behavior
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456198
Authors: Paolo Ghirardato; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2006
DOI: 10.1111/j.1467-9779.2006.00269.x
In this paper, we show how to incorporate weight of evidence, or ambiguity,
into a model of voting behavior. We do so in the context of
the turnout decision of instrumentally rational voters who differ in
their perception of the ambiguity of the candidates' policy positions.
Ambiguity is reflected by the fact that the voter's beliefs are given
by a set of probabilities, each of which represents in the voter's mind
a different possible scenario. We show that a voter who is averse to
ambiguity considers abstention strictly optimal when the candidates'
policy positions are both ambiguous and they are "ambiguity complements."
Abstaining is preferred since it is tantamount to mixing the
prospects embodied by the two candidates, thus enabling the voter
to "hedge" the candidates' ambiguity.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/2pbx5-y5k66Comment on 'What To Do (and Not To Do) with Times-Series-Cross-Section Data'
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455389
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2006
DOI: 10.1017/S0003055406292566
Much as we would like to believe that the high citation count
for this article is due to the brilliance and clarity of our argument,
it is more likely that the count is due to our being in the
right place (that is, the right part of the discipline) at the right
time. In the 1960s and 1970s, serious quantitative analysis
was used primarily in the study of American politics. But
since the 1980s it has spread to the study of both comparative
politics and international relations. In comparative politics
we see in the 20 most cited Review articles Hibbs's (1977)
and Cameron's (1978) quantitative analyses of the political
economy of advanced industrial societies; in international
relations we see Maoz and Russett's (1993) analysis of the
democratic peace; and these studies have been followed by
myriad others. Our article contributed to the methodology
for analyzing what has become the principal type of data used in the study of comparative politics; a related article
(Beck, Katz, and Tucker 1998), which has also had a good
citation history, dealt with analyzing this type of data with a
binary dependent variable, data heavily used in conflict studies
similar to that of Maoz and Russett's. Thus the citations
to our methodological discussions reflect the huge amount
of work now being done in the quantitative analysis of both
comparative politics and international relations.
https://authors.library.caltech.edu/records/mcw8n-t5v29

Gerrymandering Roll-Calls in Congress, 1879-2000
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456110
Authors: Gary W. Cox; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2007
DOI: 10.1111/j.1540-5907.2007.00240.x
We argue that the standard toolbox used in electoral studies to assess the bias and responsiveness of electoral systems can
also be used to assess the bias and responsiveness of legislative systems. We consider which items in the toolbox are the most
appropriate for use in the legislative setting, then apply them to estimate levels of bias in the U.S. House from 1879 to 2000.
Our results indicate a systematic bias in favor of the majority party over this period, with the strongest bias arising during
the period of "czar rule" (51st–60th Congresses, 1889–1910) and during the post-packing era (87th–106th Congresses,
1961–2000). This finding is consistent with the majority party possessing a significant advantage, either in "buying" vote
options, in setting the agenda, or both.
https://authors.library.caltech.edu/records/yaw5j-haf08

Random Coefficient Models for Time-Series-Cross-Section Data: Monte Carlo Experiments
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455475
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2007
DOI: 10.1093/pan/mpl001
This article considers random coefficient models (RCMs) for time-series–cross-section data.
These models allow for unit to unit variation in the model parameters. The heart of the article
compares the finite sample properties of the fully pooled estimator, the unit by unit
(unpooled) estimator, and the (maximum likelihood) RCM estimator. The maximum likelihood
RCM estimator performs well, even where the data were generated so that the RCM
would be problematic. In an appendix, we show that the most common feasible generalized
least squares estimator of the RCM models is always inferior to the maximum likelihood
estimator, and in smaller samples dramatically so.
https://authors.library.caltech.edu/records/emvzt-g5t27

Fraud or Failure? What Incident Reports Reveal about Election Anomalies and Irregularities
https://resolver.caltech.edu/CaltechAUTHORS:20160222-160838091
Authors: D. Roderick Kiewiet; Thad E. Hall; R. Michael Alvarez (ORCID 0000-0002-8113-4451); Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2008
When things go wrong in elections involving direct-recording electronic (DRE) voting technology, these episodes are viewed by many as proof of the vulnerability, or at least the unreliability, of these systems. Claims by election officials that such problems are "par for the course" in elections or symptomatic of "growing pains" associated with implementing a new technology ring false to many Americans who expect elections to be run without error every time. To date, however, each side in the debate has been able to rely on only limited data and scant research. DRE technology has only recently been introduced on a large scale in the United States, so there is little systematic information regarding the difficulties encountered in its implementation.
In this chapter, we examine a novel and potentially very useful source of data concerning the frequency and severity of different types of problems encountered by voters and precinct workers in a DRE environment. The data consist of incident reports collected by poll workers during the May 2, 2006, primary election in Cuyahoga County, Ohio. This election marked the first use of DRE technology in this jurisdiction. Voters cast their ballots on Diebold Accuvote-TSx voting machines -- touch-screen machines equipped with printers to produce the voter-verified paper audit trail (VVPAT) mandated by Ohio election law. Since most voters were unfamiliar with the technology, the Cuyahoga County Board of Elections took the prudent and useful measure of providing poll workers at each precinct with incident report forms to record and to describe difficulties they encountered in conducting the balloting.
https://authors.library.caltech.edu/records/k3prj-dd268

Comment on 'Estimating incumbency advantage and its variation, as an example of a before-after study'
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456813
Authors: Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2008
DOI: 10.1198/016214507000000635
The incumbency advantage is one of the most widely studied phenomena in political science. In fact, it is one of the few quantities of interest in the field where there is relative agreement not only on its directionality, but also on its relative size. Thus I was somewhat dubious that any significant additions remained to be made to our understanding; however, Gelman and Huang have in fact made two important contributions.
https://authors.library.caltech.edu/records/7b8p3-9mm78

Correcting for Survey Misreports using Auxiliary Information with an Application to Estimating Turnout
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456550
Authors: Jonathan N. Katz (ORCID 0000-0002-5287-3503); Gabriel Katz (ORCID 0000-0001-5970-2769)
Year: 2010
DOI: 10.1111/j.1540-5907.2010.00462.x
Misreporting is a problem that plagues researchers who use survey data. In this article, we develop a parametric model that
corrects for misclassified binary responses using information on the misreporting patterns obtained from auxiliary data
sources. The model is implemented within the Bayesian framework via Markov Chain Monte Carlo (MCMC) methods
and can be easily extended to address other problems exhibited by survey data, such as missing response and/or covariate
values. While the model is fully general, we illustrate its application in the context of estimating models of turnout using
data from the American National Elections Studies.
https://authors.library.caltech.edu/records/mmsd1-kxx74
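
A simplified maximum-likelihood sketch of the idea described above (the paper itself fits a Bayesian model via MCMC; the misreport rates, variable names, and data below are invented for illustration): the observed response is treated as a misclassified version of the true binary outcome, with misclassification rates taken from auxiliary data and folded into the likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])      # intercept + one covariate
beta_true = np.array([-0.2, 0.8])
y_true = rng.binomial(1, expit(X @ beta_true))              # true (unobserved) turnout

alpha0, alpha1 = 0.15, 0.02   # assumed known from auxiliary data:
                              # P(report=1 | y=0) and P(report=0 | y=1)
p_report = (1 - alpha1) * y_true + alpha0 * (1 - y_true)
report = rng.binomial(1, p_report)                          # observed, misreported turnout

def neg_loglik(beta):
    pi = expit(X @ beta)                                    # P(true turnout | x)
    p1 = (1 - alpha1) * pi + alpha0 * (1 - pi)              # P(observed report = 1 | x)
    return -np.sum(report * np.log(p1) + (1 - report) * np.log(1 - p1))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print("corrected estimates:", fit.x, "  true values:", beta_true)
```
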
An empirical Bayes approach to estimating ordinal treatment effects
https://resolver.caltech.edu/CaltechAUTHORS:20171208-161127899
Authors: R. Michael Alvarez (ORCID 0000-0002-8113-4451); Delia Bailey; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2011
DOI: 10.1093/pan/mpq033
Ordinal variables—categorical variables with a defined order to the categories, but without equal spacing between them—are frequently used in social science applications. Although a good deal of research exists on the proper modeling of ordinal response variables, there is not a clear directive as to how to model ordinal treatment variables. The usual approaches found in the literature for using ordinal treatment variables are either to use fully unconstrained, though additive, ordinal group indicators or to use a numeric predictor constrained to be continuous. Generalized additive models are a useful exception to these assumptions. In contrast to the generalized additive modeling approach, we propose the use of a Bayesian shrinkage estimator to model ordinal treatment variables. The estimator we discuss in this paper allows the model to contain both individual group-level indicators and a continuous predictor. In contrast to traditionally used shrinkage models that pull the data toward a common mean, we use a linear model as the basis. Thus, each individual effect can be arbitrary, but the model "shrinks" the estimates toward a linear ordinal framework according to the data. We demonstrate the estimator on two political science examples: the impact of voter identification requirements on turnout and the impact of the frequency of religious service attendance on the liberality of abortion attitudes.
https://authors.library.caltech.edu/records/j9bzz-e2y82

An Empirical Bayes Approach to Estimating Ordinal Treatment Effect
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120454857
Authors: R. Michael Alvarez (ORCID 0000-0002-8113-4451); Delia Bailey; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2011
DOI: 10.1093/pan/mpq033
Ordinal variables—categorical variables with a defined order to the categories, but without equal spacing
between them—are frequently used in social science applications. Although a good deal of research exists on
the proper modeling of ordinal response variables, there is not a clear directive as to how to model ordinal
treatment variables. The usual approaches found in the literature for using ordinal treatment variables are
either to use fully unconstrained, though additive, ordinal group indicators or to use a numeric predictor
constrained to be continuous. Generalized additive models are a useful exception to these assumptions. In
contrast to the generalized additive modeling approach, we propose the use of a Bayesian shrinkage
estimator to model ordinal treatment variables. The estimator we discuss in this paper allows the model to
contain both individual group-level indicators and a continuous predictor. In contrast to traditionally used
shrinkage models that pull the data toward a common mean, we use a linear model as the basis. Thus, each
individual effect can be arbitrary, but the model "shrinks" the estimates toward a linear ordinal framework
according to the data. We demonstrate the estimator on two political science examples: the impact of voter
identification requirements on turnout and the impact of the frequency of religious service attendance on the
liberality of abortion attitudes.
https://authors.library.caltech.edu/records/w5g41-e0z87

Implementing Panel Corrected Standard Errors in R: The pcse Package
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455027
Authors: Delia Bailey; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2011
Time-series-cross-section (TSCS) data are characterized by having repeated observations over time on some set of units, such as states or nations. TSCS data typically display
both contemporaneous correlation across units and unit-level heteroskedasticity, making inference from standard errors produced by ordinary least squares incorrect. Panel-corrected
standard errors (PCSE) account for these deviations from spherical errors and allow
for better inference from linear models estimated from TSCS data. In this paper, we
discuss an implementation of them in the R system for statistical computing. The key
computational issue is how to handle unbalanced data.
https://authors.library.caltech.edu/records/v24jv-2ty09
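
A minimal numpy sketch of the PCSE computation this paper implements, assuming a balanced panel and observations ordered unit by unit; the data are invented, and the package's central complication, unbalanced data, is ignored here.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, k = 15, 30, 2                        # units, time points, regressors
X = np.column_stack([np.ones(N * T), rng.normal(size=(N * T, k - 1))])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=N * T)

beta = np.linalg.solve(X.T @ X, X.T @ y)   # pooled OLS coefficients
e = (y - X @ beta).reshape(N, T)           # residuals, one row per unit

Sigma = (e @ e.T) / T                      # N x N contemporaneous error covariance
Omega = np.kron(Sigma, np.eye(T))          # NT x NT error covariance (unit-major order)

XtX_inv = np.linalg.inv(X.T @ X)
pcse_cov = XtX_inv @ X.T @ Omega @ X @ XtX_inv   # sandwich: (X'X)^-1 X' Omega X (X'X)^-1
print("panel-corrected standard errors:", np.sqrt(np.diag(pcse_cov)))
```

With an unbalanced panel, the entries of Sigma have to be estimated from whichever time periods each pair of units actually shares, which is the complication the abstract alludes to.
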
Modeling Dynamics in Time-Series-Cross-Section Political Economy Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455558
Authors: Nathaniel Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2011
DOI: 10.1146/annurev-polisci-071510-103222
This article deals with a variety of dynamic issues in the analysis of
time-series–cross-section (TSCS) data. Although the issues raised are
general, we focus on applications to comparative political economy,
which frequently uses TSCS data. We begin with a discussion of specification
and lay out the theoretical differences implied by the various
types of dynamic models that can be estimated. It is shown that there
is nothing pernicious in using a lagged dependent variable and that all
dynamic models either implicitly or explicitly have such a variable; the
differences between the models relate to assumptions about the speeds
of adjustment of measured and unmeasured variables. When adjustment
is quick, it is hard to differentiate between the various models;
with slower speeds of adjustment, the various models make sufficiently
different predictions that they can be tested against each other. As the
speed of adjustment gets slower and slower, specification (and estimation)
gets more and more tricky. We then turn to a discussion of estimation.
It is noted that models with both a lagged dependent variable
and serially correlated errors can easily be estimated; it is only ordinary
least squares that is inconsistent in this situation. There is a brief
discussion of lagged dependent variables combined with fixed effects
and issues related to non-stationarity. We then show how our favored
method of modeling dynamics combines nicely with methods for dealing
with other TSCS issues, such as parameter heterogeneity and spatial
dependence. We conclude with two examples.
https://authors.library.caltech.edu/records/srmze-3sy60
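
The claim above that every dynamic specification carries an implicit lagged dependent variable can be illustrated with a standard textbook derivation (a sketch, not taken from the article): a static model with AR(1) errors is algebraically a restricted lagged-dependent-variable model.

```latex
% Static TSCS model with AR(1) errors:
y_{i,t} = x_{i,t}\beta + \varepsilon_{i,t},
\qquad
\varepsilon_{i,t} = \rho\,\varepsilon_{i,t-1} + \nu_{i,t}.
% Substituting \varepsilon_{i,t-1} = y_{i,t-1} - x_{i,t-1}\beta gives
y_{i,t} = \rho\, y_{i,t-1} + x_{i,t}\beta - \rho\, x_{i,t-1}\beta + \nu_{i,t},
% i.e., an ADL(1,1) model with the "common factor" restriction that the
% coefficient on x_{i,t-1} equals minus the product of the other two.
```
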
Special Section on the Forty-First Annual ACM Symposium on Theory of Computing (STOC 2009)
https://resolver.caltech.edu/CaltechAUTHORS:20191126-134942908
Authors: Nicole Immorlica; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Michael Mitzenmacher; Rocco Servedio; Chris Umans
Year: 2012
DOI: 10.1137/120973305
This issue of SICOMP contains nine specially selected papers from the Forty-first Annual ACM Symposium on the Theory of Computing, otherwise known as STOC 2009, held May 31 to June 2 in Bethesda, Maryland. The papers here were chosen to represent both the excellence and the broad range of the STOC program. The papers have been revised and extended by the authors, and subjected to the standard thorough reviewing process of SICOMP.
The program committee consisted of Susanne Albers, Andris Ambainis, Nikhil Bansal, Paul Beame, Andrej Bogdanov, Ran Canetti, David Eppstein, Dmitry Gavinsky, Shafi Goldwasser, Nicole Immorlica, Anna Karlin, Jonathan Katz, Jonathan Kelner, Subhash Khot, Ravi Kumar, Leslie Ann Goldberg, Michael Mitzenmacher (Chair), Kamesh Munagala, Rasmus Pagh, Anup Rao, Rocco Servedio, Mikkel Thorup, Chris Umans, and Lisa Zhang. They accepted 77 papers out of 321 submissions.
https://authors.library.caltech.edu/records/2719d-ve992

Estimating Partisan Bias of the Electoral College Under Proposed Changes in Elector Apportionment
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120457077
Authors: A. C. Thomas; Andrew Gelman; Gary King (ORCID 0000-0002-5327-7631); Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2013
DOI: 10.1515/spp-2012-0001
In the election for President of the United States, the Electoral College
is the body whose members vote to elect the President directly. Each state sends
a number of delegates equal to its total number of representatives and senators
in Congress; all but two states (Nebraska and Maine) assign electors pledged
to the candidate that wins the state's plurality vote. We investigate the effect
on presidential elections if states were to assign their electoral votes according
to results in each congressional district, and conclude that the direct popular
vote and the current Electoral College are both substantially fairer than the
alternatives in which states would divide their electoral votes by
congressional district.
https://authors.library.caltech.edu/records/9j7h4-bnf57

Of Nickell Bias and its Cures: Comment on Gaibulloev, Sandler, and Sul
https://resolver.caltech.edu/CaltechAUTHORS:20140529-082104040
Authors: Nathaniel L. Beck; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Umberto G. Mignozzetti
Year: 2014
DOI: 10.1093/pan/mpu004
Gaibulloev, Sandler, and Sul (2014) (hereafter GSS) present two methodological suggestions for estimating dynamic panel models with fixed effects and provide an empirical application using them. Our interest is only in their methodological suggestions, so we do not discuss the empirical application here. One of their methodological suggestions is that analysts account for cross-sectional dependence by adjoining to the model a common factor which relates to events going on in the world that are not explained by the unit-level covariates. This is surely an interesting way to proceed, though we await further evidence on whether the recommended method is superior to the standard spatial econometric approach. Since GSS do not discuss this comparison, we do not either, but their approach is clearly of potential interest.
The second suggestion, that there is a problem with Nickell (1981) bias when the number of cross-sectional units (N) is considerably greater than the number of time points (T), and that this problem can be solved by simply analyzing subsets of the units independently, is on its face puzzling. In fact, we will argue that it is misguided. This suggestion is puzzling because usually in statistical analysis more data are better than less data. GSS suggest that less data, or equivalently, independent analyses of subsets of the data, is superior to using all the data simultaneously. As with any suggested fix for a methodological problem, we ask: (1) does the problem exist; (2) is the problem serious in applied work; and (3) does the proposed solution do more good than harm? The answer to the first question is yes, but the answers to the last two questions are clearly no.
https://authors.library.caltech.edu/records/cjkfv-yv904

What's age got to do with it? Supreme Court appointees and the long run location of the Supreme Court median justice
https://resolver.caltech.edu/CaltechAUTHORS:20171208-150543356
Authors: Jonathan N. Katz (ORCID 0000-0002-5287-3503); Matthew L. Spitzer
Year: 2014
For approximately the past forty years, Republican Presidents have appointed younger Justices than have Democratic Presidents. Depending on how one does the accounting, the average age difference will vary, but will not go away. This Article posits that Republicans appointing younger justices than Democrats may have caused a rightward shift in the Supreme Court. We use computer simulations to show that if the trend continues, the rightward shift will likely increase. We also produce some very rough estimates of the size of the ideological shift, contingent on the size of the age differential. In addition, we show that the Senate's role in confirming nominated Justices has a significant moderating effect on the shift. Last, we consider the interaction between our results and the oft-proposed eighteen-year staggered terms for Supreme Court Justices. We show that such an institutional change would almost completely wipe out the ideological effect of one Party appointing younger Justices.
https://authors.library.caltech.edu/records/ztwdz-cah13

The Effect of Voter Identification Laws on Turnout
https://resolver.caltech.edu/CaltechAUTHORS:20170728-165011731
Authors: R. Michael Alvarez (ORCID 0000-0002-8113-4451); Delia Bailey; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2017
DOI: 10.2139/ssrn.1084598
Since the passage of the "Help America Vote Act" in 2002, nearly half of the states have adopted a variety of new identification requirements for voter registration and participation by the 2006 general election. There has been little analysis of whether these requirements reduce voter participation, especially among certain classes of voters. In this paper we document the effect of voter identification requirements on registered voters as they were imposed in states in the 2000 and 2004 presidential elections, and in the 2002 and 2006 midterm elections. Looking first at trends in the aggregate data, we find no evidence that voter identification requirements reduce participation. Using individual-level data from the Current Population Survey across these elections, however, we find that the strictest forms of voter identification requirements—combination requirements of presenting an identification card and positively matching one's signature with a signature either on file or on the identification card, as well as requirements to show picture identification—have a negative impact on the participation of registered voters relative to the weakest requirement, stating one's name. We also find evidence that the stricter voter identification requirements depress turnout to a greater extent for less educated and lower income populations, for both minorities and non-minorities.
https://authors.library.caltech.edu/records/a4nx9-8pq40

Auctioning off the Agenda: Bargaining in Legislatures with Endogenous Scheduling
https://resolver.caltech.edu/CaltechAUTHORS:20170728-170556066
Authors: Jernej Čopič; Jonathan N. Katz (ORCID 0000-0002-5287-3503)
Year: 2017
DOI: 10.7907/wkd74-vbm27
There are many examples of allocation problems where the final allocation affects more than one agent, but the models developed to study them typically allow for side payments between agents. However, there are political economy applications where it is hard to imagine monetary transfers between the agents, at least not legal ones. In this paper we propose a general political economic framework for the study of allocation problems with externalities without side payments. We consider a setup with complete information and we formulate the problem as one where the status quo describes an initial allocation that can be altered in a sequence of proposals. The number of these proposals is restricted. In the context of our main application, bidding for slots on a legislative agenda, such a restriction can be interpreted as scarcity of plenary time for considering the possible bills to move the policy. The intuition for our model comes out of framing the problem as a special type of a multi-good auction. We show that equilibria generically exist within the general model.
https://authors.library.caltech.edu/records/wkd74-vbm27

What's age to do with it? Supreme Court appointees and the long run location of the Supreme Court median justice
https://resolver.caltech.edu/CaltechAUTHORS:20170727-113314415
Authors: Jonathan N. Katz (ORCID 0000-0002-5287-3503); Matthew L. Spitzer
Year: 2017
DOI: 10.7907/5j2s1-sgj23
[No abstract]
https://authors.library.caltech.edu/records/5j2s1-sgj23

The Mathematics and Statistics of Voting Power
https://resolver.caltech.edu/CaltechAUTHORS:20170802-153714426
Authors: Andrew Gelman; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Francis Tuerlinckx
Year: 2017
DOI: 10.7907/hpnm8-cmw66
In an election, voting power—the probability that a single vote is decisive—is affected by the rule for aggregating votes into a single outcome. Voting power is important for studying political representation, fairness and strategy, and has been much discussed in political science. Although power indexes are often considered as mathematical definitions, they ultimately depend on statistical models of voting. Mathematical calculations of voting power usually have been performed under the model that votes are decided by coin flips. This simple model has interesting implications for weighted elections, two-stage elections (such as the U.S. Electoral College) and coalition structures. We discuss empirical failings of the coin-flip model of voting and consider, first, the implications for voting power and, second, ways in which votes could be modeled more realistically. Under the random voting model, the standard deviation of the average of n votes is proportional to 1/√n, but under more general models, this variance can have the form cn^(−α) or √(a − b log n). Voting power calculations under more realistic models present research challenges in modeling and computation.
https://authors.library.caltech.edu/records/hpnm8-cmw66

Standard Voting Power Indexes Don't Work: An Empirical Analysis
https://resolver.caltech.edu/CaltechAUTHORS:20170802-162810725
Authors: Andrew Gelman; Jonathan N. Katz (ORCID 0000-0002-5287-3503); Joseph Bafumi
Year: 2017
DOI: 10.7907/8999e-yz237
Voting power indexes such as that of Banzhaf (1965) are derived, explicitly or implicitly, from the assumption that all votes are equally likely (i.e., random voting). That assumption can be generalized to hold that the probability of a vote being decisive in a jurisdiction with n voters is proportional to 1/√n.
We test and reject this hypothesis empirically, using data from several different U.S. and European elections. We find that the probability of a decisive vote is approximately proportional to 1/n. The random voting model (or its generalization, the square-root rule) overestimates the probability of close elections in larger jurisdictions. As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than they really are.
The most important political implication of our result is that proportionally weighted voting systems (that is, each jurisdiction gets a number of votes proportional to n) are basically fair. This contradicts the claim in the voting power literature that weights should be approximately proportional to √n.
https://authors.library.caltech.edu/records/8999e-yz237

Modeling Dynamics in Time-Series–Cross-Section Political Economy Data
https://resolver.caltech.edu/CaltechAUTHORS:20170727-141913417
Authors: {'items': [{'id': 'Beck-N', 'name': {'family': 'Beck', 'given': 'Nathaniel'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/zx3dd-bq275
This paper deals with a variety of dynamic issues in the analysis of time-series–cross-section (TSCS) data. While the issues raised are more general, we focus on applications to political economy. We begin with a discussion of specification and lay out the theoretical differences implied by the various types of time series models that can be estimated. It is shown that there is nothing pernicious in using a lagged dependent variable and that all dynamic models either implicitly or explicitly have such a variable; the differences between the models relate to assumptions about the speeds of adjustment of measured and unmeasured variables. When adjustment is quick it is hard to differentiate between the various models; with slower speeds of adjustment the various models make sufficiently different predictions that they can be tested against each other. As the speed of adjustment gets slower and slower, specification (and estimation) gets more and more tricky. We then turn to a discussion of estimation. It is noted that models with both a lagged dependent variable and serially correlated errors can easily be estimated; it is only OLS that is inconsistent in this situation. We then show, via Monte Carlo analysis, that for typical TSCS data fixed effects with a lagged dependent variable performs about as well as the much more complicated Kiviet estimator, and better than the Anderson-Hsiao estimator (both designed for panels).https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/zx3dd-bq275Gerrymandering Roll-Calls: Votes, Decisions, and Partisan bias in Congress, 1879-2000
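One way to see the claim in the abstract above that every dynamic specification carries an implicit lagged dependent variable is the standard common-factor algebra (our illustration, not text from the paper). A static specification with AR(1) errors,

y_{it} = x_{it}\beta + u_{it}, \qquad u_{it} = \rho u_{i,t-1} + \varepsilon_{it},

can be rewritten by quasi-differencing as

y_{it} = \rho y_{i,t-1} + x_{it}\beta - \rho x_{i,t-1}\beta + \varepsilon_{it},

which is the unrestricted lagged-dependent-variable model y_{it} = \phi y_{i,t-1} + x_{it}\beta_0 + x_{i,t-1}\beta_1 + \varepsilon_{it} subject to the common-factor restriction \beta_1 = -\phi\beta_0. The competing specifications therefore differ in their implied speeds of adjustment, not in whether a lagged dependent variable is present.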
https://resolver.caltech.edu/CaltechAUTHORS:20170801-164857645
Authors: {'items': [{'id': 'Cox-G-W', 'name': {'family': 'Cox', 'given': 'Gary W.'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/pt2hj-d5610
We argue that the standard toolbox used in electoral studies to assess the bias and responsiveness of electoral systems can also be used to assess the bias and responsiveness of legislative systems. We consider which items in the toolbox are the most appropriate for use in the legislative setting, then apply them to estimate levels of bias in the U.S. House from 1879 to 2000. Our results indicate a systematic bias in favor of the majority party over this period, with the strongest bias arising during the period of "Czar rule" (51st-60th Congresses, 1889-1910) and during the post-packing era (87th-106th Congresses, 1961-2000). This finding is consistent with the majority party possessing a significant advantage in setting the agenda.
"The definition of alternatives is the supreme instrument of power."
– E. E. Schattschneider (1960, p. 86).https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/pt2hj-d5610Empirically Evaluating the Electoral College
https://resolver.caltech.edu/CaltechAUTHORS:20170802-162109260
Authors: {'items': [{'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'Gelman-Andrew', 'name': {'family': 'Gelman', 'given': 'Andrew'}}, {'id': 'King-G', 'name': {'family': 'King', 'given': 'Gary'}, 'orcid': '0000-0002-5327-7631'}]}
Year: 2017
DOI: 10.7907/5pr6t-3a795
The 2000 U.S. presidential election has once again rekindled interest in electoral reform, including the possible elimination of the Electoral College. Most arguments against the Electoral College have been based either on anecdotal evidence from particular elections or on highly stylized formal models. We take a very different approach here. We develop a set of statistical models based on historical election results to evaluate the Electoral College as it has performed in practice. Thus, while we do not directly address the normative question of the value of the U.S. Electoral College, this paper does provide the necessary tools and evidence to make such an evaluation. We show that when one performs such an analysis there is not much basis to argue for reforming the Electoral College. We first show that while the Electoral College may once have been biased against the Democrats, given the current distribution of voters neither party is advantaged by the system. Further, the electoral vote will differ from the popular vote only when the average vote shares are very close to one half. We then show that while there has been much temporal variation in voting power over the last several decades, the voting power of individual citizens would not likely increase under a popular-vote system of electing the president.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/5pr6t-3a795Correcting for Survey Misreports using Auxiliary Information with an Application to Estimating Turnout
https://resolver.caltech.edu/CaltechAUTHORS:20170727-155558097
Authors: {'items': [{'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'Katz-Gabriel', 'name': {'family': 'Katz', 'given': 'Gabriel'}, 'orcid': '0000-0001-5970-2769'}]}
Year: 2017
DOI: 10.7907/r5ehj-emp92
Misreporting is a problem that plagues researchers who use survey data. In this paper, we develop a parametric model that corrects for misclassified binary responses using information on the misreporting patterns obtained from auxiliary data sources. The model is implemented within the Bayesian framework via Markov Chain Monte Carlo (MCMC) methods, and can be easily extended to address other problems exhibited by survey data, such as missing response and/or covariate values. While the model is fully general, we illustrate its application in the context of estimating models of turnout using data from the American National Election Studies.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/r5ehj-emp92An Empirical Bayes Approach to Estimating Ordinal Treatment Effects
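A minimal sketch of the kind of misclassification correction described in the abstract above, assuming (our notation, not the paper's exact specification) known false-positive and false-negative reporting rates taken from an auxiliary source such as a validated-vote study; the paper's Bayesian MCMC implementation is richer than this illustrative maximum-likelihood version.

import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

def neg_loglik(beta, X, y_reported, fpr, fnr):
    # True turnout probability p = expit(X @ beta); the reported response is
    # observed with known misreporting rates:
    #   P(report 1) = (1 - fnr) * p + fpr * (1 - p)
    p_true = expit(X @ beta)
    p_rep = np.clip((1 - fnr) * p_true + fpr * (1 - p_true), 1e-10, 1 - 1e-10)
    return -np.sum(y_reported * np.log(p_rep) + (1 - y_reported) * np.log(1 - p_rep))

# Toy data with hypothetical misreporting rates fpr = 0.15, fnr = 0.01.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
y_true = rng.binomial(1, expit(X @ np.array([0.2, 0.8])))
y_rep = np.where(y_true == 1, rng.binomial(1, 0.99, 2000), rng.binomial(1, 0.15, 2000))
fit = minimize(neg_loglik, x0=np.zeros(2), args=(X, y_rep, 0.15, 0.01))
print(fit.x)  # roughly recovers (0.2, 0.8), unlike a naive logit fit to y_rep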
https://resolver.caltech.edu/CaltechAUTHORS:20170727-161733786
Authors: {'items': [{'id': 'Alvarez-R-M', 'name': {'family': 'Alvarez', 'given': 'R. Michael'}, 'orcid': '0000-0002-8113-4451'}, {'id': 'Bailey-D', 'name': {'family': 'Bailey', 'given': 'Delia'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/wn44k-gkd25
Ordinal variables—categorical variables with a defined order to the categories, but without equal spacing between them—are frequently used in social science applications. Although a good deal of research exists on the proper modeling of ordinal response variables, there is not a clear directive as to how to model ordinal treatment variables. The usual approaches found in the literature for using ordinal treatment variables are either to use fully unconstrained, though additive, ordinal group indicators or to use a numeric predictor constrained to be continuous. Generalized additive models are a useful exception to these assumptions (Beck and Jackman 1998). In contrast to the generalized additive modeling approach, we propose the use of a Bayesian shrinkage estimator to model ordinal treatment variables. The estimator we discuss in this paper allows the model to contain both individual group level indicators and a continuous predictor. In contrast to traditionally used shrinkage models that pull the data toward a common mean, we use a linear model as the basis. Thus, each individual effect can be arbitrary, but the model "shrinks" the estimates toward a linear ordinal framework according to the data. We demonstrate the estimator on two political science examples: the impact of voter identification requirements on turnout (Alvarez, Bailey, and Katz 2007), and the impact of the frequency of religious service attendance on the liberality of abortion attitudes (e.g., Singh and Leahy 1978, Tedrow and Mahoney 1979, Combs and Welch 1982).https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/wn44k-gkd25Scheduling Auctions and Proto-Parties in Legislatures
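In symbols, the shrinkage structure described in the abstract above can be sketched (our notation, a stylized version rather than the paper's exact specification) as category-level treatment effects drawn around a line in the ordinal score rather than around a common mean:

y_i = x_i\gamma + \theta_{j[i]} + \varepsilon_i, \qquad \theta_j \sim \mathrm{N}(a + b\,j,\ \tau^2), \quad j = 1, \dots, J,

where j[i] is the ordinal treatment category of observation i. As \tau^2 \to 0 the category effects collapse onto the linear trend a + b\,j; as \tau^2 grows they approach fully unconstrained group indicators, with an empirical Bayes estimate of \tau^2 letting the data determine how much shrinkage occurs.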
https://resolver.caltech.edu/CaltechAUTHORS:20170726-143507538
Authors: {'items': [{'id': 'Čopič-J', 'name': {'family': 'Čopič', 'given': 'Jernej'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/7gq8e-c4j23
We consider the impact of the scarcity of plenary time in legislatures both on the outcome of the legislative bargaining process and on the organization of the legislature itself. We do so by developing a novel model that we call scheduling auctions. In the model, the legislature is charged with allocating a fixed budget. Members can propose an allocation and the scheduling agent decides which one of the possible proposals will be considered by the entire legislature in plenary session for an up-or-down vote. We show in this simple setting that deciding which member should be selected as the scheduling agent endogenously induces the creation of nascent political parties that we call proto-parties. We also show that these legislative structures have positive welfare implications.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/7gq8e-c4j23Indecision Theory: Quality of Information and Voting Behavior
https://resolver.caltech.edu/CaltechAUTHORS:20170807-152618695
Authors: {'items': [{'id': 'Ghirardato-P', 'name': {'family': 'Ghirardato', 'given': 'Paolo'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/x45sh-c4n27
In this paper we show how to incorporate quality of information into a model of voting behavior. We do so in the context of the turnout decision of instrumentally rational voters who differ in their quality of information, which we refer to as ambiguity. Ambiguity is reflected by the fact that the voter's beliefs are given by a set of probabilities, each of which represents in the voter's mind a different possible scenario.
We show that in most elections voters who satisfy the Bayesian model do not strictly prefer abstaining over voting for one of the candidates. In contrast, a voter who is averse to ambiguity considers abstention strictly optimal when the candidates' policy positions are both ambiguous and they are "ambiguity complements". Abstaining is preferred since it is tantamount to mixing the prospects embodied by the two candidates, thus enabling the voter to "hedge" the candidates' ambiguity.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/x45sh-c4n27How much does a vote count? Voting power, coalitions, and the Electoral College
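A toy example (ours, not taken from the paper) of the hedging logic in the abstract above: suppose there are two scenarios s_1 and s_2 over which the voter cannot form a single probability, and the candidates are ambiguity complements in that what is favorable under one scenario is unfavorable under the other. Let the utilities across (s_1, s_2) be (1, -1) for voting for A, (-1, 1) for voting for B, and (0, 0) for abstaining. An ambiguity-averse (maxmin) voter evaluates each act by its worst case, which is -1 for either vote but 0 for abstention, so abstaining is strictly optimal. A Bayesian voter with any prior p on s_1 gets expected utilities 2p - 1 and 1 - 2p, at least one of which is nonnegative, so the Bayesian never strictly prefers abstention. Abstaining acts like an even mixture of the two candidates' prospects and thereby hedges the ambiguity.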
https://resolver.caltech.edu/CaltechAUTHORS:20170807-140415393
Authors: {'items': [{'id': 'Gelman-Andrew', 'name': {'family': 'Gelman', 'given': 'Andrew'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/4q8jx-vrx44
In an election, the probability that a single voter is decisive is affected by the electoral system—that is, the rule for aggregating votes into a single outcome. Under the assumption that all votes are equally likely (i.e., random voting), we prove that the average probability of a vote being decisive is maximized under a popular-vote (or simple majority) rule and is lower under any coalition system, such as the U.S. Electoral College system, no matter how complicated. Forming a coalition increases the decisive vote probability for the voters within a coalition, but the aggregate effect of coalitions is to decrease the average decisiveness of the population of voters. We then review results on voting power in an electoral college system. Under the random voting assumption, it is well known that the voters with the highest probability of decisiveness are those in large states. However, we show using empirical estimates of the closeness of historical U.S. Presidential elections that voters in small states have been advantaged because the random voting model overestimates the frequencies of close elections in the larger states. Finally, we estimate the average probability of decisiveness for all U.S. Presidential elections from 1960 to 2000 under three possible electoral systems: popular vote, electoral vote, and winner-take-all within Congressional districts. We find that the average probability of decisiveness is about the same under all three systems.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/4q8jx-vrx44An Improved Statistical Model for Multiparty Electoral Data
https://resolver.caltech.edu/CaltechAUTHORS:20170807-153549706
Authors: {'items': [{'id': 'Honaker-J', 'name': {'family': 'Honaker', 'given': 'James'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'King-G', 'name': {'family': 'King', 'given': 'Gary'}, 'orcid': '0000-0002-5327-7631'}]}
Year: 2017
DOI: 10.7907/c5d2w-qmn15
Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. Their model is, in principle, analogous to what least squares regression provides American politics researchers in that two-party system. Katz and King applied their model to three-party elections in England and revealed a variety of new features of incumbency advantage and where each party pulls support from. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three. The original goal of our work was to produce an approximate method that works quicker in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on Katz and King's (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer easy-to-use software that implements our suggestions.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/c5d2w-qmn15Aggregation and Dynamics of Survey Responses: The Case of Presidential Approval
https://resolver.caltech.edu/CaltechAUTHORS:20170807-164540519
Authors: {'items': [{'id': 'Alvarez-R-M', 'name': {'family': 'Alvarez', 'given': 'R. Michael'}, 'orcid': '0000-0002-8113-4451'}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/kdsp6-nwb86
In this paper we critique much of the empirical literature on the important political science concept of presidential approval. Much of the recent research on presidential approval has focused on the dynamic nature of approval; arguments have raged about whether presidential approval is integrated, co-integrated, or fractionally integrated. We argue that none of these time-series concepts, imported from an econometrics literature which has fundamentally different types of data than do political scientists, can apply to the presidential approval time series. Instead, we advocate careful use of aggregated approval as a time-series cross-section, or the use of individual-level survey responses. Ultimately most of the important hypotheses political scientists wish to test regarding presidential approval involve individual voters or citizens; thus we argue that using the appropriate data unit is the best methodology.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/kdsp6-nwb86Throwing Out the Baby with the Bath Water: A Comment on Green, Yoon and Kim
https://resolver.caltech.edu/CaltechAUTHORS:20170731-170111519
Authors: {'items': [{'id': 'Beck-N', 'name': {'family': 'Beck', 'given': 'Nathaniel'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/mkbra-4vn16
[No abstract]https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/mkbra-4vn16Post-Stratification without Population Level Information on the Post-Stratifying Variable, with Application to Political Polling
https://resolver.caltech.edu/CaltechAUTHORS:20170808-143149807
Authors: {'items': [{'id': 'Reilly-Cavan', 'name': {'family': 'Reilly', 'given': 'Cavan'}}, {'id': 'Gelman-Andrew', 'name': {'family': 'Gelman', 'given': 'Andrew'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/814rt-46j78
We investigate the construction of more precise estimates of a collection of population means using information about a related variable in the context of repeated sample surveys. The method is illustrated using poll results concerning presidential approval rating (our related variable is political party identification). We use post-stratification to construct these improved estimates, but since we don't have population level information on the post-stratifying variable, we construct a model for the manner in which the post-stratifier develops over time. In this manner, we obtain more precise estimates without making possibly untenable assumptions about the dynamics of our variable of interest, the presidential approval rating.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/814rt-46j78Beyond Ordinary Logit: Taking Time Seriously in Binary Time-Series-Cross-Section Models
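A minimal sketch (Python, ours; names and numbers hypothetical) of the post-stratified estimate the abstract above relies on: the overall mean is a weighted average of within-category means, with the category shares supplied by a model-based estimate of party identification rather than by census totals.

import numpy as np

def poststratified_mean(cell_means, cell_shares):
    # Post-stratified estimate: sum over categories of share_h * mean_h.
    cell_means = np.asarray(cell_means, dtype=float)
    cell_shares = np.asarray(cell_shares, dtype=float)
    return float(np.dot(cell_means, cell_shares / cell_shares.sum()))

# Hypothetical survey wave: approval by party ID (Dem, Ind, Rep).
approval_by_pid = [0.85, 0.55, 0.20]
raw_shares      = [0.42, 0.28, 0.30]  # shares observed in this one sample
model_shares    = [0.36, 0.31, 0.33]  # smoothed shares from a dynamic model of party ID

print(poststratified_mean(approval_by_pid, raw_shares))    # unadjusted estimate
print(poststratified_mean(approval_by_pid, model_shares))  # post-stratified estimate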
https://resolver.caltech.edu/CaltechAUTHORS:20170814-133839169
Authors: {'items': [{'id': 'Beck-N', 'name': {'family': 'Beck', 'given': 'Nathaniel'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'Tucker-R', 'name': {'family': 'Tucker', 'given': 'Richard'}}]}
Year: 2017
DOI: 10.7907/2fmk9-nvs52
Researchers typically analyze time-series-cross-section data with a binary dependent variable (BTSCS) using ordinary logit or probit. However, BTSCS observations are likely to violate the independence assumption of the ordinary logit or probit statistical model. It is well known that if the observations are temporally related, the results of an ordinary logit or probit analysis may be misleading. In this paper, we provide a simple diagnostic for temporal dependence and a simple remedy. Our remedy is based on the idea that BTSCS data is identical to grouped duration data. This remedy does not require the BTSCS analyst to acquire any further methodological skills and it can be easily implemented in any standard statistical software package. While our approach is suitable for any type of BTSCS data, we provide examples and applications from the field of International Relations, where BTSCS data is frequently used. We use our methodology to re-assess Oneal and Russett's (1997) findings regarding the relationship between economic interdependence, democracy, and peace. Our analyses show that 1) their finding that economic interdependence is associated with peace is an artifact of their failure to account for temporal dependence and 2) their finding that democracy inhibits conflict is upheld even taking duration dependence into account.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/2fmk9-nvs52The Reapportionment Revolution and Bias in U.S. Congressional Elections
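The remedy in the abstract above exploits the equivalence between BTSCS data and grouped duration data by adding functions of the time elapsed since the last event to the ordinary logit. A minimal sketch, assuming a hypothetical dyad-year data frame with a binary conflict indicator and a precomputed peace_years counter; the cubic polynomial below stands in for the temporal dummies or splines used in published applications.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dyad-year data with columns: conflict (0/1), interdependence,
# democracy, and peace_years (years since the last conflict in the dyad).
df = pd.read_csv("dyad_years.csv")  # placeholder file name

# Ordinary logit that ignores temporal dependence.
naive = smf.logit("conflict ~ interdependence + democracy", data=df).fit()

# Grouped-duration logit: add a smooth function of time since the last event.
duration = smf.logit(
    "conflict ~ interdependence + democracy + peace_years"
    " + I(peace_years**2) + I(peace_years**3)",
    data=df,
).fit()
print(duration.summary())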
https://resolver.caltech.edu/CaltechAUTHORS:20170814-144530248
Authors: {'items': [{'id': 'Cox-G-W', 'name': {'family': 'Cox', 'given': 'Gary W.'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/xt8ng-zdv36
We develop a simple formal model of the redistricting process that highlights the importance of two factors: first, partisan or bipartisan control of the redistricting process; second, the nature of the reversionary outcome, should the state legislature and governor fail to agree on a new districting plan. Using this model, we derive various predictions about the levels of partisan bias and responsiveness that should be observed under districting plans adopted under various constellations of partisan control of state government and reversionary outcomes, testing our predictions on postwar (1946-70) U.S. House electoral data. We find strong evidence that both partisan control and reversionary outcomes systematically affect the nature of a redistricting plan and the subsequent elections held under it. Further, we show that the well-known disappearance circa 1966 of what had been a long-time pro-Republican bias of about 6% in nonsouthern congressional elections can be explained completely by the changing composition of northern districting plans.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/xt8ng-zdv36A Statistical Model for Multiparty Electoral Data
https://resolver.caltech.edu/CaltechAUTHORS:20170814-155246837
Authors: {'items': [{'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'King-G', 'name': {'family': 'King', 'given': 'Gary'}, 'orcid': '0000-0002-5327-7631'}]}
Year: 2017
DOI: 10.7907/ktgms-6m904
We propose an internally consistent and comprehensive statistical model for analyzing multiparty, district-level aggregate election data. This model can be used to explain or predict how the geographic distribution of electoral results depends upon economic conditions, neighborhood ethnic compositions, campaign spending, and other features of the election campaign or characteristics of the aggregate areas. We also provide several new graphical representations for help in data exploration, model evaluation, and substantive interpretation.
Although the model applies more generally, we use it to help resolve an important controversy over the size of and trend in the electoral advantage of incumbency in Great Britain. Contrary to previous analyses, which are all based on measures now known to be biased, we demonstrate that the incumbency advantage is small but politically meaningful. We also find that it differs substantially across the parties, about half a percent for the Conservatives, 1% for the Labor Party, and 3% for the Liberal party and its successors. Also contrary to previous research, we show that these effects have not grown in recent years. Finally, we are able to estimate from which party each party's incumbency advantage is predominantly drawn.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/ktgms-6m904Why Did The Incumbency Advantage In U.S. House Elections Grow?
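A sketch of the transformation that typically underlies district-level multiparty vote-share models such as this one: compositional vote shares are mapped to log-ratios against a reference party, modeled on that unbounded scale, and mapped back. The distributional details of Katz and King's model are in the paper; the Python snippet below (ours) only illustrates the transformation.

import numpy as np

def to_logratios(shares):
    # Map vote shares (each row sums to 1) to log-ratios against the last party.
    shares = np.asarray(shares, dtype=float)
    return np.log(shares / shares[:, [-1]])[:, :-1]

def from_logratios(logratios):
    # Invert: append the reference party's implicit zero and renormalize.
    z = np.column_stack([logratios, np.zeros(len(logratios))])
    expz = np.exp(z)
    return expz / expz.sum(axis=1, keepdims=True)

districts = np.array([[0.45, 0.40, 0.15],   # e.g., Con, Lab, Lib shares
                      [0.30, 0.50, 0.20]])
lr = to_logratios(districts)
print(lr)
print(from_logratios(lr))  # recovers the original shares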
https://resolver.caltech.edu/CaltechAUTHORS:20170817-154732918
Authors: {'items': [{'id': 'Cox-G-W', 'name': {'family': 'Cox', 'given': 'Gary W.'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2017
DOI: 10.7907/mztr7-cd807
In the last twenty years, scholars have scrutinized the electoral advantages conferred by incumbency, both at the federal and at the state level, more than perhaps any other factor affecting U.S. legislative elections. Much of the literature focuses on explaining why the incumbency advantage in U.S. House elections grew so substantially, starting in the mid-1960s. The dominant contenders in the literature are two: one emphasizing resources of various kinds (Mayhew 1974) and opportunities to perform constituency services (Fiorina 1977; 1989), the other emphasizing partisan dealignment (Erikson 1972; Burnham 1974; Ferejohn 1977). While not incompatible, these explanations do point to significantly different factors as key, and neither has emerged as a clear winner.
In this paper, we suggest a new approach to measuring the incumbency advantage, one that disaggregates the total value of incumbency into three components. By examining the trends over time in these three components we find evidence suggesting that much of the growth in the incumbency advantage at the federal level cannot be accounted for by resource growth; rather, some version of the dealignment story will have to be employed.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/mztr7-cd807Government Partisanship, Labor Organization and Macroeconomic Performance: A Corrigendum
https://resolver.caltech.edu/CaltechAUTHORS:20170824-162753007
Authors: {'items': [{'id': 'Beck-N', 'name': {'family': 'Beck', 'given': 'Nathaniel'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'Alvarez-R-M', 'name': {'family': 'Alvarez', 'given': 'R. Michael'}, 'orcid': '0000-0002-8113-4451'}, {'id': 'Garrett-G', 'name': {'family': 'Garrett', 'given': 'Geoffrey'}}, {'id': 'Lange-P', 'name': {'family': 'Lange', 'given': 'Peter'}}]}
Year: 2017
DOI: 10.7907/5f7wb-bjt75
Alvarez, Garrett and Lange (1991) used cross-national panel data on the OECD nations to show that countries with left governments and encompassing labor movements enjoyed superior economic performance. Here we show that the standard errors reported in that article are incorrect. Re-estimation of the model using ordinary least squares and robust standard errors shows that the major finding of Alvarez, Garrett and Lange, regarding the political and institutional causes of economic growth, is upheld but the findings for unemployment and inflation are open to question. We show that the model used by Alvarez, Garrett and Lange, feasible generalized least squares, cannot produce standard errors when the number of countries analyzed exceeds the length of the time period under analysis. Also, we argue that ordinary least squares with robust standard errors is superior to feasible generalized least squares for typical cross-national panel studies.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/5f7wb-bjt75Constitutions of Exception: The Constitutional Foundations of the Interruption of Executive and Legislative Function
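A compact numpy sketch (ours) of panel-corrected standard errors of the kind the abstract above recommends: OLS point estimates with the cross-sectional error covariance estimated from the OLS residuals and plugged into a sandwich formula. The same residual covariance is what feasible GLS would have to invert, which fails (it is rank deficient) whenever the number of units exceeds the number of time points, the problem the abstract identifies.

import numpy as np

def ols_with_pcse(y, X, n_units, n_periods):
    # y and X are stacked unit by unit: rows 0..T-1 are unit 1, the next T rows
    # unit 2, and so on.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = (y - X @ beta).reshape(n_units, n_periods)
    sigma = resid @ resid.T / n_periods         # N x N contemporaneous covariance
    omega = np.kron(sigma, np.eye(n_periods))   # assumes no remaining serial correlation
    bread = np.linalg.inv(X.T @ X)
    cov_beta = bread @ X.T @ omega @ X @ bread  # sandwich estimator
    # When n_units > n_periods, sigma has rank at most n_periods, so FGLS
    # (which must invert sigma) is not computable; PCSE only needs sigma itself.
    return beta, np.sqrt(np.diag(cov_beta))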
https://resolver.caltech.edu/CaltechAUTHORS:20180816-160144978
Authors: {'items': [{'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'McCubbins-M-D', 'name': {'family': 'McCubbins', 'given': 'Mathew D.'}}]}
Year: 2018
DOI: 10.1628/093245617x15120238641848
Constitutions of exception are commonplace legal regimes that prescribe conditions and procedures under which the constitution itself can be legally suspended. Often the suspension of the constitution involves the interruption of scheduled elections for the national legislature and/or the chief executive. Military participation in civilian government and the derogation of civil liberties and rights are also typical. There is a debate in the literature about the value of exception clauses. We argue that they threaten democratic stability and consolidation. We do this by studying a large panel of countries and their constitutions over the twentieth century.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/s0zt0-1an38An Audit of Political Behavior Research
https://resolver.caltech.edu/CaltechAUTHORS:20180830-091610406
Authors: {'items': [{'id': 'Robison-J', 'name': {'family': 'Robison', 'given': 'Joshua'}}, {'id': 'Stevenson-R-T', 'name': {'family': 'Stevenson', 'given': 'Randy T.'}}, {'id': 'Druckman-J-N', 'name': {'family': 'Druckman', 'given': 'James N.'}}, {'id': 'Jackman-S', 'name': {'family': 'Jackman', 'given': 'Simon'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'Vavreck-L', 'name': {'family': 'Vavreck', 'given': 'Lynn'}}]}
Year: 2018
DOI: 10.1177/2158244018794769
What are the most important concepts in the political behavior literature? Have experiments supplanted surveys as the dominant method in political behavior research? What role does the American National Election Studies (ANES) play in this literature? We utilize a content analysis of over 1,100 quantitative articles on American mass political behavior published between 1980 and 2009 to address these questions. We then supplement this with a second sample of articles published between 2010 and 2018. Four key takeaways are apparent. First, the agenda of this literature is heavily skewed toward understanding voting, with a relative lack of attention to specific policy attitudes and other topics. Second, experiments are ascendant, but are far from displacing surveys, particularly the ANES. Third, while important changes to this agenda have occurred over time, it remains much the same in 2018 as it was in 1980. Fourth, the centrality of the ANES seems to stem from its time-series component. In the end, we conclude that the ANES is a critical investment for the scientific community and a main driver of political behavior research.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/jn1sm-rd515Random Coefficient Models for Time-Series–Cross-Section Data
https://resolver.caltech.edu/CaltechAUTHORS:20191018-164513923
Authors: {'items': [{'id': 'Beck-N', 'name': {'family': 'Beck', 'given': 'Nathaniel'}}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}]}
Year: 2019
DOI: 10.7907/dgeyw-vqa80
This paper considers random coefficient models (RCMs) for time-series–cross-section data. These models allow for unit-to-unit variation in the model parameters. After laying out the various models, we assess several issues in specifying RCMs. We then consider the finite sample properties of some standard RCM estimators, and show that the most common one, associated with Hsiao, has very poor properties. These analyses also show that a somewhat awkward combination of estimators based on Swamy's work performs reasonably well; this awkward estimator and a Bayes estimator with an uninformative prior (due to Smith) seem to perform best. But we also see that estimators which assume full pooling perform well unless there is a large degree of unit-to-unit parameter heterogeneity. We also argue that the various data-driven methods (whether classical or empirical Bayes or Bayes with gentle priors) tend to lead to much more heterogeneity than most political scientists would like. We speculate that fully Bayesian models, with a variety of informative priors, may be the best way to approach RCMs.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/dgeyw-vqa80Hidden Donors: The Censoring Problem in U.S. Federal Campaign Finance Data
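In outline (our notation, a sketch rather than the paper's full treatment), the random coefficient setup lets slopes vary by unit around a common mean:

y_{it} = x_{it}\beta_i + \varepsilon_{it}, \qquad \beta_i = \bar\beta + \eta_i, \quad \eta_i \sim \mathrm{N}(0, \Gamma),

with full pooling recovered as \Gamma \to 0 and separate unit-by-unit regressions as \Gamma grows large. A Swamy-type estimate of \bar\beta is a matrix-weighted average of the unit-by-unit OLS estimates \hat\beta_i, with weights proportional to (\hat\Gamma + \widehat{\mathrm{Var}}(\hat\beta_i))^{-1}, which is why the estimated degree of unit-to-unit heterogeneity governs how far the results move away from the pooled fit.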
https://resolver.caltech.edu/CaltechAUTHORS:20200109-081212045
Authors: {'items': [{'id': 'Alvarez-R-M', 'name': {'family': 'Alvarez', 'given': 'R. Michael'}, 'orcid': '0000-0002-8113-4451'}, {'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'Kim-Seo-young Silvia', 'name': {'family': 'Kim', 'given': 'Seo-young Silvia'}, 'orcid': '0000-0002-8801-9210'}]}
Year: 2020
DOI: 10.33774/apsa-2020-sdjkp
Inferences about individual campaign contributors are limited by how the Federal Election Commission collects and reports data. Only transactions that exceed a cycle-to-date total of $200 are individually disclosed, so that contributions of many donors are unobserved. We contrast visible donors with "hidden" donors, i.e., small donors who are invisible due to censoring and are routinely ignored in existing research. We use the 2016 Sanders presidential campaign, which, uniquely, received money only through an intermediary (conduit) committee. Such committees are governed by stricter disclosure statutes, allowing us to study donors who are normally hidden. For Sanders, there were seven hidden donors for every visible donor, and altogether, hidden donors were responsible for 33.8% of Sanders' campaign funds. We show that hidden donors start giving relatively later, with contributions concentrated around early primaries. We suggest that as presidential campaign strategies change towards wooing smaller donors, more research on what motivates them is necessary.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/pvv8e-ae423Theoretical Foundations and Empirical Evaluations of Partisan Fairness in District-Based Democracies
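A stylized sketch (Python, ours) of the itemization rule that creates the censoring described in the abstract above: with a $200 cycle-to-date threshold, a donor's early small transactions never appear in the itemized file, and a donor whose total giving stays at or below $200 is entirely invisible in the standard data. Actual FEC reporting has further nuances not captured here.

def itemized_transactions(amounts, threshold=200.0):
    # Return the transactions that would be individually disclosed, i.e., those
    # made once the donor's cycle-to-date total exceeds the threshold.
    total, visible = 0.0, []
    for amt in amounts:
        total += amt
        if total > threshold:
            visible.append(amt)
    return visible

print(itemized_transactions([27, 27, 27, 27]))        # [] -> a "hidden" donor
print(itemized_transactions([50, 100, 75, 25, 500]))  # [75, 25, 500] disclosed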
https://resolver.caltech.edu/CaltechAUTHORS:20191030-114031247
Authors: {'items': [{'id': 'Katz-J-N', 'name': {'family': 'Katz', 'given': 'Jonathan N.'}, 'orcid': '0000-0002-5287-3503'}, {'id': 'King-G', 'name': {'family': 'King', 'given': 'Gary'}, 'orcid': '0000-0002-5327-7631'}, {'id': 'Rosenblatt-E', 'name': {'family': 'Rosenblatt', 'given': 'Elizabeth'}, 'orcid': '0000-0002-8890-3014'}]}
Year: 2020
DOI: 10.1017/S000305541900056X
We clarify the theoretical foundations of partisan fairness standards for district-based democratic electoral systems, including essential assumptions and definitions not previously recognized, formalized, or in some cases even discussed. We also offer extensive empirical evidence for assumptions with observable implications. We cover partisan symmetry, the most commonly accepted fairness standard, and other perspectives. Throughout, we follow a fundamental principle of statistical inference too often ignored in this literature—defining the quantity of interest separately so its measures can be proven wrong, evaluated, and improved. This enables us to prove which of the many newly proposed fairness measures are statistically appropriate and which are biased, limited, or not measures of the theoretical quantity they seek to estimate at all. Because real-world redistricting and gerrymandering involve complicated politics with numerous participants and conflicting goals, measures biased for partisan fairness sometimes still provide useful descriptions of other aspects of electoral systems.https://authors.library.caltech.eduhttps://authors.library.caltech.edu/records/awqcc-r3g16
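For reference, the partisan symmetry standard that the abstract identifies as the most commonly accepted one can be stated (a textbook formulation, not the paper's full treatment) as a condition on the seats-votes relationship: if S(v) denotes the expected seat share a party receives when it wins average vote share v, symmetry requires S(v) = 1 - S(1 - v) for all v, and a common summary of partisan bias is S(0.5) - 0.5, the extra seat share a party would receive with exactly half the votes. Estimating S(·) requires a statistical model of district-level outcomes, which is where the assumptions the paper formalizes and evaluates empirically come in.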