CaltechAUTHORS: Article
https://feeds.library.caltech.edu/people/Katz-J-N/article.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
Generator: python-feedgen | Language: en
Fri, 08 Nov 2024 19:06:54 -0800

Government Partisanship, Labor Organization, and Macroeconomic Performance: A Corrigendum
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455643
Year: 1993
DOI: 10.2307/2938825
Alvarez, Garrett and Lange (1991) used cross-national panel data on the Organization for Economic Cooperation and Development nations to show that countries with left governments and encompassing labor movements enjoyed superior economic performance. Here we show that the standard errors reported in that article are incorrect. Reestimation of the model using ordinary least squares and robust standard errors upholds the major finding of Alvarez, Garrett and Lange regarding the political and institutional causes of economic growth but leaves the findings for unemployment and inflation open to question. We show that the model used by Alvarez, Garrett and Lange, feasible generalized least squares, cannot produce standard errors when the number of countries analyzed exceeds the length of the time period under analysis. Also, we argue that ordinary least squares with robust standard errors is superior to feasible generalized least squares for typical cross-national panel studies.

What To Do (and Not To Do) with Time-Series Cross-Section Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455208
Year: 1995
DOI: 10.2307/2082979
We examine some issues in the estimation of time-series cross-section models, calling into
question the conclusions of many published studies, particularly in the field of comparative
political economy. We show that the generalized least squares approach of Parks produces
standard errors that lead to extreme overconfidence, often underestimating variability by 50% or
more. We also provide an alternative estimator of the standard errors that is correct when the error
structures show complications found in this type of model. Monte Carlo analysis shows that these
"panel-corrected standard errors" perform well. The utility of our approach is demonstrated via a
reanalysis of one "social democratic corporatist" model.

Nuisance vs. Substance: Specifying and Estimating Time-Series-Cross-Section Models
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455306
Year: 1996
DOI: 10.1093/pan/6.1.1
In a previous article we showed that ordinary least squares with panel
corrected standard errors is superior to the Parks generalized least
squares approach to the estimation of time-series-cross-section models.
In this article we compare our proposed method with another leading
technique, Kmenta's "cross-sectionally heteroskedastic and timewise autocorrelated"
model. This estimator uses generalized least squares to
correct for both panel heteroskedasticity and temporally correlated errors.
We argue that it is best to model dynamics via a lagged dependent
variable rather than via serially correlated errors. The lagged dependent
variable approach makes it easier for researchers to examine dynamics
and allows for natural generalizations in a manner that the serially correlated
errors approach does not. We also show that the generalized
least squares correction for panel heteroskedasticity is, in general, no
improvement over ordinary least squares and is, in the presence of parameter
heterogeneity, inferior to it. In the conclusion we present a
unified method for analyzing time-series-cross-section data.

Careerism, Committee Assignments and the Electoral Connection
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456710
Year: 1996
DOI: 10.2307/2082795
Most scholars agree that members of Congress are strongly motivated by their desire for reelection. This assumption implies that members of Congress adopt institutions, rules, and norms of behavior in part to serve their electoral interests. Direct tests of the electoral connection are rare, however, because
significant, exogenous changes in the electoral environment are difficult to identify. We develop and test an electoral rationale for the norm of committee assignment "property rights." We examine committee tenure patterns before and after a major, exogenous change in the electoral system: the states' rapid adoption of Australian ballot laws in the early 1890s. The ballot changes, we argue, induced new "personal vote" electoral incentives, which contributed to the adoption of "modern" congressional institutions such as property rights to committee assignments. We demonstrate a marked increase in assignment stability after 1892, by which time a majority of states had put the new ballot laws into force, and earlier than previous studies have suggested.

Why Did the Incumbency Advantage in U.S. House Elections Grow?
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455847
Year: 1996
DOI: 10.2307/2111633
Theory: A simple rational entry argument suggests that the value of incumbency consists not just of a direct effect, reflecting the value of resources (such as staff) attached to legislative office, but also of an indirect effect, reflecting the fact that stronger challengers are less likely to contest incumbent-held seats. The indirect effect is the product of a scare-off effect (the ability of incumbents to scare off high-quality challengers) and a quality effect (reflecting how much electoral advantage a party accrues when it has an experienced rather than an inexperienced candidate).
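The direct/indirect decomposition above can be sketched in a few lines of arithmetic. The numbers below are hypothetical illustrations chosen only to show how the pieces combine; they are not estimates from the article:

```python
# Hypothetical figures (percentage points / probabilities), not estimates from the article.
direct = 2.0       # direct effect: vote gain from resources attached to office
scare_off = 0.3    # scare-off effect: drop in Pr(facing a high-quality challenger)
quality = 4.0      # quality effect: vote gain from a low- vs. high-quality challenger

indirect = scare_off * quality   # indirect effect = scare-off x quality
total = direct + indirect        # overall incumbency advantage
print(total)                     # 3.2
```

Under this decomposition, growth in `quality` alone raises the overall advantage even when `direct` is flat, which is the pattern the Results paragraph below describes.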
Hypothesis: The growth of the overall incumbency advantage was driven principally
by increases in the quality effect.
Methods: We use a simple two-equation model, estimated by ordinary least-squares
regression, to analyze U.S. House election data from 1948 to 1990.
Results: Most of the increase in the incumbency advantage, at least down to 1980,
came through increases in the quality effect (i.e., the advantage to the incumbent
party of having a low-quality challenger). This suggests that the task for those
wishing to explain the growth in the vote-denominated incumbency advantage is
to explain why the quality effect grew. It also suggests that resource-based explanations
of the growth in the incumbency advantage cannot provide a full explanation.

Taking Time Seriously: Time-Series-Cross-Section Analysis with a Binary Dependent Variable
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455747
Year: 1998
DOI: 10.2307/2991857
Researchers typically analyze time-series-cross-section data with a binary dependent
variable (BTSCS) using ordinary logit or probit. However, BTSCS observations are
likely to violate the independence assumption of the ordinary logit or probit statistical
model. It is well known that if the observations are temporally related, the results of
an ordinary logit or probit analysis may be misleading. In this paper, we provide a simple
diagnostic for temporal dependence and a simple remedy. Our remedy is based on the
idea that BTSCS data are identical to grouped duration data. This remedy does not require
the BTSCS analyst to acquire any further methodological skills, and it can be easily
implemented in any standard statistical software package. While our approach is suitable
for any type of BTSCS data, we provide examples and applications from the field of
International Relations, where BTSCS data are frequently used. We use our methodology
to reassess Oneal and Russett's (1997) findings regarding the relationship between economic
interdependence, democracy, and peace. Our analyses show that (1) their finding
that economic interdependence is associated with peace is an artifact of their failure to
account for temporal dependence yet (2) their finding that democracy inhibits conflict is
upheld even taking duration dependence into account.

A Statistical Model for Multiparty Electoral Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456629
Year: 1999
DOI: 10.2307/2585758
We propose a comprehensive statistical model for analyzing multiparty, district-level elections. This
model, which provides a tool for comparative politics research analogous to that which regression
analysis provides in the American two-party context, can be used to explain or predict how
geographic distributions of electoral results depend upon economic conditions, neighborhood ethnic
compositions, campaign spending, and other features of the election campaign or aggregate areas. We also
provide new graphical representations for data exploration, model evaluation, and substantive interpretation.
We illustrate the use of this model by attempting to resolve a controversy over the size of and trend in the
electoral advantage of incumbency in Britain. Contrary to previous analyses, all based on measures now
known to be biased, we demonstrate that the advantage is small but meaningful, varies substantially across
the parties, and is not growing. Finally, we show how to estimate the party from which each party's advantage
is predominantly drawn.

The Reapportionment Revolution and Bias in U.S. Congressional Elections
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455941
Year: 1999
DOI: 10.2307/2991836
We develop a formal model of the redistricting process that highlights the importance of two factors: first, partisan or bipartisan control of the redistricting process; second, the nature of the reversionary outcome, should the state legislature and governor fail to agree on a new districting plan. Using this model, we predict the levels of partisan bias and responsiveness that should be observed under districting plans adopted under various constellations of partisan control of state government and reversionary outcomes, testing our predictions on postwar (1946-70) U.S. House electoral data. We find strong evidence that both partisan control and reversionary outcomes systematically affect the nature of a redistricting plan and the subsequent elections held under it. Further, we show that the well-known disappearance circa 1966 of what had been a long-time pro-Republican bias of about 6 percent in nonsouthern congressional elections can be explained largely by the changing composition of northern districting plans.

Poststratification Without Population Level Information on the Poststratifying Variable With Application to Political Polling
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456999
Year: 2001
DOI: 10.1198/016214501750332640
We investigate the construction of more precise estimates of a collection of population means using information about a related variable in the context of repeated sample surveys. The method is illustrated using poll results concerning presidential approval rating (our related variable is political party identification). We use poststratification to construct these improved estimates, but because we do not have population level information on the poststratifying variable, we construct a model for the manner in which the poststratifier develops over time. In this manner, we obtain more precise estimates without making possibly untenable assumptions about the dynamics of our variable of interest, the presidential approval rating.

Throwing Out the Baby With the Bath Water: A Comment on Green, Kim and Yoon
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456912
Year: 2001
DOI: 10.1162/00208180151140658
Donald P. Green, Soo Yeon Kim, and David H. Yoon contribute to the literature on
estimating pooled time-series cross-section models in international relations (IR).
They argue that such models should be estimated with fixed effects when such
effects are statistically necessary. While we obviously have no disagreement that
sometimes fixed effects are appropriate, we show here that they are pernicious for
IR time-series cross-section models with a binary dependent variable and that they
are often problematic for IR models with a continuous dependent variable. In the
binary case, this perniciousness is the result of many pairs of nations always being
scored zero and hence having no impact on the parameter estimates; for example,
many dyads never come into conflict. In the continuous case, fixed effects are
problematic in the presence of the temporally stable regressors that are common in IR
applications, such as the dyadic democracy measures used by Green, Kim, and
Yoon.

Throwing Out the Baby with the Bath Water: A Comment on Green, Kim, and Yoon
https://resolver.caltech.edu/CaltechAUTHORS:20170809-142415851
Year: 2001
DOI: 10.1162/00208180151140658
Donald P. Green, Soo Yeon Kim, and David H. Yoon argue that many findings in quantitative international relations that use the dyad-year design are flawed. In particular, they argue that the effect of democracy on both trade and conflict has been vastly overstated, that researchers have ignored unobserved heterogeneity between the various dyads, and that heterogeneity can be best modeled by "fixed effects," that is, a model that includes a separate dummy for each dyad.
We argue that the use of fixed effects is almost always a bad idea for dyad-year data with a binary dependent variable like conflict. This is because conflict is a rare event, and the inclusion of fixed effects requires us to not analyze dyads that never conflict. Thus while the 90 percent of dyads that never conflict are more likely to be democratic, the use of fixed effects gives democracy no credit for the lack of conflict in these dyads. Green, Kim, and Yoon's fixed-effects logit can tell us little, if anything, about the pacific effects of democracy.
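The mechanics of this objection can be seen in a small simulation (hypothetical numbers, not the authors' data): a fixed-effects (conditional) logit can use only groups whose outcome varies, so dyads that never conflict drop out of the likelihood entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dyad-year data: 100 dyads observed for 30 years,
# with only 10 dyads ever at risk of conflict (a rare event).
y = np.zeros((100, 30), dtype=int)
at_risk = rng.choice(100, size=10, replace=False)
y[at_risk] = rng.binomial(1, 0.1, size=(10, 30))

# A conditional (fixed-effects) logit retains only dyads whose outcome varies;
# all-zero (and all-one) dyads contribute nothing to the estimates.
varies = (y.sum(axis=1) > 0) & (y.sum(axis=1) < 30)
print(int(varies.sum()), "of 100 dyads retained")
```

At least 90 of the 100 dyads are discarded here, which is exactly the sense in which the peaceful (and disproportionately democratic) dyads give democracy no credit under fixed effects.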
Their analysis of the impact of democracy on trade is also flawed. The inclusion of fixed effects almost always masks the impact of slowly changing independent variables; the democracy score is such a variable. Thus it is no surprise that the inclusion of dyadic dummy variables in their model completely masks the relationship between democracy and trade. We show that their preferred fixed-effects specification does not outperform a model with no effects (when that model is correctly specified in other ways). Thus there is no need to include the masking fixed effects, and so Green, Kim, and Yoon's findings do not overturn previous work that found that democracy enhanced trade.
We agree with Green, Kim, and Yoon that modeling heterogeneity in time-series cross-section data is important. We mention a number of alternatives to their fixed-effects approach, none of which would have the pernicious consequences of using dyadic dummies in their two reanalyses.

A Fast, Easy, and Efficient Estimator for Multiparty Electoral Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456464
Year: 2002
DOI: 10.1093/pan/10.1.84
Katz and King have previously developed a model for predicting or explaining aggregate electoral results in multiparty democracies. Their model is, in principle, analogous to what least-squares regression provides American political researchers in that two-party system. Katz and King applied their model to three-party elections in England and revealed a variety of new features of incumbency advantage and sources of party support. Although the mathematics of their statistical model covers any number of political parties, it is computationally demanding, and hence slow and numerically imprecise, with more than three parties. In this paper we produce an approximate method that works in practice with many parties without making too many theoretical compromises. Our approach is to treat the problem as one of missing data. This allows us to use a modification of the fast EMis algorithm of King, Honaker, Joseph, and Scheve and to provide easy-to-use software, while retaining the attractive features of the Katz and King model, such as the t distribution and explicit models for uncontested seats.

The Mathematics and Statistics of Voting Power
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456380
Year: 2002
In an election, voting power—the probability that a single vote is
decisive—is affected by the rule for aggregating votes into a single outcome.
Voting power is important for studying political representation, fairness and
strategy, and has been much discussed in political science. Although power
indexes are often considered as mathematical definitions, they ultimately
depend on statistical models of voting. Mathematical calculations of voting
power usually have been performed under the model that votes are decided
by coin flips. This simple model has interesting implications for weighted
elections, two-stage elections (such as the U.S. Electoral College) and
coalition structures. We discuss empirical failings of the coin-flip model of
voting and consider, first, the implications for voting power and, second,
ways in which votes could be modeled more realistically. Under the random
voting model, the standard deviation of the average of n votes is proportional
to 1/√n, but under more general models, this variance can have the form c·n^(−α) or
√(a − b log n). Voting power calculations under more realistic models
present research challenges in modeling and computation.

Standard Voting Power Indexes Don't Work: An Empirical Analysis
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456292
Year: 2004
DOI: 10.1017/S0007123404000237
Voting power indexes such as that of Banzhaf are derived, explicitly or implicitly, from the assumption that
all votes are equally likely (i.e., random voting). That assumption implies that the probability of a vote being
decisive in a jurisdiction with n voters is proportional to 1/√n. In this article the authors show how this
hypothesis has been empirically tested and rejected using data from various US and European elections. They
find that the probability of a decisive vote is approximately proportional to 1/n. The random voting model (and,
more generally, the square-root rule) overestimates the probability of close elections in larger jurisdictions.
As a result, classical voting power indexes make voters in large jurisdictions appear more powerful than
they really are. The most important political implication of their result is that proportionally weighted voting
systems (that is, each jurisdiction gets a number of votes proportional to n) are basically fair. This contradicts
the claim in the voting power literature that weights should be approximately proportional to √n.

Indecision Theory: Weight of Evidence and Voting Behavior
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456198
Year: 2006
DOI: 10.1111/j.1467-9779.2006.00269.x
In this paper, we show how to incorporate weight of evidence, or ambiguity,
into a model of voting behavior. We do so in the context of
the turnout decision of instrumentally rational voters who differ in
their perception of the ambiguity of the candidates' policy positions.
Ambiguity is reflected by the fact that the voter's beliefs are given
by a set of probabilities, each of which represents in the voter's mind
a different possible scenario. We show that a voter who is averse to
ambiguity considers abstention strictly optimal when the candidates'
policy positions are both ambiguous and they are "ambiguity complements."
Abstaining is preferred since it is tantamount to mixing the
prospects embodied by the two candidates, thus enabling the voter
to "hedge" the candidates' ambiguity.

Comment on 'What To Do (and Not To Do) with Time-Series-Cross-Section Data'
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455389
Year: 2006
DOI: 10.1017/S0003055406292566
Much as we would like to believe that the high citation count
for this article is due to the brilliance and clarity of our argument,
it is more likely that the count is due to our being in the
right place (that is, the right part of the discipline) at the right
time. In the 1960s and 1970s, serious quantitative analysis
was used primarily in the study of American politics. But
since the 1980s it has spread to the study of both comparative
politics and international relations. In comparative politics
we see in the 20 most cited Review articles Hibbs's (1977)
and Cameron's (1978) quantitative analyses of the political
economy of advanced industrial societies; in international
relations we see Maoz and Russett's (1993) analysis of the
democratic peace; and these studies have been followed by
myriad others. Our article contributed to the methodology
for analyzing what has become the principal type of data used in the study of comparative politics; a related article
(Beck, Katz, and Tucker 1998), which has also had a good
citation history, dealt with analyzing this type of data with a
binary dependent variable, data heavily used in conflict studies
similar to that of Maoz and Russett. Thus the citations
to our methodological discussions reflect the huge amount
of work now being done in the quantitative analysis of both
comparative politics and international relations.

Gerrymandering Roll-Calls in Congress, 1879-2000
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456110
Year: 2007
DOI: 10.1111/j.1540-5907.2007.00240.x
We argue that the standard toolbox used in electoral studies to assess the bias and responsiveness of electoral systems can
also be used to assess the bias and responsiveness of legislative systems. We consider which items in the toolbox are the most
appropriate for use in the legislative setting, then apply them to estimate levels of bias in the U.S. House from 1879 to 2000.
Our results indicate a systematic bias in favor of the majority party over this period, with the strongest bias arising during
the period of "czar rule" (51st–60th Congresses, 1889–1910) and during the post-packing era (87th–106th Congresses,
1961–2000). This finding is consistent with the majority party possessing a significant advantage, either in "buying" vote
options, in setting the agenda, or both.

Random Coefficient Models for Time-Series-Cross-Section Data: Monte Carlo Experiments
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455475
Year: 2007
DOI: 10.1093/pan/mpl001
This article considers random coefficient models (RCMs) for time-series–cross-section data.
These models allow for unit to unit variation in the model parameters. The heart of the article
compares the finite sample properties of the fully pooled estimator, the unit by unit
(unpooled) estimator, and the (maximum likelihood) RCM estimator. The maximum likelihood
RCM estimator performs well, even where the data were generated so that the RCM
would be problematic. In an appendix, we show that the most common feasible generalized
least squares estimator of the RCM models is always inferior to the maximum likelihood
estimator, and in smaller samples dramatically so.

Comment on 'Estimating incumbency advantage and its variation, as an example of a before-after study'
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456813
Year: 2008
DOI: 10.1198/016214507000000635
The incumbency advantage is one of the most widely studied phenomena in political science. In fact, it is one of the few quantities of interest in the field where there is relative agreement not only on its directionality, but also on its relative size. Thus I was somewhat dubious that any significant additions remained to be made to our understanding; however, Gelman and Huang have in fact made two important contributions.

Correcting for Survey Misreports using Auxiliary Information with an Application to Estimating Turnout
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120456550
Year: 2010
DOI: 10.1111/j.1540-5907.2010.00462.x
Misreporting is a problem that plagues researchers who use survey data. In this article, we develop a parametric model that
corrects for misclassified binary responses using information on the misreporting patterns obtained from auxiliary data
sources. The model is implemented within the Bayesian framework via Markov Chain Monte Carlo (MCMC) methods
and can be easily extended to address other problems exhibited by survey data, such as missing response and/or covariate
values. While the model is fully general, we illustrate its application in the context of estimating models of turnout using
data from the American National Election Studies.

An empirical Bayes approach to estimating ordinal treatment effects
https://resolver.caltech.edu/CaltechAUTHORS:20171208-161127899
Year: 2011
DOI: 10.1093/pan/mpq033
Ordinal variables—categorical variables with a defined order to the categories, but without equal spacing between them—are frequently used in social science applications. Although a good deal of research exists on the proper modeling of ordinal response variables, there is not a clear directive as to how to model ordinal treatment variables. The usual approaches found in the literature for using ordinal treatment variables are either to use fully unconstrained, though additive, ordinal group indicators or to use a numeric predictor constrained to be continuous. Generalized additive models are a useful exception to these assumptions. In contrast to the generalized additive modeling approach, we propose the use of a Bayesian shrinkage estimator to model ordinal treatment variables. The estimator we discuss in this paper allows the model to contain both individual group-level indicators and a continuous predictor. In contrast to traditionally used shrinkage models that pull the data toward a common mean, we use a linear model as the basis. Thus, each individual effect can be arbitrary, but the model "shrinks" the estimates toward a linear ordinal framework according to the data. We demonstrate the estimator on two political science examples: the impact of voter identification requirements on turnout and the impact of the frequency of religious service attendance on the liberality of abortion attitudes.

An Empirical Bayes Approach to Estimating Ordinal Treatment Effect
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120454857
Year: 2011
DOI: 10.1093/pan/mpq033
Ordinal variables—categorical variables with a defined order to the categories, but without equal spacing
between them—are frequently used in social science applications. Although a good deal of research exists on
the proper modeling of ordinal response variables, there is not a clear directive as to how to model ordinal
treatment variables. The usual approaches found in the literature for using ordinal treatment variables are
either to use fully unconstrained, though additive, ordinal group indicators or to use a numeric predictor
constrained to be continuous. Generalized additive models are a useful exception to these assumptions. In
contrast to the generalized additive modeling approach, we propose the use of a Bayesian shrinkage
estimator to model ordinal treatment variables. The estimator we discuss in this paper allows the model to
contain both individual group-level indicators and a continuous predictor. In contrast to traditionally used
shrinkage models that pull the data toward a common mean, we use a linear model as the basis. Thus, each
individual effect can be arbitrary, but the model "shrinks" the estimates toward a linear ordinal framework
according to the data. We demonstrate the estimator on two political science examples: the impact of voter
identification requirements on turnout and the impact of the frequency of religious service attendance on the
liberality of abortion attitudes.

Implementing Panel Corrected Standard Errors in R: The pcse Package
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455027
Year: 2011
Time-series-cross-section (TSCS) data are characterized by having repeated observations over time on some set of units, such as states or nations. TSCS data typically display
both contemporaneous correlation across units and unit-level heteroskedasticity, making inference
from standard errors produced by ordinary least squares incorrect. Panel-corrected
standard errors (PCSE) account for these deviations from spherical errors and allow
for better inference from linear models estimated from TSCS data. In this paper, we
discuss an implementation of them in the R system for statistical computing. The key
computational issue is how to handle unbalanced data.

Modeling Dynamics in Time-Series-Cross-Section Political Economy Data
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120455558
Year: 2011
DOI: 10.1146/annurev-polisci-071510-103222
This article deals with a variety of dynamic issues in the analysis of
time-series–cross-section (TSCS) data. Although the issues raised are
general, we focus on applications to comparative political economy,
which frequently uses TSCS data. We begin with a discussion of specification
and lay out the theoretical differences implied by the various
types of dynamic models that can be estimated. It is shown that there
is nothing pernicious in using a lagged dependent variable and that all
dynamic models either implicitly or explicitly have such a variable; the
differences between the models relate to assumptions about the speeds
of adjustment of measured and unmeasured variables. When adjustment
is quick, it is hard to differentiate between the various models;
with slower speeds of adjustment, the various models make sufficiently
different predictions that they can be tested against each other. As the
speed of adjustment gets slower and slower, specification (and estimation)
gets more and more tricky. We then turn to a discussion of estimation.
It is noted that models with both a lagged dependent variable
and serially correlated errors can easily be estimated; it is only ordinary
least squares that is inconsistent in this situation. There is a brief
discussion of lagged dependent variables combined with fixed effects
and issues related to non-stationarity. We then show how our favored
method of modeling dynamics combines nicely with methods for dealing
with other TSCS issues, such as parameter heterogeneity and spatial
dependence. We conclude with two examples.

Special Section on the Forty-First Annual ACM Symposium on Theory of Computing (STOC 2009)
https://resolver.caltech.edu/CaltechAUTHORS:20191126-134942908
Year: 2012
DOI: 10.1137/120973305
This issue of SICOMP contains nine specially selected papers from the Forty-first Annual ACM Symposium on the Theory of Computing, otherwise known as STOC 2009, held May 31 to June 2 in Bethesda, Maryland. The papers here were chosen to represent both the excellence and the broad range of the STOC program. The papers have been revised and extended by the authors, and subjected to the standard thorough reviewing process of SICOMP.
The program committee consisted of Susanne Albers, Andris Ambainis, Nikhil Bansal, Paul Beame, Andrej Bogdanov, Ran Canetti, David Eppstein, Dmitry Gavinsky, Shafi Goldwasser, Nicole Immorlica, Anna Karlin, Jonathan Katz, Jonathan Kelner, Subhash Khot, Ravi Kumar, Leslie Ann Goldberg, Michael Mitzenmacher (Chair), Kamesh Munagala, Rasmus Pagh, Anup Rao, Rocco Servedio, Mikkel Thorup, Chris Umans, and Lisa Zhang. They accepted 77 papers out of 321 submissions.
https://resolver.caltech.edu/CaltechAUTHORS:20191126-134942908
Estimating Partisan Bias of the Electoral College Under Proposed Changes in Elector Apportionment
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120457077
Year: 2013
DOI: 10.1515/spp-2012-0001
In the election for President of the United States, the Electoral College
is the body whose members vote to elect the President directly. Each state sends
a number of delegates equal to its total number of representatives and senators
in Congress; all but two states (Nebraska and Maine) assign electors pledged
to the candidate who wins the state's plurality vote. We investigate the effect
on presidential elections if states were to assign their electoral votes according
to results in each congressional district, and conclude that the direct popular
vote and the current Electoral College are both substantially fairer than the
alternatives in which states divide their electoral votes by congressional
district.
https://resolver.caltech.edu/CaltechAUTHORS:20140314-120457077
Of Nickell Bias and its Cures: Comment on Gaibulloev, Sandler, and Sul
https://resolver.caltech.edu/CaltechAUTHORS:20140529-082104040
Year: 2014
DOI: 10.1093/pan/mpu004
Gaibulloev, Sandler, and Sul (2014) (hereafter GSS) present two methodological suggestions for estimating dynamic panel models with fixed effects and provide an empirical application using them. Our interest is only in their methodological suggestions, so we do not discuss the empirical application here. One of their methodological suggestions is that analysts account for cross-sectional dependence by adjoining to the model a common factor that relates to events going on in the world that are not explained by the unit-level covariates. This is surely an interesting way to proceed, though we await further evidence on whether the recommended method is superior to the standard spatial econometric approach. Since GSS do not discuss this comparison, we do not either, but their approach is clearly of potential interest.
The second suggestion, that there is a problem with Nickell (1981) bias when the number of cross-sectional units (N) is considerably greater than the number of time points (T), and that this problem can be solved by simply analyzing subsets of the units independently, is on its face puzzling. In fact, we will argue that it is misguided. This suggestion is puzzling because usually in statistical analysis more data are better than less. GSS suggest that less data, or equivalently, independent analyses of subsets of the data, are superior to using all the data simultaneously. As with any suggested fix for a methodological problem, we ask: (1) does the problem exist; (2) is the problem serious in applied work; and (3) does the proposed solution do more good than harm? The answer to the first question is yes, but the answers to the last two are clearly no.
https://resolver.caltech.edu/CaltechAUTHORS:20140529-082104040
What's age got to do with it? Supreme Court appointees and the long run location of the Supreme Court median justice
https://resolver.caltech.edu/CaltechAUTHORS:20171208-150543356
Year: 2014
For approximately the past forty years, Republican Presidents have appointed younger Justices than have Democratic Presidents. Depending on how one does the accounting, the average age difference will vary, but it will not go away. This Article posits that Republicans appointing younger Justices than Democrats may have caused a rightward shift in the Supreme Court. We use computer simulations to show that if the trend continues, the rightward shift will likely increase. We also produce some very rough estimates of the size of the ideological shift, contingent on the size of the age differential. In addition, we show that the Senate's role in confirming nominated Justices has a significant moderating effect on the shift. Last, we consider the interaction between our results and the oft-proposed eighteen-year staggered terms for Supreme Court Justices. We show that such an institutional change would almost completely wipe out the ideological effect of one Party appointing younger Justices.
https://resolver.caltech.edu/CaltechAUTHORS:20171208-150543356
Constitutions of Exception: The Constitutional Foundations of the Interruption of Executive and Legislative Function
https://resolver.caltech.edu/CaltechAUTHORS:20180816-160144978
Year: 2018
DOI: 10.1628/093245617x15120238641848
Constitutions of exception are commonplace legal regimes that prescribe conditions and procedures under which the constitution itself can be legally suspended. Often the suspension of the constitution involves the interruption of scheduled elections for the national legislature and/or the chief executive. Military participation in civilian government and the derogation of civil liberties and rights are also typical. There is a debate in the literature about the value of exception clauses. We argue that they threaten democratic stability and consolidation. We do this by studying a large panel of countries and their constitutions over the twentieth century.
https://resolver.caltech.edu/CaltechAUTHORS:20180816-160144978
An Audit of Political Behavior Research
https://resolver.caltech.edu/CaltechAUTHORS:20180830-091610406
Year: 2018
DOI: 10.1177/2158244018794769
What are the most important concepts in the political behavior literature? Have experiments supplanted surveys as the dominant method in political behavior research? What role does the American National Election Studies (ANES) play in this literature? We utilize a content analysis of over 1,100 quantitative articles on American mass political behavior published between 1980 and 2009 to address these questions. We then supplement this with a second sample of articles published between 2010 and 2018. Four key takeaways are apparent. First, the agenda of this literature is heavily skewed toward understanding voting, with a relative lack of attention to specific policy attitudes and other topics. Second, experiments are ascendant, but are far from displacing surveys, and particularly the ANES. Third, while important changes to this agenda have occurred over time, it remains much the same in 2018 as it was in 1980. Fourth, the centrality of the ANES seems to stem from its time-series component. In the end, we conclude that the ANES is a critical investment for the scientific community and a main driver of political behavior research.
https://resolver.caltech.edu/CaltechAUTHORS:20180830-091610406
Theoretical Foundations and Empirical Evaluations of Partisan Fairness in District-Based Democracies
https://resolver.caltech.edu/CaltechAUTHORS:20191030-114031247
Year: 2020
DOI: 10.1017/S000305541900056X
We clarify the theoretical foundations of partisan fairness standards for district-based democratic electoral systems, including essential assumptions and definitions not previously recognized, formalized, or in some cases even discussed. We also offer extensive empirical evidence for assumptions with observable implications. We cover partisan symmetry, the most commonly accepted fairness standard, and other perspectives. Throughout, we follow a fundamental principle of statistical inference too often ignored in this literature: defining the quantity of interest separately so its measures can be proven wrong, evaluated, and improved. This enables us to prove which of the many newly proposed fairness measures are statistically appropriate and which are biased, limited, or not measures of the theoretical quantity they seek to estimate at all. Because real-world redistricting and gerrymandering involve complicated politics with numerous participants and conflicting goals, measures biased for partisan fairness sometimes still provide useful descriptions of other aspects of electoral systems.
https://resolver.caltech.edu/CaltechAUTHORS:20191030-114031247
Rejoinder: Concluding Remarks on Scholarly Communications
https://authors.library.caltech.edu/records/b22ya-z3106
Year: 2023
DOI: 10.1017/pan.2021.48
<p>We are grateful to DeFord et al. for the continued attention to our work and the crucial issues of fair representation in democratic electoral systems. Our response (Katz, King, and Rosenblatt Forthcoming) was designed to help readers avoid being misled by mistaken claims in DeFord et al. (Forthcoming-a), and does not address other literature or uses of our prior work. As it happens, none of our corrections were addressed (or contradicted) in the most recent submission (DeFord et al. Forthcoming-b).</p>
https://authors.library.caltech.edu/records/b22ya-z3106
The Essential Role of Statistical Inference in Evaluating Electoral Systems: A Response to DeFord et al.
https://authors.library.caltech.edu/records/gv8fr-72a96
Year: 2023
DOI: 10.1017/pan.2021.46
<p>Katz, King, and Rosenblatt (2020, American Political Science Review 114, 164–178) introduces a theoretical framework for understanding redistricting and electoral systems, built on basic statistical and social science principles of inference. DeFord et al. (2021, Political Analysis, this issue) instead focuses solely on descriptive measures, which lead to the problems identified in our article. In this article, we illustrate the essential role of these basic principles and then offer statistical, mathematical, and substantive corrections required to apply DeFord et al.'s calculations to social science questions of interest, while also showing how to easily resolve all claimed paradoxes and problems. We are grateful to the authors for their interest in our work and for this opportunity to clarify these principles and our theoretical framework.</p>
https://authors.library.caltech.edu/records/gv8fr-72a96