Book Section records
https://feeds.library.caltech.edu/people/Echenique-F/book_section.rss
A Caltech Library Repository Feed
Docs: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build date: Fri, 12 Apr 2024 23:31:27 +0000

Correspondence Principle
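The records below follow standard RSS 2.0 structure (one `<item>` per record, with `title` and `link` children). A minimal stdlib sketch for pulling titles and links out of such a feed; the inline single-item sample and the `parse_items` helper are illustrative stand-ins, not part of the live feed:

```python
import xml.etree.ElementTree as ET

# A one-item sample mirroring the structure of the feed at
# https://feeds.library.caltech.edu/people/Echenique-F/book_section.rss
# (hypothetical minimal document; the live feed carries one <item> per record).
SAMPLE = """<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0">
  <channel>
    <title>Book Section records</title>
    <description>A Caltech Library Repository Feed</description>
    <item>
      <title>Correspondence Principle</title>
      <link>https://resolver.caltech.edu/CaltechAUTHORS:20101004-111522975</link>
    </item>
  </channel>
</rss>"""

def parse_items(rss_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_items(SAMPLE):
    print(f"{title}: {link}")
```

To read the live feed, the same `parse_items` function can be applied to the response body fetched from the feed URL.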
https://resolver.caltech.edu/CaltechAUTHORS:20101004-111522975
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770)
Year: 2008
N/A
https://authors.library.caltech.edu/records/3537r-8y303

Aggregate matchings
https://resolver.caltech.edu/CaltechAUTHORS:20161018-153505782
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); SangMok Lee (id: Lee-Sangmok); Matthew Shum (id: Shum-M, ORCID: 0000-0002-6262-915X)
Year: 2010
DOI: 10.1145/1807406.1807477
This paper characterizes the testable implications of stability for aggregate matchings. We consider data on matchings where individuals are aggregated, based on their observable characteristics, into types, and we know how many agents of each type match. We derive stability conditions for an aggregate matching, and, based on these, provide a simple necessary and sufficient condition for an observed aggregate matching to be rationalizable (i.e., such that preferences can be found so that the observed aggregate matching is stable). Subsequently, we derive moment inequalities based on the stability conditions, and provide an empirical illustration using the cross-sectional marriage distributions across the US states.
https://authors.library.caltech.edu/records/v8av9-td811

A revealed preference approach to computational complexity in economics
https://resolver.caltech.edu/CaltechAUTHORS:20120521-110926358
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Daniel Golovin (id: Golovin-D); Adam Wierman (id: Wierman-A)
Year: 2011
DOI: 10.1145/1993574.1993591
Recent results in complexity theory suggest that various economic theories require agents to solve computationally intractable problems. However, such results assume the agents are optimizing explicit utility functions, whereas the economic theories merely assume the agents behave rationally, where rational behavior is defined via some optimization problem. Might making rational choices be easier than solving the corresponding optimization problem? For at least one major economic theory, the theory of the consumer (which simply postulates that consumers are utility maximizing), we find this is indeed the case. In other words, we prove the possibly surprising result that computational constraints have no empirical consequences for consumer choice theory.
Our result motivates a general approach for posing questions about the empirical content of computational constraints: the revealed preference approach to computational complexity. This approach complements the conventional worst-case view of computational complexity in important ways, and is methodologically close to mainstream economics.
https://authors.library.caltech.edu/records/9dfmp-vd612

Finding a Walrasian Equilibrium is Easy for a Fixed Number of Agents
https://resolver.caltech.edu/CaltechAUTHORS:20161010-171918212
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Adam Wierman (id: Wierman-A)
Year: 2012
DOI: 10.1145/2229012.2229049
In this work, we study the complexity of finding a Walrasian equilibrium. Our main result gives an algorithm which can compute an approximate Walrasian equilibrium in an exchange economy with general, but well-behaved, utility functions in time that is polynomial in the number of goods when the number of agents is held constant. This result has applications to macroeconomics and finance, where applications of Walrasian equilibrium theory tend to deal with many goods but a fixed number of agents.
https://authors.library.caltech.edu/records/s94hs-bb881

Partial Identification in Two-sided Matching Models
https://resolver.caltech.edu/CaltechAUTHORS:20160330-121540651
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Sangmok Lee (id: Lee-Sangmok); Matthew Shum (id: Shum-M, ORCID: 0000-0002-6262-915X)
Year: 2013
DOI: 10.1108/S0731-9053(2013)0000032004
We propose a methodology for estimating preference parameters in matching models. Our estimator applies to repeated observations of matchings among a fixed group of individuals. Our estimator is based on the stability conditions in matching models; we consider both transferable (TU) and nontransferable utility (NTU) models. In both cases, the stability conditions yield moment inequalities which can be taken to the data. The preference parameters are partially identified. We consider simple illustrative examples, and also an empirical application to aggregate marriage markets.
https://authors.library.caltech.edu/records/19qxt-me041

A test for monotone comparative statics
https://resolver.caltech.edu/CaltechAUTHORS:20171129-134154134
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Ivana Komunjer (id: Komunjer-I)
Year: 2013
DOI: 10.1108/S0731-9053(2013)0000032007
In this article we design an econometric test for monotone comparative statics (MCS) often found in models with multiple equilibria. Our test exploits the observable implications of the MCS prediction: that the extreme (high and low) conditional quantiles of the dependent variable increase monotonically with the explanatory variable. The main contribution of the article is to derive a likelihood-ratio test, which to the best of our knowledge is the first econometric test of MCS proposed in the literature. The test is an asymptotic "chi-bar squared" test for order restrictions on intermediate conditional quantiles. The key features of our approach are: (1) we do not need to estimate the underlying nonparametric model relating the dependent and explanatory variables to the latent disturbances; (2) we make few assumptions on the cardinality, location, or probabilities over equilibria. In particular, one can implement our test without assuming an equilibrium selection rule.
https://authors.library.caltech.edu/records/1pkzh-j0b37

The Empirical Implications of Rank in Bimatrix Games
https://resolver.caltech.edu/CaltechAUTHORS:20131008-155108539
Authors: Siddharth Barman (id: Barman-S); Umang Bhaskar (id: Bhaskar-U); Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Adam Wierman (id: Wierman-A)
Year: 2013
DOI: 10.1145/2492002.2482589
We study the structural complexity of bimatrix games, formalized via rank, from an empirical perspective. We consider a setting where we have data on player behavior in diverse strategic situations, but where we do not observe the relevant payoff functions. We prove that high complexity (high rank) has empirical consequences when arbitrary data is considered. Additionally, we prove that, in more restrictive classes of data (termed laminar), any observation is rationalizable using a low-rank game: specifically a zero-sum game. Hence complexity as a structural property of a game is not always testable. Finally, we prove a general result connecting the structure of the feasible data sets with the highest rank that may be needed to rationalize a set of observations.
https://authors.library.caltech.edu/records/nd3cv-16n60

On the Existence of Low-Rank Explanations for Mixed Strategy Behavior
https://resolver.caltech.edu/CaltechAUTHORS:20150615-083251191
Authors: Siddharth Barman (id: Barman-S); Umang Bhaskar (id: Bhaskar-U); Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Adam Wierman (id: Wierman-A)
Year: 2014
DOI: 10.1007/978-3-319-13129-0_38
Nash equilibrium is used as a model to explain the observed behavior of players in strategic settings. For example, in many empirical applications we observe player behavior, and the problem is to determine if there exist payoffs for the players for which the equilibrium corresponds to observed player behavior. The computational complexity of Nash equilibria is important in this framework. If the payoffs that explain observed player behavior require players to have solved a computationally hard problem, then the explanation provided is questionable. In this paper we provide conditions under which observed behavior of players can be explained by games in which Nash equilibria are easy to compute. We identify three structural conditions and show that if the data set of observed behavior satisfies any of these conditions, then it can be explained by payoff matrices for which Nash equilibria are efficiently computable.
https://authors.library.caltech.edu/records/40ykk-qy211

The empirical implications of privacy-aware choice
https://resolver.caltech.edu/CaltechAUTHORS:20161005-162239588
Authors: Rachel Cummings (id: Cummings-R); Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Adam Wierman (id: Wierman-A)
Year: 2014
DOI: 10.1145/2600057.2602830
This paper initiates the study of the testable implications of choice data in settings where agents have privacy preferences. We adapt the standard conceptualization of consumer choice theory to a situation where the consumer is aware of, and has preferences over, the information revealed by her choices. The main message of the paper is that little can be inferred about consumers' preferences once we introduce the possibility that the consumer has concerns about privacy. This holds even when consumers' privacy preferences are assumed to be monotonic and separable. This motivates the consideration of stronger assumptions and, to that end, we introduce an additive model for privacy preferences that does have testable implications.
https://authors.library.caltech.edu/records/g0vaa-01664

Learnability and Models of Decision Making under Uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20180828-142049232
Authors: Pathikrit Basu (id: Basu-P); Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770)
Year: 2018
DOI: 10.1145/3219166.3219223
We study whether some of the most important models of decision-making under uncertainty are uniformly learnable, in the sense of PAC (probably approximately correct) learnability. Many studies in economics rely on Savage's model of (subjective) expected utility. The expected utility model is known to predict behavior that runs counter to how many agents actually make decisions (the contradiction usually takes the form of agents' choices in the Ellsberg paradox). As a consequence, economists have developed models of choice under uncertainty that seek to generalize the basic expected utility model. The resulting models are more general and therefore more flexible, and more prone to overfitting. The purpose of our paper is to understand this added flexibility better. We focus on the classical expected utility (EU) model, and its two most important generalizations: Choquet expected utility (CEU) and Max-min Expected Utility (MEU).
Our setting involves an analyst whose task is to estimate or learn an agent's preference based on data available on the agent's choices. A model of preferences is PAC learnable if the analyst can construct a learning rule to precisely learn the agent's preference with enough data. When a model is not learnable, we interpret it as the model being susceptible to overfitting. PAC learnability is known to be characterized by the model's VC dimension: thus our paper takes the form of a study of the VC dimension of economic models of choice under uncertainty. We show that EU and CEU have finite VC dimension, and are consequently learnable. Moreover, the sample complexity of the former is linear, and of the latter is exponential, in the number of states of uncertainty. The MEU model is learnable when there are two states but is not learnable when there are at least three states, in which case the VC dimension is infinite. Our results also exhibit a close relationship between learnability and the underlying axioms which characterize the model.
https://authors.library.caltech.edu/records/spd80-1db49

The Edgeworth Conjecture with Small Coalitions and Approximate Equilibria in Large Economies
https://resolver.caltech.edu/CaltechAUTHORS:20190626-090938938
Authors: Siddharth Barman (id: Barman-Siddharth); Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770)
Year: 2020
DOI: 10.1145/3391403.3399481
We revisit the connection between bargaining and equilibrium in exchange economies, and study its algorithmic implications. We consider bargaining outcomes to be allocations that cannot be blocked (i.e., profitably re-traded) by coalitions of small size and show that these allocations must be approximate Walrasian equilibria. Our results imply that deciding whether an allocation is approximately Walrasian can be done in polynomial time, even in economies for which finding an equilibrium is known to be computationally hard.
https://authors.library.caltech.edu/records/55cr6-a9a31

Screening p-Hackers: Dissemination Noise as Bait
https://resolver.caltech.edu/CaltechAUTHORS:20220707-170534070
Authors: Federico Echenique (id: Echenique-F, ORCID: 0000-0002-1567-6770); Kevin He (id: He-Kevin, ORCID: 0000-0001-5806-0370)
Year: 2022
DOI: 10.1145/3490486.3538358
We show that adding noise to data before making data public is effective at screening p-hacked findings: spurious explanations of the outcome variable produced by attempting multiple econometric specifications. Noise creates "baits" that affect two types of researchers differently. Uninformed p-hackers who engage in data mining with no prior information about the true causal mechanism often fall for baits and report verifiably wrong results when evaluated with the original data. But informed researchers who start with an ex-ante hypothesis about the causal mechanism before seeing any data are minimally affected by noise. We characterize the optimal level of dissemination noise and highlight the relevant trade-offs in a simple theoretical model. Dissemination noise is a tool that statistical agencies (e.g., the US Census Bureau) currently use to protect privacy, and we show this existing practice can be repurposed to improve research credibility.
https://authors.library.caltech.edu/records/y3ejs-7mr36