Advisor Feed
https://feeds.library.caltech.edu/people/Echenique-F/advisor.rss
A Caltech Library Repository Feed
RSS specification: http://www.rssboard.org/rss-specification
Generator: python-feedgen | Language: en
Last build date: Sat, 13 Apr 2024 01:10:12 +0000

Selection, Learning, and Nomination: Essays on Supermodular Games, Design, and Political Theory
https://resolver.caltech.edu/CaltechETD:etd-05282008-122413
Authors: Laurent Alexandre Mathevet <lmath@nyu.edu>
Year: 2008
DOI: 10.7907/BQAQ-KV91
Games with strategic complementarities (GSC) possess nice properties in terms of learning and the structure of their equilibria. Two major concerns in the theory of GSC and mechanism design are addressed. First, complementarities often result in multiple equilibria, so a theory of equilibrium selection is required for GSC to have predictive power. Chapter 2 deals with global games, a selection paradigm for GSC. I provide a new proof of equilibrium uniqueness in a wide class of global games. I show that the joint best-response in these games is a contraction. The uniqueness result then follows as a corollary of the contraction principle. Furthermore, the contraction-mapping approach provides an intuition for why uniqueness arises: complementarities generate multiple equilibria, but the global-games structure dampens complementarities until one equilibrium survives. Second, there is a concern in mechanism design about the assumption of equilibrium play. Chapter 3 examines the problem of designing mechanisms that induce supermodular games, thereby guiding agents to play desired equilibrium strategies via learning. In quasilinear environments, I prove that if a social choice function (SCF) can be implemented by a mechanism that generates bounded substitutes - as opposed to strategic complementarities - then this mechanism can be converted into a supermodular mechanism that implements the SCF. If the SCF also satisfies an efficiency criterion, then it admits a supermodular mechanism that balances the budget. I then provide general sufficient conditions for an SCF to be implementable with a supermodular mechanism whose equilibria are contained in the smallest interval among all supermodular mechanisms. I also give conditions for the equilibrium to be unique. Finally, a supermodular revelation principle is provided for general preferences. The final chapter is an independent chapter on political economics.
It provides three different processes by which two political parties nominate candidates for a general election: nomination by party leaders, by a vote of party members, and by a spending competition. It is shown that more extreme outcomes can emerge from spending competition and that non-median outcomes can result under any process. Under endogenous party membership, median outcomes ensue when nominations are decided by a vote, but not with spending competition.
https://thesis.library.caltech.edu/id/eprint/2210

Organizational and Financial Economics
https://resolver.caltech.edu/CaltechETD:etd-05292009-150803
Authors: Noah Myung
Year: 2009
DOI: 10.7907/B75A-MW79
<p>We investigate behaviors in organizational and financial economics by utilizing and developing the latest techniques from game theory, experimental economics, computational testbeds, and decision-making under risk and uncertainty.</p>
<p>In the first chapter, we use game theory and experimental economics to analyze the relationship between corporate culture and the persistent performance differences among seemingly similar enterprises. First, we show that competition leads to higher minimum effort levels in the minimum effort coordination game. Furthermore, we show that organizations with better coordination also achieve higher rates of cooperation in the prisoner's dilemma game. This supports the theory that the high-efficiency culture developed in coordination games acts as a focal point for the outcome of the subsequent prisoner's dilemma game. In turn, we argue that these endogenous features of culture, developed from coordination and cooperation, can help explain the persistent performance differences.</p>
<p>In the second chapter, using a computational testbed, we theoretically predict and experimentally show that in the minimum effort coordination game, as the cost of effort increases: (1) the game converges to lower effort levels, (2) the convergence speed increases, and (3) the average payoff is not monotonically decreasing. In fact, the average profit is a U-shaped function of cost. Therefore, contrary to intuition, one can obtain a higher average profit by increasing the cost of effort.</p>
<p>In the last chapter, we investigate a well-known paradox in finance. The equity market home bias occurs when investors over-invest in their home country's assets. It is a paradox because investors are not hedging their risk optimally. Even with unrealistic levels of risk aversion, the equity market home bias cannot be explained by the standard mean-variance model. We propose ambiguity aversion as the behavioral explanation. We design six experiments using real-world assets and derivatives to show the relationship between ambiguity aversion and home bias. We test for ambiguity aversion by showing that the investor's subjective probability is sub-additive. The results provide support for the assertion that ambiguity aversion is related to the equity market home bias paradox.</p>
https://thesis.library.caltech.edu/id/eprint/2278

Three Essays on Microeconomic Theory
https://resolver.caltech.edu/CaltechTHESIS:05192012-101726941
Authors: SangMok Lee
Year: 2012
DOI: 10.7907/F5DX-5375
<p>This thesis considers three issues in microeconomic theory - two-sided matching, strategic voting, and revealed preferences.</p>
<p>In the first chapter I discuss the strategic manipulation of stable matching mechanisms commonly used in two-sided matching markets. Stable matching mechanisms are very successful in practice, despite theoretical concerns that they are manipulable by participants. The key finding is that most agents in large markets are close to being indifferent among partners in all stable matchings. It is known that the utility gain by manipulating a stable matching mechanism is bounded by the difference between utilities from the best and the worst stable matching partners. Thus, the main finding implies that the proportion of agents who may obtain a significant utility gain from manipulation vanishes in large markets. This result reconciles the success of stable mechanisms in practice with the theoretical concerns about strategic manipulation. Methodologically, I introduce techniques from the theory of random bipartite graphs for the analysis of large matching markets.</p>
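The stable matching mechanisms discussed above are typically variants of deferred acceptance. As a minimal sketch (with hypothetical preference lists, not the large-market model of the chapter), the classic proposer-side Gale-Shapley procedure looks like this:

```python
# Minimal man-proposing deferred acceptance (Gale-Shapley).
# Preference lists here are hypothetical; dicts map each agent to a list of
# agents on the other side, most preferred first.
def deferred_acceptance(proposer_prefs, receiver_prefs):
    # Precompute each receiver's ranking of proposers (lower = better).
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)            # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                           # receiver -> current proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]  # best receiver not yet tried
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])        # r trades up, old partner is free
            engaged[r] = p
        else:
            free.append(p)                 # r rejects p
    return {p: r for r, p in engaged.items()}

men = {'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}
women = {'w1': ['m2', 'm1'], 'w2': ['m1', 'm2']}
matching = deferred_acceptance(men, women)  # m1 matches w2, m2 matches w1
```

In this toy instance, m1 would gain from being matched to w1, but w1 prefers m2, so no blocking pair exists: the outcome is stable.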
<p>In the second chapter I study the criminal court process, focusing on plea bargaining. Plea bargains screen the types of defendants, guilty or innocent, who go to jury trial, which affects the jurors' voting decisions and, in turn, the performance of the entire criminal court. The equilibrium jurors' voting behavior under plea bargaining resembles the equilibrium behavior in the classical jury model without plea bargaining. By optimizing a plea bargain offer, however, a prosecutor may induce jurors to act as if they share the prosecutor's preferences against convicting innocent defendants and acquitting guilty defendants. With reference to Feddersen and Pesendorfer (1998), I study different voting rules in the trial stage and their consequences for the entire court process. Compared to general super-majority rules, I find that a court using the unanimity rule delivers more expected punishment to innocent defendants and less punishment to guilty defendants.</p>
<p>In the third chapter I study collective choices from the revealed preference theory viewpoint. For every product set of individual actions, joint choices are called Nash-rationalizable if there exists a preference relation for each player such that the selected joint actions are Nash equilibria of the corresponding game. I characterize Nash-rationalizable joint choice behavior by zero-sum games, or games of conflicting interests. If the joint choice behavior forms a product subset, the behavior is called interchangeable. I prove that interchangeability is the only additional empirical condition which distinguishes zero-sum games from general noncooperative games.</p>
https://thesis.library.caltech.edu/id/eprint/7054

Essays on Economics Networks
https://resolver.caltech.edu/CaltechTHESIS:05312013-135942135
Authors: Cristian Emerson Melo Sanchez <esquinadelinfinito@gmail.com>
Year: 2013
DOI: 10.7907/C9B1-3M92
<p>This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problem of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.</p>
<p>In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that the eigenvector centrality emerges as the limiting case of our market equilibrium.</p>
<p>We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.</p>
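The eigenvector centrality described above can be computed by power iteration on the adjacency matrix; a minimal sketch on a small hypothetical four-player network (not one of the networks studied in the thesis):

```python
import math

# Adjacency matrix of a hypothetical 4-player network: player 0 is linked to
# everyone, player 3 only to player 0.
adj = [[0, 1, 1, 1],
       [1, 0, 1, 0],
       [1, 1, 0, 0],
       [1, 0, 0, 0]]

def eigenvector_centrality(adj, iters=1000):
    # Power iteration: repeatedly apply the adjacency matrix and renormalize;
    # the iterates converge to the eigenvector of the largest eigenvalue.
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(v * v for v in y))
        x = [v / norm for v in y]
    return x

c = eigenvector_centrality(adj)
# Player 0, linked to everyone, comes out as the most central.
```

Because the limit depends on the whole spectrum of the adjacency matrix, this measure reflects the "global" structure of the network rather than local patterns, which is the point made above.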
<p>As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.</p>
<p>Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.</p>
https://thesis.library.caltech.edu/id/eprint/7794

Essays in Social and Economic Networks
https://resolver.caltech.edu/CaltechTHESIS:05292015-045627301
Authors: Khai Xiang Chiong
Year: 2015
DOI: 10.7907/Z9639MPX
<p>This thesis consists of three chapters, and they concern the formation of social and
economic networks. In particular, this thesis investigates the solution concepts of
Nash equilibrium and pairwise stability in models of strategic network formation.
While the first chapter studies the robustness property of Nash equilibrium in network
formation games, the second and third chapters investigate the testable implication
of pairwise stability in networks.</p>
<p>The first chapter of my thesis is titled "The Robustness of Network Formation Games". In this chapter, I propose a notion of equilibrium robustness and analyze the robustness of Nash equilibria in a class of well-studied network formation games that suffer from multiplicity of equilibria. Under this notion of robustness, efficiency is also achieved. A Nash equilibrium is k-robust if k is the smallest number of links whose addition perturbs the Nash equilibrium network. This chapter shows that acyclic networks are particularly fragile: with the exception of the periphery-sponsored star, all Nash equilibrium networks without cycles are 1-robust, or minimally robust. The main result of this paper then proves that among all Nash equilibria, cyclic or acyclic, the periphery-sponsored star is the most robust. Moreover, the periphery-sponsored star is by far the most robust in the sense that, asymptotically in large networks, it must be at least twice as robust as any other Nash equilibrium.</p>
<p>The second chapter of my thesis is titled "On the Consistency of Network Data with Pairwise Stability: Theory". In this chapter, I characterize the consistency of social network data with pairwise stability, a solution concept under which, in a pairwise stable network, no agent prefers to deviate by forming or dissolving links. I take preferences as unobserved and nonparametric, and seek to characterize the networks that are consistent with pairwise stability. Specifically, given data on a single network, I provide a necessary and sufficient condition for the existence of some preferences that would induce this observed network as pairwise stable. When such preferences exist, I say that the observed network is rationalizable as pairwise stable. Without any restriction on preferences, any network can be rationalized as pairwise stable. Under the assumption that agents who are observed to be similar in the network have similar preferences, I show that an observed network is rationalizable as pairwise stable if and only if it satisfies the Weak Axiom of Revealed Pairwise Stability (WARPS). This result is generalized to include any arbitrary notion of similarity.</p>
<p>The third chapter of my thesis is titled "On the Consistency of Network Data with
Pairwise Stability: Application". In this chapter, I investigate the extent to which
real-world networks are consistent with WARPS. In particular, using the network
data collected by Banerjee et al. (2013), I explore how consistency with WARPS
is empirically associated with economic outcomes and social characteristics. The main empirical finding is that targeting centrally positioned nodes to increase the spread of information is more effective when the underlying networks are more consistent with WARPS.</p>
https://thesis.library.caltech.edu/id/eprint/8912

Essays in Behavioral Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:05292015-095332979
Authors: Matthew Luke Kovach <matthewkovach1@gmail.com>
Year: 2015
DOI: 10.7907/Z9D21VJH
<p>This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.</p>
<p>In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.</p>
<p>The second chapter characterizes a decision maker with sticky beliefs: a decision maker who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.</p>
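The sticky-beliefs representation above is easy to illustrate numerically; a minimal sketch with a hypothetical two-state example (the states, signal, and probabilities are made up for illustration):

```python
# Sticky updating as described: the updated belief is a convex combination of
# the prior and the Bayesian posterior, with weight `stickiness` on the prior.
# stickiness = 0 recovers a Bayesian; stickiness = 1 means never updating.
def bayes_posterior(prior, likelihood):
    # prior: dict state -> prior probability; likelihood: state -> P(signal|state)
    joint = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

def sticky_posterior(prior, likelihood, stickiness):
    bayes = bayes_posterior(prior, likelihood)
    return {s: stickiness * prior[s] + (1 - stickiness) * bayes[s]
            for s in prior}

prior = {'good': 0.5, 'bad': 0.5}
lik = {'good': 0.8, 'bad': 0.2}            # a signal favoring 'good'
p_bayes = sticky_posterior(prior, lik, 0.0)   # good ≈ 0.8 (fully Bayesian)
p_sticky = sticky_posterior(prior, lik, 0.5)  # good ≈ 0.65 (halfway to prior)
```

The single `stickiness` parameter plays the role of the belief-stickiness measure described above: larger values keep the posterior closer to the prior.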
<p>The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.</p>
https://thesis.library.caltech.edu/id/eprint/8922

Essays in Behavioral Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:05272016-142109004
Authors: Gerelt Tserenjigmid (ORCID: 0000-0003-1412-9692)
Year: 2016
DOI: 10.7907/Z93R0QT5
<p>Many different behavioral phenomena that cannot be rationalized by standard models in economics have been well-documented both in the real world and in lab experiments. Motivated by these behavioral phenomena, the purpose of this dissertation is three-fold. First, I develop axiomatic models of individual decision-making to explain these well-documented phenomena. Second, I derive the implications and predictions of these axiomatic models for intertemporal choice, asset pricing, and other economic contexts. Third, I provide connections between these seemingly separate behavioral phenomena and widely-used properties of preferences in economics and psychology. This dissertation consists of five chapters. The first chapter studies dynamic choice under uncertainty. The second and third chapters study choice over multi-attribute alternatives. The fourth and fifth chapters study stochastic choice.</p>
<p>The first chapter studies history-dependent risk aversion and focuses on a behavioral phenomenon called the reinforcement effect (RE), which states that people become less risk-averse after a good history than after a bad history. The RE is well-documented in consumer choices, financial markets, and lab experiments. I show that this seemingly anomalous behavior occurs whenever risk preferences are history-dependent (in a nontrivial way) and satisfy monotonicity with respect to first-order stochastic dominance. To study history-dependent risk aversion and the RE formally, I develop a behaviorally-founded model of dynamic choice under risk that generalizes standard discounted expected utility. To illustrate the usefulness of my model, I apply it to the Lucas tree model of asset pricing and draw implications of the RE for asset price dynamics. I find that, compared to history-independent models, assets are overpriced when the economy is in a good state and are underpriced in a bad state. Moreover, my model generates high, volatile, and predictable asset returns, and low and smooth bond returns, consistent with empirical evidence.</p>
<p>In the second chapter, I develop an axiomatic model of reference-dependent preferences in which reference points are endogenous. In particular, I focus on choices from menus of two-attribute alternatives, and the reference point for a given menu is a vector that consists of the minimums of each dimension of the menu. I characterize this model by two weakenings of the Weak Axiom of Revealed Preference (WARP) in addition to standard axioms. My model is not just consistent with the attraction effect and the compromise effect, well-known preference reversals, but it also provides a connection between these two effects and diminishing sensitivity, a widely used behavioral property in economics. The model also provides bounds on preference reversals. I apply the model to two different contexts, intertemporal choice and risky choice, in which diminishing sensitivity has interesting implications. In intertemporal choice, the main implication of the model is that borrowing constraints produce a psychological pressure to move away from the constraints even if they are not binding. In risky choice, the model allows conflicting risk behaviors.</p>
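The endogenous reference point described above is simple to compute; a minimal sketch with hypothetical attribute values:

```python
# Endogenous reference point as described: for a menu of two-attribute
# alternatives, the reference point is the vector of dimension-wise minimums.
def reference_point(menu):
    # menu: list of (attribute_1, attribute_2) tuples (values hypothetical)
    return (min(a for a, _ in menu), min(b for _, b in menu))

menu = [(10, 2), (6, 5), (4, 9)]
ref = reference_point(menu)  # (4, 2): worst attribute 1 and worst attribute 2
```

Note that the reference point need not be an alternative in the menu, which is why adding a dominated alternative can shift it and generate the attraction and compromise effects mentioned above.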
<p>In the third chapter, I study choice over multidimensional alternatives. Making a choice between multidimensional alternatives is a difficult task. Therefore, a decision maker may adopt some procedure (heuristic) to simplify this task. I provide an axiomatic model of one such heuristic called the Intra-Dimensional Comparison (IDC) heuristic. The IDC heuristic is well-documented in the experimental literature on choice under risk. The IDC heuristic is a procedure in which a decision maker compares multidimensional alternatives dimension-by-dimension and makes a decision based on those comparisons. The model of the IDC heuristic provides a general framework applicable to many different contexts, including risky choice and social choice.</p>
<p>The fourth chapter is joint work with Federico Echenique and Kota Saito. We develop an axiomatic theory of random choice that builds on Luce's (1959) model to incorporate a role for perception. We capture the role of perception through perception priorities: priorities that determine whether an object or alternative is perceived sooner or later than other alternatives. We identify agents' perception priorities from their violations of Luce's axiom of independence from irrelevant alternatives (IIA). The direction of the violation of IIA implies an orientation of agents' priority rankings. We adjust choice probabilities to account for the effects of perception, and impose that adjusted choice probabilities satisfy IIA, so that all violations of IIA are accounted for by the perception order. The theory can explain some very well-documented behavioral phenomena in individual choice. We can also explain the effects of forced choice and choice overload in experiments.</p>
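For reference, the baseline Luce (1959) model that the chapter builds on assigns each alternative a positive weight, with choice probabilities proportional to weights within any menu; IIA then holds by construction. A minimal sketch with hypothetical weights (this is the unadjusted benchmark, not the perception-adjusted model of the chapter):

```python
# Luce (1959) random choice: P(x | menu) is proportional to x's weight.
def luce_prob(weights, menu, x):
    return weights[x] / sum(weights[y] for y in menu)

w = {'a': 2.0, 'b': 1.0, 'c': 1.0}  # hypothetical Luce weights

# IIA: the odds ratio of a vs b is the same in every menu containing both,
# regardless of which irrelevant alternatives are present.
odds_small = luce_prob(w, ['a', 'b'], 'a') / luce_prob(w, ['a', 'b'], 'b')
odds_large = luce_prob(w, ['a', 'b', 'c'], 'a') / luce_prob(w, ['a', 'b', 'c'], 'b')
# odds_small == odds_large == 2.0
```

Observed choice data that violates this constant-odds property is exactly what the chapter uses to identify perception priorities.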
<p>The fifth chapter studies how the ordering of alternatives (e.g., the location of products in a grocery store, the order of candidates on a ballot) affects a decision maker's choices. I develop an axiomatic model of random choice that builds on Luce's (1959) model and incorporates the effect of the ordering of alternatives on choice frequencies. When the ordering of alternatives is observed, I characterize the model by two weakenings of IIA. When the ordering of alternatives is not observed, I can identify it from choice data. The model can accommodate the similarity, compromise, and attraction effects, violations of stochastic transitivity, and choice overload, which are well-known behavioral phenomena in individual choice.</p>
https://thesis.library.caltech.edu/id/eprint/9793

Three Essays on Information Economics
https://resolver.caltech.edu/CaltechTHESIS:05272016-192342051
Authors: Qiaoxi Zhang <jdzhangqx@gmail.com> (ORCID: 0000-0002-3139-7659)
Year: 2016
DOI: 10.7907/Z9TB14WS
<p>The main theme of my thesis is how uncertainty affects behavior. I explore how agents seek to resolve uncertainty in different environments. In Chapter 1, agents learn from the messages of informed experts in a signaling game. In Chapter 2, an agent learns about a fixed and uncertain physical environment through dynamic experimentation. In the last chapter, agents learn about others' preferences through the outcome of a centralized matching mechanism.</p>
<p>Motivated by the question of how opposing political candidates who are policy experts can communicate to voters in a way that helps them win the election, I study a delegation problem with two informed, self-interested agents. Agents make proposals before the decision maker decides to whom to delegate a task. The innovation is that there are multiple issues that the principal and agents care about, and the agents can be vague about any issue in their proposals. Intuition says that agents should be specific about the issues on which they are trusted and vague about other issues. I find the opposite: an agent is disadvantaged by revealing information about the issues on which he is trusted by the principal. The reason is that doing so enables his opponent to take advantage of this revealed information and undercut him. Essentially, when the principal is on an agent's side for some issue, that agent does not want to be specific, because it creates a visible target for his opponent to react to. He wants to be vague, because that allows the principal's ignorance about the optimal action to create an insurmountable obstacle for his opponent. As a result, it is to an agent's advantage to be vague about the issues on which he is trusted.</p>
<p>The second chapter investigates the implications of biased updating in dynamic experimentation, such as a firm's R&D process. People exhibit a near miss effect during gambling. For example, if the first two wheels of a slot machine indicate a potential jackpot but the last wheel indicates a loss, people are motivated to gamble more. An outcome that is close to a success but is still a failure is called a "near miss." In this chapter, I explain the near miss effect in a firm's repeated R&D process. There are two factors that sequentially affect the profitability of R&D, both of which are uncertain. The first is whether the R&D team is skilled enough to make a technical breakthrough. If a breakthrough occurs, then a second factor comes into play: whether the market demand is high enough to make the product profitable. Moreover, good news in the first stage is a prerequisite for learning about the second stage. In each of infinitely many periods, the decision maker of the firm decides whether to engage in risky R&D and observes whether the outcome is a failure (no breakthrough), a success (breakthrough and high market demand), or a near miss (breakthrough but low market demand). I assume that the decision maker of the firm learns about the skill of the team properly, but when she updates about the market demand, she updates incorrectly and overweighs her prior. In particular, her posterior about the market demand is a convex combination of her prior and the Bayesian posterior. This bias affects the relative updating of the two factors, which gives rise to the near miss effect: after a near miss is observed, the decision maker values doing R&D more than before, although she has received no payoff.</p>
<p>I show that if the decision maker is sufficiently biased and overweighs her prior enough, then she exhibits the near miss effect. I also compare the near miss effect for decision makers with different degrees of bias. As it turns out, the more biased a decision maker is, the more severe the near miss effect she exhibits. However, given the decision maker's beliefs about the two factors, the more biased she is, the less she values R&D. Consequently, the value of R&D is highest for a Bayesian.</p>
<p>In the last chapter, I study how well a centralized matching mechanism works when agents do not know others' preferences. I consider a standard two-sided marriage matching problem, except that agents only know their own preferences. Roth (1989) proved by an example the non-existence of a mechanism with at least one stable equilibrium. In his proof, an agent is allowed to report a preference that is realized with ex ante zero probability, which violates the setup of a Bayesian game. By restricting agents to report only preferences with positive realization probabilities, I show that Roth's result still holds. More interestingly, as long as agents are allowed to form blocking pairs after a matching outcome is announced, the final outcome is always stable with respect to the true preferences. This means that even when the mechanism fails to produce a stable outcome, it can still release enough information for agents to initiate a blocking pair, which induces a stable outcome.</p>
https://thesis.library.caltech.edu/id/eprint/9806

Essays on Information Collection
https://resolver.caltech.edu/CaltechTHESIS:05312017-141442186
Authors: Tatiana S. Mayskaya <tmayskaya@gmail.com> (ORCID: 0000-0003-1445-4612)
Year: 2017
DOI: 10.7907/Z9DV1GWC
<p>This thesis is devoted to the problem of information collection from theoretical and experimental perspectives.</p>
<p>In Chapter 2, I characterize the unique optimal learning strategy when there are two information sources, three possible states of the world, and learning is modeled as a search process. The optimal strategy consists of two phases. During the first phase, only beliefs about the state and the quality of information sources matter for the optimal choice between these sources. During the second phase, this choice also depends on how much the agent values different types of information. The information sources are substitutes when each individual source is likely to reveal the state eventually, and they are complements otherwise.</p>
<p>In Chapter 3, co-authored with Li Song, we conducted an experiment demonstrating that even in a simple four-person circle network, people appear to fail to account for possible repetition of the information they receive. Moreover, we show that this phenomenon can be partially attributed to rational considerations that take into account other people’s deviations from optimal behavior.</p>
<p>In Chapter 4, co-authored with Marcelo A. Fernández, we model overconfidence as if a decision maker perceives information as being more precise than it actually is. We show that the effect of overconfidence on the quality of the final decision is shaped by three forces: overestimating the precision of future information, overestimating the precision of past information, and overestimating the amount of information to be collected in the future. The first force pushes an overconfident decision maker to collect more information, while the second and third forces work in the other direction.</p>https://thesis.library.caltech.edu/id/eprint/10231Essays on Matching Theory
https://resolver.caltech.edu/CaltechTHESIS:05192017-101424341
Authors: {'items': [{'email': 'zhangjun404@gmail.com', 'id': 'Zhang-Jun', 'name': {'family': 'Zhang', 'given': 'Jun'}, 'orcid': '0000-0003-4154-3741', 'show_email': 'YES'}]}
Year: 2017
DOI: 10.7907/Z9S75DCJ
<p>Matching theory is a rapidly growing field in economics that often deals with markets in which monetary transfers are forbidden. Hence, policy makers often use centralized procedures to organize such markets and coordinate players' behavior. Three concerns play central roles in designing these procedures: efficiency, fairness, and incentive compatibility. These concerns are also the focus of my studies. Specifically, my dissertation consists of three original studies on the allocation of indivisible resources to agents. The first chapter studies school choice, a centralized market for assigning students to public schools. I compare popular matching mechanisms used in school choice, accommodating the fact that students and their parents often have heterogeneous sophistication in understanding the mechanisms. In the second chapter I study an abstract object allocation problem in which objects do not have priority rankings over agents. I show that the three objectives of efficiency, fairness, and incentive compatibility can be incompatible with each other: a mechanism that satisfies a minimal efficiency requirement and mild fairness requirements must be manipulable by some group of agents in a strong sense. Since the efficiency requirement is weak enough that policy makers are likely to pursue it, my results suggest that policy makers must choose between fairness and group incentive compatibility. In the third chapter I study the same object allocation problem, except that some agents have private endowments. I propose a new mechanism with desirable properties in terms of efficiency, fairness, and incentive compatibility. In the following I provide more details of each chapter.</p>
<p>School choice is a trend in K-12 public education in the US and many other countries that allows children to choose schools across neighborhoods. In Chapter 1, "Level-k Reasoning in School Choice", I compare two matching algorithms that many cities use to assign children to public schools: the Boston Mechanism (BM) and Deferred Acceptance (DA). BM is manipulable, while DA is strategy-proof. Recently, several cities have decided to switch from BM to DA to avoid manipulation; however, the effect of this switch is not well understood. In this paper I use the level-k model to study the strategies used by parents in BM, taking into account the fact that parents often differ in their ability to manipulate BM because of their heterogeneous sophistication. Interestingly, I find that the level-k reasoning process in BM is analogous to the procedure of DA. This analogy provides a new way to understand how parents may behave in BM. Under a mild assumption, it implies that for any school choice problem and any sophistication distribution of parents, the assignment found by BM is never less efficient than the assignment found by DA. I also examine how parents' beliefs about others' sophistication affect their welfare. I find that, in general, a child is guaranteed to benefit from his parent's sophistication in BM only when the parent's level is high relative to others and the parent's belief about others' sophistication levels is accurate. Simulations of my model exhibit patterns similar to empirical datasets.</p>
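The procedure of DA referenced above is the Gale-Shapley deferred acceptance algorithm. A minimal one-to-one sketch, assuming complete strict preference lists and equally many proposers and receivers (names and data shapes are illustrative):

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """One-to-one Gale-Shapley deferred acceptance: proposers work down
    their preference lists; each receiver tentatively holds the best
    offer received so far, rejecting the rest."""
    free = list(proposer_prefs)                 # proposers without a tentative match
    next_idx = {p: 0 for p in proposer_prefs}   # next receiver each will propose to
    held = {}                                   # receiver -> tentatively held proposer
    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in held:
            held[r] = p                         # first offer is held
        else:
            incumbent = held[r]
            rank = receiver_prefs[r].index
            if rank(p) < rank(incumbent):       # receiver upgrades to the new proposer
                held[r] = p
                free.append(incumbent)
            else:
                free.append(p)                  # rejected; will try the next choice
    return {p: r for r, p in held.items()}
```

With students proposing, the returned matching is stable and truthful reporting is optimal for the proposing side, which is the sense in which DA avoids the manipulability of BM.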
<p>Without monetary transfers, the concern of fairness motivates policy makers to use random assignments in object allocation problems. In Chapter 2, "Efficient and Fair Assignment Mechanism is Strongly Group Manipulable", I study group incentive compatibility in random assignment mechanisms. I show that if a mechanism satisfies the minimal efficiency requirement (ex-post efficiency), then it cannot simultaneously satisfy some mild fairness requirements and be minimally group incentive compatible: by misreporting preferences, a group of agents can obtain lotteries that strictly first-order stochastically dominate the lotteries they obtain under truth-telling. Hence, fairness concerns may force policy makers to give up group incentive compatibility. My results hold as long as there are at least three agents and at least three objects, whether or not an outside option is available. Possibility results exist when there are only two objects and no outside option is available.</p>
<p>In some object allocation problems, some players have private endowments and are willing to bring them to the market in exchange for better objects. In Chapter 3, "A New Solution to the Random Assignment Problem with Private Endowment", I propose a new mechanism to solve such problems. Intuitively, in my mechanism the popularity of a private endowment plays the role of a "price" in determining its owner's advantage in the market. Formally, the mechanism is a simultaneous eating algorithm, generalizing Probabilistic Serial, that lets agents obtain additional eating speed if their private endowments are consumed by others, and lets multiple agents trade their private endowments if they form cycles. This feature can be summarized by the idea of "you request my house - I get your speed". Indifferent preferences often cause difficulty for efficient random assignment mechanisms. Interestingly, I show that the same idea can also be used to handle indifferent preferences in a straightforward way, in contrast to the mainstream method in the literature of iteratively solving maximum network flow problems.</p>https://thesis.library.caltech.edu/id/eprint/10185Essays in Market Design
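Probabilistic Serial, which the chapter's mechanism generalizes, can be sketched as a simultaneous eating algorithm with equal speeds. This is a minimal illustration under assumed simplifications (strict preferences, unit supplies, equally many agents and objects), not the chapter's mechanism with endowments:

```python
def probabilistic_serial(prefs):
    """Simultaneous eating with equal speeds (Probabilistic Serial):
    every agent 'eats' their favorite object with remaining supply at
    unit speed; the share of an object an agent has eaten by time 1 is
    the probability of receiving it."""
    supply = {o: 1.0 for lst in prefs.values() for o in lst}
    shares = {a: {} for a in prefs}
    t = 0.0
    while t < 1.0 - 1e-12:
        # each agent targets their best object that is not yet exhausted
        target = {a: next(o for o in prefs[a] if supply[o] > 1e-12) for a in prefs}
        eaters = {}
        for a, o in target.items():
            eaters.setdefault(o, []).append(a)
        # advance time until some targeted object is exhausted or t reaches 1
        dt = min(1.0 - t, min(supply[o] / len(ags) for o, ags in eaters.items()))
        for o, ags in eaters.items():
            supply[o] -= dt * len(ags)
            for a in ags:
                shares[a][o] = shares[a].get(o, 0.0) + dt
        t += dt
    return shares
```

For example, two agents with identical preferences split their common favorite equally, while agents with opposite preferences each receive their favorite for sure.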
https://resolver.caltech.edu/CaltechTHESIS:05312018-141046982
Authors: {'items': [{'email': 'marzelofernandez@gmail.com', 'id': 'Fernandez-Marcelo-Ariel', 'name': {'family': 'Fernandez', 'given': 'Marcelo Ariel'}, 'orcid': '0000-0002-5475-0304', 'show_email': 'NO'}]}
Year: 2018
DOI: 10.7907/PXYF-WS15
<p>This thesis investigates the impact of incomplete information and behavioral biases in the context of market design.</p>
<p>In chapter 2, I analyze centralized matching markets and rationalize why the arguably most heavily used mechanism in applications, the deferred acceptance mechanism, has been so successful in practice, despite the fact that it provides participants with opportunities to “game the system.” Accounting for the lack of information that participants typically have in these markets, I introduce a new notion of behavior under uncertainty that captures participants’ aversion to experiencing regret. I show that participants optimally choose not to manipulate the deferred acceptance mechanism in order to avoid regret. Moreover, the deferred acceptance mechanism is the unique mechanism within an interesting class (the quantile stable mechanisms) that induces honesty from participants in this way.</p>
<p>In chapter 3, co-authored with Leeat Yariv, we study the impacts of incomplete information on centralized one-to-one matching markets. We focus on the commonly used deferred acceptance mechanism (Gale and Shapley, 1962). We characterize settings in which many of the results known under complete information are overturned. In particular, small (complete-information) cores may still be associated with multiple outcomes and incentives to misreport; selection of equilibria can affect the set of individuals who are unmatched, so there is no analogue of the Rural Hospital Theorem; and agents might prefer to be on the receiving side of the algorithm underlying the mechanism. Nonetheless, when either side of the market has assortative preferences, incomplete information does not hinder stability, and results from the complete-information setting carry through.</p>
<p>In chapter 4, co-authored with Tatiana Mayskaya, we present a dynamic model that illustrates three forces shaping the effect of overconfidence (overprecision of consumed information) on the amount of information collected. The first force comes from overestimating the precision of the next consumed piece of information. The second force is related to overestimating the precision of already collected information. The third force reflects the discrepancy between how much information the agent expects to collect and how much information he actually collects in expectation. The first force pushes an overconfident agent to collect more information, while the second and third forces work in the other direction. We show that under some symmetry conditions, the second and third forces unequivocally dominate the first, leading to underinvestment in information.</p>https://thesis.library.caltech.edu/id/eprint/10983Essays On Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:06072019-212943893
Authors: {'items': [{'email': 'hhamzeyi@gmail.com', 'id': 'Hamze-Bajgiran-Hamed', 'name': {'family': 'Hamze Bajgiran', 'given': 'Hamed'}, 'orcid': '0000-0002-6246-2783', 'show_email': 'NO'}]}
Year: 2019
DOI: 10.7907/MVE7-HP81
<p>This thesis introduces some general frameworks for studying problems in decision theory. The purpose of this dissertation is twofold. First, I develop general mathematical frameworks and tools to explore different decision-theoretic phenomena. Second, I apply these frameworks and tools to different topics in Microeconomics and Decision Theory.</p>
<p>Chapter 1 introduces the notion of a classifier to represent the different classes of data revealed through observations. I present a general model of classification and a notion of complexity, and show how a complicated classification procedure can be generated from simpler classification procedures.</p>
<p>My goal is to show how an individual's complex behavior can be derived from some simple underlying heuristics. In this chapter, I model a classifier (as a general model of decision making) that, based on observing some data points, classifies them into different categories drawn from a set of labels. The only assumption of my model is that whenever a data point is in two categories, there is an additional category representing the intersection of the two. First, I derive a duality result similar to the duality in convex geometry. Then, using this result, I find all representations of a complex classifier as an aggregation of simpler classifiers. For example, I show how a complex classifier can be represented by simpler classifiers with only two categories (similar to a single linear classifier in a neural network). Finally, I show an application in the context of dynamic choice behavior. Notably, I use my model to reinterpret the seminal works of Kreps (1979) and Dekel, Lipman, and Rustichini (2001) on representing preference orderings over menus with a subjective state space. I also show the connection between the notion of the minimal subjective state space in economics and my proposed notion of the complexity of a classifier.</p>
<p>In Chapter 2, I provide a general characterization of recursive methods of aggregation and show that recursive aggregation lies behind many seemingly different results in economic theory. Recursivity means that the aggregate outcome of a model over two disjoint groups of features is a weighted average of the outcome of each group separately.</p>
<p>This chapter makes two contributions. The first contribution is to pin down any aggregation procedure that satisfies my definition of recursivity. The result unifies aggregation procedures across many different economic environments, showing that all of them rely on the same basic result. The second contribution is to show different extensions of the result in the context of belief formation, choice theory, and welfare economics.</p>
<p>In the context of belief formation, I model an agent who predicts the true state of nature based on observing some signals in her information structure. I interpret each subset of signals as an event in her information structure. I show that, as long as the information structure has finite cardinality, my weighted averaging axiom is the necessary and sufficient condition for the agent to behave as a Bayesian updater. This result answers the question, raised by Shmaya and Yariv (2007), of finding a necessary and sufficient condition for a belief formation process to act as a Bayesian updating rule.</p>
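The weighted averaging property of Bayesian conditioning can be illustrated numerically: the belief conditional on a union of two disjoint events is a weighted average of the beliefs conditional on each event, with weights proportional to the events' probabilities. A toy sketch (data shapes are illustrative, not the chapter's formal model):

```python
def condition(joint, event):
    """Belief over states conditional on the signal lying in `event`.
    `joint` maps (state, signal) pairs to their joint probability;
    `event` is a set of signals."""
    post, total = {}, 0.0
    for (state, signal), p in joint.items():
        if signal in event:
            post[state] = post.get(state, 0.0) + p
            total += p
    return {s: v / total for s, v in post.items()}
```

For disjoint events `A` and `B`, `condition(joint, A | B)` coincides state by state with `w * condition(joint, A)[s] + (1 - w) * condition(joint, B)[s]`, where `w = P(A) / (P(A) + P(B))`.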
<p>In the context of choice theory, I consider the standard theory of discrete choice. An agent chooses randomly from a menu. The outcome of my model is the average choice (the mean of the distribution of choices) rather than the entire distribution. The average choice is easier to report and obtain than the entire distribution; however, it does not uniquely reveal the underlying distribution of choices. In this context, I show that (1) it is possible to uniquely extract the underlying distribution of choices as long as the average choice satisfies the weighted averaging axiom, and (2) there is a close connection between my weighted averaging axiom and the celebrated Luce (or Logit) model of discrete choice.</p>
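The Luce (Logit) model mentioned above assigns each alternative a choice probability proportional to a positive weight; the average choice is then the probability-weighted mean of the alternatives. A minimal one-dimensional sketch (names and the embedding of alternatives as numbers are illustrative):

```python
def luce_probabilities(weights, menu):
    """Luce (Logit) rule: each alternative in the menu is chosen with
    probability proportional to its positive weight."""
    total = sum(weights[x] for x in menu)
    return {x: weights[x] / total for x in menu}

def average_choice(points, weights, menu):
    """Mean of the Luce choice distribution, with each alternative
    embedded as a real number (one-dimensional for simplicity)."""
    return sum(points[x] * p for x, p in luce_probabilities(weights, menu).items())
```

The average choice alone reports a single point per menu; the chapter's result concerns when such averages pin down the full choice distribution behind them.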
<p>Chapter 3 is about the aggregation of the preference orderings of individuals over a set of alternatives. The role of an aggregation rule is to associate with each group of individuals another preference ordering of alternatives, representing the group's aggregated preference. I consider the class of aggregation rules satisfying the extended Pareto axiom. Extended Pareto means that whenever we partition a group of individuals into two subgroups, if both subgroups prefer one alternative over another (as indicated by their aggregated preferences), then the aggregated preference ordering of the union of the subgroups also prefers the first alternative over the second one.</p>
<p>I show that extended Pareto is equivalent to my weighted averaging axiom, and I derive a generalization of Harsanyi's (1955) famous theorem on Utilitarianism. Harsanyi considers a single profile of individuals and a variant of Pareto to obtain Utilitarianism. In my approach, by contrast, I partition a profile into smaller groups, aggregate the preference orderings of these smaller groups using extended Pareto, and thereby obtain Utilitarianism through this consistent form of aggregation. As a result, in my representation, the weight associated with each individual appears in all sub-profiles that contain her.</p>
<p>In another application, I characterize the class of extended Pareto social welfare functions. My result is positive in nature, in contrast to the claims by Kalai and Schmeidler (1977) and Hylland (1980) that the negative conclusion of Arrow's theorem holds even with vN-M preferences.</p>
<p>Finally, in Chapter 4, I derive a simple subjective conditional expectation theory of state-dependent preferences. In many applications, such as models of health insurance purchases, the standard assumption that utility is independent of the state is not plausible. Hence, I derive a model in which the main force behind the separation of beliefs and state-dependent utility comes from the extended Pareto condition. Moreover, I show that, as long as the model satisfies my strong minimal agreement condition, beliefs can be uniquely separated from the state-dependent utility.</p>https://thesis.library.caltech.edu/id/eprint/11724Essays in Mechanism Design and Contest Theory
https://resolver.caltech.edu/CaltechTHESIS:05292023-003412487
Authors: {'items': [{'email': 'sumitgoel58@gmail.com', 'id': 'Goel-Sumit', 'name': {'family': 'Goel', 'given': 'Sumit'}, 'orcid': '0000-0003-3266-9035', 'show_email': 'YES'}]}
Year: 2023
DOI: 10.7907/97qy-1m35
<p>This dissertation contains three essays. They offer contributions to the fields of mechanism design (Chapters 1 and 2) and contest theory (Chapter 3).</p>
<p>Chapter 1, co-authored with Wade Hann-Caruthers, studies the problem of aggregating privately held preferences for a facility to be located on a plane. We show that for a large class of social cost functions, the mechanism that locates the facility at the coordinate-wise median of the agents' ideal points is quantitatively optimal (in the sense that it has the smallest worst-case approximation ratio) among all deterministic, anonymous, and incentive-compatible mechanisms. We also obtain bounds on the worst-case approximation ratio of the coordinate-wise median mechanism for an important subclass of social cost functions.</p>
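The coordinate-wise median mechanism itself is simple to state: take the median of the reported ideal points separately in each coordinate. A minimal sketch for the plane (function name and data shape are illustrative):

```python
from statistics import median

def coordinate_wise_median(ideal_points):
    """Locate the facility at the coordinate-wise median of the agents'
    reported ideal points (x, y) in the plane."""
    xs, ys = zip(*ideal_points)
    return (median(xs), median(ys))
```

Intuitively, incentive compatibility holds because misreporting one's ideal point can never pull a coordinate's median toward the misreporter.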
<p>Chapter 2, co-authored with Wade Hann-Caruthers, studies a principal-agent project selection problem with asymmetric information and demonstrates the value to the principal of imposing partial verifiability constraints, such as no-overselling, on the agent. We consider a setting where the principal has to choose one project from a set of available projects, but the relevant information, such as each project's profitability, is held by a self-interested agent who might also have its own preferences over the projects. If the agent is unconstrained in its ability to manipulate its private information, the principal can do no better than choosing a project at random. But if the agent cannot oversell any of the projects, perhaps because it must support its claims with evidence, we show that a simple cutoff mechanism (the agent's favorite project is chosen among those that meet a cutoff profit level, with a default project otherwise) is optimal for the principal. We also find evidence in support of the well-known ally principle, which says that the principal delegates more authority to an agent with more aligned preferences.</p>
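The cutoff mechanism described in parentheses above can be sketched directly; names and data shapes are illustrative:

```python
def cutoff_mechanism(reported_profits, agent_ranking, cutoff, default):
    """Choose the agent's favorite project among those whose reported
    profit meets the cutoff; fall back to the default project if none
    qualifies. agent_ranking lists projects from most to least
    preferred by the agent."""
    eligible = [p for p in agent_ranking if reported_profits[p] >= cutoff]
    return eligible[0] if eligible else default
```

Under no-overselling, a report cannot exceed the true profit, so any project whose report clears the cutoff is guaranteed to clear it in truth as well.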
<p>Chapter 3 studies the effect of increasing the value of prizes and the competitiveness of contests on the effort exerted by participants in an incomplete-information environment. We identify two natural sufficient conditions on the distribution of abilities in the population under which the two interventions have opposite effects on effort. We also discuss applications to the design of optimal contests in three different environments, including the design of grading contests. Assuming that the value of a grade is determined by the information it reveals about the agent's ability, we establish a link between the informativeness of a grading scheme and the effort it induces.</p>https://thesis.library.caltech.edu/id/eprint/15220