CaltechTHESIS committee: Monograph
https://feeds.library.caltech.edu/people/Echenique-F/combined_committee.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
python-feedgen
en
Thu, 13 Jun 2024 19:25:38 -0700
Institutions, Incentives and Behavior: Essays in Public Economics and Mechanism Design
https://resolver.caltech.edu/CaltechETD:etd-05152005-021009
Year: 2005
DOI: 10.7907/X53T-PZ38
<p>The economic outcomes realized by a society are a function of the institutions put in place, the incentives they create, and the behavior of agents in the face of those incentives. Selecting the appropriate institutions for a given economy is particularly important in the domain of public economics, where individual incentives are often inconsistent with efficiency. Three major concerns in institutional design are addressed. First, do agents select the equilibrium strategies at which efficient allocations obtain? Second, does the repeated-game nature of a long-lived institution affect behavior? Third, what degree of coercion is necessary for a planner to guarantee that the allocation selected by a mechanism can be enforced? Answering these questions helps to identify which institutions are most appropriate in various environments. In Chapter 2, five public goods mechanisms are experimentally tested in a repeated game environment. Behavior is well approximated by a model in which agents best respond to an average of recently observed data. This model yields sufficient conditions that a mechanism must satisfy for play to converge to an efficient equilibrium. In Chapter 3, it is assumed that the designer of a one-shot mechanism must allow agents a 'no trade' option in which they are free to contribute nothing but enjoy the public good produced by others' contributions. It is shown that there is a large set of economies in which, at every allocation, some agent prefers this option. Even in economies where this is not true, it becomes true as the economy is replicated, making it impossible to implement any allocation except the endowment in large economies.</p>
<p>In the final chapter, a model of group reputations is developed to explain why moral hazard problems are significant in some laboratory experiments and less significant in others. If firms believe that either all workers are selfish or all workers are reciprocal, then selfish workers may have an incentive to develop a 'group reputation' for being reciprocal over a fixed number of periods in order to extract higher wages. As predicted, the moral hazard problem is mitigated only in those experiments in which this incentive is sufficiently large.</p>
https://resolver.caltech.edu/CaltechETD:etd-05152005-021009
Robust Bilateral Trade and an Essay on Awareness as an Equilibrium Notion
https://resolver.caltech.edu/CaltechETD:etd-06012006-164452
Year: 2006
DOI: 10.7907/002H-F862
<p>The aim of this thesis is to analyze various effects of informational constraints. In chapters 1 and 2 we consider a robust model of bilateral trade where traders have private reservation values and utility functions are common knowledge. In chapter 1 we study direct-revelation mechanisms. Under incentive and participation constraints, we define the notion of ex-post constrained efficiency, which does not depend on the distribution of types. Given ex-post incentive and participation constraints, a sufficient condition for constrained efficiency is simplicity: the outcome is a lottery between trade at one type-contingent price and no trade. For constant-relative-risk-aversion environments, we characterize simple mechanisms. Under risk neutrality they are equivalent to probability distributions over posted prices. Generically, simple mechanisms converge to full efficiency as agents' risk aversion goes to infinity. Under risk neutrality, ex-ante optimal mechanisms are deterministic, and under risk aversion, they are not.</p>
<p>In chapter 2 we address indirect implementation. We define the Mediated Bargaining Game, a continuous-time double auction with a hidden order book. It is the optimal bargaining game in the sense that its ex-post Nash equilibria in weakly undominated strategies constitute the Pareto-optimal frontier of the set of all ex-post Nash equilibria of all bargaining games. In the Mediated Bargaining Game, type-monotone Bayesian equilibria coincide with ex-post Nash equilibria. The inefficiency due to incomplete information is manifested through delay. In contrast with direct revelation mechanisms, in the Mediated Bargaining Game the mechanism designer does not need to know the agents' risk attitudes.</p>
<p>Informational constraints may also result from agents' subjective knowledge of the economic situation. In chapter 3 we study normal-form games in which each player may be aware of only a subset of the set of possible actions, and has a set of possible awareness architectures. An awareness architecture is given by agents' perceptions and an infinite regress of conjectures about others. An awareness equilibrium is a steady state in which neither actions nor awareness architectures can change. We provide conditions under which awareness equilibria exist and study a parametrization of the set of possible awareness architectures.</p>
https://resolver.caltech.edu/CaltechETD:etd-06012006-164452
Selection, Learning, and Nomination: Essays on Supermodular Games, Design, and Political Theory
https://resolver.caltech.edu/CaltechETD:etd-05282008-122413
Year: 2008
DOI: 10.7907/BQAQ-KV91
Games with strategic complementarities (GSC) possess nice properties in terms of learning and the structure of their equilibria. Two major concerns in the theory of GSC and mechanism design are addressed. First, complementarities often result in multiple equilibria, so a theory of equilibrium selection is required for GSC to have predictive power. Chapter 2 deals with global games, a selection paradigm for GSC. I provide a new proof of equilibrium uniqueness in a wide class of global games, showing that the joint best-response in these games is a contraction. The uniqueness result then follows as a corollary of the contraction principle. Furthermore, the contraction-mapping approach provides an intuition for why uniqueness arises: complementarities generate multiple equilibria, but the global-games structure dampens complementarities until only one equilibrium survives. Second, there is a concern in mechanism design about the assumption of equilibrium play. Chapter 3 examines the problem of designing mechanisms that induce supermodular games, thereby guiding agents to play desired equilibrium strategies via learning. In quasilinear environments, I prove that if a social choice function (SCF) can be implemented by a mechanism that generates bounded substitutes - as opposed to strategic complementarities - then this mechanism can be converted into a supermodular mechanism that implements the SCF. If the SCF also satisfies an efficiency criterion, then it admits a supermodular mechanism that balances the budget. I then provide general sufficient conditions for an SCF to be implementable with a supermodular mechanism whose equilibria are contained in the smallest interval among all supermodular mechanisms. I also give conditions for the equilibrium to be unique. Finally, a supermodular revelation principle is provided for general preferences. The final chapter is an independent chapter on political economics.
It provides three different processes by which two political parties nominate candidates for a general election: nominations by party leaders, by a vote of party members, and by a spending competition. It is shown that more extreme outcomes can emerge from spending competition and that non-median outcomes can result from any of the processes. Under endogenous party membership, median outcomes ensue when nominations are decided by a vote, but not with spending competition.
https://resolver.caltech.edu/CaltechETD:etd-05282008-122413
Incentives and Institutions: Essays in Mechanism Design and Game Theory with Applications
https://resolver.caltech.edu/CaltechETD:etd-04232009-163542
Year: 2009
DOI: 10.7907/EAZA-N950
<p>In the first part of this dissertation we study the problem of designing desirable mechanisms for economic environments with different types of informational and consumption externalities. We first study the mechanism design problem for the class of Bayesian environments in which the preferences of individuals depend not only on their own allocations but also on the welfare of other individuals. For these environments, we fully characterize interim efficient mechanisms and examine their properties. This set of mechanisms is compelling because interim efficient mechanisms are the best in the sense that no other mechanism generates a unanimous improvement. For public good environments, we show that these mechanisms produce the public good at a level closer to the efficient one as the degree of altruism in preferences increases. For private good environments, we show that altruistic agents trade more often than selfish agents.</p>
<p>We next consider the mechanism design problem for matching markets in which externalities are present. We present mechanisms that implement the core correspondence of many-to-one matching markets, such as college admissions problems, where students have preferences over the other students who would attend the same college. With an unrestricted domain of preferences, the non-emptiness of the core is not guaranteed. We present a sequential mechanism that implements the core without any restrictions on preferences. We also show that simple two-stage mechanisms cannot be used to implement the core correspondence in subgame perfect Nash equilibrium without strong assumptions on agents' preferences.</p>
<p>In the final part of the dissertation we focus on another matching market: one-to-one assignment games with money. We present an alternative characterization of the core as the set of fixed points of a certain mapping, and we introduce the first algorithm that finds all core outcomes in assignment games. The lattice property of the set of stable payoffs, as well as its non-emptiness, is proved using Tarski's fixed point theorem. Using our formulation, we also show that there is a polarization of interests within the core.</p>
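The fixed-point characterization lends itself to computation: on a finite lattice, iterating a monotone map from the bottom element converges to its least fixed point (Tarski/Kleene). A minimal sketch with a hypothetical toy monotone map on subsets of {1, 2, 3}, not the thesis's actual core mapping:

```python
# Hypothetical illustration of the fixed-point idea (not the thesis's actual
# core mapping): on a finite lattice, iterating a monotone map from the
# bottom element converges to its least fixed point (Tarski/Kleene).

def least_fixed_point(f, bottom):
    """Apply the monotone map f repeatedly, starting from the lattice's
    bottom element, until the iterate stabilizes at a fixed point."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Toy monotone map on subsets of {1, 2, 3} ordered by inclusion:
# f(S) adds element 1 unconditionally, and adds 2 once 1 is present.
f = lambda S: S | {1} | ({2} if 1 in S else set())
least_fixed_point(f, frozenset())  # least fixed point: {1, 2}
```

On a finite lattice the iteration must terminate, since each step weakly enlarges the set and the lattice has finite height.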
https://resolver.caltech.edu/CaltechETD:etd-04232009-163542
Limited Randomness in Games, and Computational Perspectives in Revealed Preference
https://resolver.caltech.edu/CaltechETD:etd-06042009-233839
Year: 2009
DOI: 10.7907/KH85-HJ73
<p>In this dissertation, we explore two particular themes in connection with the study of games and general economic interactions: bounded resources and rationality. The rapidly maturing field of algorithmic game theory concerns itself with looking at the computational limits and effects when agents in such an interaction make choices in their "self-interest." The solution concepts that have been studied in this regard, and which we shall focus on in this dissertation, assume that agents are capable of randomizing over their set of choices. We posit that agents are randomness-limited in addition to being computationally bounded, and determine how this affects their equilibrium strategies in different scenarios.</p>
<p>In particular, we study three interpretations of what it means for agents to be randomness-limited, and offer results on finding (approximately) optimal strategies that are randomness-efficient:<br />
1. One-shot games with access to the support of the optimal strategies: for this case, our results are obtained by sampling strategies from the optimal support by performing a random walk on an expander graph.<br />
2. Multiple-round games where agents have no a priori knowledge of their payoffs: we significantly improve the randomness-efficiency of known online algorithms for such games by utilizing distributions based on almost pairwise independent random variables.<br />
3. Low-rank games: for games in which agents' payoff matrices have low rank, we devise "fixed-parameter" algorithms that compute strategies yielding approximately optimal payoffs for agents, and are polynomial-time in the size of the input and the rank of the payoff tensors.</p>
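For context on item 2, a standard online algorithm for repeated games with unknown payoffs is the multiplicative-weights update; a minimal, fully randomized sketch is below (the payoff table and parameters are illustrative, and the sampling step is the randomness consumption that derandomized variants reduce):

```python
import random

# Illustrative sketch only: a fully randomized multiplicative-weights learner
# for a repeated game -- the kind of online algorithm whose randomness usage
# such results improve. The payoff table and parameters are hypothetical.

def multiplicative_weights(payoff_rows, eta=0.1, rounds=1000, seed=0):
    """payoff_rows[t][a]: payoff in [0, 1] of action a at round t (the table
    is cycled). Returns the final mixed strategy over actions."""
    rng = random.Random(seed)
    n = len(payoff_rows[0])
    weights = [1.0] * n
    for t in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Sampling the action to play is the step that consumes random bits;
        # randomness-efficient variants draw from much smaller sample spaces.
        played = rng.choices(range(n), weights=probs)[0]
        row = payoff_rows[t % len(payoff_rows)]
        # Full-information update: every action's weight grows with its payoff.
        weights = [w * (1.0 + eta * row[a]) for a, w in enumerate(weights)]
    total = sum(weights)
    return [w / total for w in weights]

# Action 0 always pays 1 and action 1 pays 0, so weight concentrates on 0.
strategy = multiplicative_weights([[1.0, 0.0]])
```

After many rounds the returned strategy places almost all probability on the higher-payoff action, which is the no-regret guarantee the derandomized versions preserve with fewer random bits.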
<p>In regard to rationality, we look at some computational questions in a related line of work known as revealed preference theory, with the purpose of understanding the computational limits of inferring agents' payoffs and motives when they reveal their preferences by way of how they act. We investigate two problem settings as applications of this theory and obtain results about their intractability:<br />
1. Rationalizability of matchings: we consider the problem of rationalizing a given collection of bipartite matchings and show that it is NP-hard to determine agent preferences for which matchings would be stable. Further, we show, assuming P ≠ NP, that this problem does not admit polynomial-time approximation schemes under two suitably defined notions of optimization.<br />
2. Rationalizability of network formation games: in the case of network formation games, we take up a particular model of connections known as the Jackson-Wolinsky model in which nodes in a graph have valuations for each other and take their valuations into consideration when they choose to build edges. We show that under a notion of stability, known as pairwise stability, the problem of finding valuations that rationalize a collection of networks as pairwise stable is NP-hard. More significantly, we show that this problem is hard even to approximate to within a factor 1/2 and that this is tight.</p>
<p>Our results on hardness and inapproximability of these problems use well-known techniques from complexity theory, and particularly in the case of the inapproximability of rationalizing network formation games, PCPs for the problem of satisfying the optimal number of linear equations in positive integers, building on recent results of Guruswami and Raghavendra.</p>
https://resolver.caltech.edu/CaltechETD:etd-06042009-233839
Firm Behaviour in Markets with Capacity Constraints
https://resolver.caltech.edu/CaltechETD:etd-08122008-102414
Year: 2009
DOI: 10.7907/P1EX-FW81
I study firms' behaviour in markets where firms' long-run capacity decisions, made in the presence of uncertain demand, constrain short-run competition. In Chapter 2, I analyse firms' investment and pricing incentives in a differentiated products framework with uncertain demand. Firms choose production capacities before observing demand and choose prices after demand is realised. Unlike in previous models, when firms are identical, symmetric pure-strategy equilibria exist, even in the presence of very low capacity costs. Furthermore, industry capacity in these symmetric equilibria is strictly greater than the equivalent Cournot equilibrium industry capacity for low costs, and equal to the Cournot industry capacity for higher costs. Subsidies on capacity costs have a greater positive impact on equilibrium capacity than an equivalent subsidy on production costs. In Chapter 3, I use this model to analyse how the market changes when firms practise 'withholding': withdrawing capacity from the market in the short run, after demand is realised, in the hope of making greater profits. I show that withholding is an optimal strategy for firms in these markets and that, compared to the no-withholding case, equilibrium output is lower in low demand states and higher in high demand states. Equilibrium capacity strictly increases. I discuss why it is hard to find real-world examples of withholding in action, despite the increased profits. Chapter 4 looks at the specific case of the electricity industry. Electricity markets are a good example of a setting where capacity constraints and random demand affect competitive outcomes. However, trade in electricity is subject to additional constraints caused by the transmission of electricity through a network. Network constraints are well understood to cause considerable non-convexities in firms' optimisation problems; thus theoretical models have limited use in analysing the behaviour of electricity generating firms.
An alternative approach, economic experiments, has become an important tool for studying these markets, but questions remain as to whether subjects can really imitate large firms in the presence of such complexity. This chapter provides evidence in the affirmative, specifically showing that experimental subjects can understand loop flows in the presence of Kirchhoff's laws, a key physical constraint, and how these affect firms' pricing decisions. The results suggest that electricity market experiments could be scaled up successfully to more realistic networks.
https://resolver.caltech.edu/CaltechETD:etd-08122008-102414
Organizational and Financial Economics
https://resolver.caltech.edu/CaltechETD:etd-05292009-150803
Year: 2009
DOI: 10.7907/B75A-MW79
<p>We investigate behaviors in organizational and financial economics by utilizing and developing the latest techniques from game theory, experimental economics, computational testbeds, and decision-making under risk and uncertainty.</p>
<p>In the first chapter, we use game theory and experimental economics approaches to analyze the relationship between corporate culture and the persistent performance differences among seemingly similar enterprises. First, we show that competition leads to higher minimum effort levels in the minimum effort coordination game. Furthermore, we show that organizations with better coordination also achieve higher rates of cooperation in the prisoner's dilemma game. This supports the theory that the high-efficiency culture developed in coordination games acts as a focal point for the outcome of the subsequent prisoner's dilemma game. In turn, we argue that these endogenous features of culture, developed from coordination and cooperation, can help explain the persistent performance differences.</p>
<p>In the second chapter, using a computational testbed, we theoretically predict and experimentally show that in the minimum effort coordination game, as the cost of effort increases: 1. the game converges to lower effort levels, 2. the speed of convergence increases, and 3. the average payoff is not monotonically decreasing. In fact, average profit is a U-shaped function of cost. Therefore, contrary to intuition, one can obtain a higher average profit by increasing the cost of effort.</p>
<p>In the last chapter, we investigate a well-known paradox in finance. The equity market home bias occurs when investors over-invest in their home country's assets. It is a paradox because such investors are not hedging their risk optimally: even with unrealistic levels of risk aversion, the equity market home bias cannot be explained by the standard mean-variance model. We propose ambiguity aversion as a behavioral explanation. We design six experiments using real-world assets and derivatives to examine the relationship between ambiguity aversion and home bias, testing for ambiguity aversion by showing that investors' subjective probabilities are sub-additive. The experimental results support the assertion that ambiguity aversion is related to the equity market home bias paradox.</p>
https://resolver.caltech.edu/CaltechETD:etd-05292009-150803
Contracts and Markets
https://resolver.caltech.edu/CaltechTHESIS:05282010-090118586
Year: 2010
DOI: 10.7907/FBS0-F288
I merge the standard Principal Agent model with a CAPM-type financial market, to study the interactions of contracts and financial markets. I prove existence of equilibrium in two models, a more general economy allowing for hidden type and action under generic mean variance preferences and a hidden action economy with Markowitz mean-variance preferences. I study economies for which markets have an insurance effect on compensation contracts. I show sufficient conditions for lower variance to obtain in large economies, even with asymmetric information. In this context I show the effect of markets' size on efficiency. I also study moral hazard economies, for which I prove existence of a unique pure strategy equilibrium, and I show that financial markets negatively affect the equilibrium returns of firms. In the final chapter I study the efficiency of securities issued under symmetric information. I find that small markets and low correlation of firms' returns generate inefficiency. I also show that the assumption of symmetry or independence is crucial to obtaining the insurance results in the previous chapters.
https://resolver.caltech.edu/CaltechTHESIS:05282010-090118586
Three Essays on Microeconomic Theory
https://resolver.caltech.edu/CaltechTHESIS:05192012-101726941
Year: 2012
DOI: 10.7907/F5DX-5375
<p>This thesis considers three issues in microeconomic theory - two-sided matching, strategic voting, and revealed preferences.</p>
<p>In the first chapter I discuss the strategic manipulation of stable matching mechanisms commonly used in two-sided matching markets. Stable matching mechanisms are very successful in practice, despite theoretical concerns that they are manipulable by participants. The key finding is that most agents in large markets are close to being indifferent among partners in all stable matchings. It is known that the utility gain by manipulating a stable matching mechanism is bounded by the difference between utilities from the best and the worst stable matching partners. Thus, the main finding implies that the proportion of agents who may obtain a significant utility gain from manipulation vanishes in large markets. This result reconciles the success of stable mechanisms in practice with the theoretical concerns about strategic manipulation. Methodologically, I introduce techniques from the theory of random bipartite graphs for the analysis of large matching markets.</p>
<p>In the second chapter I study the criminal court process, focusing on plea bargaining. Plea bargains screen the types of defendants, guilty or innocent, who go to jury trial, which affects jurors' voting decisions and, in turn, the performance of the entire criminal court. The equilibrium voting behavior of jurors under plea bargaining resembles equilibrium behavior in the classical jury model without plea bargaining. By optimizing a plea bargain offer, however, a prosecutor may induce jurors to act as if they echo the prosecutor's preferences against convicting innocent defendants and acquitting guilty defendants. With reference to Feddersen and Pesendorfer (1998), I study different voting rules in the trial stage and their consequences for the entire court process. Compared to general super-majority rules, I find that a court using the unanimity rule delivers more expected punishment to innocent defendants and less punishment to guilty defendants.</p>
<p>In the third chapter I study collective choices from the viewpoint of revealed preference theory. For every product set of individual actions, joint choices are called Nash-rationalizable if there exists a preference relation for each player such that the selected joint actions are Nash equilibria of the corresponding game. I characterize Nash-rationalizable joint choice behavior by zero-sum games, or games of conflicting interests. If the joint choice behavior forms a product subset, the behavior is called interchangeable. I prove that interchangeability is the only additional empirical condition which distinguishes zero-sum games from general noncooperative games.</p>
https://resolver.caltech.edu/CaltechTHESIS:05192012-101726941
Essays on Cooperation and Reciprocity
https://resolver.caltech.edu/CaltechTHESIS:05232013-113856514
Year: 2013
DOI: 10.7907/G536-ZW17
<p>This dissertation comprises three essays that use theory-based experiments to understand how cooperation and efficiency are affected by certain variables and institutions in different types of strategic interactions prevalent in our society.</p>
<p>Chapter 2 analyzes indefinite-horizon two-person dynamic favor exchange games with private information in the laboratory. Using a novel experimental design to implement a dynamic game with a stochastic jump signal process, this study provides insights into relationships in which cooperation occurs without immediate reciprocity. The primary finding is that favor provision under these conditions is considerably less than under the most efficient equilibrium. Moreover, individuals do not engage in exact score-keeping of net favors; rather, the time since the last favor was provided affects decisions to stop or restart providing favors.</p>
<p>Evidence from experiments in Cournot duopolies is presented in Chapter 3, where players engage in a form of pre-play communication, termed a revision phase, before playing the one-shot game. During this revision phase individuals announce their tentative quantities, which are publicly observed, and revisions are costless. Under real-time revision, payoffs are determined only by the quantities selected at the end, whereas in a Poisson revision game, opportunities to revise arrive according to a synchronous Poisson process and the tentative quantity corresponding to the last revision opportunity is implemented. Contrasting results emerge. While real-time revision of quantities results in choices that are more competitive than the static Cournot-Nash outcome, significantly lower quantities are implemented in the Poisson revision games. This shows that partial cooperation can be sustained even when individuals interact only once.</p>
<p>Chapter 4 investigates the effect of varying the message space in a public good game with pre-play communication where player endowments are private information. We find that neither binary communication nor a larger finite numerical message space results in any efficiency gain relative to the situation without any form of communication. Payoffs and public good provision are higher only when participants are provided with a discussion period through unrestricted text chat.</p>
https://resolver.caltech.edu/CaltechTHESIS:05232013-113856514
Essays on Contests, Coordination Games, and Matching
https://resolver.caltech.edu/CaltechTHESIS:02252013-111730958
Year: 2013
DOI: 10.7907/BZX2-PF49
<p>In this thesis, I use theory and experiments (sometimes only experiments) to investigate the impact of agents’ heterogeneity on economic environments such as tournaments, decentralized matching, and coordination games.</p>
<p>The first chapter analyzes a coordination game (a stag-hunt game) in which one of the equilibria gives a higher payoff to the players, but playing the corresponding strategy profile leading to this equilibrium entails strategic risk. In this chapter, I ask whether agents can coordinate on the equilibrium that gives a higher payoff when they are provided information about an opponent’s risk aversion. Two key insights result from my analysis. First, a subject’s propensity to choose the risky action depends on her opponent’s risk attitude. Second, this propensity is independent of the subject’s own risk attitude.</p>
<p>The second chapter compares the performance of two tournament designs when contestants are heterogeneous in their abilities. One of the designs is the standard winner-take-all (WTA) tournament, which is common both in the literature and in the real world. The alternative tournament design involves two tournaments with different prizes (parallel tournaments) where individuals choose which tournament to enter before competing. With a simple model and an experiment, I show that when contestants’ abilities differ substantially, the designer makes higher profit using parallel tournaments. Nevertheless, when the contestants’ abilities are similar, the designer makes higher profit in the WTA tournament.</p>
<p>The third chapter studies a two-period decentralized matching game under complete information with frictions in the form of time discounting. I find that the subgame perfect Nash equilibrium outcome of this game coincides with a stable outcome for most preference profiles. Which stable outcome emerges depends on the level of frictions: the subgame perfect Nash equilibrium outcome yields the firm-optimal stable match when the time discount is sufficiently high or low, and a worker-optimal stable match for intermediate values.</p>
https://resolver.caltech.edu/CaltechTHESIS:02252013-111730958
Essays on Information, Competition, and Cooperation
https://resolver.caltech.edu/CaltechTHESIS:10272012-131405182
Year: 2013
DOI: 10.7907/CQMD-PQ41
<p>This thesis consists of three papers that study the relationships between information, competition, and cooperation in two novel environments. We first examine the competitive behavior of firms with private information in two-sided matching markets. This part of the thesis employs purely game-theoretic tools. Second, we study voluntary contributions towards a linear public good by players who are connected through a network. In this environment, we use experimental and theoretical techniques to examine the effects of different information treatments and network structures on contributions.</p>
<p>In Chapter 2, we study the behavior of firms in a competitive market for workers. In particular, we study a game in which firms with private information compete for workers by committing to a single salary offer. Workers care only about salary and the matching process follows the deferred-acceptance approach introduced by Gale and Shapley (1962). For a two-firm, two-worker model, there exists a Bayesian-Nash equilibrium in which each firm type chooses a distributional strategy with interval support in the salary space. This equilibrium exhibits a separation of types, in the sense that types with a common most preferred worker choose non-overlapping, adjacent supports. The type that makes higher offers is determined by the relative marginal value for the preferred worker. In larger markets, which replicate the two-firm, two-worker case, a comparable Bayesian-Nash equilibrium exists and the separation result endures. In the limit, there is no aggregate uncertainty about the realization of firm types, and competition is confined to the most popular worker type. Numerical results suggest that the finite market equilibrium strategies converge with replication to the corresponding equilibrium strategies in the limit case.</p>
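As a reminder of the baseline matching procedure referenced above, here is a minimal sketch of the textbook firm-proposing deferred-acceptance algorithm of Gale and Shapley (1962); the chapter's model layers salary competition on top of this, and the firm names and preference lists below are purely hypothetical:

```python
# Context sketch: the textbook firm-proposing deferred-acceptance procedure
# of Gale and Shapley (1962). Names and preference lists are hypothetical.

def deferred_acceptance(firm_prefs, worker_prefs):
    """firm_prefs / worker_prefs: dicts mapping each agent to a preference
    list, most preferred first. Returns a stable matching: worker -> firm."""
    rank = {w: {f: i for i, f in enumerate(prefs)}
            for w, prefs in worker_prefs.items()}
    next_idx = {f: 0 for f in firm_prefs}   # next worker each firm proposes to
    matched = {}                            # worker -> firm (tentative)
    free = list(firm_prefs)                 # firms with no tentative match
    while free:
        f = free.pop()
        if next_idx[f] >= len(firm_prefs[f]):
            continue                        # f has proposed to everyone
        w = firm_prefs[f][next_idx[f]]
        next_idx[f] += 1
        if w not in matched:
            matched[w] = f                  # w tentatively accepts f
        elif rank[w][f] < rank[w][matched[w]]:
            free.append(matched[w])         # w trades up; old firm re-enters
            matched[w] = f
        else:
            free.append(f)                  # w rejects f; f will try again
    return matched

firms = {'F1': ['w1', 'w2'], 'F2': ['w1', 'w2']}
workers = {'w1': ['F1', 'F2'], 'w2': ['F1', 'F2']}
deferred_acceptance(firms, workers)  # {'w1': 'F1', 'w2': 'F2'}
```

Tentative acceptances are only finalized when no firm wants to propose further, which is what makes the resulting matching stable.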
<p>Chapter 3 studies individual contributions in a repeated public goods experiment. Subjects are connected through a circle network, and consumption of the public good depends on a player's own contribution and the contributions of his neighbors. We study whether contributions depend on the nature of the information players are shown about others between rounds of the repeated game. We extend the approach of Arifovic and Ledyard (2009), which merges a modified model of other-regarding preferences (ORP) with a theory of learning. Our model predicts individual behavior that ranges from free-riding, to conditional cooperation, to unconditional giving. Many subjects switch between these different behavioral strategies across games with different information treatments. Individual contributions are remarkably consistent with our model, which combines other-regarding preferences, learning, and the information treatment. Both the data and model simulations suggest that learning (to play the benchmark Nash equilibrium of the game) is differential and contagious across players. Free-riders and unconditional givers learn faster than conditional cooperators, and provide an anchor that accelerates learning by their neighbors. These results suggest that the network or neighborhood structure may be important for contributions through its effects on learning.</p>
<p>In Chapter 4, we extend the analysis of learning and contributions in network public goods experiments. Using a set of five different network structures, we examine three key aspects of individual behavior. First, we report a negative finding regarding our theory of other-regarding preferences, which makes specific predictions about how a particular subject should and should not behave across networks. We find several violations of these predictions, particularly in small, complete network groups, but also in the larger, more interesting networks. Second, we report on the average contributions by players in groups that consist of all conditional cooperators. In the one-shot game, these groups have a continuum of equilibria, in which every player contributes the same amount. While one might expect contributions to average half of the endowment, we find in both the data and learning simulations that average contributions decline over time to less than half of the endowment. We conjecture that learning dynamics may provide a method of equilibrium selection for players trying to coordinate on one equilibrium in the repeated game.</p>
<p>Our main finding in this chapter is that learning is contagious in networks other than the circle, which we studied in Chapter 3. We find considerable evidence at the individual match level that free-riders and altruists provide an anchor that stabilizes behavior and accelerates learning by their conditional cooperator neighbors. Our analysis highlights the possibility that, even when the distribution of free-riders, altruists, and conditional cooperators is the same across networks, the different neighborhood structures may affect contributions differently through their effects on learning. Thus, the main contribution of this chapter is the confirmation that learning is contagious across a range of different network structures.</p>
https://resolver.caltech.edu/CaltechTHESIS:10272012-131405182
Essays on Economics Networks
https://resolver.caltech.edu/CaltechTHESIS:05312013-135942135
Year: 2013
DOI: 10.7907/C9B1-3M92
<p>This thesis belongs to the growing field of economic networks. In particular, we develop three essays in which we study the problems of bargaining, discrete choice representation, and pricing in the context of networked markets. Despite analyzing very different problems, the three essays share the common feature of making use of a network representation to describe the market of interest.</p>
<p>In Chapter 1 we present an analysis of bargaining in networked markets. We make two contributions. First, we characterize market equilibria in a bargaining model, and find that players' equilibrium payoffs coincide with their degree of centrality in the network, as measured by Bonacich's centrality measure. This characterization allows us to map, in a simple way, network structures into market equilibrium outcomes, so that payoff dispersion in networked markets is driven by players' network positions. Second, we show that the market equilibrium for our model converges to the so-called eigenvector centrality measure. We show that the economic condition for reaching convergence is that the players' discount factor goes to one. In particular, we show how the discount factor, the matching technology, and the network structure interact so that eigenvector centrality emerges as the limiting case of our market equilibrium.</p>
<p>We point out that the eigenvector approach is a way of finding the most central or relevant players in terms of the “global” structure of the network, while paying less attention to patterns that are more “local”. Mathematically, the eigenvector centrality captures the relevance of players in the bargaining process, using the eigenvector associated with the largest eigenvalue of the adjacency matrix of a given network. Thus our result may be viewed as an economic justification of the eigenvector approach in the context of bargaining in networked markets.</p>
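As a minimal illustration of the mathematical object involved, the principal eigenvector of an adjacency matrix can be computed by power iteration. The four-node line network below is a hypothetical example, unrelated to the thesis's model or data.

```python
# Power iteration on an adjacency matrix: the eigenvector associated with the
# largest eigenvalue gives eigenvector centrality. Pure-Python sketch on a
# hypothetical 4-node line network; the chapter's framework is far richer.

def eigenvector_centrality(adj, iters=200):
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y) or 1.0          # rescale each step to avoid overflow
        x = [v / norm for v in y]
    s = sum(x)
    return [v / s for v in x]         # normalize scores to sum to one

# Line network 0-1-2-3: the two middle nodes are "globally" better placed.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
c = eigenvector_centrality(adj)
print(c)  # endpoints tie; middle nodes tie and score strictly higher
```

Note how the middle nodes outscore the endpoints even though degree alone would only weakly separate them; this is the "global" flavor of the eigenvector measure mentioned above.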
<p>As an application, we analyze the special case of seller-buyer networks, showing how our framework may be useful for analyzing price dispersion as a function of sellers and buyers' network positions.</p>
<p>Finally, in Chapter 3 we study the problem of price competition and free entry in networked markets subject to congestion effects. In many environments, such as communication networks in which network flows are allocated, or transportation networks in which traffic is directed through the underlying road architecture, congestion plays an important role. In particular, we consider a network with multiple origins and a common destination node, where each link is owned by a firm that sets prices in order to maximize profits, whereas users want to minimize the total cost they face, which is given by the congestion cost plus the prices set by firms. In this environment, we introduce the notion of Markovian traffic equilibrium to establish the existence and uniqueness of a pure-strategy price equilibrium, without assuming that the demand functions are concave or imposing particular functional forms for the latency functions. We derive explicit conditions to guarantee existence and uniqueness of equilibria. Given this existence and uniqueness result, we apply our framework to study entry decisions and welfare, and establish that in congested markets with free entry, the number of firms exceeds the social optimum.</p>
https://resolver.caltech.edu/CaltechTHESIS:05312013-135942135
Essays in Social and Economic Networks
https://resolver.caltech.edu/CaltechTHESIS:05292015-045627301
Year: 2015
DOI: 10.7907/Z9639MPX
<p>This thesis consists of three chapters, and they concern the formation of social and economic networks. In particular, this thesis investigates the solution concepts of Nash equilibrium and pairwise stability in models of strategic network formation. While the first chapter studies the robustness property of Nash equilibrium in network formation games, the second and third chapters investigate the testable implication of pairwise stability in networks.</p>
<p>The first chapter of my thesis is titled "The Robustness of Network Formation Games". In this chapter, I propose a notion of equilibrium robustness, and analyze the robustness of Nash equilibria in a class of well-studied network formation games that suffers from multiplicity of equilibria. Under this notion of robustness, efficiency is also achieved. A Nash equilibrium is k-robust if k is the smallest integer such that the equilibrium network can be perturbed by adding k links. This chapter shows that acyclic networks are particularly fragile: with the exception of the periphery-sponsored star, all Nash equilibrium networks without cycles are 1-robust, or minimally robust. The main result then proves that, among all Nash equilibria, cyclic or acyclic, the periphery-sponsored star is the most robust. Moreover, the periphery-sponsored star is by far the most robust in the sense that, asymptotically in large networks, it must be at least twice as robust as any other Nash equilibrium.</p>
<p>The second chapter of my thesis is titled "On the Consistency of Network Data with Pairwise Stability: Theory". In this chapter, I characterize the consistency of social network data with pairwise stability, a solution concept requiring that no agent in a pairwise stable network prefers to deviate by forming or dissolving links. I take preferences as unobserved and nonparametric, and seek to characterize the networks that are consistent with pairwise stability. Specifically, given data on a single network, I provide a necessary and sufficient condition for the existence of some preferences that would induce this observed network as pairwise stable. When such preferences exist, I say that the observed network is rationalizable as pairwise stable. Without any restriction on preferences, any network can be rationalized as pairwise stable. Under the assumption that agents who are observed to be similar in the network have similar preferences, I show that an observed network is rationalizable as pairwise stable if and only if it satisfies the Weak Axiom of Revealed Pairwise Stability (WARPS). This result is generalized to include an arbitrary notion of similarity.</p>
<p>The third chapter of my thesis is titled "On the Consistency of Network Data with Pairwise Stability: Application". In this chapter, I investigate the extent to which real-world networks are consistent with WARPS. In particular, using the network data collected by Banerjee et al. (2013), I explore how consistency with WARPS is empirically associated with economic outcomes and social characteristics. The main empirical finding is that targeting centrally positioned nodes in a social network to increase the spread of information is more effective when the underlying network is more consistent with WARPS.</p>
https://resolver.caltech.edu/CaltechTHESIS:05292015-045627301
Essays on Correlated Equilibrium and Voter Turnout
https://resolver.caltech.edu/CaltechTHESIS:05142015-114936374
Year: 2015
DOI: 10.7907/Z94B2Z8T
<p>This thesis consists of three essays in the areas of political economy and game theory, unified by their focus on the effects of pre-play communication on equilibrium outcomes.</p>
<p>Communication is fundamental to elections. Chapter 2 extends canonical voter turnout models, where citizens, divided into two competing parties, choose between costly voting and abstaining, to include any form of communication, and characterizes the resulting set of Aumann's correlated equilibria. In contrast to previous research, high-turnout equilibria exist in large electorates and uncertain environments. This difference arises because communication can coordinate behavior in such a way that citizens find it incentive compatible to follow their correlated signals to vote more. The equilibria have expected turnout of at least twice the size of the minority for a wide range of positive voting costs.</p>
<p>In Chapter 3 I introduce a new equilibrium concept, called subcorrelated equilibrium, which fills the gap between Nash and correlated equilibrium, extending the latter to multiple mediators. Subcommunication equilibrium similarly extends communication equilibrium for incomplete information games. I explore the properties of these solutions and establish an equivalence between a subset of subcommunication equilibria and Myerson's quasi-principals' equilibria. I characterize an upper bound on expected turnout supported by subcorrelated equilibrium in the turnout game.</p>
<p>Chapter 4, co-authored with Thomas Palfrey, reports a new study of the effect of communication on voter turnout using a laboratory experiment. Before voting occurs, subjects may engage in various kinds of pre-play communication through computers. We study three communication treatments: No Communication, a control; Public Communication, where voters exchange public messages with all other voters; and Party Communication, where messages are exchanged only within one's own party. Our results point to a strong interaction effect between the form of communication and the voting cost. With a low voting cost, party communication increases turnout, while public communication decreases turnout. The data are consistent with correlated equilibrium play. With a high voting cost, public communication increases turnout. With communication, we find essentially no support for the standard Nash equilibrium turnout predictions.</p>
https://resolver.caltech.edu/CaltechTHESIS:05142015-114936374
Essays in Behavioral Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:05292015-095332979
Year: 2015
DOI: 10.7907/Z9D21VJH
<p>This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.</p>
<p>In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.</p>
<p>The second chapter characterizes a decision maker with sticky beliefs. That is, a decision maker who does not update enough in response to information, where "enough" means as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.</p>
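The representation described here, with the updated belief a convex combination of the prior and the Bayesian posterior, can be illustrated numerically. The two-state example and its numbers below are hypothetical, not taken from the chapter.

```python
# Sticky-belief updating as described: updated beliefs are a convex
# combination of the prior and the Bayesian posterior, with weight lam on
# the prior (lam = 0 recovers Bayes; lam = 1 means no updating at all).
# The two-state prior and likelihoods are illustrative only.

def bayes(prior, likelihood):
    joint = [p * l for p, l in zip(prior, likelihood)]
    z = sum(joint)
    return [j / z for j in joint]

def sticky_update(prior, likelihood, lam):
    post = bayes(prior, likelihood)
    return [lam * p + (1 - lam) * q for p, q in zip(prior, post)]

prior = [0.5, 0.5]            # two states, initially equally likely
likelihood = [0.8, 0.2]       # P(observed signal | state)
print(bayes(prior, likelihood))              # [0.8, 0.2]
print(sticky_update(prior, likelihood, 0.5)) # moves only halfway to Bayes
```

With stickiness 0.5 the agent's belief lands midway between the prior (0.5, 0.5) and the Bayesian posterior (0.8, 0.2), which is exactly the sense in which the parameter measures conservatism bias.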
<p>The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one who chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.</p>
https://resolver.caltech.edu/CaltechTHESIS:05292015-095332979
Three Essays on Information Economics
https://resolver.caltech.edu/CaltechTHESIS:05272016-192342051
Year: 2016
DOI: 10.7907/Z9TB14WS
<p>The main theme of my thesis is how uncertainty affects behaviors. I explore how agents seek to resolve uncertainty in different environments. In Chapter 1, agents learn from the messages of informed experts in a signaling game. In Chapter 2, an agent learns about a fixed and uncertain physical environment through dynamic experimentation. In the last chapter, agents learn about others' preferences through the outcome of a central matching mechanism. </p>
<p>Motivated by the question of how opposing political candidates who are policy experts can communicate to voters in a way that helps them win the election, I study a delegation problem with two informed, self-interested agents. Agents make proposals before the decision maker decides to whom to delegate a task. The innovation is that there are multiple issues that the principal and agents care about, and the agents can be vague about any issue in their proposals. Intuition says that agents should be specific about the issues that they are trusted on and vague about other issues. I find the opposite: an agent is disadvantaged by revealing information to the decision maker about precisely those issues on which he is trusted by the principal. The reason is that doing so enables his opponent to take advantage of this revealed information and undercut him. Essentially, when the principal is on an agent's side for some issue, that agent does not want to be specific, because it creates a visible target for his opponent to react to. He wants to be vague, because that allows the principal's ignorance about the optimal action to create an insurmountable obstacle for his opponent. As a result, it is to an agent's advantage to be vague about the issue that he is trusted on.</p>
<p>The second chapter investigates the implication of biased updating in dynamic experimentation such as a firm's R&D process. People exhibit a near-miss effect during gambling. For example, if the first two wheels of a slot machine indicate a potential final outcome of jackpot but the last wheel indicates a loss, people are motivated to gamble more. An outcome that is close to a success but is still a failure is called a "near miss." In this chapter, I explain the near-miss effect in a firm's repeated R&D process. There are two factors that sequentially affect the profitability of R&D, both of which are uncertain. First is whether the R&D team is skilled enough to make a technical breakthrough. If a breakthrough occurs, then a second factor comes into play, which is whether the market demand is high enough to make the product profitable. Moreover, good news for the first stage is a prerequisite for learning about the second stage. In each of infinitely many periods, the decision maker of the firm decides whether to engage in risky R&D and observes whether the outcome is a failure (no breakthrough), a success (with breakthrough and high market demand), or a near miss (with breakthrough but low market demand). I assume that the decision maker of the firm learns about the skill of the team properly, but when she updates about the market demand, she updates incorrectly and overweights her prior. In particular, her posterior about the market demand is a convex combination of her prior and the Bayesian posterior. This bias affects the relative updating of the two factors, which gives rise to the near-miss effect: after a near miss is observed, the decision maker values doing R&D more than before although she has received no payoff.</p>
<p>I show that if the decision maker is sufficiently biased and overweights her prior enough, then she exhibits the near-miss effect. I also compare the near-miss effect for decision makers with different degrees of bias. As it turns out, the more biased a decision maker is, the more severely she exhibits the near-miss effect. However, given the decision maker's belief about the two factors, the more biased she is, the less she values R&D. Consequently, the value of R&D is highest for a Bayesian.</p>
<p>In the last chapter, I study how well a centralized matching mechanism works when agents do not know others' preferences. I consider a standard two-sided marriage matching problem, except that agents only know their own preferences. Roth (1989) proved by an example the non-existence of a mechanism with at least one stable equilibrium. In his proof, an agent is allowed to report a preference that is realized with ex ante zero probability, which violates the setup of a Bayesian game. Instead, by restricting agents to report only preferences with positive realization probabilities, I show that Roth's result still holds. More interestingly, as long as agents are allowed to form blocking pairs after a matching outcome is announced, the final outcome is always stable with respect to the true preferences. This means that even when the mechanism fails to produce a stable outcome, it can still release enough information for agents to initiate a blocking pair, which induces a stable outcome.</p>
https://resolver.caltech.edu/CaltechTHESIS:05272016-192342051
Essays in Behavioral Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:05272016-142109004
Year: 2016
DOI: 10.7907/Z93R0QT5
<p>Many different behavioral phenomena that cannot be rationalized by standard models in economics have been well-documented both in the real world and in lab experiments. Motivated by these behavioral phenomena, the purpose of this dissertation is three-fold. First, I develop axiomatic models of individual decision-making to explain these well-documented phenomena. Second, I derive the implications and predictions of these axiomatic models for intertemporal choice, asset pricing, and other economic contexts. Third, I provide connections between these seemingly separate behavioral phenomena and widely-used properties of preferences in economics and psychology. This dissertation consists of five chapters. The first chapter studies dynamic choice under uncertainty. The second and third chapters study choice over multi-attribute alternatives. The fourth and fifth chapters study stochastic choice.</p>
<p>The first chapter studies history-dependent risk aversion and focuses on a behavioral phenomenon called the reinforcement effect (RE), which states that people become less risk-averse after a good history than after a bad history. The RE is well-documented in consumer choices, financial markets, and lab experiments. I show that this seemingly anomalous behavior occurs whenever risk preferences are history-dependent (in a nontrivial way) and satisfy monotonicity with respect to first-order stochastic dominance. To study history-dependent risk aversion and the RE formally, I develop a behaviorally-founded model of dynamic choice under risk that generalizes standard discounted expected utility. To illustrate the usefulness of my model, I apply it to the Lucas tree model of asset pricing and draw implications of the RE for asset price dynamics. I find that, compared to history-independent models, assets are overpriced when the economy is in a good state and are underpriced in a bad state. Moreover, my model generates high, volatile, and predictable asset returns, and low and smooth bond returns, consistent with empirical evidence.</p>
<p>In the second chapter, I develop an axiomatic model of reference-dependent preferences in which reference points are endogenous. In particular, I focus on choices from menus of two-attribute alternatives, and the reference point for a given menu is a vector that consists of the minimums of each dimension of the menu. I characterize this model by two weakenings of the Weak Axiom of Revealed Preference (WARP) in addition to standard axioms. My model is not just consistent with the attraction effect and the compromise effect, well-known preference reversals, but it also provides a connection between these two effects and diminishing sensitivity, a widely used behavioral property in economics. The model also provides bounds on preference reversals. I apply the model to two different contexts, intertemporal choice and risky choice, and diminishing sensitivity has interesting implications. In intertemporal choice, the main implication of the model is that borrowing constraints produce a psychological pressure to move away from the constraints even if they are not binding. In risky choice, the model allows conflicting risk behaviors.</p>
<p>In the third chapter, I study choice over multidimensional alternatives. Making a choice between multidimensional alternatives is a difficult task. Therefore, a decision maker may adopt some procedure (heuristic) to simplify this task. I provide an axiomatic model of one such heuristic, the Intra-Dimensional Comparison (IDC) heuristic, which is well-documented in the experimental literature on choice under risk. Under the IDC heuristic, a decision maker compares multidimensional alternatives dimension-by-dimension and makes a decision based on those comparisons. The model of the IDC heuristic provides a general framework applicable to many different contexts, including risky choice and social choice.</p>
<p>The fourth chapter is joint work with Federico Echenique and Kota Saito. We develop an axiomatic theory of random choice that builds on Luce's (1959) model to incorporate a role for perception. We capture the role of perception through perception priorities: priorities that determine whether an object or alternative is perceived sooner or later than other alternatives. We identify agents' perception priorities from their violations of Luce's axiom of independence from irrelevant alternatives (IIA). The direction of the violation of IIA implies an orientation of agents' priority rankings. We adjust choice probabilities to account for the effects of perception, and impose that adjusted choice probabilities satisfy IIA. So all violations of IIA are accounted for by the perception order. The theory can explain some very well-documented behavioral phenomena in individual choice. We can also explain the effects of forced choice and choice overload in experiments.</p>
<p>The fifth chapter studies how the ordering of alternatives (e.g., the location of products in a grocery store, the order of candidates on a ballot) affects a decision maker's choices. I develop an axiomatic model of random choice that builds on Luce's (1959) model and incorporates the effect of the ordering of alternatives on choice frequencies. When the ordering of alternatives is observed, I characterize the model by two weakenings of IIA. When the ordering of alternatives is not observed, I can identify it from choice data. The model can accommodate the similarity, compromise, and attraction effects, violations of stochastic transitivity, and choice overload, all of which are well-known behavioral phenomena in individual choice.</p>
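For reference, the Luce (1959) baseline that these chapters build on assigns choice probabilities proportional to fixed weights, which is exactly what makes IIA hold: the odds between any two alternatives are unchanged by the rest of the menu. The weights below are hypothetical.

```python
# Luce's (1959) model: p(x | A) = w(x) / sum of w(y) over y in A, for fixed
# positive weights w. A quick check that IIA holds under this rule.
# The weights are hypothetical, chosen only for illustration.

def luce(weights, menu):
    z = sum(weights[x] for x in menu)
    return {x: weights[x] / z for x in menu}

w = {"a": 2.0, "b": 1.0, "c": 1.0}
p_small = luce(w, ["a", "b"])
p_large = luce(w, ["a", "b", "c"])

# IIA: the odds of a over b are unchanged when c is added to the menu.
print(p_small["a"] / p_small["b"])   # 2.0
print(p_large["a"] / p_large["b"])   # 2.0
```

Observed violations of this invariance (attraction, compromise, similarity effects) are precisely what the perception-priority and ordering extensions in the fourth and fifth chapters are designed to accommodate.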
https://resolver.caltech.edu/CaltechTHESIS:05272016-142109004
Essays on Social Networks and Political Economy
https://resolver.caltech.edu/CaltechTHESIS:05262016-144636131
Year: 2016
DOI: 10.7907/Z9028PGG
<p>This dissertation consists of two original studies in social networks and one original study in political economy. In the first two chapters, I study (i) how social networks form, and (ii) how economic agents optimize their behaviors for a given network structure. In the last chapter, I examine how election rules affect individual voting decisions and ultimate election outcomes.</p>
<p>In Chapter 1, "Social Network Formation and Strategic Interaction in Large Networks," I present a dynamic network formation model that aims to explain why some empirical degree distributions exhibit the increasing hazard rate property (IHRP). In my model, a sequentially arriving node forms a link with one existing node through a bilateral agreement. A newborn node prefers a highly linked node; however, the more links an existing node has, the more the marginal return from an additional link diminishes. I prove that the IHRP emerges if and only if the latter effect prevails over the former. I present two implications of the IHRP for strategic interactions in networks. First, when there is uncertainty about neighboring agents' connectivity, the IHRP guarantees that a unique Bayesian equilibrium exists in a network game with strategic complementarities. Second, the IHRP characterizes a monotone revenue-maximizing mechanism with allocative externalities.</p>
<p>In Chapter 2, "Monopoly Pricing and Diffusion of a (Social) Network Good," I present a model of dynamic pricing and diffusion of a network good sold by a monopolist. In the model, the network good is a subscription social network good. This means that in each period, each consumer has to pay a subscription price to use the good, and the utility derived from subscribing to the good increases as more of her neighboring consumers subscribe. Consumers myopically optimize their subscription decisions, and the monopolist chooses a sequence of subscription prices that maximizes his discounted sum of per-period profits. Three main results emerge. First, I characterize a unique steady state of the monopoly market. Second, I find that optimal sequences of subscription prices oscillate around the subscription price at the steady state as time passes. Third, I analyze how changes in the monopolist's discount factor and the density of the social network affect the subscription price, subscription rate, and deadweight loss at the steady state.</p>
<p>In Chapter 3, "A Model of Pre-Electoral Coalition Formation," I study how two different election rules, simple plurality (e.g., as in South Korea) and two-round runoff (e.g., as in France), affect political candidates’ incentives to form pre-electoral coalitions (PECs). In my model, three candidates compete for a single office, and two candidates can form a PEC. Since the candidates are both policy- and office-motivated, one candidate can incentivize the other candidate to withdraw his candidacy by choosing a joint policy platform. I find that PECs are more likely to form in plurality elections than in two-round runoff elections. I further examine how other electoral environments, such as ideological distance and pre-election polls, influence incentives to form PECs.</p>
https://resolver.caltech.edu/CaltechTHESIS:05262016-144636131
Essays in Econometrics and Political Economy
https://resolver.caltech.edu/CaltechTHESIS:05272016-213755050
Year: 2016
DOI: 10.7907/Z9MS3QQZ
<p>This dissertation comprises three essays in Econometrics and Political Economy offering both methodological and substantive contributions to the study of electoral coalitions (Chapter 2), the effectiveness of campaign expenditures (Chapter 3), and the general practice of experimentation (Chapter 4).</p>
<p>Chapter 2 presents an empirical investigation of coalition formation in elections. Despite its prevalence in most democracies, there is little evidence documenting the impact of electoral coalition formation on election outcomes. To address this imbalance, I develop and estimate a structural model of electoral competition that enables me to conduct counterfactual analyses of election outcomes under alternative coalitional scenarios. The results uncover substantial equilibrium savings in campaign expenditures from coalition formation, as well as significant electoral gains benefitting electorally weaker partners.</p>
<p>Chapter 3, co-authored with Benjamin J. Gillen, Hyungsik Roger Moon, and Matthew Shum, proposes a novel data-driven approach to the problem of variable selection in econometric models of discrete choice estimated using aggregate data. Our approach applies penalized estimation algorithms imported from the machine learning literature along with confidence intervals that are robust to variable selection. We illustrate our approach with an application that explores the effect of campaign expenditures on candidate vote shares in data from Mexican elections.</p>
<p>Chapter 4, co-authored with Abhijit Banerjee, Sylvain Chassang, and Erik Snowberg, provides a decision-theoretic framework in which to study the question of optimal experiment design. We model experimenters as ambiguity-averse decision makers who trade off their own subjective expected payoff against that of an adversarial audience. We establish that ambiguity aversion is required for randomized controlled trials to be optimal. We also use this framework to shed light on the important practical questions of rerandomization and resampling.</p>
https://resolver.caltech.edu/CaltechTHESIS:05272016-213755050
Essays on Information Collection
https://resolver.caltech.edu/CaltechTHESIS:05312017-141442186
Year: 2017
DOI: 10.7907/Z9DV1GWC
<p>This thesis is devoted to the problem of information collection from theoretical and experimental perspectives.</p>
<p>In Chapter 2, I characterize the unique optimal learning strategy when there are two information sources, three possible states of the world, and learning is modeled as a search process. The optimal strategy consists of two phases. During the first phase, only beliefs about the state and the quality of information sources matter for the optimal choice between these sources. During the second phase, this choice also depends on how much the agent values different types of information. The information sources are substitutes when each individual source is likely to reveal the state eventually, and they are complements otherwise.</p>
<p>In Chapter 3, co-authored with Li Song, we conduct an experiment demonstrating that, even in a simple four-person circle network, people appear to fail to account for possible repetition of the information they receive. Moreover, we show that this phenomenon can be partially attributed to rational considerations that take into account other people’s deviations from optimal behavior.</p>
<p>In Chapter 4, co-authored with Marcelo A. Fernández, we model overconfidence as if a decision maker perceives information as being more precise than it actually is. We show that the effect of overconfidence on the quality of the final decision is shaped by three forces: overestimating the precision of future information, overestimating the precision of past information, and overestimating the amount of information to be collected in the future. The first force pushes an overconfident decision maker to collect more information, while the second and third forces work in the other direction.</p>https://resolver.caltech.edu/CaltechTHESIS:05312017-141442186The Implications of Privacy-Aware Choice
https://resolver.caltech.edu/CaltechTHESIS:03232017-160527848
Year: 2017
DOI: 10.7907/Z9057CZP
<p>Privacy concerns are becoming a major obstacle to using data in the way that we want. It's often unclear how current regulations should translate into technology, and the changing legal landscape surrounding privacy can cause valuable data to go unused. In addition, when people know that their current choices may have future consequences, they might modify their behavior to ensure that their data reveal less---or perhaps, more favorable---information about themselves. Given these concerns, how can we continue to make use of potentially sensitive data, while providing satisfactory privacy guarantees to the people whose data we are using? Answering this question requires an understanding of how people reason about their privacy and how privacy concerns affect behavior.</p>
<p>In this thesis, we study how strategic and human aspects of privacy interact with existing tools for data collection and analysis. We begin by adapting the standard model of consumer choice theory to a setting where consumers are aware of, and have preferences over, the information revealed by their choices. In this model of privacy-aware choice, we show that little can be inferred about a consumer's preferences once we introduce the possibility that she has concerns about privacy, even when her preferences are assumed to satisfy relatively strong structural properties. Next, we analyze how privacy technologies affect behavior in a simple economic model of data-driven decision making. Intuition suggests that strengthening privacy protections will both increase utility for the individuals providing data and decrease usefulness of the computation. However, we demonstrate that this intuition can fail when strategic concerns affect consumer behavior. Finally, we study the problem an analyst faces when purchasing and aggregating data from strategic individuals with complex incentives and privacy concerns. For this problem, we provide both mechanisms for eliciting data that satisfy the necessary desiderata, and impossibility results showing the limitations of privacy-preserving data collection.</p>https://resolver.caltech.edu/CaltechTHESIS:03232017-160527848Essays on Matching Theory
https://resolver.caltech.edu/CaltechTHESIS:05192017-101424341
Year: 2017
DOI: 10.7907/Z9S75DCJ
<p>Matching theory is a rapidly growing field in economics that often deals with markets in which monetary transfers are forbidden. Hence, policy makers often use centralized procedures to organize markets and coordinate players' behavior. Three concerns play central roles in designing these procedures: efficiency, fairness, and incentive compatibility. These concerns are also the focus of my studies. Specifically, my dissertation consists of three original studies on the allocation of indivisible resources to agents. The first chapter studies school choice, a centralized market that assigns students to public schools. I compare popular matching mechanisms used in school choice while accommodating the fact that students and their parents often have heterogeneous sophistication in understanding the mechanisms. In the second chapter, I study an abstract object allocation problem in which objects do not have priority rankings over agents. I show that the three objectives of efficiency, fairness, and incentive compatibility can be incompatible with each other: a mechanism that satisfies a minimal efficiency requirement and mild fairness requirements must be manipulable by some group of agents in a strong sense. Since the efficiency requirement is weak enough that policy makers are likely to pursue it, my results suggest that policy makers must choose between fairness and group incentive compatibility. In the third chapter, I study the same object allocation problems, except that some agents have private endowments. I propose a new mechanism that has desirable properties in efficiency, fairness, and incentive compatibility. Below I provide more details on each chapter.</p>
<p>School choice is a trend in K-12 public education in the US and many other countries that allows children to choose schools across neighborhoods. In Chapter 1, "Level-k Reasoning in School Choice", I compare two matching algorithms that many cities use to assign children to public schools: the Boston Mechanism (BM) and Deferred Acceptance (DA). BM is manipulable, while DA is strategy-proof. Recently, several cities have decided to switch from BM to DA to avoid manipulation. However, the effect of the switch is not well understood. In this paper, I use the level-k model to study the strategies used by parents in BM, taking into account the fact that parents often differ in their ability to manipulate BM because of their heterogeneous sophistication. Interestingly, I find that the level-k reasoning process in BM is analogous to the procedure of DA. This analogy provides a new way to understand how parents may behave in BM. Under a mild assumption, it implies that for any school choice problem and any sophistication distribution of parents, the assignment found by BM is never less efficient than the assignment found by DA. I also examine how parents' beliefs about others' sophistication affect their welfare. I find that, in general, a child is guaranteed to benefit from his parent's sophistication in BM only when his parent's level is high relative to others and his parent's belief about others' sophistication levels is accurate. The simulation results of my model exhibit patterns similar to those in empirical datasets.</p>
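For readers unfamiliar with the mechanisms being compared, DA is the classical Gale-Shapley deferred acceptance algorithm. The following is a hypothetical sketch of a student-proposing version (illustrative code and data, not from the thesis; it assumes strict preferences and that every school ranks every student):

```python
def deferred_acceptance(student_prefs, school_prefs, capacities):
    """Student-proposing Deferred Acceptance (Gale-Shapley).

    student_prefs: dict student -> list of schools, most preferred first.
    school_prefs:  dict school  -> list of students, highest priority first.
    capacities:    dict school  -> number of seats.
    Returns a dict student -> school (None if unmatched).
    """
    # rank[s][i] = position of student i in school s's priority list
    rank = {s: {i: r for r, i in enumerate(p)} for s, p in school_prefs.items()}
    next_choice = {i: 0 for i in student_prefs}   # next school each student proposes to
    held = {s: [] for s in school_prefs}          # tentatively held students per school
    free = list(student_prefs)                    # students currently unassigned

    while free:
        i = free.pop()
        if next_choice[i] >= len(student_prefs[i]):
            continue                              # student i has exhausted her list
        s = student_prefs[i][next_choice[i]]
        next_choice[i] += 1
        held[s].append(i)
        held[s].sort(key=lambda j: rank[s][j])    # best-priority students first
        if len(held[s]) > capacities[s]:
            free.append(held[s].pop())            # reject the lowest-priority student

    matching = {i: None for i in student_prefs}
    for s, students in held.items():
        for i in students:
            matching[i] = s
    return matching
```

Because rejections are only tentative, the resulting matching is stable, and truth-telling is a dominant strategy for the proposing side; this is the strategy-proofness property contrasted with BM above.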
<p>Without monetary transfers, the concern for fairness motivates policy makers to use random assignments in object allocation problems. In Chapter 2, "Efficient and Fair Assignment Mechanism is Strongly Group Manipulable", I study group incentive compatibility in random assignment mechanisms. I show that if a mechanism satisfies the minimal efficiency requirement (ex-post efficiency), then it cannot simultaneously satisfy some mild fairness requirements and be minimally group incentive compatible: by misreporting preferences, a group of agents can obtain lotteries that strictly first-order stochastically dominate the lotteries they obtain in the truth-telling case. Hence, fairness concerns may force policy makers to give up group incentive compatibility. My results hold as long as there are at least three agents and at least three objects, regardless of whether an outside option is available. Possibility results exist when there are only two objects and no outside option is available.</p>
<p>In some object allocation problems, some players have private endowments and are willing to bring them to the market in exchange for better ones. In Chapter 3, "A New Solution to the Random Assignment Problem with Private Endowment", I propose a new mechanism to solve these problems. Intuitively, in my mechanism the popularity of a private endowment plays the role of a "price" in determining its owner's advantage in the market. Formally, the mechanism is a simultaneous eating algorithm, which generalizes Probabilistic Serial, by letting agents obtain additional eating speeds if their private endowments are consumed by others, and letting multiple agents trade their private endowments if they form cycles. This feature can be summarized by the idea of "you request my house - I get your speed". Indifferent preferences often cause difficulty in efficient random assignment mechanisms. Interestingly, I show that the same idea can also be used to deal with indifferent preferences in a straightforward way. This is in contrast to the mainstream method in the literature of iteratively solving maximum network flow problems.</p>https://resolver.caltech.edu/CaltechTHESIS:05192017-101424341Essays in Market Design
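As background for the mechanism described above, the baseline Probabilistic Serial algorithm it generalizes can be sketched as follows (a hypothetical illustration, not the thesis's code; strict preferences, unit-capacity objects, and equal eating speeds are assumed):

```python
def probabilistic_serial(prefs):
    """Baseline simultaneous eating algorithm (Bogomolnaia-Moulin).

    prefs: dict agent -> list of objects, most preferred first.
    Returns dict agent -> dict object -> probability share eaten.
    """
    remaining = {o: 1.0 for p in prefs.values() for o in p}  # supply per object
    shares = {i: {} for i in prefs}
    t = 0.0
    while t < 1.0 - 1e-12:
        # each agent eats her favorite object that still has supply left
        eating = {}
        for i, p in prefs.items():
            for o in p:
                if remaining[o] > 1e-12:
                    eating[i] = o
                    break
        if not eating:
            break
        # count how many agents eat each object
        eaters = {}
        for o in eating.values():
            eaters[o] = eaters.get(o, 0) + 1
        # advance time until some object is exhausted, or total time reaches 1
        dt = min(1.0 - t, *(remaining[o] / n for o, n in eaters.items()))
        for i, o in eating.items():
            shares[i][o] = shares[i].get(o, 0.0) + dt
        for o, n in eaters.items():
            remaining[o] -= n * dt
        t += dt
    return shares
```

In the chapter's generalization, eating speeds would additionally grow when an agent's private endowment is consumed by others; that extension is omitted in this sketch.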
https://resolver.caltech.edu/CaltechTHESIS:05312018-141046982
Year: 2018
DOI: 10.7907/PXYF-WS15
<p>This thesis investigates the impact of incomplete information and behavioral biases in the context of market design.</p>
<p>In chapter 2, I analyze centralized matching markets and rationalize why the arguably most heavily used mechanism in applications, the deferred acceptance mechanism, has been so successful in practice, despite the fact that it provides participants with opportunities to “game the system.” Accounting for the lack of information that participants typically have in these markets in practice, I introduce a new notion of behavior under uncertainty that captures participants’ aversion to experiencing regret. I show that participants optimally choose not to manipulate the deferred acceptance mechanism in order to avoid regret. Moreover, the deferred acceptance mechanism is the unique mechanism within an interesting class (quantile stable mechanisms) to induce honesty from participants in this way.</p>
<p>In chapter 3, co-authored with Leeat Yariv, we study the impacts of incomplete information on centralized one-to-one matching markets. We focus on the commonly used deferred acceptance mechanism (Gale and Shapley, 1962). We characterize settings in which many of the results known when information is complete are overturned. In particular, small (complete-information) cores may still be associated with multiple outcomes and incentives to misreport; selection of equilibria can affect the set of individuals who are unmatched—i.e., there is no analogue of the Rural Hospital Theorem; and agents might prefer to be on the receiving side of the algorithm underlying the mechanism. Nonetheless, when either side of the market has assortative preferences, incomplete information does not hinder stability, and results from the complete-information setting carry through.</p>
<p>In chapter 4, co-authored with Tatiana Mayskaya, we present a dynamic model that illustrates three forces that shape the effect of overconfidence (overprecision of consumed information) on the amount of collected information. The first force comes from overestimating the precision of the next consumed piece of information. The second force is related to overestimating the precision of already collected information. The third force reflects the discrepancy between how much information the agent expects to collect and how much information he actually collects in expectation. The first force pushes an overconfident agent to collect more information, while the second and third forces work in the other direction. We show that under some symmetry conditions, the second and third forces unequivocally dominate the first, leading to underinvestment in information.</p>https://resolver.caltech.edu/CaltechTHESIS:05312018-141046982Essays On Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:06072019-212943893
Year: 2019
DOI: 10.7907/MVE7-HP81
<p>This thesis introduces some general frameworks for studying problems in decision theory. The purpose of this dissertation is two-fold. First, I develop general mathematical frameworks and tools to explore different decision-theoretic phenomena. Second, I apply these frameworks and tools to different topics in Microeconomics and Decision Theory.</p>
<p>Chapter 1 introduces the notion of a classifier to represent the different classes of data revealed through observations. I present a general model of classification and a notion of complexity, and I show how a complicated classification procedure can be generated from simpler classification procedures.</p>
<p>My goal is to show how an individual's complex behavior can be derived from some simple underlying heuristics. In this chapter, I model a classifier (as a general model of decision making) that, based on observing some data points, classifies them into different categories with a set of different labels. The only assumption of my model is that whenever a data point is in two categories, there should be an additional category representing the intersection of the two. First, I derive a duality result similar to the duality in convex geometry. Then, using my result, I find all representations of a complex classifier as an aggregation of simpler classifiers. For example, I show how a complex classifier can be represented by simpler classifiers with only two categories (similar to a single linear classifier in a neural network). Finally, I show an application in the context of dynamic choice behavior. Notably, I use my model to reinterpret the seminal works by Kreps (1979) and Dekel, Lipman, and Rustichini (2001) on representing preference orderings over menus with a subjective state space. I also show the connection between the notion of the minimal subjective state space in economics and my proposed notion of the complexity of a classifier.</p>
<p>In Chapter 2, I provide a general characterization of recursive methods of aggregation and show that recursive aggregation lies behind many seemingly different results in economic theory. Recursivity means that the aggregate outcome of a model over two disjoint groups of features is a weighted average of the outcome of each group separately.</p>
<p>This chapter makes two contributions. The first contribution is to pin down any aggregation procedure that satisfies my definition of recursivity. The result unifies aggregation procedures across many different economic environments, showing that all of them rely on the same basic result. The second contribution is to show different extensions of the result in the context of belief formation, choice theory, and welfare economics.</p>
<p>In the context of belief formation, I model an agent who predicts the true state of nature based on observing some signals in her information structure. I interpret each subset of signals as an event in her information structure. I show that, as long as the information structure has finite cardinality, my weighted averaging axiom is the necessary and sufficient condition for the agent to behave as a Bayesian updater. This result answers the question raised by Shmaya and Yariv (2007) of finding a necessary and sufficient condition for a belief formation process to act as a Bayesian updating rule.</p>
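The weighted averaging property can be illustrated numerically: on a finite state space, the conditional expectation of a random variable over a union of two disjoint events is the probability-weighted average of the conditional expectations over each event. A minimal sketch (all probabilities and values are hypothetical, chosen only for illustration):

```python
# finite state space: state -> (probability, value of theta in that state)
space = {
    's1': (0.1, 0.0),
    's2': (0.2, 1.0),
    's3': (0.3, 2.0),
    's4': (0.4, 3.0),
}

def prob(event):
    """Probability of an event (a set of states)."""
    return sum(space[s][0] for s in event)

def cond_exp(event):
    """Conditional expectation of theta given the event."""
    return sum(space[s][0] * space[s][1] for s in event) / prob(event)

A, B = {'s1', 's2'}, {'s3', 's4'}          # two disjoint events
lhs = cond_exp(A | B)                       # expectation given the union
w = prob(A) / (prob(A) + prob(B))           # Bayesian weight on event A
rhs = w * cond_exp(A) + (1 - w) * cond_exp(B)
# lhs equals rhs: conditioning on the union averages the two conditionals
```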
<p>In the context of choice theory, I consider the standard theory of discrete choice. An agent chooses randomly from a menu. The outcome of my model is the average choice (the mean of the distribution of choices) rather than the entire distribution of choices. The average choice is easier to report and obtain than the entire distribution. However, an average choice does not uniquely reveal the underlying distribution of choices. In this context, I show that (1) it is possible to uniquely extract the underlying distribution of choices as long as the average choice satisfies the weighted averaging axiom, and (2) there is a close connection between my weighted averaging axiom and the celebrated Luce (or Logit) model of discrete choice.</p>
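The Luce (Logit) model mentioned here assigns each alternative x a positive weight w(x) and chooses x from menu M with probability w(x) divided by the total weight of M; the average choice is then the probability-weighted mean of the alternatives' positions. A small illustrative sketch (weights and positions are hypothetical):

```python
def luce_probs(weights, menu):
    """Luce choice probabilities: P(x | menu) = w(x) / sum of weights in menu."""
    total = sum(weights[x] for x in menu)
    return {x: weights[x] / total for x in menu}

def average_choice(weights, menu, positions):
    """Mean of the choice distribution when alternative x sits at positions[x]."""
    probs = luce_probs(weights, menu)
    return sum(probs[x] * positions[x] for x in menu)

weights = {'a': 1.0, 'b': 2.0, 'c': 1.0}
p_full = luce_probs(weights, {'a', 'b', 'c'})
p_sub = luce_probs(weights, {'a', 'b'})
# Luce's independence of irrelevant alternatives: the probability ratio of
# a to b is the same whether or not c is in the menu.
```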
<p>Chapter 3 is about the aggregation of the preference orderings of individuals over a set of alternatives. The role of an aggregation rule is to associate with each group of individuals another preference ordering of alternatives, representing the group's aggregated preference. I consider the class of aggregation rules satisfying the extended Pareto axiom. Extended Pareto means that whenever we partition a group of individuals into two subgroups, if both subgroups prefer one alternative over another (as indicated by their aggregated preferences), then the aggregated preference ordering of the union of the subgroups also prefers the first alternative over the second one.</p>
<p>I show that extended Pareto is equivalent to my weighted averaging axiom, and I derive a generalization of Harsanyi's (1955) famous theorem on Utilitarianism. Harsanyi considers a single profile of individuals and a variant of Pareto to obtain Utilitarianism. In my approach, however, I partition a profile into smaller groups. Then, I aggregate the preference orderings of these smaller groups using extended Pareto. Hence, I obtain Utilitarianism through this consistent form of aggregation. As a result, in my representation, the weight associated with each individual appears in all sub-profiles that contain her.</p>
<p>In another application, I find the class of extended Pareto social welfare functions. My result is a possibility result, in contrast to the claims of Kalai and Schmeidler (1977) and Hylland (1980) that the negative conclusion of Arrow's theorem holds even with vN-M preferences.</p>
<p>Finally, in Chapter 4, I derive a simple subjective conditional expectation theory of state-dependent preferences. In many applications, such as models of buying health insurance, the standard assumption that utility is independent of the state is not plausible. Hence, I derive a model in which the main force behind the separation of beliefs and state-dependent utility comes from the extended Pareto condition. Moreover, I show that, as long as the model satisfies my strong minimal agreement condition, beliefs can be uniquely separated from the state-dependent utility.</p>https://resolver.caltech.edu/CaltechTHESIS:06072019-212943893Data: Implications for Markets and for Society
https://resolver.caltech.edu/CaltechTHESIS:05292019-162418941
Year: 2019
DOI: 10.7907/XZHX-1M46
<p>Every day, massive amounts of data are gathered, exchanged, and used to run statistical computations, train machine learning algorithms, and inform decisions on individuals and populations. The rapid rise of data, the need to exchange and process it, the need to take data privacy concerns into account, and the need to understand how data affects decision-making introduce many new and interesting economic, game-theoretic, and algorithmic challenges.</p>
<p>The goal of this thesis is to provide theoretical foundations to approach these challenges. The first part of this thesis focuses on the design of mechanisms that purchase and then aggregate data from many sources in order to perform statistical tasks. The second part of this thesis revolves around the societal concerns associated with the use of individuals' data. The first such concern we examine is that of privacy when using sensitive data about individuals in statistical computations; we focus our attention on how privacy constraints interact with the task of designing mechanisms for the acquisition and aggregation of sensitive data. The second concern we focus on is that of fairness in decision-making: we aim to provide tools to society that help prevent discrimination against individuals and populations based on sensitive attributes in their data when making important decisions about them. Finally, we end this thesis with a study of the interactions between data and strategic behavior. There, we see data as a source of information that informs and affects agents' incentives; we study how information revelation impacts agent behavior in auctions, and in turn how a seller should design auctions that take such information revelation into account.</p>https://resolver.caltech.edu/CaltechTHESIS:05292019-162418941Essays on Social Learning and Networks
https://resolver.caltech.edu/CaltechTHESIS:05122020-180653707
Year: 2020
DOI: 10.7907/0m03-b330
<p>This thesis offers a contribution to the study of Social Learning and Networks. It studies information aggregation and its effect on individuals' actions (Chapters 2 and 3) and on social networks (Chapter 4).</p>
<p>Chapter 2, co-authored with Omer Tamuz and Wade Hann-Caruthers, studies how quickly the public belief converges to its true value when agents are able to observe the actions of their predecessors. In the classical herding literature, agents receive a private signal regarding a binary state of nature and sequentially choose an action after observing the actions of their predecessors. When the informativeness of private signals is unbounded, it is known that agents converge to the correct action and the correct belief. We study how quickly convergence occurs and show that it happens more slowly than it does when agents observe signals. However, we also show that the speed of learning from actions can be arbitrarily close to the speed of learning from signals. In particular, the expected time until the agents stop taking the wrong action can be either finite or infinite, depending on the private signal distribution. In the canonical case of Gaussian private signals, we calculate the speed of convergence precisely and show explicitly that, in this case, learning from actions is significantly slower than learning from signals.</p>
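A toy simulation of this canonical environment (my own illustrative sketch with Gaussian signals, not the chapter's code): each agent adds her private log-likelihood ratio to the public one, acts, and observers update the public belief by Bayes' rule from the action alone.

```python
import math
import random

def phi_cdf(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def simulate(theta=1, sigma=1.0, n=50, seed=0):
    """Sequential social learning: state theta in {+1,-1}, private signals
    x ~ N(theta, sigma^2). Returns the actions and the public LLR path."""
    rng = random.Random(seed)
    L = 0.0                                   # public log-likelihood ratio (+1 vs -1)
    actions, path = [], [L]
    for _ in range(n):
        x = rng.gauss(theta, sigma)
        private_llr = 2 * x / sigma**2        # LLR of x under N(+1,s^2) vs N(-1,s^2)
        a = 1 if L + private_llr > 0 else -1  # agent best-responds to total belief
        # Bayesian update of the public belief from the observed action:
        cut = -L * sigma**2 / 2               # agent plays +1 iff x > cut
        p_plus = 1 - phi_cdf((cut - 1) / sigma)   # P(a = +1 | theta = +1)
        p_minus = 1 - phi_cdf((cut + 1) / sigma)  # P(a = +1 | theta = -1)
        if a == 1:
            L += math.log(p_plus / p_minus)
        else:
            L += math.log((1 - p_plus) / (1 - p_minus))
        actions.append(a)
        path.append(L)
    return actions, path
```

As the public belief strengthens, the cutoff moves and actions become less informative, which is the mechanism behind the slow learning from actions discussed above.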
<p>In Chapter 3, I investigate how social planning can reduce the inefficiencies of social learning stemming from herding and informational cascades. A social planner is introduced into the classical sequential social learning model. She can tax or subsidize players' actions in order to maximize social welfare, a discounted sum of agents' utilities. We solve, or accurately approximate, the expected utility of the social planner and the optimal pricing strategy for various signal distributions. In equilibrium, it is optimal to increase the price for the better action, causing a reduction in the current agent's utility but also a net gain due to the information this action reveals. The addition of the social planner significantly improves social welfare and the asymptotic speed of learning.</p>
<p>Chapter 4 analyzes how different types of social connections between people shape their social networks. There are two possible types of ties between individuals, strong and weak, which differ in maintenance costs and reliability. A network formation game is played in which agents choose the number of ties of each type to maximize their chances of hearing about a new job opportunity. We find that in equilibrium people maintain both types of connections, a feature that previous theoretical models did not explain. Furthermore, the socially optimal symmetric network contains more strong ties than the equilibrium one.</p>https://resolver.caltech.edu/CaltechTHESIS:05122020-180653707Essays on Market Design and Industrial Organization
https://resolver.caltech.edu/CaltechTHESIS:05192020-083203607
Year: 2020
DOI: 10.7907/5tfa-3987
<p>This dissertation contains three essays. They offer contributions to the study of matching in foster care (Chapters 1 and 2), and to the study of the effect of product market competition on managerial incentives (Chapter 3).</p>
<p>Chapter 1 presents an empirical framework to study the assignment of children into foster homes and its implications on placement outcomes. The empirical application uses a novel dataset of confidential foster care records from Los Angeles County, California. The estimates of the empirical model are used to examine policy interventions aimed at improving placement outcomes by increasing market thickness. If placements were assigned across all the administrative regions of the county, the model predicts that (i) the average number of foster homes children go through before exiting foster care would decrease by 8% and (ii) the distance between foster homes and children’s schools would be reduced by 54%.</p>
<p>Chapter 2 proposes and studies a dynamic model of centralized matching in foster care. The optimal matching policy is characterized by minimizing the number of children who remain unmatched in every period. The main finding is that the optimal matching policy gives priority to younger children. The model captures several dynamic trade-offs, most notably between children’s ages and the heterogeneity in the expected duration of placements. I also analyze federal data from the Adoption and Foster Care Analysis and Reporting System (AFCARS). I find that, in Los Angeles County, placements and their durations are strongly correlated with the race of children and their foster parents.</p>
<p>Chapter 3, co-authored with Kaniṣka Dam, develops an incentive contracting model under oligopolistic competition to study how incumbent firms adjust managerial incentives following deregulation policies that enhance competition. We show that firms elicit higher managerial effort by offering stronger incentives as an optimal response to entry, as long as incumbent firms act as production leaders. Our model draws a link between an industry-specific feature, the time needed to build production capacity, and the effect that product market competition has on executive compensation. We offer new testable implications regarding how this industry-specific feature shapes the incentive structure of executive pay.</p>https://resolver.caltech.edu/CaltechTHESIS:05192020-083203607Essays on Social Learning and Networks
https://resolver.caltech.edu/CaltechTHESIS:05122020-180653707
Year: 2020
DOI: 10.7907/0m03-b330
<p>This thesis offers a contribution to the study of social learning and networks. It studies information aggregation and its effect on individuals' actions (Chapters 2 and 3) and on social networks (Chapter 4).</p>
<p>Chapter 2, co-authored with Omer Tamuz and Wade Hann-Caruthers, studies how quickly the public belief converges to its true value when agents are able to observe the actions of their predecessors. In the classical herding literature, agents receive a private signal regarding a binary state of nature and sequentially choose an action after observing the actions of their predecessors. When the informativeness of private signals is unbounded, it is known that agents converge to the correct action and correct belief. We study how quickly convergence occurs, and show that it happens more slowly than it does when agents observe signals. However, we also show that the speed of learning from actions can be arbitrarily close to the speed of learning from signals. In particular, the expected time until the agents stop taking the wrong action can be either finite or infinite, depending on the private signal distribution. In the canonical case of Gaussian private signals, we calculate the speed of convergence precisely, and show explicitly that, in this case, learning from actions is significantly slower than learning from signals.</p>
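As a rough illustration of the Gaussian case described above, the sketch below simulates myopic Bayesian agents acting in sequence while an outside observer updates the public log-likelihood ratio from actions alone. This is not the chapter's code: the parameterization (state means ±1, noise level `sigma`) and all names are assumptions made for the sketch.

```python
import math
import random

def phi_cdf(x):
    # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def simulate(theta=1, sigma=1.0, n=200, seed=0):
    """Sequential herding with Gaussian signals N(theta, sigma^2), theta in {+1, -1}.

    Each agent sees the public log-likelihood ratio L (built from past actions
    only) plus a private signal, and takes the myopically optimal action.
    """
    rng = random.Random(seed)
    L = 0.0  # public log-likelihood ratio for theta = +1 vs theta = -1
    actions = []
    for _ in range(n):
        s = rng.gauss(theta, sigma)
        private = 2 * s / sigma ** 2        # LLR of a N(+/-1, sigma^2) signal
        a = 1 if L + private > 0 else -1    # myopic Bayesian action
        actions.append(a)
        # Observers update the public belief from the action alone. The action
        # reveals only whether the signal crossed the threshold t:
        t = -L * sigma ** 2 / 2
        p_plus = 1 - phi_cdf((t - 1) / sigma)   # P(a = +1 | theta = +1)
        p_minus = 1 - phi_cdf((t + 1) / sigma)  # P(a = +1 | theta = -1)
        if a == 1:
            L += math.log(p_plus / p_minus)
        else:
            L += math.log((1 - p_plus) / (1 - p_minus))
    return actions, L
```

Because each action reveals only a coarsening of the private signal (which side of a threshold it fell on), the public log-likelihood ratio in this simulation grows much more slowly than it would if the signals themselves were public, in line with the chapter's result.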
<p>In Chapter 3, I investigate how social planning can reduce the inefficiencies of social learning stemming from herding and informational cascades. A social planner is introduced to the classical sequential social learning model. She can tax or subsidize players' actions in order to maximize social welfare, a discounted sum of agents' utilities. I solve or accurately approximate the expected utility of the social planner and the optimal pricing strategy for various signal distributions. In equilibrium, it is optimal to increase the price for the better action, causing a reduction in the current agent's utility but a net gain overall, due to the information this action reveals. The addition of the social planner significantly improves social welfare and the asymptotic speed of learning.</p>
<p>Chapter 4 analyzes how different types of social connections between people shape their social networks. There are two possible types of ties between individuals, strong and weak, which differ in maintenance costs and reliability. A network formation game is played in which agents choose the number of ties of each type to maximize their chances of hearing about a new job opportunity. We find that in equilibrium people maintain both types of connections, a pattern that previous theoretical models did not explain. Furthermore, the socially optimal symmetric network contains more strong ties than the equilibrium one.</p>
https://resolver.caltech.edu/CaltechTHESIS:05122020-180653707
Essays in Mechanism Design and Contest Theory
https://resolver.caltech.edu/CaltechTHESIS:05292023-003412487
Year: 2023
DOI: 10.7907/97qy-1m35
<p>This dissertation contains three essays. They offer contributions to the fields of mechanism design (Chapters 1 and 2) and contest theory (Chapter 3).</p>
<p>Chapter 1, co-authored with Wade Hann-Caruthers, studies the problem of aggregating privately-held preferences for a facility to be located on a plane. We show that for a large class of social cost functions, the mechanism that locates the facility at the coordinate-wise median of the agents’ ideal points is quantitatively optimal (in the sense that it has the smallest worst-case approximation ratio) among all deterministic, anonymous, and incentive-compatible mechanisms. We also obtain bounds on the worst-case approximation ratio of the coordinate-wise median mechanism for an important subclass of social cost functions.</p>
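The coordinate-wise median mechanism itself is simple to state; the following sketch (illustrative, not taken from the thesis) computes it for reported ideal points on the plane. The function name is hypothetical, and for an even number of agents this version takes the upper median in each coordinate.

```python
def coordinate_wise_median(points):
    """Locate the facility at the per-coordinate median of reported ideal points.

    Because the x- and y-coordinates are medianized independently, no agent can
    move the outcome toward its ideal point by misreporting, which is what makes
    the mechanism incentive-compatible.
    """
    xs = sorted(p[0] for p in points)
    ys = sorted(p[1] for p in points)
    m = len(points) // 2  # upper median when the number of agents is even
    return (xs[m], ys[m])
```

For example, with ideal points (0, 0), (1, 5), and (2, 1), the facility is placed at (1, 1): the median x is 1 and the median y is 1, even though no agent reported that point.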
<p>Chapter 2, co-authored with Wade Hann-Caruthers, studies a principal-agent project selection problem with asymmetric information and demonstrates the value for the principal of inducing partial verifiability constraints, such as no-overselling, on the agent. We consider a setting where the principal has to choose one among a set of available projects but the relevant information, such as each project's profitability, is held by a self-interested agent who might also have its own preference over the projects. If the agent is unconstrained in its ability to manipulate its private information, the principal can do no better than randomly choosing a project. But if the agent cannot oversell any of the projects, perhaps because it must support its claims with evidence, we show that a simple cutoff mechanism (the agent's favorite project is chosen among those that meet a cutoff profit level and a default project) is optimal for the principal. We also find evidence in support of the well-known ally principle, which says that the principal delegates more authority to an agent with more aligned preferences.</p>
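The cutoff mechanism described above can be sketched as follows. This is an illustrative reading of the abstract, not the chapter's construction: the function name, the shape of the profit reports, and the default project are all hypothetical.

```python
def cutoff_mechanism(reported_profits, agent_ranking, cutoff, default):
    """Pick the agent's favorite project among those reported at or above the
    cutoff profit level; if none qualifies, fall back to the default project.

    reported_profits: dict mapping project -> reported profit
    agent_ranking: the agent's projects, ordered from most to least preferred
    """
    eligible = {p for p, v in reported_profits.items() if v >= cutoff}
    if not eligible:
        return default
    # The agent gets its top choice among the projects that cleared the bar.
    for p in agent_ranking:
        if p in eligible:
            return p
    return default
```

The no-overselling constraint matters here: since the agent cannot inflate a project's reported profit, the only way a project clears the cutoff is by genuinely being profitable enough, which is what lets such a simple rule be optimal for the principal.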
<p>Chapter 3 studies the effect of increasing the value of prizes and competitiveness of contests on the effort exerted by participants in an incomplete information environment. We identify two natural sufficient conditions on the distribution of abilities in the population under which the interventions have opposite effects on effort. We also discuss applications to the design of optimal contests in three different environments, including the design of grading contests. Assuming that the value of a grade is determined by the information it reveals about the agent's ability, we establish a link between the informativeness of a grading scheme and the effort induced by it.</p>
https://resolver.caltech.edu/CaltechTHESIS:05292023-003412487
Essays on Rational Social Learning
https://resolver.caltech.edu/CaltechTHESIS:05132024-181232920
Year: 2024
DOI: 10.7907/p1xt-ys43
<p>This dissertation contains three essays, each contributing to the study of social learning among rational agents in various contexts.</p>
<p>In Chapter 1, I study whether individuals can learn the informativeness of their information technology through social learning. Building on the classic sequential social learning model, I introduce the possibility that a common source is completely uninformative. I then define asymptotic learning as the situation in which an outsider, who observes the actions of all agents, eventually distinguishes between uninformative and informative sources. I show that asymptotic learning in this setting is not guaranteed; it depends crucially on the relative tail distributions of private beliefs induced by uninformative and informative signals. Furthermore, I identify the phenomenon of perpetual disagreement as the cause of learning and provide a characterization of learning in the canonical Gaussian environment.</p>
<p>In Chapter 2, co-authored with Omer Tamuz and Philipp Strack, we study the asymptotic rate at which the probability of a group of long-lived, rational agents in a social network taking the correct action converges to one. In every period, after observing the past actions of his neighbors, each agent receives a private signal, and chooses an action whose payoff depends only on the state. Since equilibrium actions depend on higher-order beliefs, characterizing agents' behavior becomes difficult. Nevertheless, we show that the rate of learning in any equilibrium is bounded from above by a constant, regardless of the size and shape of the network, the utility function, and the patience of the agents. This bound only depends on the private signal distribution.</p>
<p>In Chapter 3, I study how fads emerge from social learning in a changing environment. I consider a simple sequential learning model in which rational agents arrive in order, each acting only once, and the underlying unknown state is constantly evolving. Each agent receives a private signal, observes all past actions of others, and chooses an action to match the current state. Because the state changes over time, cascades cannot last forever, and actions also fluctuate. I show that despite the rise of temporary information cascades, in the long run, actions change more often than the state. This provides a theoretical foundation for faddish behavior in which people often change their actions more frequently than necessary.</p>
https://resolver.caltech.edu/CaltechTHESIS:05132024-181232920