CaltechTHESIS committee: Monograph
https://feeds.library.caltech.edu/people/Cvitanić-J/combined_committee.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
python-feedgen
en
Mon, 11 Nov 2024 06:57:46 -0800

Filtering, Stability, and Robustness
https://resolver.caltech.edu/CaltechETD:etd-12122006-164640
Year: 2007
DOI: 10.7907/4p53-1h42
<p>The theory of nonlinear filtering concerns the optimal estimation of a Markov signal in noisy observations. Such estimates necessarily depend on the model that is chosen for the signal and observations processes. This thesis studies the sensitivity of the filter to the choice of underlying model over long periods of time, within the framework of continuous time filtering with white noise type observations.</p>
<p>The first topic of this thesis is the asymptotic stability of the filter, which is studied using the theory of conditional diffusions. This leads to improvements on pathwise stability bounds, and to new insight into existing stability results in a fully probabilistic setting. Furthermore, I develop in detail the theory of conditional diffusions for finite-state Markov signals and clarify the duality between estimation and stochastic control in this context.</p>
<p>The second topic of this thesis is the sensitivity of the nonlinear filter to the model parameters of the signal and observations processes. This section concentrates on the finite state case, where the corresponding model parameters are the jump rates of the signal, the observation function, and the initial measure. The main result is that the expected difference between the filters with the true and modified model parameters is bounded uniformly on the infinite time interval, provided that the signal process satisfies a mixing property. The proof uses properties of the stochastic flow generated by the filter on the simplex, as well as the Malliavin calculus and anticipative stochastic calculus.</p>
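The mechanics behind this robustness result can be illustrated with a crude discrete-time sketch of a two-state filter run under the true model and under a perturbed one. The rate matrices, observation function, noise level, initial measures, and time step below are all illustrative assumptions, and the Euler scheme only approximates the continuous-time filter studied in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state signal: jump-rate matrix Q, observation function h,
# observation noise sigma. All values are assumptions, not the thesis's.
Q_true = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q_wrong = np.array([[-1.5, 1.5], [1.5, -1.5]])   # misspecified jump rates
h = np.array([0.0, 1.0])
dt, steps, sigma = 0.01, 2000, 0.5

def filter_step(pi, Q, dy):
    # Euler step of a Wonham-type filter: predict with the signal generator,
    # reweight by the observation likelihood, renormalize onto the simplex.
    pred = pi + (pi @ Q) * dt
    lik = np.exp(h * dy / sigma**2 - 0.5 * (h**2) * dt / sigma**2)
    post = pred * lik
    return post / post.sum()

x = 0
pi_true = np.array([0.5, 0.5])
pi_wrong = np.array([0.9, 0.1])      # wrong initial measure as well
gap = []
for _ in range(steps):
    if rng.random() < -Q_true[x, x] * dt:   # signal jumps w.p. ~ rate * dt
        x = 1 - x
    dy = h[x] * dt + sigma * np.sqrt(dt) * rng.normal()
    pi_true = filter_step(pi_true, Q_true, dy)
    pi_wrong = filter_step(pi_wrong, Q_wrong, dy)
    gap.append(abs(pi_true[1] - pi_wrong[1]))
# Under mixing, `gap` should stay bounded rather than accumulate over time.
```
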
<p>The third and final topic of this thesis is the asymptotic stability of quantum filters. I begin by developing quantum filtering theory using reference probability methods. The stability of the resulting filters is not easily studied using the preceding methods, as smoothing violates the nondemolition requirement. Fortunately, progress can be made by randomizing the initial state of the filter. Using this technique, I prove that the filtered estimate of the measurement observable is stable regardless of the underlying model, provided that the initial states are absolutely continuous in a suitable sense.</p>
https://resolver.caltech.edu/CaltechETD:etd-12122006-164640

Organizational and Financial Economics
https://resolver.caltech.edu/CaltechETD:etd-05292009-150803
Year: 2009
DOI: 10.7907/B75A-MW79
<p>We investigate behaviors in organizational and financial economics by utilizing and developing the latest techniques from game theory, experimental economics, computational testbeds, and decision-making under risk and uncertainty.</p>
<p>In the first chapter, we use game theory and experimental economics approaches to analyze the relationship between corporate culture and the persistent performance differences among seemingly similar enterprises. First, we show that competition leads to higher minimum effort levels in the minimum effort coordination game. Furthermore, we show that organizations with better coordination also achieve higher rates of cooperation in the prisoner's dilemma game. This supports the theory that the high-efficiency culture developed in coordination games acts as a focal point for the outcome of the subsequent prisoner's dilemma game. In turn, we argue that these endogenous features of culture, developed through coordination and cooperation, can help explain the persistent performance differences.</p>
<p>In the second chapter, using a computational testbed, we theoretically predict and experimentally show that in the minimum effort coordination game, as the cost of effort increases: 1. the game converges to lower effort levels, 2. the speed of convergence increases, and 3. the average payoff is not monotonically decreasing. In fact, the average profit is a U-shaped curve as a function of cost. Therefore, contrary to intuition, one can obtain a higher average profit by increasing the cost of effort.</p>
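The cost trade-off can be seen in the stage game itself. A minimal sketch with an assumed Van Huyck-style payoff function (the chapter's actual parameters and payoffs may differ): every common effort level is an equilibrium, higher coordinated effort pays more, and raising the cost `c` flattens that payoff slope while, in the experiments, also speeding convergence.

```python
# Stage-game payoffs for a minimum effort coordination game, in the spirit of
# Van Huyck et al.; the parameters a, b, c are illustrative, not the chapter's.
def payoff(own_effort, all_efforts, a=0.2, b=0.6, c=0.1):
    return a * min(all_efforts) - c * own_effort + b

# Any common effort level e is a Nash equilibrium, paying (a - c) * e + b,
# so coordinating on higher effort pays more, while a unilateral effort
# increase above the group minimum only adds cost.
print(payoff(5, [5, 5, 5]))   # all coordinated on high effort
print(payoff(7, [7, 5, 5]))   # one player exerts extra, wasted effort
```
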
<p>In the last chapter, we investigate a well-known paradox in finance. The equity market home bias occurs when investors over-invest in their home country's assets. It is a paradox because these investors are not hedging their risk optimally: even with unrealistic levels of risk aversion, the home bias cannot be explained by the standard mean-variance model. We propose ambiguity aversion as a behavioral explanation. We design six experiments using real-world assets and derivatives to show the relationship between ambiguity aversion and home bias, testing for ambiguity aversion by showing that investors' subjective probabilities are sub-additive. The experimental results support the assertion that ambiguity aversion is related to the equity market home bias paradox.</p>
https://resolver.caltech.edu/CaltechETD:etd-05292009-150803

Continuous Double Auctions and Microstructure
https://resolver.caltech.edu/CaltechETD:etd-05282009-103105
Year: 2009
DOI: 10.7907/QX5Q-1F94
<p>Chapter One focuses on the movement of quote prices and the role of asymmetric information. Standard methods of estimating the impact of order flow shocks are made inappropriate by the existence of runs in trade initiation, which are theoretically impossible. We find that runs in trade initiation persist even after accounting for standard explanations. The chapter modifies the methodology of Huang and Stoll (1997) to use runs in trade initiation to account for the phenomenon, and estimates the effects using ASX data.</p>
<p>Chapter Two introduces a new experimental environment in which the market is continuously shocked by new traders’ incentives. The new environment joins two branches of theory. Classical economic theory has prices determined by the preferences of agents, but says little about the price formation process. The second theory is derived from finance in which prices are determined by the order flow coming to the market, but there is no connection between order flow and preferences.</p>
<p>We show that in such markets, two competing generalizations of the Walrasian equilibrium exist, corresponding to these competing literatures, each with an independent pull on market prices. Prices and efficiencies reveal a strong role of expectations in price discovery and reject the idea that convergence is due to random or zero-intelligence trading strategies alone.</p>
<p>Chapter Three continues the analysis of Chapter Two by asking how the process of equilibration occurs in random arrival markets. We find that prices move in proportion to the distance to the temporal equilibrium, and show that this model’s predictive power is due to Marshallian features of the trading process, as opposed to the classical Walrasian adjustment model.</p>
<p>Chapter Four studies a random arrival environment in which some traders have asymmetric information regarding the distribution of latent incentives and arrival rates. We find that much of the insiders’ information is diffused as theory suggests, and that much of it is incorporated in outsiders’ market actions. This diffusion of information is not a result of cumulative signed order flow, but is instead related to the observable rate of aggregate speculation. The ultimate implications of this phenomenon remain unknown.</p>
https://resolver.caltech.edu/CaltechETD:etd-05282009-103105

Credit Risk and Nonlinear Filtering: Computational Aspects and Empirical Evidence
https://resolver.caltech.edu/CaltechETD:etd-05272009-141742
Year: 2009
DOI: 10.7907/7XV3-9Q45
<p>This thesis proposes a novel credit risk model which deals with incomplete information on the firm's asset value. Such incompleteness is due to reporting bias deliberately introduced by insider managers and executives of the firm and unobserved by outsiders.</p>
<p>The pricing of corporate securities and the evaluation of default measures in our credit risk framework require the solution of a nonlinear filtering problem for the optimal probability density of the firm's asset value, which is computationally infeasible. We propose a polynomial-time sequential Bayesian approximation scheme which employs convex optimization methods to iteratively approximate the optimal conditional density of the state on the basis of received market observations. We also provide an upper bound on the total variation distance between the actual filter density and our approximate estimator. We use the filter estimator to derive analytical expressions for the prices of corporate securities (bonds and equity) as well as for default measures (default probabilities, recovery rates, and credit spreads) under our credit risk framework. We propose a novel statistical calibration method to recover the parameters of our credit risk model from market prices of equity and balance sheet indicators. We apply the method to the Parmalat case, a real case of misreporting, and show that the model successfully isolates the misreporting component. We also provide empirical evidence that the term structure of credit default swap quotes exhibits special patterns in cases of misreporting, using three well-known cases of accounting irregularities in US history: Tyco, Enron, and WorldCom.</p>
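A toy stand-in conveys the sequential-update idea: track the conditional density of a latent (log) asset value on a fixed grid and apply Bayes' rule report by report, where each report is deliberately biased. The grid discretization replaces the thesis's convex-optimization projections and carries no total variation guarantee; the grid, bias, noise level, and dynamics-free state are all assumptions.

```python
import numpy as np

# Sequential Bayes updates of the conditional density of a latent log-asset
# value V, observed only through biased, noisy reports R = V + bias + noise.
# Grid size, bias, and noise level are illustrative assumptions.
grid = np.linspace(-3.0, 3.0, 301)
density = np.full(grid.size, 1.0 / grid.size)   # flat prior on the grid

def bayes_step(density, report, bias=0.3, obs_sd=0.4):
    # Gaussian likelihood of the biased report, then renormalize.
    lik = np.exp(-0.5 * ((report - grid - bias) / obs_sd) ** 2)
    post = density * lik
    return post / post.sum()

rng = np.random.default_rng(1)
true_v = 0.7
for _ in range(50):
    report = true_v + 0.3 + 0.4 * rng.normal()
    density = bayes_step(density, report)

posterior_mean = float(grid @ density)   # concentrates near the true value
```
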
<p>We conclude the thesis with a study of bilateral credit risk, which accommodates the case in which both parties to a financial contract may default on their payments. We introduce a general arbitrage-free valuation framework for counterparty risk adjustments in the presence of bilateral default risk. We illustrate the symmetry in the valuation and show that the adjustment involves a long position in a put option plus a short position in a call option, both with zero strike and written on the residual net value of the contract at the relevant default times. We allow for correlation between the default times of each party to the contract and the underlying portfolio risk factors. We introduce stochastic intensity models and a trivariate copula function on the exponential variables of the default times to model default dependence. We provide evidence that both default correlation and credit spread volatilities have a relevant and structured impact on the adjustment. We also study a case involving British Airways, Lehman Brothers, and Royal Dutch Shell, illustrating the bilateral adjustments in concrete crisis situations.</p>
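The put/call structure of the adjustment can be sketched by Monte Carlo: a long zero-strike put and a short zero-strike call on the residual net contract value at the first default time. Constant intensities, independent default times, a unit-normal exposure, and the loss-given-default value are placeholder assumptions; the thesis instead uses stochastic intensities and a trivariate copula for default dependence.

```python
import numpy as np

rng = np.random.default_rng(2)
n, horizon = 100_000, 5.0
lam_cpty, lam_own, lgd = 0.03, 0.02, 0.6   # assumed intensities and LGD

tau_c = rng.exponential(1.0 / lam_cpty, n)   # counterparty default time
tau_i = rng.exponential(1.0 / lam_own, n)    # investor's own default time
v = rng.normal(0.0, 1.0, n)                  # residual net value at first default

# CVA: loss when the counterparty defaults first and the contract is in our
# favor (zero-strike call leg); DVA: the mirror-image own-default leg
# (zero-strike put leg on -v).
cva = lgd * np.mean(((tau_c < tau_i) & (tau_c < horizon)) * np.maximum(v, 0.0))
dva = lgd * np.mean(((tau_i < tau_c) & (tau_i < horizon)) * np.maximum(-v, 0.0))
bilateral_adjustment = dva - cva   # added to the default-free price
```

With the counterparty's intensity above the investor's and a symmetric exposure, the CVA leg dominates and the bilateral adjustment is negative, reflecting a net charge to the investor.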
https://resolver.caltech.edu/CaltechETD:etd-05272009-141742

Contracts and Markets
https://resolver.caltech.edu/CaltechTHESIS:05282010-090118586
Year: 2010
DOI: 10.7907/FBS0-F288
I merge the standard principal-agent model with a CAPM-type financial market to study the interactions of contracts and financial markets. I prove existence of equilibrium in two models: a more general economy allowing for hidden type and hidden action under generic mean-variance preferences, and a hidden action economy with Markowitz mean-variance preferences. I study economies in which markets have an insurance effect on compensation contracts. I give sufficient conditions for lower variance to obtain in large economies, even with asymmetric information, and in this context I show the effect of market size on efficiency. I also study moral hazard economies, for which I prove existence of a unique pure-strategy equilibrium, and I show that financial markets negatively affect the equilibrium returns of firms. In the final chapter I study the efficiency of securities issued under symmetric information. I find that small markets and low correlation of firms' returns generate inefficiency. I also show that the assumption of symmetry or independence is crucial to obtaining the insurance results of the previous chapters.
https://resolver.caltech.edu/CaltechTHESIS:05282010-090118586

Markets and Microstructure
https://resolver.caltech.edu/CaltechTHESIS:04262013-021647332
Year: 2013
DOI: 10.7907/AQV1-S968
<p>This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game-theoretic in nature, using a novel three-player ultimatum structure. Finally, I develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.</p>
<p>In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.</p>
<p>In chapter three, I examine Community-Driven Development (CDD), a tool intended to empower communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches the targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies claiming otherwise.</p>
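A stylized backward induction illustrates how elite capture and successful delivery can coexist: the recipient's whistle-blowing threat destroys all payoffs, so upstream players pass down just enough to deter it. The thresholds, the outside option, and the exact protocol below are illustrative, not the chapter's game.

```python
# Stylized backward induction for a three-player sequential split of one
# dollar of aid. Player 3 (the targeted recipient) can blow the whistle,
# wiping out all payoffs, unless her share meets `whistle_threshold`;
# player 2 participates only if she clears `outside_option`.
def backward_induction(total=1.0, whistle_threshold=0.1, outside_option=0.15):
    # Player 1 passes down the minimum that sustains the chain; player 2
    # then gives player 3 just enough to deter whistle-blowing.
    passed = whistle_threshold + outside_option
    share3 = whistle_threshold
    share2 = passed - share3
    share1 = total - passed          # the elite's residual claim
    return share1, share2, share3

s1, s2, s3 = backward_induction()
# Elite capture coexists with delivery: player 1 keeps most of the dollar,
# yet a positive amount still reaches the targeted recipient.
```
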
<p>In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.</p>
https://resolver.caltech.edu/CaltechTHESIS:04262013-021647332

Essays in Behavioral Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:05292015-095332979
Year: 2015
DOI: 10.7907/Z9D21VJH
<p>This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and often exhibit aversion to ambiguity. The aim of this work is to develop simple models that capture observed biases and study their economic implications.</p>
<p>In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.</p>
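The second special case above can be sketched in a few lines. The value-maximizing belief is crudely taken here as a point mass on the state where the past action pays best; the paper's construction is more careful, and `alpha`, the payoffs, and the signal are all illustrative.

```python
import numpy as np

# Dissonant posterior: a convex combination of the Bayesian posterior and the
# belief maximizing the conditional value of the past action (here, a point
# mass on the action's best state, a rough stand-in for the paper's object).
def dissonant_update(prior, likelihood, action_payoffs, alpha=0.3):
    bayes = prior * likelihood
    bayes = bayes / bayes.sum()
    value_max = np.zeros_like(bayes)
    value_max[np.argmax(action_payoffs)] = 1.0
    return (1.0 - alpha) * bayes + alpha * value_max

prior = np.array([0.5, 0.5])
likelihood = np.array([0.2, 0.8])   # the signal favors state 1...
payoffs = np.array([1.0, 0.0])      # ...but the past action pays in state 0
posterior = dissonant_update(prior, likelihood, payoffs)
# The agent overweights state 0 relative to the Bayesian posterior (0.2, 0.8).
```
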
<p>The second chapter characterizes a decision maker with sticky beliefs, that is, one who does not update as much in response to information as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.</p>
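The representation itself is one line: shrink the Bayesian posterior back toward the prior, with the weight on the prior read as the stickiness parameter. The numbers below are illustrative.

```python
import numpy as np

# Sticky-beliefs update: a convex combination of the prior and the Bayesian
# posterior; `stickiness` is the conservatism parameter the chapter identifies
# from preferences (its value here is arbitrary).
def sticky_update(prior, likelihood, stickiness=0.4):
    bayes = prior * likelihood
    bayes = bayes / bayes.sum()
    return stickiness * prior + (1.0 - stickiness) * bayes

prior = np.array([0.5, 0.5])
likelihood = np.array([0.9, 0.1])   # a strong signal for state 0
posterior = sticky_update(prior, likelihood)
# Lies strictly between the prior (0.5) and the Bayesian posterior (0.9).
```
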
<p>The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as one that chooses a threshold that is used to determine whether a prior is plausible, given some observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.</p>
https://resolver.caltech.edu/CaltechTHESIS:05292015-095332979

Essays in Revealed Preference Theory and Behavioral Economics
https://resolver.caltech.edu/CaltechTHESIS:01072016-175429851
Year: 2016
DOI: 10.7907/Z9X63JT5
Time, risk, and attention are all integral to economic decision making. The aim of this work is to understand those key components of decision making using a variety of approaches: providing axiomatic characterizations to investigate time discounting, generating measures of visual attention to infer consumers' intentions, and examining data from unique field settings.
<br/><br/>
Chapter 2, co-authored with Federico Echenique and Kota Saito, presents the first revealed-preference characterizations of the exponentially discounted utility model and its generalizations. My characterizations provide non-parametric revealed-preference tests. I apply the tests to data from a recent experiment and find that the axiomatization delivers new insights on a dataset that had previously been analyzed by traditional parametric methods.
<br/><br/>
Chapter 3, co-authored with Min Jeong Kang and Colin Camerer, investigates whether "pre-choice" measures of visual attention improve the prediction of consumers' purchase intentions. We measure participants' visual attention using eyetracking or mousetracking while they make hypothetical as well as real purchase decisions. I find that different patterns of visual attention are associated with hypothetical and real decisions. I then demonstrate that including information on visual attention improves the prediction of purchase decisions when attention is measured with mousetracking.
<br/><br/>
Chapter 4 investigates individuals' attitudes towards risk in a high-stakes environment using data from the TV game show Jeopardy!. I first quantify players' subjective beliefs about answering questions correctly. Using those beliefs in estimation, I find that the representative player is risk averse. I then find that trailing players tend to wager more than the "folk" strategies known among the community of contestants and fans would suggest, and that this tendency is related to their confidence. I also find gender differences: male players take more risk than female players, and even more so when they are competing against two other male players.
<br/><br/>
Chapter 5, co-authored with Colin Camerer, investigates the dynamics of the favorite-longshot bias (FLB) using data on horse race betting from an online exchange that allows bettors to trade "in-play." I find that probabilistic forecasts implied by market prices before the start of the races are well calibrated, but the degree of FLB increases significantly as the events approach their end.
https://resolver.caltech.edu/CaltechTHESIS:01072016-175429851

Essays on the Impact of Information Asymmetry
https://resolver.caltech.edu/CaltechTHESIS:06042017-015517588
Year: 2017
DOI: 10.7907/Z9571925
<p>This dissertation consists of three essays focusing on how information asymmetry affects agents’ behavior across different environments. The first essay characterizes the optimal contract when a firm can employ two incentive schemes, promotion and pay for performance, simultaneously (Chapter 2). In the second essay, I study how information asymmetry can lead a firm to choose a less profitable short-term project over a more profitable long-term one (Chapter 3). The third essay analyzes a career choice problem when agents have private information about their ability (Chapter 4).</p>
<p>Chapter 2 examines the effect of information asymmetry on executive pay structure in order to explain the rise in CEO compensation and the wage inequality between the CEO and other executives. To analyze the effect of the interaction of two incentive schemes, promotion and pay for performance, on CEO compensation and within-firm wage inequality, I embed a pay-for-performance framework into a tournament structure. The model shows that when the CEO and managers contribute to a firm’s output independently, it is optimal for the firm to pay the CEO far beyond her reservation value in order to provide promotion incentives for managers. However, I find that the promotion incentive motive can disappear if there is interdependency between the CEO’s and managers’ outputs. In this case, the main purpose of high CEO compensation is to induce the CEO to exert effort. The tension between incentives for the CEO and managers makes it difficult to interpret the within-firm wage gap. As a possible solution, this chapter suggests using the CEO’s base salary to identify which incentive factor is driving the pay gap.</p>
<p>In Chapter 3, I study the optimal contract problem when a firm faces a long-term project. I consider a long-term project as one that requires an indefinite amount of time to complete its objective, and I assume that it generates profits once it is accomplished. Using a continuous-time moral hazard model, I characterize the incentive compatibility condition in a relatively general contracting space. Moreover, I find a unique optimal contract under a restricted contracting space consisting of two components: the termination level and the completion payment. The firm might instead invest in a short-term project, one that generates an instantaneous profit without any effect on the future, as analyzed by DeMarzo and Sannikov (2006). Comparison of the optimal contracts for long- and short-term projects provides an interesting insight into managerial short-termism: the firm, not the agent, could prefer a short-term project to a long-term project if there is a moral hazard problem.</p>
<p>Chapter 4 analyzes the role of asymmetric information in career choice. I examine how people choose their careers when they do not know the ability of the rest of the applicant pool. The goal is to understand labor supply in markets where ability is widely distributed. In particular, I consider a situation where there are two exclusive labor markets and the upper and lower bounds of one market’s payoffs are both higher than those of the other market. In this setting, agents decide which market to participate in. I find that the symmetric Bayesian Nash equilibrium of this problem is unique. In equilibrium, agents are divided into two groups according to their ability. Members of the high-ability group use a pure strategy and apply only to the more desirable market. Members of the low-ability group apply to both markets with positive probability.</p>
https://resolver.caltech.edu/CaltechTHESIS:06042017-015517588Essays on Information Collection
https://resolver.caltech.edu/CaltechTHESIS:05312017-141442186
Year: 2017
DOI: 10.7907/Z9DV1GWC
<p>This thesis is devoted to the problem of information collection from theoretical and experimental perspectives.</p>
<p>In Chapter 2, I characterize the unique optimal learning strategy when there are two information sources, three possible states of the world, and learning is modeled as a search process. The optimal strategy consists of two phases. During the first phase, only beliefs about the state and the quality of information sources matter for the optimal choice between these sources. During the second phase, this choice also depends on how much the agent values different types of information. The information sources are substitutes when each individual source is likely to reveal the state eventually, and they are complements otherwise.</p>
<p>In Chapter 3, co-authored with Li Song, we conducted an experiment which demonstrates that, even in a simple four-person circle network, people appear to fail to account for possible repetition of the information they receive. Moreover, we show that this phenomenon can be partially attributed to rational considerations that take into account other people’s deviations from optimal behavior.</p>
<p>In Chapter 4, co-authored with Marcelo A. Fernández, we model overconfidence as if a decision maker perceives information as being more precise than it actually is. We show that the effect of overconfidence on the quality of the final decision is shaped by three forces: overestimating the precision of future information, overestimating the precision of past information, and overestimating the amount of information to be collected in the future. The first force pushes an overconfident decision maker to collect more information, while the second and third forces work in the other direction.</p>https://resolver.caltech.edu/CaltechTHESIS:05312017-141442186Essays on Investor Beliefs and Asset Pricing
https://resolver.caltech.edu/CaltechTHESIS:05292018-140741507
Year: 2018
DOI: 10.7907/F2BV-8Y73
<p>This dissertation is composed of three chapters addressing the connections between investor beliefs and asset pricing. Specifically, I focus on one prevailing pattern of investor beliefs in the finance literature, return extrapolation: the idea that investor expectations about future market returns are a positive function of recent past returns. In this dissertation, I use this concept to understand a number of facts in the asset pricing literature.</p>
<p>Return extrapolation has attracted growing attention in the literature, not only because it explains real-world investors' survey expectations well, but also because it significantly drives investor demand for stocks. We should therefore anticipate a connection between measures of return extrapolation and stock market dynamics. However, contrary to this intuition, previous empirical studies fail to document a significant connection. In Chapter 1, "Time-varying Impact of Investor Sentiment", I recover this connection. Specifically, I formally define investors who extrapolate past returns as extrapolators and incorporate their wealth level into the analysis. My main finding is that return extrapolation interacts strongly with extrapolators' wealth level in predicting future market returns. Therefore, conditional on extrapolators' wealth level, return extrapolation significantly explains stock market returns.</p>
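As a purely illustrative sketch (not the author's estimation code), an extrapolative expectation can be written as a weighted average of past returns with geometrically decaying weights; the decay parameter `lam` and the sample returns below are hypothetical:

```python
# Illustrative sketch: extrapolative return expectations as an
# exponentially weighted average of past returns. The decay parameter
# `lam` and the sample returns are hypothetical, not from the thesis.

def extrapolative_expectation(past_returns, lam=0.5):
    """Weight recent returns more heavily: weight lam**k at lag k,
    where past_returns[0] is the most recent observation."""
    weights = [lam ** k for k in range(len(past_returns))]
    total = sum(weights)
    return sum(w * r for w, r in zip(weights, past_returns)) / total

# Recent returns (most recent first): the expectation tilts toward
# the latest +4% observation.
print(round(extrapolative_expectation([0.04, 0.01, -0.02]), 4))
```

A higher `lam` spreads weight over a longer history; `lam` near zero makes the expectation track only the most recent return.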
<p>The return extrapolation concept also challenges asset pricing models built on rational expectations. Specifically, rational expectations theories imply a positive correlation between expectations and future realized returns, whereas return extrapolation indicates a negative correlation. Given this discrepancy, there is a clear demand for a behavioral asset pricing model that can simultaneously explain survey evidence on investor expectations and the classical asset pricing puzzles. In Chapter 2, "Asset Pricing with Return Extrapolation", coauthored with Lawrence Jin, we present a new model of asset prices based on return extrapolation. The model is a Lucas-type general equilibrium framework in which the agent has Epstein-Zin preferences and extrapolative beliefs. Unlike earlier return extrapolation models, our model allows for a quantitative comparison with the data on asset prices. When the agent's beliefs are calibrated to match survey expectations of investors, the model generates excess volatility and predictability of stock returns, a high equity premium, a low and stable risk-free rate, and a low correlation between stock returns and consumption growth.</p>
<p>In Chapter 3, "Dark Matter" of Finance in the Survey, I investigate another attribute of investor beliefs—tail risk perceptions. Although tail risks play a significant role in explaining asset pricing puzzles, researchers have very limited knowledge about them because tail events are difficult to observe. I use the Shiller tail risk survey, in which investors are asked to report their estimated probability of a crash event in the U.S. stock market, to empirically investigate tail risk perceptions. When using survey data to understand investors’ perception of tail risks, however, there are two fundamental challenges. First, is the tail risk survey reliable? Second, to avoid cherry-picking, is there a unified framework that explains different attributes of investor beliefs? My analysis provides positive answers to both questions. First, I show that the Shiller tail risk survey is reliable. More importantly, I show that return extrapolation can serve as a unified belief formation framework to explain variations not only in investor expectations but also in tail risk perceptions.</p>
https://resolver.caltech.edu/CaltechTHESIS:05292018-140741507Essays On Decision Theory
https://resolver.caltech.edu/CaltechTHESIS:06072019-212943893
Year: 2019
DOI: 10.7907/MVE7-HP81
<p>This thesis introduces some general frameworks for studying problems in decision theory. The purpose of this dissertation is two-fold. First, I develop general mathematical frameworks and tools to explore different decision-theoretic phenomena. Second, I apply these frameworks and tools to different topics in Microeconomics and Decision Theory.</p>
<p>Chapter 1 introduces the notion of a classifier, representing the different classes of data revealed through observations. I present a general model of classification, a notion of complexity, and a way to generate a complicated classification procedure from simpler classification procedures.</p>
<p>My goal is to show how an individual's complex behavior can be derived from some simple underlying heuristics. In this chapter, I model a classifier (as a general model for decision making) that, based on observing some data points, classifies them into different categories with a set of different labels. The only assumption of my model is that whenever a data point is in two categories, there should be an additional category representing the intersection of the two categories. First, I derive a duality result similar to the duality in convex geometry. Then, using this result, I find all representations of a complex classifier as an aggregation of simpler forms of classifiers. For example, I show how a complex classifier can be represented by simpler classifiers with only two categories (similar to a single linear classifier in a neural network). Finally, I show an application in the context of dynamic choice behavior. Notably, I use my model to reinterpret the seminal works of Kreps (1979) and Dekel, Lipman, and Rustichini (2001) on representing preference orderings over menus with a subjective state space. I also show the connection between the notion of the minimal subjective state space in economics and my proposed notion of the complexity of a classifier.</p>
<p>In Chapter 2, I provide a general characterization of recursive methods of aggregation and show that recursive aggregation lies behind many seemingly different results in economic theory. Recursivity means that the aggregate outcome of a model over two disjoint groups of features is a weighted average of the outcome of each group separately.</p>
<p>This chapter makes two contributions. The first contribution is to pin down any aggregation procedure that satisfies my definition of recursivity. The result unifies aggregation procedures across many different economic environments, showing that all of them rely on the same basic result. The second contribution is to show different extensions of the result in the context of belief formation, choice theory, and welfare economics.</p>
<p>In the context of belief formation, I model an agent who predicts the true state of nature based on observing some signals in her information structure. I interpret each subset of signals as an event in her information structure. I show that, as long as the information structure has finite cardinality, my weighted averaging axiom is the necessary and sufficient condition for the agent to behave as a Bayesian updater. This result answers the question raised by Shmaya and Yariv (2007) regarding a necessary and sufficient condition for a belief formation process to act as a Bayesian updating rule.</p>
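The weighted-averaging property of Bayesian posteriors can be checked numerically: the posterior given the union of two disjoint events is a convex combination of the posteriors given each event, weighted by the events' probabilities. The prior, likelihoods, and signal labels below are my own toy assumptions, not taken from the thesis:

```python
# Numerical check (toy numbers) that Bayesian posteriors satisfy the
# weighted-averaging property over disjoint events.

def posterior(prior, likelihood, event):
    """P(theta | event) over a finite state space; event is a set of signals."""
    joint = {th: prior[th] * sum(likelihood[th][s] for s in event)
             for th in prior}
    total = sum(joint.values())
    return {th: j / total for th, j in joint.items()}

prior = {"H": 0.5, "L": 0.5}
likelihood = {"H": {"a": 0.6, "b": 0.3, "c": 0.1},
              "L": {"a": 0.2, "b": 0.3, "c": 0.5}}

pa = posterior(prior, likelihood, {"a"})
pb = posterior(prior, likelihood, {"b"})
pab = posterior(prior, likelihood, {"a", "b"})

# Weight of event {a} within the union {a, b}.
p_a = sum(prior[th] * likelihood[th]["a"] for th in prior)
p_b = sum(prior[th] * likelihood[th]["b"] for th in prior)
w = p_a / (p_a + p_b)
mix = {th: w * pa[th] + (1 - w) * pb[th] for th in prior}
print(all(abs(mix[th] - pab[th]) < 1e-12 for th in prior))  # True
```

The identity holds for any prior and likelihoods; the axiom in the text characterizes exactly the belief formation processes with this structure.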
<p>In the context of choice theory, I consider the standard theory of discrete choice. An agent chooses randomly from a menu. The outcome of my model is the average choice (the mean of the distribution of choices) rather than the entire distribution of choices. Average choice is easier to report and obtain than the entire distribution; however, an average choice does not uniquely reveal the underlying distribution of choices. In this context, I show that (1) it is possible to uniquely extract the underlying distribution of choices as long as the average choice satisfies my weighted averaging axiom, and (2) there is a close connection between my weighted averaging axiom and the celebrated Luce (or Logit) model of discrete choice.</p>
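A minimal sketch of the Luce (Logit) choice rule and the average choice it induces; the menu of alternatives and the utilities below are hypothetical illustrations:

```python
# Minimal sketch of the Luce (Logit) discrete-choice model and the
# induced average choice. Alternatives and utilities are hypothetical.
import math

def luce_probabilities(utilities):
    """Choice probabilities proportional to exp(utility)."""
    weights = [math.exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def average_choice(alternatives, utilities):
    """Mean of the choice distribution over vector-valued alternatives."""
    probs = luce_probabilities(utilities)
    dim = len(alternatives[0])
    return [sum(p * x[d] for p, x in zip(probs, alternatives))
            for d in range(dim)]

menu = [(1.0, 0.0), (0.0, 1.0)]  # two alternatives in R^2
probs = luce_probabilities([1.0, 1.0])
print(probs)
print(average_choice(menu, [1.0, 1.0]))
```

With equal utilities the choice probabilities are equal, so the average choice is the midpoint of the menu; distinct utilities tilt the average toward the higher-utility alternative.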
<p>Chapter 3 is about the aggregation of the preference orderings of individuals over a set of alternatives. The role of an aggregation rule is to associate with each group of individuals another preference ordering of alternatives, representing the group's aggregated preference. I consider the class of aggregation rules satisfying the extended Pareto axiom. Extended Pareto means that whenever we partition a group of individuals into two subgroups, if both subgroups prefer one alternative over another (as indicated by their aggregated preferences), then the aggregated preference ordering of the union of the subgroups also prefers the first alternative over the second one.</p>
<p>I show that (1) extended Pareto is equivalent to my weighted averaging axiom, and (2) it yields a generalization of Harsanyi's (1955) famous theorem on Utilitarianism. Harsanyi considers a single profile of individuals and a variant of Pareto to obtain Utilitarianism. In my approach, by contrast, I partition a profile into smaller groups and then aggregate the preference orderings of these smaller groups using extended Pareto. Hence, I obtain Utilitarianism through this consistent form of aggregation. As a result, in my representation, the weight associated with each individual appears in all sub-profiles that contain her.</p>
<p>In another application, I find the class of extended Pareto social welfare functions. My result is a positive one, in contrast to the claims of Kalai and Schmeidler (1977) and Hylland (1980) that the negative conclusion of Arrow's theorem holds even with vN-M preferences.</p>
<p>Finally, in Chapter 4, I derive a simple subjective conditional expectation theory of state-dependent preferences. In many applications, such as models of buying health insurance, the standard assumption that utility is independent of the state is not a plausible one. Hence, I derive a model in which the main force behind the separation of beliefs and state-dependent utility comes from the extended Pareto condition. Moreover, I show that, as long as the model satisfies my strong minimal agreement condition, beliefs can be uniquely separated from the state-dependent utility.</p>https://resolver.caltech.edu/CaltechTHESIS:06072019-212943893Essays on Social Learning and Networks
https://resolver.caltech.edu/CaltechTHESIS:05122020-180653707
Year: 2020
DOI: 10.7907/0m03-b330
<p>This thesis offers a contribution to the study of Social Learning and Networks. It studies information aggregation and its effect on individuals' actions (Chapters 2 and 3) and on the social network (Chapter 4).</p>
<p>Chapter 2, co-authored with Omer Tamuz and Wade Hann-Caruthers, studies how quickly the public belief converges to its true value when agents are able to observe the actions of their predecessors. In the classical herding literature, agents receive a private signal regarding a binary state of nature and sequentially choose an action after observing the actions of their predecessors. When the informativeness of private signals is unbounded, it is known that agents converge to the correct action and the correct belief. We study how quickly convergence occurs, and show that it happens more slowly than it does when agents observe signals. However, we also show that the speed of learning from actions can be arbitrarily close to the speed of learning from signals. In particular, the expected time until the agents stop taking the wrong action can be either finite or infinite, depending on the private signal distribution. In the canonical case of Gaussian private signals, we calculate the speed of convergence precisely, and show explicitly that, in this case, learning from actions is significantly slower than learning from signals.</p>
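A simulation sketch of the sequential model described above, assuming the state is ±1 and private signals are N(state, 1); the observer's update rule and all parameters are my own illustrative choices, not the thesis's analysis:

```python
# Simulation sketch of sequential social learning with Gaussian private
# signals, assuming state in {+1, -1} and signals ~ N(state, 1).
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def clamp(p, eps=1e-12):
    """Keep probabilities away from 0 and 1 for numerical safety."""
    return min(max(p, eps), 1.0 - eps)

def simulate(n_agents, state=1, seed=0):
    rng = random.Random(seed)
    public_llr = 0.0  # public log-likelihood ratio: state=+1 vs state=-1
    actions = []
    for _ in range(n_agents):
        s = rng.gauss(state, 1.0)
        # private log-likelihood ratio of N(+1,1) vs N(-1,1) is 2*s
        action = 1 if public_llr + 2.0 * s > 0 else -1
        actions.append(action)
        # An outside observer updates the public belief from the action:
        # the agent chooses +1 iff her signal exceeds -public_llr/2.
        thresh = -public_llr / 2.0
        p_plus = clamp(1.0 - norm_cdf(thresh - 1.0))   # P(+1 | state=+1)
        p_minus = clamp(1.0 - norm_cdf(thresh + 1.0))  # P(+1 | state=-1)
        if action == 1:
            public_llr += math.log(p_plus / p_minus)
        else:
            public_llr += math.log((1.0 - p_plus) / (1.0 - p_minus))
    return actions, public_llr

actions, llr = simulate(200)
print(sum(a == 1 for a in actions), "of", len(actions), "actions matched the state")
```

The key contrast with learning from signals is that each observed action contributes only a bounded increment to the public log-likelihood ratio, which is what slows convergence.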
<p>In Chapter 3, I investigate how social planning can reduce the inefficiencies of social learning stemming from herding and informational cascades. A social planner is introduced into the classical sequential social learning model. She can tax or subsidize players' actions in order to maximize social welfare, a discounted sum of agents' utilities. We solve, or accurately approximate, the expected utility of the social planner and the optimal pricing strategy for various signal distributions. In equilibrium, it is optimal to increase the price of the better action, causing a reduction in the current agent's utility but also a net gain due to the information this action reveals. The addition of the social planner significantly improves social welfare and the asymptotic speed of learning.</p>
<p>Chapter 4 analyzes how different types of social connections between people shape their social networks. There are two possible types of ties between individuals, strong and weak, which differ in maintenance costs and reliability. A network formation game is played in which agents choose the number of ties of each type to maximize their chances of hearing about a new job opportunity. We find that in equilibrium people maintain both types of connections, a pattern not explained by previous theoretical models. Furthermore, the socially optimal symmetric network contains more strong ties than the equilibrium one.</p>https://resolver.caltech.edu/CaltechTHESIS:05122020-180653707Mathematical Models of Trading
https://resolver.caltech.edu/CaltechTHESIS:09282020-021601265
Year: 2021
DOI: 10.7907/9ks2-fa45
<p>This thesis presents a mathematical framework to model trading of financial assets on an exchange. The interaction between agents on the exchange is modeled as the Nash equilibrium of a demand schedule auction. The submission of demand schedules in the auction is meant to proxy for the submission of limit and market orders on an exchange. Chapter 1 considers this auction in a one-period setting, highlighting the importance of noisy flow for obtaining a unique Nash equilibrium.</p>
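As a purely illustrative sketch (not the thesis's model), market clearing in a demand schedule auction can be computed directly when agents submit linear schedules; the intercepts, common slope, and noise flow below are hypothetical assumptions:

```python
# Toy sketch of clearing a demand-schedule auction: each agent submits
# a linear schedule x_i(p) = a_i - slope*p, and the price clears the
# submitted schedules against an exogenous noise flow.

def clearing_price(intercepts, slope, noise_flow):
    """Solve sum_i (a_i - slope*p) + noise_flow = 0 for p."""
    n = len(intercepts)
    return (sum(intercepts) + noise_flow) / (n * slope)

a = [1.0, 2.0, 3.0]
p = clearing_price(a, slope=1.0, noise_flow=0.0)
print(p)  # 2.0: with no noise, the price balances the schedules
allocations = [ai - 1.0 * p for ai in a]
print(sum(allocations))  # 0.0: the market clears
```

Nonzero noise flow shifts the clearing price, which is the channel through which noisy flow disciplines the equilibrium in the one-period auction.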
<p>Chapter 2 is the core of the thesis and considers the auction in a continuous-time setting. Here the agents trading on the exchange have quadratic-type preferences, and in equilibrium they must clear an exogenously specified stream of market orders. Chapter 3 considers alternative and more realistic dynamics for the exogenous market orders. Chapter 4 endogenizes the market orders by considering an agent executing orders on behalf of noisy clients. Chapter 5 considers the same model as in Chapter 2, except with a consumption-based utility function for each agent.</p>https://resolver.caltech.edu/CaltechTHESIS:09282020-021601265Theory of Mathematical Optimization for Delegated Portfolio Management
https://resolver.caltech.edu/CaltechTHESIS:05272022-034901698
Year: 2022
DOI: 10.7907/km2b-er60
<p>We study the optimization problem of finding closed convex sets Γ ⊆ R<sup>d</sup> containing the origin that minimize F(Γ) = ∑<sub>i=1</sub><sup>k</sup> w<sub>i</sub> |θ<sub>i</sub>/2 - p<sub>Γ</sub>(θ<sub>i</sub>)|<sup>2</sup>, where w<sub>1</sub>, ..., w<sub>k</sub> > 0, θ<sub>1</sub>, ..., θ<sub>k</sub> ∈ R<sup>d</sup> are given, and p<sub>Γ</sub>(θ<sub>i</sub>) are the closest points in Γ to θ<sub>i</sub>, i = 1, ..., k. This problem is motivated by the topic of delegated portfolio management in finance. In Chapter 2, we will explore this connection. To approach the problem, we first prove existence of a solution for the general problem. To further study properties of the solution, we next introduce the semidefinite programming relaxation, for which we have a first-order characterization of optimality. We then explore the question of exactness of this relaxation, which turns out to be equivalent to the notion of localizability: the shape optimization problem embedded in higher dimensions must have solutions in the original dimension. Finally, we present special cases for which localizability holds.</p>https://resolver.caltech.edu/CaltechTHESIS:05272022-034901698Essays on Rational Social Learning
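<p>As a small numerical illustration of the objective F(Γ) in the abstract above (a sketch, not part of the thesis), one can restrict Γ to Euclidean balls centered at the origin, for which the projection p<sub>Γ</sub> has a closed form. The function names and the toy instance below are illustrative assumptions.</p>

```python
import math

def project_to_ball(theta, r):
    """Euclidean projection of a point in R^d onto the closed ball of
    radius r centered at the origin (a convex set containing the origin)."""
    norm = math.sqrt(sum(t * t for t in theta))
    if norm <= r:
        return list(theta)
    return [r * t / norm for t in theta]

def objective(thetas, weights, r):
    """F(Gamma) = sum_i w_i * |theta_i / 2 - p_Gamma(theta_i)|^2,
    evaluated for Gamma = ball of radius r."""
    total = 0.0
    for theta, w in zip(thetas, weights):
        p = project_to_ball(theta, r)
        total += w * sum((t / 2 - q) ** 2 for t, q in zip(theta, p))
    return total

# Toy instance in R^2: the optimal radius trades off projecting each
# theta_i close to theta_i / 2 (here |theta_1| / 2 = 1 and
# |theta_2| / 2 = 2, so the grid minimizer is the midpoint r = 1.5).
thetas = [[2.0, 0.0], [0.0, 4.0]]
weights = [1.0, 1.0]
best_r = min((k / 10 for k in range(1, 31)),
             key=lambda r: objective(thetas, weights, r))
```

<p>A single ball cannot place both projections exactly at θ<sub>i</sub>/2 when the norms |θ<sub>i</sub>| differ, which is why even this one-parameter family exhibits the trade-off the general problem optimizes over; the thesis treats arbitrary closed convex sets via a semidefinite relaxation rather than such a parametric scan.</p>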
https://resolver.caltech.edu/CaltechTHESIS:05132024-181232920
Year: 2024
DOI: 10.7907/p1xt-ys43
<p>This dissertation contains three essays, each contributing to the study of social learning among rational agents in various contexts.</p>
<p>In Chapter 1, I study whether individuals can learn the informativeness of their information technology through social learning. Building on the classic sequential social learning model, I introduce the possibility that a common source is completely uninformative. I then define asymptotic learning as the situation in which an outsider, who observes the actions of all agents, eventually distinguishes between uninformative and informative sources. I show that asymptotic learning in this setting is not guaranteed; it depends crucially on the relative tail distributions of private beliefs induced by uninformative and informative signals. Furthermore, I identify the phenomenon of perpetual disagreement as the cause of learning and provide a characterization of learning in the canonical Gaussian environment.</p>
<p>In Chapter 2, co-authored with Omer Tamuz and Philipp Strack, we study the asymptotic rate at which the probability of a group of long-lived, rational agents in a social network taking the correct action converges to one. In every period, after observing the past actions of his neighbors, each agent receives a private signal, and chooses an action whose payoff depends only on the state. Since equilibrium actions depend on higher-order beliefs, characterizing agents' behavior becomes difficult. Nevertheless, we show that the rate of learning in any equilibrium is bounded from above by a constant, regardless of the size and shape of the network, the utility function, and the patience of the agents. This bound depends only on the private signal distribution.</p>
<p>In Chapter 3, I study how fads emerge from social learning in a changing environment. I consider a simple sequential learning model in which rational agents arrive in order, each acting only once, and the underlying unknown state is constantly evolving. Each agent receives a private signal, observes all past actions of others, and chooses an action to match the current state. Because the state changes over time, cascades cannot last forever, and actions also fluctuate. I show that despite the rise of temporary information cascades, in the long run, actions change more often than the state. This provides a theoretical foundation for faddish behavior, in which people change their actions more frequently than necessary.</p>https://resolver.caltech.edu/CaltechTHESIS:05132024-181232920