Committee Feed
https://feeds.library.caltech.edu/people/McKelvey-R-D/committee.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build date: Tue, 16 Apr 2024 15:38:57 +0000

Imperfect Information and Oligopoly with Endogenous Market Power
https://resolver.caltech.edu/CaltechTHESIS:10232019-151459787
Authors: Venkatraman Sadanand
Year: 1983
DOI: 10.7907/k2ry-0783
<p>This thesis consists of three essays. The first essay describes a model in which a dominant player can be endogenously determined. The model is developed in the context of Cournot and Stackelberg equilibria. Cournot equilibria are obtained in games where players move simultaneously (or sequentially but unobservably), and the extensive form strategy spaces of these players are isomorphic to each other. Stackelberg equilibria, on the other hand, are obtained as the perfect equilibria of perfect information games in which the players move sequentially, with the dominant player or the leader firm moving first and the other player moving second. Thus, the question of how to model an industry (Cournot or Stackelberg) is answered by examining timing and information conditions, both of which are presumed exogenous. Firm sizes, technologies, and demand characteristics are, in this context, irrelevant. What we do instead is to note that if demand is resolved over time, then firms may face a trade-off between moving before the uncertainty in demand is revealed, thereby establishing a "leadership" position, and waiting until after the resolution of demand in order to avoid production mistakes. The sequentially rational Nash equilibrium of the resulting game is examined. It is shown that in a market with one large firm (i.e., a firm whose output affects price) and a nonatomic continuum of small firms (i.e., firms whose individual outputs do not affect price), the only equilibrium of the game described above, with nontrivial but small uncertainty, is a Stackelberg equilibrium with the large firm as the endogenously determined dominant player. The difference between a large and a small firm is also embodied in their respective cost functions.</p>
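For readers comparing the two benchmarks the essay contrasts, the textbook linear-demand versions can be computed directly. A minimal sketch (the demand and cost parameters are illustrative assumptions, and this omits the thesis's demand uncertainty):

```python
# Cournot vs. Stackelberg outcomes in a textbook duopoly with linear
# inverse demand P(Q) = a - b*Q and constant marginal cost c.
# Parameter values are illustrative, not taken from the thesis.

def cournot(a, b, c):
    """Symmetric simultaneous-move (Cournot) equilibrium quantities."""
    q = (a - c) / (3 * b)          # intersection of best responses
    return q, q

def stackelberg(a, b, c):
    """Sequential-move equilibrium: leader first, follower best-responds."""
    q_leader = (a - c) / (2 * b)   # leader internalizes follower's reaction
    q_follower = (a - c) / (4 * b) # follower's best response to q_leader
    return q_leader, q_follower

a, b, c = 100.0, 1.0, 10.0
print(cournot(a, b, c))      # (30.0, 30.0)
print(stackelberg(a, b, c))  # (45.0, 22.5)
```

The comparison makes the timing stakes concrete: moving first lets the leader commit to a larger quantity than in the simultaneous-move outcome.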
<p>The second essay answers the question of whether markets with one large firm and several small but atomic firms can be approximated by or can approximate a Stackelberg equilibrium. This is answered by establishing that the equilibrium correspondence of a family of games, each with one large firm and several small firms, is continuous as the number of small firms increases to infinity.</p>
<p>The third essay adapts the model developed in the first essay to a model of noncooperative general exchange in which the traders are in the same strategic position with respect to each other. Thus a noncooperative game is defined in an exchange economy such that a price-setting monopolist is determined endogenously in equilibrium, and this is the unique sequentially rational Nash equilibrium.</p>
https://thesis.library.caltech.edu/id/eprint/11852

Political and Market Equilibria with Income Taxes
https://resolver.caltech.edu/CaltechTHESIS:01232019-102219749
Authors: James Millett Snyder, Jr.
Year: 1985
DOI: 10.7907/bb2v-n765
<p>In this thesis we explore political and market equilibria in worlds with income taxes. In Part I we study individual and majority-rule choice of an income tax schedule in the context of a simple two-sector economy in which individuals respond to higher taxes by earning less taxable income and devoting more time to untaxed activities. If voters are concerned with the "fairness" of the distribution of after-tax incomes in society, then a majority-rule equilibrium tax schedule exists, and it is linear. If voters care primarily about their own after-tax income, however, then in general no such equilibrium exists, although equilibria may exist within special classes of taxes. In characterizing individual preferences, we find that "middle-class" voters prefer sharply progressive schedules that impose low marginal tax rates on lower-income taxpayers and high marginal rates on upper-income taxpayers. This suggests that the observed preference for marginal-rate progression has little to do with "fairness," but results from the middle class successfully reducing its own tax burden.</p>
<p>In Part II we study the effects of income taxation on capital asset market equilibrium, using a popular model of asset pricing, the Arbitrage Pricing Theory (APT). We focus on two features found in many tax codes: the differential treatment of dividends and capital gains, and the different treatment of various types of investors. We show first that, without restrictions on the portfolios investors may hold, in general at any prices there will be some investor who can make unlimited arbitrage profits. Next we restrict portfolios, requiring that no investor borrow so much that her total dividend payment on short sales exceeds her total dividend income on the assets she owns. Given this restriction there exist prices at which no investor can make unlimited arbitrage profits. We show that if at least one investor faces a higher tax rate on capital gains than on dividends (true for corporations in the U.S. today), then the prices must differ from those predicted by the APT without taxes.</p>
https://thesis.library.caltech.edu/id/eprint/11359

Signaling Games: Theory and Applications
https://resolver.caltech.edu/CaltechTHESIS:04152019-112738556
Authors: Jeffrey Scott Banks
Year: 1986
DOI: 10.7907/hhm5-8h24
<p>This thesis concerns the interactions between asymmetrically informed agents where information can potentially be transmitted through the actions of the agents. Refinements of the sequential equilibrium concept are derived and applied to (i) a model of pretrial bargaining between litigants to a civil suit, where both parties possess private information, and (ii) a model of electoral competition where the voters attempt to deduce the private information held by the candidates.</p>
https://thesis.library.caltech.edu/id/eprint/11463

Essays on Speculation and Futures Markets
https://resolver.caltech.edu/CaltechTHESIS:10212019-130944242
Authors: Da-Hsiang Donald Lien
Year: 1986
DOI: 10.7907/9n64-3x45
<p>The thesis consists of two parts. The first part deals with speculators in commodity markets. In particular, we are interested in the role of speculators in stabilizing or destabilizing market price. The second part takes up hedgers in commodity futures markets. Here, we are concerned with the asymmetries between short and long hedgers. Specifically, we study whether or not the asymmetries discussed in the literature will lead to a backwardation equilibrium in futures markets.</p>
<p>The two approaches differ in the way speculators are treated in the framework as market participants. In the literature dealing with speculators and stabilization, the non-speculators are inactive; their only role is to provide an (exogenous) non-speculative excess demand function based on which speculators choose their transactions to maximize their objective functions. Conversely, in the futures market literature, under rational expectations and common beliefs on the part of all traders, speculators are only the supporting actors while hedgers play the leading roles; speculators act only to reduce the imbalance between short and long hedging. The difference between these two approaches is, however, not as clear-cut as it seems to be. The reason is simply that hedgers often take some speculative positions in their decision-making process. Consequently, it can be argued that both speculators and non-speculators are active participants in the futures markets. This specific characteristic thus generates the ambiguities about the role of speculators in stabilizing or destabilizing market price in the futures market framework.</p>
<p>The main results of the thesis are as follows. From an ex post viewpoint, Chapter 1 indicates that profitable speculation will necessarily stabilize market price if and only if the non-speculative excess demand function is linear, with no lag structure and with the law of demand being satisfied. This conclusion falsifies the famous Friedman conjecture (i.e., that profitable speculation necessarily stabilizes market price). We then study the case of a linear non-speculative excess demand function using an ex ante approach. At a rational expectations equilibrium, it is shown that Friedman's conjecture holds when speculators' expected utility function can be expressed in terms of mean-variance considerations. Whether or not there are nonlinear non-speculative excess demand functions that verify the Friedman conjecture in an ex ante framework is a matter for future research.</p>
<p>In Chapters 3 through 5, we deal with two well-known asymmetries between short and long hedging, namely, asymmetric arbitrage opportunities and the so-called Houthakker effect. First, we show that the asymmetric arbitrage argument readily establishes the existence of a backwardation equilibrium in forward markets, whereas some highly restrictive assumptions must be imposed for it to lead to a backwardation equilibrium in a true futures market. Thus the theoretical argument for a link between asymmetric arbitrage opportunities and a backwardation equilibrium is weak. Yet the question remains as to whether or not asymmetric arbitrage opportunities prevail in functioning futures markets. This is studied in Chapter 4 with respect to wheat and corn futures contracts traded on the Chicago Board of Trade (CBOT). The results indicate that asymmetric arbitrage opportunities have impacts upon CBOT wheat futures markets, but not upon CBOT corn futures markets. Consequently, the asymmetric arbitrage argument may apply only to some specific commodities.</p>
<p>Finally, in Chapter 5, we apply the same sample to test for the existence of the Houthakker effect. Again, the hypothesis is rejected. Therefore, the two well-known asymmetries between short and long hedging do not have impacts upon CBOT wheat and corn futures markets, notwithstanding their theoretical roles in generating a backwardation equilibrium.</p>
<p>The thesis is concerned with developing an understanding of the way in which futures markets function, and the role of speculators and hedgers in the markets. The results presented here indicate that it is only under rather restrictive conditions that definite results concerning these issues can be derived, particularly in the context of the true futures markets, that is, markets in which several delivery options exist under a futures contract.</p>
https://thesis.library.caltech.edu/id/eprint/11831

Games in Econometrics with Applications to Labor Economics
https://resolver.caltech.edu/CaltechTHESIS:08292019-173340103
Authors: Paul Anders Bjorn
Year: 1986
DOI: 10.7907/fpd1-w748
<p>The starting point of this dissertation is an alternative approach for formulating simultaneous equation models for qualitative endogenous variables. To be explicit, the endogenous variables will be generated as Nash equilibria of a game between two players, and the statistical model will be generated by invoking the random utility framework introduced by McFadden (1974, 1981). Contrary to the earlier simultaneous equations models (Heckman (1978)), the approach presented in Chapter II will not impose logical consistency constraints on the parameters. A distinctive feature of the model is that it extends the usual simultaneous model with structural shift to cases where the parameters need not satisfy the logical consistency conditions.</p>
<p>Following the game theoretic formulation set out in Chapter II, Chapter III proposes an alternative model where the equilibrium concept is that of Stackelberg. As in Chapter II, we will still assume that each player maximizes his own utility, with the statistical model again being derived using McFadden's random utility approach. A distinctive feature of this model is that it contains as a special case the usual recursive model for discrete endogenous variables.</p>
<p>With Chapters II and III as a theoretical background, the purpose of Chapter IV is to present an empirical study of the Nash and Stackelberg equilibrium models. The problem we examine concerns a married couple's joint decision whether or not to participate in the labor market. We examine three competing specifications. Chapter V concludes this dissertation with a discussion of which of the three empirical models most adequately describes the joint labor force participation decision of a random sample of married couples. Since none of the three models is completely nested in another, we are not able to employ any of the classical tests. As such, we use an alternative method developed by Vuong (1985) for choosing the most adequate model.</p>
https://thesis.library.caltech.edu/id/eprint/11772

Financial Equilibrium, Voting Procedures, and Coalition Structures in Allocational Mechanisms
https://resolver.caltech.edu/CaltechTHESIS:07092013-083047130
Authors: Jack Marshall Williamson
Year: 1987
DOI: 10.7907/3dsd-4003
<p>This thesis comprises three chapters, each of which is concerned with properties of allocational mechanisms that include voting procedures as part of their operation. The theme of interaction between economic and political forces recurs in the three chapters, as described below.</p>
<p>Chapter One demonstrates existence of a non-controlling interest shareholders' equilibrium for a stylized one-period stock market economy with fewer securities than states of the world. The economy has two decision mechanisms: Owners vote to change firms' production plans across states, fixing shareholdings; and individuals trade shares and the current production / consumption good, fixing production plans. A shareholders' equilibrium is a production plan profile, and a shares / current good allocation stable for both mechanisms. In equilibrium, no (Kramer direction-restricted) plan revision is supported by a share-weighted majority, and there exists no Pareto superior reallocation.</p>
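The share-weighted majority criterion in the voting stage can be illustrated mechanically. A minimal sketch (the holdings, owner labels, and helper function are illustrative assumptions, not taken from the thesis):

```python
# Share-weighted majority test for a proposed plan revision: owners vote
# with the weight of their shareholdings rather than one vote each.
# Holdings and votes below are illustrative.

def share_weighted_majority(holdings, votes_for_revision):
    """holdings: {owner: share}; votes_for_revision: set of owners voting
    to revise.  The revision passes if supporting shares exceed half."""
    total = sum(holdings.values())
    support = sum(s for o, s in holdings.items() if o in votes_for_revision)
    return support > total / 2

holdings = {"1": 0.40, "2": 0.35, "3": 0.25}
print(share_weighted_majority(holdings, {"1", "3"}))  # True: 0.65 > 0.50
print(share_weighted_majority(holdings, {"2"}))       # False: 0.35 < 0.50
```

In equilibrium as described above, no direction-restricted revision clears this test at the prevailing shareholdings.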
<p>Chapter Two addresses efficient management of stationary-site, fixed-budget, partisan voter registration drives. Sufficient conditions obtain for unique optimal registrar deployment within contested districts. Each census tract is assigned an expected net plurality return to registration investment index, computed from estimates of registration, partisanship, and turnout. Optimum registration intensity is a logarithmic transformation of a tract's index. These conditions are tested using a merged data set including both census variables and Los Angeles County Registrar data from several 1984 Assembly registration drives. Marginal registration spending benefits, registrar compensation, and the general campaign problem are also discussed.</p>
<p>The last chapter considers social decision procedures at a higher level of abstraction. Chapter Three analyzes the structure of decisive coalition families, given a quasitransitive-valued social decision procedure satisfying the universal domain and IIA axioms. By identifying those alternatives X* ⊆ X on which the Pareto principle fails, imposition in the social ranking is characterized. Every coalition is weakly decisive for X* over X~X*, and weakly antidecisive for X~X* over X*; therefore, alternatives in X~X* are never socially ranked above X*. Repeated filtering of alternatives causing Pareto failure shows states in X<sup>n</sup>*~X<sup>(n+1)</sup>* are never socially ranked above X<sup>(n+1)</sup>*. Limiting results of iterated application of the *-operator are also discussed.</p>
https://thesis.library.caltech.edu/id/eprint/7911

Efficiency and Stability in Partnerships
https://resolver.caltech.edu/CaltechETD:etd-05222007-095847
Authors: Patrick Legros
Year: 1989
DOI: 10.7907/twmw-b767
<p>A partnership is an organization in which the owners of the firm provide inputs into the production process and in which they have, collectively, the power to make decisions. An <i>institution</i> defines how the output of the partnership is shared among the partners and also the collective decision process that will be used. An institution should have two desirable properties: efficiency and stability. Efficiency means that the partners have an incentive to provide efficient levels of inputs (the moral hazard problem) and that the decision process selects an efficient decision. Stability means that the partners do not want to modify the institution (renegotiation proofness).</p>
<p>When the inputs that the partners provide are not verifiable, there is a well-established belief in the literature that efficiency cannot be sustained in partnerships. The first part of the dissertation establishes, contrary to this common belief, that the moral hazard problem can be almost eliminated in partnerships: there exists an allocation of the final output which induces each partner to almost always take an efficient action. It is in fact sometimes possible for the partners to attain full efficiency: necessary and sufficient conditions are established.</p>
<p>The second part of the thesis considers a situation in which renegotiation takes place through a mediator. It is shown that, under some sufficient conditions on the environment, there exist collective decision making processes which are (interim) efficient and which are renegotiation proof, i.e., stable.</p>
https://thesis.library.caltech.edu/id/eprint/1950

Pre-auction investment and equivalence of auctions
https://resolver.caltech.edu/CaltechETD:etd-06122007-082704
Authors: Kemal Guler
Year: 1990
DOI: 10.7907/737d-qj73
In this thesis we investigate some extensions of game theoretic auction models and models of R&D by allowing the participants' cost of producing an indivisible object to be determined by their R&D decisions prior to the auctioning of a fixed price production contract. We establish that when the production cost distributions are endogenously determined as a result of private investment expenditures which are only privately observable, first and second price auctions are equivalent: both give rise to the same level of total investment, the same reserve price, the same expected price to the buyer, and the same expected level of profits for the sellers at the symmetric Nash equilibria. This is an extension of the equivalence results known in the context of standard independent private value auction models with risk neutral bidders. We also show, using a discrete cost model, that when investment is observable, the requirement of subgame perfection eliminates the symmetric investment equilibrium from the set of equilibria in pure strategies, and the only pure strategy equilibria are asymmetric. The buyer's optimal response to this asymmetry in the investment equilibria is to reduce her reserve price, so that the equilibrium total investment level is lower when the buyer knows that the sellers know one another's investment levels. We also consider ex ante incentives to collude under first and second price auctions and find that equilibrium patterns of collusion differ significantly. Finally, we report some experimental results.
https://thesis.library.caltech.edu/id/eprint/2559

Optimal procurement and contracting with research and development
https://resolver.caltech.edu/CaltechTHESIS:07292014-111212753
Authors: Guofu Tan
Year: 1990
DOI: 10.7907/ccg7-br11
<p>Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R and D) before production are significant in many defense procurements. Thus, an optimal procurement policy should not only select the most efficient producer but also induce the contractors to design the best product and to develop the best technology. The current economic theory of optimal procurement and contracting, which has emphasized production but ignored R and D, is difficult to apply to many cases of procurement.</p>
<p>In this thesis, I provide basic models of both R and D and production in the procurement process, where a number of firms invest in private R and D and compete for a government contract. R and D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R and D matters in procurement contracting. When offering the optimal contract, the government will be better off if it correctly takes into account costly private R and D investment. Second, competition matters. The optimal contract and the total equilibrium R and D expenditures vary with the number of firms. The government usually does not prefer infinite competition among firms. Instead, it prefers free entry of firms. Third, under an R and D technology with constant marginal returns to scale, it is socially optimal to have only one firm conduct all of the R and D and production. Fourth, in an independent private values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriate announced reserve price, acting as if it does not have any private information.</p>
https://thesis.library.caltech.edu/id/eprint/8620

The effect of political information on direct democracy strategies and outcomes
https://resolver.caltech.edu/CaltechETD:etd-06282007-094240
Authors: Arthur William Lupia
Year: 1991
DOI: 10.7907/rxpx-pg64
The intent of the dissertation is to detail the effects of political information on participant strategies and outcomes in an electoral environment called "direct democracy." Direct democracy is a decision-making institution in which an agenda setter chooses an alternative to a pre-determined Status Quo and voters vote for either the Status Quo or the agenda setter's alternative. Through the use of a spatial election model, a survey of California insurance reform voters, and a series of laboratory experiments, I show how the direct democracy outcome corresponds to the underlying preferences of a majority of the electorate. The spatial model is used to establish that under conditions of incomplete information, the direct democracy outcome corresponds to the (full information) wishes of a majority of the electorate only when there are sufficient opportunities to cue off of the actions of other, credible, electoral participants. The empirical tools and experiments are used to examine electoral environments where different types of information are available. It is established that voters do not require full information in order to vote for their full information preferred alternative. It is also established that, in the absence of certain types of information, rational voters can cast votes for alternatives that lead to their least preferred outcome.
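The spatial setting can be made concrete with a one-dimensional sketch of setter-versus-Status-Quo voting under full information (the ideal points, grid search, and helper names are illustrative assumptions; the dissertation's incomplete-information analysis is not captured here):

```python
# One-dimensional spatial sketch of direct democracy: an agenda setter
# proposes an alternative against a fixed Status Quo, and each
# full-information voter picks whichever option is closer to her ideal
# point.  Ideal points and the grid search are illustrative.

def vote(ideal, proposal, status_quo):
    """Voter supports the proposal iff it is strictly closer than the SQ."""
    return abs(proposal - ideal) < abs(status_quo - ideal)

def setter_best_proposal(setter_ideal, voter_ideals, status_quo, grid=1000):
    """Best passing proposal for the setter under full information."""
    best = status_quo                           # default: the SQ prevails
    for i in range(grid + 1):
        x = i / grid
        votes = sum(vote(v, x, status_quo) for v in voter_ideals)
        passes = votes > len(voter_ideals) / 2  # simple majority
        if passes and abs(x - setter_ideal) < abs(best - setter_ideal):
            best = x
    return best

voters = [0.2, 0.4, 0.5, 0.6, 0.8]   # median voter at 0.5
print(setter_best_proposal(setter_ideal=1.0, voter_ideals=voters,
                           status_quo=0.1))    # 0.899
```

With an extreme Status Quo, the setter can pass a proposal well away from the median voter's ideal point, which is why the information voters hold about the alternatives matters for outcomes.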
That voters do not require full information in order to vote for their full information preferred alternative suggests that voters do not necessarily need to understand an issue to vote in their own best interest. That rational voters can cast "ex post mistaken" votes under conditions of incomplete information implies that direct democracy outcomes can be manipulated by well-endowed interests. The dissertation details the conditions under which each of these outcomes is likely to occur.
https://thesis.library.caltech.edu/id/eprint/2761

The Assignment Problem: Theory and Experiments
https://resolver.caltech.edu/CaltechETD:etd-07232007-145323
Authors: Mark Allen Olson (molson@olsonhome.org)
Year: 1991
DOI: 10.7907/8ZCK-E373
<p>In this thesis I consider the problem of assigning a fixed and heterogeneous set of goods or services to a fixed set of individuals. I analyze this allocation problem both with and without the use of monetary transfers to allocate the goods.</p>
<p>There are many applications in the literature associated with this problem. The usual approach has been to discuss the properties of individual mechanisms (variously called procedures, algorithms, or rules) to solve the problem, often ignoring their incentive properties. In this thesis I take a different approach: I look at a large class of mechanisms and determine the conditions necessary to induce mechanisms with desired optimality and incentive properties. This analytic technique is augmented by an experimental examination of some of the mechanisms that have been proposed to solve this problem. Mechanisms that use transfers and consider incentive properties exist in the literature, but no comparable nontransfer mechanisms do. None of these mechanisms has been tested or compared. The thesis is divided into two chapters: in Chapter I, I examine the class of nontransfer dominant and Nash strategy mechanisms, and in Chapter II, I discuss the experimental tests of the known transfer mechanisms and of the nontransfer mechanisms discussed in Chapter I.</p>
<p>In the first chapter of this thesis, I characterize the conditions necessary for a nontransfer mechanism to be implementable in dominant and Nash strategies. This characterization is an extension of the Gibbard-Satterthwaite theorem. One of the conditions, ordinality, explains a distinction that is observed in the mechanisms described in the literature, that is, the use of cardinal information when transfers are used, and the use of ordinal information when transfers are not used. In addition, I apply a little-known concept for strategic behavior, nonbossiness, which is a necessary condition for implementability.</p>
<p>In the second chapter of this thesis, I use experimental methods to explore some procedures that could be used to assign individuals to slots. I look at four mechanisms: two transfer mechanisms (a sealed-bid auction and a progressive auction) and two nontransfer mechanisms (a choice mechanism and a chit mechanism, both also studied in part I of this thesis). The mechanisms were compared to their theoretical predictions and to each other. For the chit mechanism a genetic algorithm was used to compute the predicted outcome; since this is a new use for the technique, I discuss the methodology that I used.</p>
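A genetic algorithm of the general kind mentioned for computing the chit mechanism's predicted outcome can be sketched as follows. The bit-string encoding and the toy fitness function (count of 1-bits) are illustrative stand-ins, not the thesis's actual payoff function or encoding:

```python
import random

# Minimal genetic algorithm: tournament selection, one-point crossover,
# and per-bit mutation over fixed-length bit strings.  The fitness
# function here is a toy stand-in (maximize the number of 1-bits).

def genetic_algorithm(fitness, n_bits=20, pop_size=40, generations=60,
                      mutation_rate=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: better of two random candidates.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)   # fitness = number of 1-bits
print(sum(best))                # near the maximum of 20
```

The appeal for a mechanism like the chit auction is that the fitness function only needs to evaluate a candidate strategy profile, not solve for the equilibrium analytically.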
<p>The experimental results for the transfer auctions are similar to the results found for single and multiple unit auctions; that is, progressive auctions tend to be more efficient and extract higher revenue from the bidders. While the transfer mechanisms studied had the properties that they are efficient and extract surplus (in terms of revenue) from the bidders, nontransfer mechanisms retain most of the surplus for bidders but tend to be less efficient. The difference between the two classes of mechanisms was most apparent in a high-contention environment where the use of nontransfer mechanisms resulted in a much larger surplus to the individual bidders, and the transfer mechanisms resulted in slightly higher efficiencies (the differences in efficiencies were small in comparison to the differences in consumer surplus). In a low-contention environment the use of either a transfer or a nontransfer mechanism had little effect on either the efficiencies or the consumer surplus.</p>
<p>The results of this study are a first step toward understanding the assignment problem and more difficult allocation problems with heterogeneous goods. Two simple results are evident. In the low-contention environment the planner can choose among the mechanisms discussed without concern for their relative merits, since there is little difference in their outcomes. In the high-contention environment the planner must decide whether efficiency or consumer surplus is more important: if efficiency or revenue is most important, then the progressive auction is clearly superior; if consumer welfare is most important, then the chit mechanism is superior.</p>
https://thesis.library.caltech.edu/id/eprint/2971

A theoretical and experimental investigation of auctions in multi-unit demand environments
https://resolver.caltech.edu/CaltechTHESIS:01042013-114124291
Authors: Charles Nabih Noussair
Year: 1993
DOI: 10.7907/vsj7-a337
<p>In many existing markets demanders wish to buy more than one unit from a group of identical units of a commodity. Often, the units are sold simultaneously by auction. The vast majority of the literature pertaining to the economics of auctions, however, considers environments in which demanders buy at most one object. In this dissertation we present a collection of results concerning the generalization of theoretical and experimental results from environments in which buyers have single-unit demands to environments with two-unit demands. We derive necessary and sufficient conditions for a set of bidding strategies to be a symmetric monotone equilibrium of a uniform price sealed bid auction. We prove that equilibrium bidding strategies converge to truthful revelation as the number of bidders gets large. We also prove that the uniform price sealed bid auction and the English clock are not isomorphic in the two-unit demand environment: either type of auction may generate higher efficiency, and either may generate higher revenue. Finally, we report a set of experimental results which demonstrates that the revenue generating properties of the two auctions differ in two-unit demand environments. In the experimental environment, more revenue is generated by the uniform price sealed bid auction than by the English clock, and more revenue is generated per market period if the market is run only once than if it is repeated with the same participants.</p>
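As a purely mechanical illustration of how a uniform price sealed bid auction with two-unit demands can clear, here is a sketch assuming the highest-rejected-bid pricing rule (one common variant; the dissertation's exact rule may differ, and the bids are illustrative):

```python
# Clearing a uniform price sealed bid auction for k identical units.
# Every winner pays the same price, set here by the highest rejected bid.
# Each bidder may submit up to two bids, reflecting two-unit demands.

def uniform_price_clear(bids, k):
    """bids: list of (bidder, amount); k: number of units for sale.
    Returns (price, winning bidders in descending bid order)."""
    ranked = sorted(bids, key=lambda b: -b[1])
    winners = ranked[:k]
    price = ranked[k][1]          # highest rejected bid sets the price
    return price, [bidder for bidder, _ in winners]

# Three bidders, two bids each, two units for sale.
bids = [("A", 90), ("A", 60), ("B", 80), ("B", 50), ("C", 70), ("C", 40)]
price, winners = uniform_price_clear(bids, k=2)
print(price, winners)   # 70 ['A', 'B']
```

Note that a bidder's own second-unit bid can end up setting the price she pays for her first unit, which is the usual source of the incentive to shade second-unit bids in such auctions.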
https://thesis.library.caltech.edu/id/eprint/7370

Allocation and computation in rail networks: a binary conflicts ascending price mechanism (BICAP) for the decentralized allocation of the right to use railroad tracks
https://resolver.caltech.edu/CaltechETD:etd-09172007-152133
Authors: Paul J. Brewer (drpaulbrewer@gmail.com)
Year: 1995
DOI: 10.7907/C1GG-RE55
The thesis addresses problems that surfaced as part of the proposal to deregulate access to railroads in Sweden. Skepticism exists about the feasibility and efficiency of competitive processes for access to the publicly owned track network. The skepticism is related to the capacity of any competitive process to solve certain technical problems that stem from performance criteria (efficiency, safety), informational requirements (values of track access are initially known only to the operators), and computational requirements. In the thesis, auction-like processes are developed for allocating the rights to operate trains on the track and for procuring the necessary computational effort to solve a related optimization problem inherent in the track auction process. The processes are tested in a series of human subject laboratory experiments. The data are examined to determine the degree to which the evaluative criteria are met and the degree to which the performance of the processes is consistent with the behavioral principles on which they are based.
https://thesis.library.caltech.edu/id/eprint/3585
Three aspects of multicandidate competition in plurality rule elections
https://resolver.caltech.edu/CaltechETD:etd-10022007-130421
Authors: {'items': [{'id': 'Fey-M', 'name': {'family': 'Fey', 'given': 'Mark'}, 'show_email': 'NO'}]}
Year: 1995
DOI: 10.7907/1j5k-pc62
This thesis considers three issues relevant to multicandidate competition in plurality rule elections--entry decisions by candidates, strategic voting, and informational concerns.
In the first chapter, we consider a model of spatial competition with entry introduced by Palfrey (1984). In the model, there are two dominant candidates and a potential entrant. The established candidates choose positions first, anticipating the entry decision of the third candidate. In the resulting equilibrium, this threat of entry forces the established parties to adopt spatially separated "moderate" positions. We develop a general model that applies to the complex institutional features of modern elections. Specifically, we introduce the winner-take-all aspects of the Electoral College and show how these characteristics make a difference in the equilibrium predictions of the model. We find that, in one case at least, increasing diversity in the electorate causes the established candidates to initially shift toward more moderate positions and then back toward more extreme positions.
The second chapter examines strategic voting and Duverger's Law. A voter whose favorite candidate has no hope of victory may choose to avoid a "wasted vote" by settling for a less preferred candidate with a higher chance of winning. This behavior erodes the electoral support of minor candidates and results in Duverger's Law: "plurality rule elections favor two party competition." Palfrey (1989) constructs an incomplete information game among voters and shows that as the size of the electorate gets large, the support for the least popular candidate vanishes. We show that there exist equilibria in this model in which all three candidates receive votes under plurality rule, in violation of Duverger's Law, as suggested by Myerson and Weber (1993). However, we proceed to demonstrate that these equilibria are unstable and any uncertainty by voters leads voters back toward Duvergerian equilibria. In addition, we develop a dynamic model of pre-election polls that describes how voters react to changing information about the viability of the candidates and show that this process leads voters to coordinate on a Duvergerian outcome. Thus, we not only reestablish Duverger's Law, we also describe how voters can use pre-election polls to coordinate on a particular pair of competitive candidates.
In the third chapter we analyze the relationship between voter information and election outcomes in a multicandidate setting. We extend a model originally developed by McKelvey and Ordeshook for two candidate elections to the multicandidate case. In the model, voters are either informed or uninformed about the exact positions of the candidates. The uninformed voters, however, are able to make plausible inferences about these positions based on the vote share each candidate receives. In equilibrium, voters vote optimally, given their beliefs, and beliefs are self-fulfilling in the sense that they are not contradicted by observable information. Our first result is that in the unique voter equilibrium of our model, all voters, informed and uninformed alike, vote as if they had perfect information. We then define a dynamic process involving a sequence of polls that illustrates that this equilibrium is always reached. In addition, we obtain results about candidate positioning equilibria when candidates are also uncertain about the characteristics of the voters. Finally, we show that if a small minority of voters are fully informed and use this information to vote strategically, in equilibrium all voters, including uninformed sincere voters, act as if they were voting strategically based on full information. The uninformed voters view the lack of support for trailing candidates by informed voters as evidence that these candidates are undesirable and react by voting for a more prominent candidate.
https://thesis.library.caltech.edu/id/eprint/3868
Bayesian Implementation
https://resolver.caltech.edu/CaltechETD:etd-09182007-084408
Authors: {'items': [{'email': 'dugg@ur.rochester.edu', 'id': 'Duggan-John-R', 'name': {'family': 'Duggan', 'given': 'John R.'}, 'show_email': 'YES'}]}
Year: 1995
DOI: 10.7907/7A6K-F810
<p>In Chapter 1, I briefly survey the literature on Bayesian implementation, discuss its shortcomings, and summarize the contribution of this thesis. In Chapter 2, I formally state the implementation problem, making no assumptions about the agents' sets of types, preferences, or beliefs, and I prove Jackson's (1991) necessity and sufficiency results for environments satisfying two weak conditions called "invariance" and "independence." In short, incentive compatibility and Bayesian monotonicity are necessary for Bayesian implementability, and incentive compatibility and monotonicity-no-veto are sufficient. I prove Jackson's result that, for environments with conflict of interest, Bayesian monotonicity and monotonicity-no-veto are equivalent, but I show that conflict-of-interest places an unnatural restriction on agents' beliefs when the set of states is uncountable. I note that, when agents have uncountable sets of types, preferences over social choice functions derived from conditional expected utility calculations will generally be incomplete, and I show that this incompleteness sometimes leads to implausible Bayesian equilibrium predictions. I propose an extension of expected utility preferences that preserves the properties of invariance and independence.</p>
<p>In Chapter 3, I consider environments satisfying invariance and a condition called "interiority," and I show that incentive compatibility and an extension of Bayesian monotonicity are necessary and sufficient for Bayesian implementability. Using the extension of expected utility preferences proposed in Chapter 2 and assuming best-element-private values, I then show that interiority is satisfied in two important classes of environments: it holds in private and public good economies, and it holds in lottery environments, for which the set of outcomes is the set of probability measures over a measurable space of pure outcomes.</p>
<p>In Chapter 4, I consider lottery environments satisfying best-element-private values and a condition called "strict separability," and I use the results of Chapter 3 to show that incentive compatibility is necessary and sufficient for virtual Bayesian implementability. I then show that strict separability is satisfied for a suitably large class of environments. It holds when private values and value-distinguished types are satisfied and the set of pure outcomes is finite, and it holds when private values and value-distinguished types are satisfied and the set of pure outcomes is a finite set crossed with an open set of allocations of a transferable private good.</p>
https://thesis.library.caltech.edu/id/eprint/3626
Stochastic Bargaining Theory and Order Flow
https://resolver.caltech.edu/CaltechTHESIS:06292020-132308787
Authors: {'items': [{'email': 'kdahm@deloitte.com', 'id': 'Kato-Kaoru', 'name': {'family': 'Kato', 'given': 'Kaoru'}, 'show_email': 'NO'}]}
Year: 1996
DOI: 10.7907/5z8b-d658
<p>This thesis is composed of two parts, which reflect our attempts to describe order flow determinants in bilateral and multilateral trading environments, respectively.</p>
<p>In Part I of this research, we investigate noncooperative bilateral sequential bargaining games in which the value of the asset changes stochastically according to a sequence of perfectly observable time-varying random variables. We attempt to model the participants' speculation about the asset's value, which leads to bargaining of varying duration. Previous studies, which have interpreted bargaining delays through the analysis of incomplete information games, have shown that such delays are attributable to information asymmetry about asset values among players, which results in differences in the players' personal valuations of the asset. However, following the viewpoint of the Efficient Market Hypothesis, we assume in our models that no important information affecting the asset value is unevenly assimilated once the players are at the negotiating table. Hence, an important feature of the investigated models is that both players observe identical information regarding the future asset value, and that there is no uncertainty regarding one's opponent's preferences during the bargaining process. Despite the assumption of complete information, we argue that under certain conditions a delay before agreement is an inevitable consequence of the stochastic component of this model.</p>
<p>We give game theoretic specifications for two types of bargaining games, which we call the Basic game and the Alternative game. The two games differ from each other in their timing of information arrivals with respect to players' actions. We characterize their subgame perfect equilibria that follow our particular behavioral assumptions. Characteristics of the equilibrium outcomes of the two games are compared. We direct special attention to the study of the analytical results in comparison with those of Rubinstein (1982), Osborne and Rubinstein (1990), and Merlo and Wilson (1995). We then give statistical specifications for two types of stochastic bargaining simulations, which are the Autoregressive Binomial Model and the Generalized Wiener Process Model. Comparative statics of several variables and bargaining durations are investigated thoroughly through numerous simulation runs. Subsequently, through our research we clarify the importance of integrating stochastic concepts into the bargaining theory and its applications in search of alternative explanations for various bargaining durations.</p>
<p>In Part II of this research, we provide a set of experimental results in our study of order flow determinants in experimental financial markets with asymmetrically informed human subjects. The markets are organized as computerized double auctions equipped with an order book that contains a complete set of current limit and market orders and that can be inspected by every market participant at any time during each trading period. Our empirical analysis focuses on the series of actions taken by the subjects, including quote revisions, limit order arrivals, and trades. We first report thorough descriptive statistics on the extracted data sets, without assuming any particular theory of market microstructure. We then show serial dependencies of order flow on the previous event type, the state of the order book, the size of the bid-ask spread, and the time intervals. In so doing, we ascertain the significance of the impact of the information carried in the order book.</p>
https://thesis.library.caltech.edu/id/eprint/13829
Four Puzzles in Information and Politics : Product Bans, Informed Voters, Social Insurance, & Persistent Disagreement
https://resolver.caltech.edu/CaltechTHESIS:07102017-155916282
Authors: {'items': [{'email': 'rhanson@gmu.edu', 'id': 'Hanson-Robin-Dale', 'name': {'family': 'Hanson', 'given': 'Robin Dale'}, 'show_email': 'YES'}]}
Year: 1998
DOI: 10.7907/C98N-EV75
<p>In four puzzling areas of information in politics, simple intuition and simple theory seem to conflict, muddling policy choices. This thesis elaborates theory to help resolve these conflicts.</p>
<p>The puzzle of product bans is why regulators don't instead offer the equivalent information, for example through a "would have banned" label. Regulators can want to lie with labels, however, either due to regulatory capture or to correct for market imperfections. Knowing this, consumers discount regulator warnings, and so regulators can prefer bans over the choices of skeptical consumers. But all sides can prefer regulators who are unable to ban products, since then regulator warnings will be taken more seriously.</p>
<p>The puzzle of voter information is why voters are not even more poorly informed; press coverage of politics seems out of proportion to its entertainment value. Voters can, however, want to commit to becoming informed, either by learning about issues or by subscribing to sources, to convince candidates to take favorable positions. Voters can also prefer to be in large groups, and to be ignorant in certain ways. This complicates the evaluation of institutions, like voting pools, which reduce ignorance.</p>
<p>The puzzle of group insurance as a cure for adverse selection is why adverse selection should be less of a problem for groups than for individuals. The usual argument about the reduced variance of types in groups doesn't work in separating equilibria; what matters is the range, not the variance, of types. Democratic group choice can, however, narrow the group's type range by failing to represent part of the electorate. Furthermore, random juries can completely eliminate adverse selection losses.</p>
<p>The puzzle of persistent political disagreement is that for ideal Bayesians with common priors, the mere fact of a factual disagreement is enough of a clue to induce agreement. But what about agents like humans with severe computational limitations? If such agents agree that they are savvy in being aware of these limitations, then any factual disagreement implies disagreement about their average biases. Yet average bias can in principle be computed without any private information. Thus disagreements seem to be fundamentally about priors or computation, rather than information.</p>
https://thesis.library.caltech.edu/id/eprint/10345
Public Institutions and Private Incentives: Three Essays
https://resolver.caltech.edu/CaltechTHESIS:06302020-161202022
Authors: {'items': [{'id': 'Coughlan-Peter-Judd Coughlan', 'name': {'family': 'Coughlan', 'given': 'Peter Judd'}, 'show_email': 'NO'}]}
Year: 1999
DOI: 10.7907/4k08-rw73
No abstract available.
https://thesis.library.caltech.edu/id/eprint/13834
Asymmetric Information and Cooperation
https://resolver.caltech.edu/CaltechTHESIS:06292020-115132957
Authors: {'items': [{'email': 'kwasnica@psu.edu', 'id': 'Kwasnica-Anthony-Mark', 'name': {'family': 'Kwasnica', 'given': 'Anthony Mark'}, 'orcid': '0000-0001-6714-8147', 'show_email': 'NO'}]}
Year: 2000
DOI: 10.7907/7xpp-6f16
<p>This thesis investigates the theory of cooperative behavior in the presence of asymmetric information.</p>
<p>Traditionally, the core has been a powerful and much used solution concept to describe cooperative outcomes. In settings where agents have some private information, it may be appropriate to include the opportunity for communication in the development of the core. I study the relationship of various core solution concepts with prevalent noncooperative solution concepts for environments with asymmetric information. The core definitions examined vary by the level of communication assumed. In Chapter 2, I investigate the welfare properties of market equilibria. I demonstrate that appropriate communication restrictions can be placed on the core (and efficiency) in order to obtain first and second welfare theorems. In Chapter 3, I discuss the Bayesian implementation of core solutions. If full communication is assumed, Palfrey and Srivastava (1987) have shown that the core is not Bayesian implementable: a game cannot be constructed that has only core allocations as its equilibria. I demonstrate that communication restrictions on the core are sufficient to obtain positive Bayesian implementation results in the environment studied by Palfrey and Srivastava. In other words, a game can be constructed that entices noncooperative players to choose strategies that are cooperative under limited communication.</p>
<p>In Chapter 4, I examine cooperation between bidders in private value, sealed bid auctions. I assume that bidders can overcome their one period temptation to break any collusive agreement, and that they attempt to formulate a collusive mechanism. However, each bidder's valuations are still his own, private information. If he is not given the proper incentives, he may lie about his values in order to increase his profits. Therefore, any collusive mechanism must be incentive compatible and is likely to be, at a minimum, interim efficient. I demonstrate that the theory provides some predictions about the set of collusive mechanisms chosen by bidders and that, when moving to a setting where multiple objects are for sale, the set of feasible collusive mechanisms grows. When multiple objects are for sale, there exist incentive compatible mechanisms that are preferred by all bidders to the only incentive compatible mechanisms in the single object case. Laboratory experiments indicate that these predictions are often consistent with actual behavior. However, deviations by some bidders suggest some weaknesses in this approach.</p>
https://thesis.library.caltech.edu/id/eprint/13825
Interdependence in organizations and laboratory groups
https://resolver.caltech.edu/CaltechTHESIS:10052010-145303282
Authors: {'items': [{'id': 'Weber-R', 'name': {'family': 'Weber', 'given': 'Roberto'}, 'show_email': 'NO'}]}
Year: 2000
DOI: 10.7907/qnh7-dd04
Interdependence arises in organizations when the appropriate action by an individual or group depends on what action others are taking. The following chapters examine cases of interdependence through the lenses of game theory and laboratory experiments. The research focuses on two games — the dirty faces game and the weak-link coordination game — that, while apparently very different, are in fact quite similar in key aspects. The most important of these is that (under some assumptions) in both games the efficient or "high effort" action is only optimal if it is the action being taken by everyone.
The dirty faces game is first presented, analyzed and tested experimentally to determine whether actual behavior conforms to the optimal, theoretically proposed outcome. This is shown not to be the case even in simple versions of the game. The latter chapters examine a possible means for inducing optimal behavior in the weak-link coordination game. This method involves starting with small group sizes, which are better suited for efficient coordination, and then growing large groups slowly. The effectiveness of this method is supported by an example from the field, theory, and experimental results.
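The strategic tension in the weak-link coordination game can be sketched concretely. The linear minimum-effort payoff below (a common parameterization in the experimental literature, with a > b > 0) and the parameter values are illustrative assumptions, not necessarily those used in the thesis.

```python
# Illustrative weak-link (minimum effort) coordination game payoff.
def payoff(own, others, a=0.2, b=0.1):
    """Payoff = a * (group minimum effort) - b * (own effort), with a > b > 0."""
    return a * min([own] + list(others)) - b * own

def best_response(others, actions=range(1, 8), a=0.2, b=0.1):
    """Effort level maximizing own payoff given the other players' efforts."""
    return max(actions, key=lambda e: payoff(e, others, a, b))
```

Matching the group's minimum effort is always the best response, so every common effort level is an equilibrium; the highest effort is the efficient one, but choosing it pays off only when everyone else chooses it too.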
https://thesis.library.caltech.edu/id/eprint/6101
Voting Games with Incomplete Information
https://resolver.caltech.edu/CaltechTHESIS:03242014-155640911
Authors: {'items': [{'id': 'Patty-John-Wiggs', 'name': {'family': 'Patty', 'given': 'John Wiggs'}, 'orcid': '0000-0002-1142-9334', 'show_email': 'NO'}]}
Year: 2001
DOI: 10.7907/0ebb-vz47
<p>We examine voting situations in which individuals have incomplete information about each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.</p>
<p>Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result which states that almost all such electoral systems (with respect to the distributions of voter behavior) will exhibit different incentives for candidates who seek to maximize expected vote share and those who seek to maximize probability of victory.</p>
<p>In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters are playing a Quantal Response Equilibrium (McKelvey and Palfrey (1995), (1998)). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point which maximizes the sum of voters' utility functions). In two candidate contests we find that this equilibrium is unique.</p>
<p>Finally, in Chapter 4, we attempt the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.</p>
https://thesis.library.caltech.edu/id/eprint/8162
Voting and Electoral Competition
https://resolver.caltech.edu/CaltechETD:etd-01252008-133155
Authors: {'items': [{'id': 'Callander-Steven- Joseph', 'name': {'family': 'Callander', 'given': 'Steven Joseph'}, 'show_email': 'NO'}]}
Year: 2002
DOI: 10.7907/67R6-E540
<p>The behavior of individuals and groups in the political realm is subject to many and varied incentives. These incentives impact significantly not only the candidates who win elections, but also the policies that they implement. This thesis analyzes several aspects of this problem that have until now gone unexplained.</p>
<p>Part 1 contains two models of candidate competition. Chapter 1 details a model of competition under the plurality rule that simultaneously explains two well-documented empirical regularities: that typically only two parties compete in each election (Duverger's Law), and that these parties choose non-centrist policy platforms. I show that if, and only if, competition is for multiple districts does an equilibrium consistent with these phenomena exist. I characterize bounds on district heterogeneity for this to be true, which can be interpreted as describing the domain for Duverger's Law. In Chapter 2, I turn attention to the run-off rule and study a similar model to that of Chapter 1. I find that this subtle change to the counting rule has a significant impact on the incentives and equilibria of the model. In the traditional single district environment there now exists a continuum of two-party non-centrist equilibria, which are robust to simultaneous competition for multiple districts.</p>
<p>In Part 2, I investigate the behavior of voters, and particularly the effect of vote timing on voter behavior and election outcomes. In Chapter 3, I study a model of sequential voting and explain when and why the commonly observed phenomena of bandwagons and momentum arise. I show that only if voters have a desire to vote for the winning candidate, in addition to their desire to select the better candidate, is momentum observed and bandwagons begun. In Chapter 4, I compare these results with analogous results for when voting is simultaneous and characterize when each process is superior. The conclusions confirm commonly held views about the front-loading of U.S. presidential primaries: that in tight races a simultaneous vote is preferred, but in lopsided races a sequential vote is better. Strangely, the superior performance of sequential voting in lopsided races is precisely because bandwagons occur.</p>
https://thesis.library.caltech.edu/id/eprint/343
Principal Agent Models of Bureaucratic and Public Decision Making
https://resolver.caltech.edu/CaltechTHESIS:01302012-111845798
Authors: {'items': [{'id': 'Gailmard-Sean-Patrick', 'name': {'family': 'Gailmard', 'given': 'Sean Patrick'}, 'show_email': 'NO'}]}
Year: 2002
DOI: 10.7907/AAG7-XR53
<p>In this thesis I investigate three situations in which a principal must make a public decision. The optimal decision from the principal's point of view depends on information held only by agents, who have different preferences from the principal about how the information is used.</p>
<p>In the first two situations (Chapters 2 and 3) the principals and agents - legislatures and bureaus, respectively - are each part of the government and interact to create public policy. In Chapter 2 the bureau has private information about the cost of a public project, performed for multiple legislative principals who can each seek out cost information through oversight. The multiplicity of principals can cause the level of oversight to be inefficiently low due to a collective action problem. Further, the inefficiency becomes more likely as oversight becomes a more important part of the principals' utility functions, and as the oversight technology becomes more effective. For some parameters an increase in the effectiveness of the auditing technology reduces the welfare of the principals collectively.</p>
<p>In Chapter 3 the bureau has substantive expertise about the effects of various policy choices. The principal can delegate policy making authority to the bureau to tap its expertise, but bureaus are imperfectly controlled by statutory restrictions. On the other hand, the scope for delegation can be reduced endogenously if the legislature chooses to acquire its own substantive expertise. I examine how strategic accounting for both bureaucratic subversion and costly development of legislative expertise affects the legislature's delegation decision. I also show that legislatures may in fact want subversion to be "cheap," while bureaucrats may want their own authority constrained and subversion to be costly.</p>
<p>In the third situation (Chapter 4) the information desired by the principal is the valuation of an excludable public good for each member of society. I experimentally compare three collective choice procedures for determining public good consumption and cost shares. The first, Serial Cost Sharing, has attractive incentive properties but is not efficient; the other two are "hybrid" bidding procedures that never exclude any agents but are manipulable. I characterize Bayesian Nash equilibria in the hybrid mechanisms, and prove some more general properties as well. Serial Cost Sharing tends to elicit values successfully, but is outperformed on several efficiency criteria by a hybrid mechanism - despite its incentive problems and coordination problems due to multiple equilibria.</p>
https://thesis.library.caltech.edu/id/eprint/6792
Information Aggregation, with Application to Monotone Ordering, Advocacy, and Conviviality
https://resolver.caltech.edu/CaltechETD:etd-06022003-155827
Authors: {'items': [{'email': 'ben@klemens.org', 'id': 'Klemens-Ben', 'name': {'family': 'Klemens', 'given': 'Ben'}, 'orcid': '0000-0001-7845-5978', 'show_email': 'YES'}]}
Year: 2003
DOI: 10.7907/JB45-Z027
<p>I. Chapter 1 presents a convenient notation for describing methods of aggregating information to form posterior distributions, allowing a description of Bayesian updating and many of the cognitive errors people commit in the lab. Chapter 2 looks at the monotone ordering problem: if the prior distributions are ordered in some manner, what updating operations will preserve that ordering? Bayesian updating is a member of a small class of operators which preserve the monotone likelihood ratio property, but is not in the class of functions which preserve first-order stochastic dominance. It also considers ordering distributions by their medians, which is useful for Political Science and other decision-making applications.</p>
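The claim that Bayesian updating preserves the monotone likelihood ratio (MLR) ordering can be checked numerically: the posterior ratio equals the prior ratio times a constant, so an increasing ratio stays increasing. The distributions below are a toy example of mine, not data from the thesis.

```python
# Numeric illustration: Bayesian updating preserves the MLR ordering of priors.
def bayes_update(prior, likelihood):
    """Pointwise multiply prior by likelihood, then renormalize."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

f = [0.1, 0.2, 0.3, 0.4]     # f dominates g in the MLR sense:
g = [0.4, 0.3, 0.2, 0.1]     # the ratio f/g = .25, .67, 1.5, 4 is increasing
like = [0.5, 0.9, 0.2, 0.7]  # an arbitrary likelihood over four states

pf, pg = bayes_update(f, like), bayes_update(g, like)
ratios = [a / b for a, b in zip(pf, pg)]
# Each posterior ratio is (f_i/g_i) times a constant, so it stays increasing.
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))
```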
<p>II. Chapter 3 presents a literature review of existing models of information aggregation from one party, and gives the very weak conditions under which one or two biased advocates will always reveal full information. Chapter 4 then presents a model of a trial, in which events are grouped into causal stories. Each story may point to a specific verdict, but the judge has leeway in selecting a verdict when multiple stories are shown to simultaneously be sufficient to explain an event. Two judges may be "perfect Bayesians", share the same priors, and still arrive at different verdicts for the same trial. Unlike the information revelation literature to date, there may be apropos stories and facts that neither party will want to reveal in equilibrium.</p>
<p>III. Chapter 5 presents a simultaneous model of goods or actions which demonstrate conformity effects. Previous models of such goods universally describe people as acting in sequence; actors in the model here act simultaneously, so they must decide what to do based only on prior information about the distribution of tastes in the society. The shape of this distribution (e.g., centered around zero, skewed upward, or fat-tailed) predicts the number of people who will act in some systematic ways, which I catalog here.</p>
https://thesis.library.caltech.edu/id/eprint/2379