Book Section records
https://feeds.library.caltech.edu/people/Ligett-K/book_section.rss
A Caltech Library Repository Feed
Thu, 30 Nov 2023 18:11:40 +0000

Routing without regret: on convergence to Nash equilibria of regret-minimizing algorithms in routing games
https://resolver.caltech.edu/CaltechAUTHORS:20190111-133629898
Authors: Blum, Avrim; Even-Dar, Eyal; Ligett, Katrina
Year: 2006
DOI: 10.1145/1146381.1146392
There has been substantial work developing simple, efficient no-regret algorithms for a wide class of repeated decision-making problems including online routing. These are adaptive strategies an individual can use that give strong guarantees on performance even in adversarially-changing environments. There has also been substantial work on analyzing properties of Nash equilibria in routing games. In this paper, we consider the question: if each player in a routing game uses a no-regret strategy, will behavior converge to a Nash equilibrium? In general games the answer to this question is known to be no in a strong sense, but routing games have substantially more structure. In this paper we show that in the Wardrop setting of multicommodity flow and infinitesimal agents, behavior will approach Nash equilibrium (formally, on most days, the cost of the flow will be close to the cost of the cheapest paths possible given that flow) at a rate that depends polynomially on the players' regret bounds and the maximum slope of any latency function. We also show that price-of-anarchy results may be applied to these approximate equilibria, and we also consider the finite-size (non-infinitesimal) load-balancing model of Azar [2].
https://authors.library.caltech.edu/records/nqyrf-xn312

Compressing rectilinear pictures and minimizing access control lists
https://resolver.caltech.edu/CaltechAUTHORS:20190110-140236006
Authors: Applegate, David A.; Calinescu, Gruia; Johnson, David S.; Karloff, Howard; Ligett, Katrina; Wang, Jia
Year: 2007
We consider a geometric model for the problem of minimizing access control lists (ACLs) in network routers, a model that also has applications to rectilinear picture compression and figure drawing in common graphics software packages. Here the goal is to create a colored rectilinear pattern within an initially white rectangular canvas, and the basic operation is to choose a subrectangle and paint it a single color, overwriting all previous colors in the rectangle. Rectangle Rule List (RRL) minimization is the problem of finding the shortest list of rules needed to create a given pattern. ACL minimization is a restricted version of this problem where the set of allowed rectangles must correspond to pairs of IP address prefixes. Motivated by the ACL application, we study the special cases of RRL and ACL minimization in which all rectangles must be strips that extend either the full width or the full height of the canvas (strip-rules). We provide several equivalent characterizations of the patterns achievable using strip-rules and present polynomial-time algorithms for optimally constructing such patterns when, as in the ACL application, the only colors are black and white (permit or deny). We also show that RRL minimization is NP-hard in general and provide O(min(n^(1/3), OPT^(1/2)))-approximation algorithms for general RRL and ACL minimization by exploiting our results about strip-rule patterns.
https://authors.library.caltech.edu/records/r20jx-cdq31

Playing games with approximation algorithms
https://resolver.caltech.edu/CaltechAUTHORS:20190114-151448803
Authors: Kakade, Sham M.; Kalai, Adam Tauman; Ligett, Katrina
Year: 2007
DOI: 10.1145/1250790.1250870
In an online linear optimization problem, on each period t, an online algorithm chooses st ∈ S from a fixed (possibly infinite) set S of feasible decisions. Nature (who may be adversarial) chooses a weight vector wt ∈ R, and the algorithm incurs cost c(st,wt), where c is a fixed cost function that is linear in the weight vector. In the full-information setting, the vector wt is then revealed to the algorithm, and in the bandit setting, only the cost experienced, c(st,wt), is revealed. The goal of the online algorithm is to perform nearly as well as the best fixed s ∈ S in hindsight. Many repeated decision-making problems with weights fit naturally into this framework, such as online shortest-path, online TSP, online clustering, and online weighted set cover.
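The full-information setting described above can be illustrated with a generic no-regret method. The following is a minimal multiplicative-weights sketch over a small, explicitly enumerated decision set; it is illustrative only and is not the paper's construction, which handles exponentially large S through offline optimization oracles:

```python
import math

def multiplicative_weights(decisions, weight_vectors, eta=0.1):
    """Full-information online linear optimization over a finite set S.

    Each round, nature reveals a weight vector w_t; the cost of a
    decision s is the inner product <s, w_t>.  Returns the algorithm's
    expected cost in each round.  Generic sketch only.
    """
    probs = [1.0] * len(decisions)
    expected_costs = []
    for w in weight_vectors:
        costs = [sum(si * wi for si, wi in zip(s, w)) for s in decisions]
        total = sum(probs)
        expected_costs.append(sum(p * c for p, c in zip(probs, costs)) / total)
        # Exponentially down-weight decisions that just incurred high cost.
        probs = [p * math.exp(-eta * c) for p, c in zip(probs, costs)]
    return expected_costs

# Two decisions in R^2 facing an alternating cost sequence.
S = [(1.0, 0.0), (0.0, 1.0)]
ws = [(1.0, 0.0) if t % 2 else (0.0, 1.0) for t in range(200)]
alg_cost = sum(multiplicative_weights(S, ws))
best_fixed = min(sum(sum(si * wi for si, wi in zip(s, w)) for w in ws)
                 for s in S)
regret = alg_cost - best_fixed  # grows sublinearly in the horizon
```

Here each fixed decision accumulates cost 100 over 200 rounds, and the algorithm's excess cost over the best fixed decision stays small.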
Previously, it was shown how to convert any efficient exact offline optimization algorithm for such a problem into an efficient online algorithm in both the full-information and the bandit settings, with average cost nearly as good as that of the best fixed s ∈ S in hindsight. However, in the case where the offline algorithm is an approximation algorithm with ratio α > 1, the previous approach only worked for special types of approximation algorithms. We show how to convert any offline approximation algorithm for a linear optimization problem into a corresponding online approximation algorithm, with a polynomial blowup in runtime. If the offline algorithm has an α-approximation guarantee, then the expected cost of the online algorithm on any sequence is not much larger than α times that of the best s ∈ S, where the best is chosen with the benefit of hindsight. Our main innovation is combining Zinkevich's algorithm for convex optimization with a geometric transformation that can be applied to any approximation algorithm. Standard techniques generalize the above result to the bandit setting, except that a "Barycentric Spanner" for the problem is also (provably) necessary as input. Our algorithm can also be viewed as a method for playing large repeated games, where one can only compute approximate best-responses, rather than best-responses.
https://authors.library.caltech.edu/records/314q6-dwt10

Regret minimization and the price of total anarchy
https://resolver.caltech.edu/CaltechAUTHORS:20190111-151102188
Authors: Blum, Avrim; Hajiaghayi, MohammadTaghi; Ligett, Katrina; Roth, Aaron
Year: 2008
DOI: 10.1145/1374376.1374430
We propose weakening the assumption made when studying the price of anarchy: Rather than assume that self-interested players will play according to a Nash equilibrium (which may even be computationally hard to find), we assume only that selfish players play so as to minimize their own regret. Regret minimization can be done via simple, efficient algorithms even in many settings where the number of action choices for each player is exponential in the natural parameters of the problem. We prove that despite our weakened assumptions, in several broad classes of games, this "price of total anarchy" matches the Nash price of anarchy, even though play may never converge to Nash equilibrium. In contrast to the price of anarchy and the recently introduced price of sinking, which require all players to behave in a prescribed manner, we show that the price of total anarchy is in many cases resilient to the presence of Byzantine players, about whom we make no assumptions. Finally, because the price of total anarchy is an upper bound on the price of anarchy even in mixed strategies, for some games our results yield as corollaries previously unknown bounds on the price of anarchy in mixed strategies.
https://authors.library.caltech.edu/records/h5qsn-17346

A learning theory approach to non-interactive database privacy
https://resolver.caltech.edu/CaltechAUTHORS:20190114-152232213
Authors: Blum, Avrim; Ligett, Katrina; Roth, Aaron
Year: 2008
DOI: 10.1145/1374376.1374464
We demonstrate that, ignoring computational constraints, it is possible to release privacy-preserving databases that are useful for all queries over a discretized domain from any given concept class with polynomial VC-dimension. We show a new lower bound for releasing databases that are useful for halfspace queries over a continuous domain. Despite this, we give a privacy-preserving polynomial time algorithm that releases information useful for all halfspace queries, for a slightly relaxed definition of usefulness. Inspired by learning theory, we introduce a new notion of data privacy, which we call distributional privacy, and show that it is strictly stronger than the prevailing privacy notion, differential privacy.
https://authors.library.caltech.edu/records/vny00-ccp02

Differentially private combinatorial optimization
https://resolver.caltech.edu/CaltechAUTHORS:20190110-141744321
Authors: Gupta, Anupam; Ligett, Katrina; McSherry, Frank; Roth, Aaron; Talwar, Kunal
Year: 2009
DOI: 10.48550/arXiv.0903.4510
Consider the following problem: given a metric space, some of whose points are "clients," select a set of at most k facility locations to minimize the average distance from the clients to their nearest facility. This is just the well-studied k-median problem, for which many approximation algorithms and hardness results are known. Note that the objective function encourages opening facilities in areas where there are many clients, and given a solution, it is often possible to get a good idea of where the clients are located. This raises the following quandary: what if the locations of the clients are sensitive information that we would like to keep private? Is it even possible to design good algorithms for this problem that preserve the privacy of the clients?
In this paper, we initiate a systematic study of algorithms for discrete optimization problems in the framework of differential privacy (which formalizes the idea of protecting the privacy of individual input elements). We show that many such problems indeed have good approximation algorithms that preserve differential privacy; this is even in cases where it is impossible to preserve cryptographic definitions of privacy while computing any non-trivial approximation to even the value of an optimal solution, let alone the entire solution.
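For readers unfamiliar with the framework, the basic building block of differential privacy is calibrated noise addition. Below is a minimal sketch of the standard Laplace mechanism; it is background only, not the paper's (substantially more involved) combinatorial algorithms:

```python
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release a numeric query answer with Laplace noise of scale
    sensitivity/epsilon, the standard way to obtain epsilon-differential
    privacy for a query whose answer changes by at most `sensitivity`
    when one individual's data changes.
    """
    scale = sensitivity / epsilon
    # A Laplace variate is the difference of two i.i.d. exponentials.
    noise = rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    return true_answer + noise
```

A counting query (e.g., "how many clients are in this region?") has sensitivity 1, so releasing `laplace_mechanism(count, 1.0, epsilon)` hides any single client's presence.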
Apart from the k-median problem, we consider the problems of vertex and set cover, min-cut, k-median, facility location, and Steiner tree, and give approximation algorithms and lower bounds for these problems. We also consider the recently introduced sub-modular maximization problem, "Combinatorial Public Projects" (CPP), shown by Papadimitriou et al. [28] to be inapproximable to subpolynomial multiplicative factors by any efficient and truthful algorithm. We give a differentially private (and hence approximately truthful) algorithm that achieves a logarithmic additive approximation.
https://authors.library.caltech.edu/records/eg1xj-0yt63

Contention Resolution under Selfishness
https://resolver.caltech.edu/CaltechAUTHORS:20141208-131734269
Authors: Christodoulou, George; Ligett, Katrina; Pyrga, Evangelia
Year: 2010
In many communications settings, such as wired and wireless local-area networks, when multiple users attempt to access a communication channel at the same time, a conflict results and none of the communications are successful. Contention resolution is the study of distributed transmission and retransmission protocols designed to maximize notions of utility such as channel utilization in the face of blocking communications.
An additional issue to be considered in the design of such protocols is that selfish users may have incentive to deviate from the prescribed behavior, if another transmission strategy increases their utility. The work of Fiat et al. [8] addresses this issue by constructing an asymptotically optimal incentive-compatible protocol. However, their protocol assumes the cost of any single transmission is zero, and the protocol completely collapses under non-zero transmission costs.
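As a baseline for intuition, the classic slotted-ALOHA dynamic (each pending user transmits with probability inversely proportional to the number of contenders) can be simulated directly. This is an illustrative sketch, not the paper's protocol; it shows why a per-transmission cost matters, since this dynamic makes many attempts per success:

```python
import random

def aloha(n, rng=None):
    """Simulate slotted-ALOHA contention resolution for n users: in
    each slot every pending user transmits with probability
    1/(number still pending), and a slot succeeds iff exactly one user
    transmits.  Returns (slots used, total transmission attempts).
    """
    rng = rng or random.Random(0)
    pending, slots, attempts = n, 0, 0
    while pending:
        slots += 1
        tx = sum(rng.random() < 1.0 / pending for _ in range(pending))
        attempts += tx
        if tx == 1:
            pending -= 1  # the lone transmitter succeeds and leaves
    return slots, attempts
```

With a per-transmission cost c, the simulated total cost is roughly slots + c * attempts, and attempts here grows linearly in n; contrast this with the much smaller c log n transmission-cost term achieved by the protocol in this paper.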
In this paper, we treat the case of non-zero transmission cost c. We present asymptotically optimal contention resolution protocols that are robust to selfish users, in two different channel feedback models. Our main result is in the Collision Multiplicity Feedback model, where after each time slot, the number of attempted transmissions is returned as feedback to the users. In this setting, we give a protocol that has expected cost 2n + c log n and is in o(1)-equilibrium, where n is the number of users.
https://authors.library.caltech.edu/records/ykzbe-ddj35

Beyond myopic best response (in Cournot competition)
https://resolver.caltech.edu/CaltechAUTHORS:20120615-115818156
Authors: Fiat, Amos; Koutsoupias, Elias; Ligett, Katrina; Mansour, Yishay; Olonetsky, Svetlana
Year: 2012
A Nash Equilibrium is a joint strategy profile at which each agent myopically plays a best response to the other agents' strategies, ignoring the possibility that deviating from the equilibrium could lead to an avalanche of successive changes by other agents. However, such changes could potentially be beneficial to the agent, creating incentive to act non-myopically, so as to take advantage of others' responses.
To study this phenomenon, we consider a non-myopic Cournot competition, where each firm selects whether it wants to maximize profit (as in the classical Cournot competition) or to maximize revenue (by masquerading as a firm with zero production costs).
The key observation is that profit may actually be higher when acting to maximize revenue, (1) which will depress market prices, (2) which will reduce the production of other firms, (3) which will gain market share for the revenue maximizing firm, (4) which will, overall, increase profits for the revenue maximizing firm. Implicit in this line of thought is that one might take other firms' responses into account when choosing a market strategy. The Nash Equilibria of the non-myopic Cournot competition capture this action/response issue appropriately, and this work is a step towards understanding the impact of such strategic manipulative play in markets.
We study the properties of Nash Equilibria of non-myopic Cournot competition with linear demand functions and show existence of pure Nash Equilibria, that simple best response dynamics will produce such an equilibrium, and that for some natural dynamics this convergence is within linear time. This is in contrast to the well known fact that best response dynamics need not converge in the standard myopic Cournot competition.
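Although best-response dynamics need not converge in the myopic game in general, simple cases do, and the dynamic itself is easy to state. The following sketch runs sequential myopic best-response dynamics for a linear-demand duopoly with inverse demand P(Q) = a - Q and constant marginal costs; it is illustrative only and omits the non-myopic masquerading option studied in the paper:

```python
def best_response_dynamics(a, costs, rounds=200):
    """Sequential myopic best-response dynamics in a linear Cournot
    market: firm i's best response to the others' total quantity Q_-i
    is max(0, (a - c_i - Q_-i) / 2).  Returns the quantity profile.
    """
    q = [0.0] * len(costs)
    for _ in range(rounds):
        for i, c in enumerate(costs):
            others = sum(q) - q[i]
            q[i] = max(0.0, (a - c - others) / 2.0)
    return q

# Two identical firms: the dynamic converges to the Cournot duopoly
# quantity (a - c) / 3 for each firm.
profile = best_response_dynamics(10.0, [1.0, 1.0])
```

With a = 10 and c = 1, both quantities approach 3, the unique Cournot equilibrium of the duopoly.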
Furthermore, we compare the outcome of the non-myopic Cournot competition with that of the standard myopic Cournot competition. Not surprisingly, perhaps, prices in the non-myopic game are lower and the firms, in total, produce more and have a lower aggregate utility.
https://authors.library.caltech.edu/records/4wmvt-08m93

Privacy and Coordination: Computing on Databases with Endogenous Participation
https://resolver.caltech.edu/CaltechAUTHORS:20131009-104946452
Authors: Ghosh, Arpita; Ligett, Katrina
Year: 2013
DOI: 10.1145/2482540.2482585
We propose a simple model where individuals in a privacy-sensitive population decide whether or not to participate in a pre-announced noisy computation by an analyst, so that the database itself is endogenously determined by individuals' participation choices. The privacy an agent receives depends both on the announced noise level and on how many agents choose to participate in the database. Each agent has some minimum privacy requirement, and decides whether or not to participate based on how her privacy requirement compares against her expectation of the privacy she will receive if she participates in the computation. This gives rise to a game amongst the agents, where each individual's privacy if she participates, and therefore her participation choice, depends on the choices of the rest of the population.
We investigate symmetric Bayes-Nash equilibria, which in this game consist of threshold strategies, where all agents whose privacy requirements are weaker than a certain threshold participate and the remaining agents do not. We characterize these equilibria, which depend both on the noise announced by the analyst and the population size; present results on existence, uniqueness, and multiplicity; and discuss a number of surprising properties they display.
https://authors.library.caltech.edu/records/6kpgm-83x93

Improved Bounds on the Price of Stability in Network Cost Sharing Games
https://resolver.caltech.edu/CaltechAUTHORS:20131009-110149445
Authors: Lee, Euiwoong; Ligett, Katrina
Year: 2013
DOI: 10.1145/2482540.2482562
We study the price of stability in undirected network design games with fair cost sharing. Our work provides multiple new pieces of evidence that the true price of stability, at least for special subclasses of games, may be a constant.
We make progress on this long-outstanding problem, giving a bound of O(log log log n) on the price of stability of undirected broadcast games (where n is the number of players). This is the first progress on the upper bound for this problem since the O(log log n) bound of [Fiat et al. 2006] (despite much attention, the known lower bound remains at 1.818, from [Bilò et al. 2010]). Our proofs introduce several new techniques that may be useful in future work.
We provide further support for the conjectured constant price of stability in the form of a comprehensive analysis of an alternative solution concept that forces deviating players to bear the entire costs of building alternative paths. This solution concept includes all Nash equilibria and can be viewed as a relaxation thereof, but we show that it preserves many properties of Nash equilibria. We prove that the price of stability in multicast games for this relaxed solution concept is Θ(1), which may suggest that similar results should hold for Nash equilibria. This result also demonstrates that the existing techniques for lower bounds on the Nash price of stability in undirected network design games cannot be extended to be super-constant, as our relaxation concept encompasses all equilibria constructed in them.
https://authors.library.caltech.edu/records/q08ge-0qx48

A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret
https://resolver.caltech.edu/CaltechAUTHORS:20131008-164143666
Authors: Andrew, Lachlan; Barman, Siddharth; Ligett, Katrina; Lin, Minghong; Meyerson, Adam; Roytman, Alan; Wierman, Adam
Year: 2013
DOI: 10.1145/2465529.2465533
We consider algorithms for "smoothed online convex optimization" (SOCO) problems, which are a hybrid between online convex optimization (OCO) and metrical task system (MTS) problems. Historically, the performance metric for OCO was regret and that for MTS was competitive ratio (CR). There are algorithms with either sublinear regret or constant CR, but no known algorithm achieves both simultaneously. We show that this is a fundamental limitation: no algorithm (deterministic or randomized) can achieve sublinear regret and a constant CR, even when the objective functions are linear and the decision space is one-dimensional. However, we present an algorithm that, for the important one-dimensional case, provides sublinear regret and a CR that grows arbitrarily slowly.
https://authors.library.caltech.edu/records/1dhgx-3ge03

Privacy as a coordination game
https://resolver.caltech.edu/CaltechAUTHORS:20170125-140637576
Authors: Ghosh, Arpita; Ligett, Katrina
Year: 2013
DOI: 10.1109/Allerton.2013.6736721
In Ghosh-Ligett 2013, we propose a simple model where individuals in a privacy-sensitive population with privacy requirements decide whether or not to participate in a pre-announced noisy computation by an analyst, so that the database itself is endogenously determined by individuals' participation choices. The privacy an agent receives depends both on the announced noise level and on how many agents choose to participate in the database. Agents decide whether or not to participate based on how their privacy requirement compares against their expectation of the privacy they will receive. This gives rise to a game amongst the agents, where each individual's privacy if she participates, and therefore her participation choice, depends on the choices of the rest of the population. We investigate symmetric Bayes-Nash equilibria in this game which consist of threshold strategies, where all agents with requirements above a certain threshold participate and the remaining agents do not. We characterize these equilibria, which depend both on the noise announced by the analyst and the population size; present results on existence, uniqueness, and multiplicity; and discuss a number of surprising properties they display.
https://authors.library.caltech.edu/records/ravn2-rxd65

Network Improvement for Equilibrium Routing
https://resolver.caltech.edu/CaltechAUTHORS:20150223-101614687
Authors: Bhaskar, Umang; Ligett, Katrina; Schulman, Leonard J.
Year: 2014
In routing games, agents pick routes through a network to minimize their own delay. A primary concern for the network designer in routing games is the average agent delay at equilibrium. A number of methods to control this average delay have received substantial attention, including network tolls, Stackelberg routing, and edge removal.
A related approach with arguably greater practical relevance is that of making investments in improvements to the edges of the network, so that, for a given investment budget, the average delay at equilibrium in the improved network is minimized. This problem has received considerable attention in the literature on transportation research. We study a model for this problem introduced in the transportation research literature, and present both hardness results and algorithms that obtain tight performance guarantees.
– In general graphs, we show that a simple algorithm obtains a 4/3-approximation for affine delay functions and an O(p/log p)-approximation for polynomial delay functions of degree p. For affine delays, we show that it is NP-hard to improve upon the 4/3 approximation.
– Motivated by the practical relevance of the problem, we consider restricted topologies to obtain better bounds. In series-parallel graphs, we show that the problem is still NP-hard. However, we show that there is an FPTAS in this case.
– Finally, for graphs consisting of parallel paths, we show that an optimal allocation can be obtained in polynomial time.
https://authors.library.caltech.edu/records/59t3z-qw360

Buying Private Data without Verification
https://resolver.caltech.edu/CaltechAUTHORS:20140804-112502954
Authors: Ghosh, Arpita; Ligett, Katrina; Roth, Aaron; Schoenebeck, Grant
Year: 2014
DOI: 10.1145/2600057.2602902
We consider the problem of designing a survey to aggregate non-verifiable information from a privacy-sensitive population: an analyst wants to compute some aggregate statistic from the private bits held by each member of a population, but cannot verify the correctness of the bits reported by participants in his survey. Individuals in the population are strategic agents with a cost for privacy, i.e., they not only account for the payments they expect to receive from the mechanism, but also their privacy costs from any information revealed about them by the mechanism's outcome---the computed statistic as well as the payments---to determine their utilities. How can the analyst design payments to obtain an accurate estimate of the population statistic when individuals strategically decide both whether to participate and whether to truthfully report their sensitive information?
We design a differentially private peer-prediction mechanism [Miller et al. 2005] that supports accurate estimation of the population statistic as a Bayes-Nash equilibrium in settings where agents have explicit preferences for privacy. The mechanism requires knowledge of the marginal prior distribution on bits bi, but does not need full knowledge of the marginal distribution on the costs ci, instead requiring only an approximate upper bound. Our mechanism guarantees ε-differential privacy to each agent i against any adversary who can observe the statistical estimate output by the mechanism, as well as the payments made to the n-1 other agents j ≠ i. Finally, we show that with slightly more structured assumptions on the privacy cost functions of each agent [Chen et al. 2013], the cost of running the survey goes to 0 as the number of agents diverges.
https://authors.library.caltech.edu/records/z1de8-h2t02

Achieving Target Equilibria in Network Routing Games without Knowing the Latency Functions
https://resolver.caltech.edu/CaltechAUTHORS:20160105-073143688
Authors: Bhaskar, Umang; Ligett, Katrina; Schulman, Leonard J.; Swamy, Chaitanya
Year: 2014
DOI: 10.1109/FOCS.2014.12
The analysis of network routing games typically assumes, right at the onset, precise and detailed information about the latency functions. Such information may, however, be unavailable or difficult to obtain. Moreover, one is often primarily interested in enforcing a desirable target flow as the equilibrium by suitably influencing player behavior in the routing game. We ask whether one can achieve target flows as equilibria without knowing the underlying latency functions. Our main result gives a crisp positive answer to this question. We show that, under fairly general settings, one can efficiently compute edge tolls that induce a given target multicommodity flow in a nonatomic routing game using a polynomial number of queries to an oracle that takes candidate tolls as input and returns the resulting equilibrium flow. This result is obtained via a novel application of the ellipsoid method, and applies to arbitrary multicommodity settings and non-linear latency functions. Our algorithm extends easily to many other settings, such as (i) when certain edges cannot be tolled or there is an upper bound on the total toll paid by a user, and (ii) general nonatomic congestion games. We obtain tighter bounds on the query complexity for series-parallel networks, and single-commodity routing games with linear latency functions, and complement these with a query-complexity lower bound applicable even to single-commodity routing games on parallel-link graphs with linear latency functions. We also explore the use of Stackelberg routing to achieve target equilibria and obtain strong positive results for series-parallel graphs. Our results build upon various new techniques that we develop pertaining to the computation of, and connections between, different notions of approximate equilibrium, properties of multicommodity flows and tolls in series-parallel graphs, and sensitivity of equilibrium flow with respect to tolls. 
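In the simplest instance of this query model, two parallel links and one unit of demand, inducing a target flow reduces to a one-dimensional search. The sketch below illustrates that special case; the hypothetical `oracle` stands in for observing the equilibrium flow under candidate tolls, and the paper's ellipsoid-based algorithm is what handles general multicommodity networks:

```python
def toll_for_target(oracle, target, lo=0.0, hi=100.0, iters=60):
    """Bisection for a toll on link 1 of a two-parallel-link network.
    `oracle(toll)` returns the equilibrium flow on link 1, which is
    non-increasing in the toll.  Finds a toll inducing `target` flow
    using only equilibrium queries, never the latency functions.
    """
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if oracle(mid) > target:
            lo = mid  # flow still too high: the toll must be larger
        else:
            hi = mid
    return (lo + hi) / 2.0

def oracle(t):
    # Instance hidden from the algorithm: l1(x) = x, l2(x) = 2x,
    # demand 1.  Wardrop equilibrium with toll t: x + t = 2(1 - x).
    return min(1.0, max(0.0, (2.0 - t) / 3.0))

toll = toll_for_target(oracle, 0.5)  # exact answer is t = 0.5
```

The algorithm never inspects the latency functions; it only queries the toll-to-flow map, which is the spirit of the paper's query model.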
Our results demonstrate that one can indeed circumvent the potentially onerous task of modeling latency functions, and yet obtain meaningful results for the underlying routing game.
https://authors.library.caltech.edu/records/4char-3ct96

Accuracy for Sale: Aggregating Data with a Variance Constraint
https://resolver.caltech.edu/CaltechAUTHORS:20150218-142919409
Authors: Cummings, Rachel; Ligett, Katrina; Roth, Aaron; Wu, Zhiwei-Steven; Ziani, Juba
Year: 2015
DOI: 10.1145/2688073.2688106
We consider the problem of a data analyst who may purchase an unbiased estimate of some statistic from multiple data providers. From each provider i, the analyst has a choice: she may purchase an estimate from that provider that has variance chosen from a finite menu of options. Each level of variance has a cost associated with it, reported (possibly strategically) by the data provider. The analyst wants to choose the minimum cost set of variance levels, one from each provider, that will let her combine her purchased estimators into an aggregate estimator that has variance at most some fixed desired level. Moreover, she wants to do so in such a way that incentivizes the data providers to truthfully report their costs to the mechanism. We give a dominant strategy truthful solution to this problem that yields an estimator that has optimal expected cost, and violates the variance constraint by at most an additive term that tends to zero as the number of data providers grows large.
https://authors.library.caltech.edu/records/6kzt7-vbz69

Finding Any Nontrivial Coarse Correlated Equilibrium Is Hard
https://resolver.caltech.edu/CaltechAUTHORS:20150715-141510461
Authors: Barman, Siddharth; Ligett, Katrina
Year: 2015
DOI: 10.1145/2764468.2764497
One of the most appealing aspects of the (coarse) correlated equilibrium concept is that natural dynamics quickly arrive at approximations of such equilibria, even in games with many players. In addition, there exist polynomial-time algorithms that compute exact (coarse) correlated equilibria. In light of these results, a natural question is how good are the (coarse) correlated equilibria that can arise from any efficient algorithm or dynamics.
In this paper we address this question, and establish strong negative results. In particular, we show that in multiplayer games that have a succinct representation, it is NP-hard to compute any coarse correlated equilibrium (or approximate coarse correlated equilibrium) with welfare strictly better than the worst possible. The focus on succinct games ensures that the underlying complexity question is interesting; many multiplayer games of interest are in fact succinct. Our results imply that, while one can efficiently compute a coarse correlated equilibrium, one cannot provide any nontrivial welfare guarantee for the resulting equilibrium, unless P=NP. We show that analogous hardness results hold for correlated equilibria, and persist under the egalitarian objective or Pareto optimality.
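As a concrete reference point for the solution concept, whether a given distribution over joint action profiles is a coarse correlated equilibrium can be checked by brute force in tiny games, by computing the best gain from a fixed unilateral deviation. This is a definitional sketch only, unrelated to the paper's hardness constructions (which concern succinct games):

```python
def cce_violation(utilities, dist):
    """Maximum gain any player can obtain by switching to a fixed
    action while everyone else follows the distribution `dist` over
    joint profiles.  `dist` is an epsilon-coarse-correlated equilibrium
    iff the returned value is at most epsilon.

    utilities: dict profile-tuple -> tuple of per-player payoffs
    dist:      dict profile-tuple -> probability
    """
    n = len(next(iter(utilities)))
    actions = [sorted({p[i] for p in utilities}) for i in range(n)]
    worst = 0.0
    for i in range(n):
        on_path = sum(pr * utilities[p][i] for p, pr in dist.items())
        for a in actions[i]:
            deviate = sum(pr * utilities[p[:i] + (a,) + p[i + 1:]][i]
                          for p, pr in dist.items())
            worst = max(worst, deviate - on_path)
    return worst

# Matching pennies: the uniform distribution over all four profiles
# is a coarse correlated equilibrium (violation 0).
mp = {('H', 'H'): (1, -1), ('H', 'T'): (-1, 1),
      ('T', 'H'): (-1, 1), ('T', 'T'): (1, -1)}
uniform = {p: 0.25 for p in mp}
```

The hardness results above say that finding such a distribution with good welfare is intractable, even though verifying one, as here, is easy.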
To complement the hardness results, we develop an algorithmic framework that identifies settings in which we can efficiently compute an approximate correlated equilibrium with near-optimal welfare. We use this framework to develop an efficient algorithm for computing an approximate correlated equilibrium with near-optimal welfare in aggregative games.
https://authors.library.caltech.edu/records/8xxfd-zzx03

Commitment in First-Price Auctions
https://resolver.caltech.edu/CaltechAUTHORS:20160105-095011433
Authors: Xu, Yunjian; Ligett, Katrina
Year: 2015
DOI: 10.1007/978-3-662-48433-3_23
We study a variation of the single-item sealed-bid first-price auction where one bidder (the leader) is given the option to publicly pre-commit to a distribution from which her bid will be drawn.
https://authors.library.caltech.edu/records/xgb9b-mf333

Approximating Nash Equilibria in Tree Polymatrix Games
https://resolver.caltech.edu/CaltechAUTHORS:20160106-134033532
Authors: Barman, Siddharth; Ligett, Katrina; Piliouras, Georgios
Year: 2015
DOI: 10.1007/978-3-662-48433-3_22
We develop a quasi-polynomial time Las Vegas algorithm for approximating Nash equilibria in polymatrix games over trees, under a mild renormalizing assumption. Our result, in particular, leads to an expected polynomial-time algorithm for computing approximate Nash equilibria of tree polymatrix games in which the number of actions per player is a fixed constant. Further, for trees with constant degree, the running time of the algorithm matches the best known upper bound for approximating Nash equilibria in bimatrix games (Lipton, Markakis, and Mehta 2003).
Notably, this work closely complements the hardness result of Rubinstein (2015), which establishes the inapproximability of Nash equilibria in polymatrix games over constant-degree bipartite graphs with two actions per player.
https://authors.library.caltech.edu/records/gj24f-eqj11

Coordination Complexity: Small Information Coordinating Large Populations
https://resolver.caltech.edu/CaltechAUTHORS:20160120-105900294
Authors: Cummings, Rachel; Ligett, Katrina; Radhakrishnan, Jaikumar; Roth, Aaron; Wu, Zhiwei Steven
Year: 2016
DOI: 10.1145/2840728.2840767
We initiate the study of a quantity that we call coordination complexity. In a distributed optimization problem, the information defining a problem instance is distributed among n parties, who need to each choose an action, which jointly will form a solution to the optimization problem. The coordination complexity represents the minimal amount of information that a centralized coordinator, who has full knowledge of the problem instance, needs to broadcast in order to coordinate the n parties to play a nearly optimal solution.
We show that upper bounds on the coordination complexity of a problem imply the existence of good jointly differentially private algorithms for solving that problem, which in turn are known to upper bound the price of anarchy in certain games with dynamically changing populations.
We show several results. We fully characterize the coordination complexity for the problem of computing a many-to-one matching in a bipartite graph. Our upper bound in fact extends much more generally to the problem of solving a linearly separable convex program. We also give a different upper bound technique, which we use to bound the coordination complexity of coordinating a Nash equilibrium in a routing game, and of computing a stable matching.
https://authors.library.caltech.edu/records/r2f29-gda78
The Strange Case of Privacy in Equilibrium Models
https://resolver.caltech.edu/CaltechAUTHORS:20161117-133623412
Authors: Cummings, Rachel; Ligett, Katrina; Pai, Mallesh M.; Roth, Aaron
Year: 2016
DOI: 10.1145/2940716.2940740
We study how privacy technologies affect user and advertiser behavior in a simple economic model of targeted advertising. In our model, a consumer first decides whether or not to buy a good, and then an advertiser chooses an advertisement to show the consumer. The consumer's value for the good is correlated with her type, which determines which ad the advertiser would prefer to show to her---and hence, the advertiser would like to use information about the consumer's purchase decision to target the ad that he shows.
In our model, the advertiser is given only a differentially private signal about the consumer's behavior---which can range from no signal at all to a perfect signal, as we vary the differential privacy parameter. This allows us to study equilibrium behavior as a function of the level of privacy provided to the consumer. We show that this behavior can be highly counter-intuitive, and that the effect of adding privacy in equilibrium can be completely different from what we would expect if we ignored equilibrium incentives. Specifically, we show that increasing the level of privacy can actually increase the amount of information about the consumer's type contained in the signal the advertiser receives, decrease the consumer's utility, and increase the advertiser's profit, and that generally these quantities can be non-monotonic and even discontinuous in the privacy level of the signal.
https://authors.library.caltech.edu/records/9pm7g-dq484
Accuracy First: Selecting a Differential Privacy Level for Accuracy-Constrained ERM
https://resolver.caltech.edu/CaltechAUTHORS:20190107-104411349
Authors: Ligett, Katrina; Neel, Seth; Roth, Aaron; Waggoner, Bo; Wu, Zhiwei Steven
Year: 2017
DOI: 10.48550/arXiv.1705.10829
Traditional approaches to differential privacy assume a fixed privacy requirement epsilon for a computation, and attempt to maximize the accuracy of the computation subject to the privacy constraint. As differential privacy is increasingly deployed in practical settings, it may often be that there is instead a fixed accuracy requirement for a given computation and the data analyst would like to maximize the privacy of the computation subject to the accuracy constraint. This raises the question of how to find and run a maximally private empirical risk minimizer subject to a given accuracy requirement. We propose a general "noise reduction" framework that can apply to a variety of private empirical risk minimization (ERM) algorithms, using them to "search" the space of privacy levels to find the empirically strongest one that meets the accuracy constraint, and incurring only logarithmic overhead in the number of privacy levels searched. The privacy analysis of our algorithm leads naturally to a version of differential privacy where the privacy parameters are dependent on the data, which we term ex-post privacy, and which is related to the recently introduced notion of privacy odometers. We also give an ex-post privacy analysis of the classical AboveThreshold privacy tool, modifying it to allow for queries chosen depending on the database. Finally, we apply our approach to two common objective functions, regularized linear and logistic regression, and empirically compare our noise reduction methods to (i) inverting the theoretical utility guarantees of standard private ERM algorithms and (ii) a stronger, empirical baseline based on binary search.
https://authors.library.caltech.edu/records/9c4jq-d7m27
Access to Population-Level Signaling as a Source of Inequality
https://resolver.caltech.edu/CaltechAUTHORS:20190111-072027347
Authors: Immorlica, Nicole; Ligett, Katrina; Ziani, Juba
Year: 2019
DOI: 10.1145/3287560.3287579
We identify and explore differential access to population-level signaling (also known as information design) as a source of unequal access to opportunity. A population-level signaler has potentially noisy observations of a binary type for each member of a population and, based on this, produces a signal about each member. A decision-maker infers types from signals and accepts those individuals whose type is high in expectation. We assume the signaler of the disadvantaged population reveals her observations to the decision-maker, whereas the signaler of the advantaged population forms signals strategically. We study the expected utility of the populations as measured by the fraction of accepted members, as well as the false positive rates (FPR) and false negative rates (FNR).
We first show the intuitive results that for a fixed environment, the advantaged population has higher expected utility, higher FPR, and lower FNR than the disadvantaged one (despite having identical population quality), and that more accurate observations improve the expected utility of the advantaged population while harming that of the disadvantaged one. We next explore the introduction of a publicly observable signal, such as a test score, as a potential intervention. Our main finding is that this natural intervention, intended to reduce the inequality between the populations' utilities, may actually exacerbate it in settings where observations and test scores are noisy.
https://authors.library.caltech.edu/records/88yr2-tsh38