Article records
https://feeds.library.caltech.edu/people/Wierman-A/article.rss
A Caltech Library Repository Feed
http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build date: Tue, 16 Apr 2024 00:29:04 +0000

Asymptotic convergence of scheduling policies with respect to slowdown
https://resolver.caltech.edu/CaltechAUTHORS:20201020-074935944
Authors: Mor Harchol-Balter, Karl Sigman, Adam Wierman
Year: 2002
DOI: 10.1016/s0166-5316(02)00132-3
We explore the performance of an M/GI/1 queue under various scheduling policies from the perspective of a new metric: the slowdown experienced by the largest jobs. We consider scheduling policies that bias against large jobs, towards large jobs, and those that are fair, e.g., processor-sharing (PS). We prove that as job size increases to infinity, all work conserving policies converge almost surely with respect to this metric to no more than 1/(1−ρ), where ρ denotes the load. We also find that the expected slowdown under any work conserving policy can be made arbitrarily close to that under PS, for all job sizes that are sufficiently large.
https://authors.library.caltech.edu/records/5g47m-4sk44

Understanding the slowdown of large jobs in an M/GI/1 system
https://resolver.caltech.edu/CaltechAUTHORS:20201104-093156865
Authors: Mor Harchol-Balter, Karl Sigman, Adam Wierman
Year: 2002
DOI: 10.1145/605521.605526
We explore the performance of an M/GI/1 queue under various scheduling policies from the perspective of a new metric: the slowdown experienced by the largest jobs. We consider scheduling policies that bias against large jobs, towards large jobs, and those that are fair, e.g., Processor-Sharing. We prove that as job size increases to infinity, all work conserving policies converge almost surely with respect to this metric to no more than 1/(1−ρ), where ρ denotes the load. We also find that the expected slowdown under any work conserving policy can be made arbitrarily close to that under Processor-Sharing, for all job sizes that are sufficiently large.
https://authors.library.caltech.edu/records/5w429-jkq43

Classifying scheduling policies with respect to unfairness in an M/GI/1
https://resolver.caltech.edu/CaltechAUTHORS:20201104-093530569
Authors: Adam Wierman, Mor Harchol-Balter
Year: 2003
DOI: 10.1145/885651.781057
It is common to evaluate scheduling policies based on their mean response times. Another important, but sometimes opposing, performance metric is a scheduling policy's fairness. For example, a policy that biases towards small job sizes so as to minimize mean response time may end up being unfair to large job sizes. In this paper we define three types of unfairness and demonstrate large classes of scheduling policies that fall into each type. We end with a discussion on which jobs are the ones being treated unfairly.
https://authors.library.caltech.edu/records/rxyej-tnj09

Modeling TCP-vegas under on/off traffic
https://resolver.caltech.edu/CaltechAUTHORS:20201104-094050639
Authors: Adam Wierman, Takayuki Osogami, Jörgen Olsén
Year: 2003
DOI: 10.1145/959143.959146
There has been a significant amount of research toward modeling variants of the Transmission Control Protocol (TCP) in order to understand the impact of this protocol on file transmission times and network utilization. Analytical models have emerged as a way to reduce the time required for evaluation when compared with more traditional evaluations performed using event driven simulators such as ns. In addition, when designed carefully, analytical models help researchers make design decisions about novel TCP mechanisms.
https://authors.library.caltech.edu/records/ecw9p-jck30

A note on comparing response times in the M/GI/1/FB and M/GI/1/PS queues
https://resolver.caltech.edu/CaltechAUTHORS:20200729-101550519
Authors: Adam Wierman, Nikhil Bansal, Mor Harchol-Balter
Year: 2004
DOI: 10.1016/s0167-6377(03)00061-0
We compare the overall mean response time (a.k.a. sojourn time) of the processor sharing (PS) and feedback (FB) queues under an M/GI/1 system. We show that FB outperforms PS under service distributions having decreasing failure rates, whereas PS outperforms FB under service distributions having increasing failure rates.
https://authors.library.caltech.edu/records/zndfe-chp38

An improved upper bound for the pebbling threshold of the n-path
https://resolver.caltech.edu/CaltechAUTHORS:20201020-083144033
Authors: Adam Wierman, Julia Salzman, Michael Jablonski, Anant P. Godbole
Year: 2004
DOI: 10.1016/j.disc.2002.10.001
Given a configuration of t indistinguishable pebbles on the n vertices of a graph G, we say that a vertex v can be reached if a pebble can be placed on it in a finite number of "moves". G is said to be pebbleable if all its vertices can be thus reached. Now given the n-path P_n, how large (resp. small) must t be so as to be able to pebble the path almost surely (resp. almost never)? It was known that the threshold th(P_n) for pebbling the path satisfies n·2^(c√lg n) ⩽ th(P_n) ⩽ n·2^(2√lg n), where lg = log₂ and c < 1/√2 is arbitrary. We improve the upper bound for the threshold function to th(P_n) ⩽ n·2^(d√lg n), where d > 1 is arbitrary.
https://authors.library.caltech.edu/records/n9rp8-1xb11

A recursive analysis technique for multi-dimensionally infinite Markov chains
https://resolver.caltech.edu/CaltechAUTHORS:20201104-094617374
Authors: Takayuki Osogami, Adam Wierman, Mor Harchol-Balter, Alan Scheller-Wolf
Year: 2004
DOI: 10.1145/1035334.1035337
Performance analysis of multiserver systems with multiple classes of jobs often has a common source of difficulty: the state space needed to capture the system behavior grows infinitely in multiple dimensions. For example, consider two processors, each serving its own M/M/1 queue, where one of the processors (the "donor") can help the other processor (the "beneficiary") with its jobs, during times when the donor processor is idle [5, 16] or when some threshold conditions are met [14, 15]. Since the behavior of beneficiary jobs depends on the number of donor jobs in system, performance analysis of beneficiary jobs involves a two dimensionally infinite (2D-infinite) state space, where one dimension corresponds to the number of beneficiary jobs and the other dimension corresponds to the number of donor jobs. Another example is an M/M/2 queue with two priority classes, where high priority jobs have preemptive priority over low priority jobs (see for example [1, 3, 4, 8, 10, 11, 12, 17] and references therein). Since the behavior of low priority jobs depends on the number of high priority jobs in system, performance analysis of low priority jobs involves a 2D-infinite state space, where each dimension corresponds to the number of jobs of each class in the system. As we will see, when there are m priority classes, performance analysis of the lowest priority class involves an m-dimensionally infinite state space.
https://authors.library.caltech.edu/records/5r688-r2e06

Nearly insensitive bounds on SMART scheduling
https://resolver.caltech.edu/CaltechAUTHORS:20210310-083932675
Authors: Adam Wierman, Mor Harchol-Balter, Takayuki Osogami
Year: 2005
DOI: 10.1145/1064212.1064236
We define the class of SMART scheduling policies. These are policies that bias towards jobs with small remaining service times, jobs with small original sizes, or both, with the motivation of minimizing mean response time and/or mean slowdown. Examples of SMART policies include PSJF, SRPT, and hybrid policies such as RS (which biases according to the product of the remaining size and the original size of a job). For many policies in the SMART class, the mean response time and mean slowdown are not known or have complex representations involving multiple nested integrals, making evaluation difficult. In this work, we prove three main results. First, for all policies in the SMART class, we prove simple upper and lower bounds on mean response time. Second, we show that all policies in the SMART class, surprisingly, have very similar mean response times. Third, we show that the response times of SMART policies are largely insensitive to the variability of the job size distribution. In particular, we focus on the SRPT and PSJF policies and prove insensitive bounds in these cases.
https://authors.library.caltech.edu/records/5hvd8-j9010

Classifying scheduling policies with respect to higher moments of conditional response time
https://resolver.caltech.edu/CaltechAUTHORS:20210310-083407671
Authors: Adam Wierman, Mor Harchol-Balter
Year: 2005
DOI: 10.1145/1064212.1064238
In addition to providing small mean response times, modern applications seek to provide users predictable service and, in some cases, Quality of Service (QoS) guarantees. In order to understand the predictability of response times under a range of scheduling policies, we study the conditional variance in response times seen by jobs of different sizes. We define a metric and a criterion that distinguish between contrasting functional behaviors of conditional variance, and we then classify large groups of scheduling policies. In addition to studying the conditional variance of response times, we also derive metrics appropriate for comparing higher conditional moments of response time across job sizes. We illustrate that common statistics such as raw and central moments are not appropriate when comparing higher conditional moments of response time. Instead, we find that cumulant moments should be used.
https://authors.library.caltech.edu/records/6bf2q-at461

Multi-Server queueing systems with multiple priority classes
https://resolver.caltech.edu/CaltechAUTHORS:20200810-134515351
Authors: Mor Harchol-Balter, Takayuki Osogami, Alan Scheller-Wolf, Adam Wierman
Year: 2005
DOI: 10.1007/s11134-005-2898-7
We present the first near-exact analysis of an M/PH/k queue with m > 2 preemptive-resume priority classes. Our analysis introduces a new technique, which we refer to as Recursive Dimensionality Reduction (RDR). The key idea in RDR is that the m-dimensionally infinite Markov chain, representing the m class state space, is recursively reduced to a 1-dimensionally infinite Markov chain that is easily and quickly solved. RDR involves no truncation and results in only small inaccuracy when compared with simulation, for a wide range of loads and variability in the job size distribution. Our analytic methods are then used to derive insights on how multi-server systems with prioritization compare with their single server counterparts with respect to response time. Multi-server systems are also compared with single server systems with respect to the effect of different prioritization schemes: "smart" prioritization (giving priority to the smaller jobs) versus "stupid" prioritization (giving priority to the larger jobs). We also study the effect of approximating m class performance by collapsing the m classes into just two classes.
https://authors.library.caltech.edu/records/s9vw0-w8303

On the effect of inexact size information in size based policies
https://resolver.caltech.edu/CaltechAUTHORS:20201104-085827475
Authors: Adam Wierman
Year: 2006
DOI: 10.1145/1215956.1215966
Recently, there have been a number of scheduling success stories in computer applications. Across a wide array of applications, the simple heuristic of "prioritizing small jobs" has been used to reduce user response times with enormous success. For instance, variants of Shortest-Remaining-Processing-Time (SRPT) and Preemptive-Shortest-Job-First (PSJF) have been suggested for use in web servers [5, 12], wireless applications [6], and databases [8]. As a result of the attention given to size based policies by computer systems researchers, there has been a resurgence in analytical work studying these policies. However, the policies studied in theory, e.g. SRPT and PSJF, are idealized versions of the policies implemented by practitioners. In particular, the intricacies of computer systems force the use of complex hybrid policies in practice, though these more complex policies are still built around the heuristic of "prioritizing small jobs."

Thus, there exists a gap between the results provided by theoretical research and the needs of practitioners. This gap results from three primary disconnects between the model studied in theory and the needs of system designers. First, in designing systems, the goal is not simply to provide small response times; other performance measures are also important. Thus, idealized policies such as SRPT and PSJF are often tweaked by practitioners to perform well on secondary performance measures (e.g. fairness and slowdown) [3, 11, 12]. Second, the overhead involved in distinguishing between an infinite number of different priority classes typically causes system designers to discretize policies such as SRPT and PSJF so that they use only a small number of priority classes (5-10) [5, 11]. Third, in many cases information about the service demands (sizes) of jobs is inexact. For instance, when serving static content, web servers have exact knowledge of the sizes of the files being served, but have inexact knowledge of network conditions. Thus, the web server only has an estimate of the true service demand [7, 12].
https://authors.library.caltech.edu/records/w5qez-yt939

How many servers are best in a dual-priority system?
https://resolver.caltech.edu/CaltechAUTHORS:20201020-075618304
Authors: Adam Wierman, Takayuki Osogami, Mor Harchol-Balter, Alan Scheller-Wolf
Year: 2006
DOI: 10.1016/j.peva.2005.12.004
We ask the question, "for minimizing mean response time (sojourn time), which is preferable: one fast server of speed 1, or k slow servers each of speed 1/k?" Our setting is the M/PH/k system with two priority classes of customers, high priority and low priority, where PH is a phase-type distribution. We find that multiple slow servers are often preferable, and we demonstrate exactly how many servers are preferable as a function of the load and service time distribution. In addition, we find that the optimal number of servers with respect to the high priority jobs may be very different from that preferred by low priority jobs, and we characterize these preferences. We also study the optimal number of servers with respect to overall mean response time, averaged over high and low priority jobs. Lastly, we ascertain the effect of the service demand variability of high priority jobs on low priority jobs.
https://authors.library.caltech.edu/records/bvktq-zc844

Fairness and classifications
https://resolver.caltech.edu/CaltechAUTHORS:20201104-083857037
Authors: Adam Wierman
Year: 2007
DOI: 10.1145/1243401.1243405
The growing trend in computer systems towards using scheduling policies that prioritize jobs with small service requirements has resulted in a new focus on the fairness of such policies. In particular, researchers have been interested in whether prioritizing small job sizes results in large jobs being treated "unfairly." However, fairness is an amorphous concept and thus difficult to define and study. This article provides a short survey of recent work in this area.
https://authors.library.caltech.edu/records/46vgx-rr704

Scheduling in polling systems
https://resolver.caltech.edu/CaltechAUTHORS:20100506-110944543
Authors: Adam Wierman, Erik M.M. Winands, Onno J. Boxma
Year: 2007
DOI: 10.1016/j.peva.2007.06.015
We present a simple mean value analysis (MVA) framework for analyzing the effect of scheduling within queues in classical asymmetric polling systems with gated or exhaustive service. Scheduling in polling systems finds many applications in computer and communication systems. Our framework leads not only to unification but also to extension of the literature studying scheduling in polling systems. It illustrates that a large class of scheduling policies behaves similarly in the exhaustive polling model and the standard M/GI/1 model, whereas scheduling policies in the gated polling model behave very differently than in an M/GI/1.
https://authors.library.caltech.edu/records/ff7s5-2fn53

Preventing Large Sojourn Times Using SMART Scheduling
https://resolver.caltech.edu/CaltechAUTHORS:20170408-151537902
Authors: Misja Nuyens, Adam Wierman, Bert Zwart
Year: 2008
DOI: 10.1287/opre.1070.0504
Recently, the so-called class of SMART scheduling policies has been introduced to formalize the common heuristic of "biasing toward small jobs." We study the tail of the sojourn-time (response-time) distribution under both SMART policies and the foreground-background policy (FB) in the GI/GI/1 queue. We prove that these policies behave very well under heavy-tailed service times. Specifically, we show that the sojourn-time tail under all SMART policies and FB is similar to that of the service-time tail, up to a constant, which makes the SMART class superior to first-come-first-served (FCFS). In contrast, for light-tailed service times, we prove that the sojourn-time tail under FB and SMART is larger than that under FCFS. However, we show that the sojourn-time tail for a job of size y under FB and all SMART policies still outperforms FCFS as long as y is not too large.
https://authors.library.caltech.edu/records/7y4w9-rsx11

Scheduling despite inexact job-size information
https://resolver.caltech.edu/CaltechAUTHORS:20170105-110703130
Authors: Adam Wierman, Misja Nuyens
Year: 2008
DOI: 10.1145/384529.1375461
Motivated by the optimality of Shortest Remaining Processing Time (SRPT) for mean response time, in recent years many computer systems have used the heuristic of "favoring small jobs" in order to dramatically reduce user response times. However, rarely do computer systems have knowledge of exact remaining sizes. In this paper, we introduce the class of ε-SMART policies, which formalizes the heuristic of "favoring small jobs" in a way that includes a wide range of policies that schedule using inexact job-size information. Examples of ε-SMART policies include (i) policies that use exact size information, e.g., SRPT and PSJF, (ii) policies that use job-size estimates, and (iii) policies that use a finite number of size-based priority levels.
For many ε-SMART policies, e.g., SRPT with inexact job-size information, there are no analytic results available in the literature. In this work, we prove four main results: we derive upper and lower bounds on the mean response time, the mean slowdown, the response-time tail, and the conditional response time of ε-SMART policies. In each case, the results explicitly characterize the tradeoff between the accuracy of the job-size information used to prioritize and the performance of the resulting policy. Thus, the results provide designers insight into how accurate job-size information must be in order to achieve desired performance guarantees.
https://authors.library.caltech.edu/records/k6qfy-vsj29

The effect of local scheduling in load balancing designs
https://resolver.caltech.edu/CaltechAUTHORS:20160819-110902462
Authors: Ho-Lin Chen, Jason R. Marden, Adam Wierman
Year: 2008
DOI: 10.1145/1453175.1453200
Load balancing is a common approach to task assignment in distributed architectures such as web server farms, database systems, grid computing clusters, and others. In such designs there is a dispatcher that seeks to balance the assignment of service requests (jobs) across the servers in the system so that the response time of jobs at each server is (nearly) the same. Such designs are popular due to the increased robustness they provide to bursts of traffic, server failures, etc., as well as the inherent scalability they provide. However, there is also a major drawback to load balancing designs: some performance is sacrificed. Specifically, it would be possible to reduce user response times by moving away from load balancing designs.

Our goal in this paper is to study the degree of inefficiency in load balancing designs. Further, we will show that the degree of inefficiency depends on the scheduling discipline used locally at each of the servers, i.e., the local scheduler.

Our results (see Section 3) show that the local scheduling policy has a significant impact on the degree of inefficiency in load balancing designs. In particular, the local scheduler in traditional designs is often modeled by Processor Sharing (PS), which shares the server evenly among all jobs in the system. When the local scheduler is PS, the degree of inefficiency grows linearly with the number of servers in the system. In contrast, if the local scheduler is changed to Shortest Remaining Processing Time first (SRPT), as has been suggested in a variety of modern designs [7, 3, 10], the degree of inefficiency can be independent of the number of servers in the system and instead depend only on the heterogeneity of the speed of the servers.
https://authors.library.caltech.edu/records/1z78m-xa045

Optimal speed scaling under arbitrary power functions
https://resolver.caltech.edu/CaltechAUTHORS:20160420-132610613
Authors: Lachlan L. H. Andrew, Adam Wierman, Ao Tang (ORCID 0000-0001-6296-644X)
Year: 2009
DOI: 10.1145/1639562.1639576
This paper investigates the performance of online dynamic speed scaling algorithms for the objective of minimizing a linear combination of energy and response time. We prove that (SRPT, P⁻¹(n)), which uses Shortest Remaining Processing Time (SRPT) scheduling and processes at a speed such that the power used is equal to the queue length, is 2-competitive for a very wide class of power-speed tradeoff functions. Further, we prove that there exist tradeoff functions such that no online algorithm can attain a competitive ratio less than 2.
https://authors.library.caltech.edu/records/0g5qp-2pc47

Optimality, fairness, and robustness in speed scaling designs
https://resolver.caltech.edu/CaltechAUTHORS:20160420-131334648
Authors: Lachlan L. H. Andrew, Minghong Lin, Adam Wierman
Year: 2010
DOI: 10.1145/1811099.1811044
This work examines fundamental tradeoffs incurred by a speed scaler seeking to minimize the sum of expected response time and energy use per job. We prove that a popular speed scaler is 2-competitive for this objective and no "natural" speed scaler can do better. Additionally, we prove that energy-proportional speed scaling works well for both Shortest Remaining Processing Time (SRPT) and Processor Sharing (PS) and we show that under both SRPT and PS, gated-static speed scaling is nearly optimal when the mean workload is known, but that dynamic speed scaling provides robustness against uncertain workloads. Finally, we prove that speed scaling magnifies unfairness under SRPT but that PS remains fair under speed scaling. These results show that these speed scalers can achieve any two, but only two, of optimality, fairness, and robustness.
https://authors.library.caltech.edu/records/krzav-s0a98

Distance-Dependent Kronecker Graphs for Modeling Social Networks
https://resolver.caltech.edu/CaltechAUTHORS:20101025-094230043
Authors: Elizabeth Bodine-Baron, Babak Hassibi, Adam Wierman
Year: 2010
DOI: 10.1109/JSTSP.2010.2049412
This paper focuses on a generalization of stochastic Kronecker graphs, introducing a Kronecker-like operator and defining a family of generator matrices H dependent on distances between nodes in a specified graph embedding. We prove that any lattice-based network model with sufficiently small distance-dependent connection probability will have a Poisson degree distribution and provide a general framework to prove searchability for such a network. Using this framework, we focus on a specific example of an expanding hypercube and discuss the similarities and differences of such a model with recently proposed network models based on a hidden metric space. We also prove that a greedy forwarding algorithm can find very short paths of length O((log log n)^2) on the hypercube with n nodes, demonstrating that distance-dependent Kronecker graphs can generate searchable network models.
https://authors.library.caltech.edu/records/b6mjq-7zh42

The Average Response Time in a Heavy-traffic SRPT Queue
https://resolver.caltech.edu/CaltechAUTHORS:20161122-162740344
Authors: Minghong Lin, Adam Wierman, Bert Zwart
Year: 2010
DOI: 10.1145/1870178.1870183
Shortest Remaining Processing Time first (SRPT) has long been known to optimize the queue length distribution and the mean response time (a.k.a. flow time, sojourn time). As such, it has been the focus of a wide body of analysis. However, results about the heavy-traffic behavior of SRPT have only recently started to emerge. In this work, we characterize the growth rate of the mean response time under SRPT in the M/GI/1 system under general job size distributions. Our results illustrate the relationship between the job size tail and the heavy traffic growth rate of mean response time. Further, we show that the heavy traffic growth rate can be used to provide an accurate approximation for mean response time outside of heavy traffic.
https://authors.library.caltech.edu/records/b0jnn-jp274

Tail-robust scheduling via limited processor sharing
https://resolver.caltech.edu/CaltechAUTHORS:20101124-110928510
Authors: Jayakrishnan Nair, Adam Wierman, Bert Zwart
Year: 2010
DOI: 10.1016/j.peva.2010.08.012
From a rare events perspective, scheduling disciplines that work well under light (exponential) tailed workload distributions do not perform well under heavy (power) tailed workload distributions, and vice versa, leading to fundamental problems in designing schedulers that are robust to distributional assumptions on the job sizes. This paper shows how to exploit partial workload information (system load) to design a scheduler that provides robust performance across heavy-tailed and light-tailed workloads. Specifically, we derive new asymptotics for the tail of the stationary sojourn time under Limited Processor Sharing (LPS) scheduling for both heavy-tailed and light-tailed job size distributions, and show that LPS can be robust to the tail of the job size distribution if the multiprogramming level is chosen carefully as a function of the load.
https://authors.library.caltech.edu/records/jk64r-vm664

An architectural view of game theoretic control
https://resolver.caltech.edu/CaltechAUTHORS:20120521-093604505
Authors: Ragavendran Gopalakrishnan, Jason R. Marden, Adam Wierman
Year: 2010
DOI: 10.1145/1925019.1925026
Game-theoretic control is a promising new approach for distributed resource allocation. In this paper, we describe how game-theoretic control can be viewed as having an intrinsic layered architecture, which provides a modularization that simplifies the control design. We illustrate this architectural view by presenting details about one particular instantiation using potential games as an interface. This example serves to highlight the strengths and limitations of the proposed architecture while also illustrating the relationship between game-theoretic control and other existing approaches to distributed resource allocation.
https://authors.library.caltech.edu/records/9tjt7-95q03

Complexity and economics: computational constraints may not matter empirically
https://resolver.caltech.edu/CaltechAUTHORS:20160321-131503486
Authors: Federico Echenique (ORCID 0000-0002-1567-6770), Daniel Golovin, Adam Wierman
Year: 2011
DOI: 10.1145/1978721.1978722
Recent results in complexity theory suggest that various economic theories require agents to solve intractable problems. However, such results assume the agents are optimizing explicit utility functions, whereas the economic theories merely assume the agents' behavior is rationalizable by the optimization of some utility function.
For a major economic theory, the theory of the consumer, we show that behaving in a rationalizable way is easier than the corresponding optimization problem. Specifically, if an agent's behavior is at all rationalizable, then it is rationalizable using a utility function that is easy to maximize in every budget set.
https://authors.library.caltech.edu/records/gw8qh-3y569

Greening geographical load balancing
https://resolver.caltech.edu/CaltechAUTHORS:20161128-151734850
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}]}
Year: 2011
Energy expenditure has become a significant fraction of data center operating costs. Recently, "geographical load balancing" has been suggested to reduce energy cost by exploiting the electricity price differences across regions. However, this reduction of cost can paradoxically increase total energy use.
This paper explores whether the geographical diversity of Internet-scale systems can additionally be used to provide environmental gains. Specifically, we explore whether geographical load balancing can encourage use of "green" renewable energy and reduce use of "brown" fossil fuel energy. We make two contributions. First, we derive two distributed algorithms for achieving optimal geographical load balancing. Second, we show that if electricity is dynamically priced in proportion to the instantaneous fraction of the total energy that is brown, then geographical load balancing provides significant reductions in brown energy use. However, the benefits depend strongly on the degree to which systems accept dynamic energy pricing and the form of pricing used.https://authors.library.caltech.edu/records/hww5w-9w763Exploiting network effects in the provisioning of large scale systems
https://resolver.caltech.edu/CaltechAUTHORS:20120521-092530952
Authors: {'items': [{'id': 'Nair-Jayakrishnan', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Zwart-B', 'name': {'family': 'Zwart', 'given': 'Bert'}}]}
Year: 2011
DOI: 10.1145/2034832.2034837
Online services today are characterized by a highly congestion-sensitive user base that also experiences strong positive network effects. A majority of these services are supported by advertising and are offered for free to the end user. We study the problem of optimal capacity provisioning for a profit-maximizing firm operating such an online service in the asymptotic regime of a large market size. We show that network effects heavily influence the optimal capacity provisioning strategy, as well as the profit of the firm. In particular, strong positive network effects allow the firm to operate the service with fewer servers, which translates to increased profit.https://authors.library.caltech.edu/records/916td-x7r71Heavy-traffic analysis of mean response time under Shortest Remaining Processing Time
https://resolver.caltech.edu/CaltechAUTHORS:20111025-080010192
Authors: {'items': [{'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Zwart-B', 'name': {'family': 'Zwart', 'given': 'Bert'}}]}
Year: 2011
DOI: 10.1016/j.peva.2011.06.001
Shortest Remaining Processing Time (SRPT) has long been known to optimize the queue length distribution and the mean response time (a.k.a. flow time, sojourn time). As such, it has been the focus of a wide body of analysis. However, results about the heavy-traffic behavior of SRPT have only recently started to emerge. In this work, we characterize the growth rate of the mean response time under SRPT in the M/GI/1 system under general job size distributions. Our results illustrate the relationship between the job size tail and the heavy-traffic growth rate of mean response time. Further, we show that the heavy-traffic growth rate can be used to provide an accurate approximation for mean response time outside of the heavy-traffic regime.https://authors.library.caltech.edu/records/4hjx7-79x15Competition yields efficiency in load balancing games
https://resolver.caltech.edu/CaltechAUTHORS:20111128-101928328
Authors: {'items': [{'id': 'Anselmi-J', 'name': {'family': 'Anselmi', 'given': 'Jonatha'}}, {'id': 'Ayesta-U', 'name': {'family': 'Ayesta', 'given': 'Urtzi'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2011
DOI: 10.1016/j.peva.2011.07.005
We study a nonatomic congestion game with N parallel links, with each link under the control of a profit maximizing provider. Within this 'load balancing game', each provider has the freedom to set a price, or toll, for access to the link and seeks to maximize its own profit. Given prices, a Wardrop equilibrium among users is assumed, under which users all choose paths of minimal and identical effective cost. Within this model we have oligopolistic price competition which, in equilibrium, gives rise to situations where neither providers nor users have incentives to adjust their prices or routes, respectively. In this context, we provide new results about the existence and efficiency of oligopolistic equilibria. Our main theorem shows that, when the number of providers is small, oligopolistic equilibria can be extremely inefficient; however as the number of providers N grows, the oligopolistic equilibria become increasingly efficient (at a rate of 1/N) and, as N→∞, the oligopolistic equilibrium matches the socially optimal allocation.https://authors.library.caltech.edu/records/gjcb5-cmb38Geographical load balancing with renewables
https://resolver.caltech.edu/CaltechAUTHORS:20161128-151114708
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}]}
Year: 2011
DOI: 10.1145/2160803.2160862
Given the significant energy consumption of data centers, improving their energy efficiency is an important social problem. However, energy efficiency is necessary but not sufficient for sustainability, which demands reduced usage of energy from fossil fuels. This paper investigates the feasibility of powering internet-scale systems using (nearly) entirely renewable energy. We perform a trace-based study to evaluate three issues related to achieving this goal: the impact of geographical load balancing, the role of storage, and the optimal mix of renewables. Our results highlight that geographical load balancing can significantly reduce the required capacity of renewable energy by using the energy more efficiently with "follow the renewables" routing. Further, our results show that small-scale storage can be useful, especially in combination with geographical load balancing, and that an optimal mix of renewables includes significantly more wind than photovoltaic solar.https://authors.library.caltech.edu/records/jz61t-zg441Dispatching to incentivize fast service in multi-server queues
https://resolver.caltech.edu/CaltechAUTHORS:20120521-101935284
Authors: {'items': [{'id': 'Doroudi-S', 'name': {'family': 'Doroudi', 'given': 'Sherwin'}}, {'id': 'Gopalakrishnan-R', 'name': {'family': 'Gopalakrishnan', 'given': 'Ragavendran'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2011
DOI: 10.1145/2160803.2160855
As a field, queueing theory predominantly assumes that the arrival rate of jobs and the system parameters, e.g., service rates, are fixed exogenously, and then proceeds to design and analyze scheduling policies that provide efficient performance, e.g., small response time (sojourn time). However, in reality, the arrival rate and/or service rate may depend on the scheduling and, more generally, the performance of the system. For example, if arrivals are strategic then a decrease in the mean response time due to improved scheduling may result in an increase in the arrival rate.https://authors.library.caltech.edu/records/xeha6-r7x21Many Flows Asymptotics for SMART Scheduling Policies
https://resolver.caltech.edu/CaltechAUTHORS:20120326-084346980
Authors: {'items': [{'id': 'Yang-Changwoo', 'name': {'family': 'Yang', 'given': 'Changwoo'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Shakkottai-S', 'name': {'family': 'Shakkottai', 'given': 'Sanjay'}}, {'id': 'Harchol-Balter-M', 'name': {'family': 'Harchol-Balter', 'given': 'Mor'}}]}
Year: 2012
DOI: 10.1109/TAC.2011.2173418
Scheduling policies that favor small jobs have received growing attention due to their superior performance with respect to mean delay, e.g., Shortest Remaining Processing Time (SRPT) and Preemptive Shortest Job First (PSJF). In this paper, we study the delay distribution of a generalization of the class of scheduling policies called SMART (because policies in it have "SMAll Response Times"), which includes SRPT, PSJF, and a range of practical variants, in a discrete-time queueing system under the many sources large deviations regime. Our analysis of SMART in this regime (large number of flows and large capacity) hinges on a novel two-dimensional (2-D) queueing framework that employs virtual queues and total ordering of jobs. We prove that all SMART policies have the same asymptotic delay distribution as SRPT, i.e., the delay distribution has the same decay rate. In addition, we illustrate the improvements SMART policies make over First Come First Serve (FCFS) and Processor Sharing (PS). Our 2-D queueing technique is generalizable to other policies as well. As an example, we show how the Foreground-Background (FB) policy can be analyzed using a 2-D queueing framework. FB is a policy, not contained in SMART, which manages to bias towards small jobs without knowing which jobs are small in advance.https://authors.library.caltech.edu/records/4drev-wsa20Renewable and Cooling Aware Workload Management for Sustainable Data Centers
https://resolver.caltech.edu/CaltechAUTHORS:20130108-115043744
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Cheng-Yuan', 'name': {'family': 'Cheng', 'given': 'Yuan'}}, {'id': 'Bash-C', 'name': {'family': 'Bash', 'given': 'Cullen'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Gmach-D', 'name': {'family': 'Gmach', 'given': 'Daniel'}}, {'id': 'Wang-Zhikui', 'name': {'family': 'Wang', 'given': 'Zhikui'}}, {'id': 'Marwah-M', 'name': {'family': 'Marwah', 'given': 'Manish'}}, {'id': 'Hyser-C', 'name': {'family': 'Hyser', 'given': 'Chris'}}]}
Year: 2012
DOI: 10.1145/2318857.2254779
Recently, the demand for data center computing has surged, increasing the total energy footprint of data centers worldwide. Data centers typically comprise three subsystems: IT equipment provides services to customers; power infrastructure supports the IT and cooling equipment; and the cooling infrastructure removes heat generated by these subsystems. This work presents a novel approach to model the energy flows in a data center and optimize its operation. Traditionally, supply-side constraints such as energy or cooling availability were treated independently from IT workload management. This work reduces electricity cost and environmental impact using a holistic approach that integrates renewable supply, dynamic pricing, and cooling supply including chiller and outside air cooling, with IT workload planning to improve the overall sustainability of data center operations. Specifically, we first predict renewable energy as well as IT demand. Then we use these predictions to generate an IT workload management plan that schedules IT workload and allocates IT resources within a data center according to time varying power supply and cooling efficiency. We have implemented and evaluated our approach using traces from real data centers and production systems. The results demonstrate that our approach can reduce both the recurring power costs and the use of non-renewable energy by as much as 60% compared to existing techniques, while still meeting the Service Level Agreements.https://authors.library.caltech.edu/records/dykbv-5ec39Fairness and efficiency for polling models with the κ-gated service discipline
https://resolver.caltech.edu/CaltechAUTHORS:20120705-102059217
Authors: {'items': [{'id': 'van-Wijk-A-C-C', 'name': {'family': 'van Wijk', 'given': 'A. C. C.'}}, {'id': 'Adan-I-J-B-F', 'name': {'family': 'Adan', 'given': 'I. J. B. F.'}}, {'id': 'Boxma-O-J', 'name': {'family': 'Boxma', 'given': 'O. J.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'A.'}}]}
Year: 2012
DOI: 10.1016/j.peva.2012.02.003
We study a polling model in which we want to achieve a balance between the fairness of the waiting times and the efficiency of the system. For this purpose, we introduce a novel service discipline: the κ-gated service discipline. It is a hybrid of the classical gated and exhaustive disciplines, and consists of using κ_i consecutive gated service phases at queue i before the server switches to the next queue. The advantage of this discipline is that the parameters κ_i can be used to balance fairness and efficiency. We derive the distributions and means of the waiting times, a pseudo conservation law for the weighted sum of the mean waiting times, and the fluid limits of the waiting times. Our goal is to optimize the κ_i so as to minimize the differences in the mean waiting times, i.e., to achieve maximal fairness, without giving up too much on the efficiency of the system. From the fluid limits we derive a heuristic rule for setting the κ_i. In a numerical study, the heuristic is shown to perform well in most cases.https://authors.library.caltech.edu/records/kh9x7-vhb91Characterizing the Impact of the Workload on the Value of Dynamic Resizing in Data Centers
https://resolver.caltech.edu/CaltechAUTHORS:20130109-085806826
Authors: {'items': [{'id': 'Wang-Kai', 'name': {'family': 'Wang', 'given': 'Kai'}}, {'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Ciucu-F', 'name': {'family': 'Ciucu', 'given': 'Florin'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Lin-Chuang', 'name': {'family': 'Lin', 'given': 'Chuang'}}]}
Year: 2012
DOI: 10.1145/2318857.2254815
Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time-scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.https://authors.library.caltech.edu/records/eme48-gz052Is Tail-Optimal Scheduling Possible?
https://resolver.caltech.edu/CaltechAUTHORS:20121221-104528945
Authors: {'items': [{'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Zwart-B', 'name': {'family': 'Zwart', 'given': 'Bert'}}]}
Year: 2012
DOI: 10.1287/opre.1120.1086
This paper focuses on the competitive analysis of scheduling disciplines in a large deviations setting. Although there are policies that are known to optimize the sojourn time tail under a large class of heavy-tailed job sizes (e.g., processor sharing and shortest remaining processing time) and there are policies known to optimize the sojourn time tail in the case of light-tailed job sizes (e.g., first come first served), no policies are known that can optimize the sojourn time tail across both light- and heavy-tailed job size distributions. We prove that no such work-conserving, nonanticipatory, nonlearning policy exists, and thus that a policy must learn (or know) the job size distribution in order to optimize the sojourn time tail.https://authors.library.caltech.edu/records/5jvpq-hyc27Power-aware speed scaling in processor sharing systems: Optimality and robustness
https://resolver.caltech.edu/CaltechAUTHORS:20121214-152442054
Authors: {'items': [{'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}, {'id': 'Tang-Ao', 'name': {'family': 'Tang', 'given': 'Ao'}, 'orcid': '0000-0001-6296-644X'}]}
Year: 2012
DOI: 10.1016/j.peva.2012.07.002
Adapting the speed of a processor is an effective method to reduce energy consumption. This paper studies the optimal way to scale speed to balance response time and energy consumption under processor sharing scheduling. It is shown that using a static rate while the system is busy provides nearly optimal performance, but having a wider range of available speeds increases robustness to different traffic loads. In particular, the dynamic speed scaling optimal for Poisson arrivals is also constant-competitive in the worst case. The scheme that equates power consumption with queue occupancy is shown to be 10-competitive when power is cubic in speed.https://authors.library.caltech.edu/records/07ts9-6xm49Online optimization with switching cost
https://resolver.caltech.edu/CaltechAUTHORS:20161122-155941033
Authors: {'items': [{'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Roytman-A', 'name': {'family': 'Roytman', 'given': 'Alan'}}, {'id': 'Meyerson-A', 'name': {'family': 'Meyerson', 'given': 'Adam'}}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}]}
Year: 2012
DOI: 10.1145/2425248.2425275
We consider algorithms for "smoothed online convex optimization (SOCO)" problems. SOCO is a variant of the class of "online convex optimization (OCO)" problems that is strongly related to the class of "metrical task systems", each of which has been studied extensively. Prior literature on these problems has focused on two performance metrics: regret and competitive ratio. There exist known algorithms with sublinear regret and known algorithms with constant competitive ratios; however, no known algorithm achieves both. In this paper, we show that this is due to a fundamental incompatibility between regret and the competitive ratio -- no algorithm (deterministic or randomized) can achieve sublinear regret and a constant competitive ratio, even in the case when the objective functions are linear.https://authors.library.caltech.edu/records/96vkn-atn68Distributed Welfare Games
https://resolver.caltech.edu/CaltechAUTHORS:20130411-094140621
Authors: {'items': [{'id': 'Marden-J-R', 'name': {'family': 'Marden', 'given': 'Jason R.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2013
DOI: 10.1287/opre.1120.1137
Game-theoretic tools are becoming a popular design choice for distributed resource allocation algorithms. A central component of this design choice is the assignment of utility functions to the individual agents. The goal is to assign each agent an admissible utility function such that the resulting game possesses a host of desirable properties, including scalability, tractability, and existence and efficiency of pure Nash equilibria. In this paper we formally study this question of utility design on a class of games termed distributed welfare games. We identify several utility design methodologies that guarantee desirable game properties irrespective of the specific application domain. Lastly, we illustrate the results in this paper on two commonly studied classes of resource allocation problems: "coverage" problems and "coloring" problems.https://authors.library.caltech.edu/records/bc67q-z0253The Fundamentals of Heavy-tails: Properties, Emergence, and Identification
https://resolver.caltech.edu/CaltechAUTHORS:20161206-160007274
Authors: {'items': [{'id': 'Nair-J', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Zwart-B', 'name': {'family': 'Zwart', 'given': 'Bert'}}]}
Year: 2013
DOI: 10.1145/2494232.2466587
Heavy-tails are a continual source of excitement and confusion across disciplines as they are repeatedly "discovered" in new contexts. This is especially true within computer systems, where heavy-tails seemingly pop up everywhere -- from degree distributions in the internet and social networks to file sizes and interarrival times of workloads. However, despite nearly a decade of work on heavy-tails they are still treated as mysterious, surprising, and even controversial.
The goal of this tutorial is to show that heavy-tailed distributions need not be mysterious and should not be surprising or controversial. In particular, we will demystify heavy-tailed distributions by showing how to reason formally about their counter-intuitive properties; we will highlight that their emergence should be expected (not surprising) by showing that a wide variety of general processes lead to heavy-tailed distributions; and we will highlight that most of the controversy surrounding heavy-tails is the result of bad statistics, and can be avoided by using the proper tools.https://authors.library.caltech.edu/records/bwxtf-nbp83Overcoming the Limitations of Utility Design for Multiagent Systems
https://resolver.caltech.edu/CaltechAUTHORS:20130718-111911401
Authors: {'items': [{'id': 'Marden-J-R', 'name': {'family': 'Marden', 'given': 'Jason R.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2013
DOI: 10.1109/TAC.2013.2237831
Cooperative control focuses on deriving desirable collective behavior in multiagent systems through the design of local control algorithms. Game theory is beginning to emerge as a valuable set of tools for achieving this objective. A central component of this game theoretic approach is the assignment of utility functions to the individual agents. Here, the goal is to assign utility functions within an "admissible" design space such that the resulting game possesses desirable properties. Our first set of results illustrates the complexity associated with such a task. In particular, we prove that if we restrict the class of utility functions to be local, scalable, and budget-balanced then 1) ensuring that the resulting game possesses a pure Nash equilibrium requires computing a Shapley value, which can be computationally prohibitive for large-scale systems, and 2) ensuring that the allocation which optimizes the system level objective is a pure Nash equilibrium is impossible. The last part of this paper demonstrates that both limitations can be overcome by introducing an underlying state space into the potential game structure.https://authors.library.caltech.edu/records/ana1w-2z968Data center demand response: avoiding the coincident peak via workload shifting and local generation
https://resolver.caltech.edu/CaltechAUTHORS:20161128-165016440
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Cheng-Yuan', 'name': {'family': 'Cheng', 'given': 'Yuan'}}, {'id': 'Razon-B', 'name': {'family': 'Razon', 'given': 'Benjamin'}}, {'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}]}
Year: 2013
DOI: 10.1145/2494232.2465740
Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this extended abstract we briefly describe recent work in our full paper on two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. In our full paper, we conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado and then develop two algorithms for data centers by combining workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second one provides a good worst-case guarantee for any coincident peak pattern. We evaluate these algorithms via numerical simulations based on real world traces from production systems. The results show that using workload shifting in combination with local generation can provide significant cost savings (up to 40% in the Fort Collins Utilities' case) compared to either alone.https://authors.library.caltech.edu/records/t6bcv-qs782A Tale of Two Metrics: Simultaneous Bounds on Competitiveness and Regret
https://resolver.caltech.edu/CaltechAUTHORS:20160420-130614870
Authors: {'items': [{'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan'}}, {'id': 'Barman-S', 'name': {'family': 'Barman', 'given': 'Siddharth'}}, {'id': 'Ligett-K', 'name': {'family': 'Ligett', 'given': 'Katrina'}, 'orcid': '0000-0003-2780-6656'}, {'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Meyerson-A', 'name': {'family': 'Meyerson', 'given': 'Adam'}}, {'id': 'Roytman-A', 'name': {'family': 'Roytman', 'given': 'Alan'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2013
DOI: 10.1145/2494232.2465533
We consider algorithms for "smoothed online convex optimization" (SOCO) problems, which are a hybrid between online convex optimization (OCO) and metrical task system (MTS) problems. Historically, the performance metric for OCO was regret and that for MTS was competitive ratio (CR). There are algorithms with either sublinear regret or constant CR, but no known algorithm achieves both simultaneously. We show that this is a fundamental limitation: no algorithm (deterministic or randomized) can achieve sublinear regret and a constant CR, even when the objective functions are linear and the decision space is one-dimensional. However, we present an algorithm that, for the important one-dimensional case, provides sublinear regret and a CR that grows arbitrarily slowly.https://authors.library.caltech.edu/records/cbbaw-cqb13Joint Optimization of Overlapping Phases in MapReduce
https://resolver.caltech.edu/CaltechAUTHORS:20130930-120231815
Authors: {'items': [{'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Zhang-Li', 'name': {'family': 'Zhang', 'given': 'Li'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Tan-Jian', 'name': {'family': 'Tan', 'given': 'Jian'}}]}
Year: 2013
DOI: 10.1016/j.peva.2013.08.013
MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, and thus an efficient job scheduling mechanism is crucial for ensuring efficient resource utilization. There are a variety of scheduling challenges within the MapReduce architecture, and this paper studies the challenges that result from the overlapping of the "map" and "shuffle" phases. We propose a new, general model for this scheduling problem, and validate this model using cluster experiments. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate (i.e., in the resource augmentation framework). Finally, we validate the algorithms using a workload trace from a Google cluster and show that the algorithms are near optimal in practical settings.https://authors.library.caltech.edu/records/gxzj1-p9421Data center demand response: Avoiding the coincident peak via workload shifting and local generation
https://resolver.caltech.edu/CaltechAUTHORS:20131114-120720977
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Cheng-Yuan', 'name': {'family': 'Cheng', 'given': 'Yuan'}}, {'id': 'Razon-B', 'name': {'family': 'Razon', 'given': 'Benjamin'}}, {'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}]}
Year: 2013
DOI: 10.1016/j.peva.2013.08.014
Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers' participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center's peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado and then develop two algorithms for data centers by combining workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second one provides a good worst-case guarantee for any coincident peak pattern, workload demand and renewable generation prediction error distributions. We evaluate these algorithms via numerical simulations based on real world traces from production systems. The results show that using workload shifting in combination with local generation can provide significant cost savings (up to 40% under the Fort Collins Utilities charging scheme) compared to either alone.https://authors.library.caltech.edu/records/2ma2b-6qk27Dynamic Right-Sizing for Power-Proportional Data Centers
https://resolver.caltech.edu/CaltechAUTHORS:20140221-092537914
Authors: {'items': [{'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}, {'id': 'Thereska-E', 'name': {'family': 'Thereska', 'given': 'Eno'}}]}
Year: 2013
DOI: 10.1109/TNET.2012.2226216
Power consumption imposes a significant cost for data centers implementing cloud services, yet much of that power is used to maintain excess service capacity during periods of low load. This paper investigates how much can be saved by dynamically "right-sizing" the data center by turning off servers during such periods and how to achieve that saving via an online algorithm. We propose a very general model and prove that the optimal offline algorithm for dynamic right-sizing has a simple structure when viewed in reverse time, and this structure is exploited to develop a new "lazy" online algorithm, which is proven to be 3-competitive. We validate the algorithm using traces from two real data-center workloads and show that significant cost savings are possible. Additionally, we contrast this new algorithm with the more traditional approach of receding horizon control.https://authors.library.caltech.edu/records/ck0zw-r0d05Joint Optimization of Overlapping Phases in MapReduce
https://resolver.caltech.edu/CaltechAUTHORS:20140730-162927910
Authors: {'items': [{'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Zhang-Li', 'name': {'family': 'Zhang', 'given': 'Li'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Tan-Jian', 'name': {'family': 'Tan', 'given': 'Jian'}}]}
Year: 2013
DOI: 10.1145/2567529.2567534
MapReduce is a scalable parallel computing framework for big data processing. It exhibits multiple processing phases, so an efficient job scheduling mechanism is crucial for ensuring good resource utilization. This work studies the scheduling challenge that results from the overlapping of the "map" and "shuffle" phases in MapReduce. We propose a new, general model for this scheduling problem. Further, we prove that scheduling to minimize average response time in this model is strongly NP-hard in the offline case and that no online algorithm can be constant-competitive in the online case. However, we provide two online algorithms that match the performance of the offline optimal when given a slightly faster service rate.
https://authors.library.caltech.edu/records/6g543-08780
Greening Geographical Load Balancing
https://resolver.caltech.edu/CaltechAUTHORS:20150112-084413455
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}]}
Year: 2014
DOI: 10.1109/TNET.2014.2308295
Energy expenditure has become a significant fraction of data center operating costs. Recently, "geographical load balancing" has been proposed to reduce energy cost by exploiting the electricity price differences across regions. However, this reduction of cost can paradoxically increase total energy use. We explore whether the geographical diversity of Internet-scale systems can also provide environmental gains. Specifically, we explore whether geographical load balancing can encourage use of "green" renewable energy and reduce use of "brown" fossil fuel energy. We make two contributions. First, we derive three distributed algorithms for achieving optimal geographical load balancing. Second, we show that if the price of electricity is proportional to the instantaneous fraction of the total energy that is brown, then geographical load balancing significantly reduces brown energy use. However, the benefits depend strongly on dynamic energy pricing and the form of pricing used.
https://authors.library.caltech.edu/records/nqgqj-djm27
The Economics of the Cloud: Price Competition and Congestion
https://resolver.caltech.edu/CaltechAUTHORS:20150109-074030107
Authors: {'items': [{'id': 'Anselmi-J', 'name': {'family': 'Anselmi', 'given': 'Jonatha'}}, {'id': 'Ardagna-D', 'name': {'family': 'Ardagna', 'given': 'Danilo'}}, {'id': 'Lui-John-C-S', 'name': {'family': 'Lui', 'given': 'John C. S.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Xu-Yunjian', 'name': {'family': 'Xu', 'given': 'Yunjian'}}, {'id': 'Yang-Zichao', 'name': {'family': 'Yang', 'given': 'Zichao'}}]}
Year: 2014
DOI: 10.1145/2692375.2692380
This letter provides an overview of our recent work studying the impacts of price competition and congestion in the cloud marketplace. Specifically, we discuss a three-tier market model that studies a vertical marketplace where users purchase services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS) providers.
https://authors.library.caltech.edu/records/hh5kz-zmf44
Pricing Data Center Demand Response
https://resolver.caltech.edu/CaltechAUTHORS:20140804-133949957
Authors: {'items': [{'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Liu-Iris', 'name': {'family': 'Liu', 'given': 'Iris'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2014
DOI: 10.1145/2591971.2592004
Demand response is crucial for the incorporation of renewable energy into the grid. In this paper, we focus on a particularly promising industry for demand response: data centers. We use simulations to show that, not only are data centers large loads, but they can provide as much flexibility as (or possibly more than) large-scale storage if given the proper incentives. However, due to the market power most data centers maintain, it is difficult to design programs that are efficient for data center demand response. To that end, we propose that prediction-based pricing is an appealing market design, and show that it outperforms more traditional supply function bidding mechanisms in situations where market power is an issue. However, prediction-based pricing may be inefficient when predictions are inaccurate, and so we provide analytic, worst-case bounds on the impact of prediction error on the efficiency of prediction-based pricing. These bounds hold even when network constraints are considered, and highlight that prediction-based pricing is surprisingly robust to prediction error.
https://authors.library.caltech.edu/records/8vqy3-0p575
Energy Procurement Strategies in the Presence of Intermittent Sources
https://resolver.caltech.edu/CaltechAUTHORS:20140804-132430669
Authors: {'items': [{'id': 'Nair-J', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Adlakha-S', 'name': {'family': 'Adlakha', 'given': 'Sachin'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2014
DOI: 10.1145/2591971.2591982
The increasing penetration of intermittent, unpredictable renewable energy sources such as wind energy poses significant challenges for utility companies trying to incorporate renewable energy in their portfolio. In this work, we study the problem of conventional energy procurement in the presence of intermittent renewable resources. We model the problem as a variant of the newsvendor problem, in which the presence of renewable resources induces supply side uncertainty, and in which conventional energy may be procured in three stages to balance supply and demand. We compute closed-form expressions for the optimal energy procurement strategy and study the impact of increasing renewable penetration, and of proposed changes to the structure of electricity markets. We explicitly characterize the impact of a growing renewable penetration on the procurement policy by considering a scaling regime that models the aggregation of unpredictable renewable sources. A key insight from our results is that there is a separation between the impact of the stochastic nature of this aggregation, and the impact of market structure and forecast accuracy. Additionally, we study the impact on procurement of two proposed changes to the market structure: the addition and the placement of an intermediate market. We show that addition of an intermediate market does not necessarily increase the efficiency of utilization of renewable sources. Further, we show that the optimal placement of the intermediate market is insensitive to the level of renewable penetration.
https://authors.library.caltech.edu/records/2cmpe-xt313
On competitive provisioning of cloud services
https://resolver.caltech.edu/CaltechAUTHORS:20140917-082453194
Authors: {'items': [{'id': 'Nair-J', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Subramanian-V-G', 'name': {'family': 'Subramanian', 'given': 'Vijay G.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2014
DOI: 10.1145/2667522.2667531
Motivated by cloud services, we consider the interplay of network effects, congestion, and competition in ad-supported services. We study the strategic interactions between competing service providers and a user base, modeling congestion sensitivity and two forms of positive network effects: "firm-specific" versus "industry-wide." Our analysis reveals that users are generally no better off due to the competition in a marketplace of ad-supported services. Further, our analysis highlights an important contrast between firm-specific and industry-wide network effects: firms can coexist in a marketplace with industry-wide network effects, but near-monopolies tend to emerge in marketplaces with firm-specific network effects.
https://authors.library.caltech.edu/records/cg4a0-30b78
Special Issue on Pricing and Incentives in Networks and Systems: Guest Editors' Introduction
https://resolver.caltech.edu/CaltechAUTHORS:20141204-160518575
Authors: {'items': [{'id': 'Courcoubetis-C', 'name': {'family': 'Courcoubetis', 'given': 'Costas'}}, {'id': 'Guérin-R', 'name': {'family': 'Guérin', 'given': 'Roch'}}, {'id': 'Loiseau-P', 'name': {'family': 'Loiseau', 'given': 'Patrick'}}, {'id': 'Parkes-D', 'name': {'family': 'Parkes', 'given': 'David'}}, {'id': 'Walrand-J', 'name': {'family': 'Walrand', 'given': 'Jean'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2014
DOI: 10.1145/2665064
Today's communication networks and networked systems are highly complex and heterogeneous and are often owned by profit-making entities. For new technologies or infrastructure designs to be adopted, they must not only be based on sound engineering performance considerations but also present the right economic incentives. Recent changes in regulations of the telecommunication industry make such economic considerations even more urgent. For instance, new concerns such as network neutrality have a significant impact on the evolution of communication networks.

At the same time, communication networks and networked systems support increasing economic activity based on applications and services such as cloud computing, social networks, and peer-to-peer networks. These applications pose new challenges, including the development of good pricing and incentive mechanisms to promote effective system-wide behavior. Similarly, the security and privacy of these applications are themselves heavily dependent on economic considerations, which therefore need to be fully understood.

To address these questions, this special issue brings together a relevant set of state-of-the-art research contributions on complementary topics including communication networks, wireless networks, web content and security, and the use of multidisciplinary approaches ranging from game theory and economic modeling to algorithms and mechanism design, and including empirical studies.
https://authors.library.caltech.edu/records/5d2yb-5b959
Potential Games Are Necessary to Ensure Pure Nash Equilibria in Cost Sharing Games
https://resolver.caltech.edu/CaltechAUTHORS:20150108-102735345
Authors: {'items': [{'id': 'Gopalakrishnan-R', 'name': {'family': 'Gopalakrishnan', 'given': 'Ragavendran'}}, {'id': 'Marden-J-R', 'name': {'family': 'Marden', 'given': 'Jason R.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2014
DOI: 10.1287/moor.2014.0651
We consider the problem of designing distribution rules to share "welfare" (cost or revenue) among individually strategic agents. There are many known distribution rules that guarantee the existence of a (pure) Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however, a characterization of the space of distribution rules that guarantees the existence of a Nash equilibrium is unknown. Our work provides an exact characterization of this space, given arbitrary local welfare functions, for a specific class of scalable and separable games that includes a variety of applications such as facility location, routing, network formation, and coverage games.
https://authors.library.caltech.edu/records/3sgh0-qn985
Characterizing the impact of the workload on the value of dynamic resizing in data centers
https://resolver.caltech.edu/CaltechAUTHORS:20150423-142020594
Authors: {'items': [{'id': 'Wang-Kai-ComputerScience', 'name': {'family': 'Wang', 'given': 'Kai'}}, {'id': 'Lin-Minghong', 'name': {'family': 'Lin', 'given': 'Minghong'}}, {'id': 'Ciucu-F', 'name': {'family': 'Ciucu', 'given': 'Florin'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Lin-Chuang', 'name': {'family': 'Lin', 'given': 'Chuang'}}]}
Year: 2015
DOI: 10.1016/j.peva.2014.12.001
Energy consumption imposes a significant cost for data centers; yet much of that energy is used to maintain excess service capacity during periods of predictably low load. As a result, there has recently been interest in developing designs that allow the service capacity to be dynamically resized to match the current workload. However, there is still much debate about the value of such approaches in real settings. In this paper, we show that the value of dynamic resizing is highly dependent on statistics of the workload process. In particular, both slow time-scale non-stationarities of the workload (e.g., the peak-to-mean ratio) and the fast time-scale stochasticity (e.g., the burstiness of arrivals) play key roles. To illustrate the impact of these factors, we combine optimization-based modeling of the slow time-scale with stochastic modeling of the fast time-scale. Within this framework, we provide both analytic and numerical results characterizing when dynamic resizing does (and does not) provide benefits.
https://authors.library.caltech.edu/records/39n1e-c6366
Online Convex Optimization Using Predictions
https://resolver.caltech.edu/CaltechAUTHORS:20160823-104652452
Authors: {'items': [{'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Agarwal-A', 'name': {'family': 'Agarwal', 'given': 'Anish'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Barman-S', 'name': {'family': 'Barman', 'given': 'Siddharth'}}, {'id': 'Andrew-L-L-H', 'name': {'family': 'Andrew', 'given': 'Lachlan L. H.'}}]}
Year: 2015
DOI: 10.1145/2796314.2745854
Making use of predictions is a crucial, but under-explored, area of online algorithms. This paper studies a class of online optimization problems where we have external noisy predictions available. We propose a stochastic prediction error model that generalizes prior models in the learning and stochastic control communities, incorporates correlation among prediction errors, and captures the fact that predictions improve as time passes. We prove that achieving sublinear regret and constant competitive ratio for online algorithms requires the use of an unbounded prediction window in adversarial settings, but that under more realistic stochastic prediction error models it is possible to use Averaging Fixed Horizon Control (AFHC) to simultaneously achieve sublinear regret and constant competitive ratio in expectation using only a constant-sized prediction window. Furthermore, we show that the performance of AFHC is tightly concentrated around its mean.
https://authors.library.caltech.edu/records/k9exb-x2241
Speculation-aware Cluster Scheduling
https://resolver.caltech.edu/CaltechAUTHORS:20151015-074525793
Authors: {'items': [{'id': 'Ren-Xiaoqi', 'name': {'family': 'Ren', 'given': 'Xiaoqi'}, 'orcid': '0000-0002-1121-9046'}, {'id': 'Ananthanarayanan-G', 'name': {'family': 'Ananthanarayanan', 'given': 'Ganesh'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Yu-Minlan', 'name': {'family': 'Yu', 'given': 'Minlan'}}]}
Year: 2015
DOI: 10.1145/2825236.2825254
Stragglers are a crucial roadblock to achieving predictable performance in today's clusters. Speculation has been widely adopted in order to mitigate the impact of stragglers; however, speculation mechanisms are designed and operated independently of job scheduling when, in fact, scheduling a speculative copy of a task has a direct impact on the resources available for other jobs. In this work, based on a simple model and its analysis, we design Hopper, a job scheduler that is speculation-aware, i.e., that integrates the tradeoffs associated with speculation into job scheduling decisions.
https://authors.library.caltech.edu/records/13w39-9b151
Greening multi-tenant data center demand response
https://resolver.caltech.edu/CaltechAUTHORS:20150924-103805864
Authors: {'items': [{'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Ren-Xiaoqi', 'name': {'family': 'Ren', 'given': 'Xiaoqi'}, 'orcid': '0000-0002-1121-9046'}, {'id': 'Ren-Shaolei', 'name': {'family': 'Ren', 'given': 'Shaolei'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2015
DOI: 10.1016/j.peva.2015.06.014
Data centers have emerged as promising resources for demand response, particularly for emergency demand response (EDR), which protects the power grid from blackouts during emergencies. However, currently, data centers typically participate in EDR by turning on backup (diesel) generators, which is both expensive and environmentally unfriendly. In this paper, we focus on "greening" demand response in multi-tenant data centers, i.e., colocation data centers, by designing a pricing mechanism through which the data center operator can efficiently extract load reductions from tenants during emergency periods for EDR. In particular, we propose a pricing mechanism for both mandatory and voluntary EDR programs, ColoEDR, that is based on parameterized supply function bidding and provides provably near-optimal efficiency guarantees, both when tenants are price-taking and when they are price-anticipating. In addition to analytic results, we extend the literature on supply function mechanism design, and evaluate ColoEDR using trace-based simulation studies. These validate the efficiency analysis and conclude that the pricing mechanism is beneficial both to the environment and to the data center operator (by decreasing the need for backup diesel generation), while also aiding tenants (by providing payments for load reductions).
https://authors.library.caltech.edu/records/wdn4g-a4g56
Greening multi-tenant data center demand response
https://resolver.caltech.edu/CaltechAUTHORS:20151015-080757078
Authors: {'items': [{'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Ren-Xiaoqi', 'name': {'family': 'Ren', 'given': 'Xiaoqi'}, 'orcid': '0000-0002-1121-9046'}, {'id': 'Ren-Shaolei', 'name': {'family': 'Ren', 'given': 'Shaolei'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2015
DOI: 10.1145/2825236.2825252
Data centers have become critical resources for emergency demand response (EDR). However, currently, data centers typically participate in EDR by turning on backup (diesel) generators, which are both expensive and environmentally unfriendly. In this paper, we focus on "greening" demand response in multi-tenant data centers by incentivizing tenants' load reduction and reducing on-site diesel generation. Our proposed mechanism, ColoEDR, which is based on a parameterized supply function mechanism, provides provably near-optimal efficiency guarantees, both when tenants are price-taking and when they are price-anticipating.
https://authors.library.caltech.edu/records/q0619-qb556
A Unifying Market Power Measure for Deregulated Transmission-Constrained Electricity Markets
https://resolver.caltech.edu/CaltechAUTHORS:20150821-103411487
Authors: {'items': [{'id': 'Bose-Subhonmesh', 'name': {'family': 'Bose', 'given': 'Subhonmesh'}, 'orcid': '0000-0002-3445-4479'}, {'id': 'Wu-Chenye', 'name': {'family': 'Wu', 'given': 'Chenye'}}, {'id': 'Xu-Yunjian', 'name': {'family': 'Xu', 'given': 'Yunjian'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Mohsenian-Rad-H', 'name': {'family': 'Mohsenian-Rad', 'given': 'Hamed'}}]}
Year: 2015
DOI: 10.1109/TPWRS.2014.2360216
Market power assessment is a prime concern when designing a deregulated electricity market. In this paper, we propose a new functional market power measure, termed transmission constrained network flow (TCNF), that unifies three large classes of transmission constrained structural market power indices in the literature: residual supply based, network flow based, and minimal generation based. Furthermore, it is suitable for demand response and renewable integration and hence more amenable to identifying market power in the future smart grid. The measure is defined abstractly and allows incorporation of power flow equations in multiple ways; we investigate current market operations using a DC approximation and further explore the possibility of including detailed AC power flow models through semidefinite relaxation, and interior-point algorithms from Matpower. Finally, we provide extensive simulations on IEEE benchmark systems and highlight the complex interaction of engineering constraints with market power assessment.
https://authors.library.caltech.edu/records/9ayqz-pdm47
Hopper: Decentralized Speculation-aware Cluster Scheduling at Scale
https://resolver.caltech.edu/CaltechAUTHORS:20161213-150643278
Authors: {'items': [{'id': 'Ren-Xiaoqi', 'name': {'family': 'Ren', 'given': 'Xiaoqi'}, 'orcid': '0000-0002-1121-9046'}, {'id': 'Ananthanarayanan-G', 'name': {'family': 'Ananthanarayanan', 'given': 'Ganesh'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Yu-Minlan', 'name': {'family': 'Yu', 'given': 'Minlan'}}]}
Year: 2015
DOI: 10.1145/2829988.2787481
As clusters continue to grow in size and complexity, providing scalable and predictable performance is an increasingly important challenge. A crucial roadblock to achieving predictable performance is stragglers, i.e., tasks that take significantly longer than expected to run. At this point, speculative execution has been widely adopted to mitigate the impact of stragglers. However, speculation mechanisms are designed and operated independently of job scheduling when, in fact, scheduling a speculative copy of a task has a direct impact on the resources available for other jobs. In this work, we present Hopper, a job scheduler that is speculation-aware, i.e., that integrates the tradeoffs associated with speculation into job scheduling decisions. We implement both centralized and decentralized prototypes of the Hopper scheduler and show that 50% (66%) improvements over state-of-the-art centralized (decentralized) schedulers and speculation strategies can be achieved through the coordination of scheduling and speculation.
https://authors.library.caltech.edu/records/gs16m-2qx02
The Empirical Implications of Privacy-Aware Choice
https://resolver.caltech.edu/CaltechAUTHORS:20160602-090011567
Authors: {'items': [{'id': 'Cummings-R', 'name': {'family': 'Cummings', 'given': 'Rachel'}}, {'id': 'Echenique-F', 'name': {'family': 'Echenique', 'given': 'Federico'}, 'orcid': '0000-0002-1567-6770'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2016
DOI: 10.1287/opre.2015.1458
This paper initiates the study of the testable implications of choice data in settings where agents have privacy preferences. We adapt the standard conceptualization of consumer choice theory to a situation where the consumer is aware of, and has preferences over, the information revealed by her choices. The main message of the paper is that little can be inferred about consumers' preferences once we introduce the possibility that the consumer has concerns about privacy. This holds even when consumers' privacy preferences are assumed to be monotonic and separable. This motivates the consideration of stronger assumptions and, to that end, we introduce an additive model for privacy preferences that has testable implications.
https://authors.library.caltech.edu/records/60xaz-bkz11
When Heavy-Tailed and Light-Tailed Flows Compete: The Response Time Tail Under Generalized Max-Weight Scheduling
https://resolver.caltech.edu/CaltechAUTHORS:20160527-135934012
Authors: {'items': [{'id': 'Nair-J', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Jagannathan-K', 'name': {'family': 'Jagannathan', 'given': 'Krishna'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2016
DOI: 10.1109/TNET.2015.2415874
This paper focuses on the design and analysis of scheduling policies for multi-class queues, such as those found in wireless networks and high-speed switches. In this context, we study the response-time tail under generalized max-weight policies in settings where the traffic flows are highly asymmetric. Specifically, we consider a setting where a bursty flow, modeled using heavy-tailed statistics, competes with a more benign, light-tailed flow. In this setting, we prove that classical max-weight scheduling, which is known to be throughput optimal, results in the light-tailed flow having heavy-tailed response times. However, we show that via a careful design of the inter-queue scheduling policy (from the class of generalized max-weight policies) and the intra-queue scheduling policies, it is possible to maintain throughput optimality and guarantee light-tailed delays for the light-tailed flow, without affecting the response-time tail for the heavy-tailed flow.
https://authors.library.caltech.edu/records/bgcpn-h6e98
Using Predictions in Online Optimization: Looking Forward with an Eye on the Past
https://resolver.caltech.edu/CaltechAUTHORS:20170110-154433095
Authors: {'items': [{'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Comden-J', 'name': {'family': 'Comden', 'given': 'Joshua'}}, {'id': 'Liu-Zhenhua', 'name': {'family': 'Liu', 'given': 'Zhenhua'}}, {'id': 'Gandhi-A', 'name': {'family': 'Gandhi', 'given': 'Anshul'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2016
DOI: 10.1145/2964791.2901464
We consider online convex optimization (OCO) problems with switching costs and noisy predictions. While the design of online algorithms for OCO problems has received considerable attention, the design of algorithms in the context of noisy predictions is largely open. To this point, two promising algorithms have been proposed: Receding Horizon Control (RHC) and Averaging Fixed Horizon Control (AFHC). The comparison of these policies is largely open. AFHC has been shown to provide better worst-case performance, while RHC outperforms AFHC in many realistic settings. In this paper, we introduce a new class of policies, Committed Horizon Control (CHC), that generalizes both RHC and AFHC. We provide average-case analysis and concentration results for CHC policies, yielding the first analysis of RHC for OCO problems with noisy predictions. Further, we provide explicit results characterizing the optimal CHC policy as a function of properties of the prediction noise, e.g., variance and correlation structure. Our results provide a characterization of when AFHC outperforms RHC and vice versa, as well as when other CHC policies outperform both RHC and AFHC.
https://authors.library.caltech.edu/records/jgxjc-j3z92
Provisioning of Large-Scale Systems: The Interplay Between Network Effects and Strategic Behavior in the User Base
https://resolver.caltech.edu/CaltechAUTHORS:20160701-073920780
Authors: {'items': [{'id': 'Nair-J', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Zwart-B', 'name': {'family': 'Zwart', 'given': 'Bert'}}]}
Year: 2016
DOI: 10.1287/mnsc.2015.2210
In this paper, we consider the problem of capacity provisioning for an online service supported by advertising. We analyze the strategic interaction between the service provider and the user base in this setting, modeling positive network effects, as well as congestion sensitivity in the user base. We focus specifically on the influence of positive network effects, as well as the impact of noncooperative behavior in the user base, on the firm's capacity provisioning decision and its profit. Our analysis reveals that stronger positive network effects, as well as noncooperation in the user base, drive the service into a more congested state and lead to increased profit for the service provider. However, the impact of noncooperation, or "anarchy," in the user base strongly dominates the impact of network effects.
https://authors.library.caltech.edu/records/scptd-gyp92
Routing and Staffing When Servers Are Strategic
https://resolver.caltech.edu/CaltechAUTHORS:20160915-085435818
Authors: {'items': [{'id': 'Gopalakrishnan-R', 'name': {'family': 'Gopalakrishnan', 'given': 'Ragavendran'}}, {'id': 'Doroudi-S', 'name': {'family': 'Doroudi', 'given': 'Sherwin'}}, {'id': 'Ward-A-R', 'name': {'family': 'Ward', 'given': 'Amy R.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2016
DOI: 10.1287/opre.2016.1506
Traditionally, research focusing on the design of routing and staffing policies for service systems has modeled servers as having fixed (possibly heterogeneous) service rates. However, service systems are generally staffed by people. Furthermore, people respond to workload incentives; that is, how hard a person works can depend both on how much work there is and how the work is divided between the people responsible for it. In a service system, the routing and staffing policies control such workload incentives; and so the rate servers work will be impacted by the system's routing and staffing policies. This observation has consequences when modeling service system performance, and our objective in this paper is to investigate those consequences.
We do this in the context of the M/M/N queue, which is the canonical model for large service systems. First, we present a model for "strategic" servers that choose their service rate to maximize a trade-off between an "effort cost," which captures the idea that servers exert more effort when working at a faster rate, and a "value of idleness," which assumes that servers value having idle time. Next, we characterize the symmetric Nash equilibrium service rate under any routing policy that routes based on the server idle time (such as the longest idle server first policy). We find that the system must operate in a quality-driven regime, in which servers have idle time, for an equilibrium to exist. The implication is that to have an equilibrium solution the staffing must have a first-order term that strictly exceeds that of the common square-root staffing policy. Then, within the class of policies that admit an equilibrium, we (asymptotically) solve the problem of minimizing the total cost, when there are linear staffing costs and linear waiting costs. Finally, we end by exploring the question of whether routing policies that are based on the service rate, instead of the server idle time, can improve system performance.
https://authors.library.caltech.edu/records/shbgk-j4w55
Opportunities for Price Manipulation by Aggregators in Electricity Markets
https://resolver.caltech.edu/CaltechAUTHORS:20160630-135555342
Authors: {'items': [{'id': 'Azizan-Ruhi-N', 'name': {'family': 'Azizan Ruhi', 'given': 'Navid'}, 'orcid': '0000-0002-4299-2963'}, {'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Dvijotham-K', 'name': {'family': 'Dvijotham', 'given': 'Krishnamurthy'}, 'orcid': '0000-0002-1328-4677'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2016
DOI: 10.1145/3003977.3003995
Aggregators are playing an increasingly crucial role for integrating renewable generation into power systems. However, the intermittent nature of renewable generation makes market interactions of aggregators difficult to monitor and regulate, raising concerns about potential market manipulations. In this paper, we address this issue by quantifying the profit an aggregator can obtain through strategic curtailment of generation in an electricity market. We show that, while the problem of maximizing the benefit from curtailment is hard in general, efficient algorithms exist when the topology of the network is radial (acyclic). Further, we highlight that significant increases in profit can be obtained through strategic curtailment in practical settings.
https://authors.library.caltech.edu/records/3q9pa-2am72
Energy Portfolio Optimization of Data Centers
https://resolver.caltech.edu/CaltechAUTHORS:20160906-111711386
Authors: {'items': [{'id': 'Ghamkhari-M', 'name': {'family': 'Ghamkhari', 'given': 'Mahdi'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Mohsenian-Rad-H', 'name': {'family': 'Mohsenian-Rad', 'given': 'Hamed'}}]}
Year: 2017
DOI: 10.1109/TSG.2015.2510428
Data centers have diverse options to procure electricity. However, the current literature on exploiting these options is very fractured. Specifically, it is still not clear how utilizing one energy option may affect selecting other energy options. To address this open problem, we propose a unified energy portfolio optimization framework that takes into consideration a broad range of energy choices for data centers. Despite the complexity and nonlinearity of the original models, the proposed analysis boils down to solving tractable linear mixed-integer stochastic programs. Using experimental electricity market and Internet workload data, various insightful numerical observations are reported. It is shown that the key to link different energy options with different short- and long-term profit characteristics is to conduct risk management at different time horizons. Also, there is a direct relationship between data centers' service-level agreement parameters and their ability to exploit certain energy options. The use of on-site storage and the deployment of geographical workload distribution can particularly help data centers in utilizing high-risk energy choices, such as offering ancillary services or participating in wholesale electricity markets.https://authors.library.caltech.edu/records/pq6yx-7py56Controlling the Variability of Capacity Allocations Using Service Deferrals
https://resolver.caltech.edu/CaltechAUTHORS:20170814-150846810
Authors: {'items': [{'id': 'Ferragut-A', 'name': {'family': 'Ferragut', 'given': 'Andres'}}, {'id': 'Paganini-F', 'name': {'family': 'Paganini', 'given': 'Fernando'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2017
DOI: 10.1145/3086506
Ensuring predictability is a crucial goal for service systems. Traditionally, research has focused on designing systems that ensure predictable performance for service requests. Motivated by applications in cloud computing and electricity markets, this article focuses on a different form of predictability: predictable allocations of service capacity. The focus of the article is a new model where service capacity can be scaled dynamically and service deferrals (subject to deadline constraints) can be used to control the variability of the active service capacity. Four natural policies for the joint problem of scheduling and managing the active service capacity are considered. For each, the variability of service capacity and the likelihood of deadline misses are derived. Further, the paper illustrates how pricing can be used to provide incentives for jobs to reveal deadlines and thus enable the possibility of service deferral in systems where the flexibility of jobs is not known to the system a priori.https://authors.library.caltech.edu/records/ztspb-8jn76Thinking Fast and Slow: Optimization Decomposition Across Timescales
https://resolver.caltech.edu/CaltechAUTHORS:20180409-162520743
Authors: {'items': [{'id': 'Goel-Gautam', 'name': {'family': 'Goel', 'given': 'Gautam'}, 'orcid': '0000-0002-7054-7218'}, {'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2017
DOI: 10.1145/3152042.3152052
Many real-world control systems, such as the smart grid and software defined networks, have decentralized components that react quickly using local information and centralized components that react slowly using a more global view. This work seeks to provide a theoretical framework for how to design controllers that are decomposed across timescales in this way. The framework is analogous to how the network utility maximization framework uses optimization decomposition to distribute a global control problem across independent controllers, each of which solves a local problem; except our goal is to decompose a global problem temporally, extracting a timescale separation. Our results highlight that decomposition of a multi-timescale controller into a fast timescale, reactive controller and a slow timescale, predictive controller can be near-optimal in a strong sense. In particular, we exhibit such a design, named Multi-timescale Reflexive Predictive Control (MRPC), which maintains a per-timestep cost within a constant factor of the offline optimal in an adversarial setting.https://authors.library.caltech.edu/records/fvkzs-7f132Networked Cournot Competition in Platform Markets: Access Control and Efficiency Loss
https://resolver.caltech.edu/CaltechAUTHORS:20180409-162520446
Authors: {'items': [{'id': 'Lin-Weixuan', 'name': {'family': 'Lin', 'given': 'Weixuan'}}, {'id': 'Pang-John-Z-F', 'name': {'family': 'Pang', 'given': 'John Z. F.'}, 'orcid': '0000-0002-6485-7922'}, {'id': 'Bitar-E', 'name': {'family': 'Bitar', 'given': 'Eilyan'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2017
DOI: 10.1145/3152042.3152048
This paper studies network design and efficiency loss in open and discriminatory access platforms under networked Cournot competition. In open platforms, every firm connects to every market, while discriminatory platforms limit connections between firms and markets to improve social welfare. We provide tight bounds on the efficiency loss of both platforms: (i) the efficiency loss at a Nash equilibrium under open access is bounded by 3/2, and (ii) for discriminatory access platforms, we provide a greedy algorithm for optimizing network connections that guarantees efficiency loss at a Nash equilibrium is bounded by 4/3, under an assumption on the linearity of cost functions.https://authors.library.caltech.edu/records/3xqpx-cza86Distributed Optimization via Local Computation Algorithms
https://resolver.caltech.edu/CaltechAUTHORS:20180409-162519917
Authors: {'items': [{'id': 'London-P', 'name': {'family': 'London', 'given': 'Palma'}}, {'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Vardi-S', 'name': {'family': 'Vardi', 'given': 'Shai'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2017
DOI: 10.1145/3152042.3152053
We propose a new approach for distributed optimization based on an emerging area of theoretical computer science -- local computation algorithms. The approach is fundamentally different from existing methodologies and provides a number of benefits, such as robustness to link failure and adaptivity in dynamic settings. Specifically, we develop an algorithm, LOCO, that given a convex optimization problem P with n variables and a "sparse" linear constraint matrix with m constraints, provably finds a solution as good as that of the best online algorithm for P using only O(log(n+m)) messages with high probability. The approach is not iterative and communication is restricted to a localized neighborhood. In addition to analytic results, we show numerically that the performance improvements over classical approaches for distributed optimization are significant, e.g., it uses orders of magnitude less communication than ADMM.https://authors.library.caltech.edu/records/rgy54-vfv22A First Look at Power Attacks in Multi-Tenant Data Centers
https://resolver.caltech.edu/CaltechAUTHORS:20180409-162520179
Authors: {'items': [{'id': 'Islam-M-A', 'name': {'family': 'Islam', 'given': 'Mohammad A.'}}, {'id': 'Ren-Shaolei', 'name': {'family': 'Ren', 'given': 'Shaolei'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2017
DOI: 10.1145/3152042.3152070
Oversubscription increases the utilization of expensive power infrastructure in multi-tenant data centers, but it can create dangerous emergencies and outages if the designed power capacity is exceeded. Despite the safeguards in place today to prevent power outages, this extended abstract demonstrates that multi-tenant data centers are vulnerable to well-timed power attacks launched by a malicious tenant (i.e., attacker). Further, we show that there is a physical side channel -- a thermal side channel due to hot air recirculation -- that contains information about the benign tenants' runtime power usage. We develop a state-augmented Kalman filter that guides an attacker to precisely time its power attacks at moments that coincide with the benign tenants' high power demand, thus overloading the designed power capacity. Our experimental results show that an attacker can capture 53% of all attack opportunities, significantly compromising the data center availability.https://authors.library.caltech.edu/records/1t9q5-vns23Distributed optimization decomposition for joint economic dispatch and frequency regulation
https://resolver.caltech.edu/CaltechAUTHORS:20170315-153413222
Authors: {'items': [{'id': 'Cai-Desmond-W-H', 'name': {'family': 'Cai', 'given': 'Desmond'}, 'orcid': '0000-0001-9207-1890'}, {'id': 'Mallada-E', 'name': {'family': 'Mallada', 'given': 'Enrique'}, 'orcid': '0000-0003-1568-1833'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2017
DOI: 10.1109/TPWRS.2017.2682235
Economic dispatch and frequency regulation are typically viewed as fundamentally different problems in power systems and, hence, are typically studied separately. In this paper, we frame and study a joint problem that co-optimizes both slow timescale economic dispatch resources and fast timescale frequency regulation resources. We show how the joint problem can be decomposed without loss of optimality into slow and fast timescale sub-problems that have appealing interpretations as the economic dispatch and frequency regulation problems respectively. We solve the fast timescale sub-problem using a distributed frequency control algorithm that preserves network stability during transients. We solve the slow timescale subproblem using an efficient market mechanism that coordinates with the fast timescale sub-problem. We investigate the performance of our approach on the IEEE 24-bus reliability test system.https://authors.library.caltech.edu/records/vrv3s-mv932The Economics of the Cloud
https://resolver.caltech.edu/CaltechAUTHORS:20180409-162521250
Authors: {'items': [{'id': 'Anselmi-J', 'name': {'family': 'Anselmi', 'given': 'Jonatha'}}, {'id': 'Ardagna-D', 'name': {'family': 'Ardagna', 'given': 'Danilo'}}, {'id': 'Lui-John-C-S', 'name': {'family': 'Lui', 'given': 'John C. S.'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Xu-Yunjian', 'name': {'family': 'Xu', 'given': 'Yunjian'}}, {'id': 'Yang-Zichao', 'name': {'family': 'Yang', 'given': 'Zichao'}}]}
Year: 2017
DOI: 10.1145/3086574
This article proposes a model to study the interaction of price competition and congestion in the cloud computing marketplace. Specifically, we propose a three-tier market model that captures a marketplace with users purchasing services from Software-as-a-Service (SaaS) providers, which in turn purchase computing resources from either Platform-as-a-Service (PaaS) or Infrastructure-as-a-Service (IaaS) providers. Within each level, we define and characterize market equilibria. Further, we use these characterizations to understand the relative profitability of SaaSs and PaaSs/IaaSs and to understand the impact of price competition on the user experienced performance, that is, the "price of anarchy" of the cloud marketplace. Our results highlight that both of these depend fundamentally on the degree to which congestion results from shared or dedicated resources in the cloud.https://authors.library.caltech.edu/records/hsy5n-8f067Provisioning of ad-supported cloud services: The role of competition
https://resolver.caltech.edu/CaltechAUTHORS:20180404-093242290
Authors: {'items': [{'id': 'Nair-J', 'name': {'family': 'Nair', 'given': 'Jayakrishnan'}}, {'id': 'Subramanian-V-G', 'name': {'family': 'Subramanian', 'given': 'Vijay'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1016/j.peva.2018.01.001
Motivated by cloud services, we consider the interplay of network effects, congestion, and competition in ad-supported services. We study the strategic interactions between competing service providers and a user base, modeling congestion sensitivity and two forms of positive network effects: network effects that are either "firm-specific" or "industry-wide." Our analysis reveals that users are generally no better off due to the competition in a marketplace of ad-supported services. Further, our analysis highlights an important contrast between firm-specific and industry-wide network effects: Firms can coexist in a marketplace with industry-wide network effects, but near-monopolies tend to emerge in marketplaces with firm-specific network effects.https://authors.library.caltech.edu/records/db7ba-78h03Datum: Managing Data Purchasing and Data Placement in a Geo-Distributed Data Market
https://resolver.caltech.edu/CaltechAUTHORS:20180323-104121108
Authors: {'items': [{'id': 'Ren-Xiaoqi', 'name': {'family': 'Ren', 'given': 'Xiaoqi'}, 'orcid': '0000-0002-1121-9046'}, {'id': 'London-P', 'name': {'family': 'London', 'given': 'Palma'}}, {'id': 'Ziani-Juba', 'name': {'family': 'Ziani', 'given': 'Juba'}, 'orcid': '0000-0002-3324-4349'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1109/TNET.2018.2811374
This paper studies two design tasks faced by a geo-distributed cloud data market: which data to purchase (data purchasing) and where to place/replicate the data for delivery (data placement). We show that the joint problem of data purchasing and data placement within a cloud data market can be viewed as a facility location problem and is thus NP-hard. However, we give a provably optimal algorithm for the case of a data market made up of a single data center and then generalize the structure from the single data center setting in order to develop a near-optimal, polynomial-time algorithm for a geo-distributed data market. The resulting design, Datum, decomposes the joint purchasing and placement problem into two subproblems, one for data purchasing and one for data placement, using a transformation of the underlying bandwidth costs. We show, via a case study, that Datum is near optimal (within 1.6%) in practical settings.https://authors.library.caltech.edu/records/3vsjv-73n39Message from the Editors
https://resolver.caltech.edu/CaltechAUTHORS:20180618-151602165
Authors: {'items': [{'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Akella-A', 'name': {'family': 'Akella', 'given': 'Aditya'}}]}
Year: 2018
DOI: 10.1145/3224418
This issue marks the completion of the first year of the Proceedings of the ACM on Measurement and Analysis of Computing Systems (POMACS). POMACS was among the first three journals joining the recently launched Proceedings of the ACM (PACM) series, and with this issue POMACS has now published over 80 papers.
The goal of the PACM series is to showcase the highest quality research conducted in diverse areas of computer science, as represented by the ACM Special Interest Groups (SIGs). ACM POMACS focuses on the computer systems measurement and performance evaluation community and operates in close collaboration with the Special Interest Group SIGMETRICS. In fact, all the papers in the last three issues of POMACS will be presented during the SIGMETRICS annual conference this summer.https://authors.library.caltech.edu/records/q3p4d-wx792Minimal-variance distributed scheduling under strict demands and deadlines
https://resolver.caltech.edu/CaltechAUTHORS:20190125-161222101
Authors: {'items': [{'id': 'Nakahira-Yorie', 'name': {'family': 'Nakahira', 'given': 'Yorie'}, 'orcid': '0000-0003-3324-4602'}, {'id': 'Ferragut-A', 'name': {'family': 'Ferragut', 'given': 'Andres'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1145/3305218.3305224
Many modern schedulers can dynamically adjust their service capacity to match the incoming workload. At the same time, however, variability in service capacity often incurs operational and infrastructure costs. In this abstract, we characterize an optimal distributed algorithm that minimizes service capacity variability when scheduling jobs with deadlines. Specifically, we show that Exact Scheduling minimizes service capacity variance subject to strict demand and deadline requirements under stationary Poisson arrivals. Moreover, we show how close the performance of the optimal distributed algorithm is to that of the optimal centralized algorithm by deriving a competitive-ratio-like bound.https://authors.library.caltech.edu/records/r5nwn-55b22Failure Localization in Power Systems via Tree Partitions
https://resolver.caltech.edu/CaltechAUTHORS:20190128-124902110
Authors: {'items': [{'id': 'Guo-Linqi', 'name': {'family': 'Guo', 'given': 'Linqi'}}, {'id': 'Liang-Chen', 'name': {'family': 'Liang', 'given': 'Chen'}}, {'id': 'Zocca-A', 'name': {'family': 'Zocca', 'given': 'Alessandro'}, 'orcid': '0000-0001-6585-4785'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1145/3305218.3305247
Cascading failures in power systems propagate non-locally, making the control and mitigation of outages extremely hard. In this work, we use the emerging concept of the tree partition of transmission networks to provide an analytical characterization of line failure localizability in transmission systems. Our results rigorously formalize the well-known intuition that failures cannot cross bridges, and reveal a finer-grained concept that encodes more precise information on failure propagation within tree-partition regions. Specifically, when a non-bridge line is tripped, the impact of this failure only propagates within components of the tree partition defined by the bridges. In contrast, when a bridge line is tripped, the impact of this failure propagates globally across the network, affecting the power flow on all remaining lines. This characterization suggests that it is possible to improve the system robustness by temporarily switching off certain transmission lines, so as to create more, smaller components in the tree partition; thus spatially localizing line failures and making the grid less vulnerable to large outages.https://authors.library.caltech.edu/records/pk4s9-k5586Failure Localization in Power Systems via Tree Partitions
https://resolver.caltech.edu/CaltechAUTHORS:20190128-094206311
Authors: {'items': [{'id': 'Guo-Linqi', 'name': {'family': 'Guo', 'given': 'Linqi'}}, {'id': 'Liang-Chen', 'name': {'family': 'Liang', 'given': 'Chen'}}, {'id': 'Zocca-A', 'name': {'family': 'Zocca', 'given': 'Alessandro'}, 'orcid': '0000-0001-6585-4785'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1145/3305218.3305240
Cascading failures in power systems propagate non-locally, making the control and mitigation of outages extremely hard. In this work, we use the emerging concept of the tree partition of transmission networks to provide an analytical characterization of line failure localizability in transmission systems. Our results rigorously establish the well-known intuition in the power community that failures cannot cross bridges, and reveal a finer-grained concept that encodes more precise information on failure propagation within tree-partition regions. Specifically, when a non-bridge line is tripped, the impact of this failure only propagates within well-defined components, which we refer to as cells, of the tree partition defined by the bridges. In contrast, when a bridge line is tripped, the impact of this failure propagates globally across the network, affecting the power flow on all remaining transmission lines. This characterization suggests that it is possible to improve the system robustness by temporarily switching off certain transmission lines, so as to create more, smaller components in the tree partition; thus spatially localizing line failures and making the grid less vulnerable to large-scale outages. We illustrate this approach using the IEEE 118-bus test system and demonstrate that switching off a negligible portion of transmission lines allows the impact of line failures to be significantly more localized without substantial changes in line congestion.https://authors.library.caltech.edu/records/hfpa5-89z53Convex Prophet Inequalities
https://resolver.caltech.edu/CaltechAUTHORS:20190128-125707935
Authors: {'items': [{'id': 'Qin-Junjie', 'name': {'family': 'Qin', 'given': 'Junjie'}}, {'id': 'Rajagopal-R', 'name': {'family': 'Rajagopal', 'given': 'Ram'}}, {'id': 'Vardi-S', 'name': {'family': 'Vardi', 'given': 'Shai'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1145/3305218.3305250
We introduce a new class of prophet inequalities, convex prophet inequalities, in which a gambler observes a sequence of convex cost functions c_i(x_i) and is required to assign some fraction 0 ≤ x_i ≤ 1 to each, such that the sum of assigned values is exactly 1. The goal of the gambler is to minimize the sum of the costs. We provide an optimal algorithm for this problem, a dynamic program, and show that it can be implemented in polynomial time when the cost functions are polynomial. We also precisely characterize the competitive ratio of the optimal algorithm in the case where the gambler has an outside option and there are polynomial costs, showing that it grows as Θ(n^(p-1)/ℓ), where n is the number of stages, p is the degree of the polynomial costs and the coefficients of the cost functions are bounded by [ℓ,u].https://authors.library.caltech.edu/records/haaq6-faa02Convex Prophet Inequalities
https://resolver.caltech.edu/CaltechAUTHORS:20190128-080449154
Authors: {'items': [{'id': 'Qin-Junjie', 'name': {'family': 'Qin', 'given': 'Junjie'}}, {'id': 'Rajagopal-R', 'name': {'family': 'Rajagopal', 'given': 'Ram'}}, {'id': 'Vardi-S', 'name': {'family': 'Vardi', 'given': 'Shai'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1145/3305218.3305233
We introduce a new class of prophet inequalities, convex prophet inequalities, in which a gambler observes a sequence of convex cost functions c_i(x_i) and is required to assign some fraction 0 ≤ x_i ≤ 1 to each, such that the sum of assigned values is exactly 1. The goal of the gambler is to minimize the sum of the costs. We provide an optimal algorithm for this problem, a dynamic program, and show that it can be implemented in polynomial time when the cost functions are polynomial. We also precisely characterize the competitive ratio of the optimal algorithm in the case where the gambler has an outside option and there are polynomial costs, showing that it grows as Θ(n^(p-1)/l), where n is the number of stages, p is the degree of the polynomial costs and the coefficients of the cost functions are bounded by [l, u].https://authors.library.caltech.edu/records/sa4ns-rpz74Opportunities for Price Manipulation by Aggregators in Electricity Markets
https://resolver.caltech.edu/CaltechAUTHORS:20170524-171255595
Authors: {'items': [{'id': 'Azizan-Ruhi-N', 'name': {'family': 'Azizan Ruhi', 'given': 'Navid'}, 'orcid': '0000-0002-4299-2963'}, {'id': 'Dvijotham-K', 'name': {'family': 'Dvijotham', 'given': 'Krishnamurthy'}, 'orcid': '0000-0002-1328-4677'}, {'id': 'Chen-Niangjun', 'name': {'family': 'Chen', 'given': 'Niangjun'}, 'orcid': '0000-0002-2289-9737'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1109/TSG.2017.2694043
Aggregators of distributed generation are playing an increasingly crucial role in the integration of renewable energy in power systems. However, the intermittent nature of renewable generation makes market interactions of aggregators difficult to monitor and regulate, raising concerns about potential market manipulation by aggregators. In this paper, we study this issue by quantifying the profit an aggregator can obtain through strategic curtailment of generation in an electricity market. We show that, while the problem of maximizing the benefit from curtailment is hard in general, efficient algorithms exist when the topology of the network is radial (acyclic). Further, we highlight that significant increases in profit are possible via strategic curtailment in practical settings.https://authors.library.caltech.edu/records/ng1rq-1d650Minimal-Variance Distributed Deadline Scheduling in a Stationary Environment
https://resolver.caltech.edu/CaltechAUTHORS:20190128-091502098
Authors: {'items': [{'id': 'Nakahira-Yorie', 'name': {'family': 'Nakahira', 'given': 'Yorie'}, 'orcid': '0000-0003-3324-4602'}, {'id': 'Ferragut-A', 'name': {'family': 'Ferragut', 'given': 'Andres'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2018
DOI: 10.1145/3308897.3308925
Many modern schedulers can dynamically adjust their service capacity to match the incoming workload. At the same time, however, variability in service capacity often incurs operational and infrastructure costs. In this paper, we propose distributed algorithms that minimize service capacity variability when scheduling jobs with deadlines. Specifically, we show that Exact Scheduling minimizes service capacity variance subject to strict demand and deadline requirements under stationary Poisson arrivals. We also characterize the optimal distributed policies for more general settings with soft demand requirements, soft deadline requirements, or both. Additionally, we show how close the performance of the optimal distributed policy is to that of the optimal centralized policy by deriving a competitive-ratio-like bound.https://authors.library.caltech.edu/records/6db43-e2w16Competitive Online Optimization under Inventory Constraints
https://resolver.caltech.edu/CaltechAUTHORS:20190326-093440485
Authors: {'items': [{'id': 'Lin-Qiulin', 'name': {'family': 'Lin', 'given': 'Qiulin'}}, {'id': 'Yi-Hanling', 'name': {'family': 'Yi', 'given': 'Hanling'}}, {'id': 'Pang-John-Z-F', 'name': {'family': 'Pang', 'given': 'John'}, 'orcid': '0000-0002-6485-7922'}, {'id': 'Chen-Minghua', 'name': {'family': 'Chen', 'given': 'Minghua'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Honig-Michael', 'name': {'family': 'Honig', 'given': 'Michael'}}, {'id': 'Xiao-Yuanzhang', 'name': {'family': 'Xiao', 'given': 'Yuanzhang'}}]}
Year: 2019
DOI: 10.1145/3322205.3311081
This paper studies online optimization under inventory (budget) constraints. While online optimization is a well-studied topic, versions with inventory constraints have proven difficult. We consider a formulation of inventory-constrained optimization that is a generalization of the classic one-way trading problem and has a wide range of applications. We present a new algorithmic framework, CR-Pursuit, and prove that it achieves the minimal competitive ratio among all deterministic algorithms (up to a problem-dependent constant factor) for inventory-constrained online optimization. Our algorithm and its analysis not only simplify and unify the state-of-the-art results for the standard one-way trading problem, but they also establish novel bounds for generalizations including concave revenue functions. For example, for one-way trading with price elasticity, the CR-Pursuit algorithm achieves a competitive ratio that is within a small additive constant (i.e., 1/3) of the lower bound of ln Θ + 1, where Θ is the ratio between the maximum and minimum base prices.https://authors.library.caltech.edu/records/808wm-kzj19An Online Algorithm for Smoothed Regression and LQR Control
https://resolver.caltech.edu/CaltechAUTHORS:20190626-160602759
Authors: {'items': [{'id': 'Goel-Gautam', 'name': {'family': 'Goel', 'given': 'Gautam'}, 'orcid': '0000-0002-7054-7218'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2019
DOI: 10.48550/arXiv.1810.10132
We consider Online Convex Optimization (OCO) in the setting where the costs are m-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant competitive in this setting, with competitive ratio 3+O(1/m), irrespective of the ambient dimension. Additionally, we show that when the sequence of cost functions is ϵ-smooth, OBD has near-optimal dynamic regret and maintains strong per-round accuracy. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct competitive algorithms for a variety of online problems across learning and control, including online variants of ridge regression, logistic regression, maximum likelihood estimation, and LQR control.https://authors.library.caltech.edu/records/1w8b7-9m514Competitive Online Optimization under Inventory Constraints
https://resolver.caltech.edu/CaltechAUTHORS:20191218-160755746
Authors: {'items': [{'id': 'Lin-Qiulin', 'name': {'family': 'Lin', 'given': 'Qiulin'}}, {'id': 'Yi-Hanling', 'name': {'family': 'Yi', 'given': 'Hanling'}}, {'id': 'Pang-John-Z-F', 'name': {'family': 'Pang', 'given': 'John'}, 'orcid': '0000-0002-6485-7922'}, {'id': 'Chen-Minghua', 'name': {'family': 'Chen', 'given': 'Minghua'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Honig-Michael', 'name': {'family': 'Honig', 'given': 'Michael'}}, {'id': 'Xiao-Yuanzhang', 'name': {'family': 'Xiao', 'given': 'Yuanzhang'}}]}
Year: 2019
DOI: 10.1145/3309697.3331495
This paper studies online optimization under inventory (budget) constraints. While online optimization is a well-studied topic, versions with inventory constraints have proven difficult. We consider a formulation of inventory-constrained optimization that is a generalization of the classic one-way trading problem and has a wide range of applications. We present a new algorithmic framework, CR-Pursuit, and prove that it achieves the optimal competitive ratio among all deterministic algorithms (up to a problem-dependent constant factor) for inventory-constrained online optimization. Our algorithm and its analysis not only simplify and unify the state-of-the-art results for the standard one-way trading problem, but they also establish novel bounds for generalizations including concave revenue functions. For example, for one-way trading with price elasticity, CR-Pursuit achieves a competitive ratio within a small additive constant (i.e., 1/3) of the lower bound of ln Θ + 1, where Θ is the ratio between the maximum and minimum base prices.https://authors.library.caltech.edu/records/e7n6z-bn283On the Role of a Market Maker in Networked Cournot Competition
https://resolver.caltech.edu/CaltechAUTHORS:20190905-100922515
Authors: {'items': [{'id': 'Cai-Desmond-W-H', 'name': {'family': 'Cai', 'given': 'Desmond'}, 'orcid': '0000-0001-9207-1890'}, {'id': 'Bose-Subhonmesh', 'name': {'family': 'Bose', 'given': 'Subhonmesh'}, 'orcid': '0000-0002-3445-4479'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2019
DOI: 10.1287/moor.2018.0961
We study Cournot competition among firms in a networked marketplace that is centrally managed by a market maker. In particular, we study a situation in which a market maker facilitates trade between geographically separate markets via a constrained transport network. Our focus is on understanding the consequences of the design of the market maker and providing tools for optimal design. To that end, we provide a characterization of the equilibrium outcomes of the game between the firms and the market maker. Our results highlight that the equilibrium structure is impacted dramatically by the market maker's objective—depending on the objective, there may be a unique equilibrium, multiple equilibria, or no equilibria. Furthermore, the game may be a potential game (as in the case of classical Cournot competition) or not. Beyond characterizing the equilibria of the game, we provide an approach for designing the market maker to optimize a design objective (e.g., social welfare) at the equilibrium of the game. Additionally, we use our results to explore the value of transport (trade) and the efficiency of the market maker (compared with a single aggregate market).https://authors.library.caltech.edu/records/rxvjs-y9y86Communication-Aware Scheduling of Precedence-Constrained Tasks
https://resolver.caltech.edu/CaltechAUTHORS:20191205-110845019
Authors: {'items': [{'id': 'Su-Yu', 'name': {'family': 'Su', 'given': 'Yu'}}, {'id': 'Ren-Xiaoqi', 'name': {'family': 'Ren', 'given': 'Xiaoqi'}, 'orcid': '0000-0002-1121-9046'}, {'id': 'Vardi-S', 'name': {'family': 'Vardi', 'given': 'Shai'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'He-Yuxiong', 'name': {'family': 'He', 'given': 'Yuxiong'}}]}
Year: 2019
DOI: 10.1145/3374888.3374897
Jobs in large-scale machine learning platforms are expressed using a computational graph of tasks with precedence constraints. To handle such precedence-constrained tasks that have machine-dependent communication demands in settings with heterogeneous service rates and communication times, we propose a new scheduling framework, Generalized Earliest Time First (GETF), that improves upon state-of-the-art results in the area. Specifically, we provide the first provable, worst-case approximation guarantee for the goal of minimizing the makespan of tasks with precedence constraints on related machines with machine-dependent communication times.https://authors.library.caltech.edu/records/zp2g5-e1f20An Online Algorithm for Smoothed Online Convex Optimization
https://resolver.caltech.edu/CaltechAUTHORS:20191205-111324876
Authors: {'items': [{'id': 'Goel-Gautam', 'name': {'family': 'Goel', 'given': 'Gautam'}, 'orcid': '0000-0002-7054-7218'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2019
DOI: 10.1145/3374888.3374892
We consider Online Convex Optimization (OCO) in the setting where the costs are m-strongly convex and the online learner pays a switching cost for changing decisions between rounds. We show that the recently proposed Online Balanced Descent (OBD) algorithm is constant competitive in this setting, with competitive ratio 3+O(1/m), irrespective of the ambient dimension. We demonstrate the generality of our approach by showing that the OBD framework can be used to construct a competitive algorithm for LQR control.https://authors.library.caltech.edu/records/g0hxe-nw304Prices and subsidies in the sharing economy
https://resolver.caltech.edu/CaltechAUTHORS:20190826-092413079
Authors: {'items': [{'id': 'Fang-Zhixuan', 'name': {'family': 'Fang', 'given': 'Zhixuan'}}, {'id': 'Huang-Longbo', 'name': {'family': 'Huang', 'given': 'Longbo'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2019
DOI: 10.1016/j.peva.2019.102037
The growth of the sharing economy is driven by the emergence of platforms, e.g., Uber and Lyft, that match owners looking to share their resources with customers looking to rent them. The design of such platforms is a complex mixture of economics and engineering, and how to optimally design such platforms is still an open problem. In this paper, we focus on the design of prices and subsidies in sharing platforms. Our results provide insights into the tradeoff between revenue maximizing prices and social welfare maximizing prices. Specifically, we introduce a novel model of sharing platforms and characterize the profit and social welfare maximizing prices in this model. Further, we bound the efficiency loss under profit maximizing prices, showing that there is a strong alignment between profit and efficiency in practical settings. Our results highlight that the revenue of platforms may be limited in practice due to supply shortages; thus platforms have a strong incentive to encourage sharing via subsidies. We provide an analytic characterization of when such subsidies are valuable and show how to optimize the size of the subsidy provided. Finally, we validate our results and insights using data from Didi Chuxing, the largest ridesharing platform in China.https://authors.library.caltech.edu/records/rv4nq-h7x54Logarithmic Communication for Distributed Optimization in Multi-Agent Systems
https://resolver.caltech.edu/CaltechAUTHORS:20191218-160307829
Authors: {'items': [{'id': 'London-P', 'name': {'family': 'London', 'given': 'Palma'}}, {'id': 'Vardi-S', 'name': {'family': 'Vardi', 'given': 'Shai'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2019
DOI: 10.1145/3366696
Classically, the design of multi-agent systems is approached using techniques from distributed optimization such as dual descent and consensus algorithms. Such algorithms depend on convergence to global consensus before any individual agent can determine its local action. This leads to challenges with respect to communication overhead and robustness, and improving algorithms with respect to these measures has been a focus of the community for decades.
This paper presents a new approach for multi-agent system design based on ideas from the emerging field of local computation algorithms. The framework we develop, LOcal Convex Optimization (LOCO), is the first local computation algorithm for convex optimization problems and can be applied in a wide variety of settings. We demonstrate the generality of the framework via applications to Network Utility Maximization (NUM) and the distributed training of Support Vector Machines (SVMs), providing numerical results illustrating the improvement compared to classical distributed optimization approaches in each case.https://authors.library.caltech.edu/records/k3e07-xqv14On the Inefficiency of Forward Markets in Leader-Follower Competition
https://resolver.caltech.edu/CaltechAUTHORS:20190627-131545662
Authors: {'items': [{'id': 'Cai-Desmond-W-H', 'name': {'family': 'Cai', 'given': 'Desmond'}, 'orcid': '0000-0001-9207-1890'}, {'id': 'Agarwal-A', 'name': {'family': 'Agarwal', 'given': 'Anish'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1287/opre.2019.1863
Motivated by electricity markets, this paper studies the impact of forward contracting in situations where firms have capacity constraints and heterogeneous production lead times. We consider a model with two types of firms—leaders and followers—that choose production at two different times. Followers choose productions in the second stage but can sell forward contracts in the first stage. Our main result is an explicit characterization of the equilibrium outcomes. Classic results on forward contracting suggest that it can mitigate market power in simple settings; however, the results in this paper show that the impact of forward markets in this setting is delicate—forward contracting can enhance or mitigate market power. In particular, our results show that leader–follower interactions created by heterogeneous production lead times may cause forward markets to be inefficient, even when there are a large number of followers. In fact, symmetric equilibria do not necessarily exist due to differences in market power among the leaders and followers.https://authors.library.caltech.edu/records/w36zy-76n50Optimal Pricing in Markets with Nonconvex Costs
https://resolver.caltech.edu/CaltechAUTHORS:20200409-070410121
Authors: {'items': [{'id': 'Azizan-Ruhi-N', 'name': {'family': 'Azizan', 'given': 'Navid'}, 'orcid': '0000-0002-4299-2963'}, {'id': 'Su-Yu', 'name': {'family': 'Su', 'given': 'Yu'}}, {'id': 'Dvijotham-K', 'name': {'family': 'Dvijotham', 'given': 'Krishnamurthy'}, 'orcid': '0000-0002-1328-4677'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1287/opre.2019.1900
We consider a market run by an operator who seeks to satisfy a given consumer demand for a commodity by purchasing the needed amount from a group of competing suppliers with nonconvex cost functions. The operator knows the suppliers' cost functions and announces a price/payment function for each supplier, which determines the payment to that supplier for producing different quantities. Each supplier then makes an individual decision about how much to produce, in order to maximize its own profit. The key question is how to design the price functions. To that end, we propose a new pricing scheme, which is applicable to general nonconvex costs, and allows using general parametric pricing functions. Optimizing the quantities and the price parameters simultaneously, together with the flexibility of general parametric pricing functions, allows our scheme to find prices that are typically more economically efficient and less discriminatory than those of existing schemes. In addition, we supplement the proposed method with a polynomial-time approximation algorithm, which can be used to approximate the optimal quantities and prices. Our framework extends to the case of networked markets, which, to the best of our knowledge, has not been considered in previous work.https://authors.library.caltech.edu/records/9hj2j-e7793Online Linear Optimization with Inventory Management Constraints
https://resolver.caltech.edu/CaltechAUTHORS:20190626-143029409
Authors: {'items': [{'id': 'Yang-Lin', 'name': {'family': 'Yang', 'given': 'Lin'}}, {'id': 'Hajiesmaili-Mohammad-H', 'name': {'family': 'Hajiesmaili', 'given': 'Mohammad H.'}}, {'id': 'Sitaraman-Ramesh', 'name': {'family': 'Sitaraman', 'given': 'Ramesh'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Mallada-Enrique', 'name': {'family': 'Mallada', 'given': 'Enrique'}, 'orcid': '0000-0003-1568-1833'}, {'id': 'Wong-Wing-S', 'name': {'family': 'Wong', 'given': 'Wing S.'}}]}
Year: 2020
DOI: 10.1145/3379482
This paper considers the problem of online linear optimization with inventory management constraints. Specifically, we consider an online scenario where a decision maker needs to satisfy her time-varying demand for some units of an asset, either from a market with a time-varying price or from her own inventory. In each time slot, the decision maker is presented a (linear) price and must immediately decide the amount to purchase for covering the demand and/or for storing in the inventory for future use. The inventory has a limited capacity and can be used to buy and store assets at low price and cover the demand when the price is high. The ultimate goal of the decision maker is to cover the demand at each time slot while minimizing the cost of buying assets from the market. We propose ARP, an online algorithm for linear programming with inventory constraints, and ARPRate, an extended version that handles rate constraints to/from the inventory. Both ARP and ARPRate achieve optimal competitive ratios, meaning that no other online algorithm can achieve a better theoretical guarantee. To illustrate the results, we use the proposed algorithms in a case study focused on energy procurement and storage management strategies for data centers.https://authors.library.caltech.edu/records/ekja4-av334Third-Party Data Providers Ruin Simple Mechanisms
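For intuition on the buy-and-store tradeoff described above (not the paper's ARP algorithm, whose details are beyond the abstract), a classic reservation-price heuristic covers each slot's demand and refills the inventory whenever the price falls to or below √(p_min·p_max), assuming the price bounds are known:

```python
import math

def threshold_procurement(prices, demands, capacity, p_min, p_max):
    """Cover demand each slot; top up the inventory whenever the price is at
    or below the classic reservation price sqrt(p_min * p_max)."""
    threshold = math.sqrt(p_min * p_max)
    inv, cost, plan = 0.0, 0.0, []
    for p, d in zip(prices, demands):
        use = min(inv, d)          # serve demand from inventory first
        inv -= use
        buy = d - use              # must buy the uncovered remainder now
        if p <= threshold:
            buy += capacity - inv  # cheap slot: also refill the inventory
            inv = capacity
        cost += p * buy
        plan.append(buy)
    return cost, plan
```

On prices [5, 2, 9, 8] with unit demands and capacity 2, the policy stockpiles in the cheap slot and serves both expensive slots from inventory, paying 11 instead of the 24 paid by buying on demand.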
https://resolver.caltech.edu/CaltechAUTHORS:20190626-155536214
Authors: {'items': [{'id': 'Cai-Yang', 'name': {'family': 'Cai', 'given': 'Yang'}}, {'id': 'Echenique-F', 'name': {'family': 'Echenique', 'given': 'Federico'}, 'orcid': '0000-0002-1567-6770'}, {'id': 'Fu-Hu', 'name': {'family': 'Fu', 'given': 'Hu'}}, {'id': 'Ligett-K', 'name': {'family': 'Ligett', 'given': 'Katrina'}, 'orcid': '0000-0003-2780-6656'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Ziani-Juba', 'name': {'family': 'Ziani', 'given': 'Juba'}, 'orcid': '0000-0002-3324-4349'}]}
Year: 2020
DOI: 10.1145/3379478
Motivated by the growing prominence of third-party data providers in online marketplaces, this paper studies the impact of the presence of third-party data providers on mechanism design. When no data provider is present, it has been shown that simple mechanisms are "good enough" -- they can achieve a constant fraction of the revenue of optimal mechanisms. The results in this paper demonstrate that this is no longer true in the presence of a third-party data provider who can provide the bidder with a signal that is correlated with the item type. Specifically, even with a single seller, a single bidder, and a single item of uncertain type for sale, the strategies of pricing each item-type separately (the analog of item pricing for multi-item auctions) and bundling all item-types under a single price (the analog of grand bundling) can both simultaneously be a logarithmic factor worse than the optimal revenue. Further, in the presence of a data provider, item-type partitioning mechanisms---a more general class of mechanisms which divide item-types into disjoint groups and offer prices for each group---still cannot achieve within a log log factor of the optimal revenue. Thus, our results highlight that the presence of a data-provider forces the use of more complicated mechanisms in order to achieve a constant fraction of the optimal revenue.https://authors.library.caltech.edu/records/r67cj-d4544Online Optimization with Predictions and Non-convex Losses
https://resolver.caltech.edu/CaltechAUTHORS:20200214-105548481
Authors: {'items': [{'id': 'Lin-Yiheng', 'name': {'family': 'Lin', 'given': 'Yiheng'}, 'orcid': '0000-0001-6524-2877'}, {'id': 'Goel-Gautam', 'name': {'family': 'Goel', 'given': 'Gautam'}, 'orcid': '0000-0002-7054-7218'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2020
DOI: 10.1145/3379484
We study online optimization in a setting where an online learner seeks to optimize a per-round hitting cost, which may be non-convex, while incurring a movement cost when changing actions between rounds. We ask: under what general conditions is it possible for an online learner to leverage predictions of future cost functions in order to achieve near-optimal costs? Prior work has provided near-optimal online algorithms for specific combinations of assumptions about hitting and switching costs, but no general results are known. In this work, we give two general sufficient conditions that specify a relationship between the hitting and movement costs which guarantees that a new algorithm, Synchronized Fixed Horizon Control (SFHC), achieves a 1+O(1/w) competitive ratio, where w is the number of predictions available to the learner. Our conditions do not require the cost functions to be convex, and we also derive competitive ratio results for non-convex hitting and movement costs. Our results provide the first constant, dimension-free competitive ratio for online non-convex optimization with movement costs. We also give an example of a natural problem, Convex Body Chasing (CBC), where the sufficient conditions are not satisfied and prove that no online algorithm can have a competitive ratio that converges to 1.https://authors.library.caltech.edu/records/7m3ym-vmm22Third-Party Data Providers Ruin Simple Mechanisms
https://resolver.caltech.edu/CaltechAUTHORS:20200709-084932341
Authors: {'items': [{'id': 'Cai-Yang', 'name': {'family': 'Cai', 'given': 'Yang'}}, {'id': 'Echenique-F', 'name': {'family': 'Echenique', 'given': 'Federico'}, 'orcid': '0000-0002-1567-6770'}, {'id': 'Fu-Hu', 'name': {'family': 'Fu', 'given': 'Hu'}}, {'id': 'Ligett-K', 'name': {'family': 'Ligett', 'given': 'Katrina'}, 'orcid': '0000-0003-2780-6656'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Ziani-Juba', 'name': {'family': 'Ziani', 'given': 'Juba'}, 'orcid': '0000-0002-3324-4349'}]}
Year: 2020
DOI: 10.1145/3410048.3410108
Motivated by the growing prominence of third-party data providers in online marketplaces, this paper studies the impact of the presence of third-party data providers on mechanism design. When no data provider is present, it has been shown that simple mechanisms are "good enough" -- they can achieve a constant fraction of the revenue of optimal mechanisms. The results in this paper demonstrate that this is no longer true in the presence of a third-party data provider who can provide the bidder with a signal that is correlated with the item type. Specifically, even with a single seller, a single bidder, and a single item of uncertain type for sale, the strategies of pricing each item-type separately (the analog of item pricing for multi-item auctions) and bundling all item-types under a single price (the analog of grand bundling) can both simultaneously be a logarithmic factor worse than the optimal revenue. Further, in the presence of a data provider, item-type partitioning mechanisms -- a more general class of mechanisms which divide item-types into disjoint groups and offer prices for each group -- still cannot achieve within a log log factor of the optimal revenue. Thus, our results highlight that the presence of a data-provider forces the use of more complicated mechanisms in order to achieve a constant fraction of the optimal revenue.https://authors.library.caltech.edu/records/hdp6f-c2s32Online Optimization with Predictions and Non-convex Losses
https://resolver.caltech.edu/CaltechAUTHORS:20200709-141107614
Authors: {'items': [{'id': 'Lin-Yiheng', 'name': {'family': 'Lin', 'given': 'Yiheng'}, 'orcid': '0000-0001-6524-2877'}, {'id': 'Goel-Gautam', 'name': {'family': 'Goel', 'given': 'Gautam'}, 'orcid': '0000-0002-7054-7218'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2020
DOI: 10.1145/3393691.3394208
We study online optimization in a setting where an online learner seeks to optimize a per-round hitting cost, which may be non-convex, while incurring a movement cost when changing actions between rounds. We ask: under what general conditions is it possible for an online learner to leverage predictions of future cost functions in order to achieve near-optimal costs? Prior work has provided near-optimal online algorithms for specific combinations of assumptions about hitting and switching costs, but no general results are known. In this work, we give two general sufficient conditions that specify a relationship between the hitting and movement costs which guarantees that a new algorithm, Synchronized Fixed Horizon Control (SFHC), achieves a 1+O(1/w) competitive ratio, where w is the number of predictions available to the learner. Our conditions do not require the cost functions to be convex, and we also derive competitive ratio results for non-convex hitting and movement costs. Our results provide the first constant, dimension-free competitive ratio for online non-convex optimization with movement costs. We also give an example of a natural problem, Convex Body Chasing (CBC), where the sufficient conditions are not satisfied and prove that no online algorithm can have a competitive ratio that converges to 1.https://authors.library.caltech.edu/records/366p2-gfa42Logarithmic Communication for Distributed Optimization in Multi-Agent Systems
https://resolver.caltech.edu/CaltechAUTHORS:20200709-141943501
Authors: {'items': [{'id': 'London-P', 'name': {'family': 'London', 'given': 'Palma'}}, {'id': 'Vardi-S', 'name': {'family': 'Vardi', 'given': 'Shai'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1145/3393691.3394197
Classically, the design of multi-agent systems is approached using techniques from distributed optimization such as dual descent and consensus algorithms. Such algorithms depend on convergence to global consensus before any individual agent can determine its local action. This leads to challenges with respect to communication overhead and robustness, and improving algorithms with respect to these measures has been a focus of the community for decades.
This paper presents a new approach for multi-agent system design based on ideas from the emerging field of local computation algorithms. The framework we develop, LOcal Convex Optimization (LOCO), is the first local computation algorithm for convex optimization problems and can be applied in a wide variety of settings. We demonstrate the generality of the framework via applications to Network Utility Maximization (NUM) and the distributed training of Support Vector Machines (SVMs), providing numerical results illustrating the improvement compared to classical distributed optimization approaches in each case.https://authors.library.caltech.edu/records/93py3-c5629Characterizing Policies with Optimal Response Time Tails under Heavy-Tailed Job Sizes
https://resolver.caltech.edu/CaltechAUTHORS:20210303-094800346
Authors: {'items': [{'id': 'Scully-Ziv', 'name': {'family': 'Scully', 'given': 'Ziv'}, 'orcid': '0000-0002-8547-1068'}, {'id': 'van-Kreveld-Lucas', 'name': {'family': 'van Kreveld', 'given': 'Lucas'}}, {'id': 'Boxma-Onno-J', 'name': {'family': 'Boxma', 'given': 'Onno'}, 'orcid': '0000-0003-4317-5380'}, {'id': 'Dorsman-Jan-Pieter', 'name': {'family': 'Dorsman', 'given': 'Jan-Pieter'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1145/3393691.3394179
We consider the tail behavior of the response time distribution in an M/G/1 queue with heavy-tailed job sizes, specifically those with intermediately regularly varying tails. In this setting, the response time tail of many individual policies has been characterized, and it is known that policies such as Shortest Remaining Processing Time (SRPT) and Foreground-Background (FB) have response time tails of the same order as the job size tail, and thus such policies are tail-optimal. Our goal in this work is to move beyond individual policies and characterize the set of policies that are tail-optimal. Toward that end, we use the recently introduced SOAP framework to derive sufficient conditions on the form of prioritization used by a scheduling policy that ensure the policy is tail-optimal. These conditions are general and lead to new results for important policies that have previously resisted analysis, including the Gittins policy, which minimizes mean response time among policies that do not have access to job size information. As a by-product of our analysis, we derive a general upper bound for fractional moments of M/G/1 busy periods, which is of independent interest.https://authors.library.caltech.edu/records/jbez3-8ds67Characterizing Policies with Optimal Response Time Tails under Heavy-Tailed Job Sizes
https://resolver.caltech.edu/CaltechAUTHORS:20200511-093940097
Authors: {'items': [{'id': 'Scully-Z', 'name': {'family': 'Scully', 'given': 'Ziv'}}, {'id': 'van-Kreveld-L', 'name': {'family': 'van Kreveld', 'given': 'Lucas'}}, {'id': 'Boxma-O-J', 'name': {'family': 'Boxma', 'given': 'Onno'}}, {'id': 'Dorsman-J-P', 'name': {'family': 'Dorsman', 'given': 'Jan-Pieter'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1145/3392148
We consider the tail behavior of the response time distribution in an M/G/1 queue with heavy-tailed job sizes, specifically those with intermediately regularly varying tails. In this setting, the response time tail of many individual policies has been characterized, and it is known that policies such as Shortest Remaining Processing Time (SRPT) and Foreground-Background (FB) have response time tails of the same order as the job size tail, and thus such policies are tail-optimal. Our goal in this work is to move beyond individual policies and characterize the set of policies that are tail-optimal. Toward that end, we use the recently introduced SOAP framework to derive sufficient conditions on the form of prioritization used by a scheduling policy that ensure the policy is tail-optimal. These conditions are general and lead to new results for important policies that have previously resisted analysis, including the Gittins policy, which minimizes mean response time among policies that do not have access to job size information. As a by-product of our analysis, we derive a general upper bound for fractional moments of M/G/1 busy periods, which is of independent interest.https://authors.library.caltech.edu/records/db8db-0yw83Generalized Exact Scheduling: A Minimal-Variance Distributed Deadline Scheduler
https://resolver.caltech.edu/CaltechAUTHORS:20201014-143948343
Authors: {'items': [{'id': 'Nakahira-Yorie', 'name': {'family': 'Nakahira', 'given': 'Yorie'}, 'orcid': '0000-0003-3324-4602'}, {'id': 'Ferragut-Andres', 'name': {'family': 'Ferragut', 'given': 'Andres'}, 'orcid': '0000-0003-0134-5548'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2020
DOI: 10.1287/opre.2021.2232
Many modern schedulers can dynamically adjust their service capacity to match the incoming workload. At the same time, however, unpredictability and instability in service capacity often incur operational and infrastructural costs. In this paper, we seek to characterize optimal distributed algorithms that maximize the predictability, stability, or both when scheduling jobs with deadlines. Specifically, we show that Exact Scheduling minimizes both the stationary mean and variance of the service capacity subject to strict demand and deadline requirements. For more general settings, we characterize the minimal-variance distributed policies with soft demand requirements, soft deadline requirements, or both. The performance of the optimal distributed policies is compared with that of the optimal centralized policy by deriving closed-form bounds and by testing centralized and distributed algorithms using real data from the Caltech electric vehicle charging facility and synthetic data from a range of arrival distributions. Moreover, we derive the Pareto-optimality condition for distributed policies that balance the variance and mean square of the service capacity. Finally, we discuss a scalable partially centralized algorithm that uses centralized information to boost performance and a method to deal with missing information on service requirements.https://authors.library.caltech.edu/records/hsj32-5en09Loyalty programs in the sharing economy: Optimality and competition
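The Exact Scheduling policy named in the abstract admits a one-line sketch: each active job is served at the constant rate demand/deadline, so it completes exactly at its deadline, and the instantaneous service capacity is the sum of these per-job rates. A minimal sketch (the tuple format for jobs is an assumption for illustration):

```python
def exact_scheduling_capacity(jobs, t):
    """Total service capacity at time t under Exact Scheduling.

    Each active job (arrival a, demand s, relative deadline T) is served at
    the constant rate s / T, so it finishes exactly at time a + T.

    jobs: list of (arrival, demand, relative_deadline) tuples.
    """
    return sum(s / T for a, s, T in jobs if a <= t < a + T)
```

Serving each job at a constant rate is the per-job rate profile with zero variance, which is the intuition behind the abstract's claim that Exact Scheduling minimizes the variance of the total service capacity.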
https://resolver.caltech.edu/CaltechAUTHORS:20200511-124909341
Authors: {'items': [{'id': 'Fang-Zhixuan', 'name': {'family': 'Fang', 'given': 'Zhixuan'}}, {'id': 'Huang-Longbo', 'name': {'family': 'Huang', 'given': 'Longbo'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1016/j.peva.2020.102105
Loyalty programs are important tools for sharing platforms seeking to grow supply. Online sharing platforms use loyalty programs to heavily subsidize resource providers, encouraging participation and boosting supply. As the sharing economy has evolved and competition has increased, the design of loyalty programs has begun to play a crucial role in the pursuit of maximal revenue. In this paper, we first characterize the optimal loyalty program for a platform with homogeneous users. We then show that optimal revenue in a heterogeneous market can be achieved by a class of multi-threshold loyalty programs (MTLP), which admit a simple, implementation-friendly structure. We also study the performance of loyalty programs in a setting with two competing sharing platforms, showing that the degree of heterogeneity is a crucial factor for both loyalty programs and pricing strategies. Our results show that sophisticated loyalty programs that reward suppliers via stepwise linear functions outperform simple sign-up bonuses, which give a one-time reward for participating.https://authors.library.caltech.edu/records/vhrgg-qkf57Competitive Algorithms for the Online Multiple Knapsack Problem with Application to Electric Vehicle Charging
https://resolver.caltech.edu/CaltechAUTHORS:20201014-142839691
Authors: {'items': [{'id': 'Sun-Bo', 'name': {'family': 'Sun', 'given': 'Bo'}, 'orcid': '0000-0003-3172-7811'}, {'id': 'Zeynali-Ali', 'name': {'family': 'Zeynali', 'given': 'Ali'}}, {'id': 'Li-Tongxin', 'name': {'family': 'Li', 'given': 'Tongxin'}, 'orcid': '0000-0002-9806-8964'}, {'id': 'Hajiesmaili-Mohammad-H', 'name': {'family': 'Hajiesmaili', 'given': 'Mohammad'}, 'orcid': '0000-0001-9278-2254'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Tsang-Danny-Hin-Kwok', 'name': {'family': 'Tsang', 'given': 'Danny H. K.'}, 'orcid': '0000-0003-0135-7098'}]}
Year: 2020
DOI: 10.1145/3428336
We introduce and study a general version of the fractional online knapsack problem with multiple knapsacks, heterogeneous constraints on which items can be assigned to which knapsack, and rate-limiting constraints on the assignment of items to knapsacks. This problem generalizes variations of the knapsack problem and of the one-way trading problem that have previously been treated separately, and additionally finds application to the real-time control of electric vehicle (EV) charging. We introduce a new algorithm that achieves a competitive ratio within an additive factor of one of the best achievable competitive ratios for the general problem and matches or improves upon the best-known competitive ratio for special cases in the knapsack and one-way trading literatures. Moreover, our analysis provides a novel approach to online algorithm design based on an instance-dependent primal-dual analysis that connects the identification of worst-case instances to the design of algorithms. Finally, we illustrate the proposed algorithm via trace-based experiments of EV charging.https://authors.library.caltech.edu/records/4kef2-qk578Asymptotically Optimal Load Balancing in Large-scale Heterogeneous Systems with Multiple Dispatchers
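The single-knapsack special case that this paper generalizes is commonly solved with a fill-dependent threshold Φ(z) = (L/e)·(Ue/L)^z: an item fraction is admitted only while its value density stays above the threshold, yielding a 1 + ln(U/L) competitive ratio. A sketch of that standard rule from the prior literature the abstract builds on (not the paper's new algorithm), assuming densities lie in [L, U]:

```python
import math

def fractional_online_knapsack(items, L, U):
    """Admit fractions of arriving (density, size) items into a unit knapsack
    while density >= Phi(z) = (L/e) * (U*e/L)**z at the current fill z."""
    z, profit = 0.0, 0.0
    for density, size in items:
        if density < L or z >= 1.0:
            continue
        # Largest fill level at which the threshold still admits this density:
        z_max = min(1.0, math.log(density * math.e / L) / math.log(U * math.e / L))
        take = max(0.0, min(size, z_max - z))
        z += take
        profit += density * take
    return z, profit
```

The exponential threshold grows from below L at an empty knapsack to U at a full one, so cheap capacity is spent early and the last capacity is reserved for the highest densities.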
https://resolver.caltech.edu/CaltechAUTHORS:20210308-133521968
Authors: {'items': [{'id': 'Zhou-Xingyu', 'name': {'family': 'Zhou', 'given': 'Xingyu'}}, {'id': 'Shroff-Ness-B', 'name': {'family': 'Shroff', 'given': 'Ness'}, 'orcid': '0000-0002-4606-6879'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2020
DOI: 10.1145/3453953.3453965
We consider the load balancing problem in large-scale heterogeneous systems with multiple dispatchers. We introduce a general framework called Local-Estimation-Driven (LED). Under this framework, each dispatcher keeps local (possibly outdated) estimates of the queue lengths for all the servers, and the dispatching decision is made purely based on these local estimates. The local estimates are updated via infrequent communications between dispatchers and servers. We derive sufficient conditions for LED policies to achieve throughput optimality and delay optimality in heavy-traffic, respectively. These conditions directly imply delay optimality for many previous local-memory based policies in heavy traffic. Moreover, the results enable us to design new delay optimal policies for heterogeneous systems with multiple dispatchers. Finally, the heavy-traffic delay optimality of the LED framework also sheds light on a recent open question on how to design optimal load balancing schemes using delayed information.https://authors.library.caltech.edu/records/3f55j-9r130Asymptotically optimal load balancing in large-scale heterogeneous systems with multiple dispatchers
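The LED dispatching loop described above is easy to sketch: each dispatcher routes to the server with the smallest local estimate, increments its own copy, and refreshes from the true queue lengths only infrequently. A toy discrete-time version (service completions are omitted, and the refresh schedule and tie-breaking rule are illustrative assumptions):

```python
def led_dispatch(arrival_dispatchers, num_servers, sync_period):
    """Toy Local-Estimation-Driven dispatching: route each arrival to the
    server with the smallest local estimate; refresh estimates periodically."""
    queues = [0] * num_servers
    estimates = {d: [0] * num_servers for d in set(arrival_dispatchers)}
    for i, d in enumerate(arrival_dispatchers):
        if i % sync_period == 0:          # infrequent dispatcher<->server sync
            estimates[d] = list(queues)
        server = min(range(num_servers), key=lambda s: estimates[d][s])
        queues[server] += 1               # job joins the real queue
        estimates[d][server] += 1         # dispatcher updates its own estimate
    return queues
```

Between syncs a dispatcher acts only on its own (possibly stale) estimates, which is exactly the communication saving the framework trades against estimate accuracy.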
https://resolver.caltech.edu/CaltechAUTHORS:20200526-152856053
Authors: {'items': [{'id': 'Zhou-Xingyu', 'name': {'family': 'Zhou', 'given': 'Xingyu'}}, {'id': 'Shroff-Ness-B', 'name': {'family': 'Shroff', 'given': 'Ness'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2021
DOI: 10.1016/j.peva.2020.102146
We consider the load balancing problem in large-scale heterogeneous systems with multiple dispatchers. We introduce a general framework called Local-Estimation-Driven (LED). Under this framework, each dispatcher keeps local (possibly outdated) estimates of the queue lengths for all the servers, and the dispatching decision is made purely based on these local estimates. The local estimates are updated via infrequent communications between dispatchers and servers. We derive sufficient conditions for LED policies to achieve throughput optimality and delay optimality in heavy-traffic, respectively. These conditions directly imply delay optimality for many previous local-memory based policies in heavy traffic. Moreover, the results enable us to design new delay optimal policies for heterogeneous systems with multiple dispatchers. Finally, the heavy-traffic delay optimality of the LED framework also sheds light on a recent open question on how to design optimal load balancing schemes using delayed information.https://authors.library.caltech.edu/records/9hh61-rmd91An integrated approach for failure mitigation & localization in power systems
https://resolver.caltech.edu/CaltechAUTHORS:20200707-103725840
Authors: {'items': [{'id': 'Liang-Chen', 'name': {'family': 'Liang', 'given': 'Chen'}}, {'id': 'Guo-Linqi', 'name': {'family': 'Guo', 'given': 'Linqi'}}, {'id': 'Zocca-Alessandro', 'name': {'family': 'Zocca', 'given': 'Alessandro'}, 'orcid': '0000-0001-6585-4785'}, {'id': 'Yu-Shuyue', 'name': {'family': 'Yu', 'given': 'Shuyue'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2021
DOI: 10.48550/arXiv.2004.10401
The transmission grid often comprises several control areas that are connected by multiple tie lines in a mesh structure for reliability. It is also well-known that line failures can propagate non-locally and redundancy can exacerbate cascading. In this paper, we propose an integrated approach to grid reliability that (i) judiciously switches off a small number of tie lines so that the control areas are connected in a tree structure; and (ii) leverages a unified frequency control paradigm to provide congestion management in real time. Even though the proposed topology reduces redundancy, the integration of a tree structure at the regional level and real-time congestion management can provide stronger guarantees on failure localization and mitigation. We illustrate our approach on the IEEE 39-bus network and evaluate its performance on the IEEE 118-bus, 179-bus, 200-bus and 240-bus networks with various network congestion conditions. Simulations show that, compared with the traditional approach, our approach not only prevents load shedding in more failure scenarios, but also incurs smaller amounts of load loss in scenarios where load shedding is inevitable. Moreover, generators under our approach adjust their operations more actively and efficiently in a local manner.https://authors.library.caltech.edu/records/0hpmn-9pp50Stable Online Control of Linear Time-Varying Systems
https://resolver.caltech.edu/CaltechAUTHORS:20210510-092451106
Authors: {'items': [{'id': 'Qu-Guannan', 'name': {'family': 'Qu', 'given': 'Guannan'}, 'orcid': '0000-0002-5466-3550'}, {'id': 'Shi-Yuanyuan', 'name': {'family': 'Shi', 'given': 'Yuanyuan'}, 'orcid': '0000-0002-6182-7664'}, {'id': 'Lale-Sahin', 'name': {'family': 'Lale', 'given': 'Sahin'}, 'orcid': '0000-0002-7191-346X'}, {'id': 'Anandkumar-A', 'name': {'family': 'Anandkumar', 'given': 'Animashree'}, 'orcid': '0000-0002-6974-6797'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2021
DOI: 10.48550/arXiv.2104.14134
Linear time-varying (LTV) systems are widely used for modeling real-world dynamical systems due to their generality and simplicity. Providing stability guarantees for LTV systems is one of the central problems in control theory. However, existing approaches that guarantee stability typically lead to significantly sub-optimal cumulative control cost in online settings where only current or short-term system information is available. In this work, we propose an efficient online control algorithm, COvariance Constrained Online Linear Quadratic (COCO-LQ) control, that guarantees input-to-state stability for a large class of LTV systems while also minimizing the control cost. The proposed method incorporates a state covariance constraint into the semi-definite programming (SDP) formulation of the LQ optimal controller. We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.https://authors.library.caltech.edu/records/bz7g6-w7j65Signomial and polynomial optimization via relative entropy and partial dualization
https://resolver.caltech.edu/CaltechAUTHORS:20201014-133741550
Authors: {'items': [{'id': 'Murray-Riley', 'name': {'family': 'Murray', 'given': 'Riley'}}, {'id': 'Chandrasekaran-Venkat', 'name': {'family': 'Chandrasekaran', 'given': 'Venkat'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2021
DOI: 10.1007/s12532-020-00193-4
We describe a generalization of the Sums-of-AM/GM-Exponential (SAGE) methodology for relative entropy relaxations of constrained signomial and polynomial optimization problems. Our approach leverages the fact that SAGE certificates conveniently and transparently blend with convex duality, in a way which enables partial dualization of certain structured constraints. This more general approach retains key properties of ordinary SAGE relaxations (e.g. sparsity preservation), and inspires a projective method of solution recovery which respects partial dualization. We illustrate the utility of our methodology with a range of examples from the global optimization literature, along with a publicly available software package.https://authors.library.caltech.edu/records/747jj-7q094Information Aggregation for Constrained Online Control
https://resolver.caltech.edu/CaltechAUTHORS:20210604-111535691
Authors: {'items': [{'id': 'Li-Tongxin', 'name': {'family': 'Li', 'given': 'Tongxin'}, 'orcid': '0000-0002-9806-8964'}, {'id': 'Chen-Yue', 'name': {'family': 'Chen', 'given': 'Yue'}, 'orcid': '0000-0002-7594-7587'}, {'id': 'Sun-Bo', 'name': {'family': 'Sun', 'given': 'Bo'}, 'orcid': '0000-0003-3172-7811'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven'}, 'orcid': '0000-0001-6476-3048'}]}
Year: 2021
DOI: 10.1145/3460085
This paper considers an online control problem involving two controllers. A central controller chooses an action from a feasible set that is determined by time-varying and coupling constraints, which depend on all past actions and states. The central controller's goal is to minimize the cumulative cost; however, the controller has access to neither the feasible set nor the dynamics directly, which are determined by a remote local controller. Instead, the central controller receives only an aggregate summary of the feasibility information from the local controller, which does not know the system costs. We show that it is possible for an online algorithm using feasibility information to nearly match the dynamic regret of an online algorithm using perfect information whenever the feasible sets satisfy a causal invariance criterion and there is a sufficiently large prediction window size. To do so, we use a form of feasibility aggregation based on entropic maximization in combination with a novel online algorithm, named Penalized Predictive Control (PPC), and demonstrate that aggregated information can be efficiently learned using reinforcement learning algorithms. The effectiveness of our approach for closed-loop coordination between central and local controllers is validated via an electric vehicle charging application in power systems.https://authors.library.caltech.edu/records/nha5m-d8388The Privacy Paradox and Optimal Bias-Variance Trade-offs in Data Acquisition
https://resolver.caltech.edu/CaltechAUTHORS:20220121-870642000
Authors: {'items': [{'id': 'Liao-Guocheng', 'name': {'family': 'Liao', 'given': 'Guocheng'}}, {'id': 'Su-Yu', 'name': {'family': 'Su', 'given': 'Yu'}}, {'id': 'Ziani-Juba', 'name': {'family': 'Ziani', 'given': 'Juba'}, 'orcid': '0000-0002-3324-4349'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Huang-Jianwei', 'name': {'family': 'Huang', 'given': 'Jianwei'}}]}
Year: 2021
DOI: 10.1145/3512798.3512802
While users claim to be concerned about privacy, often they do little to protect their privacy in their online actions. One prominent explanation for this "privacy paradox" is that when an individual shares her data, it is not just her privacy that is compromised; the privacy of other individuals with correlated data is also compromised. This information leakage encourages oversharing of data and significantly impacts the incentives of individuals in online platforms. In this extended abstract, we discuss the design of mechanisms for data acquisition in settings with information leakage and verifiable data. We summarize work designing an incentive compatible mechanism that optimizes the worst-case tradeoff between bias and variance of the estimation subject to a budget constraint, where the worst-case is over the unknown correlation between costs and data. Additionally, we characterize the structure of the optimal mechanism in closed form and study monotonicity and non-monotonicity properties of the marketplace.https://authors.library.caltech.edu/records/st3vj-7pt88Line Failure Localization of Power Networks Part II: Cut Set Outages
https://resolver.caltech.edu/CaltechAUTHORS:20200707-095927831
Authors: {'items': [{'id': 'Guo-Linqi', 'name': {'family': 'Guo', 'given': 'Linqi'}, 'orcid': '0000-0001-5771-2752'}, {'id': 'Liang-Chen', 'name': {'family': 'Liang', 'given': 'Chen'}, 'orcid': '0000-0002-0015-7206'}, {'id': 'Zocca-Alessandro', 'name': {'family': 'Zocca', 'given': 'Alessandro'}, 'orcid': '0000-0001-6585-4785'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2021
DOI: 10.1109/TPWRS.2021.3068048
Transmission line failures in power systems propagate non-locally, making the control of the resulting outages extremely difficult. In Part II of this paper, we continue the study of line failure localizability in transmission networks and characterize the impact of cut set outages. We establish a Simple Path Criterion, showing that the propagation patterns due to bridge outages, a special case of cut set failures, are fully determined by the positions in the network of the buses that participate in load balancing. We then extend our results to general cut set outages. In contrast to non-cut outages discussed in Part I, whose subsequent line failures are contained within the original blocks, cut set outages typically impact the whole network, affecting the power flows on all remaining lines. We corroborate our analytical results in both parts using the IEEE 118-bus test system, in which the failure propagation patterns exhibit a clear block-diagonal structure predicted by our theory, even when using full AC power flow equations.https://authors.library.caltech.edu/records/5epx3-x6t87Line Failure Localization of Power Networks Part I: Non-Cut Outages
https://resolver.caltech.edu/CaltechAUTHORS:20200707-101019648
Authors: {'items': [{'id': 'Guo-Linqi', 'name': {'family': 'Guo', 'given': 'Linqi'}, 'orcid': '0000-0001-5771-2752'}, {'id': 'Liang-Chen', 'name': {'family': 'Liang', 'given': 'Chen'}, 'orcid': '0000-0002-0015-7206'}, {'id': 'Zocca-Alessandro', 'name': {'family': 'Zocca', 'given': 'Alessandro'}, 'orcid': '0000-0001-6585-4785'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2021
DOI: 10.1109/tpwrs.2021.3066336
Transmission line failures in power systems propagate non-locally, making the control of the resulting outages extremely difficult. In this work, we establish a mathematical theory that characterizes the patterns of line failure propagation and localization in terms of network graph structure. It provides a novel perspective on distribution factors that precisely captures Kirchhoff's Law in terms of topological structures. Our results show that the distribution of specific collections of subtrees of the transmission network plays a critical role on the patterns of power redistribution, and motivates the block decomposition of the transmission network as a structure to understand long-distance propagation of disturbances. In Part I of this paper, we present the case when the post-contingency network remains connected after an initial set of lines are disconnected simultaneously. In Part II, we present the case when an outage separates the network into multiple islands.https://authors.library.caltech.edu/records/xtnd5-5ye52Learning-based Predictive Control via Real-time Aggregate Flexibility
https://resolver.caltech.edu/CaltechAUTHORS:20210510-084600512
Authors: {'items': [{'id': 'Li-Tongxin', 'name': {'family': 'Li', 'given': 'Tongxin'}, 'orcid': '0000-0002-9806-8964'}, {'id': 'Sun-Bo', 'name': {'family': 'Sun', 'given': 'Bo'}, 'orcid': '0000-0003-3172-7811'}, {'id': 'Chen-Yue', 'name': {'family': 'Chen', 'given': 'Yue'}, 'orcid': '0000-0002-7594-7587'}, {'id': 'Ye-Zixin', 'name': {'family': 'Ye', 'given': 'Zixin'}}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}}]}
Year: 2021
DOI: 10.1109/TSG.2021.3094719
Aggregators have emerged as crucial tools for the coordination of distributed, controllable loads. To be used effectively, an aggregator must be able to communicate the available flexibility of the loads it controls, also known as the aggregate flexibility, to a system operator. However, most existing aggregate flexibility measures are slow-timescale estimates, and much less attention has been paid to real-time coordination between an aggregator and an operator. In this paper, we consider solving an online optimization in a closed-loop system and present a design of real-time aggregate flexibility feedback, termed the maximum entropy feedback (MEF). In addition to deriving analytic properties of the MEF, combining learning and control, we show that it can be approximated using reinforcement learning and used as a penalty term in a novel control algorithm – the penalized predictive control (PPC), which modifies vanilla model predictive control (MPC). The benefits of our scheme are: (1) Efficient Communication. An operator running PPC does not need to know the exact states and constraints of the loads, but only the MEF. (2) Fast Computation. The PPC often has far fewer variables than an MPC formulation. (3) Lower Costs. We show that under certain regularity assumptions, the PPC is optimal. We illustrate the efficacy of the PPC using a dataset from an adaptive electric vehicle charging network and show that PPC outperforms classical MPC.https://authors.library.caltech.edu/records/j0s8j-1aa33Newton Polytopes and Relative Entropy Optimization
https://resolver.caltech.edu/CaltechAUTHORS:20190627-102049940
Authors: {'items': [{'id': 'Murray-Riley', 'name': {'family': 'Murray', 'given': 'Riley'}, 'orcid': '0000-0003-1461-6458'}, {'id': 'Chandrasekaran-Venkat', 'name': {'family': 'Chandrasekaran', 'given': 'Venkat'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2021
DOI: 10.1007/s10208-021-09497-w
Certifying function nonnegativity is a ubiquitous problem in computational mathematics, with especially notable applications in optimization. We study the question of certifying nonnegativity of signomials based on the recently proposed approach of Sums-of-AM/GM-Exponentials (SAGE) decomposition due to the second author and Shah. The existence of a SAGE decomposition is a sufficient condition for nonnegativity of a signomial, and it can be verified by solving a tractable convex relative entropy program. We present new structural properties of SAGE certificates such as a characterization of the extreme rays of the cones associated to these decompositions as well as an appealing form of sparsity preservation. These lead to a number of important consequences such as conditions under which signomial nonnegativity is equivalent to the existence of a SAGE decomposition; our results represent the broadest-known class of nonconvex signomial optimization problems that can be solved efficiently via convex relaxation. The analysis in this paper proceeds by leveraging the interaction between the convex duality underlying SAGE certificates and the face structure of Newton polytopes. After proving our main signomial results, we direct our machinery toward the topic of globally nonnegative polynomials. This leads to (among other things) efficient methods for certifying polynomial nonnegativity, with complexity independent of the degree of a polynomial.https://authors.library.caltech.edu/records/xjhj6-hft50Robustness and Consistency in Linear Quadratic Control with Untrusted Predictions
https://resolver.caltech.edu/CaltechAUTHORS:20210716-225846876
Authors: {'items': [{'id': 'Li-Tongxin', 'name': {'family': 'Li', 'given': 'Tongxin'}, 'orcid': '0000-0002-9806-8964'}, {'id': 'Yang-Ruixiao', 'name': {'family': 'Yang', 'given': 'Ruixiao'}}, {'id': 'Qu-Guannan', 'name': {'family': 'Qu', 'given': 'Guannan'}, 'orcid': '0000-0002-5466-3550'}, {'id': 'Shi-Guanya', 'name': {'family': 'Shi', 'given': 'Guanya'}, 'orcid': '0000-0002-9075-3705'}, {'id': 'Yu-Chenkai', 'name': {'family': 'Yu', 'given': 'Chenkai'}, 'orcid': '0000-0001-8683-7773'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven'}, 'orcid': '0000-0001-6476-3048'}]}
Year: 2022
DOI: 10.1145/3508038
We study the problem of learning-augmented predictive linear quadratic control. Our goal is to design a controller that balances consistency, which measures the competitive ratio when predictions are accurate, and robustness, which bounds the competitive ratio when predictions are inaccurate.https://authors.library.caltech.edu/records/p10m3-kqj37Online Optimization with Feedback Delay and Nonlinear Switching Cost
https://resolver.caltech.edu/CaltechAUTHORS:20220302-699021182
Authors: {'items': [{'id': 'Pan-Weici', 'name': {'family': 'Pan', 'given': 'Weici'}}, {'id': 'Shi-Guanya', 'name': {'family': 'Shi', 'given': 'Guanya'}, 'orcid': '0000-0002-9075-3705'}, {'id': 'Lin-Yiheng', 'name': {'family': 'Lin', 'given': 'Yiheng'}, 'orcid': '0000-0001-6524-2877'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2022
DOI: 10.1145/3508037
We study a variant of online optimization in which the learner receives k-round delayed feedback about hitting cost and there is a multi-step nonlinear switching cost, i.e., costs depend on multiple previous actions in a nonlinear manner. Our main result shows that a novel Iterative Regularized Online Balanced Descent (iROBD) algorithm has a constant, dimension-free competitive ratio that is O(L^(2k)), where L is the Lipschitz constant of the switching cost. Additionally, we provide lower bounds that illustrate that the Lipschitz condition is required and that the dependencies on k and L are tight. Finally, via reductions, we show that this setting is closely related to online control problems with delay, nonlinear dynamics, and adversarial disturbances, where iROBD directly offers constant-competitive online policies.https://authors.library.caltech.edu/records/epc2z-fcx12Transparency and Control in Platforms for Networked Markets
https://resolver.caltech.edu/CaltechAUTHORS:20190626-105727708
Authors: {'items': [{'id': 'Pang-John-Z-F', 'name': {'family': 'Pang', 'given': 'John'}, 'orcid': '0000-0002-6485-7922'}, {'id': 'Lin-Weixuan', 'name': {'family': 'Lin', 'given': 'Weixuan'}, 'orcid': '0000-0002-8988-8573'}, {'id': 'Fu-Hu', 'name': {'family': 'Fu', 'given': 'Hu'}}, {'id': 'Kleeman-Jack', 'name': {'family': 'Kleeman', 'given': 'Jack'}}, {'id': 'Bitar-Eilyan-Y', 'name': {'family': 'Bitar', 'given': 'Eilyan'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2022
DOI: 10.1287/opre.2021.2244
In this paper, we analyze the worst-case efficiency loss of online platform designs under a networked Cournot competition model. Inspired by some of the largest platforms in operation today, we study a variety of platform designs to examine the impacts of market transparency and control on the worst-case efficiency loss of Nash equilibria in networked Cournot games. Our results show that open access designs incentivize increased production toward perfectly competitive levels and limit efficiency loss, while controlled allocation designs lead to producer-platform incentive misalignment, resulting in low participation rates and unbounded efficiency loss. We also show that discriminatory access designs balance transparency and control, achieving the best of both worlds by maintaining high participation rates while limiting efficiency loss.https://authors.library.caltech.edu/records/gqza3-7mm59Chasing Convex Bodies and Functions with Black-Box Advice
https://resolver.caltech.edu/CaltechAUTHORS:20230316-231309617
Authors: {'items': [{'id': 'Christianson-Nicolas-H', 'name': {'family': 'Christianson', 'given': 'Nicolas'}, 'orcid': '0000-0001-8330-8964'}, {'id': 'Handina-Tinashe', 'name': {'family': 'Handina', 'given': 'Tinashe'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2022
We consider the problem of convex function chasing with black-box advice, where an online decision-maker aims to minimize the total cost of making and switching between decisions in a normed vector space, aided by black-box advice such as the decisions of a machine-learned algorithm. The decision-maker seeks cost comparable to the advice when it performs well, known as consistency, while also ensuring worst-case robustness even when the advice is adversarial. We first consider the common paradigm of algorithms that switch between the decisions of the advice and a competitive algorithm, showing that no algorithm in this class can improve upon 3-consistency while staying robust. We then propose two novel algorithms that bypass this limitation by exploiting the problem's convexity. The first, INTERP, achieves (√2 + ϵ)-consistency and O(C/ϵ²)-robustness for any ϵ > 0, where C is the competitive ratio of an algorithm for convex function chasing or a subclass thereof. The second, BDINTERP, achieves (1 + ϵ)-consistency and O(CD/ϵ)-robustness when the problem has bounded diameter D. Further, we show that BDINTERP achieves near-optimal consistency-robustness trade-off for the special case where cost functions are α-polyhedral.https://authors.library.caltech.edu/records/e7pjw-x5n20Scalable Reinforcement Learning for Multiagent Networked Systems
https://resolver.caltech.edu/CaltechAUTHORS:20220914-591652300
Authors: {'items': [{'id': 'Qu-Guannan', 'name': {'family': 'Qu', 'given': 'Guannan'}, 'orcid': '0000-0002-5466-3550'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Li-Na', 'name': {'family': 'Li', 'given': 'Na'}, 'orcid': '0000-0001-9545-3050'}]}
Year: 2022
DOI: 10.1287/opre.2021.2226
We study reinforcement learning (RL) in a setting with a network of agents whose states and actions interact in a local manner where the objective is to find localized policies such that the (discounted) global reward is maximized. A fundamental challenge in this setting is that the state-action space size scales exponentially in the number of agents, rendering the problem intractable for large networks. In this paper, we propose a scalable actor critic (SAC) framework that exploits the network structure and finds a localized policy that is an O(ρ^(κ+1))-approximation of a stationary point of the objective for some ρ∈(0,1), with complexity that scales with the local state-action space size of the largest κ-hop neighborhood of the network. We illustrate our model and approach using examples from wireless communication, epidemics, and traffic.https://authors.library.caltech.edu/records/kqyv8-7wr15Dispatch-aware planning for feasible power system operation
https://resolver.caltech.edu/CaltechAUTHORS:20221011-128968500.6
Authors: {'items': [{'id': 'Christianson-Nicolas-H', 'name': {'family': 'Christianson', 'given': 'Nicolas'}, 'orcid': '0000-0001-8330-8964'}, {'id': 'Werner-Lucien-D', 'name': {'family': 'Werner', 'given': 'Lucien'}}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}]}
Year: 2022
DOI: 10.1016/j.epsr.2022.108597
Maintaining stable energy production with increasing penetration of variable renewable energy requires sufficient flexible generation resources and dispatch algorithms that accommodate renewables' uncertainty. In this work, we study the feasibility properties of real-time economic dispatch (RTED) algorithms and establish fundamental limits on their performance. We propose a joint methodology for resource procurement and online economic dispatch with guaranteed feasibility. Our algorithm, Feasible Fixed Horizon Control (FFHC), is a regularized form of Receding Horizon Control (RHC) that balances exploitation of good near-term demand predictions with feasibility requirements. Empirical evaluation of FFHC in comparison to standard RHC on realistic load profiles highlights that FFHC achieves near-optimal performance while ensuring feasibility in high-ramp scenarios where RHC becomes infeasible.https://authors.library.caltech.edu/records/tmnrt-9y586The Online Knapsack Problem with Departures
https://resolver.caltech.edu/CaltechAUTHORS:20230103-817548100.24
Authors: {'items': [{'id': 'Sun-Bo', 'name': {'family': 'Sun', 'given': 'Bo'}, 'orcid': '0000-0003-3172-7811'}, {'id': 'Yang-Lin', 'name': {'family': 'Yang', 'given': 'Lin'}, 'orcid': '0000-0001-9056-0500'}, {'id': 'Hajiesmaili-Mohammad-H', 'name': {'family': 'Hajiesmaili', 'given': 'Mohammad'}, 'orcid': '0000-0001-9278-2254'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Lui-John-C-S', 'name': {'family': 'Lui', 'given': 'John C. S.'}, 'orcid': '0000-0001-7466-0384'}, {'id': 'Towsley-Don', 'name': {'family': 'Towsley', 'given': 'Don'}, 'orcid': '0000-0002-7808-7375'}, {'id': 'Tsang-Danny-Hin-Kwok', 'name': {'family': 'Tsang', 'given': 'Danny H. K.'}, 'orcid': '0000-0003-0135-7098'}]}
Year: 2022
DOI: 10.1145/3570618
The online knapsack problem is a classic online resource allocation problem in networking and operations research. Its basic version studies how to pack online arriving items of different sizes and values into a capacity-limited knapsack. In this paper, we study a general version that includes item departures, while also considering multiple knapsacks and multi-dimensional item sizes. We design a threshold-based online algorithm and prove that the algorithm can achieve order-optimal competitive ratios. Beyond worst-case performance guarantees, we also aim to achieve near-optimal average performance under typical instances. Towards this goal, we propose a data-driven online algorithm that learns within a policy-class that guarantees a worst-case performance bound. In trace-driven experiments, we show that our data-driven algorithm outperforms other benchmark algorithms in an application of online knapsack to job scheduling for cloud computing.https://authors.library.caltech.edu/records/8zcg9-swy47Smoothed Online Optimization with Unreliable Predictions
https://resolver.caltech.edu/CaltechAUTHORS:20230316-86070000.1
Authors: {'items': [{'id': 'Rutten-Daan', 'name': {'family': 'Rutten', 'given': 'Daan'}, 'orcid': '0000-0002-4742-4201'}, {'id': 'Christianson-Nicolas-H', 'name': {'family': 'Christianson', 'given': 'Nicolas'}, 'orcid': '0000-0001-8330-8964'}, {'id': 'Mukherjee-Debankur', 'name': {'family': 'Mukherjee', 'given': 'Debankur'}, 'orcid': '0000-0003-1678-4893'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1145/3579442
We examine the problem of smoothed online optimization, where a decision maker must sequentially choose points in a normed vector space to minimize the sum of per-round, non-convex hitting costs and the costs of switching decisions between rounds. The decision maker has access to a black-box oracle, such as a machine learning model, that provides untrusted and potentially inaccurate predictions of the optimal decision in each round. The goal of the decision maker is to exploit the predictions if they are accurate, while guaranteeing performance that is not much worse than the hindsight optimal sequence of decisions, even when predictions are inaccurate. We impose the standard assumption that hitting costs are globally α-polyhedral. We propose a novel algorithm, Adaptive Online Switching (AOS), and prove that, for a large set of feasible δ > 0, it is (1+δ)-competitive if predictions are perfect, while also maintaining a uniformly bounded competitive ratio of 2^(Õ(1/(αδ))) even when predictions are adversarial. Further, we prove that this trade-off is necessary and nearly optimal in the sense that any deterministic algorithm which is (1 + δ)-competitive if predictions are perfect must be at least 2^(Ω̃(1/(αδ)))-competitive when predictions are inaccurate. In fact, we observe a unique threshold-type behavior in this trade-off: if δ is not in the set of feasible options, then no algorithm is simultaneously (1 + δ)-competitive if predictions are perfect and ζ-competitive when predictions are inaccurate for any ζ < ∞. Furthermore, we show that memory is crucial in AOS by proving that any algorithm that does not use memory cannot benefit from predictions. We complement our theoretical results by a numerical study on a microgrid application.https://authors.library.caltech.edu/records/hkfzt-avb28Online Adversarial Stabilization of Unknown Networked Systems
https://resolver.caltech.edu/CaltechAUTHORS:20230327-854092000.9
Authors: {'items': [{'id': 'Yu-Jing', 'name': {'family': 'Yu', 'given': 'Jing'}, 'orcid': '0000-0003-1318-0189'}, {'id': 'Ho-Dimitar', 'name': {'family': 'Ho', 'given': 'Dimitar'}, 'orcid': '0000-0002-7856-985X'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1145/3579452
We investigate the problem of stabilizing an unknown networked linear system under communication constraints and adversarial disturbances. We propose the first provably stabilizing algorithm for the problem. The algorithm uses a distributed version of nested convex body chasing to maintain a consistent estimate of the network dynamics and applies system level synthesis to determine a distributed controller based on this estimated model. Our approach avoids the need for system identification and accommodates a broad class of communication delay while being fully distributed and scaling favorably with the number of subsystems.https://authors.library.caltech.edu/records/qfnc6-w7606Global Convergence of Localized Policy Iteration in Networked Multi-Agent Reinforcement Learning
https://resolver.caltech.edu/CaltechAUTHORS:20230316-87864000.2
Authors: {'items': [{'id': 'Zhang-Yizhou', 'name': {'family': 'Zhang', 'given': 'Yizhou'}, 'orcid': '0000-0002-5677-4748'}, {'id': 'Qu-Guannan', 'name': {'family': 'Qu', 'given': 'Guannan'}, 'orcid': '0000-0002-5466-3550'}, {'id': 'Xu-Pan', 'name': {'family': 'Xu', 'given': 'Pan'}, 'orcid': '0000-0002-2559-8622'}, {'id': 'Lin-Yiheng', 'name': {'family': 'Lin', 'given': 'Yiheng'}, 'orcid': '0000-0001-6524-2877'}, {'id': 'Chen-Zaiwei', 'name': {'family': 'Chen', 'given': 'Zaiwei'}, 'orcid': '0000-0001-9915-5595'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1145/3579443
We study a multi-agent reinforcement learning (MARL) problem where the agents interact over a given network. The goal of the agents is to cooperatively maximize the average of their entropy-regularized long-term rewards. To overcome the curse of dimensionality and to reduce communication, we propose a Localized Policy Iteration (LPI) algorithm that provably learns a near-globally-optimal policy using only local information. In particular, we show that, despite restricting each agent's attention to only its κ-hop neighborhood, the agents are able to learn a policy with an optimality gap that decays polynomially in κ. In addition, we show the finite-sample convergence of LPI to the global optimal policy, which explicitly captures the trade-off between optimality and computational complexity in choosing κ. Numerical simulations demonstrate the effectiveness of LPI.https://authors.library.caltech.edu/records/hrq0x-5xz83An Energy Sharing Mechanism Considering Network Constraints and Market Power Limitation
https://resolver.caltech.edu/CaltechAUTHORS:20230502-727238500.2
Authors: {'items': [{'id': 'Chen-Yue', 'name': {'family': 'Chen', 'given': 'Yue'}, 'orcid': '0000-0002-7594-7587'}, {'id': 'Zhao-Changhong', 'name': {'family': 'Zhao', 'given': 'Changhong'}, 'orcid': '0000-0003-0539-8591'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1109/tsg.2022.3198721
As the number of prosumers with distributed energy resources (DERs) grows, the conventional centralized operation scheme may suffer from conflicting interests, privacy concerns, and incentive inadequacy. In this paper, we propose an energy sharing mechanism to address the above challenges. It takes into account network constraints and fairness among prosumers. In the proposed energy sharing market, all prosumers play a generalized Nash game. The market equilibrium is proved to have desirable properties in a large market or when it is a variational equilibrium. To deal with the possible market failure, inefficiency, or instability in general cases, we introduce a price regulation policy to avoid market power exploitation. The improved energy sharing mechanism with price regulation can guarantee the existence and uniqueness of a socially near-optimal market equilibrium. Several advantageous properties are established, including prosumers' individual rationality, a sharing price structure similar to the locational marginal price, and the tendency towards the social optimum as the number of prosumers increases. For implementation, a practical bidding algorithm is developed with a convergence condition. Experimental results validate the theoretical outcomes and show the practicability of our model and method.https://authors.library.caltech.edu/records/npwrx-d8m85Trading Throughput for Freshness: Freshness-aware Traffic Engineering and In-Network Freshness Control
https://resolver.caltech.edu/CaltechAUTHORS:20230327-854076000.4
Authors: {'items': [{'id': 'Tseng-Shih-Hao', 'name': {'family': 'Tseng', 'given': 'Shih-hao'}, 'orcid': '0000-0003-2376-9333'}, {'id': 'Han-SooJean', 'name': {'family': 'Han', 'given': 'SooJean'}, 'orcid': '0000-0003-1195-6465'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1145/3576919
With the advent of the Internet of Things (IoT), applications are becoming increasingly dependent on networks to not only transmit content at high throughput but also deliver it when it is fresh, i.e., synchronized between source and destination. Existing studies have proposed the metric age of information (AoI) to quantify freshness and have proposed system designs that achieve low AoI. However, despite active research in this area, existing results are not applicable to general wired networks for two reasons. First, they focus on wireless settings, where AoI is mostly affected by interference and collision, while queueing issues are more prevalent in wired settings. Second, traditional high-throughput/low-latency legacy drop-adverse (LDA) flows are not taken into account in most system designs; hence, the problem of scheduling mixed flows with distinct performance objectives is not addressed.
In this article, we propose a hierarchical system design to treat wired networks shared by mixed flow traffic, specifically LDA and AoI flows, and study the characteristics of achieving a good tradeoff between throughput and AoI. Our approach to the problem consists of two layers: freshness-aware traffic engineering (FATE) and in-network freshness control (IFC). The centralized FATE solution studies the characteristics of the source flow to derive the sending rate/update frequency for flows via the optimization problem LDA-AoI Coscheduling. The parameters specified by FATE are then distributed to IFC, which is implemented at each outport of the network's nodes and used for efficient scheduling between LDA and AoI flows. We present a Linux implementation of IFC and demonstrate the effectiveness of FATE/IFC through extensive emulations. Our results show that it is possible to trade a little throughput (5% lower) for much shorter AoI (49% to 71% shorter) compared to state-of-the-art traffic engineering.https://authors.library.caltech.edu/records/axjfk-9s189Near-Optimal Distributed Linear-Quadratic Regulator for Networked Systems
https://resolver.caltech.edu/CaltechAUTHORS:20230613-731307200.40
Authors: {'items': [{'id': 'Shin-Sungho', 'name': {'family': 'Shin', 'given': 'Sungho'}, 'orcid': '0000-0002-9889-3278'}, {'id': 'Lin-Yiheng', 'name': {'family': 'Lin', 'given': 'Yiheng'}, 'orcid': '0000-0001-6524-2877'}, {'id': 'Qu-Guannan', 'name': {'family': 'Qu', 'given': 'Guannan'}, 'orcid': '0000-0002-5466-3550'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Anitescu-Mihai', 'name': {'family': 'Anitescu', 'given': 'Mihai'}}]}
Year: 2023
DOI: 10.1137/22m1489836
This paper studies the trade-off between the degree of decentralization and the performance of a distributed controller in a linear-quadratic control setting. We study a system of interconnected agents over a graph and a distributed controller, called κ-distributed control, which lets the agents make control decisions based on the state information within distance κ on the underlying graph. This controller can tune its degree of decentralization using the parameter κ and thus allows a characterization of the relationship between decentralization and performance. We show that under mild assumptions, including stabilizability, detectability, and a subexponentially growing graph condition, the performance difference between κ-distributed control and centralized optimal control becomes exponentially small in κ. This result reveals that distributed control can achieve near-optimal performance with a moderate degree of decentralization, and thus it is an effective controller architecture for large-scale networked systems.https://authors.library.caltech.edu/records/j47b6-s0z07Minimization Fractional Prophet Inequalities for Sequential Procurement
https://resolver.caltech.edu/CaltechAUTHORS:20230710-599244800.27
Authors: {'items': [{'id': 'Qin-Junjie', 'name': {'family': 'Qin', 'given': 'Junjie'}, 'orcid': '0000-0002-9597-1138'}, {'id': 'Vardi-Shai', 'name': {'family': 'Vardi', 'given': 'Shai'}, 'orcid': '0000-0003-4720-6826'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1287/moor.2021.173
We consider a minimization variant of the classical prophet inequality with monomial cost functions. A firm would like to procure some fixed amount of a divisible commodity from sellers that arrive sequentially. Whenever a seller arrives, the seller's cost function is revealed, and the firm chooses how much of the commodity to buy. We first show that if one restricts the set of distributions for the coefficients to a family of natural distributions that include, for example, the uniform and truncated normal distributions, then there is a thresholding policy that is asymptotically optimal in the number of sellers. We then compare two scenarios based on whether the firm has in-house production capabilities or not. We precisely compute the optimal algorithm's competitive ratio when in-house production capabilities exist and for a special case when they do not. We show that the main advantage of the ability to produce the commodity in house is that it shields the firm from price spikes in worst-case scenarios.https://authors.library.caltech.edu/records/twb2b-x7v50The Online Pause and Resume Problem: Optimal Algorithms and An Application to Carbon-Aware Load Shifting
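The thresholding idea in this abstract can be sketched in a stylized form: the firm commits to a threshold τ and buys its full remaining demand from the first seller whose cost coefficient falls below τ, falling back to the last seller if no one qualifies. This simplification uses linear rather than monomial costs and a single fixed threshold; the paper's asymptotically optimal policy is more refined, and the function below is purely illustrative.

```python
def threshold_procurement(coeffs, demand, tau):
    """Stylized threshold rule for sequential procurement.

    Buy the entire remaining demand from the first seller whose cost
    coefficient is at most tau; if no seller qualifies, buy from the
    last seller, since the firm must meet its demand by the horizon.
    Costs are assumed linear in quantity (a simplifying assumption).
    """
    cost = 0.0
    for i, c in enumerate(coeffs):
        is_last = (i == len(coeffs) - 1)
        if c <= tau or is_last:
            cost += c * demand
            break
    return cost
```

For example, with coefficients revealed as 0.9, 0.4, 0.7 and τ = 0.5, the rule skips the first seller and buys everything from the second; with 0.9, 0.8 it is forced onto the last seller, which is the worst-case exposure that in-house production would cap.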
https://authors.library.caltech.edu/records/00as5-6gj35
Authors: {'items': [{'id': 'Lechowicz-Adam', 'name': {'family': 'Lechowicz', 'given': 'Adam'}, 'orcid': '0000-0002-7774-9939'}, {'id': 'Christianson-Nicolas', 'name': {'family': 'Christianson', 'given': 'Nicolas'}, 'orcid': '0000-0001-8330-8964'}, {'id': 'Zuo-Jinhang', 'name': {'family': 'Zuo', 'given': 'Jinhang'}, 'orcid': '0000-0002-9557-3551'}, {'id': 'Bashir-Noman', 'name': {'family': 'Bashir', 'given': 'Noman'}, 'orcid': '0000-0001-9304-910X'}, {'id': 'Hajiesmaili-Mohammad', 'name': {'family': 'Hajiesmaili', 'given': 'Mohammad'}, 'orcid': '0000-0001-9278-2254'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}, {'id': 'Shenoy-Prashant', 'name': {'family': 'Shenoy', 'given': 'Prashant'}, 'orcid': '0000-0002-5435-1901'}]}
Year: 2023
DOI: 10.1145/3626776
We introduce and study the online pause and resume problem. In this problem, a player attempts to find the k lowest (alternatively, highest) prices in a sequence of fixed length T, which is revealed sequentially. At each time step, the player is presented with a price and decides whether to accept or reject it. The player incurs a switching cost whenever their decision changes in consecutive time steps, i.e., whenever they pause or resume purchasing. This online problem is motivated by the goal of carbon-aware load shifting, where a workload may be paused during periods of high carbon intensity and resumed during periods of low carbon intensity and incurs a cost when saving or restoring its state. It has strong connections to existing problems studied in the literature on online optimization, though it introduces unique technical challenges that prevent the direct application of existing algorithms. Extending prior work on threshold-based algorithms, we introduce double-threshold algorithms for both the minimization and maximization variants of this problem. We further show that the competitive ratios achieved by these algorithms are the best achievable by any deterministic online algorithm. Finally, we empirically validate our proposed algorithm through case studies on the application of carbon-aware load shifting using real carbon trace data and existing baseline algorithms.https://authors.library.caltech.edu/records/00as5-6gj35Stability Constrained Reinforcement Learning for Decentralized Real-Time Voltage Control
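The double-threshold flavor of the minimization variant can be sketched as follows: use a stricter threshold to justify paying the switching cost of resuming, and a looser one to keep purchasing once already active. The specific thresholds, the fixed margin between them, and the function below are illustrative assumptions; the paper's algorithms choose their thresholds to be competitively optimal, and this sketch ignores the constraint of finishing all k purchases by the horizon T.

```python
def pause_resume_min(prices, k, accept_thresh, resume_margin, switch_cost):
    """Stylized double-threshold rule for buying at the k lowest prices.

    While paused, resume only if the price clears the stricter threshold
    accept_thresh - resume_margin (the purchase must be good enough to
    justify the switching cost); while purchasing, continue as long as
    the price stays at or below accept_thresh. Every flip of the
    purchase decision incurs switch_cost.
    """
    bought, cost, purchasing = 0, 0.0, False
    for p in prices:
        if bought == k:
            decide = False  # demand met: pause for good
        elif purchasing:
            decide = p <= accept_thresh
        else:
            decide = p <= accept_thresh - resume_margin
        if decide != purchasing:
            cost += switch_cost  # pay for pausing or resuming
            purchasing = decide
        if purchasing:
            cost += p
            bought += 1
    return cost
```

On the price sequence 5, 1, 1, 5, 1 with k = 3, accept threshold 2, resume margin 1, and switching cost 0.5, the rule buys at the three unit prices and pays for three switches, capturing the throughput-vs-switching trade-off that motivates carbon-aware load shifting.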
https://authors.library.caltech.edu/records/v26v8-j6q25
Authors: {'items': [{'id': 'Feng-Jie', 'name': {'family': 'Feng', 'given': 'Jie'}, 'orcid': '0000-0002-5049-9423'}, {'id': 'Shi-Yuanyuan', 'name': {'family': 'Shi', 'given': 'Yuanyuan'}, 'orcid': '0000-0002-6182-7664'}, {'id': 'Qu-Guannan', 'name': {'family': 'Qu', 'given': 'Guannan'}, 'orcid': '0000-0002-5466-3550'}, {'id': 'Low-S-H', 'name': {'family': 'Low', 'given': 'Steven H.'}, 'orcid': '0000-0001-6476-3048'}, {'id': 'Anandkumar-A', 'name': {'family': 'Anandkumar', 'given': 'Anima'}, 'orcid': '0000-0002-6974-6797'}, {'id': 'Wierman-A', 'name': {'family': 'Wierman', 'given': 'Adam'}, 'orcid': '0000-0002-5923-0199'}]}
Year: 2023
DOI: 10.1109/tcns.2023.3338240
Deep reinforcement learning has been recognized as a promising tool to address the challenges in real-time control of power systems. However, its deployment in real-world power systems has been hindered by a lack of explicit stability and safety guarantees. In this paper, we propose a stability-constrained reinforcement learning (RL) method for real-time implementation of voltage control that guarantees system stability both during policy learning and deployment of the learned policy. The key idea underlying our approach is an explicitly constructed Lyapunov function that leads to a sufficient structural condition for stabilizing policies, i.e., monotonically decreasing policies guarantee stability. We incorporate this structural constraint with RL, by parameterizing each local voltage controller using a monotone neural network, thus ensuring the stability constraint is satisfied by design. We demonstrate the effectiveness of our approach in both single-phase and three-phase IEEE test feeders, where the proposed method can reduce the transient control cost by more than 26.7% and shorten the voltage recovery time by 23.6% on average compared to the widely used linear policy, while always achieving voltage stability. In contrast, standard RL methods often fail to achieve voltage stability.https://authors.library.caltech.edu/records/v26v8-j6q25
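The monotone-parameterization idea in this abstract — enforce the structural condition by construction rather than by penalty — can be sketched with a tiny one-hidden-layer network: squaring each weight makes it nonnegative, tanh is increasing, so the composition is monotonically increasing in the input, and a final negation yields a monotonically decreasing controller. The architecture and parameter values below are illustrative assumptions, not the paper's actual network.

```python
import math

def monotone_decreasing_controller(v, w1, b1, w2, b2):
    """One-hidden-layer controller monotonically decreasing in the
    scalar voltage deviation v, by construction: squared weights are
    nonnegative, tanh is increasing, so hidden units are increasing in
    v, their nonnegative combination is increasing, and the final
    negation makes the whole map decreasing."""
    hidden = [math.tanh((w * w) * v + b) for w, b in zip(w1, b1)]
    return -sum((u * u) * h for u, h in zip(w2, hidden)) - b2
```

Because monotonicity holds for every parameter setting, gradient-based policy learning can explore freely without ever leaving the set of stabilizing policies, which is the point of the design.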