[
    {
        "id": "thesis:16999",
        "collection": "thesis",
        "collection_id": "16999",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:02122025-201948305",
        "primary_object_url": {
            "basename": "thesis_jiaweizhao_final_v1.pdf",
            "content": "final",
            "filesize": 11577434,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/16999/1/thesis_jiaweizhao_final_v1.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Understanding and Improving Efficiency in Training of Deep Neural Networks",
        "author": [
            {
                "family_name": "Zhao",
                "given_name": "Jiawei",
                "orcid": "0000-0002-5726-6040",
                "clpid": "Zhao-Jiawei"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Anandkumar",
                "given_name": "Anima",
                "orcid": "0000-0002-6974-6797",
                "clpid": "Anandkumar-A"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Anandkumar",
                "given_name": "Anima",
                "orcid": "0000-0002-6974-6797",
                "clpid": "Anandkumar-A"
            },
            {
                "family_name": "Mazumdar",
                "given_name": "Eric V.",
                "orcid": "0000-0002-1815-269X",
                "clpid": "Mazumdar-E-V"
            },
            {
                "family_name": "Chen",
                "given_name": "Beidi",
                "clpid": "Chen-Beidi"
            },
            {
                "family_name": "Tian",
                "given_name": "Yuandong",
                "clpid": "Tian-Yuandong"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>As deep neural networks (DNNs) continue to drive progress in fields like computer vision and natural language processing, their increasing complexity presents significant challenges for training efficiency, particularly in large language models (LLMs). These challenges include memory limitations, energy consumption, and bandwidth constraints during training.</p>\r\n\r\n<p>In this thesis, I address these challenges by analyzing the training dynamics of DNNs and proposing hardware-efficient learning algorithms to enhance training efficiency. First, I focus on mitigating memory limitations in LLM training. Training large models like LLMs requires substantial memory for parameters, gradients, and optimizer states, often exceeding standard hardware capacity. To tackle this, I propose GaLore, a memory-efficient training algorithm that reduces the memory footprint of LLM training by up to 65.5% while preserving performance. Additionally, I introduce InRank, an incremental low-rank learning algorithm that further reduces memory usage by gradually increasing matrix rank.</p>\r\n\r\n<p>Next, I address the issue of high energy consumption during training. Training large models like LLMs demands considerable energy, contributing to environmental impact. To mitigate this, I propose LNS-Madam, a low-precision training algorithm leveraging the logarithmic number system (LNS) to lower energy consumption without compromising accuracy. LNS-Madam achieves up to 90% energy savings compared to a full-precision baseline model.</p>\r\n\r\n<p>Finally, I focus on bandwidth limitations in distributed training. Training LLMs often requires distributing computations across multiple devices to accelerate training. However, network bandwidth constraints can cause communication bottlenecks that slow down training. 
To resolve this, I introduce signSGD with Majority Vote, a communication-efficient training algorithm that reduces the overhead associated with distributed training.</p>",
        "doi": "10.7907/jgq8-et91",
        "publication_date": "2025",
        "thesis_type": "phd",
        "thesis_year": "2025"
    },
    {
        "id": "thesis:17295",
        "collection": "thesis",
        "collection_id": "17295",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05292025-064811885",
        "primary_object_url": {
            "basename": "PhD_Thesis_Yiheng_Lin.pdf",
            "content": "final",
            "filesize": 3464596,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/17295/3/PhD_Thesis_Yiheng_Lin.pdf",
            "version": "v5.0.0"
        },
        "type": "thesis",
        "title": "Predictions and Policy Optimization in Online Decision Making",
        "author": [
            {
                "family_name": "Lin",
                "given_name": "Yiheng",
                "orcid": "0000-0001-6524-2877",
                "clpid": "Lin-Yiheng"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Yue",
                "given_name": "Yisong",
                "orcid": "0000-0001-9127-1989",
                "clpid": "Yue-Yisong"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Mazumdar",
                "given_name": "Eric V.",
                "orcid": "0000-0002-1815-269X",
                "clpid": "Mazumdar-E-V"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Yue",
                "given_name": "Yisong",
                "orcid": "0000-0001-9127-1989",
                "clpid": "Yue-Yisong"
            },
            {
                "family_name": "Srikant",
                "given_name": "Rayadurgam",
                "orcid": "0000-0003-1483-5204",
                "clpid": "Srikant-Rayadurgam"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Predictions are ubiquitous in modern systems, offering insights into how environments might evolve by encoding our prior knowledge and assumptions. Recent advances in artificial intelligence have significantly expanded the scope and accuracy of such models, creating vast new opportunities across domains. At the same time, online decision making remains a fundamental challenge in many real-world problems, concerned with challenges such as limited information, delayed feedback, and irrevocable actions. This dissertation focuses on the interplay between predictions and online decision making---how predictive information can be effectively leveraged to improve performance in dynamic, uncertain environments.</p>\r\n\r\n<p>While incorporating predictions often enhances decision-making, the degree of improvement can vary substantially. This variability arises from two key factors. First, the potential benefit of using predictions is fundamentally determined by both the nature of the predictions (e.g., their targets, errors, and distributions) and the characteristics of the decision-making process (e.g., costs and dynamics). Second, standard predictive policies frequently fall short of realizing such potential, especially in changing environments or when critical system parameters are unknown.</p>\r\n\r\n<p>This dissertation introduces a unified theoretical framework to quantify the benefit of leveraging predictions across a broad range of online decision-making problems. To close the gap between the maximum potential and achievable performance, we formulate a general policy optimization framework and design efficient algorithms capable of tracking optimal (predictive) policies in time-varying settings. Additionally, we address practical considerations such as scalability and computational efficiency, enabling the application of our methods in large-scale networks and on resource-constrained devices.</p>",
        "doi": "10.7907/37t0-7n77",
        "publication_date": "2025",
        "thesis_type": "phd",
        "thesis_year": "2025"
    },
    {
        "id": "thesis:17378",
        "collection": "thesis",
        "collection_id": "17378",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06022025-224128103",
        "primary_object_url": {
            "basename": "Christianson_Nicolas_2025_Thesis.pdf",
            "content": "final",
            "filesize": 14322988,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/17378/3/Christianson_Nicolas_2025_Thesis.pdf",
            "version": "v5.0.0"
        },
        "type": "thesis",
        "title": "Machine Learning-Augmented Algorithms: Theory and Applications in Energy and Sustainability",
        "author": [
            {
                "family_name": "Christianson",
                "given_name": "Nicolas Henry",
                "orcid": "0000-0001-8330-8964",
                "clpid": "Christianson-Nicolas-Henry"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Mazumdar",
                "given_name": "Eric V.",
                "orcid": "0000-0002-1815-269X",
                "clpid": "Mazumdar-E-V"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Hajiesmaili",
                "given_name": "Mohammad H.",
                "orcid": "0000-0001-9278-2254",
                "clpid": "Hajiesmaili-Mohammad-H"
            },
            {
                "family_name": "Zhang",
                "given_name": "Baosen",
                "orcid": "0000-0003-4065-7341",
                "clpid": "Zhang-Baosen"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Uncertainty poses a significant challenge for decision-makers in energy and sustainability domains. The ongoing energy transition<span>&#8212;</span>characterized by increasing penetrations of variable renewable generation, deployment of novel grid assets like battery energy storage systems, and growing risks from climate-driven natural disasters<span>&#8212;</span>introduces new, multifaceted uncertainties that traditional operational methods struggle to accommodate. While artificial intelligence (AI) and machine learning (ML) hold significant promise for navigating this transition and improving the efficiency of energy system operation, their direct deployment to high-stakes energy and sustainability problems presents substantial risks. In particular, current AI/ML tools typically lack guarantees on reliability, robustness, and safety, and thus pose a risk of poor performance or catastrophic failure if deployed in the real world. To make progress on decarbonization while maintaining reliability, new approaches are needed to enable the design of AI- and ML-augmented algorithms that achieve near-optimal performance while providing rigorous guarantees on robustness and reliability when deployed in real-world energy and sustainability problems.</p>\r\n\r\n<p>This thesis addresses this challenge from two complementary perspectives, seeking to bridge the gap between theoretical algorithmic insights and practical impact. In the first part, we develop <i>learning-augmented algorithms</i> that integrate black-box AI/ML \"advice\" into online optimization problems while ensuring provable, worst-case performance guarantees. We propose algorithms for several classes of problems<span>&#8212;</span>including cases with convex costs, nonconvex costs, and long-term deadline constraints<span>&#8212;</span>that obtain the provably optimal tradeoff between exploiting good AI performance and worst-case robustness. 
We demonstrate these algorithms' ability to improve operational efficiency in energy and sustainability domains through case studies on cogeneration power plant operation under high renewables penetration and carbon-aware workload shifting for geographically distributed datacenters.</p>\r\n\r\n<p>In the second part of this thesis, we move beyond the \"black box\" model of AI/ML to explore how risk-awareness and reliability can be integrated as primary design criteria in AI/ML model training and algorithm development more generally. We consider this objective along several avenues, introducing new theoretical and methodological approaches for risk-aware optimization and uncertainty quantification, designing new mechanisms for pricing general forms of uncertainty in electricity markets, and developing new frameworks for training machine learning models with provable reliability guarantees. Throughout, we emphasize connections with and applications to energy and sustainability problems ranging from grid-scale battery storage operation to power grid contingency analysis. Together, these approaches highlight both the challenges facing and the benefits of risk- and reliability-aware learning and decision-making.</p>",
        "doi": "10.7907/nyn2-q614",
        "publication_date": "2025",
        "thesis_type": "phd",
        "thesis_year": "2025"
    },
    {
        "id": "thesis:14980",
        "collection": "thesis",
        "collection_id": "14980",
        "cite_using_url": "https://resolver.caltech.edu/CaltechThesis:07202022-040725024",
        "primary_object_url": {
            "basename": "Thesis_Tongxin-Li.pdf",
            "content": "final",
            "filesize": 12516785,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/14980/1/Thesis_Tongxin-Li.pdf",
            "version": "v5.0.0"
        },
        "type": "thesis",
        "title": "Learning-Augmented Control and Decision-Making: Theory and Applications in Smart Grids",
        "author": [
            {
                "family_name": "Li",
                "given_name": "Tongxin",
                "orcid": "0000-0002-9806-8964",
                "clpid": "Li-Tongxin"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Yue",
                "given_name": "Yisong",
                "orcid": "0000-0001-9127-1989",
                "clpid": "Yue-Yisong"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Mazumdar",
                "given_name": "Eric V.",
                "orcid": "0000-0002-1815-269X",
                "clpid": "Mazumdar-E-V"
            }
        ],
        "local_group": [
            {
                "literal": "Resnick Sustainability Institute"
            },
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Achieving carbon neutrality by 2050 does not only lead to the increasing penetration of renewable energy, but also an explosive growth of smart meter data. Recently, augmenting classical methods in real-world cyber-physical systems such as smart grids with black-box AI tools, forecasts, and ML algorithms has attracted a lot of growing interest. Integrating AI techniques into smart grids, on the one hand, provides a new approach to handle the uncertainties caused by renewable resources and human behaviors, but on the other hand, creates practical issues such as reliability, stability, privacy, and scalability, etc. to the AI-integrated algorithms.</p>\r\n\r\n<p><em>This dissertation focuses on solving problems raised in designing learning-augmented control and decision-making algorithms.</em></p>\r\n \r\n<p>The results presented in this dissertation are three-fold. We first study a problem in linear quadratic control, where imperfect/untrusted AI predictions of system perturbations are available. We show that it is possible to design a learning-augmented algorithm with performance guarantees that is aggressive if the predictions are accurate and conservative if they are imperfect. Machine-learned black-box policies are ubiquitous for nonlinear control problems. Meanwhile, crude model information is often available for these problems from, e.g., linear approximations of nonlinear dynamics. We next study the problem of equipping a black-box control policy with model-based advice for nonlinear control on a single trajectory.  We first show a general negative result that a naive convex combination of a black-box policy and a linear model-based policy can lead to instability, even if the two policies are both stabilizing. We then propose an <em>adaptive \u03bb-confident policy</em>, with a coefficient \u03bb indicating the confidence in a black-box policy, and prove its stability. 
With bounded nonlinearity, in addition, we show that the adaptive \u03bb-confident policy achieves a bounded competitive ratio when a black-box policy is near-optimal. Finally, we propose an online learning approach to implement the adaptive \u03bb-confident policy and verify its efficacy in case studies about the Cart-Pole problem and a real-world electric vehicle (EV) charging problem with data bias due to COVID-19.</p>\r\n\r\n<p>Aggregators have emerged as crucial tools for the coordination of distributed, controllable loads. To be used effectively, an aggregator must be able to communicate the available flexibility of the loads they control, known as the aggregate flexibility to a system operator. However, most existing aggregate flexibility measures often are slow-timescale estimations and much less attention has been paid to real-time coordination between an aggregator and an operator. In the second part of this dissertation, we consider solving an online decision-making problem in a closed-loop system and present a design of <em>real-time</em> aggregate flexibility feedback, termed the <em>maximum entropy feedback</em> (MEF). In addition to deriving analytic properties of the MEF, combining learning and control, we show that it can be approximated using reinforcement learning and used as a penalty term in a novel control algorithm--the <em>penalized predictive control</em> (PPC) that enables efficient communication, fast computation, and lower costs. We illustrate the efficacy of the PPC using a dataset from an adaptive electric vehicle charging network and show that PPC outperforms classical MPC. We show that under certain regularity assumptions, the PPC is optimal. We illustrate the efficacy of the PPC using a dataset from an adaptive electric vehicle charging network and show that PPC outperforms classical model predictive control (MPC). In a theoretical perspective, a two-controller problem is formulated. 
A central controller chooses an action from a feasible set that is determined by time-varying and coupling constraints, which depend on all past actions and states. The central controller's goal is to minimize the cumulative cost; however, the controller has access to neither the feasible set nor the dynamics directly, which are determined by a remote local controller. Instead, the central controller receives only an aggregate summary of the feasibility information from the local controller, which does not know the system costs. We show that it is possible for an online algorithm using feasibility information to nearly match the dynamic regret of an online algorithm using perfect information whenever the feasible sets satisfy some criterion, which is satisfied by inventory and tracking constraints.</p>\r\n\r\n<p>The third part of this dissertation consists of examples of learning, inference, and data analysis methods for power system identification and electric charging. We present a power system identification problem with noisy nodal measurements and efficient algorithms, based on fundamental trade-offs between the number of measurements, the complexity of the graph class, and the probability of error. Next, we specifically consider prediction and unsupervised learning tasks in EV charging. We provide basic data analysis results of a public dataset released by Caltech and develop a novel iterative clustering method for classifying time series of EV charging rates.</p>",
        "doi": "10.7907/cdf6-0w78",
        "publication_date": "2023",
        "thesis_type": "phd",
        "thesis_year": "2023"
    },
    {
        "id": "thesis:15104",
        "collection": "thesis",
        "collection_id": "15104",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:02082023-223824752",
        "primary_object_url": {
            "basename": "Thesis_Anushri_Dixit.pdf",
            "content": "final",
            "filesize": 95636900,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/15104/1/Thesis_Anushri_Dixit.pdf",
            "version": "v5.0.0"
        },
        "type": "thesis",
        "title": "Risk-Aware Planning and Control in Extreme Environments",
        "author": [
            {
                "family_name": "Dixit",
                "given_name": "Anushri C.",
                "orcid": "0000-0002-9698-2189",
                "clpid": "Dixit-Anushri-C"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Ames",
                "given_name": "Aaron D.",
                "orcid": "0000-0003-0848-3177",
                "clpid": "Ames-A-D"
            },
            {
                "family_name": "Chung",
                "given_name": "Soon-Jo",
                "orcid": "0000-0002-6657-3907",
                "clpid": "Chung-Soon-Jo"
            },
            {
                "family_name": "Mazumdar",
                "given_name": "Eric V.",
                "orcid": "0000-0002-1815-269X",
                "clpid": "Mazumdar-E-V"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Safety-critical control and planning for autonomous systems operating in unstructured environments is a challenging problem must be addressed as autonomous vehicles, surgical robots, and autonomous industrial robots become more pervasive. This thesis addresses some of the issues in safety critical autonomy by introducing new techniques for computationally tractable and efficient safety-critical control.  The approach developed in this thesis arises from taking a deeper look at two questions: 1) How can we obtain better uncertainty quantification of the disturbances that affect autonomous systems either as a result of unmodeled changes in the environment or due to sensor imperfections?  2) Given richer uncertainty quantification techniques, how do incorporate the diverse uncertainty descriptions into the control and planning framework without sacrificing the tractability and efficiency of existing approaches?</p>\r\n\r\n<p>I address the above two questions by developing risk-aware control and planning techniques for traversal of a mobile robot over static but extreme terrain and in the presence of dynamic obstacles. We first look at algorithms for risk-aware terrain assessment, and extensively test them on wheeled and legged robots  that were deployed in subterranean tunnel, urban, and cave environments for search and rescue operations in the DARPA Subterranean Challenge. I then present a theory for risk-aware model predictive control in static environments and in the presence of dynamic obstacles. Coherent risk measures are applied to this planning and control framework in order to account for diverse uncertainty descriptions. Computationally tractable reformulations of the optimal control problem are realized through constraint tightening techniques.</p>\r\n   \r\n<p>I then investigate algorithms for uncertainty assessment and prediction of apriori unknown, dynamic obstacles using data-driven techniques. 
We use a technique from signal processing literature called Singular Spectrum Analysis for making linear predictions of dynamic obstacles. The obstacle motion predictions are equipped with error predictions to account for the uncertainty in the sensing heuristically using bootstrapping techniques. We use a statistical tool, Adaptive Conformal Inference, to further calibrate the heuristic error prediction online to obtain true uncertainty prediction while using nonstationary data to analyze the performance of the data-driven predictor. These techniques provide reactive, real-time, risk-aware obstacle avoidance in dynamic environments.</p>",
        "doi": "10.7907/xv2b-tj24",
        "publication_date": "2023",
        "thesis_type": "phd",
        "thesis_year": "2023"
    },
    {
        "id": "thesis:15262",
        "collection": "thesis",
        "collection_id": "15262",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06012023-203113987",
        "type": "thesis",
        "title": "Distributed and Localized Model Predictive Control",
        "author": [
            {
                "family_name": "Amo Alonso",
                "given_name": "Carmen",
                "orcid": "0000-0001-7593-5992",
                "clpid": "Amo-Alonso-Carmen"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "orcid": "0000-0002-1828-2486",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Mazumdar",
                "given_name": "Eric V.",
                "orcid": "0000-0002-1815-269X",
                "clpid": "Mazumdar-E-V"
            },
            {
                "family_name": "Matni",
                "given_name": "Nikolai",
                "orcid": "0000-0003-4936-3921",
                "clpid": "Matni-Nikolai"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "orcid": "0000-0002-1828-2486",
                "clpid": "Doyle-J-C"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "The increasing presence of large-scale distributed systems highlights the need for scalable control strategies where only local communication is required. Moreover, in safety-critical systems it is imperative that such control strategies handle constraints in the presence of disturbances and enjoy theoretical and performance guarantees. In response to this need, we present the Distributed and Localized Model Predictive Control (DLMPC) algorithm for large-scale linear systems. DLMPC is a distributed closed-loop model predictive control (MPC) scheme wherein only local state and model information needs to be exchanged between subsystems for the computation and implementation of control actions. The resulting distributed algorithms tackle various types of additive disturbances and enjoy recursive feasibility and asymptotic stability guarantees that introduce minimal conservatism and can be computed in an offline fashion without adding to the computational burden. We also provide analysis and guarantees on the global performance of DLMPC, and demonstrate that in cases where the underlying topology of the system is sparse (as is the case in most large-scale networks), the inclusion of local communication constraints does not result in a suboptimal solution. Moreover, we show that when no noise is present, this algorithm can be extended to the purely data-driven case where all previous guarantees hold and the need for a model is fully replaced by past-trajectory data. We show that the amount of data needed for our synthesis problem is independent of the size of the global system. Lastly, we explore the potential of DLMPC for hardware accelerated implementation in GPU by exploiting the fact that the structure of the DLMPC problem captures some of the limitations of GPU computations. 
In all algorithmic and theoretical results presented in this thesis, only local information exchange is necessary, and computational complexity is independent of the global system size. DLMPC is the first MPC algorithm that allows for the scalable, efficient, and data-driven computation and implementation of distributed closed-loop control policies while enjoying theoretical guarantees.",
        "doi": "10.7907/6pje-yd82",
        "publication_date": "2023",
        "thesis_type": "phd",
        "thesis_year": "2023"
    }
]