[
    {
        "id": "thesis:16465",
        "collection": "thesis",
        "collection_id": "16465",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06022024-014038700",
        "primary_object_url": {
            "basename": "ApurvaBadithela_June2024.pdf",
            "content": "final",
            "filesize": 96248118,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/16465/9/ApurvaBadithela_June2024.pdf",
            "version": "v6.0.0"
        },
        "type": "thesis",
        "title": "Test and Evaluation of Autonomous Systems: Reactive Test Synthesis and Task-Relevant Evaluation of Perception",
        "author": [
            {
                "family_name": "Badithela",
                "given_name": "Apurva Srinivas",
                "orcid": "0000-0002-9788-2702",
                "clpid": "Apurva-Apurva-Srinivas"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Ames",
                "given_name": "Aaron D.",
                "orcid": "0000-0003-0848-3177",
                "clpid": "Ames-A-D"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Wongpiromsarn",
                "given_name": "Tichakorn",
                "clpid": "Wongpiromsarn-Tichakorn"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Autonomous robotic systems have potential for profound impact on our society -- legged and wheeled robots for search and rescue missions, drones for wildfire management, self-driving cars for improving mobility, and robotic space missions for exploration and repair of spacecraft. The complexity of these systems implies that formal guarantees during the design phase alone are not sufficient; mainstream deployment of these systems requires principled frameworks for test and evaluation, and verification and validation. This thesis studies two such challenges to mainstream deployment of these systems.</p>\r\n\r\n<p>First, we consider the problem of evaluating perception models in a manner relevant to the system-level specification and the downstream planner. Perception and planning modules are often designed under different computational and mathematical paradigms. This thesis focuses on evaluating models for classification and detection tasks, and leverages confusion matrices, which are widely used in computer vision to evaluate object detection models, to derive probabilistic guarantees at the system level. However, not all perception errors are equally safety-critical, and traditional confusion matrices account for all objects equally. Thus, task-relevant metrics such as proposition-labeled confusion matrices are introduced. These are constructed by identifying propositional formulas relevant to the downstream planning logic and the system-level specification, and result in less conservative system-level guarantees. Using this analysis, fundamental tradeoffs in perception models are reflected in the tradeoffs of the probabilistic guarantees. This framework is illustrated on a car-pedestrian example in simulation, and the confusion matrices are constructed from state-of-the-art detection models evaluated on the nuScenes dataset.</p>\r\n\r\n<p>Second, we consider the problem of automatically synthesizing tests for autonomous robotic systems. 
These systems reason over both discrete variables (e.g., navigating left or right around an obstacle) and continuous variables (e.g., continuous trajectories). This thesis presents a flow-based approach for test environment synthesis that handles discrete variables and is also reactive to the system under test. Reactivity is important to account for uncertainties in system modeling, and to adapt to system behavior without knowledge of the system controller. These tests are synthesized from high-level specifications of desired behavior. Though the problem is shown to be NP-hard, a flow-based mixed-integer linear program formulation is used that scales well to medium-sized examples (e.g., >10,000 integer variables). The test environment can consist of static and reactive obstacles as well as dynamic test agents, whose strategies are synthesized to match the solution of the flow-based optimization. An overview of the approach is as follows. First, principles of automata theory are used to translate the high-level system and test objectives, and the non-deterministic abstraction of the system, into a network flow optimization. The solution of this optimization is then parsed into a GR(1) formula in linear temporal logic. This GR(1) formula is used to synthesize reactive strategies of a dynamic test agent in a counterexample-guided fashion. We provide guarantees that the synthesized test strategy will realize the desired test behavior under the assumption of a well-designed system, and that the test strategy is reactive and least restrictive. This framework is illustrated on several simulation and hardware experiments with quadrupeds, showing promise towards a layered approach to test and evaluation.</p>",
        "doi": "10.7907/e8qz-rd26",
        "publication_date": "2024",
        "thesis_type": "phd",
        "thesis_year": "2024"
    },
    {
        "id": "thesis:16107",
        "collection": "thesis",
        "collection_id": "16107",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06122023-162907795",
        "primary_object_url": {
            "basename": "Akella_Prithvi_2023.pdf",
            "content": "final",
            "filesize": 20419417,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/16107/1/Akella_Prithvi_2023.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Reliable Controller Synthesis: Guarantees for Safety-Critical System Testing and Verification",
        "author": [
            {
                "family_name": "Akella",
                "given_name": "Prithvi",
                "orcid": "0000-0003-4375-0015",
                "clpid": "Akella-Prithvi"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Ames",
                "given_name": "Aaron D.",
                "orcid": "0000-0003-0848-3177",
                "clpid": "Ames-A-D"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Ames",
                "given_name": "Aaron D.",
                "orcid": "0000-0003-0848-3177",
                "clpid": "Ames-A-D"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The well-known quote by George Box states that \"All models are wrong, but some are useful\", and the controls and robotics communities alike have followed a similar paradigm to make significant theoretical and practical advances in the study of controllable systems to date.  However, recent robotic system requirements include formal considerations for system safety, especially as we engineer systems that are required to work alongside us in our daily lives.  As such, current research directions require analyses that consider these inaccurate system models, our inaccurate understanding of the environments in which these systems operate, and their combined effects on safe, effective system operation, e.g. the canonical autonomous driving problem in exceedingly difficult-to-model urban environments.  Recently, this has led to burgeoning efforts in a formal study of controller verification.  Specifically, verification denotes the process of determining whether a controller steers its system to exhibit desired behaviors despite the variety of environments the system might face during operation, e.g. whether the autonomous car's controller successfully drives the car to a destination without crashing into obstacles or pedestrians along the way.  However, formalization of such a verification pipeline has proved difficult to date, especially since both the models we use for controller synthesis and our understanding of system environments are typically inaccurate.</p>\r\n   \r\n<p>As a result, this thesis describes our efforts in the development of a formal verification pipeline that addresses a few key challenges in traditional approaches to safety-critical system verification.  The first contribution centers on difficult, reactive test synthesis.  By test synthesis, we mean the construction of a (potentially difficult) environment in which we require the system under test to perform its objective, e.g. 
placement of parked cars around which an autonomous vehicle must park.  Typically phrased as an optimization problem over the space of allowable environments, these tests are \"static\" insofar as they do not react to the system's choices made during the test.  We posit that such reactivity could more accurately identify worst-case system behavior.  As a result, we phrase reactive, maximally difficult test synthesis as a game-theoretic optimization problem, leveraging the same control theoretic tools that facilitate safety-critical controller synthesis: control barrier functions and signal temporal logic.  We prove that our proposed synthesis technique is always solvable and always produces a realizable test environment.  Finally, we showcase our results by synthesizing reactive tests for both single and multi-agent systems.</p>\r\n\r\n<p>The second set of contributions centers on our efforts in uncertainty quantification.  Due to unmodeled system and environmental aspects affecting system evolution in unpredictable ways, real-life systems need not realize the same paths every time.  As such, typical analyses phrase verification as an optimization problem minimizing the expected value of a function over system trajectories, with the expectation taken over this path variability, the distribution for which is assumed to be known.  However, we posit that such an analysis should be risk-aware, i.e. account for this variability in a more principled fashion than an expectation-specific analysis, and should not assume a priori knowledge of the distribution corresponding to path variability, as it will be unknown in practice.  To that end, we develop methods to bound a subset of risk measures for random variables whose distributions are unknown.  This subset includes Value-at-Risk as well as coherent risk measures heavily utilized in the controls and robotics communities.  
Simultaneously, we note that the same procedure can be applied to a wide class of non-convex optimization problems.  In doing so, we develop a percentile-based optimization approach that rapidly identifies percentile solutions to optimization problems, i.e. a 90th-percentile solution is at least as good as 90% of solutions in the considered decision space.</p>\r\n\r\n<p>The third set of contributions focuses on the application of the prior mathematical developments to facilitate both risk-aware safety-critical system verification and controller synthesis.  We phrase risk-aware controller verification as a risk-measure identification problem and utilize the prior bounding results to provide an efficient, dimensionally-independent verification procedure.  Then, we phrase risk-aware controller synthesis as an optimization problem maximizing the bound provided by our risk-aware verification method, and show that this problem is solvable by the percentile optimization methods mentioned above.  Finally, we lay the foundation for the utilization of the aforementioned mathematical developments in other aspects of the controls and robotics communities more broadly.  We show how risk-measure bounding can augment models both offline and online to robustify safety-critical controllers, how percentile optimization can facilitate \"optimal\" input selection and guarantee generation for non-convex finite-time optimal controllers, and how multiple applications of the percentile approach can also bound the optimality gap of reported percentile solutions.  We showcase all these results on hardware for multiple systems and highlight the data efficiency of our proposed approaches.</p>",
        "doi": "10.7907/jej3-4444",
        "publication_date": "2023",
        "thesis_type": "phd",
        "thesis_year": "2023"
    },
    {
        "id": "thesis:14115",
        "collection": "thesis",
        "collection_id": "14115",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:04022021-033321217",
        "type": "thesis",
        "title": "Safe and Interpretable Autonomous Systems Design: Behavioral Contracts and Semantic-Based Perception",
        "author": [
            {
                "family_name": "Cai",
                "given_name": "Karena Xin",
                "orcid": "0000-0002-8392-4158",
                "clpid": "Cai-Karena-Xin"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Chung",
                "given_name": "Soon-Jo",
                "orcid": "0000-0002-6657-3907",
                "clpid": "Chung-Soon-Jo"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Chung",
                "given_name": "Soon-Jo",
                "orcid": "0000-0002-6657-3907",
                "clpid": "Chung-Soon-Jo"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>We are on the verge of experiencing a new, integrated society where autonomous vehicles will become part of the fabric of our everyday lives. And yet, seamless integration of autonomous vehicles into our society will require vehicles to interface safely with humans in an incredibly complex, fast-paced, and dynamic environment. Premature deployment of these new autonomous systems, without safety guarantees or interpretability of algorithms, could prove catastrophic. How can algorithms governing vehicle behavior be designed in a way that guarantees safety, performance, interpretability and scalability? This is the question this thesis seeks to answer. </p>\r\n\r\n<p>First, we present a framework for architecting the decision-making module of autonomous vehicles so that safety and progress of agents can be formally guaranteed. In particular, all agents are defined to act according to what is termed an assume-guarantee contract, which is broadly defined as a set of behavioral preferences. The first version of the assume-guarantee contract is a behavioral profile, which is a set of ordered rules that agents must use to select actions in a way that is interpretable. With all agents operating according to a behavioral profile, the interactions, however, are not necessarily coordinated. We then constrain agent behavior with an additional set of interaction rules. The behavioral profile, combined with these additional constraints, is what we term a behavioral protocol. With all agents operating according to a local, decentralized behavioral protocol, we can provide formal proofs of the correctness of agent behavior, i.e. agents will never collide and will make it to their respective destinations. Not only does the protocol so defined allow us to make formal guarantees, but it is also designed in a way that scales well in the number of agents and provides interpretability of agent behaviors. 
Safety and progress guarantees are proven and verified in simulation. </p>\r\n\r\n<p>Second, we focus on using information from object classifiers to enhance an autonomous vehicle's ability to localize itself within its environment. The proposed approach for incorporating this semantic information is based on solving a maximum likelihood problem. With a hierarchical formulation, we are not only able to improve upon the accuracy of traditional localization techniques, but we are also able to improve our confidence in the accuracy of object detection classifications. The improvements in robustness and accuracy of these algorithms are shown in simulation.</p>",
        "doi": "10.7907/w3m8-es32",
        "publication_date": "2021",
        "thesis_type": "phd",
        "thesis_year": "2021"
    },
    {
        "id": "thesis:14052",
        "collection": "thesis",
        "collection_id": "14052",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:01132021-065636010",
        "primary_object_url": {
            "basename": "Tung Phan Caltech Thesis.pdf",
            "content": "final",
            "filesize": 7424375,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/14052/1/Tung Phan Caltech Thesis.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Contract-Based Design: Theories and Applications",
        "author": [
            {
                "family_name": "Phan-Minh",
                "given_name": "Tung",
                "orcid": "0000-0002-1403-5197",
                "clpid": "Phan-Minh-Tung"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "orcid": "0000-0002-1828-2486",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Most things we know only exist in relation to one another. Their states are strongly coupled due to dependencies that arise from such relations. For a system designer, acknowledging the presence of these dependencies is as crucial to guaranteeing performance as studying them. As the roles played by technology in fields such as transportation, healthcare, and finance continue to grow more profound and diverse, modern engineering systems have grown to be more reliant on the integration of technologies across multiple disciplines and their requirements. The need to ensure proper division of labor, integration of system modules, and attribution of legal responsibility calls for a more methodological look into co-design considerations. Originally conceived in computer programming, contract-based reasoning is a design approach whose promise of a formal compositional paradigm is receiving attention from a broader engineering community. Our work is dedicated to narrowing the gap between the theory and application of this still-nascent framework.</p>\r\n\r\n<p>In the first half of this dissertation, we introduce a model interface contract theory for input/output automata with guards and a formalization of the directive-response architecture using assume-guarantee contracts, and show how these may be used to guide the formal design of a traffic intersection and an automated valet parking system, respectively. Next, we address a major drawback of assume-guarantee contracts, i.e., the problem of a void contract due to antecedent failure. Our proposed solution is a reactive version of assume-guarantee contracts that enables direct specification at the assumption and guarantee level along with a novel synthesis algorithm that exposes the effects of failures on the contract structure. 
This is then used to help optimize, adapt, and robustify our design against an uncertain environment.</p>\r\n\r\n<p>In light of the ongoing development of autonomous driving technologies and their potential impact on the safety of future transportation, the second half of this work is dedicated to the application of the design-by-contract framework to the distributed control of autonomous vehicles. We start by defining and proving properties of \"assume-guarantee profiles,\" our proposed approach to transparent distributed multi-agent decision making and behavior prediction. Next, we provide a local conflict resolution algorithm in the context of a quasi-simultaneous game which guarantees safety and liveness for the composition of autonomous vehicle systems in this game. Finally, to facilitate the extension of these frameworks to real-life urban driving settings, we also supply an effective method to predict agent behavior that utilizes recent advances in machine learning research.</p>",
        "doi": "10.7907/8vp7-kd82",
        "publication_date": "2021",
        "thesis_type": "phd",
        "thesis_year": "2021"
    },
    {
        "id": "thesis:13689",
        "collection": "thesis",
        "collection_id": "13689",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:04292020-165136662",
        "type": "thesis",
        "title": "Scalable Synthesis and Verification: Towards Reliable Autonomy",
        "author": [
            {
                "family_name": "Dathathri",
                "given_name": "Sumanth",
                "clpid": "Dathathri-Sumanth"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Gao",
                "given_name": "Sicun",
                "clpid": "Gao-Sicun"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>We have seen the growing deployment of autonomous systems in our daily life, ranging from safety-critical self-driving cars to dialogue agents. While impactful and impressive, these systems do not often come with guarantees and are not rigorously evaluated for failure cases. This is in part due to the limited scalability of tools available for designing correct-by-construction systems, or verifying them post hoc. Another key limitation is the lack of availability of models for the complex environments with which autonomous systems often have to interact. Toward overcoming the above-mentioned bottlenecks to designing reliable autonomous systems, this thesis makes contributions along three fronts.</p>\r\n\r\n<p>First, we develop an approach for parallelized synthesis from linear-time temporal logic specifications corresponding to the generalized reactivity (1) fragment. We begin by identifying a special case corresponding to singleton liveness goals that allows for a decomposition of the synthesis problem, which facilitates parallelized synthesis. Based on the intuition from this special case, we propose a more generalized approach for parallelized synthesis that relies on identifying equicontrollable states.</p>\r\n\r\n<p>Second, we consider learning-based approaches to enable verification at scale for complex systems, and for autonomous systems that interact with black-box environments. For the former, we propose a new abstraction refinement procedure based on machine learning to improve the performance of nonlinear constraint solving algorithms on large-scale problems. For the latter, we present a data-driven approach based on chance-constrained optimization that allows for a system to be evaluated for specification conformance without an accurate model of the environment. 
We demonstrate this approach on several tasks, including a lane-change scenario with real-world driving data.</p>\r\n\r\n<p>Lastly, we consider the problem of interpreting and verifying learning-based components such as neural networks. We introduce a new method based on Craig's interpolants for computing compact symbolic abstractions of pre-images for neural networks. Our approach relies on iteratively computing approximations that provably overapproximate and underapproximate the pre-images at all layers. Further, building on existing work for training neural networks for verifiability in the classification setting, we propose extensions that allow us to generalize the approach to more general architectures and temporal specifications.</p>",
        "doi": "10.7907/4j39-v857",
        "publication_date": "2020",
        "thesis_type": "phd",
        "thesis_year": "2020"
    },
    {
        "id": "thesis:11129",
        "collection": "thesis",
        "collection_id": "11129",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:07202018-115217471",
        "primary_object_url": {
            "basename": "filippidis_ioannis_2019.pdf",
            "content": "final",
            "filesize": 700365,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/11129/39/filippidis_ioannis_2019.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Decomposing Formal Specifications Into Assume-Guarantee Contracts for Hierarchical System Design",
        "author": [
            {
                "family_name": "Filippidis",
                "given_name": "Ioannis",
                "orcid": "0000-0003-4704-3334",
                "clpid": "Filippidis-Ioannis"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Holzmann",
                "given_name": "Gerard J.",
                "clpid": "Holzmann-G-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Specifications for complex engineering systems are typically decomposed into specifications for individual subsystems in a way that ensures they are implementable and simpler to develop further. We describe a method to algorithmically construct specifications for components that should implement a given specification when assembled. By eliminating variables that are irrelevant to realizability of each component, we simplify the specifications and reduce the amount of information necessary for operation.\r\nTo identify these variables, we parametrize the information flow between components.</p>\r\n\r\n\r\n<p>The specifications are written in the Temporal Logic of Actions, TLA+, with liveness properties restricted to an implication of conjoined recurrence properties, known as GR(1). We study whether GR(1) contracts exist in the presence of full information, and prove that memoryless GR(1) contracts that preserve safety do not always exist, whereas contracts in GR(1) with history-determined variables added do exist. We observe that timed stutter-invariant specifications of open-systems in general require GR(2) liveness properties for expressing them.</p>\r\n\r\n\r\n<p>We formalize a definition of realizability in TLA+, and define an operator for forming open-systems from closed-systems, based on a variant of the while-plus operator. The resulting open-system properties are realizable when expected to be. We compare stepwise implication operators from the literature, and establish relations between them, and examine the arity required for expressing these operators. We examine which symmetric combinations of stepwise implication and implementation kind avoid circular dependence, and show that only Moore components specified by strictly causal stepwise implication avoid circular dependence.</p>\r\n\r\n\r\n<p>The proposed approach relies on symbolic algorithms for computing specifications. 
To convert the generated specifications from binary decision diagrams to readable formulas over integer variables, we symbolically solve a minimal covering problem. We implemented an algorithm for minimal covering over lattices originally proposed for two-level logic minimization. We formalized the computation of essential elements and cyclic core that is part of this algorithm, and machine-checked the proofs of safety properties using a proof assistant. Proofs supporting the thesis are organized as TLA+ modules in appendices.</p>",
        "doi": "10.7907/Z9Q52MTD",
        "publication_date": "2019",
        "thesis_type": "phd",
        "thesis_year": "2019"
    },
    {
        "id": "thesis:10978",
        "collection": "thesis",
        "collection_id": "10978",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05312018-080301508",
        "primary_object_url": {
            "basename": "Ren_Xiaoqi_2018.pdf",
            "content": "final",
            "filesize": 6693722,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/10978/1/Ren_Xiaoqi_2018.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Optimizing Resource Management in Cloud Analytics Services",
        "author": [
            {
                "family_name": "Ren",
                "given_name": "Xiaoqi",
                "orcid": "0000-0002-1121-9046",
                "clpid": "Ren-Xiaoqi"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Yue",
                "given_name": "Yisong",
                "clpid": "Yue-Yisong"
            }
        ],
        "local_group": [
            {
                "literal": "Resnick Sustainability Institute"
            },
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The fundamental challenge in the cloud today is how to build and optimize machine learning and data analytical services. Machine learning and data analytical platforms are changing computing infrastructure from expensive private data centers to easily accessible online services. These services pack user requests as jobs and run them on thousands of machines in parallel in geo-distributed clusters. The scale and the complexity of emerging jobs lead to increasing challenges for the clusters at all levels, from power infrastructure to system architecture and corresponding software framework design.</p>\r\n\r\n<p>These challenges come in many forms. Today's clusters are built on commodity hardware and hardware failures are unavoidable. Resource competition, network congestion, and mixed generations of hardware make the hardware environment complex and hard to model and predict. Such heterogeneity becomes a crucial roadblock for efficient parallelization on both the task level and job level. Another challenge comes from the increasing complexity of the applications. For example, machine learning services run jobs made up of multiple tasks with complex dependency structures. This complexity leads to difficulties in framework designs. The scale, especially when services span geo-distributed clusters, leads to another important hurdle for cluster design.  Challenges also come from the power infrastructure. Power infrastructure is very expensive and accounts for more than 20% of the total costs to build a cluster. 
Power sharing optimization to maximize the facility utilization and smooth peak-hour usage is another roadblock for cluster design.</p>\r\n\r\n<p>In this thesis, we focus on solutions for these challenges at the task level, on the job level, with respect to the geo-distributed data cloud design and for power management in colocation data centers.</p>\r\n\r\n<p>At the task level, a crucial hurdle to achieving predictable performance is stragglers, i.e., tasks that take significantly longer than expected to run. To date, speculative execution has been widely adopted to mitigate the impact of stragglers in simple workloads. We apply straggler mitigation for approximation jobs for the first time. We present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. GRASS's design is based on the analysis of a model we develop to capture the optimal speculation levels for approximation jobs. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%.</p>\r\n\r\n<p>Moving from the task level to the job level, task-level speculation mechanisms are designed and operated independently of job scheduling when, in fact, scheduling a speculative copy of a task has a direct impact on the resources available for other jobs. Thus, we present Hopper, a job-level speculation-aware scheduler that integrates the tradeoffs associated with speculation into job scheduling decisions based on a model generalized from the task-level speculation model. 
We implement both centralized and decentralized prototypes of the Hopper scheduler and show that 50% (66%) improvements over state-of-the-art centralized (decentralized) schedulers and speculation strategies can be achieved through the coordination of scheduling and speculation.</p>\r\n\r\n<p>As computing resources move from local clusters to geo-distributed cloud services, we are expecting the same transformation for data storage. We study two crucial pieces of a geo-distributed data cloud system: data acquisition and data placement. Starting from developing the optimal algorithm for the case of a data cloud made up of a single data center, we propose a near-optimal, polynomial-time algorithm for a geo-distributed data cloud in general. We show, via a case study, that the resulting design, Datum, is near-optimal (within 1.6%) in practical settings.</p>\r\n \r\n<p>Efficient power management is a fundamental challenge for data centers when providing reliable services. Power oversubscription in data centers is very common and may occasionally trigger an emergency when the aggregate power demand exceeds the capacity. We study power capping solutions for handling such emergencies in a colocation data center, where the operator supplies power to multiple tenants. We propose a novel market mechanism based on supply function bidding, called COOP, to financially incentivize and coordinate tenants' power reduction for minimizing total performance loss while satisfying multiple power capping constraints. We demonstrate that COOP is \"win-win\", increasing the operator's profit (through oversubscription) and reducing tenants' costs (through financial compensation for their power reduction during emergencies).</p>",
        "doi": "10.7907/K62Y-FV39",
        "publication_date": "2018",
        "thesis_type": "phd",
        "thesis_year": "2018"
    },
    {
        "id": "thesis:9765",
        "collection": "thesis",
        "collection_id": "9765",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05262016-112813537",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "",
            "filesize": 1308192,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/9765/1/thesis.pdf",
            "version": "v1.0.0"
        },
        "type": "thesis",
        "title": "Electricity Markets for the Smart Grid: Networks, Timescales, and Integration with Control",
        "author": [
            {
                "family_name": "Cai",
                "given_name": "Wuhan Desmond",
                "orcid": "0000-0001-9207-1890",
                "clpid": "Cai-Wuhan-Desmond"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Ledyard",
                "given_name": "John O.",
                "clpid": "Ledyard-J-O"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Ledyard",
                "given_name": "John O.",
                "clpid": "Ledyard-J-O"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Vaidyanathan",
                "given_name": "P. P.",
                "clpid": "Vaidyanathan-P-P"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>We are at the dawn of a significant transformation in the electric industry. Renewable generation and customer participation in grid operations and markets have been growing at tremendous rates in recent years and these trends are expected to continue. These trends are likely to be accompanied by both engineering and market integration challenges. Therefore, to incorporate these resources efficiently into the grid, it is important to deal with the inefficiencies in existing markets. The goal of this thesis is to contribute new insights towards improving the design of electricity markets.</p>\r\n\r\n<p>This thesis makes three main contributions. First, we provide insights into how the economic dispatch mechanism could be designed to account for price-anticipating participants. We study this problem in the context of a networked Cournot competition with a market maker and we give an algorithm to find improved market clearing designs. Our findings illustrate the potential inefficiencies in existing markets and provide a framework for improving the design of the markets. Second, we provide insights into the strategic interactions between generation flexibility and forward markets. Our key insight is an observation that spot market capacity constraints can significantly impact the efficiency and existence of equilibrium in forward markets, as they give producers incentives to strategically withhold offers from the markets. Third, we provide insights into how optimization decomposition theory can guide optimal design of the architecture of power systems control. In particular, we illustrate a context where decomposition theory enables us to jointly design market and control mechanisms to allocate resources efficiently across both the economic dispatch and frequency regulation timescales.\r\n</p>",
        "doi": "10.7907/Z9BG2KZG",
        "publication_date": "2016",
        "thesis_type": "phd",
        "thesis_year": "2016"
    },
    {
        "id": "thesis:9549",
        "collection": "thesis",
        "collection_id": "9549",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:01262016-194420781",
        "primary_object_url": {
            "basename": "Pengthesis.pdf",
            "content": "final",
            "filesize": 2095107,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/9549/1/Pengthesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Distributed Control and Optimization for Communication and Power Systems",
        "author": [
            {
                "family_name": "Peng",
                "given_name": "Qiuyu",
                "clpid": "Peng-Qiuyu"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Vaidyanathan",
                "given_name": "P. P.",
                "clpid": "Vaidyanathan-P-P"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of end points that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.</p>\r\n\r\n<p>This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve reliability as well as efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties on the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel. We use our prototype to compare the newly proposed algorithm Balia with existing MP-TCP algorithms.</p>\r\n\r\n<p>Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. 
The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm that is based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose a more efficient algorithm that incurs a loss in optimality of less than 3% on the test networks.</p> \r\n\r\n<p>Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective such as generation cost or power loss. Traditionally OPF is solved in a centralized manner. With increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because power flow equations are nonlinear and Kirchhoff's law is global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results that suggest solving for a globally optimal solution of OPF over a radial network through a second-order cone program (SOCP) or semi-definite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms that require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. 
Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.</p>",
        "doi": "10.7907/Z99C6VBW",
        "publication_date": "2016",
        "thesis_type": "phd",
        "thesis_year": "2016"
    },
    {
        "id": "thesis:8654",
        "collection": "thesis",
        "collection_id": "8654",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:09082014-135331211",
        "type": "thesis",
        "title": "Macroscopically Dissipative Systems with Underlying Microscopic Dynamics: Properties and Limits of Measurement",
        "author": [
            {
                "family_name": "Asimakopoulos",
                "given_name": "Aristotelis",
                "clpid": "Asimakopoulos-Aristotelis"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "orcid": "0000-0002-1828-2486",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Hajimiri",
                "given_name": "Ali",
                "orcid": "0000-0001-6736-8019",
                "clpid": "Hajimiri-A"
            },
            {
                "family_name": "Phillips",
                "given_name": "Robert B.",
                "orcid": "0000-0003-3082-2809",
                "clpid": "Phillips-R"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "orcid": "0000-0002-1828-2486",
                "clpid": "Doyle-J-C"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>While some of the deepest results in nature are those that give explicit bounds  between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.</p>\r\n\r\n<p>Motivated by these physical theories, and perhaps their inconsistencies,  in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment.  
This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty.  Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum.  We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.</p>",
        "doi": "10.7907/Z9V40S4N",
        "publication_date": "2015",
        "thesis_type": "phd",
        "thesis_year": "2015"
    },
    {
        "id": "thesis:8762",
        "collection": "thesis",
        "collection_id": "8762",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:01272015-214848277",
        "type": "thesis",
        "title": "Distributed Load Control in Multiphase Radial Networks",
        "author": [
            {
                "family_name": "Gan",
                "given_name": "Lingwen",
                "clpid": "Gan-Lingwen"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Hassibi",
                "given_name": "Babak",
                "clpid": "Hassibi-B"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.</p>\r\n\r\n<p>Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.</p>\r\n\r\n<p>This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.</p>\r\n\r\n<p>Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but energy usage can be shifted over time in response to network conditions. 
Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.</p>\r\n\r\n<p>We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.</p>\r\n\r\n<p>Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other seeking a locally optimal load schedule.</p>\r\n\r\n<p>To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. 
Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.</p>\r\n\r\n<p>Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.</p>\r\n\r\n<p>To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived with the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a speedup of more than 70x over the convex relaxation approach, at the cost of a suboptimality within numerical precision.</p>",
        "doi": "10.7907/Z9FQ9TJ0",
        "publication_date": "2015",
        "thesis_type": "phd",
        "thesis_year": "2015"
    },
    {
        "id": "thesis:7990",
        "collection": "thesis",
        "collection_id": "7990",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:10142013-111401153",
        "primary_object_url": {
            "basename": "mihai_florian_thesis.pdf",
            "content": "final",
            "filesize": 1272593,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7990/1/mihai_florian_thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Analysis-Aware Design of Embedded Systems Software",
        "author": [
            {
                "family_name": "Florian",
                "given_name": "Mihai",
                "clpid": "Florian-Mihai"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Holzmann",
                "given_name": "Gerard J.",
                "clpid": "Holzmann-G-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Holzmann",
                "given_name": "Gerard J.",
                "clpid": "Holzmann-G-J"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Havelund",
                "given_name": "Klaus",
                "clpid": "Havelund-K"
            },
            {
                "family_name": "Joshi",
                "given_name": "Rajeev",
                "clpid": "Joshi-R"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>In the past, many different methodologies have been devised to support software development and different sets of methodologies have been developed to support the analysis of software artefacts. We have identified the mismatch between these two sets of methodologies as one of the causes of the poor reliability of embedded systems software. The issue with software development styles is that they are \"analysis-agnostic.\" They do not try to structure the code in a way that lends itself to analysis. The analysis is usually applied post-mortem, after the software has been developed, and it requires a large amount of effort. The issue with software analysis methodologies is that they do not exploit available information about the system being analyzed.</p>\r\n\r\n<p>In this thesis we address the above issues by developing a new methodology, called \"analysis-aware\" design, that links software development styles with the capabilities of analysis tools. This methodology forms the basis of a framework for interactive software development. The framework consists of an executable specification language and a set of analysis tools based on static analysis, testing, and model checking. The language enforces an analysis-friendly code structure and offers primitives that allow users to implement their own testers and model checkers directly in the language. We introduce a new approach to static analysis that takes advantage of the capabilities of a rule-based engine. We have applied the analysis-aware methodology to the development of a smart home application.</p>",
        "doi": "10.7907/VB1N-Y042",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:8458",
        "collection": "thesis",
        "collection_id": "8458",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06012014-040224456",
        "primary_object_url": {
            "basename": "BoseThesis.pdf",
            "content": "final",
            "filesize": 2661427,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/8458/1/BoseThesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "An Integrated Design Approach to Power Systems: From Power Flows to Electricity Markets",
        "author": [
            {
                "family_name": "Bose",
                "given_name": "Subhonmesh",
                "clpid": "Bose-Subhonmesh"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Hassibi",
                "given_name": "Babak",
                "clpid": "Hassibi-B"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Hassibi",
                "given_name": "Babak",
                "clpid": "Hassibi-B"
            },
            {
                "family_name": "Ledyard",
                "given_name": "John O.",
                "clpid": "Ledyard-J-O"
            },
            {
                "family_name": "Baldick",
                "given_name": "Ross",
                "clpid": "Baldick-R"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "The power system is on the brink of change. Engineering needs, economic forces, and environmental factors are the main drivers of this change. The vision is to build a smart electrical grid and a smarter market mechanism around it to fulfill mandates on clean energy. Looking at engineering and economic issues in isolation is no longer an option today; an integrated design approach is needed. In this thesis, I shall revisit some of the classical questions on the engineering operation of power systems that deal with the nonconvexity of power flow equations. Then I shall explore the interaction of these power flow equations with electricity markets to address the fundamental issue of market power in a deregulated market environment. Finally, motivated by the emergence of new storage technologies, I present an interesting result on the investment decision problem of placing storage over a power network. The goal of this study is to demonstrate that modern optimization and game theory can provide unique insights into this complex system. Some of the ideas carry over to applications beyond power systems.",
        "doi": "10.7907/FRGW-AF26",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:8457",
        "collection": "thesis",
        "collection_id": "8457",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05312014-215801543",
        "primary_object_url": {
            "basename": "PhD_zhenhua.pdf",
            "content": "final",
            "filesize": 2273195,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/8457/1/PhD_zhenhua.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Sustainable IT and IT for Sustainability",
        "author": [
            {
                "family_name": "Liu",
                "given_name": "Zhenhua",
                "clpid": "Liu-Zhenhua"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Liu",
                "given_name": "Xue",
                "clpid": "Liu-X"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Energy and sustainability are among the most critical issues of our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.</p>\r\n\r\n<p>The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT is one of the fastest-growing sectors in energy usage and greenhouse gas pollution. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but the efficiency improvements do not necessarily lead to a reduction in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements are from improved \"engineering\" rather than improved \"algorithms\". In contrast, my work focuses on developing algorithms with rigorous theoretical analysis that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time by scheduling delay-tolerant workloads and (ii) in space by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed with theoretically provable guarantees to enable the \"follow the renewables\" routing. Moving from theory to practice, I helped HP design and implement the industry's first Net-zero Energy Data Center.</p>\r\n\r\n<p>The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand-responsive. The potential of such an approach is huge.</p>\r\n\r\n<p>To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response to align the interests of customers, utility companies, and society to improve social welfare.</p>",
        "doi": "10.7907/296T-HR79",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:8188",
        "collection": "thesis",
        "collection_id": "8188",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:04152014-111007328",
        "primary_object_url": {
            "basename": "Faulkner-M-N-2014-thesis.pdf",
            "content": "final",
            "filesize": 46546824,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/8188/1/Faulkner-M-N-2014-thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Community Sense and Response Systems",
        "author": [
            {
                "family_name": "Faulkner",
                "given_name": "Matthew Nicholas",
                "clpid": "Faulkner-Matthew-Nicholas"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Krause",
                "given_name": "R. Andreas",
                "clpid": "Krause-R-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Heaton",
                "given_name": "Thomas H.",
                "clpid": "Heaton-T-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Krause",
                "given_name": "R. Andreas",
                "clpid": "Krause-R-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Heaton",
                "given_name": "Thomas H.",
                "clpid": "Heaton-T-H"
            },
            {
                "family_name": "Tropp",
                "given_name": "Joel A.",
                "clpid": "Tropp-J-A"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.</p> \r\n  \r\n<p>The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers' smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques to design, build and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.</p>",
        "doi": "10.7907/QFM5-FH06",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:8145",
        "collection": "thesis",
        "collection_id": "8145",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:03182014-225151551",
        "type": "thesis",
        "title": "New Applications that Come from Extending Seismic Networks into Buildings",
        "author": [
            {
                "family_name": "Cheng",
                "given_name": "Ming Hei",
                "clpid": "Cheng-Ming-Hei"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Heaton",
                "given_name": "Thomas H.",
                "clpid": "Heaton-T-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Beck",
                "given_name": "James L.",
                "clpid": "Beck-J-L"
            },
            {
                "family_name": "Heaton",
                "given_name": "Thomas H.",
                "clpid": "Heaton-T-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Hall",
                "given_name": "John F.",
                "clpid": "Hall-J-F"
            },
            {
                "family_name": "Kohler",
                "given_name": "Monica D.",
                "clpid": "Kohler-M-D"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "This thesis describes engineering applications that come from extending seismic networks into building structures. The proposed applications will benefit from the data of the newly developed crowd-sourced seismic networks, which are composed of low-cost accelerometers. An overview of the Community Seismic Network and the earthquake detection method is presented. In the structural array components of crowd-sourced seismic networks, there may be instances in which a single seismometer is the only data source that is available from a building. A simple prismatic Timoshenko beam model with soil-structure interaction (SSI) is developed to approximate mode shapes of buildings using natural frequency ratios. A closed-form solution with complete vibration modes is derived. In addition, a new method to rapidly estimate the total displacement response of a building based on limited observational data, in some cases from a single seismometer, is presented. The total response of a building is modeled by the combination of the initial vibrating motion due to an upward traveling wave and the subsequent motion as the low-frequency resonant mode response. Furthermore, the expected shaking intensities in tall buildings will be significantly different from those on the ground during earthquakes. Examples are included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. The development of engineering applications (e.g., human comfort prediction and automated elevator control) for earthquake early warning systems using a probabilistic framework and statistical learning techniques is also addressed.",
        "doi": "10.7907/STB2-XR07",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:7939",
        "collection": "thesis",
        "collection_id": "7939",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:08242013-182604077",
        "type": "thesis",
        "title": "Cloud Computing Services for Seismic Networks",
        "author": [
            {
                "family_name": "Olson",
                "given_name": "Michael James",
                "clpid": "Olson-Michael-James"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Heaton",
                "given_name": "Thomas H.",
                "clpid": "Heaton-T-H"
            },
            {
                "family_name": "Billock",
                "given_name": "Joseph Gregory",
                "clpid": "Billock-J-G"
            },
            {
                "family_name": "Clayton",
                "given_name": "Robert W.",
                "clpid": "Clayton-R-W"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "This thesis describes a compositional framework for developing situation awareness applications: applications that provide ongoing information about a user's changing environment.  The thesis describes how the framework is used to develop a situation awareness application for earthquakes. The applications are implemented as Cloud computing services connected to sensors and actuators. The architecture and design of the Cloud services are described and measurements of performance metrics are provided.  The thesis includes results of experiments on earthquake monitoring conducted over a year. The applications developed by the framework are (1) the CSN --- the Community Seismic Network --- which uses relatively low-cost sensors deployed by members of the community, and (2) SAF  --- the Situation Awareness Framework --- which integrates data from multiple sources, including the CSN, CISN --- the California Integrated Seismic Network, a network consisting of high-quality seismometers deployed carefully by professionals in the CISN organization and spread across Southern California --- and prototypes of multi-sensor platforms that include carbon monoxide, methane, dust and radiation sensors.",
        "doi": "10.7907/5D60-FG88",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:8078",
        "collection": "thesis",
        "collection_id": "8078",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:02172014-121159358",
        "primary_object_url": {
            "basename": "wolff_eric_thesis.pdf",
            "content": "final",
            "filesize": 2659687,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/8078/1/wolff_eric_thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Control of Dynamical Systems with Temporal Logic Specifications",
        "author": [
            {
                "family_name": "Wolff",
                "given_name": "Eric McKenzie",
                "clpid": "Wolff-Eric-McKenzie"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Topcu",
                "given_name": "Ufuk",
                "clpid": "Topcu-U"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications.  Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.</p>\r\n\r\n<p>The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.</p>\r\n\r\n<p>The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.</p> \r\n\r\n<p>The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.</p>",
        "doi": "10.7907/TGFR-SS39",
        "publication_date": "2014",
        "thesis_type": "phd",
        "thesis_year": "2014"
    },
    {
        "id": "thesis:7753",
        "collection": "thesis",
        "collection_id": "7753",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05252013-081655550",
        "primary_object_url": {
            "basename": "Sojoudi_Somayeh_2013_Thesis.pdf",
            "content": "final",
            "filesize": 1227396,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7753/1/Sojoudi_Somayeh_2013_Thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Mathematical Study of Complex Networks: Brain, Internet, and Power Grid",
        "author": [
            {
                "family_name": "Sojoudi",
                "given_name": "Somayeh",
                "clpid": "Sojoudi-Somayeh"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a \u201ccontrol and optimization\u201d point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one is \u201cflow optimization over a flow network\u201d and the second one is \u201cnonlinear optimization over a generalized weighted graph\u201d. The results derived in this dissertation are summarized below.</p>\r\n\r\n<p>Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix\u2014describing marginal and conditional dependencies between brain regions\u2014have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.</p>\r\n\r\n<p>Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.</p>\r\n\r\n<p>Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of \u201cpower over-delivery\u201d is equivalent to relaxing the power balance equations to inequality constraints.</p>\r\n\r\n<p>Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.</p>\r\n\r\n<p>Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.</p>",
        "doi": "10.7907/E750-2M74",
        "publication_date": "2013",
        "thesis_type": "phd",
        "thesis_year": "2013"
    },
    {
        "id": "thesis:7188",
        "collection": "thesis",
        "collection_id": "7188",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:08192012-145253489",
        "type": "thesis",
        "title": "GRAph Parallel Actor Language: A Programming Language for Parallel Graph Algorithms",
        "author": [
            {
                "family_name": "DeLorimier",
                "given_name": "Michael John",
                "clpid": "DeLorimier-Michael-John"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "DeHon",
                "given_name": "Andre",
                "clpid": "DeHon-A"
            },
            {
                "family_name": "Desbrun",
                "given_name": "Mathieu",
                "clpid": "Desbrun-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "DeHon",
                "given_name": "Andre",
                "clpid": "DeHon-A"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Meiron",
                "given_name": "Daniel I.",
                "clpid": "Meiron-D-I"
            },
            {
                "family_name": "Shrobe",
                "given_name": "Howard",
                "clpid": "Shrobe-H"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "We introduce a domain-specific language, GRAph Parallel Actor Language, that enables parallel graph algorithms to be written in a natural, high-level form. GRAPAL is based on our GraphStep compute model, which enables a wide range of parallel graph algorithms that are high-level, deterministic, free from race conditions, and free from deadlock. Programs written in GRAPAL are easy for a compiler and runtime to map to efficient parallel field programmable gate array (FPGA) implementations. We show that the GRAPAL compiler can verify that the structure of operations conforms to the GraphStep model. We allocate many small processing elements in each FPGA that take advantage of the high on-chip memory bandwidth (5x the sequential processor) and process one graph edge per clock cycle per processing element. We show how to automatically choose parameters for the logic architecture so the high-level GRAPAL programming model is independent of the target FPGA architecture. We compare our GRAPAL applications mapped to a platform with four 65 nm Virtex-5 SX95T FPGAs to sequential programs run on a single 65 nm Xeon 5160. Our implementation achieves a total mean speedup of 8x with a maximum speedup of 28x. The speedup per chip is 2x with a maximum of 7x. The ratio of energy used by our GRAPAL implementation over the sequential implementation has a mean of 1/10 with a minimum of 1/80.",
        "doi": "10.7907/M3TW-7Y53",
        "publication_date": "2013",
        "thesis_type": "phd",
        "thesis_year": "2013"
    },
    {
        "id": "thesis:7822",
        "collection": "thesis",
        "collection_id": "7822",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06032013-104204451",
        "primary_object_url": {
            "basename": "gopalakrishnan_ragavendran_2013.pdf",
            "content": "final",
            "filesize": 1600469,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7822/1/gopalakrishnan_ragavendran_2013.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Characterizing Distribution Rules for Cost Sharing Games",
        "author": [
            {
                "family_name": "Gopalakrishnan",
                "given_name": "Ragavendran",
                "clpid": "Gopalakrishnan-Ragavendran"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Ledyard",
                "given_name": "John O.",
                "clpid": "Ledyard-J-O"
            },
            {
                "family_name": "Ligett",
                "given_name": "Katrina A.",
                "clpid": "Ligett-K-A"
            },
            {
                "family_name": "Marden",
                "given_name": "Jason R.",
                "clpid": "Marden-J-R"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, which is formalized by the concept of <em>Nash equilibrium</em>, e.g., Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.</p>\r\n\r\n<p>Recent work that seeks to characterize the space of all such rules shows that the only <em>budget-balanced</em> distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs), by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions remains, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.</p>\r\n\r\n<p>We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. 
Also, in order to guarantee equilibrium existence, it is <em>necessary</em> to work within the class of <em>potential games</em>, since GWSVs result in (weighted) potential games.</p>\r\n\r\n<p>We also provide an alternative characterization&#8212;all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose&#8212;they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.</p>",
        "doi": "10.7907/AWE2-H976",
        "publication_date": "2013",
        "thesis_type": "phd",
        "thesis_year": "2013"
    },
    {
        "id": "thesis:7815",
        "collection": "thesis",
        "collection_id": "7815",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05312013-223354639",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 1293573,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7815/1/thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Algorithmic Challenges in Green Data Centers",
        "author": [
            {
                "family_name": "Lin",
                "given_name": "Minghong",
                "clpid": "Lin-Minghong"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Lui",
                "given_name": "John C. S.",
                "clpid": "Lui-J-CS"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service.  We focus on the algorithmic challenges at different levels of energy optimization across the data center stack.  The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling.  We analyze the common speed scaling algorithms in both the worst-case model and stochastic model  to answer some fundamental issues in the design of speed scaling algorithms.  The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers.  The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity price, availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner.  Motivated by the above online decision problems, we move on to study a general class of online problem named \"smoothed online convex optimization\", which seeks to minimize the sum of a sequence of convex functions when \"smooth\" solutions are preferred.  This model allows us to bridge different research communities and help us get a more fundamental understanding of general online decision problems.",
        "doi": "10.7907/NRXJ-JB76",
        "publication_date": "2013",
        "thesis_type": "phd",
        "thesis_year": "2013"
    },
    {
        "id": "thesis:7789",
        "collection": "thesis",
        "collection_id": "7789",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05312013-103940337",
        "primary_object_url": {
            "basename": "Xu_Huan_Thesis_2013.pdf",
            "content": "final",
            "filesize": 16950376,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7789/1/Xu_Huan_Thesis_2013.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Design, Specification, and Synthesis of Aircraft Electric Power Systems Control Logic",
        "author": [
            {
                "family_name": "Xu",
                "given_name": "Huan",
                "clpid": "Xu-Huan"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Holzmann",
                "given_name": "Gerard J.",
                "clpid": "Holzmann-G-J"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, ac- tuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based spec- ifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considera- tions for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.</p>\r\n\r\n<p>This thesis consists of three complementary parts: synthesis, specification, and design. The first section focuses on the synthesis of central and distributed reactive controllers for an aircraft elec- tric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second section addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.</p>\r\n\r\n<p>The final sections focus on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. 
The discrete-time control logic is then verified in real-time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is ex- plored. Given a set placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.</p>",
        "doi": "10.7907/QDJN-BB72",
        "publication_date": "2013",
        "thesis_type": "phd",
        "thesis_year": "2013"
    },
    {
        "id": "thesis:7856",
        "collection": "thesis",
        "collection_id": "7856",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06062013-224746692",
        "primary_object_url": {
            "basename": "thesis_annie.pdf",
            "content": "final",
            "filesize": 23718990,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7856/1/thesis_annie.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Sensor Networks for Geospatial Event Detection - Theory and Applications",
        "author": [
            {
                "family_name": "Liu",
                "given_name": "Annie Hsin-Wen",
                "clpid": "Liu-Annie-Hsin-Wen"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Clayton",
                "given_name": "Robert W.",
                "clpid": "Clayton-R-W"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "clpid": "Wierman-A-C"
            },
            {
                "family_name": "Beck",
                "given_name": "James L.",
                "clpid": "Beck-J-L"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis presents theories, analyses, and algorithms for detecting and estimating parameters of geospatial events with today's large, noisy sensor networks. A geospatial event is initiated by a significant change in the state of points in a region in a 3-D space over an interval of time. After the event is initiated it may change the state of points over larger regions and longer periods of time.</p>\r\n\r\n<p>Networked sensing is a typical approach for geospatial event detection. In contrast to traditional sensor networks comprised of a small number of high quality (and expensive) sensors, trends in personal computing devices and consumer electronics have made it possible to build large, dense networks at a low cost. The changes in sensor capability, network composition, and system constraints call for new models and algorithms suited to the opportunities and challenges of the new generation of sensor networks.</p>\r\n\r\n<p>This thesis offers a single unifying model and a Bayesian framework for analyzing different types of geospatial events in such noisy sensor networks. It presents algorithms and theories for estimating the speed and accuracy of detecting geospatial events as a function of parameters from both the underlying geospatial system and the sensor network. Furthermore, the thesis addresses network scalability issues by presenting rigorous scalable algorithms for data aggregation for detection. These studies provide insights to the design of networked sensing systems for detecting geospatial events.</p>\r\n\r\n<p>In addition to providing an overarching framework, this thesis presents theories and experimental results for two very different geospatial problems: detecting earthquakes and hazardous radiation. The general framework is applied to these specific problems, and predictions based on the theories are validated against measurements of systems in the laboratory and in the field.</p>",
        "doi": "10.7907/MZWJ-T222",
        "publication_date": "2013",
        "thesis_type": "phd",
        "thesis_year": "2013"
    },
    {
        "id": "thesis:7148",
        "collection": "thesis",
        "collection_id": "7148",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06082012-122324439",
        "type": "thesis",
        "title": "Network Coding and Distributed Compression over Large Networks: Some Basic Principles",
        "author": [
            {
                "family_name": "Bakshi",
                "given_name": "Mayank",
                "clpid": "Bakshi-Mayank"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Effros",
                "given_name": "Michelle",
                "orcid": "0000-0003-3757-0675",
                "clpid": "Effros-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Effros",
                "given_name": "Michelle",
                "orcid": "0000-0003-3757-0675",
                "clpid": "Effros-M"
            },
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "orcid": "0000-0001-8474-0812",
                "clpid": "Bruck-J"
            },
            {
                "family_name": "Ho",
                "given_name": "Tracey C.",
                "clpid": "Ho-Tracey"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The fields of Network Coding and Distributed Compression have focused primarily on finding the capacity for families of problems defined by either a broad class of networks topologies (e.g., directed, acyclic networks) under a narrow class of demands (e.g., multicast), or a specific network topology (e.g. three-node networks) under different types of demands (e.g. Slepian-Wolf, Ahlswede-K\u00f6rner). Given the difficulty of the general problem, it is not surprising that the collection of networks that have been fully solved to date is still very small. This work investigates several new approaches to bounding the achievable rate region for general network source coding problems - reducing a network to an equivalent network or collection of networks, investigating the effect of feedback on achievable rates, and characterizing the role of side information.</p>\r\n\r\n<p>We describe two approaches aimed at simplifying the capacity calculations in a large network. First, we prove the optimality of separation between network coding  and channel coding for networks of point-to-point channels with a Byzantine adversary. Next, we give a strategy for calculating the capacity of an error-free network by decomposing that network into smaller networks. We show that this strategy is optimal for a large class of networks and give a bound for other cases.</p>\r\n\r\n<p>To date, the role of feedback in network source coding has received very little attention. We present several examples of networks that demonstrate that feedback can increases the set of achievable rates in both lossy and lossless network source coding settings. We derive general upper and lower bounds on the rate regions for networks with limited feedback that demonstrate a fundamental tradeoff between the forward rate and the feedback rate. 
For zero error source coding with limited feedback and decoder side information, we derive the exact tradeoff between the forward rate and the feedback rate for several classes of sources. A surprising result is that even zero rate feedback can reduce the optimal forward rate by an arbitrary factor.</p>\r\n\r\n<p>Side information can be used to reduce the rates required for reliable information. We precisely characterize the exact achievable region for multicast networks with side information at the sinks and find upper and lower bounds on the achievable rate region for other demand types.</p>",
        "doi": "10.7907/GWDW-5H78",
        "publication_date": "2012",
        "thesis_type": "phd",
        "thesis_year": "2012"
    },
    {
        "id": "thesis:7121",
        "collection": "thesis",
        "collection_id": "7121",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:06012012-134536732",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 1102717,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/7121/1/thesis.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Scheduling for Heavy-Tailed and Light-Tailed Workloads in Queueing Systems",
        "author": [
            {
                "family_name": "Nair",
                "given_name": "Jayakrishnan U.",
                "clpid": "Nair-Jayakrishnan-U"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Hassibi",
                "given_name": "Babak",
                "orcid": "0000-0002-1375-5838",
                "clpid": "Hassibi-B"
            },
            {
                "family_name": "Ho",
                "given_name": "Tracey C.",
                "clpid": "Ho-Tracey"
            },
            {
                "family_name": "Wierman",
                "given_name": "Adam C.",
                "orcid": "0000-0002-5923-0199",
                "clpid": "Wierman-A-C"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>In much of classical queueing theory, workloads are assumed to be light-tailed, with job sizes being described using exponential or phase type distributions. However, over the past two decades, studies have shown that several real-world workloads exhibit heavy-tailed characteristics. As a result, there has been a strong interest in studying queues with heavy-tailed workloads. So at this stage, there is a large body of literature on queues with light-tailed workloads, and a large body of literature on queues with heavy-tailed workloads. However, heavy-tailed workloads and light-tailed workloads differ considerably in their behavior, and these two types of workloads are rarely studied jointly.</p>\r\n\r\n<p>In this thesis, we design scheduling policies for queueing systems, considering both heavy-tailed as well as light-tailed workloads. The motivation for this line of work is twofold. First, since real world workloads can be heavy-tailed or light-tailed, it is desirable to design schedulers that are robust in their performance to distributional assumptions on the workload. Second, there might be scenarios where a heavy-tailed and a light-tailed workload interact in a queueing system. In such cases, it is desirable to design schedulers that guarantee fairness in resource allocation for both workload types.</p>\r\n\r\n<p>In this thesis, we study three models involving the design of scheduling disciplines for both heavy-tailed as well as light-tailed workloads. In Chapters 3 and 4, we design schedulers that guarantee robust performance across heavy-tailed and light-tailed workloads. In Chapter 5, we consider a setting in which a heavy-tailed and a light-tailed workload complete for service. In this setting, we design scheduling policies that guarantee good response time tail performance for both workloads, while also maintaining throughput optimality.</p>",
        "doi": "10.7907/AAXJ-EX10",
        "publication_date": "2012",
        "thesis_type": "phd",
        "thesis_year": "2012"
    },
    {
        "id": "thesis:6481",
        "collection": "thesis",
        "collection_id": "6481",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05312011-123940546",
        "primary_object_url": {
            "basename": "jwhite.phd.pdf",
            "content": "final",
            "filesize": 1446611,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/6481/1/jwhite.phd.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Applying Formal Methods to Distributed Algorithms Using Local-Global Relations  ",
        "author": [
            {
                "family_name": "White",
                "given_name": "Jerome S.",
                "clpid": "White-Jerome-S"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Holzmann",
                "given_name": "Gerard J.",
                "clpid": "Holzmann-G-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis deals with the design and analysis of distributed systems in which homogeneous, autonomous agents collaborate to achieve a common goal. The class of problems studied includes consensus algorithms in which all agents eventually come to an agreement about a specific action. The thesis proposes a framework, called local-global, for analyzing these systems. A local interaction is an interaction among subsets of agents, while a global interaction is one among all agents in the system. Global interactions, in practice, are rare, yet they are the basis by which correctness of a system is measured. For example, if the problem is to compute the average of a measurement made separately by each agent, and all the agents in the system could exchange values in a single action, then the solution is straightforward: each agent gets the values of all others and computes the average independently. However, if the system consists of a large number of agents with unreliable communication, this scenario is highly unlikely. Thus, the design challenge is to ensure that sequences of local interactions lead, or converge, to the same state as a global interaction.</p>\r\n\r\n<p>The local-global framework addresses this challenge by describing each local interaction as if were a global one, encompassing all agents within the system. This thesis outlines the concept in detail, using it to design algorithms, prove their correctness, and ultimately develop executable implementations that are reliable. To this end, the tools of formal methods are employed: algorithms are modeled, and mechanically checked, within the PVS theorem prover; programs are also verified using the Spin model checker; and interface specification languages are used to ensure local-global properties are still maintained within Java and C# implementations. 
The thesis presents example applications of the framework and discusses a class of problems to which the framework can be applied.</p>",
        "doi": "10.7907/8FRW-ZF17",
        "publication_date": "2011",
        "thesis_type": "phd",
        "thesis_year": "2011"
    },
    {
        "id": "thesis:6391",
        "collection": "thesis",
        "collection_id": "6391",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05132011-113642762",
        "primary_object_url": {
            "basename": "Thesis_Caltech.pdf",
            "content": "final",
            "filesize": 3285748,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/6391/1/Thesis_Caltech.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Large-Scale Complex Systems: From Antenna Circuits to Power Grids",
        "author": [
            {
                "family_name": "Lavaei",
                "given_name": "Javad",
                "clpid": "Lavaeiyanesi-Javad"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This dissertation is motivated by the lack of scalable methods for the analysis and synthesis of different large-scale complex systems appearing in electrical and computer engineering. The systems of interest in this work are power networks, analog circuits, antenna systems, communication networks and distributed control systems. By combining theories  from control and optimization, the high-level objective is to develop new design tools and algorithms that explicitly exploit the physical properties of these practical systems (e.g., passivity of electrical elements or sparsity of network topology). To this end, the aforementioned systems are categorized intro three classes of systems, and then studied in Parts I, II, and III of this dissertation, as explained below:</p>\r\n\r\n<p>Power networks: In Part I of this work, the operation planning of power networks using efficient algorithms is studied. The primary focus is on the optimal power flow (OPF) problem, which has been studied by the operations research and power communities in the past 50 years with little success. In this part, it is shown that  there exists an efficient method to solve a practical OPF problem along with many other energy-related optimization problems such as dynamic OPF or security-constrained OPF. The main reason for the successful convexification of these optimization problems is also identified to be the  physical properties of a power circuit, especially the passivity of transmission lines.</p>\r\n\r\n<p>Circuits and systems: Motivated by different applications in power networks, electromagnetics and optics, Part II of this work studies the fundamental limits associated with the synthesis of a particular type of linear circuit. It is shown that the optimal design of the parameters of this type of circuit can be performed in polynomial time if the circuit is passive and  there are sufficient number of controllable (unknown) parameters. 
This result introduces a trade-off between the design simplicity and the implementation complexity for an important class of linear circuits. As an application of this methodology, the design of smart antennas is also studied;  the goal is to devise an intelligent wireless communication device in order to avoid co-channel interference, power consumption in undesired directions and security issues. Since the existing smart antennas are either hard to program or hard to implement, a new type of smart antenna is synthesized by utilizing tools from algebraic geometry, control, communications, and circuits, which is both easy to program and easy to implement.</p>\r\n \r\n<p>Distributed computation: The first problem tackled in Part III of this work is a very simple type of distributed computation, referred to as quantized consensus, which aims to compute the average of a set of numbers using a distributed algorithm subject to a quantization error.  It is shown that quantized consensus is reached by means of a recently proposed gossip algorithm, and the convergence time of the algorithm is also derived. The second problem studied in Part III is a more advanced type of distributed computation, which is  the distributed resource allocation problem for the Internet. The existing distributed resource allocation algorithms aim to maximize the utility of the network only at the equilibrium point and ignore the transient behavior of the network. To address this issue, it is shown that optimal control theory provides powerful tools for designing distributed resource allocation algorithms with a guaranteed real-time performance.</p>\r\n\r\n<p>The results of this work can all be integrated to address real-world interdisciplinary problems, such as the design of the next generation of the electrical power grid, named the Smart Grid.</p>\r\n\r\n",
        "doi": "10.7907/CM46-5R54",
        "publication_date": "2011",
        "thesis_type": "phd",
        "thesis_year": "2011"
    },
    {
        "id": "thesis:6418",
        "collection": "thesis",
        "collection_id": "6418",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05232011-013046516",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 1636443,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/6418/1/thesis.pdf",
            "version": "v6.0.0"
        },
        "type": "thesis",
        "title": "Systematic Design and Formal Verification of Multi-Agent Systems  ",
        "author": [
            {
                "family_name": "Pilotto",
                "given_name": "Concetta",
                "clpid": "Pilotto-Concetta"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Ledyard",
                "given_name": "John O.",
                "clpid": "Ledyard-J-O"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis presents methodologies for verifying the correctness of multi-agent systems operating in hostile environments. Verification of these systems is challenging because of their inherent concurrency and unreliable communication medium. The problem is exacerbated if the model representing the multi-agent system includes infinite or uncountable data types.</p>\r\n\r\n<p>We first consider message-passing multi-agent systems  operating over an unreliable communication medium. We assume that messages in transit may be lost, delayed or received out-of-order. We present conditions on the system that reduce the design and verification of a message-passing system to the design and verification of the corresponding shared-state system operating in a friendly environment. Our conditions can be applied both to discrete and continuous agent trajectories.</p>\r\n\r\n<p>We apply our results to verify a general class of multi-agent system whose goal is solving a system of linear equations. We discuss this class in detail and show that mobile robot linear pattern-formation schemes are instances of this class. In these protocols, the goal of the team of robots is to reach a given pattern formation.</p>\r\n\r\n<p>We present a framework that allows verification of message-passing systems operating over an unreliable communication medium. This framework is implemented as a library of PVS theorem prover meta-theories and is built on top of the timed automata framework. We discuss the applicability of this tool. As an example, we automatically check correctness of the mobile robot linear pattern formation protocols.</p>\r\n\r\n<p>We conclude with an analysis of the verification of multi-agent systems operating in hostile environments. Under these more general assumptions, we derive conditions on the agents' protocols and properties of the environment that ensure bounded steady-state system error. 
We apply these results to message-passing multi-agent systems that allow for lost, delayed, received out-of-order or forged messages, and to multi-agent systems whose goal is tracking time-varying quantities. We show that pattern formation schemes are robust to leaders dynamics, i.e., in these schemes, followers eventually form the pattern defined by the new positions of the leaders.</p>",
        "doi": "10.7907/SCQF-VP66",
        "publication_date": "2011",
        "thesis_type": "phd",
        "thesis_year": "2011"
    },
    {
        "id": "thesis:5864",
        "collection": "thesis",
        "collection_id": "5864",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:05272010-153304667",
        "primary_object_url": {
            "basename": "main.pdf",
            "content": "final",
            "filesize": 3350152,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/5864/1/main.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Formal Methods for Design and Verification of Embedded Control Systems: Application to an Autonomous Vehicle",
        "author": [
            {
                "family_name": "Wongpiromsarn",
                "given_name": "Tichakorn",
                "clpid": "Wongpiromsarn-Tichakorn"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Holzmann",
                "given_name": "Gerard J.",
                "clpid": "Holzmann-G-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The design of reliable embedded control systems inherits the difficulties involved in designing both control systems and distributed (concurrent) computing systems. Design bugs in these systems may arise from the unforeseen interactions among the computing, communication and control subsystems. Motivated by the difficulties of finding this type of design bugs, this thesis develops mathematical frameworks, based on formal methods, to facilitate the design and analysis of such embedded systems. An expressive specification language of linear temporal logic (LTL) is used to specify the desired system properties. The practicality of the proposed frameworks is demonstrated through autonomous vehicle case studies and autonomous urban driving problems.</p>\r\n\r\n<p>Our approach incorporates methodology from computer science and control, including model checking, theorem proving, synthesis of digital designs, reachability analysis, Lyapunov-type methods and receding horizon control. This thesis consists of two complementary parts, namely, verification and design. First, we introduce Periodically Controlled Hybrid Automata (PCHA), a subclass of hybrid automata that abstractly captures a common design pattern in embedded control systems. New sufficient conditions that exploit the structure of PCHAs in order to simplify their invariant verification are presented.</p>\r\n\r\n<p>Although the aforementioned technique simplifies an invariant verification of PCHAs, finding a proper invariant remains a challenging problem. To complement the verification efforts, in the second part of the thesis, we present a methodology for automatic synthesis of embedded control software that provides a formal guarantee of system correctness, with respect to its desired properties expressed in linear temporal logic. 
The correctness of the system is guaranteed even in the presence of an adversary (typically arising from changes in the environments), disturbances and modeling errors. A receding horizon framework is proposed to alleviate the associated computational complexity of LTL synthesis. The effectiveness of this framework is demonstrated through the autonomous urban driving problems.</p>\r\n",
        "doi": "10.7907/XZ3X-7V51",
        "publication_date": "2010",
        "thesis_type": "phd",
        "thesis_year": "2010"
    },
    {
        "id": "thesis:2178",
        "collection": "thesis",
        "collection_id": "2178",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05272009-141742",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 1176527,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2178/1/thesis.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Credit Risk and Nonlinear Filtering: Computational Aspects and Empirical Evidence",
        "author": [
            {
                "family_name": "Capponi",
                "given_name": "Agostino",
                "orcid": "0000-0001-9735-7935",
                "clpid": "Capponi-Agostino"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Cvitani\u0107",
                "given_name": "Jak\u0161a",
                "orcid": "0000-0001-6651-3552",
                "clpid": "Cvitani\u0107-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Cvitani\u0107",
                "given_name": "Jak\u0161a",
                "orcid": "0000-0001-6651-3552",
                "clpid": "Cvitani\u0107-J"
            },
            {
                "family_name": "Ledyard",
                "given_name": "John O.",
                "clpid": "Ledyard-J-O"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis proposes a novel credit risk model which deals with incomplete information on the firm's asset value. Such incompleteness is due to reporting bias deliberately introduced by insider managers and executives of the firm and unobserved by outsiders.</p>\r\n\r\n<p>The pricing of corporate securities and the evaluation of default measures in our credit risk framework requires the solution of a computationally unfeasible nonlinear filtering problem. The model introduces computational issues arising from the fact that the optimal probability density on the firm's asset value is the solution of a nonlinear filtering problem, which is computationally unfeasible. We propose a polynomial time-sequential Bayesian approximation scheme which employs convex optimization methods to iteratively approximate the optimal conditional density of the state on the basis of received market observations. We also provide an upper bound on the total variation distance between the actual filter density and our approximate estimator. We use the filter estimator to derive analytical expressions for the price of corporate securities (bond and equity) as well as for default measures (default probabilities, recovery rates, and credit spreads) under our credit risk framework. We propose a novel statistical calibration method to recover the parameters of our credit risk model from market price of equity and balance sheet indicators. We apply the method to the Parmalat case, a real case of misreporting and show that the model is able to successfully isolate the misreporting component. 
We also provide empirical evidence that the term structure of credit default swaps quotes exhibits special patterns in cases of misreporting by using three well known cases of accounting irregularities in US history: Tyco, Enron, and WorldCom.</p>\r\n\r\n<p>We conclude the thesis with a study of bilateral credit risk, which accommodates the case in which both parties of the financial contract may default on their payments. We introduce the general arbitrage-free valuation framework for counterparty risk adjustments in presence of bilateral default risk. We illustrate the symmetry in the valuation and show that the adjustment involves a long position in a put option plus a short position in a call option, both with zero strike and written on the residual net value of the contract at the relevant default times. We allow for correlation between the default times of each party of the contract and the underlying portfolio risk factors. We introduce stochastic intensity models and a trivariate copula function on the default times exponential variables to model default dependence.  We provide evidence that both default correlation and credit spread volatilities have a relevant and structured impact on the adjustment. We also study a case involving British Airways, Lehman Brothers, and Royal Dutch Shell, illustrating the bilateral adjustments in concrete crisis situations.</p>\r\n",
        "doi": "10.7907/7XV3-9Q45",
        "publication_date": "2009",
        "thesis_type": "phd",
        "thesis_year": "2009"
    },
    {
        "id": "thesis:2271",
        "collection": "thesis",
        "collection_id": "2271",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05292009-111937",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 5631440,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2271/11/thesis.pdf",
            "version": "v5.0.0"
        },
        "type": "thesis",
        "title": "Safety Verification and Failure Analysis of Goal-Based Hybrid Control Systems",
        "author": [
            {
                "family_name": "Braman",
                "given_name": "Julia Marie Badger",
                "clpid": "Braman-Julia-Marie-Badger"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Beck",
                "given_name": "James L.",
                "clpid": "Beck-J-L"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "The success of complex autonomous robotic systems depends on the quality and correctness of their fault tolerant control systems. A goal-based approach to fault tolerant control, which is modeled after a software architecture developed at the Jet Propulsion Laboratory, uses networks of goals to control autonomous systems. The complex conditional branching of the control program makes safety verification necessary. Three novel verification methods are presented. In the first, goal networks are converted to linear hybrid automata via a bisimulation. The converted automata can then be verified against an unsafe set of conditions using an existing symbolic model checker such as PHAVer. Due to the complexity issues that result from this method, a design for verification software tool, the SBT Checker, was developed to create goal networks that have state-based transitions.  Goal networks that have state-based transitions can be converted to hybrid automata whose locations' invariants contain all information necessary to determine the transitions between the locations.  An original verification software called InVeriant can then be used to find unsafe locations of linear hybrid systems based on the locations\u2019 invariants and rate conditions, which are compared to the unsafe set of conditions. The reachability of the unsafe locations depends only on the reachability of the states of the state variables constrained in the locations'  invariants from those state variables' initial conditions. In cases where this reachability condition is not trivially true, the software efficiently searches for a path to the unsafe locations using properties of the system. The third verification method is the calculation of the failure probability of the verified hybrid control system due to state estimation uncertainty, which is extremely important in autonomous systems that rely heavily on the state estimates made from sensor measurements. 
Finally, two significant example goal network control programs, one for a complex rover and another for a proposed aerobot mission to Titan, a moon of Saturn, are verified using the three techniques presented.",
        "doi": "10.7907/3H42-BF56",
        "publication_date": "2009",
        "thesis_type": "phd",
        "thesis_year": "2009"
    },
    {
        "id": "thesis:3252",
        "collection": "thesis",
        "collection_id": "3252",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-08272008-121822",
        "primary_object_url": {
            "basename": "Ling_Shi_thesis.pdf",
            "content": "final",
            "filesize": 1526490,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/3252/1/Ling_Shi_thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Resource Optimization for Networked Estimator with Guaranteed Estimation Quality",
        "author": [
            {
                "family_name": "Shi",
                "given_name": "Ling",
                "clpid": "Shi-Ling"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Johansson",
                "given_name": "Karl Henrik",
                "clpid": "Johansson-K-H"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Advances in fabrication, modern sensor and communication technologies, and computer architecture have enabled a variety of new networked sensing and control applications. However, many difficulties are inherent with these systems, for example, the constrained communication and computation capabilities, and limited energy resources, which are frequently seen in a wireless sensor network. As a consequence, the networks typically induce many new issues such as limited bandwidth, packet loss, and delay. Estimation and control over such networks thus require new design paradigms beyond traditional sampled-data control, as the aforementioned constraints undoubtedly affect system performance or even stability. In this thesis work, I consider the problem of state estimation over networks. As communication, computation, and energy are scarce resources in such networks, I focus on optimizing the use of them. When the state estimation is carried out over a sensor network, I consider the problem of minimizing the sensor energy usage and maximizing the network lifetime. When the state estimation is carried out over a packet-delaying network, I consider the problem of minimizing the buffer length at the remote state estimator. In each scenario, a certain desired level of estimation quality is guaranteed.\r\n",
        "doi": "10.7907/DTCJ-BN07",
        "publication_date": "2009",
        "thesis_type": "phd",
        "thesis_year": "2009"
    },
    {
        "id": "thesis:5267",
        "collection": "thesis",
        "collection_id": "5267",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-06042009-233839",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 694496,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/5267/1/thesis.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Limited Randomness in Games, and Computational Perspectives in Revealed Preference",
        "author": [
            {
                "family_name": "Kalyanaraman",
                "given_name": "Shankar",
                "clpid": "Kalyanaraman-Shankar"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Umans",
                "given_name": "Christopher M.",
                "clpid": "Umans-C-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Umans",
                "given_name": "Christopher M.",
                "clpid": "Umans-C-M"
            },
            {
                "family_name": "Echenique",
                "given_name": "Federico",
                "clpid": "Echenique-F"
            },
            {
                "family_name": "Schulman",
                "given_name": "Leonard J.",
                "clpid": "Schulman-L-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>In this dissertation, we explore two particular themes in connection with the study of games and general economic interactions: bounded resources and rationality. The rapidly maturing field of algorithmic game theory concerns itself with looking at the computational limits and effects when agents in such an interaction make choices in their \"self-interest.\" The solution concepts that have been studied in this regard, and which we shall focus on in this dissertation, assume that agents are capable of randomizing over their set of choices. We posit that agents are randomness-limited in addition to being computationally bounded, and determine how this affects their equilibrium strategies in different scenarios.</p>\r\n\r\n<p>In particular, we study three interpretations of what it means for agents to be randomness-limited, and offer results on finding (approximately) optimal strategies that are randomness-efficient:<br />\r\n\r\n1. One-shot games with access to the support of the optimal strategies: for this case, our results are obtained by sampling strategies from the optimal support by performing a random walk on an expander graph.<br />\r\n2. Multiple-round games where agents have no a priori knowledge of their payoffs: we significantly improve the randomness-efficiency of known online algorithms for such games by utilizing distributions based on almost pairwise independent random variables.<br />\r\n3. Low-rank games: for games in which agents' payoff matrices have low rank, we devise \"fixed-parameter\" algorithms that compute strategies yielding approximately optimal payoffs for agents, and are polynomial-time in the size of the input and the rank of the payoff tensors.</p>\r\n\r\n<p>In regard to rationality, we look at some computational questions in a related line of work known as revealed preference theory, with the purpose of understanding the computational limits of inferring agents' payoffs and motives when they reveal their preferences by way of how they act. We investigate two problem settings as applications of this theory and obtain results about their intractability:<br />\r\n\r\n1. Rationalizability of matchings: we consider the problem of rationalizing a given collection of bipartite matchings and show that it is NP-hard to determine agent preferences for which matchings would be stable. Further, we show, assuming P \u2260 NP, that this problem does not admit polynomial-time approximation schemes under two suitably defined notions of optimization.<br />\r\n2. Rationalizability of network formation games: in the case of network formation games, we take up a particular model of connections known as the Jackson-Wolinsky model in which nodes in a graph have valuations for each other and take their valuations into consideration when they choose to build edges. We show that under a notion of stability, known as pairwise stability, the problem of finding valuations that rationalize a collection of networks as pairwise stable is NP-hard. More significantly, we show that this problem is hard even to approximate to within a factor 1/2 and that this is tight.</p>\r\n\r\n<p>Our results on hardness and inapproximability of these problems use well-known techniques from complexity theory, and particularly in the case of the inapproximability of rationalizing network formation games, PCPs for the problem of satisfying the optimal number of linear equations in positive integers, building on recent results of Guruswami and Raghavendra.</p>\r\n",
        "doi": "10.7907/KH85-HJ73",
        "publication_date": "2009",
        "thesis_type": "phd",
        "thesis_year": "2009"
    },
    {
        "id": "thesis:5260",
        "collection": "thesis",
        "collection_id": "5260",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-11092007-180524",
        "type": "thesis",
        "title": "Soft-Error Tolerant Quasi Delay-insensitive Circuits",
        "author": [
            {
                "family_name": "Jang",
                "given_name": "Wonjin",
                "clpid": "Jang-Wonjin"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "orcid": "0000-0001-8474-0812",
                "clpid": "Bruck-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Ho",
                "given_name": "Tracey C.",
                "clpid": "Ho-Tracey"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>A hard error is an error that damages a circuit irrevocably; a soft error flips logic states without causing any physical damage to the circuit, resulting in transient, inconsistent corruption of data.</p>\r\n\r\n<p>The soft-error tolerance of logic circuits has recently been receiving more attention, since the soft-error rate of advanced CMOS devices is higher than before. In response to this concern, we propose a new method for making asynchronous circuits tolerant to soft errors. Since it relies on a property unique to asynchronous circuits, the method differs from what is done in synchronous circuits with triple modular redundancy. Asynchronous circuits have been attractive to the designers of reliable systems because of their clock-less design, which makes them more robust to variations in the computation time of modules. The quasi delay-insensitive (QDI) design style is one of the most robust asynchronous design styles for general computation; it makes one minimal assumption on delays in gates and wires. QDI circuits are easy to verify, simple, and modular, because the correct operation of a QDI circuit is independent of delays in gates and wires.</p>\r\n\r\n<p>Here, we overview how to design a QDI circuit and what happens if a soft error occurs in a QDI circuit. Then the crucial components of the method are shown: (1) a special kind of duplication for random logic (when each bit has to be corrected individually), (2) special protection circuitry for the arbiter and synchronizer (as needed, for example, for external interrupts), (3) reconfigurable circuits using a special configuration unit, and (4) error correction for memory arrays and other structures in which the data bits can be self-corrected. The solution for protecting random logic is compared with alternatives that use other types of error-correcting codes (e.g., a parity code) in a QDI circuit. It turns out that duplication generates efficient circuits more commonly than the other possible constructions. Finally, the design of a soft-error tolerant asynchronous microprocessor is detailed, and testing results of the soft-error tolerance of the microprocessor are shown.</p>\r\n\r\n",
        "doi": "10.7907/ZVFF-WE07",
        "publication_date": "2008",
        "thesis_type": "phd",
        "thesis_year": "2008"
    },
    {
        "id": "thesis:2118",
        "collection": "thesis",
        "collection_id": "2118",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05262008-234258",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 917396,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2118/1/thesis.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Throughput Optimization of Quasi Delay Insensitive Circuits via Slack Matching",
        "author": [
            {
                "family_name": "Prakash",
                "given_name": "Piyush",
                "clpid": "Prakash-Piyush"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "DeHon",
                "given_name": "Andre",
                "clpid": "DeHon-A"
            },
            {
                "family_name": "Umans",
                "given_name": "Christopher M.",
                "clpid": "Umans-C-M"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Though the logical correctness of an asynchronous circuit is independent of implementation delays, the cycle time of an asynchronous circuit is of great importance to the designer. Oftentimes, the insertion of buffers into such circuits reduces the cycle time without affecting the logical correctness of the circuit. This optimization is called slack matching. In this thesis the slack matching problem is formulated. I show that this problem is NP-complete via a reduction from subset sum. I describe two methods for expressing slack matching as a mixed integer linear program (MILP). The first method is applicable to any QDI circuit, while the second produces a smaller MILP for circuits comprised solely of half buffers. These two formulations of slack matching were applied to the design of a fetch loop in an asynchronous micro-controller. Slack matching reduced the cycle time of the circuit by a factor of 3. For a circuit composed of 14 byte-wide processes and an 8k instruction memory, 30s were required to generate the first MILP. It was solved in 2s. When the memory is modeled as a pipeline of half buffers, the second MILP could be formulated in 0.1s and solved in 0.6s. This MILP had half the number of integer variables as the first formulation.",
        "doi": "10.7907/9HMY-RR92",
        "publication_date": "2008",
        "thesis_type": "phd",
        "thesis_year": "2008"
    },
    {
        "id": "thesis:5073",
        "collection": "thesis",
        "collection_id": "5073",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-12192007-153619",
        "primary_object_url": {
            "basename": "MSE_thesis.pdf",
            "content": "final",
            "filesize": 1506285,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/5073/2/MSE_thesis.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Managing Information in Networked and Multi-Agent Control Systems",
        "author": [
            {
                "family_name": "Epstein",
                "given_name": "Michael Steven",
                "clpid": "Epstein-Michael-Steven"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "clpid": "Burdick-J-W"
            },
            {
                "family_name": "MacMynowski",
                "given_name": "Douglas G.",
                "clpid": "MacMynowski-D-G"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Traditional feedback control systems give little attention to issues associated with the flow of information through the feedback loop. Because such systems are typically implemented with dedicated communication links that deliver nearly precise, reliable, and non-delayed information, researchers have not needed to concern themselves with issues related to quantized, delayed, and even lost information. With the advent of newer technologies and application areas that pass information through unreliable networks, these issues cannot be ignored. In recent years the field of Networked Control Systems (NCS) has emerged to describe situations where these issues are present. Research in this field focuses on quantifying performance degradation in the presence of network effects and proposing algorithms for managing the information flow to counter those negative effects. In this thesis I propose and analyze algorithms for managing information flow in several NCS scenarios: state estimation with lossy measurement signals, using input buffers to reduce the frequency of communication with a remote plant, and performing state estimation when control signals are transmitted to a remote plant via a lossy communication link with no acknowledgement signal at the estimator. Multi-agent coordinated control systems serve as a prime example of an emerging area of feedback control systems that utilize feedback loops with information passed through possibly imperfect communication networks. In these systems, agents use a communication network to exchange information in order to achieve a desired global objective; hence managing the information flow has a direct impact on the performance of the system. I also explore this area by focusing on the problem of multi-agent average consensus, proposing an algorithm based on a hierarchical decomposition of the communication topology to speed up the time to convergence. For all these topics I focus on designing intuitive algorithms that intelligently manage the information flow, and I provide analysis and simulations to illustrate their effectiveness.\r\n",
        "doi": "10.7907/84NT-9N46",
        "publication_date": "2008",
        "thesis_type": "phd",
        "thesis_year": "2008"
    },
    {
        "id": "thesis:2252",
        "collection": "thesis",
        "collection_id": "2252",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05292007-223200",
        "primary_object_url": {
            "basename": "new2.pdf",
            "content": "final",
            "filesize": 3288018,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2252/1/new2.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Microscopic Behavior of Internet Congestion Control",
        "author": [
            {
                "family_name": "Wei",
                "given_name": "Xiaoliang (David)",
                "clpid": "Wei-Xiaoliang-David"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Cao",
                "given_name": "Pei",
                "clpid": "Cao-Pei"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>For years, the Internet research community has focused on the macroscopic behavior of the Transmission Control Protocol (TCP) and overlooked its microscopic behavior. This thesis studies the microscopic behavior of TCP and its effects on performance. We go into the packet-level details of TCP control algorithms and explore their behavior on short time scales within one round-trip time. We find that burstiness effects on such small time scales have significant impacts on both delay-based and loss-based TCP.</p>\r\n\r\n<p>For delay-based TCP algorithms, the micro-burst leads to much faster queue convergence than traditional macroscopic models predict. With such fast queue convergence, some delay-based congestion control algorithms are much more stable in reality than in the analytical results from existing macroscopic models. This observation allows us to design more responsive yet stable algorithms that would otherwise be impossible.</p>\r\n\r\n<p>For loss-based TCP algorithms, the sub-RTT burstiness in the TCP packet transmission process has significant impacts on the loss synchronization rate, an important parameter that affects the efficiency, fairness, and convergence of loss-based TCP congestion control algorithms.</p>\r\n\r\n<p>Our findings explain several long-standing controversial problems and have inspired new algorithms that achieve better TCP performance.</p>",
        "doi": "10.7907/W5E3-9N04",
        "publication_date": "2007",
        "thesis_type": "phd",
        "thesis_year": "2007"
    },
    {
        "id": "thesis:1951",
        "collection": "thesis",
        "collection_id": "1951",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05222007-101946",
        "primary_object_url": {
            "basename": "main.pdf",
            "content": "final",
            "filesize": 5076548,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/1951/1/main.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "A Treatise on Econometric Forecasting",
        "author": [
            {
                "family_name": "Martinez Estrada",
                "given_name": "Alfredo",
                "clpid": "Martinez-Estrada-Alfredo"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "We investigate the effects of model misspecification and stochastic dynamics in the problem of forecasting. In economics and many fields of engineering, many researchers are guilty of the dangerous practice of treating their mathematical models as the true data generating mechanisms responsible for the observed phenomena and downplaying, or omitting altogether, the important step of model verification. In recent years, econometricians have acknowledged the need to account for model misspecification in the problems of estimation and forecasting. In particular, a large body of work has emerged to address the properties of estimators under model misspecification, along with a plethora of misspecification testing methodologies. In this work, we investigate the combined effects of model misspecification and various types of stochastic dynamics on forecasts based on linear regression models. The data generating process (DGP) is assumed unknown to the forecaster except for the nature of process dependencies, i.e., independent identically distributed, covariance stationary, or nonstationary. Estimation is carried out by means of ordinary least squares, and forecasts are evaluated with the mean squared forecast error (MSFE), or mean square error of prediction. We investigate the sample size dependence of the MSFE. For this purpose, we develop an algorithm to approximate the MSFE by an expression depending only on the sample size n and moments of the processes. The approximation is constructed by Taylor series expansions of the squared forecast error which do not require knowledge of the functional form of the DGP. The approximation can be used to determine the existence of optimal observation windows which result in the minimum MSFE. We assess the accuracy of the approximating algorithm with Monte Carlo experiments.\r\n",
        "doi": "10.7907/WXN5-9A47",
        "publication_date": "2007",
        "thesis_type": "phd",
        "thesis_year": "2007"
    },
    {
        "id": "thesis:2166",
        "collection": "thesis",
        "collection_id": "2166",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05272007-214755",
        "primary_object_url": {
            "basename": "XinThesis.pdf",
            "content": "final",
            "filesize": 670893,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2166/1/XinThesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Robustness, Complexity, Validation and Risk",
        "author": [
            {
                "family_name": "Liu",
                "given_name": "Xin",
                "clpid": "Liu-Xin"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Yi",
                "given_name": "Tau-Mu",
                "clpid": "Yi-Tau-Mu"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>A robust design process starts with modeling of the physical system and the uncertainty it faces. Robust design tools are then applied to achieve specified performance criteria. Verification of system properties is crucial, as improvements to the modeling and design practices can be made based on the results of such verification. In this thesis, we discuss three aspects of this closed-loop process.</p>\r\n\r\n<p>The first and most important aspect is the possibility of feedback from verification to system modeling and design. When verification is hard, what does it tell us about our system? When the system is robust, is it easy to verify that this is so? We study the relation between the robustness of a system property, posed as a decision problem, and the proof complexity of verifying that property. We examine this relation in two classes of problems, percolation lattices and linear programming problems, and show that complexity is upper-bounded by the reciprocal of robustness, i.e., fragility.</p>\r\n\r\n<p>The second aspect we study is model validation. More precisely, when given a candidate model and experimental data, how do we rigorously refute the model or gain information about the consistent parameter set? Different methods for model invalidation and parameter inference are demonstrated on the G-protein signaling system in yeast to show the advantages of, and hurdles in, their applications.</p>\r\n\r\n<p>While the quantification of robustness requirements has been well studied in engineering, it is just emerging in the field of finance. Robustness specification in finance is closely related to the availability of proper risk measures. We study the estimation of a coherent risk measure, Expected Shortfall (ES). A consistent and asymptotically normal estimator for ES based on empirical likelihood is proposed. Although empirical likelihood based estimators usually involve numerically solving optimization problems that are not necessarily convex, computation of our estimator can be carried out in a sequential manner, avoiding the need to solve non-convex optimization problems.</p>",
        "doi": "10.7907/JZX4-QN41",
        "publication_date": "2007",
        "thesis_type": "phd",
        "thesis_year": "2007"
    },
    {
        "id": "thesis:104",
        "collection": "thesis",
        "collection_id": "104",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01102007-010550",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 990506,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/104/1/thesis.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Distributed Averaging and Efficient File Sharing on Peer-to-Peer Networks",
        "author": [
            {
                "family_name": "Mehyar",
                "given_name": "Mortada",
                "clpid": "Mehyar-Mortada"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "orcid": "0000-0001-6476-3048",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "orcid": "0000-0002-1828-2486",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "orcid": "0000-0002-5785-7481",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Ho",
                "given_name": "Tracey C.",
                "clpid": "Ho-Tracey"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The work presented in this thesis is mainly divided in two parts. In the first part we study the problem of distributed averaging, which has attracted a lot of interest in the research community in recent years. Our work focuses on the issues of implementing distributed averaging algorithms on peer-to-peer networks such as the Internet. We present algorithms that eliminate the need for global coordination or synchronization, as many other algorithms require, and show mathematical analysis of their convergence.</p>\r\n\r\n<p>Discrete-event simulations that verify the theoretical results are presented. We show that the algorithms proposed converge rapidly in practical scenarios. Real-world experiments are also presented to further corroborate these results. We present experiments conducted on the PlanetLab research network. Finally, we present several promising applications of distributed averaging that can be implemented in a wide range of areas of interest.</p>\r\n\r\n<p>The second part of this thesis, also related to peer-to-peer networking, is about modelling and understanding peer-to-peer file sharing. The BitTorrent protocol has become one of the most popular peer-to-peer file sharing systems in recent years. Theoretical understanding of the global behavior of BitTorrent and similar peer-to-peer file sharing systems is, however, not yet very complete. We study a model that requires very simple assumptions yet exhibits a lot of structure. We show that it is possible to consider a wide range of performance criteria within the framework, and that the model captures many of the important issues of peer-to-peer file sharing.</p>\r\n\r\n<p>We believe the results provide fundamental insights to practical peer-to-peer file sharing systems. We show that many optimization criteria can be studied within our framework. Many new directions of research are also opened up.</p>",
        "doi": "10.7907/Q9EV-S167",
        "publication_date": "2007",
        "thesis_type": "phd",
        "thesis_year": "2007"
    },
    {
        "id": "thesis:2011",
        "collection": "thesis",
        "collection_id": "2011",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05242006-170918",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 1812990,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2011/1/thesis.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "Heterogeneous Congestion Control Protocols",
        "author": [
            {
                "family_name": "Tang",
                "given_name": "Ao (Kevin)",
                "clpid": "Tang-Ao-Kevin"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Hassibi",
                "given_name": "Babak",
                "clpid": "Hassibi-B"
            },
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "clpid": "Bruck-J"
            },
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Homogeneity of price is an implicit yet fundamental assumption underlying price based resource allocation theory. In this thesis, we study the effects of relaxing this assumption by examining a concrete engineering system (network with heterogeneous congestion control protocols). The behavior of the system turns out to be very different from the homogeneous case and can potentially be much more complicated. A systematic theory is developed that includes all major properties of equilibrium of the system such as existence, uniqueness, optimality, and stability. In addition to analysis, we also present numerical examples, simulations, and experiments to illustrate the theory and verify its predictions.</p>\r\n\r\n<p>When heterogeneous congestion control protocols that react to different pricing signals share the same network, the resulting equilibrium can no longer be interpreted as a solution to the standard utility maximization problem as the current theory suggests. After introducing a mathematical formulation of network equilibrium for multi-protocol networks, we prove the existence of equilibrium under mild assumptions. For almost all networks, the equilibria are locally unique. They are finite and odd in number. They cannot all be locally stable unless the equilibrium is globally unique. We also derive two conditions for global uniqueness. By identifying an optimization problem associated with every equilibrium, we show that every equilibrium is Pareto efficient and provide an upper bound on efficiency loss due to pricing heterogeneity. Both intra-protocol and inter-protocol fairness are then discussed. On dynamics, various stability results are provided. In particular it is shown that if the degree of pricing heterogeneity is small enough, the network equilibrium is not only unique but also locally stable. Finally, a distributed algorithm is proposed to steer a network to the unique equilibrium that maximizes the aggregate utility, by only updating a linear parameter in the sources' algorithms in a slow timescale.</p>",
        "doi": "10.7907/eh43-pa83",
        "publication_date": "2006",
        "thesis_type": "phd",
        "thesis_year": "2006"
    },
    {
        "id": "thesis:4530",
        "collection": "thesis",
        "collection_id": "4530",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-11122005-082753",
        "primary_object_url": {
            "basename": "JiantaoWang.pdf",
            "content": "final",
            "filesize": 1473428,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4530/1/JiantaoWang.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "A Theoretical Study of Internet Congestion Control: Equilibrium and Dynamics",
        "author": [
            {
                "family_name": "Wang",
                "given_name": "Jiantao",
                "clpid": "Wang-Jianto"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            },
            {
                "family_name": "Hassibi",
                "given_name": "Babak",
                "clpid": "Hassibi-B"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>In the last several years, significant progress has been made in modelling the Internet congestion control using theories from convex optimization and feedback control. In this dissertation, the equilibrium and dynamics of various congestion control schemes are rigorously studied using these mathematical frameworks.</p>\r\n\r\n<p>First, we study the dynamics of TCP/AQM systems. We demonstrate that the dynamics of queue and average window in Reno/RED networks are determined predominantly by the protocol stability, not by AIMD probing nor noise traffic. Our study shows that Reno/RED becomes unstable when delay increases and more strikingly, when link capacity increases. Therefore, TCP Reno is ill suited for the future high-speed network, which has motivated the design of FAST TCP. Using a continuous-time model, we prove that FAST TCP is globally stable without feedback delays and provide a sufficient condition for local stability when feedback delays are present. We also introduce a discrete-time model for FAST TCP that fully captures the effect of self-clocking and derive the local stability condition for general networks with feedback delays.</p>\r\n\r\n<p>Second, the equilibrium properties (i.e., fairness, throughput, and capacity) of TCP/AQM systems are studied using the utility maximization framework. We quantitatively capture the variations in network throughput with changes in link capacity and allocation fairness. We clarify the open conjecture of whether a fairer allocation is always more efficient. The effects of changes in routing are studied using a joint optimization problem over both source rates and their routes. We investigate whether minimal-cost routing with proper link costs can solve this joint optimization problem in a distributed way. We also identify the tradeoff between achievable utility and routing stability.</p>\r\n\r\n<p>At the end, two other related projects are briefly described.</p>",
        "doi": "10.7907/4DQ0-GA49",
        "publication_date": "2006",
        "thesis_type": "phd",
        "thesis_year": "2006"
    },
    {
        "id": "thesis:2402",
        "collection": "thesis",
        "collection_id": "2402",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-06022006-140421",
        "primary_object_url": {
            "basename": "tapus-phd.pdf",
            "content": "final",
            "filesize": 1531668,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2402/1/tapus-phd.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Distributed Speculations: Providing Fault-Tolerance and Improving Performance",
        "author": [
            {
                "family_name": "\u021a\u0103pu\u0219",
                "given_name": "Cristian",
                "clpid": "\u021a\u0103pu\u0219-Cristian"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis introduces a new programming model based on speculative execution and it examines the use of speculations, a form of distributed transactions, for improving the performance, reliability and fault tolerance of distributed systems.  A speculation is defined as a computation that is based on an assumption that is not validated before the computation is started.  If the assumption is later invalidated the computation is aborted and the state of the program is rolled back; if the assumption is validated, the results of the computation are committed. The primary difference between a speculation and a transaction is that a speculation is not isolated---for example, a speculative computation may send and receive messages, and it may modify shared objects.  As a result, processes that share those objects may be absorbed into a speculation.</p>\r\n\r\n<p>The contributions presented in this thesis include:\r\n<ul>\r\n<li>the introduction of a new programming model based on speculations,</li>\r\n<li>the definition of new speculative programming language constructs,</li>\r\n<li>the formal specification of the semantics of various speculative execution models, including message passing and shared objects,</li>\r\n<li>the implementation of speculations in the Linux kernel in a transparent manner, and</li>\r\n<li>the design and implementation of components of a distributed filesystem that supports speculations and guarantees sequential consistency of concurrent accesses to files.</li>\r\n</ul></p>",
        "doi": "10.7907/YZCK-4T29",
        "publication_date": "2006",
        "thesis_type": "phd",
        "thesis_year": "2006"
    },
    {
        "id": "thesis:155",
        "collection": "thesis",
        "collection_id": "155",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01132006-152609",
        "primary_object_url": {
            "basename": "main.pdf",
            "content": "final",
            "filesize": 1062451,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/155/1/main.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Rigorous Analog Verification of Asynchronous Circuits",
        "author": [
            {
                "family_name": "Papadantonakis",
                "given_name": "Karl Spyros",
                "clpid": "Papadantonakis-Karl-Spyros"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "DeHon",
                "given_name": "Andre",
                "clpid": "DeHon-A"
            },
            {
                "family_name": "Winfree",
                "given_name": "Erik",
                "clpid": "Winfree-E"
            },
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "This thesis shows that rigorous verification of some analog implementation of any Quasi-Delay-Insensitive (QDI) asynchronous circuit is possible.  That is, we show that in an accurate analog model, any behavior will adhere to the digital computation specifications under any possible noise and environment timing. Unlike a traditional simulation, we can analyze all of the infinitely many possible analog behaviors, in a time linear in the circuit size. A problem that arises in asynchronous circuit design is that the analog implementations of digital computations do not in general exhibit all properties demanded by the digital model assumed in circuit construction. For example, the digital model is atomic, in a sense we define. By contrast, analog models are non-atomic, and, as a result, we can give examples of real circuits with operational failures. There exist other attributes of analog models which can cause failures, and no complete classification exists. Ultimately there is only one way to solve this problem: we must show that all possible analog behaviors obey the atomic model. We focus on CMOS implementations, and the associated accepted bulk-scale model. Given any canonically-generated implementation of a general computation, we can rigorously verify it. The only exception to this rule is that restoring delay elements must be inserted into some implementations (fortunately, this change has no semantic effect on QDI circuits, by definition). Our theorem guarantees that when any possible analog behavior is properly observed, we obtain a valid, atomic digital execution. Several rigorous verifications have been produced, including one for an asynchronous pipeline circuit with dual-rail data.",
        "doi": "10.7907/4R8F-WF03",
        "publication_date": "2006",
        "thesis_type": "phd",
        "thesis_year": "2006"
    },
    {
        "id": "thesis:2155",
        "collection": "thesis",
        "collection_id": "2155",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05272005-144358",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 856631,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2155/1/thesis.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Optimization-Based Methods for Nonlinear and Hybrid Systems Verification",
        "author": [
            {
                "family_name": "Prajna",
                "given_name": "Stephen",
                "clpid": "Prajna-Stephen"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Rantzer",
                "given_name": "Anders",
                "clpid": "Rantzer-A"
            },
            {
                "family_name": "Marsden",
                "given_name": "Jerrold E.",
                "clpid": "Marsden-J-E"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Complex behaviors that can be exhibited by hybrid systems make the verification of such systems both important and challenging. Due to the infinite number of possibilities taken by the continuous state and the uncertainties in the system, exhaustive simulation is impossible, and also computing the set of reachable states is generally intractable. Nevertheless, the ever-increasing presence of hybrid systems in safety critical applications makes it evident that verification is an issue that has to be addressed.</p>\r\n\r\n<p>In this thesis, we develop a unified methodology for verifying temporal properties of continuous and hybrid systems. Our framework does not require explicit computation of reachable states. Instead, functions of state termed barrier certificates and density functions are used in conjunction with deductive inference to prove properties such as safety, reachability, eventuality, and their combinations. As a consequence, the proposed methods are directly applicable to systems with nonlinearity, uncertainty, and constraints. Moreover, it is possible to treat safety verification of stochastic systems in a similar fashion, by computing an upper-bound on the probability of reaching the unsafe states.</p>\r\n\r\n<p>We formulate verification using barrier certificates and density functions as convex programming problems. For systems with polynomial descriptions, sum of squares optimization can be used to construct polynomial barrier certificates and density functions in a computationally scalable manner. Some examples are presented to illustrate the use of the methods. At the end, the convexity of the problem formulation is also exploited to prove a converse theorem in safety verification using barrier certificates.</p>",
        "doi": "10.7907/S3BJ-4M47",
        "publication_date": "2005",
        "thesis_type": "phd",
        "thesis_year": "2005"
    },
    {
        "id": "thesis:2137",
        "collection": "thesis",
        "collection_id": "2137",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05272004-163315",
        "primary_object_url": {
            "basename": "phd.pdf",
            "content": "final",
            "filesize": 1071633,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2137/1/phd.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Optimized Network Data Storage and Topology Control",
        "author": [
            {
                "family_name": "Jiang",
                "given_name": "Anxiao (Andrew)",
                "orcid": "0000-0002-0120-7930",
                "clpid": "Jiang-Anxiao-Andrew"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "clpid": "Bruck-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "clpid": "Bruck-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Blaum",
                "given_name": "Mario",
                "clpid": "Blaum-M"
            },
            {
                "family_name": "McEliece",
                "given_name": "Robert J.",
                "clpid": "McEliece-R-J"
            },
            {
                "family_name": "Low",
                "given_name": "Steven H.",
                "clpid": "Low-S-H"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis addresses two key challenges for network data-storage systems: optimizing data placement for highly efficient and robust data access, and constructing network topologies that facilitate data transmission scalable to both network sizes and network dynamics. It focuses on two new topics \u2014 data placement using erasure-correcting codes, and topology control for nodes in normed spaces. The first topic generalizes traditional file-assignment problems, and has the distinct feature of interleavingly placing data in networks. The second topic emphasizes the construction of network topologies that achieve excellent global performance in comprehensive measurements, through purely local decisions on connectivity. The results of the thesis deepen the current understanding of these important and intriguing topics, and follow a mathematically rigorous approach.</p>\r\n",
        "doi": "10.7907/91R7-MH71",
        "publication_date": "2004",
        "thesis_type": "phd",
        "thesis_year": "2004"
    },
    {
        "id": "thesis:5393",
        "collection": "thesis",
        "collection_id": "5393",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:11192009-161338958",
        "primary_object_url": {
            "basename": "wong_catherine_grace_2004.pdf",
            "content": "final",
            "filesize": 1069347,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/5393/1/wong_catherine_grace_2004.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "High-Level Synthesis and Rapid Prototyping of Asynchronous VLSI Systems",
        "author": [
            {
                "family_name": "Wong",
                "given_name": "Catherine Grace",
                "clpid": "Wong-Catherine-Grace"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "DeHon",
                "given_name": "Andre",
                "clpid": "DeHon-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis introduces data-driven decomposition (DDD), a new method for the high-level synthesis of asynchronous VLSI systems and the first method to target high-performance asynchronous circuits. Given a sequential description of circuit behavior, DDD produces an equivalent network of communicating processes that can each be directly implemented as fine-grained asynchronous pipeline stages. Control and datapath are integrated within each pipeline stage of the final system.</p>\r\n\r\n<p>We present many aspects of the synthesis of asynchronous VLSI systems, including general circuit templates that DDD uses to estimate low-level performance and energy metrics while optimizing the concurrent system. We also introduce a new circuit model and new techniques for slack matching, a performance optimization that inserts pipelining into a system to modify asynchronous handshake dynamics and increase throughput. The entire method is then applied to a complex control unit from an asynchronous 8051 microcontroller, as an example.</p>\r\n\r\n<p>This thesis also introduces a new architecture for an asynchronous field-programmable gate array (FPGA). The architecture is cluster-based and, unlike most FPGA designs, contains an entirely delay-insensitive interconnect. The basic reconfigurable cells of this FPGA fit the asynchronous pipeline-stage circuit-template used by DDD, and the reconfigurable clusters include circuitry that implements features assumed by an optimization phase of DDD, which reduces the energy  consumption of the system.</p>",
        "doi": "10.7907/5N2N-0W58",
        "publication_date": "2004",
        "thesis_type": "phd",
        "thesis_year": "2004"
    },
    {
        "id": "thesis:1888",
        "collection": "thesis",
        "collection_id": "1888",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05202003-170423",
        "primary_object_url": {
            "basename": "thesis.pdf",
            "content": "final",
            "filesize": 2999100,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/1888/1/thesis.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Efficient Algorithms for Solving Static Hamilton-Jacobi Equations",
        "author": [
            {
                "family_name": "Mauch",
                "given_name": "Sean Patrick",
                "clpid": "Mauch-Sean-Patrick"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Meiron",
                "given_name": "Daniel I.",
                "orcid": "0000-0003-0397-3775",
                "clpid": "Meiron-D-I"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Meiron",
                "given_name": "Daniel I.",
                "orcid": "0000-0003-0397-3775",
                "clpid": "Meiron-D-I"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Schroeder",
                "given_name": "Peter",
                "orcid": "0000-0002-0323-7674",
                "clpid": "Schr\u00f6der-P"
            },
            {
                "family_name": "Hou",
                "given_name": "Thomas Y.",
                "orcid": "0000-0001-6287-1133",
                "clpid": "Hou-T-Y"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>We present an algorithm for computing the closest point transform to an explicitly described manifold on a rectilinear grid in low dimensional spaces.  The closest point transform finds the closest point on a manifold and the Euclidean distance to a manifold for the points in a grid.  We consider manifolds composed of simple geometric shapes, such as a set of points, piecewise linear curves, or triangle meshes.  The algorithm solves the eikonal equation |grad u| = 1 with the method of characteristics.  For many problems, the computational complexity of the algorithm is linear in both the number of grid points and the complexity of the manifold.</p>\r\n\r\n<p>Many query problems can be aided by using orthogonal range queries (ORQ).  There are several standard data structures for performing ORQs in 3-D, including kd-trees, octrees, and cell arrays.  We develop additional data structures based on cell arrays.  We study the characteristics of each data structure and compare their performance.</p>\r\n\r\n<p>We present a new algorithm for solving the single-source, non-negative weight, shortest-paths problem.  Dijkstra's algorithm solves this problem with computational complexity O((E + V) log V) where E is the number of edges and V is the number of vertices.  The new algorithm, called Marching with a Correctness Criterion (MCC), has computational complexity O(E + R V), where R is the ratio of the largest to smallest edge weight.</p>\r\n\r\n<p>Sethian's Fast Marching Method (FMM) may be used to solve static Hamilton-Jacobi equations.  It has computational complexity O(N log N), where N is the number of grid points.  The FMM has been regarded as an optimal algorithm because it is closely related to Dijkstra's algorithm.  The new shortest-paths algorithm discussed above can be used to develop an ordered, upwind, finite difference algorithm for solving static Hamilton-Jacobi equations.  This algorithm requires difference schemes that difference not only in coordinate directions, but in diagonal directions as well.  It has computational complexity O(R N), where R is the ratio of the largest to smallest propagation speed and N is the number of grid points.</p>",
        "doi": "10.7907/5R5P-Y603",
        "publication_date": "2003",
        "thesis_type": "phd",
        "thesis_year": "2003"
    },
    {
        "id": "thesis:4357",
        "collection": "thesis",
        "collection_id": "4357",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-11012005-093745",
        "primary_object_url": {
            "basename": "Ginis_r_2002.pdf",
            "content": "final",
            "filesize": 654958,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4357/1/Ginis_r_2002.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Automating Resource Management for Distributed Business Processes",
        "author": [
            {
                "family_name": "Ginis",
                "given_name": "Roman",
                "clpid": "Ginis-Roman"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            },
            {
                "family_name": "Schulman",
                "given_name": "Leonard J.",
                "orcid": "0000-0001-9901-2797",
                "clpid": "Schulman-L-J"
            },
            {
                "family_name": "Pierce",
                "given_name": "Niles A.",
                "orcid": "0000-0003-2367-4406",
                "clpid": "Pierce-N-A"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "A distributed business process is a set of related activities performed by independent resources offering services for lease. For instance, constructing an office building involves hundreds of activities such as excavating, plumbing and carpentry performed by machines and subcontractors, whose activities are related in time, space, cost and other dimensions. In the last decade Internet-based middleware has linked consumers with resources and services enabling the consumers to more efficiently locate, select and reserve the resources for use in business processes. This recent capability creates an opportunity for a new automation of resource management that can assign the optimal resources to the activities of a business process to maximize its utility to the consumer and yield substantial gains in operational efficiency.\r\n\r\nThis thesis explores two basic problems towards automating the management of distributed business processes: 1. How to choose the best resources for the activities of a process (the Activity Resource Assignment - ARA - optimization problem); and 2. How to reserve the resources chosen for a process as an atomic operation when time has value, i.e., commit all resources or no resources (the Distributed Service Commit problem - DSC). I believe these will become the typical optimization and agreement problems between consumers and producers in a networked service economy.\r\n\r\nI propose a solution to the ARA optimization problem by modeling it as a special type of Integer Programming and I give a method for solving it efficiently for a large class of practical cases. Given a problem instance the method extracts the structure of the problem and using a new concept of variable independence recursively simplifies it while retaining at least one optimal solution. 
The reduction operation is guided by a novel procedure that makes use of the recent advances in tree-decomposition of graphs from the graph complexity theory.\r\n\r\nThe solution to the DSC problem is an algorithm based on financial instruments and the two-phase commit protocol adapted for services. The method achieves an economically sensible atomic reservation agreement between multiple distributed resources and consumers in a free market environment.\r\n\r\nI expect the automation of resource management addressed in my thesis and elsewhere will pave the way for more efficient business operations in the networked economy.",
        "doi": "10.7907/9GXT-BD03",
        "publication_date": "2002",
        "thesis_type": "phd",
        "thesis_year": "2002"
    },
    {
        "id": "thesis:4821",
        "collection": "thesis",
        "collection_id": "4821",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-12072001-160019",
        "primary_object_url": {
            "basename": "thesis-online.pdf",
            "content": "final",
            "filesize": 850716,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4821/1/thesis-online.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Dynamic UNITY",
        "author": [
            {
                "family_name": "Zimmerman",
                "given_name": "Daniel Marc",
                "clpid": "Zimmerman-Daniel-Marc"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "orcid": "0000-0001-8474-0812",
                "clpid": "Bruck-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Dynamic distributed systems, where a changing set of communicating processes must interoperate to accomplish particular computational tasks, are becoming extremely important. Designing and implementing these systems, and verifying the correctness of the designs and implementations, are difficult tasks. The goal of this thesis is to make these tasks easier.\r\n\r\nThis thesis presents a specification language for dynamic distributed systems, based on Chandy and Misra's UNITY language. It extends the UNITY language to enable process creation, process deletion, and dynamic communication patterns.\r\n\r\nThe thesis defines an execution model for systems specified in this language, which leads to a proof logic similar to that of UNITY. While extending UNITY logic to correctly handle systems with dynamic behavior, this logic retains the familiar UNITY operators and most of the proof rules associated with them. \r\n\r\nThe thesis presents specifications for three example dynamic distributed systems to demonstrate the use of the specification language, and full correctness proofs for two of these systems and a partial correctness proof for the third to demonstrate the use of the proof logic. \r\n\r\nThe thesis details a method for determining whether a system in the specification language can be transformed into an implementation in a standard programming language, as well as a method for performing this transformation on those specifications that can. This guarantees a correct implementation for any specification that can be so transformed. \r\n",
        "doi": "10.7907/AC6E-WE21",
        "publication_date": "2002",
        "thesis_type": "phd",
        "thesis_year": "2002"
    },
    {
        "id": "thesis:2468",
        "collection": "thesis",
        "collection_id": "2468",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-06062002-164914",
        "primary_object_url": {
            "basename": "dissertation.pdf",
            "content": "final",
            "filesize": 1785426,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2468/1/dissertation.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Kind Theory",
        "author": [
            {
                "family_name": "Kiniry",
                "given_name": "Joseph Roland",
                "orcid": "0000-0002-3589-2454",
                "clpid": "Kiniry-Joseph Roland"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Lea",
                "given_name": "Doug",
                "clpid": "Lea-Doug"
            },
            {
                "family_name": "Klavins",
                "given_name": "Eric",
                "clpid": "Klavins-E"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>My contribution, described in this thesis, is a theory that is meant to assist in the construction of complex software systems.  I propose a notion of structure that is independent of language, formalism, or problem domain.  I call this new abstraction a kind, and its related formal system, kind theory.  I define a type system that models the structural aspects of kind theory.  I also define an algebra that models this type system and provides a logic in which one can specify and execute computations.</p>\r\n  \r\n<p>A reflective definition of kind theory is reviewed.  This reflective specification depends upon a basic ontology for mathematics. By specifying the theory in itself, I provide an example of how one can use kind theory to reason about reuse in general formal systems.</p>\r\n  \r\n<p>I provide examples of the use of kind theory in reasoning about software constructs in several domains of software engineering.  I also discuss a set of software tools that I have constructed that realize or use kind theory.</p>\r\n  \r\n<p>A logical framework is used to specify a type theoretic and algebraic model for the theory.  Using this basic theorem prover one can reason about software systems using kind theory.  Also, I have constructed a reuse repository that supports online collaboration, houses software assets, helps search for components that match specifications, and more.  This repository is designed to use kind theory (via the logical framework) for the representation of, and reasoning about, software assets.</p>\r\n  \r\n<p>Finally, I propose a set of language-independent specification constructs called semantic properties which have a semantics specified in kind theory.  I show several uses of these constructs, all of which center on reasoning about reusable component-based software, by giving examples of how these constructs are applied to programming and specification languages.  
I discuss how the availability of these constructs and the associated theory impact the software development process.</p>",
        "doi": "10.7907/TVTD-E826",
        "publication_date": "2002",
        "thesis_type": "phd",
        "thesis_year": "2002"
    },
    {
        "id": "thesis:3236",
        "collection": "thesis",
        "collection_id": "3236",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-08272001-155016",
        "primary_object_url": {
            "basename": "00ch0.pdf",
            "content": "final",
            "filesize": 144139,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/3236/1/00ch0.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Why multicast protocols (don't) scale: an analysis of multipoint algorithms for scalable group communication",
        "author": [
            {
                "family_name": "Schooler",
                "given_name": "Eve Meryl",
                "clpid": "Schooler-Eve-Meryl"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Estrin",
                "given_name": "Deborah",
                "clpid": "Estrin-D"
            },
            {
                "family_name": "Hickey",
                "given_name": "Jason J.",
                "clpid": "Hickey-J-J"
            },
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "clpid": "Bruck-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "With the exponential growth of the Internet, there is a critical need to design efficient, scalable and robust protocols to support the network infrastructure.  A new class of protocols has emerged to address these challenges, and these protocols rely on a few key techniques, or micro-algorithms, to achieve scalability.  By scalability, we mean the ability of groups of communicating processes to grow very large in size.  We study the behavior of several of these fundamental techniques that appear in many deployed and emerging Internet standards:  Suppression, Announce-Listen, and Leader Election.\r\n\r\nThese algorithms are based on the principle of efficient multipoint communication, often in combination with periodic messaging.  We assume a loosely-coupled communication model, where acknowledged messaging among groups of processes is not required.  Thus, processes infer information from the periodic receipt or loss of messages from other processes.\r\n\r\nWe present an analysis, validated by simulation, of the performance tradeoffs of each of these techniques.  Toward this end, we derive a series of performance metrics that help us to evaluate these algorithms under lossy conditions:  expected response time, network usage, memory overhead, consistency attainable, and convergence time.  In addition, we study the impact of both correlated and uncorrelated loss on groups of communicating processes.\r\n\r\nAs a result, this thesis provides insights into the scalability of multicast protocols that rely upon these techniques.  We provide a systematic framework for calibrating as well as predicting protocol behavior over a range of operating conditions.  In the process, we establish a general methodology for the analysis of these and other scalability techniques.  
Finally, we explore a theory of composition; if we understand the behavior of these micro-algorithms, then we can bound analytically the performance of the more complex algorithms that rely upon them.",
        "doi": "10.7907/44QZ-R465",
        "publication_date": "2001",
        "thesis_type": "phd",
        "thesis_year": "2001"
    },
    {
        "id": "thesis:1607",
        "collection": "thesis",
        "collection_id": "1607",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05042006-131410",
        "primary_object_url": {
            "basename": "Zhu_x_2000.pdf",
            "content": "final",
            "filesize": 7149117,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/1607/1/Zhu_x_2000.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Hard vs. soft bounds in probablilistic robustness analysis and generalized source coding and optimal web layout design",
        "author": [
            {
                "family_name": "Zhu",
                "given_name": "Xiaoyun",
                "clpid": "Zhu-Xiaoy"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Effros",
                "given_name": "Michelle",
                "clpid": "Effros-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            },
            {
                "family_name": "Hou",
                "given_name": "Thomas Y.",
                "clpid": "Hou-T-Y"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "NOTE: Text or symbols not renderable in plain ASCII are indicated by [...]. Abstract is included in .pdf document.\r\n\r\nPart I:\r\n\r\nThe relationship between hard vs. soft bounds and probabilistic vs. worst-case problem formulations for robustness analysis has been a source of some apparent confusion in the control community, and this thesis attempts to clarify some of these issues. Essentially, worst-case analysis involves computing the maximum of a function which measures performance over some set of uncertainty. Probabilistic analysis assumes some distribution on the uncertainty and computes the resulting probability measure on performance. Exact computation in each case is intractable in general. In the past most research focused on computing hard bounds on worst-case performance. This thesis explores the use of both hard and soft bounds in probabilistic robustness analysis, and investigates the computational complexity of the problems through extensive numerical experimentation. We focus on the simplest possible problem formulations that we believe reveal the difficulties associated with more general probabilistic analysis.\r\n\r\nBy extending the standard structured singular value [...] framework to allow for probabilistic descriptions of uncertainty, probabilistic [...] is defined, which characterizes the probability distribution of some performance function. The computation of probabilistic [...] involves approximating the level surface of the function in the parameter space, which is even more complex than the worst-case [...] computation, a well-known NP-hard problem. In particular, providing sufficiently tight bounds in the tail of the distribution is extremely difficult. This thesis proposes three different methods for computing a hard upper bound on probabilistic [...] whose tightness can be tested by comparison with the soft bound provided by Monte-Carlo simulations. 
At the same time, the efficiency of the soft bounds can be significantly improved with the information from the hard bound computation. Among the three algorithms proposed, the LC-BNB algorithm is proven by numerical experiments to provide the best average performance on random examples. One particular example is shown in the end to demonstrate the effectiveness of the method.\r\n\r\nPart II:  \r\n\r\nThe design of robust and reliable networks and network services has become an increasingly challenging task in today's Internet world. To achieve this goal, understanding the characteristics of Internet traffic plays a more and more critical role. Empirical studies of measured traffic traces have led to the wide recognition of self-similarity in network traffic. Moreover, a direct link has been established between the self-similar nature of measured aggregate network traffic and the underlying heavy-tailed distributions of the Web traffic at the source level.\r\n\r\nThis thesis provides a natural and plausible explanation for the origin of heavy tails in Web traffic by introducing a series of simplified models for optimal Web layout design with varying levels of realism and analytic tractability. The basic approach is to view the minimization of the average file download time as a generalization of standard source coding for data compression, but with the design of the Web layout rather than the codewords. The results, however, are quite different from standard source coding, as all assumptions produce power law distributions for a wide variety of user behavior models.\r\n\r\nIn addition, a simulation model of more complex Web site layouts is proposed, with more detailed hyperlinks and user behavior. The throughput of a Web site can be maximized by taking advantage of information on user access patterns and rearranging (splitting or merging) files on the Web site accordingly, with a constraint on available resources. 
A heuristic optimization on random graphs is formulated, with user navigation modeled as Markov Chains. Simulations on different classes of graphs as well as more realistic models with simple geometries in individual Web pages all produce power law tails in the resulting size distributions of the files transferred from the Web sites. This again verifies our conjecture that heavy-tailed distributions result naturally from the tradeoff between the design objective and limited resources, and suggests a methodology for aiding in the design of high-throughput Web sites.",
        "doi": "10.7907/1f3r-va82",
        "publication_date": "2000",
        "thesis_type": "phd",
        "thesis_year": "2000"
    },
    {
        "id": "thesis:1647",
        "collection": "thesis",
        "collection_id": "1647",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05062004-055516",
        "primary_object_url": {
            "basename": "Parrilo-Thesis.pdf",
            "content": "final",
            "filesize": 911166,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/1647/1/Parrilo-Thesis.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization",
        "author": [
            {
                "family_name": "Parrilo",
                "given_name": "Pablo A.",
                "clpid": "Parrilo-P-A"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Marsden",
                "given_name": "Jerrold E.",
                "clpid": "Marsden-J-E"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "In the first part of this thesis, we introduce a specific class of Linear Matrix Inequalities (LMI) whose optimal solution can be characterized exactly. This family corresponds to the case where the associated linear operator maps the cone of positive semidefinite matrices onto itself. In this case, the optimal value equals the spectral radius of the operator. It is shown that some rank minimization problems, as well as generalizations of the structured singular value ($mu$) LMIs, have exactly this property.\n\nIn the same spirit of exploiting structure to achieve computational efficiency, an algorithm for the numerical solution of a special class of frequency-dependent LMIs is presented. These optimization problems arise from robustness analysis questions, via the Kalman-Yakubovich-Popov lemma. The procedure is an outer approximation method based on the algorithms used in the computation of hinf norms for linear, time invariant systems. The result is especially useful for systems with large state dimension.\n\nThe other main contribution in this thesis is the formulation of a convex optimization framework for semialgebraic problems, i.e., those that can be expressed by polynomial equalities and inequalities. The key element is the interaction of concepts in real algebraic geometry (Positivstellensatz) and semidefinite programming.\n\nTo this end, an LMI formulation for the sums of squares decomposition for multivariable polynomials is presented. Based on this, it is shown how to construct sufficient Positivstellensatz-based convex tests to prove that certain sets are empty. 
Among other applications, this leads to a nonlinear extension of many LMI based results in uncertain linear system analysis.\n\nWithin the same framework, we develop stronger criteria for matrix copositivity, and generalizations of the well-known standard semidefinite relaxations for quadratic programming.\n\nSome applications to new and previously studied problems are presented. A few examples are Lyapunov function computation, robust bifurcation analysis, structured singular values, etc. It is shown that the proposed methods allow for improved solutions for very diverse questions in continuous and combinatorial optimization.\n",
        "doi": "10.7907/2K6Y-CH43",
        "publication_date": "2000",
        "thesis_type": "phd",
        "thesis_year": "2000"
    },
    {
        "id": "thesis:1834",
        "collection": "thesis",
        "collection_id": "1834",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-05162005-084223",
        "primary_object_url": {
            "basename": "00_cover.pdf",
            "content": "final",
            "filesize": 7260,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/1834/1/00_cover.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Highly available distributed storage systems",
        "author": [
            {
                "family_name": "Xu",
                "given_name": "Lihao",
                "clpid": "Xu-Lihao"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "clpid": "Bruck-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Bruck",
                "given_name": "Jehoshua",
                "clpid": "Bruck-J"
            },
            {
                "family_name": "van Tilborg",
                "given_name": "Henk C.A.",
                "clpid": "van-Tilborg-H-C-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Vaidyanathan",
                "given_name": "P. P.",
                "clpid": "Vaidyanathan-P-P"
            },
            {
                "family_name": "McEliece",
                "given_name": "Robert J.",
                "clpid": "McEliece-R-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "As the need for data explodes with the passage of time and the increase of computing power, data storage becomes more and more important. Distributed storage, as distributed computing before it, is coming of age as a good solution to make systems highly available, i.e., highly scalable, reliable and efficient. The focus of this thesis is how to achieve data reliability and efficiency in distributed storage systems. This thesis consists of two parts. The first part deals with the reliability of distributed storage systems.  Reliability is achieved by computationally efficient MDS array codes that eliminate single points of failure in the systems, thus providing more reliability and flexibility to the systems. Such codes can be used as general MDS error-correcting codes.  They are particularly suitable for use in distributed storage systems.  The second part deals with the efficiency of distributed storage systems.  Methods are proposed to improve the performance of data server and storage systems significantly through the proper use of data redundancy.  These methods are based on error-correcting codes, particularly the MDS array codes developed in the first part.\r\n\r\nTwo new classes of MDS array codes are presented: the X-Code and the B-Code. The encoding operations of both codes are optimal, i.e., their update complexity achieves the theoretical lower bound. They distribute parity bits over all columns rather than concentrating them on some parity columns. As with other array codes, the error model for both codes is that errors or erasures are columns of the array, i.e., if at least one bit of a column is an error or erasure, then the whole column is considered to be an error or erasure.  Both codes are of distance 3, i.e., they can either:  correct two erasures, detect two errors or correct one error.  In addition to encoding algorithms, efficient decoding algorithms are proposed, both for erasure-correcting and for error-correcting.  
In fact, the erasure-correcting algorithms are also optimal in terms of computation complexity.\r\n\r\nThe X-Code has a very simple geometrical structure:  the parity bits are constructed along two groups of parallel parity lines of slopes 1 and -1.  This is the origin of the name X-Code.  This simple geometrical structure allows simple erasure-decoding and error-decoding algorithms, using only XORs and vector cyclic-shift operations.\r\n\r\nThe significance of the B-Code not only includes all its optimality properties:  MDS, optimal encoding and optimal decoding, but also its relation with a 3-decade-old graph theory problem.  It is proven in this thesis that constructing a B-Code of odd length is exactly equivalent to constructing a perfect one-factorization (or P1F) of a complete graph.  Constructing a P1F of an arbitrary complete graph has remained a conjecture since the early 1960s.  Though the P1F conjecture remains unsolved, the B-Code as the first real application of the P1F problem will hopefully spur more research on it.  It is also conjectured in this thesis that constructing a B-Code of any length, even or odd, is equivalent to constructing a P1F of a complete graph.  An efficient error-correcting algorithm for the B-Code is also presented, which is based on the relations between the B-Code and its dual.  The algorithm might give a hint of how to develop efficient decoding algorithms for other codes.\r\n\r\nWhile it is intuitive that redundancy can bring reliability to a system, this thesis gives another direction:  using redundancy actively to improve performance (efficiency) of distributed data systems.  The results in this direction are both theoretical and experimental.  System models are extracted from experiments in real practical systems; analytical results are derived using these and are then fed back to experiments for verification.\r\n\r\nIn this thesis, a novel deterministic voting scheme that uses error-correcting codes is proposed.  
The voting scheme generalizes all known simple deterministic voting algorithms.  It can be tuned to various application environments with different error rates to drastically reduce average communication complexity, i.e., the amount of information that must be transmitted in order to get correct voting results.\r\n\r\nTwo problems are identified to improve the performance of general data server systems, namely the data distribution problem and the data acquisition problem.  Solutions to these are proposed, as are general analytical results on performance of (n, k) systems.  A simple service time model of a practical disk-based distributed server system is given.  This model, which is based on experimental results, is a starting point for data distribution and data acquisition schemes.  These results, both experimental and analytical, can be further used for more sophisticated scheduling schemes to optimize or improve the performance of data server systems that serve multiple clients simultaneously.\r\n\r\nFinally, some research problems related to storage systems are proposed as future directions.",
        "doi": "10.7907/EQK9-8C84",
        "publication_date": "1999",
        "thesis_type": "phd",
        "thesis_year": "1999"
    },
    {
        "id": "thesis:4124",
        "collection": "thesis",
        "collection_id": "4124",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-10172005-103315",
        "primary_object_url": {
            "basename": "Primbs_ja_1999.pdf",
            "content": "final",
            "filesize": 688728,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4124/1/Primbs_ja_1999.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Nonlinear optimal control: a receding horizon appoach",
        "author": [
            {
                "family_name": "Primbs",
                "given_name": "James A.",
                "clpid": "Primbs-J-A"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Doyle",
                "given_name": "John Comstock",
                "clpid": "Doyle-J-C"
            },
            {
                "family_name": "Krener",
                "given_name": "Arthur",
                "clpid": "Krener-A"
            },
            {
                "family_name": "Marsden",
                "given_name": "Jerrold E.",
                "clpid": "Marsden-J-E"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Murray",
                "given_name": "Richard M.",
                "clpid": "Murray-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "As advances in computing power forge ahead at an unparalleled rate, an increasingly compelling question that spans nearly every discipline is how best to exploit these advances. At one extreme, a tempting approach is to throw as much computational power at a problem as possible. Unfortunately, this is rarely a justifiable approach unless one has some theoretical guarantee of the efficacy of the computations. At the other extreme, not taking advantage of available computing power is unnecessarily limiting. In general, it is only through a careful inspection of the strengths and weaknesses of all available approaches that an optimal balance between analysis and computation is achieved. This thesis addresses the delicate interaction between theory and computation in the context of optimal control.\n\nAn exact solution to the nonlinear optimal control problem is known to be prohibitively difficult, both analytically and computationally. Nevertheless, a number of alternative (suboptimal) approaches have been developed. Many of these techniques approach the problem from an off-line, analytical point of view, designing a controller based on a detailed analysis of the system dynamics. A concept particularly amenable to this point of view is that of a control Lyapunov function. These techniques extend the Lyapunov methodology to control systems. In contrast, so-called receding horizon techniques rely purely on on-line computation to determine a control law. While offering an alternative method of attacking the optimal control problem, receding horizon implementations often lack solid theoretical stability guarantees.\n\nIn this thesis, we uncover a synergistic relationship that holds between control Lyapunov function based schemes and on-line receding horizon style computation. These connections derive from the classical Hamilton-Jacobi-Bellman and Euler-Lagrange approaches to optimal control. 
By returning to these roots, a broad class of control Lyapunov schemes is shown to admit natural extensions to receding horizon schemes, benefiting from the performance advantages of on-line computation. From the receding horizon point of view, the use of a control Lyapunov function not only supplies the theoretical stability guarantees that receding horizon control typically lacks, but also unexpectedly eases many of the difficult implementation requirements associated with on-line computation. After these schemes are developed for the unconstrained nonlinear optimal control problem, the entire design methodology is illustrated on a simple model of a longitudinal flight control system. The schemes are then extended to time-varying and input-constrained nonlinear systems, offering a promising new paradigm for nonlinear optimal control design.",
        "doi": "10.7907/4AD2-0T48",
        "publication_date": "1999",
        "thesis_type": "phd",
        "thesis_year": "1999"
    },
    {
        "id": "thesis:3095",
        "collection": "thesis",
        "collection_id": "3095",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-08112005-114144",
        "primary_object_url": {
            "basename": "Manohar_r_1998.pdf",
            "content": "final",
            "filesize": 7772553,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/3095/1/Manohar_r_1998.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "The impact of asynchrony on computer architecture",
        "author": [
            {
                "family_name": "Manohar",
                "given_name": "Rajit",
                "clpid": "Manohar-R"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Barr",
                "given_name": "Alan H.",
                "clpid": "Barr-A-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "The performance characteristics of asynchronous circuits are quite different from those of their synchronous counterparts. As a result, the best asynchronous design of a particular system does not necessarily correspond to the best synchronous design, even at the algorithmic level. The goal of this thesis is to examine certain aspects of computer architecture and design in the context of an asynchronous VLSI implementation.\n\nWe present necessary and sufficient conditions under which the degree of pipelining of a component can be modified without affecting the correctness of an asynchronous computation.\n\nAs an instance of the improvements possible using an asynchronous architecture, we present circuits to solve the prefix problem with average-case behavior better than that possible by any synchronous solution in the case when the prefix operator has a right zero. We show that our circuit implementations are area-optimal given their performance characteristics, and have the best possible average-case latency.\n\nAt the level of processor design, we present a mechanism for the implementation of precise exceptions in asynchronous processors. The novel feature of this mechanism is that it permits the presence of a data-dependent number of instructions in the execution pipeline of the processor.\n\nFinally, at the level of processor architecture, we present the architecture of a processor with an independent instruction stream for branches. The instruction set permits loops and function calls to be executed with minimal control-flow overhead.",
        "doi": "10.7907/xzwa-p598",
        "publication_date": "1999",
        "thesis_type": "phd",
        "thesis_year": "1999"
    },
    {
        "id": "thesis:304",
        "collection": "thesis",
        "collection_id": "304",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01232008-111520",
        "primary_object_url": {
            "basename": "Heirich_a_1998.pdf",
            "content": "final",
            "filesize": 5149311,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/304/1/Heirich_a_1998.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Analysis of scalable algorithms for dynamic load balancing and mapping with application to photo-realistic rendering",
        "author": [
            {
                "family_name": "Heirich",
                "given_name": "Alan Bryant",
                "clpid": "Heirich-Alan-Bryant"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            },
            {
                "family_name": "Barr",
                "given_name": "Alan H.",
                "clpid": "Barr-A-H"
            },
            {
                "family_name": "Kesselman",
                "given_name": "Carl",
                "clpid": "Kesselman-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Schroeder",
                "given_name": "Peter",
                "orcid": "0000-0002-0323-7674",
                "clpid": "Schr\u00f6der-P"
            },
            {
                "family_name": "Wiggins",
                "given_name": "Stephen R.",
                "orcid": "0000-0002-0780-0911",
                "clpid": "Wiggins-S-R"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "This thesis presents and analyzes scalable algorithms for dynamic load balancing and mapping in distributed computer systems. The algorithms are distributed and concurrent, have no central thread of control, and require no centralized communication. They are derived using spectral properties of graphs: graphs of physical network links among computers in the load balancing problem, and graphs of logical communication channels among processes in the mapping problem. A distinguishing characteristic of these algorithms is that they are scalable: the expected cost of execution does not increase with problem scale. This is proven in a scalability theorem which shows that, for several simple disturbance models, the rate of convergence to a solution is independent of scale. This property is extended through simulated examples and informal argument to general and random disturbances. A worst case disturbance is presented and shown to occur with vanishing probability as the problem scale increases. To verify these conclusions the load balancing algorithm is deployed in support of a photo-realistic rendering application on a parallel computer system based on Monte Carlo path tracing. The performance and scaling of this application, and of the dynamic load balancing algorithm, are measured on different numbers of computers. The results are consistent with the predictions of scalability, and the cost of load balancing is seen to be non-increasing for increasing numbers of computers. The quality of load balancing is evaluated and compared with the quality of solutions produced by competing approaches for up to 1,024 computers. This comparison shows that the algorithm presented here is as good as or better than the most popular competing approaches for this application. 
The thesis then presents the dynamic mapping algorithm, with simulations of a model problem, and suggests that the pair of algorithms presented here may be an ideal complement to more expensive algorithms such as the well-known recursive spectral bisection.\r\n",
        "doi": "10.7907/ZVYW-H876",
        "publication_date": "1998",
        "thesis_type": "phd",
        "thesis_year": "1998"
    },
    {
        "id": "thesis:341",
        "collection": "thesis",
        "collection_id": "341",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01252008-095244",
        "primary_object_url": {
            "basename": "Sivilotti_pag_1998.pdf",
            "content": "final",
            "filesize": 6977574,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/341/1/Sivilotti_pag_1998.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "A method for the specification, composition, and testing of distributed object systems",
        "author": [
            {
                "family_name": "Sivilotti",
                "given_name": "Paolo A. G.",
                "clpid": "Sivilotti-P-A-G"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            },
            {
                "family_name": "Bagrodia",
                "given_name": "Rajive",
                "clpid": "Bagrodia-R"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "The formation of a distributed system from a collection of individual components requires the ability for components to exchange syntactically well-formed messages. Several technologies exist that provide this fundamental functionality, as well as the ability to locate components dynamically based on syntactic requirements. The formation of a correct distributed system requires, in addition, that these interactions between components be semantically well-formed. The method presented in this thesis is intended to assist in the development of correct distributed systems.\n\nWe present a specification methodology based on three fundamental operators from temporal logic: initially, next, and transient. From these operators we derive a collection of higher-level operators that are used for component specification. The novel aspect of our specification methodology is that we require that these operators be used in the following restricted manner:\n\n\u2022A specification statement can refer only to properties that are local to a single component.\n\u2022A single component must be able to guarantee unilaterally the validity of the specification statement for any distributed system of which it is a part.  Specification statements that conform to these two restrictions we call certificates.\n\nThe first restriction is motivated by our desire for these component specifications to be testable in a relatively efficient manner. In fact, we describe a set of simplified certificates that can be translated into a testing harness by a simple parser with very little programmer intervention. The second restriction is motivated by our desire for a simple theory of composition: If a certificate is a property of a component, that certificate is also a property of any system containing that component.\n\nAnother novel aspect of our methodology is the introduction of a new temporal operator that combines both safety and progress properties. 
The concept underlying this operator has been used implicitly before; but by extracting this concept into a first-class operator, we are able to prove several new theorems about such properties. We demonstrate the utility of this operator and of our theorems by using them to simplify several proofs.\n\nThe restrictions imposed on certificates are severe. Although they have pleasing consequences as described above, they can also lead to lengthy proofs of system properties that are not simple conjunctions. To compensate for this difficulty, we introduce collections of certificates that we call services. Services facilitate proof reuse by encapsulating common component interactions used to establish various system properties.\n\nWe experiment with our methodology by applying it to several extended examples. These experiments illustrate the utility of our approach and convince us of the practicality of component-based distributed system development. This thesis addresses three parts of the development cycle for distributed object systems: (i) the specification of systems and components, (ii) the compositional reasoning used to verify that a collection of components satisfy a system specification, and (iii) the validation of component implementations.\n",
        "doi": "10.7907/z89g-gm27",
        "publication_date": "1998",
        "thesis_type": "phd",
        "thesis_year": "1998"
    },
    {
        "id": "thesis:321",
        "collection": "thesis",
        "collection_id": "321",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01242008-074143",
        "primary_object_url": {
            "basename": "Massingill_bl_1998.pdf",
            "content": "final",
            "filesize": 5884120,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/321/1/Massingill_bl_1998.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "A structured approach to parallel programming",
        "author": [
            {
                "family_name": "Massingill",
                "given_name": "Berna Linda",
                "clpid": "Massingill-B-L"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Meiron",
                "given_name": "Daniel I.",
                "clpid": "Meiron-D-I"
            },
            {
                "family_name": "Van de Velde",
                "given_name": "Eric",
                "clpid": "van-de-Velde-E"
            },
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Parallel programs are more difficult to develop and reason about than sequential programs. There are two broad classes of parallel programs: (1) programs whose specifications describe ongoing behavior and interaction with an environment, and (2) programs whose specifications describe the relation between initial and final states. This thesis presents a simple, structured approach to developing parallel programs of the latter class that allows much of the work of development and reasoning to be done using the same techniques and tools used for sequential programs. In this approach, programs are initially developed in a primary programming model that combines the standard sequential model with a restricted form of parallel composition that is semantically equivalent to sequential composition. Such programs can be reasoned about using sequential techniques and executed sequentially for testing. They are then transformed for execution on typical parallel architectures via a sequence of semantics-preserving transformations, making use of two secondary programming models, both based on parallel composition with barrier synchronization and one incorporating data partitioning. The transformation process for a particular program is typically guided and assisted by a parallel programming archetype, an abstraction that captures the commonality of a class of programs with similar computational features and provides a class-specific strategy for producing efficient parallel programs. Transformations may be applied manually or via a parallelizing compiler. Correctness of transformations within the primary programming model is proved using standard sequential techniques. 
Correctness of transformations between the programming models and between the models and practical programming languages is proved using a state-transition-based operational model.\n\nThis thesis presents: (1) the primary and secondary programming models, (2) an operational model that provides a common framework for reasoning about programs in all three models, (3) a collection of example program transformations with arguments for their correctness, and (4) two groups of experiments in which our overall approach was used to develop example applications. The specific contribution of this work is to present a unified theory/practice framework for this approach to parallel program development, tying together the underlying theory, the program transformations, and the program-development methodology.\n",
        "doi": "10.7907/5ma9-h225",
        "publication_date": "1998",
        "thesis_type": "phd",
        "thesis_year": "1998"
    },
    {
        "id": "thesis:326",
        "collection": "thesis",
        "collection_id": "326",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01242008-132610",
        "primary_object_url": {
            "basename": "Rieffel_ma_1998.pdf",
            "content": "final",
            "filesize": 26274743,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/326/1/Rieffel_ma_1998.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Performance modeling for concurrent particle simulations",
        "author": [
            {
                "family_name": "Rieffel",
                "given_name": "Marc A.",
                "clpid": "Rieffel-M-A"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            },
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "McKoy",
                "given_name": "Basil Vincent",
                "clpid": "McKoy-B-V"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "This thesis develops an application- and architecture-independent framework for predicting the runtime and memory requirements of particle simulations in complex three-dimensional geometries. Both sequential and concurrent simulations are addressed, on a variety of homogeneous and heterogeneous architectures. The models are considered in the context of neutral flow Direct Simulation Monte Carlo (DSMC) simulations for semiconductor manufacturing and aerospace applications.\n\nComplex physical and chemical processes render algorithmic analysis alone insufficient for understanding the performance characteristics of particle simulations. For this reason, detailed knowledge of the interaction between the physics and chemistry of a problem and the numerical method used to solve it is required.\n\nPrediction of runtime and storage requirements of sequential and concurrent particle simulations is possible with the use of these models. The feasibility of simulations for given physical systems can also be determined. While the present work focuses on the concurrent DSMC method, the same modeling techniques can be applied to other numerical methods, such as Particle-In-Cell (PIC) and Navier-Stokes (NS).\n",
        "doi": "10.7907/sx57-5d89",
        "publication_date": "1998",
        "thesis_type": "phd",
        "thesis_year": "1998"
    },
    {
        "id": "thesis:539",
        "collection": "thesis",
        "collection_id": "539",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-02072008-075916",
        "primary_object_url": {
            "basename": "Watts_jr_1998.pdf",
            "content": "final",
            "filesize": 7754875,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/539/1/Watts_jr_1998.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Dynamic load balancing and granularity control on heterogeneous and hybrid architectures",
        "author": [
            {
                "family_name": "Watts",
                "given_name": "Jerrell R.",
                "clpid": "Watts-J-R"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            },
            {
                "family_name": "Arvo",
                "given_name": "James R.",
                "clpid": "Arvo-J-R"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "van de Geijn",
                "given_name": "Robert A.",
                "clpid": "van-de-Geijn-R-A"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "The past several years have seen concurrent applications grow increasingly complex, as the most advanced techniques from academia find their way into production parallel applications. Moreover, the platforms on which these concurrent computations now execute are frequently heterogeneous networks of workstations and shared-memory multiprocessors, because of their low cost relative to traditional large-scale multicomputers. The combination of sophisticated algorithms and more complex computing environments has made existing load balancing techniques obsolete. Current methods characterize the loads of tasks in very simple terms, often fail to account for the communication costs of an application, and typically consider computational resources to be homogeneous. The complexity of current applications coupled with the fact that they are running in heterogeneous environments has also made partitioning a problem for concurrent execution an ordeal. It is no longer adequate to simply divide the problem into some number of pieces per computer and hope for the best. In a complex application, the workloads of the pieces, which may be equal initially, may diverge over time. On a heterogeneous network, the varying capabilities of the computers will widen this disparity in resource usage even further. Thus, there is a need to dynamically manage the granularity of an application, repartitioning the problem at runtime to correct inadequacies in the original partitioning and to make more effective use of computational resources.\n\nThis thesis presents techniques for dynamic load balancing in complex irregular applications. Advances over previous work are three-fold: First, these techniques are applicable to networks comprised of heterogeneous machines, including both single- processor workstations and personal computers, and multiprocessor compute servers. 
Second, the use of load vectors more accurately characterizes the resource requirements of tasks, including the computational demands of different algorithmic phases as well as the needs for other resources, such as memory. Finally, runtime repartitioning adjusts the granularity of the problem so that the available resources are more fully utilized. Two other improvements over earlier techniques include improved algorithms for determining the ideal redistribution of work as well as advanced techniques for selecting which tasks to transfer to satisfy those ideals. The latter algorithms incorporate the notion of task migration costs, including the impact on an application's communications locality. The improvements listed above are demonstrated on both industrial applications and small parametric problems on networks of heterogeneous computers as well as traditional large-scale multicomputers.\n",
        "doi": "10.7907/gvgq-3d11",
        "publication_date": "1998",
        "thesis_type": "phd",
        "thesis_year": "1998"
    },
    {
        "id": "thesis:90",
        "collection": "thesis",
        "collection_id": "90",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01092008-082210",
        "primary_object_url": {
            "basename": "Cheng_jf_1997.pdf",
            "content": "final",
            "filesize": 3705135,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/90/1/Cheng_jf_1997.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Iterative decoding",
        "author": [
            {
                "family_name": "Cheng",
                "given_name": "Jung-Fu",
                "clpid": "Cheng-Jung-Fu"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "McEliece",
                "given_name": "Robert J.",
                "clpid": "McEliece-R-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "McEliece",
                "given_name": "Robert J.",
                "clpid": "McEliece-R-J"
            },
            {
                "family_name": "Goldsmith",
                "given_name": "Andrea Jo",
                "clpid": "Goldsmith-A-J"
            },
            {
                "family_name": "Divsalar",
                "given_name": "Dariush",
                "clpid": "Divsalar-D"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Simon",
                "given_name": "Marvin K.",
                "clpid": "Simon-M-K"
            },
            {
                "family_name": "Vaidyanathan",
                "given_name": "P. P.",
                "clpid": "Vaidyanathan-P-P"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Though coding theory suggests long error correcting codes chosen at random perform close to the optimum, the problem of designing good codes has traditionally been attacked by developing codes with a lot of structure, which lends itself to feasible decoders. The challenge to find practical decoders for long random codes has not been seriously considered until the recent introduction of turbo codes in 1993. This methodology of multi-stage iterative decoding with exchange of soft information, applied to codes with pseudo-random structure, has provided a whole new approach to construct good codes and to decode them with low complexity. This thesis examines the theoretical ground as well as the design and implementation details of these iterative decoding techniques. The methodology is first applied to parallel concatenated unit-memory convolutional codes and generalized concatenated convolutional codes to demonstrate its power and the general design principle. We then show that, by representing these coding systems with appropriate Bayesian belief networks, all the ad hoc algorithms can be derived from a general statistical inference belief propagation algorithm A class of new binary codes based on low-density generator matrices is proposed to eliminate the arbitrariness and unnecessary constraints in turbo coding we have recognized from this Bayesian network viewpoint. Contrary to the turbo decoding paradigm where sequential processing is accomplished by very powerful central units, the decoding algorithm for the new code is highly parallel and distributive. We also apply these codes to M-ary modulations using multilevel coding techniques to achieve higher spectral efficiency. In all cases, we have constructed systems with flexible error protection capability and performance within 1 dB of the channel capacity.\r\n",
        "doi": "10.7907/ydj9-zq05",
        "publication_date": "1997",
        "thesis_type": "phd",
        "thesis_year": "1997"
    },
    {
        "id": "thesis:118",
        "collection": "thesis",
        "collection_id": "118",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01102008-153402",
        "primary_object_url": {
            "basename": "Maskit_d_1997.pdf",
            "content": "final",
            "filesize": 7724432,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/118/1/Maskit_d_1997.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Software register synchronization for super-scalar processors with partitioned register files",
        "author": [
            {
                "family_name": "Maskit",
                "given_name": "Daniel",
                "clpid": "Maskit-D"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            },
            {
                "family_name": "Barr",
                "given_name": "Alan H.",
                "clpid": "Barr-A-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Increases in high-end microprocessor performance are becoming increasingly reliant on simultaneous issuing of instructions to multiple functional units on a single chip. As the number of functional units increases, the chip area, wire lengths, and delays required for a monolithic register file become unreasonable. Future microprocessors will have partitioned register files. The correctness of contemporary super-scalar processors relies on synchronized accesses to registers. This issue will be critical in systems with partitioned register files. Current techniques for managing register access ordering, such as register score boarding and register renaming, are inadequate for architectures with partitioned register files. This thesis demonstrates the difficulties of implementing these techniques with a partitioned register file, and introduces a novel compiler algorithm which addresses this issue.\n\nWhenever a processor using register scoreboarding or register renaming issues an instruction, either the scoreboard or the register name table must be accessed to check the instruction's sources and destination. If the register file is partitioned, checking the scoreboard or name table for a remote register is difficult. One functional unit cannot determine at runtime when it is safe to write to a register in another functional unit's register file. While these techniques can be supported through use of a global or partitioned scoreboard, such an implementation would be complex, and have latency problems similar to those of a monolithic register file.\n\nThis work discusses the organization of multiple functional units into loosely-coupled groups of functional units that can communicate via direct register writes, but with purely local hardware interlocks to force synchronization. A novel compiler algorithm, Software Register Synchronization (SRS), is introduced. 
A comparison between SRS and existing hardware mechanisms is conducted using the Multiflow compiler modified to generate code for the MIT M-Machine, Experiments to evaluate the SRS algorithm are run on the M-Machine simulator being used for architectural verification. In order to support partitioned register file architectures, an alternative to traditional hardware methods for managing register synchronization needs to be developed. This thesis presents a novel compiler algorithm to address this need. The SRS algorithm is described, demonstrated to be correct, and evaluated. Details of the implementation of the SRS algorithm within the Multiflow compiler for the MIT M-Machine are provided.",
        "doi": "10.7907/tyap-ea69",
        "publication_date": "1997",
        "thesis_type": "phd",
        "thesis_year": "1997"
    },
    {
        "id": "thesis:26",
        "collection": "thesis",
        "collection_id": "26",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-01042008-085720",
        "primary_object_url": {
            "basename": "Thornley_jw_1996.pdf",
            "content": "final",
            "filesize": 10872831,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/26/1/Thornley_jw_1996.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "A parallel programming model with sequential semantics",
        "author": [
            {
                "family_name": "Thornley",
                "given_name": "John William",
                "clpid": "Thornley-J-W"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Kesselman",
                "given_name": "Carl",
                "clpid": "Kesselman-C"
            },
            {
                "family_name": "Van de Velde",
                "given_name": "Eric",
                "clpid": "van-de-Velde-E"
            },
            {
                "family_name": "Hall",
                "given_name": "Mary",
                "clpid": "Hall-M"
            },
            {
                "family_name": "Schroeder",
                "given_name": "Peter",
                "orcid": "0000-0002-0323-7674",
                "clpid": "Schr\u00f6der-P"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Parallel programming is more difficult than sequential programming in part because of the complexity of reasoning, testing, and debugging in the context of concurrency. In this thesis, we present and investigate a parallel programming model that provides direct control of parallelism in a notation with sequential semantics. Our model consists of a standard sequential imperative programming notation extended with the following three pragmas:\r\n\r\n1. The parallelizable sequence of statements pragma indicates that a sequence of statements can be executed as parallel threads.\r\n\r\n2. The parallelizable for-loop statement pragma indicates that the iterations of a for-loop statement can be executed as parallel threads.\r\n\r\n3. The single-assignment type pragma indicates that variables of a given type are assigned at most once and that ordinary assignment and evaluation operations can be used as implicit communication and synchronization operations between parallel threads.\r\n\r\nIn our model, a parallel program is simply an equivalent sequential program with added pragmas. The placement of the pragmas is subject to a small set of restrictions that ensure the equivalence of the parallel and sequential semantics. We prove that if standard sequential execution of a program (by ignoring the pragmas) satisfies a given specification and the pragmas are used correctly, parallel execution of the program (as directed by the pragmas) is guaranteed to satisfy the same specification.\r\n\r\nOur model allows parallel programs to be developed using sequential reasoning, testing, and debugging techniques, prior to parallel execution for performance. Since parallelism is specified directly, sophisticated analysis and compilation techniques are not required to extract parallelism from programs. 
However, it is important that parallel performance issues such as granularity, load balancing, and locality be considered throughout algorithm and program development.\r\n\r\nWe describe a series of programming experiments performed on up to 32 processors of a shared-memory multiprocessor system. These experiments indicate that for a wide range of problems:\r\n\r\n1. Our model can express sophisticated parallel algorithms with significantly less complication than traditional explicit parallel programming models.\r\n\r\n2. Parallel programs in our model execute as efficiently as sequential programs on one processor and deliver good speedups on multiple processors.\r\n\r\n3. Program development with our model is less difficult than with traditional explicit parallel programming models because reasoning, testing, and debugging are performed using sequential methods.\r\n\r\nWe believe that our model provides the basis of the method of choice for a large number of moderate-scale, medium-grained parallel programming applications.",
        "doi": "10.7907/mytw-er77",
        "publication_date": "1996",
        "thesis_type": "phd",
        "thesis_year": "1996"
    },
    {
        "id": "thesis:4987",
        "collection": "thesis",
        "collection_id": "4987",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-12132007-083330",
        "primary_object_url": {
            "basename": "Dabdub_d_1996.pdf",
            "content": "final",
            "filesize": 22329563,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4987/1/Dabdub_d_1996.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Mathematical modeling of air pollution dynamics by parallel computation",
        "author": [
            {
                "family_name": "Dabdub",
                "given_name": "Donald",
                "clpid": "Dabdub-D"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Seinfeld",
                "given_name": "John H.",
                "clpid": "Seinfeld-J-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Seinfeld",
                "given_name": "John H.",
                "clpid": "Seinfeld-J-H"
            },
            {
                "family_name": "Van de Velde",
                "given_name": "Eric",
                "clpid": "van-de-Velde-E"
            },
            {
                "family_name": "Gavalas",
                "given_name": "George R.",
                "clpid": "Gavalas-G-R"
            },
            {
                "family_name": "Keller",
                "given_name": "Herbert Bishop",
                "clpid": "Keller-H-B"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Flagan",
                "given_name": "Richard C.",
                "clpid": "Flagan-R-C"
            }
        ],
        "local_group": [
            {
                "literal": "div_chem"
            }
        ],
        "abstract": "The use of massively parallel computers provides an avenue to overcome the computational requirements in the study of atmospheric chemical dynamics. General considerations on parallel implementation of air quality models are outlined including domain decomposition strategies, algorithm evaluation and design, portability, modularity, and buffering techniques used in I/O operations. Results are given for the implementation of the CIT urban air pollution model on distributed memory multiple instruction / multiple data (MIMD) machines ranging from a cluster of workstations to a 512 node Intel Paragon.\r\n\r\nThe central challenge in developing a parallel air pollution model is the implementation of the chemistry and transport operators used in the solution of the atmospheric reaction-diffusion equation. The chemistry operator is generally the most computationally intensive step in atmospheric air quality models. A new method based on Richardson extrapolation to solve the chemical kinetics is presented. The transport operator is the most challenging to solve numerically. Because of its hyperbolic nature non-physical oscillations and/or negative concentrations appear near steep gradient regions of the solution. Six algorithms for solving the advection equation are compared to determine their suitability for use in parallel photochemical air quality models. Four algorithms for filtering the numerical noise produced when solving the advection equation are also compared.\r\n\r\nA speed-up factor of 94.9 has been measured when the I/O, transport, and chemistry portions of the model are performed in parallel. This work provides the computational infrastructure required to incorporate new physico-chemical phenomena in the next generation of urban- or regional-scale air quality models.\r\n\r\nFinally, the SARMAP model is used to model the San Joaquin Valley of California. SARMAP is the updated version of RADM. 
It can be considered a state-of-the- art regional air pollution model. Like the CIT model, SARMAP incorporates the following atmospheric phenomena: gas-phase chemistry, advection and diffusion. In addition, SARMAP incorporates aqueous-phase chemistry and transport through cumulus clouds. Sensitivity studies performed show a significant dependence of ozone model predictions on boundary conditions.",
        "doi": "10.7907/k1ap-np35",
        "publication_date": "1996",
        "thesis_type": "phd",
        "thesis_year": "1996"
    },
    {
        "id": "thesis:4136",
        "collection": "thesis",
        "collection_id": "4136",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-10172007-090528",
        "primary_object_url": {
            "basename": "Lee_tk_1995.pdf",
            "content": "final",
            "filesize": 6208750,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4136/1/Lee_tk_1995.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "A General Approach to Performance Analysis and Optimization of Asynchronous Circuits",
        "author": [
            {
                "family_name": "Lee",
                "given_name": "Tak Kwan",
                "clpid": "Lee-Tak-Kwan"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Goodman",
                "given_name": "Rodney M.",
                "clpid": "Goodman-R-M"
            },
            {
                "family_name": "Burns",
                "given_name": "Steven",
                "clpid": "Burns-S"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "A systematic approach for evaluating and optimizing the performance of asynchronous VLSI circuits is presented. Index-priority simulation is introduced to efficiently find minimal cycles in the state graph of a given circuit. These minimal cycles are used to determine the causality relationships between all signal transitions in the circuit. Once these relationships are known, the circuit is then modeled as an extended event-rule system, which can be used to describe many circuits, including ones that are inherently disjunctive. An accurate indication of the performance of the circuit is obtained by analytically computing the period of the corresponding extended event-rule system.\r\n",
        "doi": "10.7907/ehzs-y537",
        "publication_date": "1995",
        "thesis_type": "phd",
        "thesis_year": "1995"
    },
    {
        "id": "thesis:4114",
        "collection": "thesis",
        "collection_id": "4114",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-10162007-111256",
        "primary_object_url": {
            "basename": "Leino_krm_1995.pdf",
            "content": "final",
            "filesize": 7306673,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4114/1/Leino_krm_1995.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Toward reliable modular programs",
        "author": [
            {
                "family_name": "Leino",
                "given_name": "K. Rustan M.",
                "clpid": "Leino-K-M"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Van de Snepscheut",
                "given_name": "Jan L. A.",
                "clpid": "Van-de-Snepscheut-J-L-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Nelson",
                "given_name": "Greg",
                "clpid": "Nelson-G"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Van de Snepscheut",
                "given_name": "Jan L. A.",
                "clpid": "Van-de-Snepscheut-J-L-A"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Sanders",
                "given_name": "Beverly",
                "clpid": "Sanders-B"
            },
            {
                "family_name": "Nelson",
                "given_name": "Greg",
                "clpid": "Nelson-G"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Wilson",
                "given_name": "Richard M.",
                "clpid": "Wilson-R-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Software is being applied in an ever-increasing number of areas. Computer programs and systems are becoming more complex and consisting of more delicately interconnected components. Errors surfacing in programs are still a conspicuous and costly problem. It's about time we employ some techniques that guide us toward higher reliability of practical programs. The goal of this thesis is just that.\n\nThis thesis presents a theory for verifying programs based on Dijkstra's weakest-precondition calculus. A variety of program paradigms used in practice, such as exceptions, procedures, object orientation, and modularity, are dealt with.\n\nThe thesis sheds new light on the theory behind programs with exceptions. It develops an elegant algebra, and shows it to be the foundation on which the semantics of exceptions rests. It develops a trace semantics for programs with exceptions, from which the weakest-precondition semantics is derived. It also proves a theorem on programming methodology relating to exceptions, and applies this theorem in the novel derivation of a simple program.\n\nThe thesis presents a simple model for object-oriented data types, in which concerns have been separated, resulting in the simplicity of the model.\n\nTo deal with large programs, this thesis takes a practical look at modularity and abstraction. It reveals a problem that arises in writing specifications for modular programs where previous techniques fail. The thesis introduces a new specification construct that solves that problem, and gives a formal proof of soundness for modular verification using that construct. The model is a generalization of Hoare's classical data refinement. However, there are more problems to be solved. The thesis reports on some of these problems and suggests some future directions toward more reliable modular programs.\n",
        "doi": "10.7907/ynt2-nn65",
        "publication_date": "1995",
        "thesis_type": "phd",
        "thesis_year": "1995"
    },
    {
        "id": "thesis:4036",
        "collection": "thesis",
        "collection_id": "4036",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-10112007-083903",
        "primary_object_url": {
            "basename": "Hofstee_hp_1995.pdf",
            "content": "final",
            "filesize": 4307477,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4036/1/Hofstee_hp_1995.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Synchronizing processes",
        "author": [
            {
                "family_name": "Hofstee",
                "given_name": "H. Peter",
                "clpid": "Hofstee-H.-Peter"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Van de Snepscheut",
                "given_name": "Jan L. A.",
                "clpid": "Van-de-Snepscheut-J-L-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Van de Snepscheut",
                "given_name": "Jan L. A.",
                "clpid": "Van-de-Snepscheut-J-L-A"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Bagrodia",
                "given_name": "Rajive",
                "clpid": "Bagrodia-R"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "In this monograph we develop a mathematical theory for a concurrent language based on angelic and demonic nondeterminism. An underlying model is defined with sets of sets of sequences of synchronization actions. A refinement relation is defined for the model, and equivalence classes under this relation are identified with processes. Processes, together with the refinement relation, form a complete distributive lattice.\r\n\r\n\tWe define a language with parallel composition, sequential composition, angelic and demonic nondeterminism, and an operator that connects pairs of synchronization actions into synchronization statements and hides these actions from observation. Also, angelic and demonic iteration are defined. All operators are monotonic with respect to the refinement ordering. Many algebraic properties are proven from these definitions. We study duals of processes and prove that they can be related to the most demonic environment in which a process will not deadlock. We give a simple example to illustrate the use of duals.\r\n\r\n\tWe study classes of programs for which angelic choice can be implemented by probing the environment for its next action. To this end specifications of processes are extended with simple conditions on the environment. We give a more elaborate example to illustrate the use of these conditions and the compositionality of the method.\r\n\r\n\tFinally we briefly introduce an operational model that describes implementable processes only. This model mentions probes explicitly. Such a model may form a basis for a language that is less restrictive than ours, but that will also have less attractive algebraic properties.\r\n",
        "doi": "10.7907/G620-GG65",
        "publication_date": "1995",
        "thesis_type": "phd",
        "thesis_year": "1995"
    },
    {
        "id": "thesis:4110",
        "collection": "thesis",
        "collection_id": "4110",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-10162007-093427",
        "primary_object_url": {
            "basename": "Van_der_goot_mr_1995.pdf",
            "content": "final",
            "filesize": 6021129,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4110/1/Van_der_goot_mr_1995.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Semantics of VLSI synthesis",
        "author": [
            {
                "family_name": "Van der Goot",
                "given_name": "Marcel Rene",
                "clpid": "Van-der-Goot-M-R"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Sanders",
                "given_name": "Beverly",
                "clpid": "Sanders-B"
            },
            {
                "family_name": "Hofstee",
                "given_name": "H. Peter",
                "clpid": "Hofstee-H-P"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "We develop a new form of formal operational semantics, suitable for concurrent programming languages. The semantics directly supports sequential and parallel composition, rendezvous synchronization, shared variables, and non-determinism. Based on an abstract notion of program execution, a refinement relation is defined. We show how the refinement relation can be used to prove that one program implements another.\r\n\r\nWe use the operational semantics as a semantic framework for a synthesis method for asynchronous VLSI circuits. We define the semantics of the programming notations that are used, and use the refinement relation to prove the correctness of the program transformations that form the basis of the synthesis method. Among other transformations, we proof the correctness of the replacement of atomic synchronization actions by handshake protocols, and the transformation of a sequence of actions into a network of concurrently executing gates.\r\n",
        "doi": "10.7907/SR5V-KT18",
        "publication_date": "1995",
        "thesis_type": "phd",
        "thesis_year": "1995"
    },
    {
        "id": "thesis:4849",
        "collection": "thesis",
        "collection_id": "4849",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-12072007-131639",
        "primary_object_url": {
            "basename": "Seizovic_jn_1994.pdf",
            "content": "final",
            "filesize": 5533741,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/4849/1/Seizovic_jn_1994.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "The architecture and programming of a fine-grain multicomputer",
        "author": [
            {
                "family_name": "Seizovic",
                "given_name": "Jakov N.",
                "clpid": "Seizovic-Jakov-N"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            },
            {
                "family_name": "Van de Velde",
                "given_name": "Eric",
                "clpid": "van-de-Velde-E"
            },
            {
                "family_name": "Van de Snepscheut",
                "given_name": "Jan L. A.",
                "clpid": "Van-de-Snepscheut-J-L-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "NOTE: Text or symbols not renderable in plain ASCII are indicated by [...]. Abstract is included in .pdf document.\r\n\r\nThe research presented in this thesis was conducted in the context of the Mosaic C, an experimental, fine-grain multicomputer. The objective of the Mosaic experiment was to develop a concurrent-computing system with maximum performance per unit cost, while still retaining a general-purpose application span. A stipulation of the Mosaic project was that the complexity of a Mosaic node be limited by the silicon complexity available on a single VLSI chip.\r\n\r\nThe two most important original results reported in the thesis are:\r\n\r\n\u2022 The design and implementation of C+-, a concurrent, object-oriented programming system.\r\n\r\nSyntactically, C+- is an extension of C++. The concurrent semantics of C+- are contained within the process concept. A C+- process is analogous to a C++ object, but it is also an autonomous computing agent, and a unit of potential concurrency. Atomic single-process updates that can be individually enabled and disabled are the execution units of the concurrent computation. The limited set of primitives that C+- provides is shown to be sufficient to express a variety of concurrent-programming problems concisely and efficiently.\r\n\r\nAn important design requirement for C+- was that efficient implementations should exist on a variety of concurrent architectures, and, in particular, on the simple and inexpensive hardware of the Mosaic node. 
The Mosaic runtime system was written entirely in C+-.\r\n\r\n\u2022 Pipeline synchronization, a novel, generally-applicable technique for hardware synchronization.\r\n\r\nThis technique is a simple, low-cost, high-bandwidth, high-reliability solution to interfaces between synchronous and asynchronous systems, or between synchronous systems operating from different clocks.\r\n\r\nThe technique can sustain the full communication bandwidth and achieve an arbitrarily low, non-zero probability of synchronization failure, P[subscript f], with the price in both latency and chip area being [...].\r\n\r\nPipeline synchronization has been successfully applied to the high-performance inter-computer communication in Mosaic node ensembles.",
        "doi": "10.7907/53vc-hs15",
        "publication_date": "1994",
        "thesis_type": "phd",
        "thesis_year": "1994"
    },
    {
        "id": "thesis:454",
        "collection": "thesis",
        "collection_id": "454",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-02022005-162907",
        "primary_object_url": {
            "basename": "Mouchtaris_pn_1993.pdf",
            "content": "final",
            "filesize": 2914850,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/454/1/Mouchtaris_pn_1993.pdf",
            "version": "v2.0.0"
        },
        "type": "thesis",
        "title": "Analysis of an interactive video architecture",
        "author": [
            {
                "family_name": "Mouchtaris",
                "given_name": "Petros N.",
                "clpid": "Mouchtaris-P-N"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Posner",
                "given_name": "Edward C.",
                "clpid": "Posner-E-C"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Posner",
                "given_name": "Edward C.",
                "clpid": "Posner-E-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Vaidyanathan",
                "given_name": "P. P.",
                "clpid": "Vaidyanathan-P-P"
            },
            {
                "family_name": "McEliece",
                "given_name": "Robert J.",
                "clpid": "McEliece-R-J"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "A new residential application for interactive video is proposed. There is a service provider that prepares and distributes daily news programs customized to subscriber interest. The provider assembles the programs from short news clips and uses a profile data base of subscribers for selecting the appropriate clips. The time of viewing the program can be selected by the customers in near-real-time. We model this service and propose a network architecture that can support it. There is a main node that contains most of the storage and sourcing facilities, and an intermediate node to which all customers are connected. Multicasting is used as much as possible for reducing the traffic load on the network. In addition to that, popular material is stored in the intermediate node which is closer to the customers, which further decreases the traffic load.\n\nOur main concern is the time that a customer has to wait until he starts getting his program. This time is a function of the capacity of the link that connects the main node to the intermediate node, the so-called main link. The case that the main link can only transport a single video connection is considered first. We propose a recurrent algorithm that calculates the probabilities of the states and uses them for evaluating the expected wait, and prove that there is a very simple relationship between the expected wait and the probabilities of the states. A simplified analysis that directly computes the expected wait is proposed next. This approach is computationally more efficient but does not give us any information about the probabilities of the states.\n\t\nFor the general case that the main link can transport more than one video connection, we generalize the recurrent algorithm that calculates the probabilities of the states and the simple relationship between the expected wait and the probabilities of the states. 
For the cases that the complexity of our algorithm is too large, we propose and evaluate three approximate techniques for estimating the expected wait. In the first technique we use the results for the case that a main link can only transport a single connection for estimating the results for the general case. In the second technique we use the idea of rescaling time. In the third, motivated by the fluid-flow theory, we solve a deterministic problem and use the results of that problem for estimating the expected wait for the problem we are interested in. We show that these approximate techniques compare well with simulations. Thus, we can now decide what the capacity of the main link should be so that our system has the desired performance, and we can do that even if the number of customers is very large.\n",
        "doi": "10.7907/A9Z4-N267",
        "publication_date": "1993",
        "thesis_type": "phd",
        "thesis_year": "1993"
    },
    {
        "id": "thesis:3136",
        "collection": "thesis",
        "collection_id": "3136",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-08152007-074128",
        "primary_object_url": {
            "basename": "Steele_cs_1992.pdf",
            "content": "final",
            "filesize": 8115425,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/3136/1/Steele_cs_1992.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Affinity : a concurrent programming system for multicomputers",
        "author": [
            {
                "family_name": "Steele",
                "given_name": "Craig S.",
                "clpid": "Steele-C-S"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            },
            {
                "family_name": "Rees",
                "given_name": "Douglas C.",
                "clpid": "Rees-D-C"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Abu-Mostafa",
                "given_name": "Yaser S.",
                "clpid": "Abu-Mostafa-Y-S"
            },
            {
                "family_name": "Taylor",
                "given_name": "Stephen",
                "clpid": "Taylor-S"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Affinity is an experiment to explore a simple, convenient, and expressive programming model that provides adequate power for complex programming tasks while setting few constraints on potential concurrency. Although the programmer is required to formulate a computational problem explicitly into medium-sized pieces of data and code, most of the additional functions necessary for concurrent execution are implicit. The execution of the light-weight, reactive processes, called actions, implicitly induces atomicity and consistency of data modifications. The programmer accesses shared data structures in a shared-memory fashion, but without the need for explicit locking to manage the problems of concurrent access and mutual exclusion. Program control flow is distributed and implicit.\n\nThe name given to the programming model, Affinity, has a definition, \"causal connection or relationship,\" that is fitting to the way programs are structured and scheduled.\n\nAffinity consistency and coherence properties provide a tractable discipline for the dangerous power of a concurrent, shared-memory programming style. Existing programming complexity-management techniques such as object-oriented languages can be used in this multicomputer environment. Affinity programs can compute consistent and correct results despite staleness of data, and asynchrony and nondeterminism in execution of code. Program correctness is invariant under replication, or cloning, of actions. This aspect of the model yields a simple and robust mechanism for fault-tolerance.\n\nThe practicality of the Affinity programming model has been demonstrated by an implementation on a second-generation multicomputer, the Ametek S/2010. The implementation is distributed, scalable, and relatively insensitive to network latency. Affinity has demonstrated reasonable efficiency and performance for computations with tens of processing nodes, hundreds of actions, and thousands of shared data structures.",
        "doi": "10.7907/syrm-sx30",
        "publication_date": "1992",
        "thesis_type": "phd",
        "thesis_year": "1992"
    },
    {
        "id": "thesis:6691",
        "collection": "thesis",
        "collection_id": "6691",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:09282011-075406850",
        "primary_object_url": {
            "basename": "Barzel_r_1992.pdf",
            "content": "final",
            "filesize": 57807845,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/6691/1/Barzel_r_1992.pdf",
            "version": "v4.0.0"
        },
        "type": "thesis",
        "title": "A structured approach to physically-based modeling for computer graphics",
        "author": [
            {
                "family_name": "Barzel",
                "given_name": "Ronen",
                "clpid": "Barzel-R"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Barr",
                "given_name": "Alan H.",
                "clpid": "Barr-A-H"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Barr",
                "given_name": "Alan H.",
                "clpid": "Barr-A-H"
            },
            {
                "family_name": "Kajiya",
                "given_name": "James Thomas",
                "clpid": "Kajiya-J-T"
            },
            {
                "family_name": "Mead",
                "given_name": "Carver",
                "orcid": "0000-0003-4051-0462",
                "clpid": "Mead-C-A"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "orcid": "0000-0001-9190-1290",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Burdick",
                "given_name": "Joel Wakeman",
                "orcid": "0000-0002-3091-540X",
                "clpid": "Burdick-J-W"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>This thesis presents a framework for the design of physically-based computer graphics models. The framework includes a paradigm for the structure of physically-based models, techniques for \"structured\" mathematical modeling, and a specification of a computer program structure in which to implement the models. The framework is based on known principles and methodologies of structured programming and mathematical modeling. Because the framework emphasizes the structure and organization of models, we refer to it as \"Structured Modeling.\"</p>\r\n\r\n<p>The Structured Modeling framework focuses on clarity and \"correctness\" of models, emphasizing explicit statement of assumptions, goals, and techniques. In particular, we partition physically-based models, separating them into conceptual and mathematical models, and posed problems. We control complexity of models by designing in a modular manner, piecing models together from smaller components.</p>\r\n\r\n<p>The framework places a particular emphasis on defining a complete formal statement of a model's mathematical equations, before attempting to simulate the model. To manage the complexity of these equations, we define a collection of mathematical constructs, notation, and terminology, that allow mathematical models to be created in a structured and modular manner.</p>\r\n\r\n<p>We construct a computer programming environment that directly supports the implementation of models designed using the above techniques. The environment is geared to a tool-oriented approach, in which models are built from an extensible collection of software objects, that correspond to elements and tasks of a \"blackboard\" design of models.</p>\r\n\r\n<p>A substantial portion of this thesis is devoted to developing a library of physically-based model \"modules,\" including rigid-body kinematics, rigid-body dynamics, and dynamic constraints, all built with the Structured Modeling framework. 
These modules are intended to serve both as examples of the framework, and as potentially useful tools for the computer graphics community. Each module includes statements of goals and assumptions, explicit mathematical models and problem statements, and descriptions of software objects that support them. We illustrate the use of the library to build some sample models, and include discussion of various possible additions and extensions to the library.</p>\r\n\r\n<p>Structured Modeling is an experiment in modeling: an exploration of designing via strict adherence to a dogma of structure, modularity, and mathematical formality. It does not stress issues such as particular numerical simulation techniques or efficiency of computer execution time or memory usage, all of which are important practical considerations in modeling. However, at least so far as the work carried on in this thesis, Structured Modeling has proven to be a useful aid in the design and understanding of complex physically based models.</p>",
        "doi": "10.7907/tbgd-g285",
        "publication_date": "1992",
        "thesis_type": "phd",
        "thesis_year": "1992"
    },
    {
        "id": "thesis:2835",
        "collection": "thesis",
        "collection_id": "2835",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-07092007-072640",
        "primary_object_url": {
            "basename": "Burns_sm_1991.pdf",
            "content": "final",
            "filesize": 6219416,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2835/1/Burns_sm_1991.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Performance analysis and optimization of asynchronous circuits",
        "author": [
            {
                "family_name": "Burns",
                "given_name": "Steven Morgan",
                "clpid": "Burns-Steven-Morgan"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            },
            {
                "family_name": "Van de Snepscheut",
                "given_name": "Jan L. A.",
                "clpid": "Van-de-Snepscheut-J-L-A"
            },
            {
                "family_name": "Franklin",
                "given_name": "Joel N.",
                "clpid": "Franklin-J-N"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "Analytical techniques are developed to determine the performance of asynchronous digital circuits. These techniques can be used to guide the designer during the synthesis of such a circuit, leading to a high-performance, efficient implementation. Optimization techniques are also developed that further improve this implementation by determining the optimal sizes of the low-level devices (CMOS transistors) that compose the circuit.\r\n",
        "doi": "10.7907/kez1-7q52",
        "publication_date": "1991",
        "thesis_type": "phd",
        "thesis_year": "1991"
    },
    {
        "id": "thesis:2740",
        "collection": "thesis",
        "collection_id": "2740",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-06272007-081805",
        "primary_object_url": {
            "basename": "Gupta_r_1991.pdf",
            "content": "final",
            "filesize": 4704494,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/2740/1/Gupta_r_1991.pdf",
            "version": "v3.0.0"
        },
        "type": "thesis",
        "title": "Compiler Optimization of Data Storage",
        "author": [
            {
                "family_name": "Gupta",
                "given_name": "Rajiv",
                "clpid": "Gupta-Rajiv"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Kajiya",
                "given_name": "James Thomas",
                "clpid": "Kajiya-J-T"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Kajiya",
                "given_name": "James Thomas",
                "clpid": "Kajiya-J-T"
            },
            {
                "family_name": "Barr",
                "given_name": "Alan H.",
                "clpid": "Barr-A-H"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The system efficiency and throughput of most architectures are critically dependent on the ability of the memory subsystem to satisfy data operand accesses. This ability is in turn dependent on the distribution or layout of the data relative to the access of the data by the executing code. Page faults, cache misses, truncated vectors, global communication, for example, are expensive but common symptoms of data and access misalignment.</p>\r\n\r\n<p>Compiler optimization, traditionally synonymous with code optimization, has addressed the issue of efficient data access by manipulating the code to better access the data under a fixed, default distribution. This approach is restrictive, and often suboptimal. Data optimization, or data-layout optimization, is presented as an integral part of compiler optimization.</p>\r\n\r\n<p>For scalar data, a good compile-time approximation of the \"reference string,\" or sequence of data accesses, is advanced for the purpose of distributing the data. However, the optimal distribution of the scalar data for such, or any, reference string is proved NP-complete. A methodology and a polynomial algorithm for an approximate solution are developed. Experiments with representative, but scaled, scientific programs and execution environments display a reduction in cache misses up to two orders in magnitude.</p>\r\n\r\n<p>For array data, compile-time predictions of the patterns in which the data is accessed by programs in scalar and array languages are examined. For arbitrary computations in an array language, the determination of the optimal layout of the data is proved to be NP-complete. Polynomial techniques for the approximate solutions to the optimal layout of arrays in both languages, scalar and array, are outlined.</p>\r\n\r\n<p>The general applicability of the techniques, in terms of environments other than hierarchical memories, and in terms of interdependence with code manipulations, is discussed. 
New code optimizations inspired by the data distribution techniques are motivated. The prudence of compiler- over user-optimized data distribution is argued.</p>",
        "doi": "10.7907/E8DD-VG68",
        "publication_date": "1991",
        "thesis_type": "phd",
        "thesis_year": "1991"
    },
    {
        "id": "thesis:6862",
        "collection": "thesis",
        "collection_id": "6862",
        "cite_using_url": "https://resolver.caltech.edu/CaltechTHESIS:03222012-091423469",
        "primary_object_url": {
            "basename": "Su_w-k_1990.pdf",
            "content": "final",
            "filesize": 29003250,
            "license": "other",
            "mime_type": "application/pdf",
            "url": "/6862/1/Su_w-k_1990.pdf",
            "version": "v5.0.0"
        },
        "type": "thesis",
        "title": "Reactive-Process Programming and Distributed Discrete-Event Simulation",
        "author": [
            {
                "family_name": "Su",
                "given_name": "Wen-King",
                "clpid": "Su-Wen-King"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Sturtevant",
                "given_name": "Bradford",
                "clpid": "Sturtevant-B"
            },
            {
                "family_name": "Van de Velde",
                "given_name": "Eric",
                "clpid": "van-de-Velde-E"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>The same forces that spurred the development of multicomputers - the demand for\r\nbetter performance and economy - are driving the evolution of multicomputers in\r\nthe direction of more abundant and less expensive computing nodes - the direction\r\nof fine-grain multicomputers. This evolution in multicomputer architecture derives\r\nfrom advances in integrated circuit, packaging, and message-routing technologies,\r\nand carries far-reaching implications in programming and applications. This thesis\r\npursues that trend with a balanced treatment of multicomputer programming and\r\napplications. First, a reactive-process programming system - Reactive-C - is\r\ninvestigated; then, a model application- discrete-event simulation - is developed;\r\nfinally, a number of logic-circuit simulators written in the Reactive-C notation are\r\nevaluated.</p>\r\n\r\n<p>One difficulty m multicomputer applications is the inefficiency of many distributed\r\nalgorithms compared to their sequential counterparts. When better formulations\r\nare developed, they often scale poorly with increasing numbers of nodes,\r\nand their beneficial effects eventually vanish when many nodes are used. However,\r\nrules for programming are quite different when nodes are plentiful and cheap: The\r\nprimary concern is to utilize all of the concurrency available in an application, rather\r\nthan to utilize all of the computing cycles available in a machine. We have shown in\r\nour research that it is possible to extract the maximum concurrency of a simulation\r\nsubject, even one as difficult as a logic circuit, when one simulation element is assigned\r\nto each node. Despite the initial inefficiency of a straightforward algorithm,\r\nas the the number of nodes increases, the computation time decreases linearly until\r\nthere are only a few elements in each node. 
We conclude by suggesting a technique\r\nto further increase the available concurrency when there are many more nodes than\r\nsimulation elements.</p>",
        "doi": "10.7907/9qzd-kv20",
        "publication_date": "1990",
        "thesis_type": "phd",
        "thesis_year": "1990"
    },
    {
        "id": "thesis:630",
        "collection": "thesis",
        "collection_id": "630",
        "cite_using_url": "https://resolver.caltech.edu/CaltechETD:etd-02132007-153533",
        "type": "thesis",
        "title": "A Framework for Adaptive Routing in Multicomputer Networks",
        "author": [
            {
                "family_name": "Ngai",
                "given_name": "John Yee-Keung",
                "clpid": "Ngai-John-Yee-Keung"
            }
        ],
        "thesis_advisor": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            }
        ],
        "thesis_committee": [
            {
                "family_name": "Seitz",
                "given_name": "Charles L.",
                "clpid": "Seitz-C-L"
            },
            {
                "family_name": "Martin",
                "given_name": "Alain J.",
                "clpid": "Martin-A-J"
            },
            {
                "family_name": "Posner",
                "given_name": "Edward C.",
                "clpid": "Posner-E-C"
            },
            {
                "family_name": "Franklin",
                "given_name": "Joel N.",
                "clpid": "Franklin-J-N"
            },
            {
                "family_name": "Chandy",
                "given_name": "K. Mani",
                "clpid": "Chandy-K-M"
            }
        ],
        "local_group": [
            {
                "literal": "div_eng"
            }
        ],
        "abstract": "<p>Message-passing concurrent computers, also known as multicomputers, such as the Caltech Cosmic Cube [47] and its commercial descendents, consist of many computing nodes that interact with each other by sending and receiving messages over communication channels between the nodes. The communication networks of the second-generation machines, such as the Symult Series 2010 and the Intel iPSC2 [2], employ an oblivious wormhole-routing technique that guarantees deadlock freedom. The network performance of this highly evolved oblivious technique has reached a limit of being capable of delivering, under random traffic, a stable maximum sustained throughput of \u2248 45 to 50% of the limit set by the network bisection bandwidth, while maintaining acceptable network latency. This thesis examines the possibility of performing adaptive routing as an approach to further improving upon the performance and reliability of these networks. In an adaptive multipath routing scheme, message trajectories are no longer deterministic, but are continuously perturbed by local message loading. Message packets will tend to follow their shortest-distance routes to destinations in normal traffic loading, but can be detoured to longer but less-loaded routes as local congestion occurs.</p>\r\n\r\n<p>A simple adaptive cut-through packet-switching framework is described, and a number of fundamental issues concerning the theoretical feasibility of the adaptive approach are studied. Freedom of communication deadlock is achieved by following a coherent channel protocol and by applying voluntary misrouting as needed. Packet deliveries are assured by resolving channel-access conflicts according to a priority assignment. 
Fairness of network access is assured either by sending round-trip packets or by having each node follow a local injection-synchronization protocol.</p>\r\n\r\n<p>The performance behavior of the proposed adaptive cut-through framework is studied with stochastic modeling and analysis, as well as through extensive simulation experiments for the 2D and 3D rectilinear networks. Theoretical bounds on various average network-performance metrics are derived for these rectilinear networks. These bounds provide a standard frame of reference for interpreting the performance results.</p>\r\n\r\n<p>In addition to the potential gain in network performance, the adaptive approach offers the potential for exploiting the inherent path redundancy found in richly connected networks in order to perform fault-tolerant routing. Two convexity-related notions are introduced to characterize the conditions under which our adaptive routing formulation is adequate to provide fault-tolerant routing, with minimal change in routing hardware. The effectiveness of these notions is studied through extensive simulations. The 2D octagonal-mesh network is suggested; this displays excellent fault-tolerant potential under the adaptive routing framework. Both performance and reliability behaviors of the octagonal mesh are studied in detail.</p>\r\n\r\n<p>A number of implementation issues are examined. Encoding schemes for packet headers that admit simple incremental updates while providing all necessary routing information in the first flit of a relatively narrow flit width are developed. A pipelined control structure that allows a packet to cut through an intermediate node with a minimum delay of two cycles is described. A distributed clocking scheme is developed that eliminates the problem of global clock-signal distribution. Under this clocking scheme, the adaptive routers can be tessellated to form a network of arbitrary size.</p>\r\n\r\n<p>[2] W.C. Athas and C.L. 
Seitz., \"Multicomputers: Message-Passing Concurrent Computers,\" IEEE Computer, August 1988, pp. 9-24.</p>\r\n\r\n<p>[47] C.L. Seitz, \"The Cosmic Cube,\" CACM, Vol. 28, No. 1, January 1985, pp. 22-33.</p>",
        "doi": "10.7907/a01h-0z81",
        "publication_date": "1989",
        "thesis_type": "phd",
        "thesis_year": "1989"
    }
]