CaltechAUTHORS: Article
https://feeds.library.caltech.edu/people/Doyle-J-C/article.rss
A Caltech Library Repository Feed
Fri, 17 May 2024 19:24:20 -0700

Guaranteed margins for LQG regulators
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210560
Year: 1978
DOI: 10.1109/tac.1978.1101812
There are none.

Robustness with observers
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210874
Year: 1979
DOI: 10.1109/tac.1979.1102095
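The loop-transfer-recovery idea summarized in this entry can be sketched on a hypothetical scalar plant (the plant data and noise intensities below are illustrative choices, not from the paper): as the fictitious process-noise intensity q grows, the observer-based loop transfer approaches the full-state-feedback one.

```python
import numpy as np

a, b, c = 1.0, 1.0, 1.0   # unstable first-order plant G(s) = 1/(s - 1)

# LQR gain (Q = R = 1): X solves 2*a*X - X^2 + 1 = 0
K = 1 + np.sqrt(2.0)

def loop_gap(q, w=1.0):
    """|C(jw)G(jw) - K(jw - a)^-1 b| for an observer designed with
    fictitious process-noise intensity q (the recovery tuning knob)."""
    P = 1 + np.sqrt(1 + q)            # filter Riccati: 2*a*P - P^2 + q = 0
    L = P * c                         # observer gain
    s = 1j * w
    C = K * L / (s - a + b * K + L * c)   # observer-based controller, y -> u
    G = c * b / (s - a)
    target = K * b / (s - a)          # full-state-feedback loop transfer
    return abs(C * G - target)

gaps = [loop_gap(q) for q in (1e1, 1e3, 1e5)]
print(gaps)   # shrinks as q grows
```

The recovery is asymptotic, as the abstract says: the gap shrinks but only vanishes in the limit.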
This paper describes an adjustment procedure for observer-based linear control systems which asymptotically achieves the same loop transfer functions (and hence the same relative stability, robustness, and disturbance rejection properties) as full-state feedback control implementations.

Multivariable feedback design: Concepts for a classical/modern synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210959
Year: 1981
DOI: 10.1109/tac.1981.1102555
This paper presents a practical design perspective on multivariable feedback control problems. It reviews the basic issue, feedback design in the face of uncertainties, and generalizes known single-input, single-output (SISO) statements and constraints of the design problem to multi-input, multi-output (MIMO) cases. Two major MIMO design approaches are then evaluated in the context of these results.

Analysis of feedback systems with structured uncertainties
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211227
Year: 1982
DOI: 10.1049/ip-d.1982.0053
The paper introduces a general approach for analysing linear systems with structured uncertainty based on a new generalised spectral theory for matrices. The results of the paper naturally extend techniques based on singular values and eliminate their most serious difficulties.

Digikon and Honey-X: Interactive packages for control system analysis and design
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211049
Year: 1982
DOI: 10.1109/mcs.1982.1103759
[no abstract]

Linear Control Theory with an ℋ∞ Optimality Criterion
https://resolver.caltech.edu/CaltechAUTHORS:FRAsiamjco87
Year: 1987
DOI: 10.1137/0325046
This expository paper sets out the principal results in ℋ∞ control theory in the context of continuous-time linear systems. The focus is on the mathematical theory rather than computational methods.

Robust control of ill-conditioned plants: high-purity distillation
https://resolver.caltech.edu/CaltechAUTHORS:SKOieeetac88
Year: 1988
DOI: 10.1109/9.14431
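The ill-conditioning at issue in this entry can be checked numerically on the 2x2 steady-state gain matrix commonly used for this high-purity distillation example (values as reported in the related Skogestad-Morari distillation studies; a minimal sketch):

```python
import numpy as np

# Steady-state gain matrix of the high-purity distillation column.
G = np.array([[87.8, -86.4],
              [108.2, -109.6]])

cond = np.linalg.cond(G)          # ratio of largest to smallest singular value
rga = G * np.linalg.inv(G).T      # relative gain array

print(round(cond, 1))             # roughly 141: severely ill-conditioned
print(round(rga[0, 0], 1))        # roughly 35: very large relative gains
```

The huge condition number and RGA entries are what make independent actuator (input) errors so damaging here, and why singular-value loop shaping alone is inadequate.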
Using a high-purity distillation column as an example, the physical reason for the poor conditioning and its implications for control system design and performance are explained. It is shown that an acceptable performance/robustness tradeoff cannot be obtained by simple loop-shaping techniques (using singular values) and that a good understanding of the model uncertainty is essential for robust control system design. Physically motivated uncertainty descriptions (actuator uncertainties) are translated into the H∞/structured singular value framework, which is demonstrated to be a powerful tool to analyze and understand the complex phenomena.

State-space solutions to standard H2 and H∞ control problems
https://resolver.caltech.edu/CaltechAUTHORS:DOYieeetac89
Year: 1989
DOI: 10.1109/9.29425
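The existence test summarized in this entry can be sketched for a hypothetical scalar plant (all plant data below are illustrative): the two Riccati equations become quadratics, and a controller achieving the bound exists when their stabilizing solutions are nonnegative with X·Y < γ².

```python
import numpy as np

# Hypothetical scalar plant data (not from the paper).
a, b1, b2, c1, c2 = 1.0, 1.0, 1.0, 1.0, 1.0

def stabilizing_care(a, r, q):
    """Stabilizing root of r*x^2 + 2*a*x + q = 0, i.e. the one with a + r*x < 0."""
    roots = np.roots([r, 2 * a, q])
    for x in np.real(roots[np.isreal(roots)]):
        if a + r * x < 0:
            return x
    return None

def hinf_exists(gamma):
    X = stabilizing_care(a, b1**2 / gamma**2 - b2**2, c1**2)   # control Riccati
    Y = stabilizing_care(a, c1**2 / gamma**2 - c2**2, b1**2)   # dual filter Riccati
    return (X is not None and Y is not None
            and X >= 0 and Y >= 0 and X * Y < gamma**2)

print(hinf_exists(3.0))   # True: gamma = 3 is achievable for this plant
print(hinf_exists(1.0))   # False: gamma = 1 is not
```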
Simple state-space formulas are derived for all controllers solving the following standard H∞ problem: For a given number γ>0, find all controllers such that the H∞ norm of the closed-loop transfer function is (strictly) less than γ. It is known that a controller exists if and only if the unique stabilizing solutions to two algebraic Riccati equations are positive definite and the spectral radius of their product is less than γ². Under these conditions, a parameterization of all controllers solving the problem is given as a linear fractional transformation (LFT) on a contractive, stable, free parameter. The state dimension of the coefficient matrix for the LFT, constructed using the two Riccati solutions, equals that of the plant and has a separation structure reminiscent of classical LQG (i.e. H2) theory. This paper is intended to be of tutorial value, so a standard H2 solution is developed in parallel.

Quadratic stability with real and complex perturbations
https://resolver.caltech.edu/CaltechAUTHORS:20190313-100417671
Year: 1990
DOI: 10.1109/9.45179
It is shown that the equivalence between real and complex perturbations in the context of quadratic stability with respect to linear fractional unstructured perturbations does not hold when the perturbations are block structured. For a limited class of problems, quadratic stability in the face of structured complex perturbations is equivalent to a particular class of scaled norms, and hence appropriate synthesis techniques, coupled with diagonal constant scalings, can be used to design quadratically stable systems.

Identification of flexible structures for robust control
https://resolver.caltech.edu/CaltechAUTHORS:BALieeecsm90
Year: 1990
DOI: 10.1109/37.56278
Documentation is provided of the authors' experience with modeling and identification of an experimental flexible structure for the purpose of control design, with the primary aim being to motivate some important research directions in this area. A multi-input/multi-output (MIMO) model of the structure is generated using the finite element method. This model is inadequate for control design, due to its large variation from the experimental data. Chebyshev polynomials are employed to fit the data with single-input/multi-output (SIMO) transfer function models. Combining these SIMO models leads to a MIMO model with more modes than the original finite element model. To find a physically motivated model, an ad hoc model reduction technique which uses a priori knowledge of the structure is developed. The ad hoc approach is compared with balanced realization model reduction to determine its benefits. Descriptions of the errors between the model and experimental data are formulated for robust control design. Plots of select transfer function models and experimental data are included.

A J-Spectral Factorization Approach to ℋ∞ Control
https://resolver.caltech.edu/CaltechAUTHORS:20120508-131811131
Year: 1990
DOI: 10.1137/0328071
Necessary and sufficient conditions for the existence of suboptimal solutions to the standard model matching problem associated with ℋ∞ control are derived using J-spectral factorization theory. The existence of solutions to the model matching problem is shown to be equivalent to the existence of solutions to two coupled J-spectral factorization problems, with the second factor providing a parametrization of all solutions to the model matching problem. The existence of the J-spectral factors is then shown to be equivalent to the existence of nonnegative definite, stabilizing solutions to two indefinite algebraic Riccati equations, allowing a state-space formula for a linear fractional representation of all controllers to be given. A virtue of the approach is that a very general class of problems may be tackled within a conceptually simple framework, and no additional auxiliary Riccati equations are required.

Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics
https://resolver.caltech.edu/CaltechAUTHORS:FANieeetac91
Year: 1991
DOI: 10.1109/9.62265
Continuing the development of the structured singular value approach to robust control design, the authors investigate the problem of computing μ in the case of mixed real parametric and complex uncertainty. The problem is shown to be equivalent to a smooth constrained finite-dimensional optimization problem. In view of the fact that the functional to be maximized may have several local extrema, an upper bound on μ whose computation is numerically tractable is established; this leads to a sufficient condition of robust stability and performance. A historical perspective on the development of the μ theory is included.

A Characterization of all Solutions to the Four Block General Distance Problem
https://resolver.caltech.edu/CaltechAUTHORS:20120419-081456730
Year: 1991
DOI: 10.1137/0329016
All solutions to the four block general distance problem which arises in H∞ optimal control are characterized. The procedure is to embed the original problem in an all-pass matrix which is constructed. It is then shown that part of this all-pass matrix acts as a generator of all solutions. Special attention is given to the characterization of all optimal solutions by invoking a new descriptor characterization of all-pass transfer functions. As an application, necessary and sufficient conditions are found for the existence of an H∞ optimal controller. Following that, a descriptor representation of all solutions is derived.

Model validation: a connection between robust control and identification
https://resolver.caltech.edu/CaltechAUTHORS:SMIieeetac92
Year: 1992
DOI: 10.1109/9.148346
The gap between the models used in control synthesis and those obtained from identification experiments is considered by investigating the connection between uncertain models and data. The model validation problem addressed is: given experimental data and a model with both additive noise and norm-bounded perturbations, is it possible that the model could produce the observed input-output data? This problem is studied for the standard H∞/μ framework models. A necessary condition for such a model to describe an experimental datum is obtained. For a large class of models in the robust control framework, this condition is computable as the solution of a quadratic optimization problem.

The complex structured singular value
https://resolver.caltech.edu/CaltechAUTHORS:20170408-142303477
Year: 1993
DOI: 10.1016/0005-1098(93)90175-S
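The basic computable bound behind the μ tests surveyed in this entry can be sketched on a toy 2x2 matrix with two 1x1 complex uncertainty blocks (illustrative values, not from the paper): μ(M) is bounded above by the infimum over positive diagonal scalings D of the largest singular value of D·M·D⁻¹.

```python
import numpy as np

# Toy matrix with a diagonal (two 1x1 complex blocks) uncertainty structure.
M = np.array([[0.0, 2.0],
              [0.5, 0.0]])

def scaled_norm(d):
    """Largest singular value of D M D^-1 with D = diag(d, 1)."""
    D = np.diag([d, 1.0])
    return np.linalg.norm(D @ M @ np.linalg.inv(D), 2)

ds = np.exp(np.linspace(-4.0, 4.0, 4001))   # log-spaced grid of scalings
bound = min(scaled_norm(d) for d in ds)

print(round(np.linalg.norm(M, 2), 2))   # 2.0 : unscaled norm is conservative
print(round(bound, 2))                  # 1.0 : scaled bound; here mu = 1 exactly
```

For this anti-diagonal example μ = sqrt(|m12·m21|) = 1, so the D-scaled bound is tight; for general mixed problems the bound can be strictly larger than μ.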
A tutorial introduction to the complex structured singular value (μ) is presented, with an emphasis on the mathematical aspects of μ. The μ-based methods discussed here have been useful for analysing the performance and robustness properties of linear feedback systems. Several tests for robust stability and performance with computable bounds for transfer functions and their state space realizations are compared, and a simple synthesis problem is studied. Uncertain systems are represented using Linear Fractional Transformations (LFTs) which naturally unify the frequency-domain and state space methods.

An example of active circulation control of the unsteady separated flow past a semi-infinite plate
https://resolver.caltech.edu/CaltechAUTHORS:CORjfm94
Year: 1994
DOI: 10.1017/S0022112094003460
Active circulation control of the two-dimensional unsteady separated flow past a semi-infinite plate with transverse motion is considered. The rolling-up of the separated shear layer is modelled by a point vortex whose time-dependent circulation is predicted by an unsteady Kutta condition. A suitable vortex shedding mechanism is introduced. A control strategy able to maintain constant circulation when a vortex is present is derived. An exact solution for the nonlinear controller is then obtained. Dynamical systems analysis is used to explore the performance of the controlled system. The control strategy is applied to a class of flows and the results are discussed. A procedure to determine the position and the circulation of the vortex, knowing the velocity signature on the plate, is derived. Finally, a physical explanation of the control mechanism is presented.

Computational complexity of μ calculation
https://resolver.caltech.edu/CaltechAUTHORS:BRAieeetac94
Year: 1994
DOI: 10.1109/9.284879
The structured singular value μ measures the robustness of uncertain systems. Numerous researchers over the last decade have worked on developing efficient methods for computing μ. This paper considers the complexity of calculating μ with general mixed real/complex uncertainty in the framework of combinatorial complexity theory. In particular, it is proved that the μ recognition problem with either pure real or mixed real/complex uncertainty is NP-hard. This strongly suggests that it is futile to pursue exact methods for calculating μ of general systems with pure real or mixed uncertainty for other than small problems.

Mixed H2 and H∞ performance objectives. I. Robust performance analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190318-130328590
Year: 1994
DOI: 10.1109/9.310030
This paper introduces an induced-norm formulation of a mixed H2 and H∞ performance criterion. It is shown that different mixed H2 and H∞ norms arise from different assumptions on the input signals. While most mixed norms can be expressed explicitly using either transfer functions or state-space realizations of the system, there are cases where the explicit formulas are very hard to obtain. In the latter cases, examples are given to show the intrinsic nature and difficulty of the problem. Mixed norm robust performance analysis under structured uncertainty is also considered in the paper.

Mixed H2 and H∞ performance objectives. II. Optimal control
https://resolver.caltech.edu/CaltechAUTHORS:20190315-110821041
Year: 1994
DOI: 10.1109/9.310031
This paper considers the analysis and synthesis of control systems subject to two types of disturbance signals: white signals and signals with bounded power. The resulting control problem involves minimizing a mixed H2 and H∞ norm of the system. It is shown that the controller shares a separation property similar to those of pure H2 or H∞ controllers. Necessary conditions and sufficient conditions are obtained for the existence of a solution to the mixed problem. Explicit state-space formulas are also given for the optimal controller.

ℋ∞ control of nonlinear systems via output feedback: controller parameterization
https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac94
Year: 1994
DOI: 10.1109/9.362834
The standard state space solutions to the ℋ∞ control problem for linear time-invariant systems are generalized to nonlinear time-invariant systems. A class of local nonlinear (output feedback) ℋ∞ controllers are parameterized as nonlinear fractional transformations on contractive, stable nonlinear parameters. As in the linear case, the ℋ∞ control problem is solved by its reduction to state feedback and output estimation problems, together with a separation argument. Sufficient conditions for the ℋ∞ control problem to be locally solvable are also derived with this machinery.

Robustness and performance trade-offs in control design for flexible structures
https://resolver.caltech.edu/CaltechAUTHORS:BALieeetcs94
Year: 1994
DOI: 10.1109/87.338656
Linear control design models for flexible structures are only an approximation to the "real" structural system. There are always modeling errors or uncertainty present. Descriptions of these uncertainties determine the trade-off between achievable performance and robustness of the control design. In this paper it is shown that a controller synthesized for a plant model which is not described accurately by the nominal and uncertainty models may be unstable or exhibit poor performance when implemented on the actual system. In contrast, accurate structured uncertainty descriptions lead to controllers which achieve high performance when implemented on the experimental facility. It is also shown that similar performance, theoretically and experimentally, is obtained for a surprisingly wide range of uncertainty levels in the design model. This suggests that while it is important to have reasonable structured uncertainty models, it may not always be necessary to pin down precise levels (i.e., weights) of uncertainty. Experimental results are presented which substantiate these conclusions.

H∞ control of nonlinear systems: a convex characterization
https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac95
Year: 1995
DOI: 10.1109/9.412643
The nonlinear H∞-control problem is considered with an emphasis on developing machinery with promising computational properties. The solutions to H∞-control problems for a class of nonlinear systems are characterized in terms of nonlinear matrix inequalities which result in convex problems. The computational implications for the characterization are discussed.

Stabilization of uncertain linear systems: an LFT approach
https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac96
Year: 1996
DOI: 10.1109/9.481607
This paper develops machinery for control of uncertain linear systems described in terms of linear fractional transformations (LFTs) on transform variables and uncertainty blocks with primary focus on stabilization and controller parameterization. This machinery directly generalizes familiar state-space techniques. The notion of Q-stability is defined as a natural type of robust stability, and output feedback stabilizability is characterized in terms of Q-stabilizability and Q-detectability which in turn are related to full information and full control problems. Computation is in terms of convex linear matrix inequalities (LMIs), the controllers have a separation structure, and the parameterization of all stabilizing controllers is characterized as an LFT on a stable, free parameter.

Properties of the mixed μ problem and its bounds
https://resolver.caltech.edu/CaltechAUTHORS:YOUieeetac96
Year: 1996
DOI: 10.1109/9.481624
Upper and lower bounds for the mixed μ problem have recently been developed, and here we examine the relationship of these bounds to each other and to μ. A number of interesting properties are developed and the implications of these properties for the robustness analysis of linear systems and the development of practical computation schemes are discussed. In particular we find that current techniques can only guarantee easy computation for large problems when μ equals its upper bound, and computational complexity results prohibit this possibility for general problems. In this context we present some special cases where computation is easy and make some direct comparisons between mixed μ and "Kharitonov-type" analysis methods.

Robust Nonlinear Control Theory with Applications to Aerospace Vehicles
https://resolver.caltech.edu/CaltechAUTHORS:20200310-145803556
Year: 1996
DOI: 10.1016/s1474-6670(17)58916-8
This paper is a very brief outline of an invited poster session giving a first-year progress report on a research program with the above title being carried out in the Control and Dynamical Systems (CDS) department at Caltech. This 5-year grant funded by the AFOSR Partnership for Research Excellence Transition (PRET) Program has a special emphasis on transitioning new methods to industrial practice and thus involves a high level of industrial participation. The focus of our program is fundamental research in general methods of analysis and design of complex uncertain nonlinear systems, from creating new mathematical theory to working to make that theory help engineers solve a variety of real industrial problems. Caltech's Control and Dynamical Systems department was created with precisely this goal, which is shared by our industrial collaborators, led by Honeywell. Further details will be available at the poster session.

Model reduction of multidimensional and uncertain systems
https://resolver.caltech.edu/CaltechAUTHORS:20190315-141159756
Year: 1996
DOI: 10.1109/9.539427
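The classical balanced-truncation machinery that this entry generalizes can be sketched for an ordinary state-space system (illustrative system matrices; the Lyapunov solver below uses dense Kronecker algebra and is only meant for small systems):

```python
import numpy as np

def lyap(A, Q):
    """Solve A W + W A^T + Q = 0 by vectorization (fine for small n)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# A stable 3-state example system (illustrative values).
A = np.array([[-1.0, 0.1, 0.0],
              [0.0, -3.0, 0.2],
              [0.0, 0.0, -20.0]])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])

Wc = lyap(A, B @ B.T)      # controllability Gramian
Wo = lyap(A.T, C.T @ C)    # observability Gramian

# Hankel singular values; balanced truncation keeping r states satisfies
# ||G - G_r||_inf <= 2 * (sum of the discarded singular values).
hsv = np.sort(np.sqrt(np.real(np.linalg.eigvals(Wc @ Wo))))[::-1]
bound_r1 = 2 * hsv[1:].sum()   # a-priori bound when keeping one state

print(np.round(hsv, 4))
print(round(bound_r1, 4))
```

The paper replaces these Lyapunov equations with linear matrix inequalities so that the same Gramian/truncation/error-bound structure carries over to LFT (uncertain and multidimensional) realizations.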
Model reduction methods are presented for systems represented by a linear fractional transformation on a repeated scalar uncertainty structure. These methods involve a complete generalization of balanced realizations, balanced Gramians, and balanced truncation model reduction with guaranteed error bounds, based on solutions to a pair of linear matrix inequalities which generalize Lyapunov equations. The resulting reduction methods immediately apply to uncertainty simplification and state order reduction in the case of uncertain systems but also may be interpreted as state order reduction for multidimensional systems.

A lower bound for the mixed µ problem
https://resolver.caltech.edu/CaltechAUTHORS:20190315-100211418
Year: 1997
DOI: 10.1109/9.553696
The mixed µ problem has been shown to be NP-hard, so that exact analysis appears intractable. Our goal then is to exploit the problem structure so as to develop a polynomial time algorithm that approximates µ and usually gives good answers. To this end it is shown that µ is equivalent to a real eigenvalue maximization problem, and a power algorithm is developed to tackle this problem. The algorithm not only provides a lower bound for µ but has the property that µ is (almost) always an equilibrium point of the algorithm.

Robustness analysis and synthesis for nonlinear uncertain systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-151348243
Year: 1997
DOI: 10.1109/9.650015
A state-space characterization of robustness analysis and synthesis for nonlinear uncertain systems is proposed. The robustness of a class of nonlinear systems subject to L_2-bounded structured uncertainties is characterized in terms of a nonlinear matrix inequality (NLMI), which yields a convex feasibility problem. As in the linear case, scalings are used to find a Lyapunov or storage function that gives sufficient conditions for robust stability and performance. Sufficient conditions for the solvability of robustness synthesis problems are represented in terms of NLMIs as well. With the proposed NLMI characterizations, it is shown that the computation needed for robustness analysis and synthesis is not more difficult than that for checking Lyapunov stability; the numerical solutions for robustness problems are approximated by the use of finite element methods or finite difference schemes, and the computations are reduced to solving linear matrix inequalities. Unfortunately, while the development in this paper parallels the corresponding linear theory, the resulting computational consequences are, of course, not as favourable.

Two-degree-of-freedom controller design for an ill-conditioned distillation process using µ-synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190312-083605121
Year: 1999
DOI: 10.1109/87.736744
The structured singular value framework is applied to a distillation benchmark problem formulated for the 1991 IEEE Conference on Decision and Control (CDC). A two-degree-of-freedom controller, which satisfies all control objectives of the CDC problem, is designed using µ-synthesis. The design methodology is presented and special attention is paid to the approximation of the given control objectives by frequency-domain weights.

Highly optimized tolerance: A mechanism for power laws in designed systems
https://resolver.caltech.edu/CaltechAUTHORS:CARpre99
Year: 1999
DOI: 10.1103/PhysRevE.60.1412
We introduce a mechanism for generating power law distributions, referred to as highly optimized tolerance (HOT), which is motivated by biological organisms and advanced engineering technologies. Our focus is on systems which are optimized, either through natural selection or engineering design, to provide robust performance despite uncertain environments. We suggest that power laws in these systems are due to tradeoffs between yield, cost of resources, and tolerance to risks. These tradeoffs lead to highly optimized designs that allow for occasional large events. We investigate the mechanism in the context of percolation and sand pile models in order to emphasize the sharp contrasts between HOT and self-organized criticality (SOC), which has been widely suggested as the origin for power laws in complex systems. Like SOC, HOT produces power laws. However, compared to SOC, HOT states exist for densities which are higher than the critical density, and the power laws are not restricted to special values of the density. The characteristic features of HOT systems include: (1) high efficiency, performance, and robustness to designed-for uncertainties; (2) hypersensitivity to design flaws and unanticipated perturbations; (3) nongeneric, specialized, structured configurations; and (4) power laws. The first three of these are in contrast to the traditional hallmarks of criticality, and are obtained by simply adding the element of design to percolation and sand pile models, which completely changes their characteristics.

A necessary and sufficient minimality condition for uncertain systems
https://resolver.caltech.edu/CaltechAUTHORS:BECieeetac99
Year: 1999
DOI: 10.1109/9.793720
A necessary and sufficient condition is given for the exact reduction of systems modeled by linear fractional transformations (LFTs) on structured operator sets. This condition is based on the existence of a rank-deficient solution to either of a pair of linear matrix inequalities which generalize Lyapunov equations; the notion of Gramians is thus also generalized to uncertain systems, as well as Kalman-like decomposition structures. A related minimality condition, the converse of the reducibility condition, may then be inferred from these results and the equivalence class of all minimal LFT realizations defined. These results comprise the first stage of a complete generalization of realization theory concepts to uncertain systems. Subsequent results, such as the definition of and rank tests on structured controllability and observability matrices, are also given. The minimality results described are applicable to multidimensional system realizations as well as to uncertain systems; connections to formal power series representations also exist.

Highly Optimized Tolerance: Robustness and Design in Complex Systems
https://resolver.caltech.edu/CaltechAUTHORS:CARprl00
Year: 2000
DOI: 10.1103/PhysRevLett.84.2529
Highly optimized tolerance (HOT) is a mechanism that relates evolving structure to power laws in interconnected systems. HOT systems arise where design and evolution create complex systems sharing common features, including (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. We study the impact of incorporating increasing levels of design and find that even small amounts of design lead to HOT states in percolation.

Robust perfect adaptation in bacterial chemotaxis through integral feedback control
https://resolver.caltech.edu/CaltechAUTHORS:YITpnas00
Year: 2000
PMCID: PMC18287
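The integral-control argument in this entry can be illustrated with a minimal loop (a generic first-order plant, not the chemotaxis network itself): with integral action on the tracking error, the output returns to its setpoint for any constant disturbance and any plant gain, which is exactly the "perfect adaptation" property.

```python
# Euler simulation of a minimal integral-feedback loop (illustrative
# parameters): output y robustly returns to the setpoint r.
def simulate(d, a=2.0, k=5.0, r=1.0, dt=1e-3, T=30.0):
    y, u = 0.0, 0.0
    for _ in range(int(T / dt)):
        y += dt * (-a * y + u + d)   # first-order plant with disturbance d
        u += dt * k * (r - y)        # integral action on the tracking error
    return y

print(round(simulate(0.0), 3))   # ~1.0
print(round(simulate(5.0), 3))   # ~1.0 despite the step disturbance
```

Changing `a` or `d` moves the transient but not the steady state; only integral action guarantees this robustly, which is the paper's structural point.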
Integral feedback control is a basic engineering strategy for ensuring that the output of a system robustly tracks its desired value independent of noise or variations in system parameters. In biological systems, it is common for the response to an extracellular stimulus to return to its prestimulus value even in the continued presence of the signal, a process termed adaptation or desensitization. Barkai, Alon, Surette, and Leibler have provided both theoretical and experimental evidence that the precision of adaptation in bacterial chemotaxis is robust to dramatic changes in the levels and kinetic rate constants of the constituent proteins in this signaling network [Alon, U., Surette, M. G., Barkai, N. & Leibler, S. (1998) Nature (London) 397, 168-171]. Here we propose that the robustness of perfect adaptation is the result of this system possessing the property of integral feedback control. Using techniques from control and dynamical systems theory, we demonstrate that integral control is structurally inherent in the Barkai-Leibler model and identify and characterize the key assumptions of the model. Most importantly, we argue that integral control in some form is necessary for a robust implementation of perfect adaptation. More generally, integral control may underlie the robustness of many homeostatic mechanisms.

A receding horizon generalization of pointwise min-norm controllers
https://resolver.caltech.edu/CaltechAUTHORS:PRIieeetac00
Year: 2000
DOI: 10.1109/9.855550
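Sontag's formula, which the paper's receding-horizon scheme extends, can be sketched on a toy scalar system with control Lyapunov function V(x) = x²/2 (illustrative dynamics, not the receding-horizon controller itself):

```python
import math

# Sontag's universal formula for xdot = f(x) + g(x)*u with CLF V(x) = x^2/2,
# applied to the toy system f(x) = x^3, g(x) = 1.
def sontag(x):
    a = x * x**3          # LfV = V'(x) f(x)
    b = x * 1.0           # LgV = V'(x) g(x)
    if b == 0.0:
        return 0.0
    return -(a + math.sqrt(a**2 + b**4)) / b

x, dt = 1.0, 1e-3
for _ in range(int(5.0 / dt)):
    x += dt * (x**3 + sontag(x))   # closed loop: Vdot < 0 away from the origin
print(x)   # driven close to the origin
```

Along trajectories the closed loop satisfies xdot = -x·sqrt(1 + x⁴), so V decreases strictly; the paper's CLF-based receding-horizon schemes inherit this kind of pointwise decrease condition.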
Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control to develop a new class of receding horizon control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a CLF based receding horizon scheme, of which a special case provides an appropriate extension of Sontag's formula. The scheme is first presented as an idealized continuous-time receding horizon control law. The issue of implementation under discrete-time sampling is then discussed as a modification. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem. Finally, stronger connections to both optimal and pointwise min-norm control are proved.

Power Laws, Highly Optimized Tolerance, and Generalized Source Coding
https://resolver.caltech.edu/CaltechAUTHORS:DOYprl00
Year: 2000
DOI: 10.1103/PhysRevLett.84.5656
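A schematic version of the resource-loss tradeoff described in this entry (the functional forms and numbers below are illustrative assumptions, not the paper's specific models): minimize expected loss with losses decreasing in allocated resources, under a total resource budget.

```python
import numpy as np

# Minimize sum_i p_i * l_i with l_i = r_i^(-beta), subject to sum_i r_i = R.
# Lagrange conditions give r_i proportional to p_i^(1/(1+beta)).
beta, R = 1.0, 1.0
p = np.array([0.5, 0.3, 0.15, 0.05])     # event probabilities (illustrative)

w = p ** (1.0 / (1.0 + beta))
r_opt = R * w / w.sum()                  # optimized allocation
r_uni = np.full_like(p, R / len(p))      # naive uniform allocation

cost = lambda r: np.sum(p * r ** (-beta))
print(cost(r_opt) < cost(r_uni))         # True: the optimized design wins
print(np.round(r_opt, 3))                # more resources on likelier events
```

Likely events get well-protected (small losses) while rare events are left with little resource and hence large losses, which is the origin of the heavy-tailed event distributions in the HOT picture.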
We introduce a family of robust design problems for complex systems in uncertain environments which are based on tradeoffs between resource allocations and losses. Optimized solutions yield the "robust, yet fragile" features of highly optimized tolerance and exhibit power law tails in the distributions of events for all but the special case of Shannon coding for data compression. In addition to data compression, we construct specific solutions for world wide web traffic and forest fires, and obtain excellent agreement with measured data.

Highly optimized tolerance in epidemic models incorporating local optimization and regrowth
https://resolver.caltech.edu/CaltechAUTHORS:ROBpre01
Year: 2001
DOI: 10.1103/PhysRevE.63.056122
In the context of a coupled map model of population dynamics, which includes the rapid spread of fatal epidemics, we investigate the consequences of two new features in highly optimized tolerance (HOT), a mechanism which describes how complexity arises in systems which are optimized for robust performance in the presence of a harsh external environment. Specifically, we (1) contrast global and local optimization criteria and (2) investigate the effects of time dependent regrowth. We find that both local and global optimization lead to HOT states, which may differ in their specific layouts, but share many qualitative features. Time dependent regrowth leads to HOT states which deviate from the optimal configurations in the corresponding static models in order to protect the system from slow (or impossible) regrowth which follows the largest losses and extinctions. While the associated map can exhibit complex, chaotic solutions, HOT states are confined to relatively simple dynamical regimes.

Beyond the spherical cow
https://resolver.caltech.edu/CaltechAUTHORS:20150331-080649691
Year: 2001
DOI: 10.1038/35075703
Computational and mathematical models are helping biologists to understand the beating of a heart, the molecular dances underlying the cell-division cycle and cell movement, and much more.https://resolver.caltech.edu/CaltechAUTHORS:20150331-080649691Does AS size determine degree in AS topology?
https://resolver.caltech.edu/CaltechAUTHORS:20161220-170012134
Year: 2001
DOI: 10.1145/1037107.1037108
In a recent and much celebrated paper, Faloutsos et al. [6] found that the inter-Autonomous System (AS) topology exhibits a power-law degree distribution. This result was quite unexpected in the networking community, and stirred significant interest in exploring the possible causes of this phenomenon. The work of Barabasi et al. [2], and its application to network topology generation in the work of Medina et al. [9], have explored a promising class of models that yield strict power-law degree distributions. These models, which we will refer to collectively as the B-A model, describe the detailed dynamics of the network growth process, modeling the way in which connections are made between ASs. There are two simple connectivity rules that define the evolution of AS connectivity over time: incremental growth, where a new AS connects to existing ASs, and preferential connectivity, where the likelihood of connecting to an AS is proportional to the vertex outdegree of the target AS. These simple rules, which are similar to the classical "rich get richer" model originally proposed by Simon [12], lead to power-law degree distributions.https://resolver.caltech.edu/CaltechAUTHORS:20161220-170012134Internet congestion control
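The two B-A rules summarized in this abstract (incremental growth plus preferential connectivity) are simple enough to simulate directly. The sketch below is an illustrative implementation of the generic Barabasi-Albert process, not the specific generator from the papers cited; node and edge counts are arbitrary choices.

```python
import random

def barabasi_albert(n_nodes=2000, m=2, seed=42):
    """Grow a graph one node at a time; each new node attaches to m
    existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # Start from a small fully connected core of m + 1 nodes (degree m each).
    degrees = [m] * (m + 1)
    # 'targets' holds one entry per edge endpoint, so sampling uniformly
    # from it samples nodes proportionally to their degree.
    targets = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:          # m distinct attachment targets
            chosen.add(rng.choice(targets))
        degrees.append(m)
        for t in chosen:
            degrees[t] += 1
            targets.extend([new, t])    # record both endpoints of the edge
    return degrees

degrees = barabasi_albert()
# Heavy tail: a few hub nodes end up with degrees far above the mean.
print(max(degrees), sum(degrees) / len(degrees))
```

The "rich get richer" effect shows up as a small number of hubs whose degree is an order of magnitude above the mean, the signature of the power-law tail discussed above.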
https://resolver.caltech.edu/CaltechAUTHORS:LOWieeecsm02
Year: 2002
DOI: 10.1109/37.980245
This article reviews the current transmission control protocol (TCP) congestion control protocols and overviews recent advances that have brought analytical tools to this problem. We describe an optimization-based framework that provides an interpretation of various flow control mechanisms, in particular, the utility being optimized by the protocol's equilibrium structure. We also look at the dynamics of TCP and employ linear models to exhibit stability limitations in the predominant TCP versions, despite certain built-in compensations for delay. Finally, we present a new protocol that overcomes these limitations and provides stability in a way that is scalable to arbitrary networks, link capacities, and delays.https://resolver.caltech.edu/CaltechAUTHORS:LOWieeecsm02Complexity and robustness
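The optimization-based framework this abstract refers to interprets congestion control as a distributed algorithm solving a utility-maximization problem. The fluid-model sketch below is a generic Kelly-style dual algorithm for a single bottleneck link with logarithmic utilities, chosen for illustration; it is not the protocol proposed in the article, and the step size and capacities are arbitrary.

```python
def dual_congestion_control(capacity=10.0, n_sources=4, step=0.05, iters=5000):
    """Single link shared by n sources with utilities U_i(x) = log(x);
    the equilibrium is the proportionally fair split capacity / n_sources."""
    price = 1.0
    rates = [1.0] * n_sources
    for _ in range(iters):
        # Source law: each source maximizes log(x) - price * x  =>  x = 1 / price.
        rates = [1.0 / price for _ in range(n_sources)]
        # Link law: raise the congestion price when demand exceeds capacity,
        # lower it when the link is underutilized (gradient ascent on the dual).
        aggregate = sum(rates)
        price = max(1e-6, price + step * (aggregate - capacity))
    return rates, price

rates, price = dual_congestion_control()
print(rates[0])  # converges to about 2.5 = capacity / n_sources
```

The equilibrium structure mentioned in the abstract is visible here: the fixed point of the price update is exactly the allocation that maximizes aggregate utility subject to the capacity constraint.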
https://resolver.caltech.edu/CaltechAUTHORS:CARpnas02
Year: 2002
DOI: 10.1073/pnas.012582499
PMCID: PMC128573
Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims these are the most important features of complexity and not accidents of evolution or artifices of engineering design but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality.https://resolver.caltech.edu/CaltechAUTHORS:CARpnas02Mutation, specialization, and hypersensitivity in highly optimized tolerance
https://resolver.caltech.edu/CaltechAUTHORS:ZHOpnas02b
Year: 2002
DOI: 10.1073/pnas.261714399
PMCID: PMC122317
We introduce a model of evolution in which competing organisms are represented by percolation lattice models. Fitness is based on the number of occupied sites remaining after removing a cluster connected to a randomly selected site. High-fitness individuals arising through mutation and selection must trade off density versus robustness to loss, and are characterized by cellular barrier patterns that prevent large cascading losses to common disturbances. This model shows that Highly Optimized Tolerance (HOT), which links complexity to robustness in designed systems, arises naturally through Darwinian mechanisms. Although the model is a severe abstraction of biology, it produces a surprisingly wide variety of micro- and macroevolutionary features strikingly similar to real biological evolution.https://resolver.caltech.edu/CaltechAUTHORS:ZHOpnas02bReverse Engineering of Biological Complexity
https://resolver.caltech.edu/CaltechAUTHORS:20141112-141330446
Year: 2002
DOI: 10.1126/science.1069981
Advanced technologies and biology have extremely different physical implementations, but they are far more alike in systems-level organization than is widely appreciated. Convergent evolution in both domains produces modular architectures that are composed of elaborate hierarchies of protocols and layers of feedback regulation, are driven by demand for robustness to uncertain environments, and use often imprecise components. This complexity may be largely hidden in idealized laboratory settings and in normal operation, becoming conspicuous only when contributing to rare cascading failures. These puzzling and paradoxical features are neither accidental nor artificial, but derive from a deep and necessary interplay between complexity and robustness, modularity, feedback, and fragility. This review describes insights from engineering theory and practice that can shed some light on biological complexity.https://resolver.caltech.edu/CaltechAUTHORS:20141112-141330446Design degrees of freedom and mechanisms for complexity
https://resolver.caltech.edu/CaltechAUTHORS:REYpre02
Year: 2002
DOI: 10.1103/PhysRevE.66.016108
We develop a discrete spectrum of percolation forest fire models characterized by increasing design degrees of freedom (DDOF's). The DDOF's are tuned to optimize the yield of trees after a single spark. In the limit of a single DDOF, the model is tuned to the critical density. Additional DDOF's allow for increasingly refined spatial patterns, associated with the cellular structures seen in highly optimized tolerance (HOT). The spectrum of models provides a clear illustration of the contrast between criticality and HOT, as well as a concrete quantitative example of how a sequence of robustness tradeoffs naturally arises when increasingly complex systems are developed through additional layers of design. Such tradeoffs are familiar in engineering and biology and are a central aspect of the complex systems that can be characterized as HOT.https://resolver.caltech.edu/CaltechAUTHORS:REYpre02Overview of the Alliance for Cellular Signaling
https://resolver.caltech.edu/CaltechAUTHORS:20150401-065047086
Year: 2002
DOI: 10.1038/nature01304
The Alliance for Cellular Signaling is a large-scale collaboration designed to answer global questions about signalling networks. Pathways will be studied intensively in two cells — B lymphocytes (the cells of the immune system) and cardiac myocytes — to facilitate quantitative modelling. One goal is to catalyse complementary research in individual laboratories; to facilitate this, all alliance data are freely available for use by the entire research community.https://resolver.caltech.edu/CaltechAUTHORS:20150401-065047086Toward an Optimization-Driven Framework for Designing and Generating Realistic Internet Topologies
https://resolver.caltech.edu/CaltechAUTHORS:20160414-163819524
Year: 2003
DOI: 10.1145/774763.774769
We propose a novel approach to the study of Internet topology in which we use an optimization framework to model the mechanisms driving incremental growth. While previous methods of topology generation have focused on explicit replication of statistical properties, such as node hierarchies and node degree distributions, our approach addresses the economic tradeoffs, such as cost and performance, and the technical constraints faced by a single ISP in its network design. By investigating plausible objectives and constraints in the design of actual networks, observed network properties such as certain hierarchical structures and node degree distributions can be expected to be the natural by-product of an approximately optimal solution chosen by network designers and operators. In short, we advocate here essentially an approach to network topology design, modeling, and generation that is based on the concept of Highly Optimized Tolerance (HOT). In contrast with purely descriptive topology modeling, this opens up new areas of research that focus on the causal forces at work in network design and aim at identifying the economic and technical drivers responsible for the observed large-scale network behavior. As a result, the proposed approach should have significantly more predictive power than currently pursued efforts and should provide a scientific foundation for the investigation of other important problems, such as pricing, peering, or the dynamics of routing protocols.https://resolver.caltech.edu/CaltechAUTHORS:20160414-163819524The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models
https://resolver.caltech.edu/CaltechAUTHORS:20111020-083352718
Year: 2003
DOI: 10.1093/bioinformatics/btg015
Motivation: Molecular biotechnology now makes it possible to build elaborate systems models, but the systems biology community needs information standards if models are to be shared, evaluated and developed cooperatively.
Results: We summarize the Systems Biology Markup Language (SBML) Level 1, a free, open, XML-based format for representing biochemical reaction networks. SBML is a software-independent language for describing models common to research in many areas of computational biology, including cell signaling pathways, metabolic pathways, gene regulation, and others.https://resolver.caltech.edu/CaltechAUTHORS:20111020-083352718Next Generation Simulation Tools: The Systems Biology Workbench and BioSPICE Integration
https://resolver.caltech.edu/CaltechAUTHORS:20121026-183338290
Year: 2003
DOI: 10.1089/153623103322637670
Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components—written in diverse programming languages and running on different platforms—to communicate and use each other's capabilities via a fast binary encoded-message system. Our goal was to create a simple, high performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.https://resolver.caltech.edu/CaltechAUTHORS:20121026-183338290Linear stability of TCP/RED and a scalable control
https://resolver.caltech.edu/CaltechAUTHORS:20170810-140056148
Year: 2003
DOI: 10.1016/S1389-1286(03)00304-9
We demonstrate that the dynamic behavior of queue and average window is determined predominantly by the stability of TCP/RED, not by AIMD probing nor noise traffic. We develop a general multi-link multi-source model for TCP/RED and derive a local stability condition in the case of a single link with heterogeneous sources. We validate our model with simulations and illustrate the stability region of TCP/RED. These results suggest that TCP/RED becomes unstable when delay increases, or more strikingly, when link capacity increases. The analysis illustrates the difficulty of setting RED parameters to stabilize TCP: they can be tuned to improve stability, but only at the cost of large queues even when they are dynamically adjusted. Finally, we present a simple distributed congestion control algorithm that maintains stability for arbitrary network delay, capacity, load and topology.https://resolver.caltech.edu/CaltechAUTHORS:20170810-140056148Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project
https://resolver.caltech.edu/CaltechAUTHORS:HUCieesb04
Year: 2004
DOI: 10.1049/sb:20045008
Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.https://resolver.caltech.edu/CaltechAUTHORS:HUCieesb04Methodological frameworks for large-scale network analysis and design
https://resolver.caltech.edu/CaltechAUTHORS:20161207-172521946
Year: 2004
DOI: 10.1145/1031134.1031138
This paper emphasizes the need for methodological frameworks for analysis and design of large scale networks which are independent of specific design innovations and their advocacy, with the aim of making networking a more systematic engineering discipline. Networking problems have largely confounded existing theory, and innovation based on intuition has dominated design. This paper will illustrate potential pitfalls of this practice. The general aim is to illustrate universal aspects of theoretical and methodological research that can be applied to network design and verification. The issues focused on will include the choice of models, including the relationship between flow and packet level descriptions, the need to account for uncertainty generated by modelling abstractions, and the challenges of dealing with network scale. The rigorous comparison of proposed schemes will be illustrated using various abstractions. While standard tools from robust control theory have been applied in this area, we will also illustrate how network-specific challenges can drive the development of new mathematics that expand their range of applicability, and how many enormous challenges remain.https://resolver.caltech.edu/CaltechAUTHORS:20161207-172521946Robustness of Cellular Functions
https://resolver.caltech.edu/CaltechAUTHORS:20170408-213739584
Year: 2004
DOI: 10.1016/j.cell.2004.09.008
Robustness, the ability to maintain performance in the face of perturbations and uncertainty, is a long-recognized key property of living systems. Owing to intimate links to cellular complexity, however, its molecular and cellular basis has only recently begun to be understood. Theoretical approaches to complex engineered systems can provide guidelines for investigating cellular robustness because biology and engineering employ a common set of basic mechanisms in different combinations. Robustness may be a key to understanding cellular complexity, elucidating design principles, and fostering closer interactions between experimentation and theory.https://resolver.caltech.edu/CaltechAUTHORS:20170408-213739584Imitation of life: How biology is inspiring computing [Book Review]
https://resolver.caltech.edu/CaltechAUTHORS:20150325-141647738
Year: 2004
DOI: 10.1038/431908a
Generations of engineers have recognized that, in many respects, biology does it better. Imitation of Life is a whirlwind history, richer even than its subtitle suggests, through various computational disciplines inspired by biology. This is an ambitious undertaking for such a short book but, although it ignores some important unifying principles, its brevity is also a virtue. The inspirations from biology are scattered throughout the book, and their collective impact is felt best when the book is digested whole, at one sitting. The early chapters on biology as a metaphor are the least satisfying, and any reader who stops there may never return for the genuine delights that follow.https://resolver.caltech.edu/CaltechAUTHORS:20150325-141647738MathSBML: a package for manipulating SBML-based biological models
https://resolver.caltech.edu/CaltechAUTHORS:20111004-150808819
Year: 2004
DOI: 10.1093/bioinformatics/bth271
PMCID: PMC1409765
Summary: MathSBML is a Mathematica package designed for manipulating Systems Biology Markup Language (SBML) models. It converts SBML models into Mathematica data structures and provides a platform for manipulating and evaluating these models. Once a model is read by MathSBML, it is fully compatible with standard Mathematica functions such as NDSolve (a differential-algebraic equations solver). MathSBML also provides an application programming interface for viewing, manipulating, and running numerical simulations; exporting SBML models; and converting SBML models into other formats, such as XPP, HTML and FORTRAN. By accessing the full breadth of Mathematica functionality, MathSBML is fully extensible to SBML models of any size or complexity.
Availability: Open Source (LGPL) at http://www.sbml.org and http://www.sf.net/projects/sbml.
Supplementary information: Extensive online documentation is available at http://www.sbml.org/mathsbml.html. Additional examples are provided at http://www.sbml.org/software/mathsbml/bioinformatics-application-notehttps://resolver.caltech.edu/CaltechAUTHORS:20111004-150808819FAST TCP: from theory to experiments
https://resolver.caltech.edu/CaltechAUTHORS:JINieeen05
Year: 2005
DOI: 10.1109/MNET.2005.1383434
We describe a variant of TCP, called FAST, that can sustain high throughput and utilization at multigigabits per second over large distances. We present the motivation, review the background theory, summarize key features of FAST TCP, and report our first experimental results.https://resolver.caltech.edu/CaltechAUTHORS:JINieeen05Congestion control for high performance, stability, and fairness in general networks
https://resolver.caltech.edu/CaltechAUTHORS:PAGieeeacmtn05
Year: 2005
DOI: 10.1109/TNET.2004.842216
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. The focus is on developing decentralized control laws at end-systems and routers at the level of fluid-flow models, that can provably satisfy such properties in arbitrary networks, and subsequently approximate these features through practical packet-level implementations. Two families of control laws are developed. The first "dual" control law is able to achieve the first three objectives for arbitrary networks and delays, but is forced to constrain the resource allocation policy. We subsequently develop a "primal-dual" law that overcomes this limitation and allows sources to match their steady-state preferences at a slower time-scale, provided a bound on round-trip-times is known. We develop two packet-level implementations of this protocol, using 1) ECN marking, and 2) queueing delay, as means of communicating the congestion measure from links to sources. We demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness, under a variety of scaling parameters.https://resolver.caltech.edu/CaltechAUTHORS:PAGieeeacmtn05Surviving heat shock: Control strategies for robustness and performance
https://resolver.caltech.edu/CaltechAUTHORS:ELSpnas05
Year: 2005
DOI: 10.1073/pnas.403510102
PMCID: PMC549435
Molecular biology studies the cause-and-effect relationships among microscopic processes initiated by individual molecules within a cell and observes their macroscopic phenotypic effects on cells and organisms. These studies provide a wealth of information about the underlying networks and pathways responsible for the basic functionality and robustness of biological systems. At the same time, these studies create exciting opportunities for the development of quantitative and predictive models that connect the mechanism to its phenotype and then examine various modular structures and the range of their dynamical behavior. The use of such models enables a deeper understanding of the design principles underlying biological organization and makes their reverse engineering and manipulation both possible and tractable. The heat shock response presents an interesting mechanism where such an endeavor is possible. Using a model of heat shock, we extract the design motifs in the system and justify their existence in terms of various performance objectives. We also offer a modular decomposition that parallels that of traditional engineering control architectures.https://resolver.caltech.edu/CaltechAUTHORS:ELSpnas05Cross-layer optimization in TCP/IP networks
https://resolver.caltech.edu/CaltechAUTHORS:WANiatnet05
Year: 2005
DOI: 10.1109/TNET.2005.850219
TCP-AQM can be interpreted as distributed primal-dual algorithms to maximize aggregate utility over source rates. We show that an equilibrium of TCP/IP, if it exists, maximizes aggregate utility over both source rates and routes, provided congestion prices are used as link costs. An equilibrium exists if and only if this utility maximization problem and its Lagrangian dual have no duality gap. In this case, TCP/IP incurs no penalty in not splitting traffic across multiple paths. Such an equilibrium, however, can be unstable. It can be stabilized by adding a static component to link cost, but at the expense of a reduced utility in equilibrium. If link capacities are optimally provisioned, however, pure static routing, which is necessarily stable, is sufficient to maximize utility. Moreover, single-path routing again achieves the same utility as multipath routing at optimality.https://resolver.caltech.edu/CaltechAUTHORS:WANiatnet05Highly optimized tolerance and power laws in dense and sparse resource regimes
https://resolver.caltech.edu/CaltechAUTHORS:MANpre05
Year: 2005
DOI: 10.1103/PhysRevE.72.016108
Power law cumulative frequency (P) versus event size (l) distributions P(≥ l) ∼ l^(−α) are frequently cited as evidence for complexity and serve as a starting point for linking theoretical models and mechanisms with observed data. Systems exhibiting this behavior present fundamental mathematical challenges in probability and statistics. The broad span of length and time scales associated with heavy tailed processes often requires special sensitivity to distinctions between discrete and continuous phenomena. A discrete highly optimized tolerance (HOT) model, referred to as the probability, loss, resource (PLR) model, gives the exponent α = 1/d as a function of the dimension d of the underlying substrate in the sparse resource regime. This agrees well with data for wildfires, web file sizes, and electric power outages. However, another HOT model, based on a continuous (dense) distribution of resources, predicts α = 1 + 1/d. In this paper we describe and analyze a third model, the cuts model, which exhibits both behaviors but in different regimes. We use the cuts model to show all three models agree in the dense resource limit. In the sparse resource regime, the continuum model breaks down, but in this case, the cuts and PLR models are described by the same exponent.https://resolver.caltech.edu/CaltechAUTHORS:MANpre05The "robust yet fragile" nature of the Internet
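The scaling relations quoted in this abstract can be restated compactly (this only transcribes the abstract's claims, with d the dimension of the underlying substrate):

```latex
P(\geq \ell) \sim \ell^{-\alpha},
\qquad \alpha_{\mathrm{PLR}} = \frac{1}{d} \ \text{(sparse resources)},
\qquad \alpha_{\mathrm{continuum}} = 1 + \frac{1}{d} \ \text{(dense resources)}.
```

The cuts model described in the paper interpolates between the two regimes, matching the continuum exponent in the dense limit and the PLR exponent in the sparse limit.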
https://resolver.caltech.edu/CaltechAUTHORS:DOYpnas05
Year: 2005
DOI: 10.1073/pnas.0501426102
PMCID: PMC1240072
The search for unifying properties of complex networks is popular, challenging, and important. For modeling approaches that focus on robustness and fragility as unifying concepts, the Internet is an especially attractive case study, mainly because its applications are ubiquitous and pervasive, and widely available expositions exist at every level of detail. Nevertheless, alternative approaches to modeling the Internet often make extremely different assumptions and derive opposite conclusions about fundamental properties of one and the same system. Fortunately, a detailed understanding of Internet technology combined with a unique ability to measure the network means that these differences can be understood thoroughly and resolved unambiguously. This article aims to make recent results of this process accessible beyond Internet specialists to the broader scientific community and to clarify several sources of basic methodological differences that are relevant beyond either the Internet or the two specific approaches focused on here (i.e., scale-free networks and highly optimized tolerance networks).https://resolver.caltech.edu/CaltechAUTHORS:DOYpnas05Motifs, Control, and Stability
https://resolver.caltech.edu/CaltechAUTHORS:DOYplosb05
Year: 2005
DOI: 10.1371/journal.pbio.0030392
PMCID: PMC1283396
Many of the detailed mechanisms by which bacteria express genes in response to various environmental signals are well-known. The molecular players underlying these responses are part of a bacterial transcriptional regulatory network (BTN). To explore the properties and evolution of such networks and to extract general principles, biologists have looked for common themes or motifs, and their interconnections, such as reciprocal links or feedback loops. A BTN motif can be thought of as a directed graph with regulatory interactions connecting transcription factors to their operon targets (the set of related bacterial genes that are transcribed together). For example, Figure 1A shows a BTN motif that describes a part of the transcriptional response to heat (and other) stressors.
But biological networks are not just static physical constructs, and it is, in fact, their dynamical properties that determine their function. In this issue of PLoS Biology, Prill et al. [1] show that the relative abundance of small motifs in biological networks, including the BTN, may be explained by the stability of their dynamics across a wide range of cellular conditions. In a dynamical system, control engineers define "stability" as preservation of a specific behavior over time under some set of perturbations. The definitions of stability vary somewhat depending on the types of system, behavior, and perturbation specified [2]. For the BTN example, Prill et al. [1] study stability of gene expression levels, as modeled by a set of linear differential equations. Given interactions from a BTN motif, "structural stability" is robustness of stability to arbitrary signs and magnitudes of interactions. This is such a stringent notion of stability that it would be satisfied by few systems, yet Prill et al. [1] show that all BTN motifs are stable for all signs and magnitudes of interactions. For several other biological networks, they show a level of correlation between abundance and structural stability that is highly unlikely to occur at random. The significance of these results as well as those in recent related papers (see references in [1], particularly those of Alon and colleagues) can be better appreciated within the larger context of well-known concepts from biology and engineering, particularly control theory [3]. For additional mathematical details underlying the qualitative arguments presented here, see the online supplement (Text S1 and S2).https://resolver.caltech.edu/CaltechAUTHORS:DOYplosb05Three mechanisms for power laws on the Cayley tree
https://resolver.caltech.edu/CaltechAUTHORS:BROpre05
Year: 2005
DOI: 10.1103/PhysRevE.72.056120
We compare preferential growth, critical phase transitions, and highly optimized tolerance (HOT) as mechanisms for generating power laws in the familiar and analytically tractable context of lattice percolation and forest fire models on the Cayley tree. All three mechanisms have been widely discussed in the context of complexity in natural and technological systems. This parallel study enables direct comparison of the mechanisms and associated lattice solutions. Criticality fits most naturally into the category of random processes, where power laws are a consequence of fluctuations in an ensemble with no intrinsic scale. The power laws in preferential growth can be understood in the context of competing exponential growth and decay processes. HOT generalizes this functional mechanism involving exponentials of exponentials to a broader class of nonexponential functions, which arise from optimization.https://resolver.caltech.edu/CaltechAUTHORS:BROpre05Understanding Internet topology: principles, models, and validation
https://resolver.caltech.edu/CaltechAUTHORS:ALDieeeacmtn05
Year: 2005
DOI: 10.1109/TNET.2005.861250
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.https://resolver.caltech.edu/CaltechAUTHORS:ALDieeeacmtn05Highly optimised global organisation of metabolic networks
https://resolver.caltech.edu/CaltechAUTHORS:20110815-134554423
Year: 2005
DOI: 10.1049/ip-syb:20050042
High-level, mathematically precise descriptions of the global organisation of complex metabolic networks are necessary for understanding the global structure of metabolic networks, the interpretation and integration of large amounts of biological data (sequences, various -omics) and ultimately for rational design of therapies for disease processes. Metabolic networks are highly organised to execute their function efficiently while tolerating wide variation in their environment. These networks are constrained by physical requirements (e.g. conservation of energy, redox and small moieties) but are also remarkably robust and evolvable. The authors use well-known features of the stoichiometry of bacterial metabolic networks to demonstrate how network architecture facilitates such capabilities, and to develop a minimal abstract metabolism which incorporates the known features of the stoichiometry and respects the constraints on enzymes and reactions. This model shows that the essential functionality and constraints drive the tradeoffs between robustness and fragility, as well as the large-scale structure and organisation of the whole network, particularly high variability. The authors emphasise how domain specific constraints and tradeoffs imposed by the environment are important factors in shaping stoichiometry. Importantly, the consequence of these highly organised tradeoffs and tolerances is an architecture that has a highly structured modularity that is self-dissimilar and scale-rich.https://resolver.caltech.edu/CaltechAUTHORS:20110815-134554423Wildfires, complexity, and highly optimized tolerance
https://resolver.caltech.edu/CaltechAUTHORS:MORpnas05
Year: 2005
DOI: 10.1073/pnas.0508985102
PMCID: PMC1312407
Recent, large fires in the western United States have rekindled debates about fire management and the role of natural fire regimes in the resilience of terrestrial ecosystems. This real-world experience parallels debates involving abstract models of forest fires, a central metaphor in complex systems theory. Both real and modeled fire-prone landscapes exhibit roughly power law statistics in fire size versus frequency. Here, we examine historical fire catalogs and a detailed fire simulation model; both are in agreement with a highly optimized tolerance model. Highly optimized tolerance suggests robustness tradeoffs underlie resilience in different fire-prone ecosystems. Understanding these mechanisms may provide new insights into the structure of ecological systems and be key in evaluating fire management strategies and sensitivities to climate change.https://resolver.caltech.edu/CaltechAUTHORS:MORpnas05Advanced methods and algorithms for biological networks analysis
https://resolver.caltech.edu/CaltechAUTHORS:ELSprocieee06
Year: 2006
DOI: 10.1109/JPROC.2006.871776
Modeling and analysis of complex biological networks presents a number of mathematical challenges. For the models to be useful from a biological standpoint, they must be systematically compared with data. Robustness is a key to biological understanding and proper feedback to guide experiments, including both the deterministic stability and performance properties of models in the presence of parametric uncertainties and their stochastic behavior in the presence of noise. In this paper, we present mathematical and algorithmic tools to address such questions for models that may be nonlinear, hybrid, and stochastic. These tools are rooted in solid mathematical theories, primarily from robust control and dynamical systems, but with important recent developments. They also have the potential for great practical relevance, which we explore through a series of biologically motivated examples.https://resolver.caltech.edu/CaltechAUTHORS:ELSprocieee06Module-Based Analysis of Robustness Tradeoffs in the Heat Shock Response System
https://resolver.caltech.edu/CaltechAUTHORS:KURploscompbio06
Year: 2006
DOI: 10.1371/journal.pcbi.0020059
PMCID: PMC1523291
Biological systems have evolved complex regulatory mechanisms, even in situations where much simpler designs seem to be sufficient for generating nominal functionality. Using module-based analysis coupled with rigorous mathematical comparisons, we propose that in analogy to control engineering architectures, the complexity of cellular systems and the presence of hierarchical modular structures can be attributed to the necessity of achieving robustness. We employ the Escherichia coli heat shock response system, a strongly conserved cellular mechanism, as an example to explore the design principles of such modular architectures. In the heat shock response system, the sigma-factor σ32 is a central regulator that integrates multiple feedforward and feedback modules. Each of these modules provides a different type of robustness with its inherent tradeoffs in terms of transient response and efficiency. We demonstrate how the overall architecture of the system balances such tradeoffs. An extensive mathematical exploration nevertheless points to the existence of an array of alternative strategies for the existing heat shock response that could exhibit similar behavior. We therefore deduce that the evolutionary constraints facing the system might have steered its architecture toward one of many robustly functional solutions.https://resolver.caltech.edu/CaltechAUTHORS:KURploscompbio06Fundamental Limitations of Disturbance Attenuation in the Presence of Side Information
https://resolver.caltech.edu/CaltechAUTHORS:MARieeetac07
Year: 2007
DOI: 10.1109/TAC.2006.887898
In this paper, we study fundamental limitations of disturbance attenuation of feedback systems, under the assumption that the controller has a finite horizon preview of the disturbance. In contrast with prior work, we extend Bode's integral equation for the case where the preview is made available to the controller via a general, finite capacity, communication system. Under asymptotic stationarity assumptions, our results show that the new fundamental limitation differs from Bode's only by a constant, which quantifies the information rate through the communication system. In the absence of asymptotic stationarity, we derive a universal lower bound which uses Shannon's entropy rate as a measure of performance. By means of a case-study, we show that our main bounds may be achieved.https://resolver.caltech.edu/CaltechAUTHORS:MARieeetac07Layering as Optimization Decomposition: A Mathematical Theory of Network Architectures
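The Bode integral equation that the preceding abstract extends can be stated explicitly. A minimal sketch of the classical result, for a feedback loop whose open-loop transfer function has relative degree at least two and unstable poles \(p_k\), with sensitivity function \(S\):

```latex
\int_0^\infty \ln \lvert S(j\omega) \rvert \, d\omega \;=\; \pi \sum_k \operatorname{Re}(p_k)
```

So attenuation (\(\lvert S \rvert < 1\)) over some frequencies must be paid for by amplification elsewhere, the so-called waterbed effect. The paper's result, schematically, lowers this bound by a constant proportional to the information rate \(\mathcal{R}\) of the finite-capacity preview channel, i.e. a bound of the form \(\pi \sum_k \operatorname{Re}(p_k) - \kappa\,\mathcal{R}\); the precise constant and hypotheses are given in the paper itself.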
https://resolver.caltech.edu/CaltechAUTHORS:20170810-104152899
Year: 2007
DOI: 10.1109/JPROC.2006.887322
Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of layering as optimization decomposition, where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how layering as Optimization Decomposition provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures.https://resolver.caltech.edu/CaltechAUTHORS:20170810-104152899Rules of engagement
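The network utility maximization (NUM) framework described above can be illustrated concretely. The sketch below, on a hypothetical two-link, three-source topology of my own construction (not from the paper), solves max Σ log xᵢ subject to link capacities by dual decomposition: links update congestion prices by projected gradient ascent, and each source picks the rate that is optimal given the price of its path. This separation of price updates (links) from rate updates (sources) is the sense in which layers emerge as decomposed subproblems.

```python
import numpy as np

# Hypothetical example: routing matrix R[l, i] = 1 if source i uses link l.
R = np.array([[1.0, 1.0, 0.0],    # link 0 carries sources 0 and 1
              [1.0, 0.0, 1.0]])   # link 1 carries sources 0 and 2
c = np.array([1.0, 2.0])          # link capacities

lam = np.ones(2)                  # link prices (dual variables)
step = 0.05
for _ in range(5000):
    q = R.T @ lam                 # path price seen by each source
    x = 1.0 / q                   # rate maximizing log(x_i) - q_i * x_i
    # Price update: projected gradient ascent on the dual problem.
    lam = np.maximum(1e-6, lam + step * (R @ x - c))
```

At convergence the rates satisfy the KKT conditions of the global problem even though no node ever sees it whole; for this topology the optimum is approximately x ≈ (0.42, 0.58, 1.58).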
https://resolver.caltech.edu/CaltechAUTHORS:20150317-084036709
Year: 2007
DOI: 10.1038/446860a
Complex engineered and biological systems share protocol-based architectures that make them robust and evolvable, but with hidden fragilities to rare perturbations.https://resolver.caltech.edu/CaltechAUTHORS:20150317-084036709Appreciation of the Machinations of the Blind Watchmaker
https://resolver.caltech.edu/CaltechAUTHORS:ARKieeetac08
Year: 2008
DOI: 10.1109/TAC.2007.913342
One danger in using the language of engineering to describe the patterns and operations of the evident products of natural selection is that invoking principles of design runs the risk of invoking a designer. But as we analyze the increasing amount of data on the genome and its organization across a wide array of organisms, we are discovering there are patterns and dynamics reminiscent of designs that we, as imperfect human designers, recognize as serving an engineering purpose, including the purpose to be designable and evolvable.
There is no doubt that biological artifacts are the product of Dawkins' Blind Watchmaker, natural selection. But natural selection has at its heart one of engineering's most prized principles, optimization. Survival of the fittest, while not directly specifying an objective function that an organism must meet, nonetheless provides a clear figure of merit for long-term biological success, persistence of lineages through reproduction of organisms, and is a well-formed if ever-changing specification. The mechanisms which provide the optimization algorithm for an organism to meet the demands of this changeable requirement, composed of a program subject to operations of mutation and interorganismal transfer and inheritance, are themselves under selection. Repeated rounds of this process lead, some argue, to architectures that facilitate evolution itself, the evolving of evolvability.https://resolver.caltech.edu/CaltechAUTHORS:ARKieeetac08A proposal for a coordinated effort for the determination of brainwide neuroanatomical connectivity in model organisms at a mesoscopic scale
https://resolver.caltech.edu/CaltechAUTHORS:20090717-142050047
Year: 2009
DOI: 10.1371/journal.pcbi.1000334
PMCID: PMC2655718
In this era of complete genomes, our knowledge of neuroanatomical circuitry remains surprisingly sparse. Such knowledge is critical, however, for both basic and clinical research into brain function. Here we advocate for a concerted effort to fill this gap, through systematic, experimental mapping of neural circuits at a mesoscopic scale of resolution suitable for comprehensive, brainwide coverage, using injections of tracers or viral vectors. We detail the scientific and medical rationale and briefly review existing knowledge and experimental techniques. We define a set of desiderata, including brainwide coverage; validated and extensible experimental techniques suitable for standardization and automation; centralized, open-access data repository; compatibility with existing resources; and tractability with current informatics technology. We discuss a hypothetical but tractable plan for mouse, additional efforts for the macaque, and technique development for human. We estimate that the mouse connectivity project could be completed within five years with a comparatively modest budget.https://resolver.caltech.edu/CaltechAUTHORS:20090717-142050047Fire in the Earth system
https://resolver.caltech.edu/CaltechAUTHORS:20090707-150808418
Year: 2009
DOI: 10.1126/science.1163886
Fire is a worldwide phenomenon that appears in the geological record soon after the appearance of terrestrial plants. Fire influences global ecosystem patterns and processes, including vegetation distribution and structure, the carbon cycle, and climate. Although humans and fire have always coexisted, our capacity to manage fire remains imperfect and may become more difficult in the future as climate change alters fire regimes. This risk is difficult to assess, however, because fires are still poorly represented in global models. Here, we discuss some of the most important issues involved in developing a better understanding of the role of fire in the Earth system.https://resolver.caltech.edu/CaltechAUTHORS:20090707-150808418Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
https://resolver.caltech.edu/CaltechAUTHORS:20090904-141033791
Year: 2009
Graph theory models the Internet mathematically, and a number of plausible, mathematically interesting network models for the Internet have been developed and studied. Simultaneously, Internet researchers have developed methodology to use real data to validate, or invalidate, proposed Internet models. The authors look at these parallel developments, particularly as they apply to scale-free network models of the preferential attachment type.https://resolver.caltech.edu/CaltechAUTHORS:20090904-141033791A streamwise constant model of turbulence in plane Couette flow
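The preferential attachment mechanism that the abstract above refers to is simple to state: the graph grows one node at a time, and each new node attaches to existing nodes with probability proportional to their current degree. A minimal sketch (a generic Barabási-Albert-style generator, not code from the article):

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph to n nodes; each new node attaches m edges,
    choosing targets with probability proportional to current degree."""
    rng = random.Random(seed)
    # Start from a small clique of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # Each appearance of a node in this list represents one unit of degree,
    # so uniform sampling from it is degree-proportional sampling.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend((new, t))
    return edges

edges = preferential_attachment(100, 2)
```

Graphs grown this way exhibit power-law degree distributions, which is precisely why the authors scrutinize whether such models, despite matching aggregate degree statistics, reflect the Internet's actual router-level structure.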
https://resolver.caltech.edu/CaltechAUTHORS:20110131-095901289
Year: 2010
DOI: 10.1017/S0022112010003861
Streamwise and quasi-streamwise elongated structures have been shown to play a significant role in turbulent shear flows. We model the mean behaviour of fully turbulent plane Couette flow using a streamwise constant projection of the Navier–Stokes equations. This results in a two-dimensional three-velocity-component (2D/3C) model. We first use a steady-state version of the model to demonstrate that its nonlinear coupling provides the mathematical mechanism that shapes the turbulent velocity profile. Simulations of the 2D/3C model under small-amplitude Gaussian forcing of the cross-stream components are compared to direct numerical simulation (DNS) data. The results indicate that a streamwise constant projection of the Navier–Stokes equations captures salient features of fully turbulent plane Couette flow at low Reynolds numbers. A systems-theoretic approach is used to demonstrate the presence of large input–output amplification through the forced 2D/3C model. It is this amplification coupled with the appropriate nonlinearity that enables the 2D/3C model to generate turbulent behaviour under the small-amplitude forcing employed in this study.https://resolver.caltech.edu/CaltechAUTHORS:20110131-095901289Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures
https://resolver.caltech.edu/CaltechAUTHORS:20100709-151451379
Year: 2010
DOI: 10.1109/TSMCA.2010.2048027
There exists a widely recognized need to better understand and manage complex "systems of systems," ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms to create robustness. However, this complexity itself can be a source of new fragility, leading to "robust yet fragile" tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.https://resolver.caltech.edu/CaltechAUTHORS:20100709-151451379Random Access Game and Medium Access Control Design
https://resolver.caltech.edu/CaltechAUTHORS:20100907-110446104
Year: 2010
DOI: 10.1109/TNET.2010.2041066
Motivated partially by a control-theoretic viewpoint, we propose a game-theoretic model, called random access game, for contention control. We characterize Nash equilibria of random access games, study their dynamics, and propose distributed algorithms (strategy evolutions) to achieve Nash equilibria. This provides a general analytical framework that is capable of modeling a large class of system-wide quality-of-service (QoS) models via the specification of per-node utility functions, in which system-wide fairness or service differentiation can be achieved in a distributed manner as long as each node executes a contention resolution algorithm that is designed to achieve the Nash equilibrium. We thus propose a novel medium access method derived from carrier sense multiple access/collision avoidance (CSMA/CA) according to distributed strategy update mechanism achieving the Nash equilibrium of random access game. We present a concrete medium access method that adapts to a continuous contention measure called conditional collision probability, stabilizes the network into a steady state that achieves optimal throughput with targeted fairness (or service differentiation), and can decouple contention control from handling failed transmissions. In addition to guiding medium access control design, the random access game model also provides an analytical framework to understand equilibrium and dynamic properties of different medium access protocols.https://resolver.caltech.edu/CaltechAUTHORS:20100907-110446104Solving Large-Scale Hybrid Circuit-Antenna Problems
https://resolver.caltech.edu/CaltechAUTHORS:20110310-133536770
Year: 2011
DOI: 10.1109/TCSI.2010.2072010
Motivated by different applications in circuits, electromagnetics, and optics, this paper is concerned with the synthesis of a particular type of linear circuit, where the circuit is associated with a control unit. The objective is to design a controller for this control unit such that certain specifications on the parameters of the circuit are satisfied. It is shown that designing a control unit in the form of a switching network is an NP-complete problem that can be formulated as a rank-minimization problem. It is then proven that the underlying design problem can be cast as a semidefinite optimization if a passive network is designed instead of a switching network. Since the implementation of a passive network may need too many components, the design of a decoupled (sparse) passive network is subsequently studied. This paper introduces a tradeoff between design simplicity and implementation complexity for an important class of linear circuits. The superiority of the developed techniques is demonstrated by different simulations. In particular, for the first time in the literature, a wavelength-size passive antenna is designed, which has an excellent beamforming capability and which can concurrently make a null in at least eight directions.https://resolver.caltech.edu/CaltechAUTHORS:20110310-133536770On Lossless Approximations, the Fluctuation-Dissipation Theorem, and Limitations of Measurements
https://resolver.caltech.edu/CaltechAUTHORS:20110309-120503093
Year: 2011
DOI: 10.1109/TAC.2010.2056450
In this paper, we take a control-theoretic approach to answering some standard questions in statistical mechanics, and use the results to derive limitations of classical measurements. A central problem is the relation between systems which appear macroscopically dissipative but are microscopically lossless. We show that a linear system is dissipative if, and only if, it can be approximated by a linear lossless system over arbitrarily long time intervals. Hence lossless systems are in this sense dense in dissipative systems. A linear active system can be approximated by a nonlinear lossless system that is charged with initial energy. As a by-product, we obtain mechanisms explaining the Onsager relations from time-reversible lossless approximations, and the fluctuation-dissipation theorem from uncertainty in the initial state of the lossless system. The results are applied to measurement devices and are used to quantify limits on the so-called observer effect, also called back action, which is the impact the measurement device has on the observed system. In particular, it is shown that deterministic back action can be compensated by using active elements, whereas stochastic back action is unavoidable and depends on the temperature of the measurement device.https://resolver.caltech.edu/CaltechAUTHORS:20110309-120503093Cross-layer design in multihop wireless networks
https://resolver.caltech.edu/CaltechAUTHORS:20110315-091124013
Year: 2011
DOI: 10.1016/j.comnet.2010.09.005
In this paper, we take a holistic approach to the protocol architecture design in multihop wireless networks. Our goal is to integrate various protocol layers into a rigorous framework, by regarding them as distributed computations over the network to solve some optimization problem. Different layers carry out distributed computation on different subsets of the decision variables using local information to achieve individual optimality. Taken together, these local algorithms (with respect to different layers) achieve a global optimality. Our current theory integrates three functions—congestion control, routing and scheduling—in transport, network and link layers into a coherent framework. These three functions interact through and are regulated by congestion price so as to achieve a global optimality, even in a time-varying environment. Within this context, this model allows us to systematically derive the layering structure of the various mechanisms of different protocol layers, their interfaces, and the control information that must cross these interfaces to achieve a certain performance and robustness.https://resolver.caltech.edu/CaltechAUTHORS:20110315-091124013Analysis of autocatalytic networks in biology
https://resolver.caltech.edu/CaltechAUTHORS:20110624-104150056
Year: 2011
DOI: 10.1016/j.automatica.2011.02.040
Autocatalytic networks, in particular the glycolytic pathway, constitute an important part of the cell metabolism. Changes in the concentration of metabolites and catalyzing enzymes during the lifetime of the cell can lead to perturbations from its nominal operating condition. We investigate the effects of such perturbations on stability properties, e.g., the extent of regions of attraction, of a particular family of autocatalytic network models. Numerical experiments demonstrate that systems that are robust with respect to perturbations in the parameter space have an easily "verifiable" (in terms of proof complexity) region of attraction properties. Motivated by the computational complexity of optimization-based formulations, we take a compositional approach and exploit a natural decomposition of the system, induced by the underlying biological structure, into a feedback interconnection of two input–output subsystems: a small subsystem with complicating nonlinearities and a large subsystem with simple dynamics. This decomposition simplifies the analysis of large pathways by assembling region of attraction certificates based on the input–output properties of the subsystems. It enables numerical as well as analytical construction of block-diagonal Lyapunov functions for a large family of autocatalytic pathways.https://resolver.caltech.edu/CaltechAUTHORS:20110624-104150056Amplification and nonlinear mechanisms in plane Couette flow
https://resolver.caltech.edu/CaltechAUTHORS:20110722-112408053
Year: 2011
DOI: 10.1063/1.3599701
We study the input-output response of a streamwise constant projection of the Navier-Stokes equations for plane Couette flow, the so-called 2D/3C model. Study of a streamwise constant model is motivated by numerical and experimental observations that suggest the prevalence and importance of streamwise and quasi-streamwise elongated structures. Periodic spanwise/wall-normal (z–y) plane stream functions are used as input to develop a forced 2D/3C streamwise velocity field that is qualitatively similar to a fully turbulent spatial field of direct numerical simulation data. The input-output response associated with the 2D/3C nonlinear coupling is used to estimate the energy optimal spanwise wavelength over a range of Reynolds numbers. The results of the input-output analysis agree with previous studies of the linearized Navier-Stokes equations. The optimal energy corresponds to minimal nonlinear coupling. On the other hand, the nature of the forced 2D/3C streamwise velocity field provides evidence that the nonlinear coupling in the 2D/3C model is responsible for creating the well known characteristic "S" shaped turbulent velocity profile. This indicates that there is an important tradeoff between energy amplification, which is primarily linear, and the seemingly nonlinear momentum transfer mechanism that produces a turbulent-like mean profile.https://resolver.caltech.edu/CaltechAUTHORS:20110722-112408053Glycolytic Oscillations and Limits on Robust Efficiency
https://resolver.caltech.edu/CaltechAUTHORS:20110712-150344607
Year: 2011
DOI: 10.1126/science.1200705
Both engineering and evolution are constrained by trade-offs between efficiency and robustness, but theory that formalizes this fact is limited. For a simple two-state model of glycolysis, we explicitly derive analytic equations for hard trade-offs between robustness and efficiency with oscillations as an inevitable side effect. The model describes how the trade-offs arise from individual parameters, including the interplay of feedback control with autocatalysis of network products necessary to power and catalyze intermediate reactions. We then use control theory to prove that the essential features of these hard trade-off "laws" are universal and fundamental, in that they depend minimally on the details of this system and generalize to the robust efficiency of any autocatalytic network. The theory also suggests worst-case conditions that are consistent with initial experiments.https://resolver.caltech.edu/CaltechAUTHORS:20110712-150344607Robustness, Optimization, and Architectures
https://resolver.caltech.edu/CaltechAUTHORS:20120123-110052341
Year: 2011
DOI: 10.3166/ejc.17.472-482
This paper will review recent progress on developing a unified theory for complex networks from biological systems and physics to engineering and technology. Insights into what the potential universal laws, architecture, and organizational principles are can be drawn from three converging research themes: growing attention to complexity and robustness in systems biology, layering and organization in network technology, and new mathematical frameworks for the study of complex networks. We will illustrate how tools in robust control theory and optimization can be integrated towards such unified theory by focusing on their applications in biology, physics, network design, and electric grid.https://resolver.caltech.edu/CaltechAUTHORS:20120123-110052341Architecture, constraints, and behavior
https://resolver.caltech.edu/CaltechAUTHORS:20111003-134717951
Year: 2011
DOI: 10.1073/pnas.1103557108
PMCID: PMC3176601
This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems.https://resolver.caltech.edu/CaltechAUTHORS:20111003-134717951The magnitude distribution of earthquakes near Southern California faults
https://resolver.caltech.edu/CaltechAUTHORS:20120123-130627739
Year: 2011
DOI: 10.1029/2010JB007933
We investigate seismicity near faults in the Southern California Earthquake Center Community Fault Model. We search for anomalously large events that might be signs of a characteristic earthquake distribution. We find that seismicity near major fault zones in Southern California is well modeled by a Gutenberg-Richter distribution, with no evidence of characteristic earthquakes within the resolution limits of the modern instrumental catalog. However, the b value of the locally observed magnitude distribution is found to depend on distance to the nearest mapped fault segment, which suggests that earthquakes nucleating near major faults are likely to have larger magnitudes relative to earthquakes nucleating far from major faults.https://resolver.caltech.edu/CaltechAUTHORS:20120123-130627739Sepsis: Something old, something new, and a systems view
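The b value discussed in the abstract above, the slope of the Gutenberg-Richter magnitude-frequency relation log₁₀ N(≥M) = a − bM, is commonly estimated by the Aki (1965) maximum-likelihood formula b = log₁₀(e) / (M̄ − M_c), where M̄ is the mean magnitude of events at or above the completeness magnitude M_c. A minimal sketch on a synthetic catalog (a generic estimator, not the paper's specific analysis):

```python
import numpy as np

LOG10_E = np.log10(np.e)

def b_value_mle(mags, m_c):
    """Aki (1965) maximum-likelihood b-value estimate for magnitudes >= m_c."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return LOG10_E / (m.mean() - m_c)

# Synthetic Gutenberg-Richter catalog: magnitudes above completeness M_c
# are exponentially distributed with rate b * ln(10), i.e. scale log10(e)/b.
rng = np.random.default_rng(0)
b_true, m_c = 1.0, 2.0
mags = m_c + rng.exponential(scale=LOG10_E / b_true, size=50_000)
b_hat = b_value_mle(mags, m_c)
```

Computing this estimate in bins of distance to the nearest mapped fault is one simple way to probe the distance dependence of b that the authors report; refinements (e.g. magnitude-binning corrections to M_c) are described in the seismological literature.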
https://resolver.caltech.edu/CaltechAUTHORS:20120702-142216264
Year: 2012
DOI: 10.1016/j.jcrc.2011.05.025
Sepsis is a clinical syndrome characterized by a multisystem response to a microbial pathogenic insult consisting of a mosaic of interconnected biochemical, cellular, and organ-organ interaction networks. A central thread that connects these responses is inflammation that, while attempting to defend the body and prevent further harm, causes further damage through the feed-forward, proinflammatory effects of damage-associated molecular pattern molecules. In this review, we address the epidemiology and current definitions of sepsis and focus specifically on the biologic cascades that comprise the inflammatory response to sepsis. We suggest that attempts to improve clinical outcomes by targeting specific components of this network have been unsuccessful due to the lack of an integrative, predictive, and individualized systems-based approach to define the time-varying, multidimensional state of the patient. We highlight the translational impact of computational modeling and other complex systems approaches as applied to sepsis, including in silico clinical trials, patient-specific models, and complexity-based assessments of physiology.https://resolver.caltech.edu/CaltechAUTHORS:20120702-142216264Congestion Control for Multicast Flows With Network Coding
https://resolver.caltech.edu/CaltechAUTHORS:20121008-112913029
Year: 2012
DOI: 10.1109/TIT.2012.2204170
Recent advances in network coding have shown great potential for efficient information multicasting in communication networks, in terms of both network throughput and network management. In this paper, the problem of flow control at end-systems for network-coding-based multicast flows is addressed. Optimization-based models are formulated for network resource allocation, based on which two sets of decentralized controllers at sources and links/nodes for congestion control are developed for wired networks with given coding subgraphs and without given coding subgraphs, respectively. With random network coding, both sets of controllers can be implemented in a distributed manner, and work at the transport layer to adjust source rates and at network layer to carry out network coding. The convergence of the proposed controllers to the desired equilibrium operating points is proved, and numerical examples are provided to complement the theoretical analysis. The extension to wireless networks is also briefly discussed.https://resolver.caltech.edu/CaltechAUTHORS:20121008-112913029A multiscale modeling approach to inflammation: A case study in human endotoxemia
https://resolver.caltech.edu/CaltechAUTHORS:20130628-075744899
Year: 2013
DOI: 10.1016/j.jcp.2012.09.024
Inflammation is a critical component in the body's response to injury. A dysregulated inflammatory response, in which either the injury is not repaired or the inflammatory response does not appropriately self-regulate and end, is associated with a wide range of inflammatory diseases such as sepsis. Clinical management of sepsis is a significant problem, but progress in this area has been slow. This may be due to the inherent nonlinearities and complexities in the interacting multiscale pathways that are activated in response to systemic inflammation, motivating the application of systems biology techniques to better understand the inflammatory response. Here, we review our past work on a multiscale modeling approach applied to human endotoxemia, a model of systemic inflammation, consisting of a system of compartmentalized differential equations operating at different time scales and through a discrete model linking inflammatory mediators with changing patterns in the beating of the heart, which has been correlated with outcome and severity of inflammatory disease despite unclear mechanistic underpinnings. Working towards unraveling the relationship between inflammation and heart rate variability (HRV) may enable greater understanding of clinical observations as well as novel therapeutic targets.https://resolver.caltech.edu/CaltechAUTHORS:20130628-075744899In Darwinian evolution, feedback from natural selection leads to biased mutations
https://resolver.caltech.edu/CaltechAUTHORS:20140203-132837058
Year: 2013
DOI: 10.1111/nyas.12235
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory.https://resolver.caltech.edu/CaltechAUTHORS:20140203-132837058Resilience in Large Scale Distributed Systems
https://resolver.caltech.edu/CaltechAUTHORS:20151002-161607634
Year: 2014
DOI: 10.1016/j.procs.2014.03.036
Distributed systems are comprised of multiple subsystems that interact in two distinct ways: (1) physical interactions and (2) cyber interactions; i.e. sensors, actuators and computers controlling these subsystems, and the network over which they communicate. A broad class of cyber-physical systems (CPS) are described by such interactions, such as the smart grid, platoons of autonomous vehicles and the sensorimotor system. This paper will survey recent progress in developing a coherent mathematical framework that describes the rich CPS "design space" of fundamental limits and tradeoffs between efficiency, robustness, adaptation, verification and scalability. Whereas most research treats at most one of these issues, we attempt a holistic approach in examining these metrics. In particular, we will argue that a control architecture that emphasizes scalability leads to improvements in robustness, adaptation, and verification, all the while having only minor effects on efficiency – i.e. through the choice of a new architecture, we believe that we are able to bring a system closer to the true fundamental hard limits of this complex design space.https://resolver.caltech.edu/CaltechAUTHORS:20151002-161607634Robust efficiency and actuator saturation explain healthy heart rate control and variability
https://resolver.caltech.edu/CaltechAUTHORS:20140805-091023982
Year: 2014
DOI: 10.1073/pnas.1401883111
PMCID: PMC4143073
The correlation of healthy states with heart rate variability (HRV) using time series analyses is well documented. Whereas these studies note the accepted proximal role of autonomic nervous system balance in HRV patterns, the responsible deeper physiological, clinically relevant mechanisms have not been fully explained. Using mathematical tools from control theory, we combine mechanistic models of basic physiology with experimental exercise data from healthy human subjects to explain causal relationships among states of stress vs. health, HR control, and HRV, and more importantly, the physiologic requirements and constraints underlying these relationships. Nonlinear dynamics play an important explanatory role––most fundamentally in the actuator saturations arising from unavoidable tradeoffs in robust homeostasis and metabolic efficiency. These results are grounded in domain-specific mechanisms, tradeoffs, and constraints, but they also illustrate important, universal properties of complex systems. We show that the study of complex biological phenomena like HRV requires a framework which facilitates inclusion of diverse domain specifics (e.g., due to physiology, evolution, and measurement technology) in addition to general theories of efficiency, robustness, feedback, dynamics, and supporting mathematical tools.https://resolver.caltech.edu/CaltechAUTHORS:20140805-091023982The mathematician's control toolbox for management of type 1 diabetes
https://resolver.caltech.edu/CaltechAUTHORS:20141211-084533887
Year: 2014
DOI: 10.1098/rsfs.2014.0042
PMCID: PMC4142019
Blood glucose levels are controlled by well-known physiological feedback loops: high glucose levels promote insulin release from the pancreas, which in turn stimulates cellular glucose uptake. Low blood glucose levels promote pancreatic glucagon release, stimulating glycogen breakdown to glucose in the liver. In healthy people, this control system is remarkably good at maintaining blood glucose in a tight range despite many perturbations to the system imposed by diet and fasting, exercise, medications and other stressors. Type 1 diabetes mellitus (T1DM) results from loss of the insulin-producing cells of the pancreas, the beta cells. These cells serve as both sensor (of glucose levels) and actuator (insulin/glucagon release) in a control physiological feedback loop. Although the idea of rebuilding this feedback loop seems intuitively easy, considerable control mathematics involving multiple types of control schema were necessary to develop an artificial pancreas that still does not function as well as evolved control mechanisms. Here, we highlight some tools from control engineering used to mimic normal glucose control in an artificial pancreas, and the constraints, trade-offs and clinical consequences inherent in various types of control schemes. T1DM can be viewed as a loss of normal physiologic controls, as can many other disease states. For this reason, we introduce basic concepts of control engineering applicable to understanding pathophysiology of disease and development of physiologically based control strategies for treatment.https://resolver.caltech.edu/CaltechAUTHORS:20141211-084533887Buffering Dynamics and Stability of Internet Congestion Controllers
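The abstract above describes rebuilding the glucose feedback loop with control engineering tools. A minimal sketch of the core idea, using a proportional-integral (PI) controller on an invented first-order toy plant (not any clinical algorithm or model from the paper): the integral term is what removes the steady-state offset left by proportional control after a sustained meal disturbance.

```python
# Toy linear glucose model under a PI controller standing in for the
# artificial pancreas. The model, gains, and units are invented for
# illustration only.

def simulate(kp, ki, steps=5000, dt=0.1):
    g = 0.0          # glucose deviation from setpoint
    integral = 0.0   # accumulated error (the integral feedback term)
    for t in range(steps):
        meal = 1.0 if t * dt > 10 else 0.0    # sustained meal disturbance
        error = g
        integral += error * dt
        insulin = kp * error + ki * integral  # PI control action
        # first-order plant: disturbance raises glucose, insulin lowers it
        g += dt * (-0.1 * g + meal - insulin)
    return g

print(abs(simulate(kp=1.0, ki=0.0)))  # proportional only: residual offset
print(abs(simulate(kp=1.0, ki=0.5)))  # with integral action: near zero
```

The contrast between the two runs is the trade-off the abstract alludes to: pure proportional feedback cannot cancel a constant disturbance, while integral action can, at the cost of slower, potentially oscillatory transients.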
https://resolver.caltech.edu/CaltechAUTHORS:20150109-091116665
Year: 2014
DOI: 10.1109/TNET.2013.2287198
Many existing fluid-flow models of Internet congestion control algorithms make simplifying assumptions on the effects of buffers on the data flows. In particular, they assume that the flow rate of a TCP flow at every link in its path is equal to the original source rate. However, a fluid flow in practice is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate. In this paper, a more accurate model is derived for the behavior of the network under a congestion controller, which takes into account the effect of buffering on output flows. It is shown how this model can be deployed for some well-known service disciplines such as first-in–first-out and generalized weighted fair queueing. Based on the derived model, the dual and primal-dual algorithms are studied under the common pricing mechanisms, and it is shown that these algorithms can become unstable. Sufficient conditions are provided to guarantee the stability of the dual and primal-dual algorithms. Finally, a new pricing mechanism is proposed under which these congestion control algorithms are both stable.https://resolver.caltech.edu/CaltechAUTHORS:20150109-091116665The ℋ_2 Control Problem for Quadratically Invariant Systems With Delays
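The dual algorithm studied in the abstract above can be illustrated with a classic Kelly-style sketch: sources with log utilities react to a link price, and the link updates its price on excess demand. The weights, capacity, and step size here are invented for illustration; the paper's contribution is the richer buffering model, not this idealized loop.

```python
# Minimal dual congestion-control algorithm on a single link shared by
# two sources with weighted log utilities. Illustrative parameters only.

def dual_algorithm(weights, capacity, gamma=0.01, iters=20000):
    p = 0.1  # initial link price
    for _ in range(iters):
        # each source maximizes w*log(x) - p*x, giving rate x = w/p
        rates = [w / p for w in weights]
        y = sum(rates)                              # aggregate link rate
        p = max(p + gamma * (y - capacity), 1e-9)   # price update on excess demand
    return rates

rates = dual_algorithm(weights=[1.0, 2.0], capacity=3.0)
# at equilibrium the link is full and rates split in proportion to weights
```

The equilibrium price is (w1 + w2)/c, so the rates converge to [1.0, 2.0] here; the abstract's point is that once realistic buffering is modeled, this kind of loop can lose stability unless the pricing mechanism is chosen carefully.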
https://resolver.caltech.edu/CaltechAUTHORS:20150716-123217938
Year: 2015
DOI: 10.1109/TAC.2014.2363917
This technical note gives a new solution to the output feedback ℋ_2 problem for quadratically invariant communication delay patterns. A characterization of all stabilizing controllers satisfying the delay constraints is given and the decentralized ℋ_2 problem is cast as a convex model matching problem. The main result shows that the model matching problem can be reduced to a finite-dimensional quadratic program. A recursive state-space method for computing the optimal controller based on vectorization is given.https://resolver.caltech.edu/CaltechAUTHORS:20150716-123217938On Channel Failures, File Fragmentation Policies, and Heavy-Tailed Completion Times
https://resolver.caltech.edu/CaltechAUTHORS:20160317-122351101
Year: 2016
DOI: 10.1109/TNET.2014.2375920
It has been recently discovered that heavy-tailed completion times can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the use of a restart policy where if the file is interrupted before it is completed, it needs to restart from the beginning. In this paper, we show that fragmenting a file into pieces whose sizes are either bounded or independently chosen after each interruption guarantees light-tailed completion time as long as the file size is light-tailed; i.e., in this case, heavy-tailed completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the completion time is necessarily heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation, the completion time tail distribution function is asymptotically bounded above by that of the original file size stretched by a constant factor. We then prove that if the distribution of times between interruptions has nondecreasing failure rate, the expected completion time is minimized by dividing the file into equal-sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy where the fragment sizes are constant and independent of the file size and prove that it is asymptotically optimal. Both these policies are also shown to have desirable completion time tail behavior. Finally, we bound the error in expected completion time due to error in modeling of the failure process.https://resolver.caltech.edu/CaltechAUTHORS:20160317-122351101Even Noisy Responses Can Be Perfect If Integrated Properly
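The restart effect in the abstract above can be made concrete with the classical formula for a job of size s restarted from scratch under exponentially distributed interruptions of rate lam: E[T] = (exp(lam*s) - 1)/lam. The exponential cost in s is what generates heavy tails from restarts, and its convexity is why equal-sized fragments beat unequal ones for a fixed number of fragments. Sizes and rates below are invented for illustration.

```python
import math

# Expected completion time of one fragment of size s under a
# restart-from-scratch policy with exponential(lam) interruptions.
def expected_time(s, lam=1.0):
    return (math.exp(lam * s) - 1.0) / lam

def total_time(fragments, lam=1.0):
    return sum(expected_time(s, lam) for s in fragments)

# For a file of size 2 split into two fragments, convexity of exp(lam*s)
# makes the equal split cheapest, and any fragmentation beats none:
equal = total_time([1.0, 1.0])
unequal = total_time([0.5, 1.5])
whole = total_time([2.0])
# equal < unequal < whole
```

This matches the abstract's result that, when the interruption process has nondecreasing failure rate, expected completion time is minimized by equal-sized fragments.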
https://resolver.caltech.edu/CaltechAUTHORS:20160504-095129745
Year: 2016
DOI: 10.1016/j.cels.2016.02.012
Integral feedback for perfect adaptation is a ubiquitous strategy in engineering and biology. Long studied in deterministic settings, it can now be understood in the context of the fully stochastic systems that are prevalent in biology.https://resolver.caltech.edu/CaltechAUTHORS:20160504-095129745Evolutionary tradeoffs in cellular composition across diverse bacteria
https://resolver.caltech.edu/CaltechAUTHORS:20161117-075517086
Year: 2016
DOI: 10.1038/ismej.2016.21
One of the most important classic and contemporary interests in biology is the connection between cellular composition and physiological function. Decades of research have allowed us to understand the detailed relationship between various cellular components and processes for individual species, and have uncovered common functionality across diverse species. However, there still remains the need for frameworks that can mechanistically predict the tradeoffs between cellular functions and elucidate and interpret average trends across species. Here we provide a comprehensive analysis of how cellular composition changes across the diversity of bacteria as connected with physiological function and metabolism, spanning five orders of magnitude in body size. We present an analysis of the trends with cell volume that covers shifts in genomic, protein, cellular envelope, RNA and ribosomal content. We show that trends in protein content are more complex than a simple proportionality with the overall genome size, and that the number of ribosomes is simply explained by cross-species shifts in biosynthesis requirements. Furthermore, we show that the largest and smallest bacteria are limited by physical space requirements. At the lower end of size, cell volume is dominated by DNA and protein content—the requirement for which predicts a lower limit on cell size that is in good agreement with the smallest observed bacteria. At the upper end of bacterial size, we have identified a point at which the number of ribosomes required for biosynthesis exceeds available cell volume. Between these limits we are able to discuss systematic and dramatic shifts in cellular composition. Much of our analysis is connected with the basic energetics of cells where we show that the scaling of metabolic rate is surprisingly superlinear with all cellular components.https://resolver.caltech.edu/CaltechAUTHORS:20161117-075517086Interpretation of the Precision Matrix and Its Application in Estimating Sparse Brain Connectivity during Sleep Spindles from Human Electrocorticography Recordings
https://resolver.caltech.edu/CaltechAUTHORS:20170118-142813403
Year: 2017
DOI: 10.1162/NECO_a_00936
PMCID: PMC5424817
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, brain regions might show very high correlation even when the two regions are not directly connected due to the strong interaction of the two regions with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix or precision matrix (SRPM) assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an 8 x 8 electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present.https://resolver.caltech.edu/CaltechAUTHORS:20170118-142813403Novel computational method for predicting polytherapy switching strategies to overcome tumor heterogeneity and evolution
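The key quantity behind the SRPM method described above is the precision (inverse covariance) matrix, whose off-diagonal entries give partial correlations: −P_ij/√(P_ii·P_jj). A small sketch with an invented chain X → Y → Z shows the effect the abstract describes: X and Z are strongly correlated marginally, yet the precision matrix reveals there is no direct edge between them (this sketch omits the sparse regularization itself).

```python
import numpy as np

# Chain X -> Y -> Z: coefficients are invented for illustration.
rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)
y = 0.9 * x + 0.3 * rng.standard_normal(n)
z = 0.9 * y + 0.3 * rng.standard_normal(n)

cov = np.cov(np.vstack([x, y, z]))
prec = np.linalg.inv(cov)          # precision matrix

def partial_corr(p, i, j):
    # partial correlation between variables i and j given the rest
    return -p[i, j] / np.sqrt(p[i, i] * p[j, j])

marginal_xz = np.corrcoef(x, z)[0, 1]  # large: indirect coupling via y
partial_xz = partial_corr(prec, 0, 2)  # near zero: no direct x-z edge
```

In the paper's setting the precision matrix is additionally estimated with a sparsity penalty, which is what makes the approach viable for the 64-electrode ECoG arrays where the sample covariance alone would be too noisy.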
https://resolver.caltech.edu/CaltechAUTHORS:20170206-090159752
Year: 2017
DOI: 10.1038/srep44206
PMCID: PMC5347024
The success of targeted cancer therapy is limited by drug resistance that can result from tumor genetic heterogeneity. The current approach to address resistance typically involves initiating a new treatment after clinical/radiographic disease progression, ultimately resulting in futility in most patients. Towards a potential alternative solution, we developed a novel computational framework that uses human cancer profiling data to systematically identify dynamic, pre-emptive, and sometimes non-intuitive treatment strategies that can better control tumors in real-time. By studying lung adenocarcinoma clinical specimens and preclinical models, our computational analyses revealed that the best anti-cancer strategies addressed existing resistant subpopulations as they emerged dynamically during treatment. In some cases, the best computed treatment strategy used unconventional therapy switching while the bulk tumor was responding, a prediction we confirmed in vitro. The new framework presented here could guide the principled implementation of dynamic molecular monitoring and treatment strategies to improve cancer control.https://resolver.caltech.edu/CaltechAUTHORS:20170206-09015975215th International Conference on Complex Acute Illness (ICCAI) Society for Complex Acute Illness: August 10–14, 2016 at the Beckman Institute, Caltech
https://resolver.caltech.edu/CaltechAUTHORS:20161214-135019937
Year: 2017
DOI: 10.1016/j.jcrc.2016.11.011
[no abstract]https://resolver.caltech.edu/CaltechAUTHORS:20161214-135019937Effects of Delays, Poles, and Zeros on Time Domain Waterbed Tradeoffs and Oscillations
https://resolver.caltech.edu/CaltechAUTHORS:20170614-164911018
Year: 2017
DOI: 10.1109/LCSYS.2017.2710327
This letter aims for a simple and accessible explanation as to why oscillations naturally arise due to tradeoffs in feedback systems, and how these can be aggravated by delays and unstable poles and zeros. Such results have been standard for decades using frequency domain methods, which yield a rich variety of familiar "waterbed" tradeoffs. While almost trivial for control experts, frequency domain methods are less familiar to many scientists and engineers who could benefit from the insights such tradeoffs can provide. So here we present an entirely time domain model using discrete time dynamics and ℓ1 norm performance. A simple waterbed effect is that imposing zero steady state response to a step naturally creates oscillations that double the response to periodic disturbances. We also show how this tradeoff is further aggravated not only by unstable poles and zeros, but also delays, in a way clearer than in the frequency domain versions.https://resolver.caltech.edu/CaltechAUTHORS:20170614-164911018Evaluation of Hansen et al.: Nuance Is Crucial in Comparisons of Noise
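The doubling tradeoff in the abstract above has a one-line discrete-time illustration. A sensitivity that perfectly rejects steps must contain a differencing action, S(z) = 1 − z⁻¹; evaluated on an alternating (period-2) disturbance, that same action doubles the error. This is a sketch of the simplest case, not the letter's full ℓ1 analysis with delays and unstable poles/zeros.

```python
# Time-domain waterbed sketch: the differencer e[t] = d[t] - d[t-1]
# (the action of S(z) = 1 - z^{-1}) zeroes out steps but doubles an
# alternating disturbance.

def filtered(d):
    return [d[t] - (d[t - 1] if t > 0 else 0.0) for t in range(len(d))]

step = [1.0] * 20
alternating = [(-1.0) ** t for t in range(20)]

e_step = filtered(step)         # settles to 0: perfect step rejection
e_alt = filtered(alternating)   # |e[t]| = 2: disturbance amplified
```

The waterbed intuition is that pushing the response to one disturbance class down to zero necessarily pushes the response to another class up.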
https://resolver.caltech.edu/CaltechAUTHORS:20181024-161225614
Year: 2018
DOI: 10.1016/j.cels.2018.10.003
This is a first-round review of "Cytoplasmic Amplification of Transcriptional Noise Generates Substantial Cell-to-Cell Variability" by Leor Weinberger, Maike Hansen, and their colleagues; it was written for Cell Systems as part of the peer review process. We chose to feature it here because its nuanced treatment of noise, Hansen et al. (2018, this issue of Cell Systems), and Battich et al. (2015) exemplifies scholarship. The constructive critique it presents also improved Hansen et al. (2018) without imposing an agenda on its authors. After the first round of review, Hansen et al. (2018) was revised to take the reviewers' comments into account, re-submitted, re-reviewed, accepted for publication, and then published in this issue of Cell Systems. For comparison, an earlier version of Hansen et al. was deposited on bioRxiv ahead of review and can be found here: https://doi.org/10.1101/222901. Olsman et al. chose to reveal their identities within this peer review. Hansen et al. support the publication of this Peer Review; their permission to use it was obtained after their paper was officially accepted. This Peer Review was not itself peer reviewed. It has been lightly edited for stylistic polish and clarity. No scientific content has been substantively altered.https://resolver.caltech.edu/CaltechAUTHORS:20181024-161225614Separable and Localized System Level Synthesis for Large-Scale Systems
https://resolver.caltech.edu/CaltechAUTHORS:20180330-082054929
Year: 2018
DOI: 10.1109/TAC.2018.2819246
A major challenge faced in the design of large-scale cyber-physical systems, such as power systems, the Internet of Things or intelligent transportation systems, is that traditional distributed optimal control methods do not scale gracefully, neither in controller synthesis nor in controller implementation, to systems composed of a large number (e.g., on the order of billions) of interacting subsystems. This paper shows that this challenge can now be addressed by leveraging the recently introduced System Level Approach (SLA) to controller synthesis. In particular, in the context of the SLA, we define suitable notions of separability for control objective functions and system constraints such that the global optimization problem (or iterate update problems of a distributed optimization algorithm) can be decomposed into parallel subproblems. We then further show that if additional locality (i.e., sparsity) constraints are imposed, then these subproblems can be solved using local models and local decision variables. The SLA is essential to maintaining the convexity of the aforementioned problems under locality constraints. As a consequence, the resulting synthesis methods have O(1) complexity relative to the size of the global system. We further show that many optimal control problems of interest, such as (localized) LQR and LQG, H_2 optimal control with joint actuator and sensor regularization, and (localized) mixed H_2/L_1 optimal control problems, satisfy these notions of separability, and use these problems to explore tradeoffs in performance, actuator and sensing density, and average versus worst-case performance for a large-scale power inspired system.https://resolver.caltech.edu/CaltechAUTHORS:20180330-082054929A Control-Theoretic Approach to In-Network Congestion Management
https://resolver.caltech.edu/CaltechAUTHORS:20181005-093939759
Year: 2018
DOI: 10.1109/tnet.2018.2866785
WANs are often over-provisioned to accommodate worst-case operating conditions, with many links typically running at only around 30% capacity. In this paper, we show that in-network congestion management can play an important role in increasing network utilization. To mitigate the effects of in-network congestion caused by rapid variations in traffic demand, we propose using high-frequency traffic control (HFTraC) algorithms that exchange real-time flow rate and buffer occupancy information between routers to dynamically coordinate their link-service rates. We show that the design of such dynamic link-service rate policies can be cast as a distributed optimal control problem that allows us to systematically explore an enlarged design space of in-network congestion management algorithms. This also provides a means of quantitatively comparing different controller architectures: we show, perhaps surprisingly, that centralized control is not always better. We implement and evaluate HFTraC in the face of rapidly varying UDP and TCP flows and in combination with AQM algorithms. Using a custom experimental testbed, a Mininet emulator, and a production WAN, we show that HFTraC leads to up to 66% decreases in packet loss rates at high link utilizations as compared to FIFO policies.https://resolver.caltech.edu/CaltechAUTHORS:20181005-093939759Impact of Stochasticity and Its Control on a Model of the Inflammatory Response
https://resolver.caltech.edu/CaltechAUTHORS:20190128-135436096
Year: 2019
DOI: 10.3390/computation7010003
The dysregulation of inflammation, normally a self-limited response that initiates healing, is a critical component of many diseases. Treatment of inflammatory disease is hampered by an incomplete understanding of the complexities underlying the inflammatory response, motivating the application of systems and computational biology techniques in an effort to decipher this complexity and ultimately improve therapy. Many mathematical models of inflammation are based on systems of deterministic equations that do not account for the biological noise inherent at multiple scales, and consequently the effect of such noise in regulating inflammatory responses has not been studied widely. In this work, noise was added to a deterministic system of the inflammatory response in order to account for biological stochasticity. Our results demonstrate that the inflammatory response is highly dependent on the balance between the concentration of the pathogen and the level of biological noise introduced to the inflammatory network. In cases where the pro- and anti-inflammatory arms of the response do not mount the appropriate defense to the inflammatory stimulus, inflammation transitions to a different state compared to cases in which pro- and anti-inflammatory agents are elaborated adequately and in a timely manner. In this regard, our results show that noise can be both beneficial and detrimental for the inflammatory endpoint. By evaluating the parametric sensitivity of noise characteristics, we suggest that efficiency of inflammatory responses can be controlled. Interestingly, the time period on which parametric intervention can be introduced efficiently in the inflammatory system can be also adjusted by controlling noise. These findings represent a novel understanding of inflammatory systems dynamics and the potential role of stochasticity thereon.https://resolver.caltech.edu/CaltechAUTHORS:20190128-135436096Architectural Principles for Characterizing the Performance of Antithetic Integral Feedback Networks
https://resolver.caltech.edu/CaltechAUTHORS:20180927-114225624
Year: 2019
DOI: 10.1016/j.isci.2019.04.004
PMCID: PMC6479019
As we begin to design increasingly complex synthetic biomolecular systems, it is essential to develop rational design methodologies that yield predictable circuit performance. Here we apply theoretical tools from the theory of control and dynamical systems to yield practical insights into the architecture and function of a particular class of biological feedback circuit. Specifically, we show that it is possible to analytically characterize both the operating regime and performance tradeoffs of a sequestration feedback circuit architecture. Further, we demonstrate how these principles can be applied to inform the design process of a particular synthetic feedback circuit.https://resolver.caltech.edu/CaltechAUTHORS:20180927-114225624System level synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190513-131853183
Year: 2019
DOI: 10.1016/j.arcontrol.2019.03.006
This article surveys the System Level Synthesis framework, which presents a novel perspective on constrained robust and optimal controller synthesis for linear systems. We show how SLS shifts the controller synthesis task from the design of a controller to the design of the entire closed loop system, and highlight the benefits of this approach in terms of scalability and transparency. We emphasize two particular applications of SLS, namely large-scale distributed optimal control and robust control. In the case of distributed control, we show how SLS allows for localized controllers to be computed, extending robust and optimal control methods to large-scale systems under practical and realistic assumptions. In the case of robust control, we show how SLS allows for novel design methodologies that, for the first time, quantify the degradation in performance of a robust controller due to model uncertainty – such transparency is key in allowing robust control methods to interact, in a principled way, with modern techniques from machine learning and statistical inference. Throughout, we emphasize practical and efficient computational solutions, and demonstrate our methods on easy to understand case studies.https://resolver.caltech.edu/CaltechAUTHORS:20190513-131853183Hard Limits and Performance Tradeoffs in a Class of Antithetic Integral Feedback Networks
https://resolver.caltech.edu/CaltechAUTHORS:20190708-153642582
Year: 2019
DOI: 10.1016/j.cels.2019.06.001
Feedback regulation is pervasive in biology at both the organismal and cellular level. In this article, we explore the properties of a particular biomolecular feedback mechanism called antithetic integral feedback, which can be implemented using the binding of two molecules. Our work develops an analytic framework for understanding the hard limits, performance tradeoffs, and architectural properties of this simple model of biological feedback control. Using tools from control theory, we show that there are simple parametric relationships that determine both the stability and the performance of these systems in terms of speed, robustness, steady-state error, and leakiness. These findings yield a holistic understanding of the behavior of antithetic integral feedback and contribute to a more general theory of biological control systems.https://resolver.caltech.edu/CaltechAUTHORS:20190708-153642582A System Level Approach to Controller Synthesis
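The antithetic integral feedback motif analyzed above can be illustrated with a direct simulation: two controller species z1 and z2 annihilate each other, and this sequestration implements integral action, pinning the regulated species x at the set point mu/theta regardless of the plant parameters. The parameter values below are invented for illustration and are not taken from the paper.

```python
# Euler simulation of the antithetic integral feedback motif:
#   z1' = mu - eta*z1*z2,  z2' = theta*x - eta*z1*z2,  x' = k*z1 - gamma*x
# Parameters invented for illustration.

def simulate_aif(gamma, mu=2.0, theta=1.0, eta=10.0, k=1.0,
                 dt=0.01, steps=40000):
    x = z1 = z2 = 0.0
    for _ in range(steps):
        ann = eta * z1 * z2               # annihilation (sequestration) flux
        z1 += dt * (mu - ann)             # reference channel
        z2 += dt * (theta * x - ann)      # measurement channel
        x += dt * (k * z1 - gamma * x)    # regulated species
        z1, z2, x = max(z1, 0.0), max(z2, 0.0), max(x, 0.0)
    return x

# Perfect adaptation: the steady state mu/theta = 2 is reached for
# different plant degradation rates gamma.
print(simulate_aif(gamma=1.0))
print(simulate_aif(gamma=2.0))
```

The paper's contribution is the analytic characterization of when this loop is stable and how fast and robust it can be; the sketch only shows the set-point property itself for one stable parameter choice.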
https://resolver.caltech.edu/CaltechAUTHORS:20190201-155312160
Year: 2019
DOI: 10.1109/tac.2018.2890753
Biological and advanced cyber-physical control systems often have limited, sparse, uncertain, and distributed communication and computing in addition to sensing and actuation. Fortunately, the corresponding plants and performance requirements are also sparse and structured, and this must be exploited to make constrained controller design feasible and tractable. We introduce a new "system level" (SL) approach involving three complementary SL elements. SL parameterizations (SLPs) provide an alternative to the Youla parameterization of all stabilizing controllers and the responses they achieve, and combine with SL constraints (SLCs) to parameterize the largest known class of constrained stabilizing controllers that admit a convex characterization, generalizing quadratic invariance. SLPs also lead to a generalization of detectability and stabilizability, suggesting the existence of a rich separation structure, that when combined with SLCs is naturally applicable to structurally constrained controllers and systems. We further provide a catalog of useful SLCs, most importantly including sparsity, delay, and locality constraints on both communication and computing internal to the controller, and external system performance. Finally, we formulate SL synthesis problems, which define the broadest known class of constrained optimal control problems that can be solved using convex programming.https://resolver.caltech.edu/CaltechAUTHORS:20190201-155312160Lessons from "a first-principles approach to understanding the internet's router-level topology"
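The system level parameterization described above replaces the controller as the design object with the closed-loop response maps. In the finite-horizon state-feedback case these maps Phi_x, Phi_u must satisfy Phi_x[1] = I and Phi_x[t+1] = A·Phi_x[t] + B·Phi_u[t]. A small numerical sketch, with A, B, and the static gain K invented for illustration, verifies that the responses induced by any gain satisfy this achievability constraint:

```python
import numpy as np

# Verify the SLP achievability constraint for the closed-loop maps
# induced by a static state-feedback gain K (all matrices invented).
A = np.array([[1.0, 0.2], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-0.8, -1.2]])

T = 10
Phi_x = [np.eye(2)]            # Phi_x[1] = I
Phi_u = [K @ Phi_x[0]]
for t in range(T - 1):
    Phi_x.append((A + B @ K) @ Phi_x[t])   # closed-loop state response
    Phi_u.append(K @ Phi_x[t + 1])         # closed-loop input response

# check Phi_x[t+1] = A Phi_x[t] + B Phi_u[t] at every step
residual = max(
    np.abs(A @ Phi_x[t] + B @ Phi_u[t] - Phi_x[t + 1]).max()
    for t in range(T - 1)
)
# residual is zero up to floating point
```

The converse direction is the useful one in the paper: any Phi_x, Phi_u satisfying the constraint are achievable by some controller, which is what lets sparsity and locality constraints be imposed convexly on the closed loop itself.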
https://resolver.caltech.edu/CaltechAUTHORS:20191111-134656474
Year: 2019
DOI: 10.1145/3371934.3371964
Our main purpose for this editorial is to reiterate the main message that we tried to convey in our SIGCOMM'04 paper but that got largely lost in all the hype surrounding the use of scale-free network models throughout the sciences in the last two decades. That message was that because of (1) the Internet's highly-engineered architecture, (2) a thorough understanding of its component technologies, and (3) the availability of extensive (but typically noisy) measurements, this complex man-made system affords unique opportunities to unambiguously resolve most claims about its properties, structure, and functionality. In the process, we point out the fallacy of popular approaches that consider complex systems such as the Internet from the perspective of disorganized complexity and argue for renewed efforts and increased focus on advancing an "architecture first" view with its emphasis on studying the organized complexity of systems such as the Internet.https://resolver.caltech.edu/CaltechAUTHORS:20191111-134656474Fundamental Limits and Tradeoffs in Autocatalytic Pathways
https://resolver.caltech.edu/CaltechAUTHORS:20190613-142206629
Year: 2020
DOI: 10.1109/tac.2019.2921671
This paper develops some basic principles to study autocatalytic networks and exploit their structural properties in order to characterize their inherent fundamental limits and tradeoffs. In a dynamical system with autocatalytic structure, the system's output is necessary to catalyze its own production. Our study has been motivated by a simplified model of a glycolysis pathway. First, the properties of this class of pathways are investigated through a network model, which consists of a chain of enzymatically catalyzed intermediate reactions coupled with an autocatalytic component. We explicitly derive a hard limit on the minimum achievable L₂-gain disturbance attenuation and a hard limit on its minimum required output energy. Then, we show how these resulting hard limits lead to some fundamental tradeoffs between transient and steady-state behavior of the network and its net production.https://resolver.caltech.edu/CaltechAUTHORS:20190613-142206629SBML Level 3: an extensible format for the exchange and reuse of biological models
https://resolver.caltech.edu/CaltechAUTHORS:20200827-073238775
Year: 2020
DOI: 10.15252/msb.20199110
Systems biology has experienced dramatic growth in the number, size, and complexity of computational models. To reproduce simulation results and reuse models, researchers must exchange unambiguous model descriptions. We review the latest edition of the Systems Biology Markup Language (SBML), a format designed for this purpose. A community of modelers and software authors developed SBML Level 3 over the past decade. Its modular form consists of a core suited to representing reaction‐based models and packages that extend the core with features suited to other model types including constraint‐based models, reaction‐diffusion models, logical network models, and rule‐based models. The format leverages two decades of SBML and a rich software ecosystem that transformed how systems biologists build and interact with models. More recently, the rise of multiscale models of whole cells and organs, and new data sources such as single‐cell measurements and live imaging, has precipitated new ways of integrating data with models. We provide our perspectives on the challenges presented by these developments and how SBML Level 3 provides the foundation needed to support this evolution.https://resolver.caltech.edu/CaltechAUTHORS:20200827-073238775Metabolic multistability and hysteresis in a model aerobe-anaerobe microbiome community
https://resolver.caltech.edu/CaltechAUTHORS:20200813-144241318
Year: 2020
DOI: 10.1126/sciadv.aba0353
PMCID: PMC7423363
Major changes in the microbiome are associated with health and disease. Some microbiome states persist despite seemingly unfavorable conditions, such as the proliferation of aerobe-anaerobe communities in oxygen-exposed environments in wound infections or small intestinal bacterial overgrowth. Mechanisms underlying transitions into and persistence of these states remain unclear. Using two microbial taxa relevant to the human microbiome, we combine genome-scale mathematical modeling, bioreactor experiments, transcriptomics, and dynamical systems theory to show that multistability and hysteresis (MSH) is a mechanism describing the shift from an aerobe-dominated state to a resilient, paradoxically persistent aerobe-anaerobe state. We examine the impact of changing oxygen and nutrient regimes and identify changes in metabolism and gene expression that lead to MSH and associated multi-stable states. In such systems, conceptual causation-correlation connections break and MSH must be used for analysis. Using MSH to analyze microbiome dynamics will improve our conceptual understanding of stability of microbiome states and transitions between states.
WheelCon: A Wheel Control-Based Gaming Platform for Studying Human Sensorimotor Control
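The multistability-and-hysteresis (MSH) mechanism in the microbiome abstract above can be illustrated with a toy one-dimensional bistable system (a generic textbook model, not the genome-scale model used in the paper): sweeping an input up and then back down leaves the state on different stable branches at the same input value.

```python
def settle(u, x0, dt=0.01, steps=20000):
    """Integrate dx/dt = x - x**3 + u to steady state (forward Euler)."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3 + u)
    return x

# Sweep the input u from -1 to 1 and back, carrying the state along.
us = [round(i / 10 - 1.0, 1) for i in range(21)]   # -1.0, -0.9, ..., 1.0
up, down = {}, {}
x = -1.5
for u in us:
    x = settle(u, x)
    up[u] = x
x = 1.5
for u in reversed(us):
    x = settle(u, x)
    down[u] = x

# At u = 0.0 the state depends on history: two coexisting stable states.
print(up[0.0], down[0.0])   # lower branch near -1, upper branch near +1
```

The jumps between branches occur at different input values on the way up and the way down, which is the hysteresis loop the paper identifies in the aerobe-anaerobe community.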
https://resolver.caltech.edu/CaltechAUTHORS:20200909-133721965
Year: 2020
DOI: 10.3791/61092
WheelCon is a novel, free, open-source platform for designing video games that noninvasively simulate mountain biking down a steep, twisting, bumpy trail. It contains the components present in human sensorimotor control (delay, quantization, noise, disturbance, and multiple feedback loops) and allows researchers to study the layered architecture of sensorimotor control.
Heart rate and blood pressure decreases after a motor task in pre‐symptomatic AD
https://resolver.caltech.edu/CaltechAUTHORS:20201210-160240164
Year: 2020
DOI: 10.1002/alz.045521
Background. Understanding how cardiovascular health affects early Alzheimer's disease (AD) pathology is challenging because several variables can contribute to significant changes in blood pressure (BP) and heart rate (HR). Previous studies have suggested an association between Alzheimer's disease and HR, but little is known about this association in pre‐symptomatic AD. We aim to explore HR and BP changes before and after a motor task in a cognitively healthy population.
Method. Participants (aged 61‐95 years) were recruited from the local community, comprising cognitively healthy (CH) individuals who were subdivided based on cerebrospinal fluid (CSF) classification: those with a normal amyloid/tau ratio (CH‐NAT, n = 11) or a pathological amyloid/tau ratio (CH‐PAT, n = 8). The two groups were matched for age, gender, BMI, and education. Participants were asked to use a steering wheel to follow a moving line presented on the monitor. A practice session was followed by three task‐identical sessions of 90 seconds each. Each session consisted of three 30‐second blocks: bump, trail, and bump & trail. Systolic pressure (SP), diastolic pressure (DP), pulse pressure (PP), and HR were measured before and after completing the whole task.
Result. No differences in BP and HR were found between CH‐NATs and CH‐PATs. We found a significant decrease in SP for both CH‐NATs (pre‐task 135.2±24.2, post‐task 127.2±18.8, reduced by 8±9.7, p = 0.0208) and CH‐PATs (pre‐task: 146.1±20.6, post‐task: 134.8±19.6, reduced by 11.3±5.2, p = 0.0004). Furthermore, CH‐PATs showed a significant drop in HR (pre‐task: 71.4±10.7, post‐task: 66.3±8.2, reduced by 5.1±4.1 beats, p = 0.0098) compared to CH‐NATs (pre‐task: 70.8±11.8, post‐task: 69.2±13.3, reduced by 1.5±6.2 beats, p = 0.4302). CH‐PATs also showed a decreased HR×SP product (pre‐task: 10464±2375.6, post‐task: 8941.1±1843.1, reduced by 1522.9±775.8, p = 0.0009), compared to CH‐NATs (pre‐task: 9537.2±2098.3, post‐task: 8765.2±1769.2, reduced by 772.0±1153.5, p = 0.0507). No significant change was found in DP for either CH‐NATs or CH‐PATs.
Conclusion. Pre‐symptomatic AD participants showed a significant drop in HR, SP, and SP×HR compared to CH‐NATs, indicating reduced sympathetic responses after the motor task. These changes in HR and SP may provide evidence of compromised cardiovascular health and autonomic regulation in pre‐symptomatic AD.
Online Robust Control of Nonlinear Systems with Large Uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20210510-094452379
Year: 2021
DOI: 10.48550/arXiv.2103.11055
Robust control is a core approach for controlling systems with performance guarantees that are robust to modeling error, and is widely used in real-world systems. However, current robust control approaches can only handle small system uncertainty, and thus require significant effort in system identification prior to controller design. We present an online approach that robustly controls a nonlinear system under large model uncertainty. Our approach is based on decomposing the problem into two sub-problems, "robust control design" (which assumes small model uncertainty) and "chasing consistent models", which can be solved using existing tools from control theory and online learning, respectively. We provide a learning convergence analysis that yields a finite mistake bound on the number of times performance requirements are not met and can provide strong safety guarantees, by bounding the worst-case state deviation. To the best of our knowledge, this is the first approach for online robust control of nonlinear systems with such learning theoretic and safety guarantees. We also show how to instantiate this framework for general robotic systems, demonstrating the practicality of our approach.
Diversity-enabled sweet spots in layered architectures and speed–accuracy trade-offs in sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20210510-141357701
Year: 2021
DOI: 10.1073/pnas.1916367118
PMCID: PMC8179159
Nervous systems sense, communicate, compute, and actuate movement using distributed components with severe trade-offs in speed, accuracy, sparsity, noise, and saturation. Nevertheless, brains achieve remarkably fast, accurate, and robust control performance due to a highly effective layered control architecture. Here, we introduce a driving task to study how a mountain biker mitigates the immediate disturbance of trail bumps and responds to changes in trail direction. We manipulated the time delays and accuracy of the control input from the wheel as a surrogate for manipulating the characteristics of neurons in the control loop. The observed speed–accuracy trade-offs motivated a theoretical framework consisting of two layers of control loops—a fast, but inaccurate, reflexive layer that corrects for bumps and a slow, but accurate, planning layer that computes the trajectory to follow—each with components having diverse speeds and accuracies within each physical level, such as nerve bundles containing axons with a wide range of sizes. Our model explains why the errors from two control loops are additive and shows how the errors in each control loop can be decomposed into the errors caused by the limited speeds and accuracies of the components. These results demonstrate that an appropriate diversity in the properties of neurons across layers helps to create "diversity-enabled sweet spots," so that both fast and accurate control is achieved using slow or inaccurate components.
Internal feedback in the cortical perception–action loop enables fast and accurate behavior
https://authors.library.caltech.edu/records/s2bj5-0ty91
Year: 2023
DOI: 10.1073/pnas.2300445120
PMCID: PMC10523540
Animals move smoothly and reliably in unpredictable environments. Models of sensorimotor control, drawing on control theory, have assumed that sensory information from the environment leads to actions, which then act back on the environment, creating a single, unidirectional perception–action loop. However, the sensorimotor loop contains internal delays in sensory and motor pathways, which can lead to unstable control. We show here that these delays can be compensated by internal feedback signals that flow backward, from motor toward sensory areas. This internal feedback is ubiquitous in neural sensorimotor systems, and we show how internal feedback compensates internal delays. This is accomplished by filtering out self-generated and other predictable changes so that unpredicted, actionable information can be rapidly transmitted toward action by the fastest components, effectively compressing the sensory input to more efficiently use feedforward pathways: Tracts of fast, giant neurons necessarily convey less accurate signals than tracts with many smaller neurons, but they are crucial for fast and accurate behavior. We use a mathematically tractable control model to show that internal feedback has an indispensable role in achieving state estimation, localization of function (how different parts of the cortex control different parts of the body), and attention, all of which are crucial for effective sensorimotor control. This control model can explain anatomical, physiological, and behavioral observations, including motor signals in the visual cortex, heterogeneous kinetics of sensory receptors, and the presence of giant cells in the cortex of humans as well as internal feedback patterns and unexplained heterogeneity in neural systems.
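The delay-compensation role of internal feedback described in the abstract above can be sketched with a toy discrete-time example (an illustrative Smith-predictor-style construction, not the paper's control model; the integrator plant, delay, and gain are invented for the sketch). The controller filters out the self-generated, predictable part of the delayed measurement by adding back the effect of its own not-yet-sensed commands — an efference copy:

```python
from collections import deque

def run(delay=5, gain=0.5, steps=80, r=1.0, internal_feedback=True):
    """Integrator plant x[t+1] = x[t] + u[t], sensed with a fixed delay."""
    x = 0.0
    sensed = deque([0.0] * delay, maxlen=delay)    # measurements in flight
    recent_u = deque([0.0] * delay, maxlen=delay)  # commands not yet reflected in y
    xs = []
    for _ in range(steps):
        y = sensed[0]                  # state as it was `delay` steps ago
        if internal_feedback:
            # Efference copy: add the predicted effect of recent commands,
            # recovering the current state from the stale measurement.
            x_hat = y + sum(recent_u)
        else:
            x_hat = y                  # act on the stale measurement directly
        u = gain * (r - x_hat)
        sensed.append(x)
        recent_u.append(u)
        x += u
        xs.append(x)
    return xs

with_if = run(internal_feedback=True)    # settles smoothly at the reference
without = run(internal_feedback=False)   # same gain rings and diverges
```

With the efference copy, the estimate equals the true state exactly in this noise-free sketch, so the loop behaves as if there were no delay; without it, the same gain is unstable at this delay — the basic reason delayed loops need internal feedback.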