CaltechAUTHORS: Combined
https://feeds.library.caltech.edu/people/Doyle-J-C/combined.rss
A Caltech Library Repository Feed (python-feedgen; en; last built Fri, 14 Jun 2024 19:26:28 -0700)

Guaranteed margins for LQG regulators
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210560
Year: 1978
DOI: 10.1109/tac.1978.1101812
There are none.

Robustness with observers
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210784
Year: 1979
DOI: 10.1109/cdc.1978.267883
This paper describes an adjustment procedure for observer-based linear control systems which asymptotically achieves the same loop transfer functions (and hence the same relative stability, robustness, and disturbance rejection properties) as full-state feedback control implementations.

Robustness of multiloop linear feedback systems
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210699
Year: 1979
DOI: 10.1109/cdc.1978.267885
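This entry generalizes scalar frequency-domain notions such as gain to multiloop systems via singular values. As a minimal sketch of that idea (not code from the paper), the largest and smallest singular values of a frequency-response matrix bound the gain over all input directions; the matrix G below is a made-up example:

```python
import numpy as np

def gain_bounds(G):
    """Largest and smallest singular values of a frequency-response
    matrix: the maximum and minimum gains over input directions."""
    s = np.linalg.svd(G, compute_uv=False)  # singular values, descending
    return s[0], s[-1]

# Hypothetical 2x2 frequency-response matrix at one frequency.
G = np.array([[2.0, 1.0],
              [0.0, 0.5]])
smax, smin = gain_bounds(G)
# For any unit input direction u, smin <= ||G @ u|| <= smax.
```

Plotting these two bounds against frequency gives a MIMO analogue of the scalar Bode magnitude plot.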
This paper presents a new approach to the frequency-domain analysis of multiloop linear feedback systems. The properties of the return difference equation are examined using the concepts of singular values, singular vectors, and the spectral norm of a matrix. A number of new tools for multiloop systems are developed which are analogous to those for scalar Nyquist and Bode analysis. These provide a generalization of scalar frequency-domain notions such as gain, bandwidth, stability margins, and M-circles, and provide considerable insight into system robustness.

Robustness with observers
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210874
Year: 1979
DOI: 10.1109/tac.1979.1102095
This paper describes an adjustment procedure for observer-based linear control systems which asymptotically achieves the same loop transfer functions (and hence the same relative stability, robustness, and disturbance rejection properties) as full-state feedback control implementations.

Multivariable feedback design: Concepts for a classical/modern synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210959
Year: 1981
DOI: 10.1109/tac.1981.1102555
This paper presents a practical design perspective on multivariable feedback control problems. It reviews the basic issue, feedback design in the face of uncertainties, and generalizes known single-input, single-output (SISO) statements and constraints of the design problem to multi-input, multi-output (MIMO) cases. Two major MIMO design approaches are then evaluated in the context of these results.

Analysis of feedback systems with structured uncertainties
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211227
Year: 1982
DOI: 10.1049/ip-d.1982.0053
The paper introduces a general approach for analysing linear systems with structured uncertainty based on a new generalised spectral theory for matrices. The results of the paper naturally extend techniques based on singular values and eliminate their most serious difficulties.

Performance and robustness analysis for structured uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211137
Year: 1982
DOI: 10.1109/cdc.1982.268218
This paper introduces a nonconservative measure of performance for linear feedback systems in the face of structured uncertainty. This measure is based on a new matrix function, which we call the Structured Singular Value.

Digikon and Honey-X: Interactive packages for control system analysis and design
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211049
Year: 1982
DOI: 10.1109/mcs.1982.1103759
[no abstract]

Synthesis of robust controllers and filters
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211312
Year: 1983
DOI: 10.1109/cdc.1983.269806
This paper outlines a general framework for analysis and synthesis of linear control systems and reports on a new solution to a very general L_∞/H_∞ optimal control problem.

On inner-outer and spectral factorizations
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211397
Year: 1984
DOI: 10.1109/cdc.1984.272438
This paper outlines methods for computing the key factorizations necessary to solve general H_2 and H_∞ linear optimal control problems.

Matrix interpolation and H_∞ performance bounds
https://resolver.caltech.edu/CaltechAUTHORS:20170724-174332489
Year: 1985
This paper introduces a methodology for obtaining bounds on the achievable performance of a multivariable control system involving tradeoffs between potentially conflicting performance requirements.

The general distance problem in H_∞ synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211485
Year: 1985
DOI: 10.1109/cdc.1985.268720
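The γ-iteration described in this entry searches for the zero crossing of a continuous, monotonically decreasing, convex function of γ. A minimal bisection sketch of that zero-crossing idea, with a made-up surrogate function standing in for the actual distance-problem computation:

```python
def zero_crossing(h, lo, hi, tol=1e-10):
    """Bisection for the zero of a continuous, monotonically
    decreasing function h on [lo, hi], assuming h(lo) > 0 > h(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) > 0.0:
            lo = mid  # zero lies to the right
        else:
            hi = mid  # zero lies to the left
    return 0.5 * (lo + hi)

# Hypothetical decreasing, convex surrogate; its zero is at gamma = 3.
gamma_opt = zero_crossing(lambda g: 4.0 / (g + 1.0) - 1.0, 0.0, 10.0)
```

The convexity and simple bounding functions noted in the abstract are what allow the real scheme to converge much faster than plain bisection.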
The general distance problem which arises in the general H_∞ optimal control problem is considered. The existence of an optimal solution is proved and the expression for the optimal norm γ_o is obtained from a somewhat abstract operator point of view. An iterative scheme, called γ-iteration, is introduced which reduces the general distance problem to a standard best approximation problem. Bounds for γ_o are also derived. The γ-iteration is viewed as a problem of finding the zero crossing of a function. This function is shown to be continuous, monotonically decreasing, convex, and bounded by some very simple functions. These properties make it possible to obtain very rapid convergence of the iterative process. The issue of model reduction in H_∞ synthesis is also addressed.

Structured uncertainty in control system design
https://resolver.caltech.edu/CaltechAUTHORS:20170719-174055387
Year: 1985
DOI: 10.1109/CDC.1985.268842
This paper reviews control system analysis and synthesis techniques for robust performance with structured uncertainty in the form of multiple unstructured perturbations and parameter variations. The structured singular value, µ, plays a central role. The case where parameter variations are known to be real is considered.

Quantitative Feedback Theory (QFT) and Robust Control
https://resolver.caltech.edu/CaltechAUTHORS:20170718-173812529
Year: 1986
QFT, a theory developed by Horowitz [H3], is claimed by its advocates to provide a complete and general treatment of feedback design for highly uncertain multi-input multi-output (MIMO) systems. This paper reviews QFT and shows that while the philosophy behind QFT is attractive, the claims for the theory are unjustified. In particular, counterexamples are given for the main theorem of QFT on which the claims are based. This is in spite of the severe assumptions (no right-half-plane zeros and fixed relative degree) that QFT requires on the plant model.

Design examples using µ-synthesis: Space shuttle lateral axis FCS during reentry
https://resolver.caltech.edu/CaltechAUTHORS:20190312-154519116
Year: 1986
DOI: 10.1109/CDC.1986.267482
This paper studies the application of Structured Singular Values (SSV or µ) for analysis and synthesis of the Space Shuttle lateral axis flight control system (FCS) during reentry. While this is a fairly standard FCS problem in most respects, the aircraft model is highly uncertain due to the poorly known aerodynamic characteristics (e.g. aero coefficients). Comparisons are made of the conventional FCS with alternatives based on H∞ optimal control and µ-synthesis. The problem as formulated is particularly interesting and challenging because the uncertainty is large and highly structured.

Uncertain Multivariable Systems from a State Space Perspective
https://resolver.caltech.edu/CaltechAUTHORS:20190319-110904871
Year: 1987
DOI: 10.23919/ACC.1987.4789666
This paper introduces some new extensions of μ analysis for LTI systems with structured uncertainty to time varying and nonlinear systems.

Control of Plants with Input Saturation Nonlinearities
https://resolver.caltech.edu/CaltechAUTHORS:20190320-075948577
Year: 1987
DOI: 10.23919/ACC.1987.4789464
This paper considers control design for systems with input magnitude saturation. Four examples, two SISO and two MIMO, are used to illustrate the properties of several existing schemes. A new method based on a modification of conventional antiwindup compensation is introduced. It is assumed that the reader is familiar with the problem of integral windup for saturating plants and with conventional schemes for dealing with it.

When is classical loop shaping H∞-optimal?
https://resolver.caltech.edu/CaltechAUTHORS:20190313-142557997
Year: 1987
DOI: 10.23919/ACC.1987.4789392
This paper examines conditions under which a given SISO LTI control system is H∞-optimal with respect to weighted combinations of its sensitivity function and its complementary sensitivity function. The specific weighting functions considered are defined in terms of the sensitivity and complementary sensitivity functions. We show that a large class of practical controllers are in fact H∞-optimal, including typical stable controllers.

Robust Control with an H_2 Performance Objective
https://resolver.caltech.edu/CaltechAUTHORS:20190315-134650754
Year: 1987
DOI: 10.23919/ACC.1987.4789665
This paper considers the problem of designing robust controllers with an H_2 performance objective. A modified version of μ-synthesis is proposed and compared with two alternative schemes.

Linear Control Theory with an ℋ∞ Optimality Criterion
https://resolver.caltech.edu/CaltechAUTHORS:FRAsiamjco87
Year: 1987
DOI: 10.1137/0325046
This expository paper sets out the principal results in ℋ∞ control theory in the context of continuous-time linear systems. The focus is on the mathematical theory rather than computational methods.

Relations between H_∞ and risk sensitive controllers
https://resolver.caltech.edu/CaltechAUTHORS:20200930-113054991
Year: 1988
DOI: 10.1007/bfb0042196
The motivation for designing controllers to satisfy H_∞-norm bounds on specified closed-loop transfer functions is briefly discussed. The characterization of all such controllers is then described and it is shown that the controller that maximizes a corresponding entropy integral is in fact the steady state risk sensitive optimal controller. This gives a direct relation between robust and stochastic control.

On the Caltech Experimental Large Space Structure
https://resolver.caltech.edu/CaltechAUTHORS:20190313-113336008
Year: 1988
DOI: 10.23919/ACC.1988.4789995
This paper focuses on a large space structure experiment developed at the California Institute of Technology. The main thrust of the experiment is to address the identification and robust control issues associated with large space structures by capturing their characteristics in the laboratory. The design, modeling, identification and control objectives are discussed within the paper.

Structured singular value with repeated scalar blocks
https://resolver.caltech.edu/CaltechAUTHORS:20170712-154141091
Year: 1988
The structured singular value, μ, is an important linear algebra tool for studying a class of matrix perturbation problems [Doy]. It is useful for analyzing the robustness of stability and performance of dynamical systems [DoyWS]. This paper studies uncertainty structures involving repeated scalar parameters in more detail than in [Doy]. In [DoyP], it was shown that the frequency-domain μ tests of [DoyWS] can conceptually be reduced to a single constant-matrix μ test, but the uncertainty structure must be augmented with a large repeated scalar block. This paper studies the properties of μ and its upper bound with these types of uncertainty blocks, and compares frequency-domain versus state-space μ-based tests, assuming that the upper bound is what can be reliably computed.

Robustness in the Presence of Joint Parametric Uncertainty and Unmodeled Dynamics
https://resolver.caltech.edu/CaltechAUTHORS:20190313-074303113
Year: 1988
DOI: 10.23919/ACC.1988.4789902
It is shown that, in the case of joint real parametric and complex uncertainty, Doyle's structured singular value can be obtained as the solution of a smooth constrained optimization problem. While this problem may have local maxima, an improved computable upper bound to the structured singular value is derived, leading to a sufficient condition for robust stability and performance.

State-space solutions to standard H_2 and H_∞ control problems
https://resolver.caltech.edu/CaltechAUTHORS:20170712-162848745
Year: 1988
DOI: 10.23919/ACC.1988.4789992
Simple state-space formulas are presented for a controller solving a standard H_∞-problem. The controller has the same state-dimension as the plant, its computation involves only two Riccati equations, and it has a separation structure reminiscent of classical LQG (i.e., H_2) theory. This paper is also intended to be of tutorial value, so a standard H_2-solution is developed in parallel.

A General Statement of Structured Singular Value Concepts
https://resolver.caltech.edu/CaltechAUTHORS:20170712-164435070
Year: 1988
Some key concepts of structured singular value theory for the stability- and performance-robustness analysis of linear time-invariant multivariable systems are stated. Using a set-invariance principle, the theory is then generalized to allow for nonlinear and/or time-varying nominal systems and uncertainties. The general theory is then re-specialized to the case of nominally linear time-invariant systems subject to L2-induced-norm-bounded uncertainties.

Controller Order Reduction with Guaranteed Stability and Performance
https://resolver.caltech.edu/CaltechAUTHORS:20190315-152803266
Year: 1988
DOI: 10.23919/ACC.1988.4789993
In this paper we consider the problem of controller order reduction for control design for robust performance. In practical control design it may be important to have low-order controllers. For example, one may want to gain-schedule a series of LTI (linear, time-invariant) controllers, or give simple physical interpretations to the control dynamics. When solving practical design problems using, say, H∞ software, it is common to produce controllers of high order, equal to the order of the plant plus the orders of the weighting functions. However, there may be lower-order controllers which stabilize the plant and provide satisfactory H∞ closed-loop performance. The objectives of a method for controller order reduction within the H∞ framework, then, should be to find low-order controllers which stabilize a given plant and provide satisfactory H∞ performance. Ideally, the method should apply to a large class of problems, be easy to implement, and be guaranteed to work.

A power method for the structured singular value
https://resolver.caltech.edu/CaltechAUTHORS:20170712-160003131
Year: 1988
DOI: 10.1109/CDC.1988.194710
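This entry's power algorithm generalizes the classical power iterations for eigenvalues and singular values, adding structured updates per uncertainty block. Shown here is only the unstructured special case it resembles, a power iteration for the largest singular value (a hypothetical helper for illustration, not the paper's algorithm):

```python
import numpy as np

def largest_singular_value(A, iters=200):
    """Classical power iteration: alternate u <- Av/||Av|| and
    v <- A^T u/||A^T u||; ||Av|| converges to the largest singular
    value for a generic starting vector."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = A @ v
        u /= np.linalg.norm(u)
        v = A.T @ u
        v /= np.linalg.norm(v)
    return np.linalg.norm(A @ v)

A = np.array([[3.0, 0.0],
              [0.0, 1.0]])
sigma = largest_singular_value(A)  # converges to 3.0 for this A
```

As in the paper's scheme, each iteration needs only matrix-vector products, and convergence yields a (lower) estimate of the quantity sought.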
An iterative algorithm is presented to compute lower bounds for the structured singular value (μ). The algorithm resembles a mixture of power methods for eigenvalues and singular values, since the structured singular value can be viewed as a generalization of both. If the algorithm converges, a lower bound for μ results. The authors prove that μ is always an equilibrium point of the algorithm. However, since in general there are many equilibrium points, some heuristic ideas to achieve convergence are presented. Extensive numerical experience with the algorithm is discussed.

Robust control of ill-conditioned plants: high-purity distillation
https://resolver.caltech.edu/CaltechAUTHORS:SKOieeetac88
Year: 1988
DOI: 10.1109/9.14431
Using a high-purity distillation column as an example, the physical reason for the poor conditioning and its implications for control system design and performance are explained. It is shown that an acceptable performance/robustness tradeoff cannot be obtained by simple loop-shaping techniques (using singular values) and that a good understanding of the model uncertainty is essential for robust control system design. Physically motivated uncertainty descriptions (actuator uncertainties) are translated into the H∞/structured singular value framework, which is demonstrated to be a powerful tool to analyze and understand the complex phenomena.

Optimal control with mixed H_2 and H∞ performance objectives
https://resolver.caltech.edu/CaltechAUTHORS:20190313-095844804
Year: 1989
DOI: 10.23919/ACC.1989.4790529
This paper considers the analysis and synthesis of control systems subject to two types of disturbance signals: signals with bounded power spectral density and signals with bounded power. The resulting control problem involves minimizing a mixed H_2 and H∞ norm of the system. It is shown that the controller shares a separation property similar to that of the pure H_2 or H∞ controllers. It is also shown that the mixed problem reduces naturally to the pure H_2 and H∞ problems in special cases. Some necessary and sufficient conditions are obtained for the existence of a solution to the mixed problem. Explicit state-space formulae are given for the optimal controllers.

Identification for Robust Control of Flexible Structures
https://resolver.caltech.edu/CaltechAUTHORS:20190313-092315130
Year: 1989
DOI: 10.23919/ACC.1989.4790620
An accurate multivariable transfer function model of an experimental structure is required for research involving robust control of flexible structures. Initially, a multi-input/multi-output model of the structure is generated using the finite element method. This model was insufficient due to its variation from the experimental data. Therefore, Chebyshev polynomials are employed to fit the data with single-input/multi-output transfer function models. Combining these leads to a multivariable model with more modes than the original finite element model. To find a physically motivated model, an ad hoc model reduction technique which uses a priori knowledge of the structure is developed. The ad hoc approach is compared with balanced realization model reduction to determine its benefits. Plots of selected transfer function models and experimental data are included.

Model Invalidation: A Connection between Robust Control and Identification
https://resolver.caltech.edu/CaltechAUTHORS:20190313-075623254
Year: 1989
DOI: 10.23919/ACC.1989.4790413
This paper begins to address the gap between the models used in robust control theory and those obtained from identification experiments by considering the connection between uncertain models and data. The model invalidation problem considered here is: given experimental data and a model with both additive noise and norm-bounded perturbations, is it possible that the model could produce the input/output data?

State-space solutions to standard H_2 and H_∞ control problems
https://resolver.caltech.edu/CaltechAUTHORS:DOYieeetac89
Year: 1989
DOI: 10.1109/9.29425
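Per the abstract below, existence of a suboptimal controller is characterized by conditions on the stabilizing solutions of two algebraic Riccati equations. Assuming those solutions X and Y are already in hand (computing them is the hard part and is omitted here), a sketch of the final admissibility check; the matrices are made up for illustration:

```python
import numpy as np

def gamma_admissible(X, Y, gamma, tol=1e-9):
    """Check the quoted existence conditions for a given gamma:
    X and Y positive (semi)definite, and spectral radius
    rho(X @ Y) < gamma**2."""
    psd = (np.linalg.eigvalsh(X).min() >= -tol
           and np.linalg.eigvalsh(Y).min() >= -tol)
    rho = max(abs(np.linalg.eigvals(X @ Y)))  # spectral radius
    return psd and rho < gamma ** 2

# Hypothetical stabilizing Riccati solutions for illustration.
X = np.array([[2.0, 0.0], [0.0, 1.0]])
Y = np.array([[1.0, 0.0], [0.0, 0.5]])
# Here rho(XY) = 2, so gamma = 2 passes (2 < 4) but gamma = 1 fails.
```

Bisecting on gamma with a check like this is the usual way a near-optimal closed-loop norm is located in practice.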
Simple state-space formulas are derived for all controllers solving the following standard H∞ problem: For a given number γ > 0, find all controllers such that the H∞ norm of the closed-loop transfer function is (strictly) less than γ. It is known that a controller exists if and only if the unique stabilizing solutions to two algebraic Riccati equations are positive definite and the spectral radius of their product is less than γ². Under these conditions, a parameterization of all controllers solving the problem is given as a linear fractional transformation (LFT) on a contractive, stable, free parameter. The state dimension of the coefficient matrix for the LFT, constructed using the two Riccati solutions, equals that of the plant, and it has a separation structure reminiscent of classical LQG (i.e., H_2) theory. This paper is intended to be of tutorial value, so a standard H_2 solution is developed in parallel.

Vibration damping and robust control of the JPL/AFAL experiment using µ-synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190313-102804933
Year: 1989
DOI: 10.1109/CDC.1989.70668
The technology for controlling elastic deformations of flexible structures is one of the key considerations for future space initiatives. A vital area needed to achieve this objective is the development of a control design methodology applicable to future structures. The µ-synthesis technique is employed to design a high-performance vibration attenuation controller for the JPL/AFAL experimental flexible antenna structure. The results presented deal primarily with the control of the first two global flexible modes using only two hub actuators and two hub sensors. Implementation of the multivariable control laws based on a finite-element model is presented. All results are from actual implementation on the JPL/AFAL flexible structure testbed.

Quadratic stability with real and complex perturbations
https://resolver.caltech.edu/CaltechAUTHORS:20190313-100417671
Year: 1990
DOI: 10.1109/9.45179
It is shown that the equivalence between real and complex perturbations in the context of quadratic stability to linear, fractional, unstructured perturbations does not hold when the perturbations are block structured. For a limited class of problems, quadratic stability in the face of structured complex perturbations is equivalent to a particular class of scaled norms, and hence appropriate synthesis techniques, coupled with diagonal constant scalings, can be used to design quadratically stable systems.

Collocated versus Non-collocated Multivariable Control for Flexible Structure
https://resolver.caltech.edu/CaltechAUTHORS:20190313-090053071
Year: 1990
DOI: 10.23919/ACC.1990.4791064
Future space structures have many closely spaced, lightly damped natural frequencies throughout the frequency domain. To achieve desired performance objectives, a number of these modes must be actively controlled. For control, a combination of collocated and noncollocated sensors and actuators will be employed. The control designs will be formulated based on models which have inaccuracies due to unmodeled dynamics and variations in damping levels, natural frequencies, and mode shapes. Therefore, along with achieving the performance objectives, the control design must be robust to a variety of uncertainties. This paper focuses on the benefits and limitations associated with multivariable control design using noncollocated versus collocated sensors and actuators. We address the question of whether performance is restricted due to the noncollocation of the sensors and actuators or due to the uncertainty associated with modeling of flexible structures. Control laws are formulated based on models of the system and evaluated analytically and experimentally. Results of implementation of these control laws on the Caltech flexible structure are presented.

Towards a Methodology for Robust Parameter Identification
https://resolver.caltech.edu/CaltechAUTHORS:20190313-083856627
Year: 1990
DOI: 10.23919/ACC.1990.4791156
The paper considers the problem of estimating, from experimental data, real parameters for a model with uncertainty in the form of both additive noise and norm bounded perturbations. Such models frequently arise in robust control theory, and a framework is introduced for the consideration of experimental data in robust control analysis problems. If the analysis tools applied include robust stability tests for real parameter variations (real μ), then the framework can be used to address the problem of "robust" parameter identification. While the techniques discussed here can quickly become computationally overwhelming when applied to physical systems and real data, the approach introduces a new way of looking at the identification problem and may be helpful in arriving at a more tractable methodology.

Mixed H_2 and H∞ control
https://resolver.caltech.edu/CaltechAUTHORS:20190313-085246171
Year: 1990
DOI: 10.23919/ACC.1990.4791177
Mixed H_2 and H∞ norm analysis and synthesis problems are considered in this paper. It is shown that the mixed norm analysis combined with structured uncertainty can be used to give bounds on robust H_2 and H∞ performance. It is also shown that the mixed norm controller shares a separation property similar to those of pure H_2 or H∞ controllers. The obvious advantage for a mixed norm is that it gives a natural trade-off between H_2 performance and H∞ performance, and provides a potential framework for extending the μ-synthesis framework to address robust H_2 performance. A simple example is used to motivate the possible advantages such a framework might have over a pure H∞ theory.

Identification of flexible structures for robust control
https://resolver.caltech.edu/CaltechAUTHORS:BALieeecsm90
Year: 1990
DOI: 10.1109/37.56278
Documentation is provided of the authors' experience with modeling and identification of an experimental flexible structure for the purpose of control design, with the primary aim being to motivate some important research directions in this area. A multi-input/multi-output (MIMO) model of the structure is generated using the finite element method. This model is inadequate for control design, due to its large variation from the experimental data. Chebyshev polynomials are employed to fit the data with single-input/multi-output (SIMO) transfer function models. Combining these SIMO models leads to a MIMO model with more modes than the original finite element model. To find a physically motivated model, an ad hoc model reduction technique which uses a priori knowledge of the structure is developed. The ad hoc approach is compared with balanced realization model reduction to determine its benefits. Descriptions of the errors between the model and experimental data are formulated for robust control design. Plots of select transfer function models and experimental data are included.

A J-Spectral Factorization Approach to ℋ∞ Control
https://resolver.caltech.edu/CaltechAUTHORS:20120508-131811131
Year: 1990
DOI: 10.1137/0328071
Necessary and sufficient conditions for the existence of suboptimal solutions to the standard model matching problem associated with ℋ∞ control are derived using J-spectral factorization theory. The existence of solutions to the model matching problem is shown to be equivalent to the existence of solutions to two coupled J-spectral factorization problems, with the second factor providing a parametrization of all solutions to the model matching problem. The existence of the J-spectral factors is then shown to be equivalent to the existence of nonnegative definite, stabilizing solutions to two indefinite algebraic Riccati equations, allowing a state-space formula for a linear fractional representation of all controllers to be given. A virtue of the approach is that a very general class of problems may be tackled within a conceptually simple framework, and no additional auxiliary Riccati equations are required.

Robustness and performance tradeoffs in control design for flexible structures
https://resolver.caltech.edu/CaltechAUTHORS:20190313-110937657
Year: 1990
DOI: 10.1109/CDC.1990.203334
The design of control laws for the Caltech flexible structure experiment using a nominal design model with varying levels of uncertainty is considered. A brief overview of the structured singular value (µ), H∞ control design, and µ-synthesis design techniques is presented. Tradeoffs associated with uncertainty modeling of flexible structures are discussed. A series of controllers are synthesized based on different uncertainty descriptions. It is shown that an improper selection of nominal and uncertainty models may lead to unstable or poorly performing controllers on the actual system. In contrast, if descriptions of uncertainty are overly conservative, performance of the closed-loop system may be severely limited. Experimental results on control laws synthesized for different uncertainty levels on the Caltech structure are presented.

Computation of µ with real and complex uncertainties
https://resolver.caltech.edu/CaltechAUTHORS:20190313-112035048
Year: 1990
DOI: 10.1109/CDC.1990.203804
The robustness analysis of system performance is one of the key issues in control theory, and one approach is to reduce this problem to that of computing the structured singular value, µ. When real parametric uncertainty is included, then µ must be computed with respect to a block structure containing both real and complex uncertainties. It is shown that µ is equivalent to a real eigenvalue maximization problem, and a power algorithm is developed to solve this problem. The algorithm has the property that µ is (almost) always an equilibrium point of the algorithm, and that whenever the algorithm converges a lower bound for µ results. This scheme has been found to have fairly good convergence properties. Each iteration of the scheme is very cheap, requiring only such operations as matrix-vector multiplications and vector inner products, and the method is sufficiently general to handle arbitrary numbers of repeated real scalars, repeated complex scalars, and full complex blocks.

Robustness in the presence of mixed parametric uncertainty and unmodeled dynamics
https://resolver.caltech.edu/CaltechAUTHORS:FANieeetac91
Year: 1991
DOI: 10.1109/9.62265
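This paper's upper bound addresses mixed real/complex uncertainty and involves additional scalings; as background, the classical complex-μ upper bound is the scaled norm inf_D σ̄(DMD⁻¹) over block-commuting scalings D. A crude grid-search sketch for a toy 2×2 example with two scalar complex blocks (not the paper's computation; the matrix M and the grid are made up, and for this M the bound is tight with μ(M) = 1):

```python
import numpy as np

def mu_upper_bound_2x2(M, ds=np.geomspace(1e-3, 1e3, 2001)):
    """Grid search for inf over d > 0 of the largest singular value
    of D M D^{-1}, with D = diag(d, 1): the D-scaling upper bound
    for a 2x2 M under two scalar complex uncertainty blocks."""
    best = np.inf
    for d in ds:
        scaled = np.diag([d, 1.0]) @ M @ np.diag([1.0 / d, 1.0])
        best = min(best, np.linalg.svd(scaled, compute_uv=False)[0])
    return best

M = np.array([[0.0, 2.0],
              [0.5, 0.0]])
bound = mu_upper_bound_2x2(M)  # near 1.0; best scaling is d = 0.5
```

In practice this infimum is computed by convex optimization rather than gridding; the paper's contribution is an improved, still-computable bound when some blocks are real.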
Continuing the development of the structured singular value approach to robust control design, the authors investigate the problem of computing μ in the case of mixed real parametric and complex uncertainty. The problem is shown to be equivalent to a smooth constrained finite-dimensional optimization problem. In view of the fact that the functional to be maximized may have several local extrema, an upper bound on μ whose computation is numerically tractable is established; this leads to a sufficient condition of robust stability and performance. A historical perspective on the development of the μ theory is included.

A Characterization of all Solutions to the Four Block General Distance Problem
https://resolver.caltech.edu/CaltechAUTHORS:20120419-081456730
Year: 1991
DOI: 10.1137/0329016
All solutions to the four block general distance problem which arises in H^∞ optimal control are characterized. The procedure is to embed the original problem in a constructed all-pass matrix; part of this all-pass matrix is then shown to act as a generator of all solutions. Special attention is given to the characterization of all optimal solutions by invoking a new descriptor characterization of all-pass transfer functions. As an application, necessary and sufficient conditions are found for the existence of an H^∞ optimal controller. Following that, a descriptor representation of all solutions is derived.https://resolver.caltech.edu/CaltechAUTHORS:20120419-081456730Development of Advanced Control Design Software for Researchers and Engineers
https://resolver.caltech.edu/CaltechAUTHORS:20190313-084558863
Year: 1991
DOI: 10.23919/ACC.1991.4791529
This paper provides a brief description of The μ Analysis and Synthesis Toolbox (μ-Tools), an advanced control design toolbox to be used in conjunction with MATLAB.https://resolver.caltech.edu/CaltechAUTHORS:20190313-084558863The Process of Control Design for the NASA Langley Minimast Structure
https://resolver.caltech.edu/CaltechAUTHORS:20170619-163711765
Year: 1991
DOI: 10.23919/ACC.1991.4791434
The NASA Langley Minimast Facility is an experimental flexible structure designed to emulate future large space structures. The Minimast system consists of an 18-bay, 20-meter-long truss beam structure which is cantilevered at its base from a rigid foundation. It is desired to use active control to attenuate the response of the structure at bays 10 and 18 due to impulse disturbances at bay 9 while minimizing actuator torque commanded from the torque wheel actuators. This paper details the design process used to select sensors for feedback and performance weights on the Minimast facility. Initially, a series of controllers are synthesized using H2 optimal control techniques for the given structural model, a variety of sensor locations and performance criteria to determine the "best" displacement sensor and/or accelerometers to be used for feedback. Upon selection of the sensors, controllers are formulated to determine the effect of using a reduced-order model of the Minimast structure instead of the higher order structural analysis model for control design and the relationship between the actuator torque level and the closed-loop performance. Based on this information, controllers are designed using μ-synthesis techniques and implemented on the Minimast structure. Results of the implementation of these controllers on the Minimast experimental facility are presented.https://resolver.caltech.edu/CaltechAUTHORS:20170619-163711765Review of LFTs, LMIs, and μ
https://resolver.caltech.edu/CaltechAUTHORS:20170619-173047637
Year: 1991
DOI: 10.1109/CDC.1991.261572
The purpose of this paper is to present a tutorial overview of Linear Fractional Transformations (LFT) and the role of the Structured Singular Value, μ, and Linear Matrix Inequalities (LMI) in solving LFT problems.https://resolver.caltech.edu/CaltechAUTHORS:20170619-173047637Stabilization of LFT systems
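As a concrete reminder of the central object in this tutorial, the lower LFT F_l(M, Δ) = M11 + M12 Δ (I − M22 Δ)^{-1} M21 can be evaluated directly. An illustrative helper (the partitioning convention is an assumption, not taken from the paper):

```python
import numpy as np

def lower_lft(M, Delta, n1):
    """Evaluate the lower linear fractional transformation
    F_l(M, Delta) = M11 + M12 Delta (I - M22 Delta)^{-1} M21,
    where the leading n1 rows/columns of M form the M11 block."""
    M11, M12 = M[:n1, :n1], M[:n1, n1:]
    M21, M22 = M[n1:, :n1], M[n1:, n1:]
    I = np.eye(M22.shape[0])
    # Well-posedness requires I - M22 Delta to be invertible.
    return M11 + M12 @ Delta @ np.linalg.solve(I - M22 @ Delta, M21)
```

For example, with M the 2x2 swap matrix and a scalar Delta, the LFT reduces to Delta itself.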
https://resolver.caltech.edu/CaltechAUTHORS:20190315-095540812
Year: 1991
DOI: 10.1109/CDC.1991.261575
The problem of parametrizing all stabilizing controllers for general linear fractional transformation (LFT) systems is studied. The LFT systems can be variously interpreted as multidimensional systems or uncertain systems, and the controller is allowed to have the same dependence on the frequency/uncertainty structure as the plant. For multidimensional systems, this means that the controller is allowed dynamic feedback, while the uncertain system case can be given a gain scheduling interpretation. Both mu and Q stability are considered, although the latter is emphasized. In both cases, the output feedback problem is reduced by a separation argument to two simpler problems, involving the dual problems of full information and full control. For Q stability, these problems can be characterized completely in terms of linear matrix inequalities. In the standard 1D system case with no uncertainty, the results in the present work reduce to the standard parametrization of D.C. Youla, H.A. Jabr and J.J. Bongiorno (1976), although the development appears to be much simpler, and does not require coprime factorizations.https://resolver.caltech.edu/CaltechAUTHORS:20190315-095540812µ analysis with real parametric uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20190318-131438205
Year: 1991
DOI: 10.1109/CDC.1991.261579
The authors give a broad overview, from a LFT (linear fractional transformation) µ perspective, of some of the theoretical and practical issues associated with robustness in the presence of real parametric uncertainty, with a focus on computation. Recent results on the properties of µ in the mixed case are reviewed, including issues of NP completeness, continuity, computation of bounds, the equivalence of µ and its bounds, and some direct comparisons with Kharitonov-type analysis methods. In addition, some advances in the computational aspects of the problem, including a branch-and-bound algorithm, are briefly presented, together with evidence that, although the mixed µ problem may have inherently combinatoric worst-case behavior, practical algorithms with modest computational requirements can be developed for problems of medium size (<100 parameters) that are of engineering interest.https://resolver.caltech.edu/CaltechAUTHORS:20190318-131438205Model reduction of LFT systems
https://resolver.caltech.edu/CaltechAUTHORS:20190319-091249008
Year: 1991
DOI: 10.1109/CDC.1991.261574
The notion of balanced realizations and balanced truncation model reduction, including guaranteed error bounds, is extended to general Q-stable linear fractional transformations (LFTs). Since both multidimensional and uncertain systems are naturally represented using LFTs, this can be interpreted either as doing state order reduction for multidimensional systems or as uncertainty simplification in the case of uncertain systems. The role of Lyapunov equations in the 1D theory is replaced by linear matrix inequalities (LMIs). All proofs are given in detail as they are very short and greatly simplify even the standard 1D case.https://resolver.caltech.edu/CaltechAUTHORS:20190319-091249008Let's Get Real
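In the standard 1D case that this work generalizes, the guaranteed error bound is the classical balanced truncation bound: twice the sum of the neglected Hankel singular values. A sketch of computing those values from the Gramians, assuming a stable system and SciPy's Lyapunov solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """Hankel singular values of a stable 1D system (A, B, C).
    Truncating to order r incurs an H-infinity error of at most
    2*(sigma_{r+1} + ... + sigma_n), the classical bound."""
    # Controllability Gramian: A P + P A' = -B B'
    P = solve_continuous_lyapunov(A, -B @ B.T)
    # Observability Gramian: A' Q + Q A = -C' C
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Hankel singular values are the square roots of eig(P Q).
    return np.sqrt(np.sort(np.linalg.eigvals(P @ Q).real)[::-1])
```

For the scalar system 1/(s+1), both Gramians equal 1/2 and the single Hankel singular value is 1/2.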
https://resolver.caltech.edu/CaltechCDSTR:1992.001
Year: 1992
This paper gives an overview of promising new developments in robust stability and performance analysis of linear control systems with real parametric uncertainty. The goal is to develop a practical algorithm for medium size problems, where medium size means less than 100 real parameters, and "practical" means avoiding combinatoric (nonpolynomial) growth in computation with the number of parameters for all of the problems which arise in engineering applications. We present an algorithm and experimental evidence to suggest that this goal has, for the first time, been achieved. We also place these results in context by comparing with other approaches to robustness analysis and considering potential extensions, including controller synthesis.https://resolver.caltech.edu/CaltechCDSTR:1992.001Practical computation of the mixed μ problem
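The branch-and-bound strategy alluded to here can be sketched generically: keep an incumbent lower bound, prune parameter boxes whose upper bound cannot beat it, and split the survivors. A minimal 1-D skeleton, with user-supplied (hypothetical) `lower`/`upper` bound functions standing in for the µ bounds:

```python
def branch_and_bound(lower, upper, box, tol=1e-4):
    """Generic branch-and-bound over an interval box = (lo, hi):
    `lower(box)` returns a value achieved inside the box,
    `upper(box)` returns an upper bound over the box.  Boxes whose
    upper bound cannot beat the incumbent by more than tol are
    pruned; the rest are bisected.  Returns a value within tol of
    the true maximum."""
    best = lower(box)
    work = [box]
    while work:
        lo, hi = work.pop()
        if upper((lo, hi)) <= best + tol:
            continue                      # prune: cannot beat incumbent
        best = max(best, lower((lo, hi))) # update incumbent
        mid = 0.5 * (lo + hi)
        work += [(lo, mid), (mid, hi)]    # bisect and recurse
    return best
```

The combinatoric worst case shows up as exponential box splitting; the paper's point is that engineering problems rarely trigger it.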
https://resolver.caltech.edu/CaltechAUTHORS:20190313-082229918
Year: 1992
DOI: 10.23919/ACC.1992.4792521
Upper and lower bounds for the mixed μ problem have recently been developed, and this paper examines the computational aspects of these bounds. In particular a practical algorithm is developed to compute the bounds. This has been implemented as a Matlab function (m-file), and will be available shortly in a test version in conjunction with the μ-Tools toolbox. The algorithm performance is very encouraging, both in terms of accuracy of the resulting bounds, and growth rate in required computation with problem size. In particular it appears that one can handle medium size problems (less than 100 perturbations) with reasonable computational requirements.https://resolver.caltech.edu/CaltechAUTHORS:20190313-082229918Model validation: a connection between robust control and identification
https://resolver.caltech.edu/CaltechAUTHORS:SMIieeetac92
Year: 1992
DOI: 10.1109/9.148346
The gap between the models used in control synthesis and those obtained from identification experiments is considered by investigating the connection between uncertain models and data. The model validation problem addressed is: given experimental data and a model with both additive noise and norm-bounded perturbations, is it possible that the model could produce the observed input-output data? This problem is studied for the standard H∞/μ framework models. A necessary condition for such a model to describe an experimental datum is obtained. For a large class of models in the robust control framework, this condition is computable as the solution of a quadratic optimization problem.https://resolver.caltech.edu/CaltechAUTHORS:SMIieeetac92Synthesizing robust mode shapes with μ and implicit model following
https://resolver.caltech.edu/CaltechAUTHORS:20170606-173114371
Year: 1992
DOI: 10.1109/CCA.1992.269732
Control synthesis problems involving assignment of closed-loop model shapes using implicit model following (IMF) structure are considered in the context of H_2, H∞ , and μ-synthesis theories. An extension to the dynamic output feedback case is given for the quadratic or H_2 IMF problem. The IMF problem is embedded within the framework of μ control theory, and extensions for including uncertainty are discussed. A robust synthesis methodology is presented using μ theory. An application of the robust IMF synthesis methodology to modal shape assignment for the longitudinal axis of a helicopter is demonstrated.https://resolver.caltech.edu/CaltechAUTHORS:20170606-173114371H∞ control of LFT systems: an LMI approach
https://resolver.caltech.edu/CaltechAUTHORS:20190319-080533883
Year: 1992
DOI: 10.1109/CDC.1992.371449
The standard H∞ control problem for linear state-space systems is extended to general LFT systems, which involve an LFT (linear fractional transformation) on a structured free parameter Delta and can be interpreted as uncertain systems with structured perturbations. Two generalizations of H∞ performance are considered, referred to as µ-performance and Q-performance, with the latter implying the former. Necessary and sufficient conditions for a system to have Q-performance and for the existence of a controller yielding Q-performance can be expressed in terms of structured linear matrix inequalities (LMIs).https://resolver.caltech.edu/CaltechAUTHORS:20190319-080533883Overview of robust stability and performance methods of systems with structured mixed perturbations
https://resolver.caltech.edu/CaltechAUTHORS:20190315-160750146
Year: 1992
DOI: 10.1109/CDC.1992.371246
Robust stability and performance analysis results for systems in the presence of structured mixed perturbations are outlined. Attention is limited to scalar perturbations. The goal is to develop succinctly an overall description of state of the art techniques in analyzing systems with mixed perturbations, and to point the reader to sources in the literature where more details and proofs can be found.https://resolver.caltech.edu/CaltechAUTHORS:20190315-160750146The parallel projection operators of a nonlinear feedback system
https://resolver.caltech.edu/CaltechAUTHORS:20190315-103331839
Year: 1992
DOI: 10.1109/CDC.1992.371556
The authors define and study a pair of nonlinear parallel projection operators associated with a nonlinear feedback system. The input-output L_2-stability of a feedback system is shown to be equivalent to a coordinatization of the input and output spaces, which is in turn equivalent to the existence of a pair of nonlinear parallel projection operators onto the graph of the plant and the inverse graph of the controller. These projections have equal norms whenever one of the feedback elements is linear. A bound on this norm is given in the case of passive systems with unity negative feedback.https://resolver.caltech.edu/CaltechAUTHORS:20190315-103331839Mixed µ upper bound computation
https://resolver.caltech.edu/CaltechAUTHORS:20190319-085951997
Year: 1992
DOI: 10.1109/CDC.1992.371241
Computation of the mixed real and complex µ upper bound expressed in terms of linear matrix inequalities (LMIs) is considered. Two existing methods are used, the Osborne (1960) method for balancing matrices, and the method of centers as proposed by Boyd and El Ghaoui (1991). These methods are compared, and a hybrid algorithm that combines the best features of each is proposed. Numerical experiments suggest that this hybrid algorithm provides an efficient method to compute the upper bound for mixed µ.https://resolver.caltech.edu/CaltechAUTHORS:20190319-085951997The complex structured singular value
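Osborne's balancing step is simple to state: for each index, rescale the row/column pair so its off-diagonal row and column norms match, which monotonically reduces the off-diagonal Frobenius norm of D M D^{-1}. A sketch of that step alone (not the hybrid algorithm of the paper):

```python
import numpy as np

def osborne_balance(M, sweeps=50):
    """Osborne (1960) diagonal balancing.  Each pass equalizes the
    off-diagonal row and column norms of every index, reducing the
    off-diagonal Frobenius norm of D M D^{-1}.  Returns the balanced
    matrix and the diagonal of D."""
    A = np.array(M, dtype=float)
    n = A.shape[0]
    d = np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            c = np.linalg.norm(np.delete(A[:, i], i))  # off-diagonal column norm
            r = np.linalg.norm(np.delete(A[i, :], i))  # off-diagonal row norm
            if c > 0 and r > 0:
                f = np.sqrt(c / r)   # equalizes the two norms at sqrt(c*r)
                A[i, :] *= f         # scale row i (D on the left)
                A[:, i] /= f         # scale column i (D^{-1} on the right)
                d[i] *= f
    return A, d
```

On the matrix [[0, 100], [1, 0]] a single pass balances both off-diagonal entries to 10.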
https://resolver.caltech.edu/CaltechAUTHORS:20170408-142303477
Year: 1993
DOI: 10.1016/0005-1098(93)90175-S
A tutorial introduction to the complex structured singular value (μ) is presented, with an emphasis on the mathematical aspects of μ. The μ-based methods discussed here have been useful for analysing the performance and robustness properties of linear feedback systems. Several tests for robust stability and performance with computable bounds for transfer functions and their state space realizations are compared, and a simple synthesis problem is studied. Uncertain systems are represented using Linear Fractional Transformations (LFTs) which naturally unify the frequency-domain and state space methods.https://resolver.caltech.edu/CaltechAUTHORS:20170408-142303477Computational Complexity of μ Calculation
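The best-known computable bound in this tutorial's setting is the scaled singular value: for a structure of scalar complex blocks, μ(M) ≤ inf over positive diagonal D of the largest singular value of D M D^{-1}. A crude numerical sketch using a general-purpose optimizer (not the LMI machinery of the literature; the structure of n independent 1x1 complex blocks is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def mu_upper_bound(M):
    """Upper bound on complex structured mu for a structure of n
    independent 1x1 complex blocks: minimize the spectral norm of
    D M D^{-1} over positive diagonal scalings D, parametrized by
    log-diagonal entries so positivity is automatic."""
    n = M.shape[0]
    def cost(logd):
        d = np.exp(logd)
        return np.linalg.norm((d[:, None] * M) / d[None, :], 2)  # sigma_max(D M D^-1)
    res = minimize(cost, np.zeros(n), method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 5000})
    return res.fun
```

For a diagonal M the scalings commute with M, so the bound equals the largest diagonal magnitude, which is also μ exactly in that case.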
https://resolver.caltech.edu/CaltechCDSTR:1993.005
Year: 1993
The structured singular value μ measures the robustness of uncertain systems. Numerous researchers over the last decade have worked on developing efficient methods for computing μ. This paper considers the complexity of calculating μ with general mixed real/complex uncertainty in the framework of combinatorial complexity theory. In particular, it is proved that the μ recognition problem with either pure real or mixed real/complex uncertainty is NP-hard. This strongly suggests that it is futile to pursue exact methods for calculating μ of general systems with pure real or mixed uncertainty for other than small problems.https://resolver.caltech.edu/CaltechCDSTR:1993.005Computational complexity of μ calculation
https://resolver.caltech.edu/CaltechAUTHORS:20190320-132001216
Year: 1993
DOI: 10.23919/ACC.1993.4793162
The structured singular value μ measures the robustness of uncertain systems. Numerous researchers over the last decade have worked on developing efficient methods for computing μ. This paper considers the complexity of calculating μ with general mixed real/complex uncertainty in the framework of combinatorial complexity theory. In particular, it is proved that the μ recognition problem with either pure real or mixed real/complex uncertainty is NP-hard. This strongly suggests that it is futile to pursue exact methods for calculating μ of general systems with pure real or mixed uncertainty for other than small problems.https://resolver.caltech.edu/CaltechAUTHORS:20190320-132001216Uncertain Behavior
https://resolver.caltech.edu/CaltechCDSTR:1993.018
Year: 1993
The invited session on behaviors, modeling, and robust control reflects an emerging view that these apparently disjoint subjects have important connections that can have a major impact on both theoretical and practical aspects of control. The behavioral setting provides a convenient framework for connecting modeling and robust control, and is being extended to make deeper contacts with more mainstream control.https://resolver.caltech.edu/CaltechCDSTR:1993.018Model reduction of behavioural systems
https://resolver.caltech.edu/CaltechAUTHORS:20190320-142555896
Year: 1993
DOI: 10.1109/CDC.1993.325889
We consider model reduction of uncertain behavioural models. Machinery for gap-metric model reduction and multidimensional model reduction using linear matrix inequalities is extended to these behavioural models. The goal is a systematic method for reducing the complexity of uncertain components in hierarchically developed models which approximates the behavior of the full-order system. This paper focuses on component model reduction that preserves stability under interconnection.https://resolver.caltech.edu/CaltechAUTHORS:20190320-142555896H∞ control of nonlinear systems via output feedback: a class of controllers
https://resolver.caltech.edu/CaltechAUTHORS:20190319-084913906
Year: 1993
DOI: 10.1109/CDC.1993.325170
The standard state space solutions to the H∞ control problem for linear time invariant systems are generalized to nonlinear time-invariant systems. A class of nonlinear H∞ controllers are parametrized as nonlinear fractional transformations on contractive, stable free nonlinear parameters. As in the linear case, the H∞ control problem is solved by its reduction to state feedback and output injection problems, together with a separation argument. Sufficient conditions for the H∞ control problem to be solved are also derived with this machinery. Solvability of the nonlinear H∞ control problem requires positive definite solutions to two parallel decoupled Hamilton-Jacobi inequalities, and these two solutions must satisfy an additional coupling condition. An illustrative example, which deals with a passive plant, is given.https://resolver.caltech.edu/CaltechAUTHORS:20190319-084913906From data to control
https://resolver.caltech.edu/CaltechAUTHORS:20201023-102058277
Year: 1994
DOI: 10.1007/bfb0036249
The basic control problem for a given process can be stated as follows: Given some prior information about the process and a set of finite data, design a feedback controller that meets given performance specifications. Traditionally, this problem has been tackled by the introduction of an intermediate step, namely finding a model which describes the process in some precise sense, and then designing a robust controller using the model as the nominal plant.https://resolver.caltech.edu/CaltechAUTHORS:20201023-102058277An example of active circulation control of the unsteady separated flow past a semi-infinite plate
https://resolver.caltech.edu/CaltechAUTHORS:CORjfm94
Year: 1994
DOI: 10.1017/S0022112094003460
Active circulation control of the two-dimensional unsteady separated flow past a semi-infinite plate with transverse motion is considered. The rolling-up of the separated shear layer is modelled by a point vortex whose time-dependent circulation is predicted by an unsteady Kutta condition. A suitable vortex shedding mechanism is introduced. A control strategy able to maintain constant circulation when a vortex is present is derived. An exact solution for the nonlinear controller is then obtained. Dynamical systems analysis is used to explore the performance of the controlled system. The control strategy is applied to a class of flows and the results are discussed. A procedure to determine the position and the circulation of the vortex, knowing the velocity signature on the plate, is derived. Finally, a physical explanation of the control mechanism is presented.https://resolver.caltech.edu/CaltechAUTHORS:CORjfm94Computational complexity of μ calculation
https://resolver.caltech.edu/CaltechAUTHORS:BRAieeetac94
Year: 1994
DOI: 10.1109/9.284879
The structured singular value μ measures the robustness of uncertain systems. Numerous researchers over the last decade have worked on developing efficient methods for computing μ. This paper considers the complexity of calculating μ with general mixed real/complex uncertainty in the framework of combinatorial complexity theory. In particular, it is proved that the μ recognition problem with either pure real or mixed real/complex uncertainty is NP-hard. This strongly suggests that it is futile to pursue exact methods for calculating μ of general systems with pure real or mixed uncertainty for other than small problems.https://resolver.caltech.edu/CaltechAUTHORS:BRAieeetac94H∞ control of nonlinear systems: a convex characterization
https://resolver.caltech.edu/CaltechAUTHORS:20190319-113943191
Year: 1994
DOI: 10.1109/ACC.1994.752446
The so-called nonlinear H∞-control problem in state space is considered with an emphasis on developing machinery with promising computational properties. Both state feedback and output feedback H∞-control problems for a class of nonlinear systems are characterized in terms of continuous positive definite solutions of algebraic nonlinear matrix inequalities (NLMIs) which are convex feasibility problems.https://resolver.caltech.edu/CaltechAUTHORS:20190319-113943191Linear matrix inequalities in analysis with multipliers
https://resolver.caltech.edu/CaltechAUTHORS:20190318-152115268
Year: 1994
DOI: 10.1109/ACC.1994.752254
We show that a number of standard robustness tests can be reinterpreted as special cases of the application of the passivity theorem with the appropriate choice of multipliers. We show how these tests can be performed using convex optimization over linear matrix inequalities.https://resolver.caltech.edu/CaltechAUTHORS:20190318-152115268Behavioral approach to robustness analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190319-080011582
Year: 1994
DOI: 10.1109/ACC.1994.735075
This paper introduces a general and powerful framework for modeling and analysis of uncertain systems. One immediate concrete result of this work is a practical method for computing robust performance in the presence of norm-bounded perturbations and both norm-bounded and white-noise disturbances.https://resolver.caltech.edu/CaltechAUTHORS:20190319-080011582Mixed H_2 and H∞ performance objectives. II. Optimal control
https://resolver.caltech.edu/CaltechAUTHORS:20190315-110821041
Year: 1994
DOI: 10.1109/9.310031
This paper considers the analysis and synthesis of control systems subject to two types of disturbance signals: white signals and signals with bounded power. The resulting control problem involves minimizing a mixed H_2 and H∞ norm of the system. It is shown that the controller shares a separation property similar to those of pure H_2 or H∞ controllers. Necessary conditions and sufficient conditions are obtained for the existence of a solution to the mixed problem. Explicit state-space formulas are also given for the optimal controller.https://resolver.caltech.edu/CaltechAUTHORS:20190315-110821041Mixed H_2 and H∞ performance objectives. I. Robust performance analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190318-130328590
Year: 1994
DOI: 10.1109/9.310030
This paper introduces an induced-norm formulation of a mixed H_2 and H∞ performance criterion. It is shown that different mixed H_2 and H∞ norms arise from different assumptions on the input signals. While most mixed norms can be expressed explicitly using either transfer functions or state-space realizations of the system, there are cases where the explicit formulas are very hard to obtain. In the later cases, examples are given to show the intrinsic nature and difficulty of the problem. Mixed norm robust performance analysis under structured uncertainty is also considered in the paper.https://resolver.caltech.edu/CaltechAUTHORS:20190318-130328590Model Reduction of Multi-Dimensional and Uncertain Systems
https://resolver.caltech.edu/CaltechCDSTR:1994.017
Year: 1994
We present model reduction methods with guaranteed error bounds for systems represented by a Linear Fractional Transformation (LFT) on a repeated scalar uncertainty structure. These reduction methods can be interpreted either as doing state order reduction for multi-dimensional systems, or as uncertainty simplification in the case of uncertain systems, and are based on finding solutions to a pair of Linear Matrix Inequalities (LMIs). A related necessary and sufficient condition for the exact reducibility of stable uncertain systems is also presented.https://resolver.caltech.edu/CaltechCDSTR:1994.017Analysis of implicitly defined systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-103730568
Year: 1994
DOI: 10.1109/CDC.1994.411726
An alternative paradigm is considered for robustness analysis, where systems are described in implicit form. The central role in this formulation is played by equations rather than input-output maps. The framework for robust stability analysis is appropriately extended, and a necessary and sufficient condition is proved for the case of arbitrary structured norm bounded perturbations. Finally, the constant matrix version of this framework is considered, leading to an extension of the structured singular value µ; the corresponding upper bound theory is developed fully.https://resolver.caltech.edu/CaltechAUTHORS:20190318-103730568Robustness analysis and synthesis for uncertain nonlinear systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-135325265
Year: 1994
DOI: 10.1109/CDC.1994.410855
The stability and performance robustness analysis for a class of uncertain nonlinear systems with bounded structured uncertainties are characterized in terms of various types of nonlinear matrix inequalities (NLMIs). As in the linear case, scalings or multipliers are used to find Lyapunov functions that give sufficient conditions, and the resulting NLMIs yield a convex feasibility problem. For these problems, robustness analysis is essentially no harder than stability analysis of the system with no uncertainty. Sufficient conditions for the solvability of related robust synthesis problems are developed in terms of NLMIs as well.https://resolver.caltech.edu/CaltechAUTHORS:20190318-135325265Robustness and performance trade-offs in control design for flexible structures
https://resolver.caltech.edu/CaltechAUTHORS:BALieeetcs94
Year: 1994
DOI: 10.1109/87.338656
Linear control design models for flexible structures are only an approximation to the "real" structural system. There are always modeling errors or uncertainty present. Descriptions of these uncertainties determine the trade-off between achievable performance and robustness of the control design. In this paper it is shown that a controller synthesized for a plant model which is not described accurately by the nominal and uncertainty models may be unstable or exhibit poor performance when implemented on the actual system. In contrast, accurate structured uncertainty descriptions lead to controllers which achieve high performance when implemented on the experimental facility. It is also shown that similar performance, theoretically and experimentally, is obtained for a surprisingly wide range of uncertainty levels in the design model. This suggests that while it is important to have reasonable structured uncertainty models, it may not always be necessary to pin down precise levels (i.e., weights) of uncertainty. Experimental results are presented which substantiate these conclusions.https://resolver.caltech.edu/CaltechAUTHORS:BALieeetcs94Unifying robustness analysis and system ID
https://resolver.caltech.edu/CaltechAUTHORS:20190319-105001140
Year: 1994
DOI: 10.1109/CDC.1994.411725
A unified systems analysis framework is presented, which includes conventional robustness analysis, model validation, and system identification as special cases and thus shows them to be instances of the same fundamental problem. A concrete version of this framework is developed for the linear case, based on a generalized structured singular value. This unification forms the basis for the use of common computational tools and a more natural interplay between modeling, identification, and robustness analysis.https://resolver.caltech.edu/CaltechAUTHORS:20190319-105001140Finite time horizon robust performance analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190315-155217994
Year: 1994
DOI: 10.1109/CDC.1994.411312
Robust performance problems for linear time varying systems considered over a finite horizon, are reduced to the computation of the structured singular value of a finite matrix. Connections are established between the time domain and frequency domain tests.https://resolver.caltech.edu/CaltechAUTHORS:20190315-155217994ℋ∞ control of nonlinear systems via output feedback: controller parameterization
https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac94
Year: 1994
DOI: 10.1109/9.362834
The standard state space solutions to the ℋ∞ control problem for linear time invariant systems are generalized to nonlinear time-invariant systems. A class of local nonlinear (output feedback) ℋ∞ controllers are parameterized as nonlinear fractional transformations on contractive, stable nonlinear parameters. As in the linear case, the ℋ∞ control problem is solved by its reduction to state feedback and output estimation problems, together with a separation argument. Sufficient conditions for ℋ∞-control problem to be locally solved are also derived with this machinery.https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac94Analysis of Implicit Uncertain Systems. Part I: Theoretical Framework
https://resolver.caltech.edu/CaltechCDSTR:1994.018-1
Year: 1994
This paper introduces a general and powerful framework for the analysis of uncertain systems, encompassing linear fractional transformations, the behavioral approach for system theory and the integral quadratic constraint formulation. In this approach, a system is defined by implicit equations, and the central analysis question is to test for solutions of these equations. In Part I, the general properties of this formulation are developed, and computable necessary and sufficient conditions are derived for a robust performance problem posed in this framework.https://resolver.caltech.edu/CaltechCDSTR:1994.018-1Numerically Efficient Robustness Analysis of Trajectory Tracking for Nonlinear Systems
https://resolver.caltech.edu/CaltechCDSTR:1995.CIT-CDS-95-032
Year: 1995
A numerical algorithm for computing necessary conditions for performance specifications is developed for nonlinear uncertain systems following a prescribed trajectory. This algorithm provides a computationally efficient means of evaluating the performance of a nonlinear system in the presence of noise, real parametric uncertainty, and unmodeled dynamics. The algorithm is similar in nature and behavior to the power algorithm for the µ lower bound, and does not rely on a descent method. The algorithm is applied to two flight control examples.https://resolver.caltech.edu/CaltechCDSTR:1995.CIT-CDS-95-032Application of multivariable feedback methods to intravenous anesthetic pharmacodynamics
https://resolver.caltech.edu/CaltechAUTHORS:20190318-102253057
Year: 1995
DOI: 10.1109/ACC.1995.529765
Continuous infusions of intravenous anesthetics are becoming increasingly popular during surgical procedures, largely because relatively precise, consistent control of anesthetic depth is possible over intravenous injection techniques. In this paper we investigate the main issues involved with the development of automatic intravenous anesthesia delivery systems in the context of robust multivariable control. We present a pharmacodynamic model that may be suitable for closed-loop control, and discuss clinical data collected from human subjects during actual surgical conditions with the anesthetic propofol.https://resolver.caltech.edu/CaltechAUTHORS:20190318-102253057An efficient algorithm for performance analysis of nonlinear control systems
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359300
Year: 1995
DOI: 10.1109/acc.1995.532342
A numerical algorithm for computing necessary conditions for performance specifications is developed for nonlinear uncertain systems. The algorithm is similar in nature and behavior to the power algorithm for the µ lower bound, and does not rely on a descent method. The algorithm is applied to a practical example.https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359300On design methods for sampled-data systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-132618553
Year: 1995
DOI: 10.1109/ACC.1995.531236
In this paper we compare, via example, the standard approaches to sampled-data design with recently developed direct design methods for these hybrid systems. Simple intuitive examples are used to show that traditional design heuristics provide no performance guarantees whatsoever. Even when the sampling rate is a design parameter that can be chosen as fast as desired, using design heuristics can lead to either severe performance degradation or extreme over-design. These effects are apparently well-known to practitioners, but may not be widely appreciated by the control community at large. The paper contains no new theoretical results and is intended to be of a tutorial nature.https://resolver.caltech.edu/CaltechAUTHORS:20190318-132618553Realizations of uncertain systems and formal power series
https://resolver.caltech.edu/CaltechAUTHORS:20190315-102303166
Year: 1995
DOI: 10.1109/ACC.1995.520997
Rational functions of several noncommuting indeterminates arise naturally in robust control when studying systems with structured uncertainty. Linear fractional transformations (LFTs) provide a convenient way of obtaining realizations of such systems and a complete realization theory of LFTs is emerging. This paper establishes connections between a minimal LFT realization and minimal realizations of a formal power series, which have been studied extensively in a variety of disciplines. The result is a fairly complete generalization of standard minimal realization theory for linear systems to the formal power series and LFT setting.https://resolver.caltech.edu/CaltechAUTHORS:20190315-102303166H∞ control of nonlinear systems: a convex characterization
https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac95
Year: 1995
DOI: 10.1109/9.412643
The nonlinear H∞-control problem is considered with an emphasis on developing machinery with promising computational properties. The solutions to H∞-control problems for a class of nonlinear systems are characterized in terms of nonlinear matrix inequalities which result in convex problems. The computational implications for the characterization are discussed.https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac95Control problems and the polynomial time hierarchy
https://resolver.caltech.edu/CaltechAUTHORS:20190319-100707731
Year: 1995
DOI: 10.1109/CDC.1995.478587
This paper classifies control problems by exhibiting their alternating quantifier structure. This classification allows the authors to relate these control problems to the computational complexity classes of the polynomial time hierarchy. A specific synthesis problem for uncertain systems is shown to be hard in the class Π^p_2.https://resolver.caltech.edu/CaltechAUTHORS:20190319-100707731Stabilization of uncertain linear systems: an LFT approach
https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac96
Year: 1996
DOI: 10.1109/9.481607
This paper develops machinery for control of uncertain linear systems described in terms of linear fractional transformations (LFTs) on transform variables and uncertainty blocks with primary focus on stabilization and controller parameterization. This machinery directly generalizes familiar state-space techniques. The notion of Q-stability is defined as a natural type of robust stability, and output feedback stabilizability is characterized in terms of Q-stabilizability and Q-detectability which in turn are related to full information and full control problems. Computation is in terms of convex linear matrix inequalities (LMIs), the controllers have a separation structure, and the parameterization of all stabilizing controllers is characterized as an LFT on a stable, free parameter.https://resolver.caltech.edu/CaltechAUTHORS:LUWieeetac96Properties of the mixed μ problem and its bounds
https://resolver.caltech.edu/CaltechAUTHORS:YOUieeetac96
Year: 1996
DOI: 10.1109/9.481624
Upper and lower bounds for the mixed μ problem have recently been developed, and here we examine the relationship of these bounds to each other and to μ. A number of interesting properties are developed and the implications of these properties for the robustness analysis of linear systems and the development of practical computation schemes are discussed. In particular we find that current techniques can only guarantee easy computation for large problems when μ equals its upper bound, and computational complexity results prohibit this possibility for general problems. In this context we present some special cases where computation is easy and make some direct comparisons between mixed μ and "Kharitonov-type" analysis methods.https://resolver.caltech.edu/CaltechAUTHORS:YOUieeetac96Robust Nonlinear Control Theory with Applications to Aerospace Vehicles
https://resolver.caltech.edu/CaltechAUTHORS:20200310-145803556
Year: 1996
DOI: 10.1016/s1474-6670(17)58916-8
This paper is a very brief outline of an invited poster session giving a first-year progress report on a research program with the above title being carried out in the Control and Dynamical Systems (CDS) department at Caltech. This 5-year grant funded by the AFOSR Partnership for Research Excellence Transition (PRET) Program has a special emphasis on transitioning new methods to industrial practice and thus involves a high level of industrial participation. The focus of our program is fundamental research in general methods of analysis and design of complex uncertain nonlinear systems, from creating new mathematical theory to working to make that theory help engineers solve a variety of real industrial problems. Caltech's Control and Dynamical Systems department was created with precisely this goal, which is shared by our industrial collaborators, led by Honeywell. Further details will be available at the poster session.https://resolver.caltech.edu/CaltechAUTHORS:20200310-145803556Model reduction of multidimensional and uncertain systems
https://resolver.caltech.edu/CaltechAUTHORS:20190315-141159756
Year: 1996
DOI: 10.1109/9.539427
Model reduction methods are presented for systems represented by a linear fractional transformation on a repeated scalar uncertainty structure. These methods involve a complete generalization of balanced realizations, balanced Gramians, and balanced truncation model reduction with guaranteed error bounds, based on solutions to a pair of linear matrix inequalities which generalize Lyapunov equations. The resulting reduction methods immediately apply to uncertainty simplification and state order reduction in the case of uncertain systems but also may be interpreted as state order reduction for multidimensional systems.https://resolver.caltech.edu/CaltechAUTHORS:20190315-141159756Robust simulation and nonlinear performance
https://resolver.caltech.edu/CaltechAUTHORS:20190318-134911414
Year: 1996
DOI: 10.1109/CDC.1996.573498
Robust simulation, defined as the simulation of sets, allows the computation of a system's global properties. By simulating entire sets, instead of individual points, performance guarantees can be made. While exact algorithms for robust simulation are computationally prohibitive, reasonable approximations which preserve performance guarantees exist. An approximate solution, which provides an upper bound on performance, is tested on a large number of systems. In general, the upper bound is close to the best lower bound computed by search. Furthermore, when the bounds differ, several techniques exist to improve the upper bound.https://resolver.caltech.edu/CaltechAUTHORS:20190318-134911414Robust and optimal control
https://resolver.caltech.edu/CaltechAUTHORS:20190319-082820924
Year: 1996
DOI: 10.1109/CDC.1996.572756
This paper will very briefly review the history of the relationship between modern optimal control and robust control. The latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former. More recently, the distinction has effectively disappeared. Once-controversial notions of robust control have become thoroughly mainstream, and optimal control methods permeate robust control theory. This has been especially true in H-infinity theory, the primary focus of this paper.https://resolver.caltech.edu/CaltechAUTHORS:20190319-082820924Nonlinear Games: examples and counterexamples
https://resolver.caltech.edu/CaltechAUTHORS:20140527-071022483
Year: 1996
DOI: 10.1109/CDC.1996.577292
Popular nonlinear control methodologies are compared using benchmark examples generated with a "converse Hamilton-Jacobi-Bellman" method (CoHJB). Starting with the cost and optimal value function V, CoHJB solves HJB PDEs "backwards" algebraically to produce nonlinear dynamics and optimal controllers and disturbances. Although useless for design, it is great for generating benchmark examples. It is easy to use, computationally tractable, and can generate essentially all possible nonlinear optimal control problems. The optimal control and disturbance are then known and can be used to study actual design methods, which must start with the cost and dynamics without knowledge of V. This paper gives a brief introduction to the CoHJB method and some of the ground rules for comparing various methods. Some very simple examples are given to illustrate the main ideas. Both Jacobian linearization and feedback linearization combined with linear optimal control are used as "strawmen" design methods.https://resolver.caltech.edu/CaltechAUTHORS:20140527-071022483Full information and full control in a behavioral context
https://resolver.caltech.edu/CaltechAUTHORS:20190319-102058453
Year: 1996
DOI: 10.1109/CDC.1996.572834
In this paper, the concepts of full information (FI) and full control (FC), which arise in standard H∞ theory, are extended to the behavioral framework. The H∞ optimal interconnection problem formulation is outlined and the solution presented. The behavioral versions of the FI and FC problems are introduced, followed by connections with the input/output versions of the FI and FC problems and the associated Riccati equations. An illustrative example is presented.https://resolver.caltech.edu/CaltechAUTHORS:20190319-102058453Soft vs. hard bounds in probabilistic robustness analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190319-103641203
Year: 1996
DOI: 10.1109/CDC.1996.573688
The relationship between soft vs. hard bounds and probabilistic vs. worst-case problem formulations for robustness analysis has been a source of some apparent confusion in the control community, and this paper will attempt to clarify some of these issues. Essentially, worst-case analysis involves computing the maximum of a function which measures performance over some set of uncertainty. Probabilistic analysis assumes some distribution on the uncertainty and computes the resulting probability measure on performance. Exact computation in each case is intractable in general, and this paper explores the use of both soft and hard bounds for computing estimates of performance, including extensive numerical experimentation. We will focus on the simplest possible problem formulations that we believe reveal the difficulties associated with more general robustness analysis.https://resolver.caltech.edu/CaltechAUTHORS:20190319-103641203Reducing uncertain systems and behaviors
https://resolver.caltech.edu/CaltechAUTHORS:20190315-144945947
Year: 1996
DOI: 10.1109/CDC.1996.574435
This paper considers the problem of reducing the dimension of a model for an uncertain system whilst bounding the resulting error. Model reduction methods with guaranteed upper error bounds have previously been established for uncertain systems described by a state-space type realization; specifically, by a linear fractional transformation (LFT) of a constant realization matrix over a structured uncertainty operator. In contrast to traditional 1-D model reduction where upper bounds on reduction are matched with comparable lower bounds, in the uncertain system problem there have previously been no lower bounds established. The computation of both upper and lower bounds is discussed in this paper, including a discussion of the use of Hankel-like matrices. These model reduction methods and error bound computations are then discussed in the context of kernel representations of behavioral uncertain systems.https://resolver.caltech.edu/CaltechAUTHORS:20190315-144945947Approximate behaviors
https://resolver.caltech.edu/CaltechAUTHORS:20190319-110229272
Year: 1996
DOI: 10.1109/CDC.1996.574430
The motivation for this paper is to contribute to a unified approach to modeling, realization, approximation and analysis for systems with a rich class of uncertainty structures. The specific focus is on what is the appropriate framework to model components with uncertainty, and what is the appropriate notion of approximation for such components. Components and systems are conceptualized in terms of their behaviors, which can be specified by parametrized equations. More questions are posed than are answered.https://resolver.caltech.edu/CaltechAUTHORS:20190319-110229272Uncertain hierarchical modeling
https://resolver.caltech.edu/CaltechAUTHORS:20190318-101348405
Year: 1996
DOI: 10.1109/CDC.1996.574346
For modeling complex systems, it is natural to reduce the system into subsystems and model each subsystem. The approach taken in this paper is that a model should be consistent with the modeling methodology. Further, it is important to explicitly represent the inaccuracies of the model as part of the model. Within this paper, uncertain hierarchical modeling is further motivated. A hierarchy, interconnection structure, and a fundamental component data type are proposed and the choices motivated. The framework is proposed with the intention of being implemented on a computer and having a family of models of different resolution representing a system.https://resolver.caltech.edu/CaltechAUTHORS:20190318-101348405A lower bound for the mixed µ problem
https://resolver.caltech.edu/CaltechAUTHORS:20190315-100211418
Year: 1997
DOI: 10.1109/9.553696
The mixed µ problem has been shown to be NP hard so that exact analysis appears intractable. Our goal then is to exploit the problem structure so as to develop a polynomial time algorithm that approximates µ and usually gives good answers. To this end it is shown that µ is equivalent to a real eigenvalue maximization problem, and a power algorithm is developed to tackle this problem. The algorithm not only provides a lower bound for µ but has the property that µ is (almost) always an equilibrium point of the algorithm.https://resolver.caltech.edu/CaltechAUTHORS:20190315-100211418Genetic algorithms and simulated annealing for robustness analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190318-095624277
Year: 1997
DOI: 10.1109/ACC.1997.609547
Genetic algorithms (GAs) and simulated annealing (SA) have been promoted as useful, general tools for nonlinear optimization. This paper explores their use in robustness analysis with real parameter variations, a known NP hard problem which would appear to be ideally suited to demonstrate the power of GAs and SA. Numerical experiments show convincingly that they turn out to be poorer than existing branch and bound (B&B) approaches. While this may appear to cast doubt on some of the hype surrounding these stochastic optimization techniques, we find that they do have attractive features, which are also demonstrated in this study. For example, both GAs and SA are almost trivial to understand and program, so they require essentially no expertise, in sharp contrast to the B&B methods. They may be suitable for problems where programming effort is much more important than running time or the quality of the answer. Robustness analysis for engineering problems is not the best candidate in this respect, but it does provide an interesting test case for the evaluation of GAs and SA. A simple hill climbing algorithm is also studied for comparison.https://resolver.caltech.edu/CaltechAUTHORS:20190318-095624277Robustness analysis and synthesis for nonlinear uncertain systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-151348243
Year: 1997
DOI: 10.1109/9.650015
A state-space characterization of robustness analysis and synthesis for nonlinear uncertain systems is proposed. The robustness of a class of nonlinear systems subject to L_2-bounded structured uncertainties is characterized in terms of a nonlinear matrix inequality (NLMI), which yields a convex feasibility problem. As in the linear case, scalings are used to find a Lyapunov or storage function that gives sufficient conditions for robust stability and performance. Sufficient conditions for the solvability of robustness synthesis problems are represented in terms of NLMIs as well. With the proposed NLMI characterizations, it is shown that the computation needed for robustness analysis and synthesis is not more difficult than that for checking Lyapunov stability; the numerical solutions for robustness problems are approximated by the use of finite element methods or finite difference schemes, and the computations are reduced to solving linear matrix inequalities. Unfortunately, while the development in this paper parallels the corresponding linear theory, the resulting computational consequences are, of course, not as favourable.https://resolver.caltech.edu/CaltechAUTHORS:20190318-151348243On receding horizon extensions and control Lyapunov functions
https://resolver.caltech.edu/CaltechAUTHORS:20190315-104922109
Year: 1998
DOI: 10.1109/ACC.1998.703180
Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control (RHC) to develop a new class of control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a control Lyapunov function based receding horizon scheme, of which a special case provides an appropriate extension of a variation on Sontag's formula. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem.https://resolver.caltech.edu/CaltechAUTHORS:20190315-104922109Two-degree-of-freedom controller design for an ill-conditioned distillation process using µ-synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190312-083605121
Year: 1999
DOI: 10.1109/87.736744
The structured singular value framework is applied to a distillation benchmark problem formulated for the 1991 IEEE Conference on Decision and Control (CDC). A two degree of freedom controller, which satisfies all control objectives of the CDC problem, is designed using µ-synthesis. The design methodology is presented and special attention is paid to the approximation of given control objectives into frequency domain weights.https://resolver.caltech.edu/CaltechAUTHORS:20190312-083605121Performance validation of the Caltech ducted-fan at a fixed operating point
https://resolver.caltech.edu/CaltechAUTHORS:20190318-100108768
Year: 1999
DOI: 10.1109/ACC.1999.783229
Using measured input and output data and a priori assumptions on a nominal model and a linear fractional transformation uncertainty structure, a family of model validating uncertainty sets are constructed for robust control analysis and design of the Caltech ducted fan. Based on an identified uncertainty set, the predicted closed loop performance for any given controller is compared to the directly measured performance. The paper reports current status of the ongoing work at Caltech.https://resolver.caltech.edu/CaltechAUTHORS:20190318-100108768Highly optimized tolerance: A mechanism for power laws in designed systems
https://resolver.caltech.edu/CaltechAUTHORS:CARpre99
Year: 1999
DOI: 10.1103/PhysRevE.60.1412
We introduce a mechanism for generating power law distributions, referred to as highly optimized tolerance (HOT), which is motivated by biological organisms and advanced engineering technologies. Our focus is on systems which are optimized, either through natural selection or engineering design, to provide robust performance despite uncertain environments. We suggest that power laws in these systems are due to tradeoffs between yield, cost of resources, and tolerance to risks. These tradeoffs lead to highly optimized designs that allow for occasional large events. We investigate the mechanism in the context of percolation and sand pile models in order to emphasize the sharp contrasts between HOT and self-organized criticality (SOC), which has been widely suggested as the origin for power laws in complex systems. Like SOC, HOT produces power laws. However, compared to SOC, HOT states exist for densities which are higher than the critical density, and the power laws are not restricted to special values of the density. The characteristic features of HOT systems include: (1) high efficiency, performance, and robustness to designed-for uncertainties; (2) hypersensitivity to design flaws and unanticipated perturbations; (3) nongeneric, specialized, structured configurations; and (4) power laws. The first three of these are in contrast to the traditional hallmarks of criticality, and are obtained by simply adding the element of design to percolation and sand pile models, which completely changes their characteristics.https://resolver.caltech.edu/CaltechAUTHORS:CARpre99A necessary and sufficient minimality condition for uncertain systems
https://resolver.caltech.edu/CaltechAUTHORS:BECieeetac99
Year: 1999
DOI: 10.1109/9.793720
A necessary and sufficient condition is given for the exact reduction of systems modeled by linear fractional transformations (LFTs) on structured operator sets. This condition is based on the existence of a rank-deficient solution to either of a pair of linear matrix inequalities which generalize Lyapunov equations; the notion of Gramians is thus also generalized to uncertain systems, as well as Kalman-like decomposition structures. A related minimality condition, the converse of the reducibility condition, may then be inferred from these results and the equivalence class of all minimal LFT realizations defined. These results comprise the first stage of a complete generalization of realization theory concepts to uncertain systems. Subsequent results, such as the definition of and rank tests on structured controllability and observability matrices are also given. The minimality results described are applicable to multidimensional system realizations as well as to uncertain systems; connections to formal power series representations also exist.https://resolver.caltech.edu/CaltechAUTHORS:BECieeetac99Heavy-Tailed Distributions, Generalized Source Coding and Optimal Web Layout Design
https://resolver.caltech.edu/CaltechCDSTR:2000.001
Year: 2000
The design of robust and reliable networks and network services has become an increasingly challenging task in today's Internet world. To achieve this goal, understanding the characteristics of Internet traffic plays a more and more critical role. Empirical studies of measured traffic traces have led to the wide recognition of self-similarity in network traffic. Moreover, a direct link has been established between the self-similar nature of measured aggregate network traffic and the underlying heavy-tailed distributions of the Web traffic at the source level.
This report provides a natural and plausible explanation for the origin of heavy tails in Web traffic by introducing a series of simplified models for optimal Web layout design with varying levels of realism and analytic tractability. The basic approach is to view the minimization of the average file download time as a generalization of standard source coding for data compression, but with the design of the Web layout rather than the codewords. The results, however, are quite different from standard source coding, as all assumptions produce power law distributions for a wide variety of user behavior models.
In addition, a simulation model of more complex Web site layouts is proposed, with more detailed hyperlinks and user behavior. The throughput of a Web site can be maximized by taking advantage of information on user access patterns and rearranging (splitting or merging) files on the Web site accordingly, with a constraint on available resources. A heuristic optimization on random graphs is formulated, with user navigation modeled as Markov Chains. Simulations on different classes of graphs as well as more realistic models with simple geometries in individual Web pages all produce power law tails in the resulting size distributions of the files transferred from the Web sites. This again verifies our conjecture that heavy-tailed distributions result naturally from the tradeoff between the design objective and limited resources, and suggests a methodology for aiding in the design of high-throughput Web sites.https://resolver.caltech.edu/CaltechCDSTR:2000.001Highly Optimized Tolerance: Robustness and Design in Complex Systems
https://resolver.caltech.edu/CaltechAUTHORS:CARprl00
Year: 2000
DOI: 10.1103/PhysRevLett.84.2529
Highly optimized tolerance (HOT) is a mechanism that relates evolving structure to power laws in interconnected systems. HOT systems arise where design and evolution create complex systems sharing common features, including (1) high efficiency, performance, and robustness to designed-for uncertainties, (2) hypersensitivity to design flaws and unanticipated perturbations, (3) nongeneric, specialized, structured configurations, and (4) power laws. We study the impact of incorporating increasing levels of design and find that even small amounts of design lead to HOT states in percolation.https://resolver.caltech.edu/CaltechAUTHORS:CARprl00Robust perfect adaptation in bacterial chemotaxis through integral feedback control
https://resolver.caltech.edu/CaltechAUTHORS:YITpnas00
Year: 2000
PMCID: PMC18287
Integral feedback control is a basic engineering strategy for ensuring that the output of a system robustly tracks its desired value independent of noise or variations in system parameters. In biological systems, it is common for the response to an extracellular stimulus to return to its prestimulus value even in the continued presence of the signal, a process termed adaptation or desensitization. Barkai, Alon, Surette, and Leibler have provided both theoretical and experimental evidence that the precision of adaptation in bacterial chemotaxis is robust to dramatic changes in the levels and kinetic rate constants of the constituent proteins in this signaling network [Alon, U., Surette, M. G., Barkai, N. & Leibler, S. (1999) Nature (London) 397, 168-171]. Here we propose that the robustness of perfect adaptation is the result of this system possessing the property of integral feedback control. Using techniques from control and dynamical systems theory, we demonstrate that integral control is structurally inherent in the Barkai-Leibler model and identify and characterize the key assumptions of the model. Most importantly, we argue that integral control in some form is necessary for a robust implementation of perfect adaptation. More generally, integral control may underlie the robustness of many homeostatic mechanisms.https://resolver.caltech.edu/CaltechAUTHORS:YITpnas00A receding horizon generalization of pointwise min-norm controllers
https://resolver.caltech.edu/CaltechAUTHORS:PRIieeetac00
Year: 2000
DOI: 10.1109/9.855550
Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control to develop a new class of receding horizon control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a CLF based receding horizon scheme, of which a special case provides an appropriate extension of Sontag's formula. The scheme is first presented as an idealized continuous-time receding horizon control law. The issue of implementation under discrete-time sampling is then discussed as a modification. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem. Finally, stronger connections to both optimal and pointwise min-norm control are proved.https://resolver.caltech.edu/CaltechAUTHORS:PRIieeetac00Power Laws, Highly Optimized Tolerance, and Generalized Source Coding
https://resolver.caltech.edu/CaltechAUTHORS:DOYprl00
Year: 2000
DOI: 10.1103/PhysRevLett.84.5656
We introduce a family of robust design problems for complex systems in uncertain environments which are based on tradeoffs between resource allocations and losses. Optimized solutions yield the "robust, yet fragile" features of highly optimized tolerance and exhibit power law tails in the distributions of events for all but the special case of Shannon coding for data compression. In addition to data compression, we construct specific solutions for world wide web traffic and forest fires, and obtain excellent agreement with measured data.https://resolver.caltech.edu/CaltechAUTHORS:DOYprl00Robust control in the quantum domain
https://resolver.caltech.edu/CaltechAUTHORS:DOHcdc00
Year: 2000
DOI: 10.1109/CDC.2000.912895
Progress in quantum physics has made it possible to perform experiments in which individual quantum systems are monitored and manipulated in real time. The advent of such new technical capabilities provides strong motivation for the development of theoretical and experimental methodologies for quantum feedback control. The availability of such methods would enable radically new approaches to experimental physics in the quantum realm. Likewise, the investigation of quantum feedback control will introduce crucial new considerations to control theory, such as the uniquely quantum phenomena of entanglement and measurement back-action. The extension of established analysis techniques from control theory into the quantum domain may also provide new insight into the dynamics of complex quantum systems. We anticipate that the successful formulation of an input-output approach to the analysis and reduction of large quantum systems could have very general applications in nonequilibrium quantum statistical mechanics and in the nascent field of quantum information theory.https://resolver.caltech.edu/CaltechAUTHORS:DOHcdc00The ERATO Systems Biology Workbench: An Integrated Environment for Multiscale and Multitheoretic Simulations in Systems Biology
https://resolver.caltech.edu/CaltechAUTHORS:20130108-145620726
Year: 2001
Over the years, a variety of biochemical network modeling packages have been developed and used by researchers in biology. No single package currently answers all the needs of the biology community; nor is one likely to do so in the near future, because the range of tools needed is vast and new techniques are emerging too rapidly. It seems unavoidable that, for the foreseeable future, systems biology researchers are likely to continue using multiple packages to carry out their work.
In this chapter, we describe the ERATO Systems Biology Workbench (SBW) and the Systems Biology Markup Language (SBML), two related efforts directed at the problems of software package interoperability. The goal of the SBW project is to create an integrated, easy-to-use software environment that enables sharing of models and resources between simulation and analysis tools for systems biology. SBW uses a modular, plug-in architecture that permits easy introduction of new components. SBML is a proposed standard XML-based language for representing models communicated between software packages; it is used as the format of models communicated between components in SBW.https://resolver.caltech.edu/CaltechAUTHORS:20130108-145620726Heavy Tails, Generalized Coding, and Optimal Web Layout
https://resolver.caltech.edu/CaltechAUTHORS:20111122-144035028
Year: 2001
DOI: 10.1109/INFCOM.2001.916658
This paper considers Web layout design in the spirit of source coding for data compression and rate distortion theory, with the aim of minimizing the average size of files downloaded during Web browsing sessions. The novel aspect here is that the object of design is layout rather than codeword selection, and is subject to navigability constraints. This produces statistics for file transfers that are heavy tailed, completely unlike standard Shannon theory, and provides a natural and plausible explanation for the origin of observed power laws in Web traffic. We introduce a series of theoretical and simulation models for optimal Web layout design with varying levels of analytic tractability and realism with respect to modeling of structure, hyperlinks, and user behavior. All models produce power laws which are striking both for their consistency with each other and with observed data, and their robustness to modeling assumptions. These results suggest that heavy tails are a permanent and ubiquitous feature of Internet traffic, and not an artifice of current applications or user behavior. They also suggest new ways of thinking about protocol design that combine insights from information and control theory with traditional networking.
Highly optimized tolerance in epidemic models incorporating local optimization and regrowth
https://resolver.caltech.edu/CaltechAUTHORS:ROBpre01
Year: 2001
DOI: 10.1103/PhysRevE.63.056122
In the context of a coupled map model of population dynamics, which includes the rapid spread of fatal epidemics, we investigate the consequences of two new features in highly optimized tolerance (HOT), a mechanism which describes how complexity arises in systems which are optimized for robust performance in the presence of a harsh external environment. Specifically, we (1) contrast global and local optimization criteria and (2) investigate the effects of time dependent regrowth. We find that both local and global optimization lead to HOT states, which may differ in their specific layouts, but share many qualitative features. Time dependent regrowth leads to HOT states which deviate from the optimal configurations in the corresponding static models in order to protect the system from slow (or impossible) regrowth which follows the largest losses and extinctions. While the associated map can exhibit complex, chaotic solutions, HOT states are confined to relatively simple dynamical regimes.
Beyond the spherical cow
https://resolver.caltech.edu/CaltechAUTHORS:20150331-080649691
Year: 2001
DOI: 10.1038/35075703
Computational and mathematical models are helping biologists to understand the beating of a heart, the molecular dances underlying the cell-division cycle and cell movement, and much more.
An Overview of the ERATO Systems Biology Workbench Project
https://resolver.caltech.edu/CaltechAUTHORS:20130107-141403406
Year: 2001
The goal of the ERATO Systems Biology Workbench (SBW) project is to create an integrated, easy-to-use software environment that enables sharing of resources for systems biology. Our initial focus is on achieving interoperability between seven existing simulators. Our long-term goal is to develop a flexible and adaptable environment that provides the ability to interact with a wide variety of software tools applicable to the systems biology field, including databases and experimental devices. We place high value on ease of use and ease of extensibility as important qualities of software for use in biological investigations. The software products of this project will be open source, and portable to Windows and Linux. The Systems Biology Workbench is a vehicle for collaboration between developers of bioinformatics technology. We are actively seeking other collaborators to extend the workbench. The motivation is to reduce the time developers spend creating software infrastructure and re-creating tools that already exist in similar form in other packages, allowing developers to concentrate on new algorithm and model development.
Does AS size determine degree in AS topology?
https://resolver.caltech.edu/CaltechAUTHORS:20161220-170012134
Year: 2001
DOI: 10.1145/1037107.1037108
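The abstract below describes the two B-A growth rules, incremental growth and preferential connectivity. As a rough illustrative sketch (this toy simulation and its parameters are mine, not the paper's), the rules can be implemented in a few lines, and the resulting degree distribution is indeed heavy tailed:

```python
import random

def grow_ba_graph(n_nodes, m=2, seed=0):
    """Toy Barabasi-Albert growth: each new node attaches to m existing
    nodes chosen with probability proportional to their current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}     # seed graph: one edge between nodes 0 and 1
    # 'stubs' holds one entry per edge endpoint, so uniform sampling from
    # it is equivalent to degree-proportional (preferential) sampling.
    stubs = [0, 1]
    for new in range(2, n_nodes):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(stubs))
        degree[new] = 0
        for t in targets:
            degree[new] += 1
            degree[t] += 1
            stubs.extend([new, t])
    return degree

degree = grow_ba_graph(5000)
dmax = max(degree.values())
dmean = sum(degree.values()) / len(degree)
# Heavy tail: a few hubs accumulate degrees far above the mean.
print(dmax, dmean)
```

With n_nodes = 5000 and m = 2 the mean degree settles near 2m = 4, while the largest hub grows on the order of m·sqrt(n), orders of magnitude above the mean.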
In a recent and much celebrated paper, Faloutsos et al. [6] found that the inter-Autonomous System (AS) topology exhibits a power-law degree distribution. This result was quite unexpected in the networking community, and stirred significant interest in exploring the possible causes of this phenomenon. The work of Barabasi et al. [2], and its application to network topology generation in the work of Medina et al. [9], have explored a promising class of models that yield strict power-law degree distributions. These models, which we will refer to collectively as the B-A model, describe the detailed dynamics of the network growth process, modeling the way in which connections are made between ASs. There are two simple connectivity rules that define the evolution of AS connectivity over time: incremental growth, where a new AS connects to existing ASs, and preferential connectivity, where the likelihood of connecting to an AS is proportional to the vertex outdegree of the target AS. These simple rules, which are similar to the classical "rich get richer" model originally proposed by Simon [12], lead to power-law degree distributions.
The ERATO Systems Biology Workbench: Architectural Evolution
https://resolver.caltech.edu/CaltechAUTHORS:20130104-163929932
Year: 2001
Systems biology researchers make use of a large number of different software packages for computational modeling and analysis as well as data manipulation and visualization. To help developers easily provide the ability for their applications to communicate with other tools, we have developed a simple, open-source, application integration framework, the ERATO Systems Biology Workbench (SBW). In this paper, we discuss the architecture of SBW, focusing on our motivations for various design decisions, including the choice of the message-oriented communications infrastructure.
Feedback regulation of the heat shock response in E. coli
https://resolver.caltech.edu/CaltechAUTHORS:20190304-154305641
Year: 2001
DOI: 10.1109/cdc.2001.980210
Survival of organisms in extreme conditions has necessitated the evolution of stress response networks that detect and respond to environmental changes. Among the extreme conditions that cells must face is the exposure to higher than normal temperatures. In this paper, we propose a detailed biochemical model that captures the dynamical nature of the heat-shock response in Escherichia coli. Using this model, we show that both feedback and feedforward control are utilized to achieve robustness, performance, and efficiency of the response to the heat stress. We discuss the evolutionary advantages that feedback confers to the system, as compared to other strategies that could have been implemented to get the same performance.
Scalable laws for stable network congestion control
https://resolver.caltech.edu/CaltechAUTHORS:PAGdcc01
Year: 2001
This paper discusses flow control in networks, in which sources control their rates based on feedback signals received from the network links, a feature present in current TCP protocols. We develop a congestion control system which is arbitrarily scalable, in the sense that its stability is maintained for arbitrary network topologies and arbitrary amounts of delay. Such a system can be implemented in a decentralized way with information currently available in networks plus a small amount of additional signaling.
The ERATO Systems Biology Workbench: Enabling Interaction and Exchange Between Software Tools for Computational Biology
https://resolver.caltech.edu/CaltechAUTHORS:20130108-142104885
Year: 2002
Researchers in computational biology today make use of a large number of different software packages for modeling, analysis, and data manipulation and visualization.
In this paper, we describe the ERATO Systems Biology Workbench (SBW), a software framework that allows these heterogeneous application components--written in diverse programming languages and running on different platforms--to communicate and use each other's data and algorithmic capabilities. Our goal is to create a simple, open-source software infrastructure which is effective, easy to implement and easy to understand. SBW uses a broker-based architecture and enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe the SBW architecture and the current set of modules, as well as alternative implementation technologies.
Internet congestion control
https://resolver.caltech.edu/CaltechAUTHORS:LOWieeecsm02
Year: 2002
DOI: 10.1109/37.980245
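The optimization-based framework mentioned in the abstract below interprets congestion control as sources and links jointly solving a utility-maximization problem via a price (dual-variable) feedback loop. A minimal sketch, with step sizes, weights, and the two-source single-link setup chosen by me for illustration:

```python
# Two sources share one link of capacity c. Source i picks rate x_i to
# maximize sum_i w_i*log(x_i) subject to x_1 + x_2 <= c. The link adjusts a
# congestion price p (the dual variable); sources react greedily to p.
c = 1.0
w = [1.0, 2.0]      # source utility weights
p = 0.0             # link congestion price
x = [0.1, 0.1]      # source rates
gamma = 0.01        # price update step size
for _ in range(20000):
    # Source update: the maximizer of w_i*log(x_i) - p*x_i is x_i = w_i/p.
    x = [min(wi / max(p, 1e-6), c) for wi in w]
    # Link update: raise the price if demand exceeds capacity, else lower it.
    p = max(0.0, p + gamma * (sum(x) - c))
# Equilibrium splits capacity in proportion to the weights
# (weighted proportional fairness): x -> [1/3, 2/3], p -> 3.
print(x, p)
```

At the fixed point the aggregate demand 3/p equals the capacity, so the price converges to p = 3 and the rates to the weighted proportionally fair allocation.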
This article reviews the current transmission control protocol (TCP) congestion control protocols and surveys recent advances that have brought analytical tools to this problem. We describe an optimization-based framework that provides an interpretation of various flow control mechanisms, in particular, the utility being optimized by the protocol's equilibrium structure. We also look at the dynamics of TCP and employ linear models to exhibit stability limitations in the predominant TCP versions, despite certain built-in compensations for delay. Finally, we present a new protocol that overcomes these limitations and provides stability in a way that is scalable to arbitrary networks, link capacities, and delays.
Mutation, specialization, and hypersensitivity in highly optimized tolerance
https://resolver.caltech.edu/CaltechAUTHORS:ZHOpnas02b
Year: 2002
DOI: 10.1073/pnas.261714399
PMCID: PMC122317
We introduce a model of evolution in which competing organisms are represented by percolation lattice models. Fitness is based on the number of occupied sites remaining after removing a cluster connected to a randomly selected site. High-fitness individuals arising through mutation and selection must trade off density versus robustness to loss, and are characterized by cellular barrier patterns that prevent large cascading losses to common disturbances. This model shows that Highly Optimized Tolerance (HOT), which links complexity to robustness in designed systems, arises naturally through Darwinian mechanisms. Although the model is a severe abstraction of biology, it produces a surprisingly wide variety of micro- and macroevolutionary features strikingly similar to real biological evolution.
Complexity and robustness
https://resolver.caltech.edu/CaltechAUTHORS:CARpnas02
Year: 2002
DOI: 10.1073/pnas.012582499
PMCID: PMC128573
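A toy calculation often used to convey the HOT idea discussed in the abstract below is the probability-loss-resource tradeoff: allocating a scarce resource against events with skewed probabilities yields heavy-tailed losses. The specific cost model here (loss 1/r_i, Zipf event probabilities, budget R) is my illustrative choice, not taken from the paper:

```python
import math, random

# Events with Zipf-like probabilities p_i ~ 1/i; total resource budget R.
# Loss if event i occurs is l_i = 1/r_i. Minimizing the expected loss
# sum_i p_i/r_i subject to sum_i r_i = R gives (by Lagrange multipliers)
# r_i proportional to sqrt(p_i), hence l_i ~ p_i^(-1/2): the optimized
# design protects common events and accepts large losses on rare ones,
# producing a power-law loss tail.
n, R = 1000, 100.0
p = [1.0 / i for i in range(1, n + 1)]
Z = sum(p)
p = [pi / Z for pi in p]

s = sum(math.sqrt(pi) for pi in p)
r = [R * math.sqrt(pi) / s for pi in p]       # optimal allocation
J_opt = sum(pi / ri for pi, ri in zip(p, r))  # minimal expected loss

# Sanity check: any feasible perturbation of the allocation costs more.
rng = random.Random(1)
for _ in range(100):
    noise = [rng.uniform(0.5, 1.5) for _ in range(n)]
    tot = sum(ri * wi for ri, wi in zip(r, noise))
    r2 = [ri * wi * R / tot for ri, wi in zip(r, noise)]  # rescale to budget
    assert sum(pi / ri for pi, ri in zip(p, r2)) >= J_opt

print(J_opt)
```

The closed-form optimum is J_opt = (sum_i sqrt(p_i))^2 / R, which the allocation above attains.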
Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims that these are the most important features of complexity, and that they are not accidents of evolution or artifices of engineering design but are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality.
Reverse Engineering of Biological Complexity
https://resolver.caltech.edu/CaltechAUTHORS:20141112-141330446
Year: 2002
DOI: 10.1126/science.1069981
Advanced technologies and biology have extremely different physical implementations, but they are far more alike in systems-level organization than is widely appreciated. Convergent evolution in both domains produces modular architectures that are composed of elaborate hierarchies of protocols and layers of feedback regulation, are driven by demand for robustness to uncertain environments, and use often imprecise components. This complexity may be largely hidden in idealized laboratory settings and in normal operation, becoming conspicuous only when contributing to rare cascading failures. These puzzling and paradoxical features are neither accidental nor artificial, but derive from a deep and necessary interplay between complexity and robustness, modularity, feedback, and fragility. This review describes insights from engineering theory and practice that can shed some light on biological complexity.
Robustness analysis of the heat shock response in E. coli
https://resolver.caltech.edu/CaltechAUTHORS:20190312-075150830
Year: 2002
DOI: 10.1109/ACC.2002.1023817
The bacterial heat shock response refers to the mechanism by which bacteria react to a sudden increase in the ambient temperature of growth. The consequences of such an unmediated temperature increase at the cellular level are the unfolding, misfolding, or aggregation of cell proteins, which threatens the life of the cell. Cells respond to the heat stress by initiating the production of heat-shock proteins whose function is to refold denatured proteins into their native states. The heat shock response, through the elevated synthesis of molecular chaperones and proteases, enables the repair of protein damage and the degradation of aggregated proteins. In a previous work (Kurata et al., 2001), we devised a dynamic model for the heat shock response in E. coli. In the present paper, we provide a thorough discussion of the dynamical nature of this model. We use sensitivity analysis and simulation tools to illustrate the remarkable efficiency, robustness, and stability of the heat shock response system.
A new bound of the ℒ2[0, T]-induced norm and applications to model reduction
https://resolver.caltech.edu/CaltechAUTHORS:SZNacc02
Year: 2002
DOI: 10.1109/ACC.2002.1023179
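The bound described in the abstract below can be illustrated numerically on a scalar unstable system. The discretization, parameter values, and the particular shift are my choices for illustration, not the paper's; the inequality being checked is the standard exponential-shift bound, the finite-horizon induced norm of G(s) is at most exp(sigma*T) times the H-infinity norm of the shifted, stable G(s + sigma):

```python
import numpy as np

# Scalar unstable system  x' = a*x + u,  y = x, i.e. G(s) = 1/(s - a), a > 0.
# For any sigma > a the shifted system G(s + sigma) is stable with
# H-infinity norm 1/(sigma - a), and
#     ||G||_{L2[0,T]-induced}  <=  exp(sigma*T) / (sigma - a).
a, sigma, T, N = 1.0, 2.0, 1.0, 500
dt = T / N
t = np.arange(N) * dt
h = np.exp(a * t) * dt          # discretized impulse response (with dt weight)
# Convolution on [0, T] is a lower-triangular Toeplitz operator.
H = np.zeros((N, N))
for i in range(N):
    H[i, : i + 1] = h[i::-1]
gain = np.linalg.svd(H, compute_uv=False)[0]   # approx. induced L2[0,T] norm
bound = np.exp(sigma * T) / (sigma - a)
print(gain, bound)
assert gain <= bound
```

For a = 1, sigma = 2, T = 1 the bound evaluates to e^2, well above the discretized finite-horizon gain, even though the unshifted system has no finite H-infinity norm at all.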
We present a simple bound on the finite horizon ℒ2[0, T]-induced norm of a linear time-invariant (LTI), not necessarily stable system, which can be efficiently computed by calculating the ℋ∞ norm of a shifted version of the original operator. As an application, we show how to use this bound to perform model reduction of unstable systems over a finite horizon. The technique is illustrated with a non-trivial physical example relevant to the appearance of time-irreversible phenomena in statistical physics.
Dynamics of TCP/RED and a scalable control
https://resolver.caltech.edu/CaltechAUTHORS:20170810-135606639
Year: 2002
DOI: 10.1109/INFCOM.2002.1019265
We demonstrate that the dynamic behavior of queue and average window is determined predominantly by the stability of TCP/RED, not by AIMD probing or noise traffic. We develop a general multi-link multi-source model for TCP/RED and derive a local stability condition in the case of a single link with heterogeneous sources. We validate our model with simulations and illustrate the stability region of TCP/RED. These results suggest that TCP/RED becomes unstable when delay increases, or more strikingly, when link capacity increases. The analysis illustrates the difficulty of setting RED parameters to stabilize TCP: they can be tuned to improve stability, but only at the cost of large queues even when they are dynamically adjusted. Finally, we present a simple distributed congestion control algorithm that maintains stability for arbitrary network delay, capacity, load and topology.
Design degrees of freedom and mechanisms for complexity
https://resolver.caltech.edu/CaltechAUTHORS:REYpre02
Year: 2002
DOI: 10.1103/PhysRevE.66.016108
We develop a discrete spectrum of percolation forest fire models characterized by increasing design degrees of freedom (DDOF's). The DDOF's are tuned to optimize the yield of trees after a single spark. In the limit of a single DDOF, the model is tuned to the critical density. Additional DDOF's allow for increasingly refined spatial patterns, associated with the cellular structures seen in highly optimized tolerance (HOT). The spectrum of models provides a clear illustration of the contrast between criticality and HOT, as well as a concrete quantitative example of how a sequence of robustness tradeoffs naturally arises when increasingly complex systems are developed through additional layers of design. Such tradeoffs are familiar in engineering and biology and are a central aspect of the complex systems that can be characterized as HOT.
Highly optimized transitions to turbulence
https://resolver.caltech.edu/CaltechAUTHORS:20190306-081135649
Year: 2002
DOI: 10.1109/CDC.2002.1185094
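The large transient amplification described in the abstract below comes from non-normal but stable linear dynamics. This two-state toy model is a standard textbook caricature of streak/roll coupling, not the paper's 2D/3C equations (whose forced input-output energy amplification scales like R^3); the initial-condition growth of this caricature scales like R^2, which already shows how stability for all R can coexist with rapidly growing transients:

```python
import numpy as np

def max_energy_growth(R, n_t=400):
    """Peak of ||exp(A t)||^2 over t for the toy non-normal system
    A = [[-1/R, 0], [1, -2/R]]; both eigenvalues are stable for all R > 0."""
    best = 0.0
    for t in np.linspace(0.0, 5.0 * R, n_t):
        # Closed-form matrix exponential for this lower-triangular A:
        # the off-diagonal coupling term grows linearly with R.
        e1, e2 = np.exp(-t / R), np.exp(-2.0 * t / R)
        M = np.array([[e1, 0.0], [R * (e1 - e2), e2]])
        best = max(best, np.linalg.norm(M, 2) ** 2)
    return best

g = {R: max_energy_growth(R) for R in (10, 100, 1000)}
print(g)
```

Every trajectory decays eventually, yet the peak energy amplification grows roughly like (R/4)^2, so increasing R by 10x increases the transient by roughly 100x.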
We study the Navier-Stokes equations in three dimensional plane Couette flow geometry subject to stream-wise constant initial conditions and perturbations. The resulting two dimensional/three component (2D/3C) model has no bifurcations and is globally (non-linearly) stable for all Reynolds numbers R, yet has a total transient energy amplification that scales like R^3. These transients also have the particular dynamic flow structures known to play a central role in wall bounded shear flow transition and turbulence. This suggests a highly optimized tolerance (HOT) model of shear flow turbulence, where streamlining eliminates generic bifurcation cascade transitions that occur in bluff body flows, resulting in a flow which is stable to arbitrary changes in Reynolds number but highly fragile in amplifying arbitrarily small perturbations. This result indicates that transition and turbulence in special streamlined geometries is not a problem of linear or nonlinear instability, but rather a problem of robustness.
A new physics
https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc02
Year: 2002
DOI: 10.1109/CDC.2002.1185093
This session considers the application of mathematics from control theory to several persistent mysteries at the foundations of physics where interconnected, multiscale systems issues arise. In addition to the ubiquity of power laws in natural and man-made systems, these include a new view of turbulence in highly sheared flows that results from design for drag minimization, the origin of macroscopic dissipation and thermodynamic irreversibility in microscopically reversible dynamics, the universality of quantum gates for quantum computing, decoherence minimization in quantum systems, and entanglement witnessing. The latter are problems at the heart of several important tasks such as quantum computing, teleportation and quantum key distribution. Much of the original motivation for a new science of complexity came from the hope that methods of theoretical physics could contribute to a theory of complex engineering and biological networks and systems. This collection of work shows that apparently exactly the opposite is true. The role that robust control methods play in this research will be the central theme of this paper, around which the other issues will be woven. The aim is not to provide a control-friendly rederivation of known results in physics, but rather to illustrate through representative examples how exciting new results and important insight, as assessed by physicists themselves, can be obtained through the mathematics and methods that the control community has developed. Since this work is largely being published in the scientific literature, the controls community may be unaware of these developments.
Finite horizon model reduction and the appearance of dissipation in Hamiltonian systems
https://resolver.caltech.edu/CaltechAUTHORS:BARcdc02
Year: 2002
DOI: 10.1109/CDC.2002.1185095
An apparent paradox in classical statistical physics is the mechanism by which conservative, time-reversible microscopic dynamics can give rise to seemingly dissipative behavior. In this paper we use system theoretic tools to show that dissipation can arise as an artifact of incomplete observations over a finite horizon. In addition, this approach allows us to obtain finite-time, low-order approximations of systems of moderate size and to establish how the approach to the thermodynamic limit depends on the different physical parameters.
Overview of the Alliance for Cellular Signaling
https://resolver.caltech.edu/CaltechAUTHORS:20150401-065047086
Year: 2002
DOI: 10.1038/nature01304
The Alliance for Cellular Signaling is a large-scale collaboration designed to answer global questions about signalling networks. Pathways will be studied intensively in two cells — B lymphocytes (the cells of the immune system) and cardiac myocytes — to facilitate quantitative modelling. One goal is to catalyse complementary research in individual laboratories; to facilitate this, all alliance data are freely available for use by the entire research community.
Feedback Regulation of the Heat Shock Response in E. coli
https://resolver.caltech.edu/CaltechAUTHORS:20191009-114846374
Year: 2003
DOI: 10.1007/3-540-36589-3_9
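The stochastic treatment mentioned in the abstract below is based on Gillespie's Stochastic Simulation Algorithm. A minimal SSA for a birth-death process (the toy reaction set, rates, and function name here are illustrative, not the heat-shock model itself):

```python
import random

def ssa_birth_death(k=20.0, g=1.0, t_end=50.0, seed=0):
    """Gillespie SSA for  0 -> X  (rate k)  and  X -> 0  (rate g*X).
    Returns the copy number of X at time t_end."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    while True:
        a1, a2 = k, g * x                 # reaction propensities
        a0 = a1 + a2
        t += rng.expovariate(a0)          # exponential time to next reaction
        if t > t_end:
            return x
        if rng.random() * a0 < a1:
            x += 1                        # production event
        else:
            x -= 1                        # degradation event

samples = [ssa_birth_death(seed=s) for s in range(200)]
mean = sum(samples) / len(samples)
# The stationary distribution is Poisson(k/g), so the mean should be near 20.
print(mean)
```

Averaging many such runs recovers the deterministic steady state k/g, while the run-to-run spread is exactly the kind of fluctuation the feedback structures discussed below can attenuate.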
Systems Biology is an emerging new field defined as the study of biology as an integrated system of components that act interdependently to accomplish certain functions. This approach holds the promise of offering valuable insight into aspects of biological organization that cannot be identified through a reductionist approach concerned solely with the study of individual molecules. In this work, we illustrate this viewpoint through the example of the bacterial heat shock response. The heat shock response is an important mechanism that combats harmful effects of an unmediated increase in temperature. Such an increase in temperature causes the unfolding or aggregation of the cellular proteins, which imposes a tremendous amount of stress on the cell. The heat shock response is implemented through an elaborate system of controls whose purpose is to refold denatured proteins, therefore restoring their normal function. In this paper, we present a deterministic model for the heat shock response. We use this model to gain insight into the design and performance objectives of this response. We then provide a stochastic treatment based on the Stochastic Simulation Algorithm of Gillespie [18]. This stochastic investigation validates the use of the deterministic approach in modeling the heat shock response, and motivates the investigation of feedback structures that play a role in attenuating stochastic fluctuations.
Toward an Optimization-Driven Framework for Designing and Generating Realistic Internet Topologies
https://resolver.caltech.edu/CaltechAUTHORS:20160414-163819524
Year: 2003
DOI: 10.1145/774763.774769
We propose a novel approach to the study of Internet topology in which we use an optimization framework to model the mechanisms driving incremental growth. While previous methods of topology generation have focused on explicit replication of statistical properties, such as node hierarchies and node degree distributions, our approach addresses the economic tradeoffs, such as cost and performance, and the technical constraints faced by a single ISP in its network design. By investigating plausible objectives and constraints in the design of actual networks, observed network properties such as certain hierarchical structures and node degree distributions can be expected to be the natural by-product of an approximately optimal solution chosen by network designers and operators. In short, we advocate here essentially an approach to network topology design, modeling, and generation that is based on the concept of Highly Optimized Tolerance (HOT). In contrast with purely descriptive topology modeling, this opens up new areas of research that focus on the causal forces at work in network design and aim at identifying the economic and technical drivers responsible for the observed large-scale network behavior. As a result, the proposed approach should have significantly more predictive power than currently pursued efforts and should provide a scientific foundation for the investigation of other important problems, such as pricing, peering, or the dynamics of routing protocols.
The systems biology markup language (SBML): a medium for representation and exchange of biochemical network models
https://resolver.caltech.edu/CaltechAUTHORS:20111020-083352718
Year: 2003
DOI: 10.1093/bioinformatics/btg015
Motivation: Molecular biotechnology now makes it possible to build elaborate systems models, but the systems biology community needs information standards if models are to be shared, evaluated and developed cooperatively.
Results: We summarize the Systems Biology Markup Language (SBML) Level 1, a free, open, XML-based format for representing biochemical reaction networks. SBML is a software-independent language for describing models common to research in many areas of computational biology, including cell signaling pathways, metabolic pathways, gene regulation, and others.
A new TCP/AQM for stable operation in fast networks
https://resolver.caltech.edu/CaltechAUTHORS:20170810-131233358
Year: 2003
DOI: 10.1109/INFCOM.2003.1208662
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. In earlier work we had developed fluid-level control laws that achieve the first three objectives for arbitrary networks and delays, but were forced to constrain the resource allocation policy. In this paper we extend the theory to include dynamics at TCP sources, preserving the earlier features at fast time-scales, but permitting sources to match their steady-state preferences, provided a bound on round-trip-times is known. We develop two packet-level implementations of this protocol, using (i) ECN marking, and (ii) queueing delay, as means of communicating the congestion measure from links to sources. We discuss parameter choices and demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness. We also demonstrate the scalability of these features to increases in capacity, delay, and load, in comparison with other deployed and proposed protocols.
Can shortest-path routing and TCP maximize utility
https://resolver.caltech.edu/CaltechAUTHORS:20190306-132657017
Year: 2003
DOI: 10.1109/INFCOM.2003.1209226
TCP-AQM protocols can be interpreted as distributed primal-dual algorithms over the Internet to maximize aggregate utility. In this paper, we study whether TCP-AQM together with shortest-path routing can maximize utility with appropriate choice of link cost, on a slower timescale, over both source rates and routes. We show that this is generally impossible because the addition of route maximization makes the problem NP-hard. We exhibit an inevitable tradeoff between routing instability and utility maximization. For the special case of a ring network, we prove rigorously that shortest-path routing based purely on congestion prices is unstable. Adding a sufficiently large static component to link cost stabilizes it, but the maximum utility achievable by shortest-path routing decreases with the weight on the static component. We present simulation results to illustrate that these conclusions generalize to general network topology, and that routing instability can reduce utility to less than that achievable by the necessarily stable static routing.
A control theoretical look at internet congestion control
https://resolver.caltech.edu/CaltechAUTHORS:20170810-131419777
Year: 2003
DOI: 10.1007/3-540-36589-3_2
Congestion control mechanisms in today's Internet represent perhaps the largest artificial feedback system ever deployed, and yet one that has evolved mostly outside the scope of control theory. This can be explained by the tight constraints of decentralization and simplicity of implementation in this problem, which would appear to rule out most mathematically-based designs. Nevertheless, a recently developed framework based on fluid flow models has allowed for a belated injection of control theory into the area, with some pleasant surprises. As described in this chapter, there is enough special structure to allow us to "guess" designs with mathematically provable properties that hold in arbitrary networks, and which involve a modest complexity in implementation.
Next Generation Simulation Tools: The Systems Biology Workbench and BioSPICE Integration
https://resolver.caltech.edu/CaltechAUTHORS:20121026-183338290
Year: 2003
DOI: 10.1089/153623103322637670
Researchers in quantitative systems biology make use of a large number of different software packages for modelling, analysis, visualization, and general data manipulation. In this paper, we describe the Systems Biology Workbench (SBW), a software framework that allows heterogeneous application components—written in diverse programming languages and running on different platforms—to communicate and use each others' capabilities via a fast binary encoded-message system. Our goal was to create a simple, high performance, open-source software infrastructure which is easy to implement and understand. SBW enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe in this paper the SBW architecture, a selection of current modules, including Jarnac, JDesigner, and SBWMeta-tool, and the close integration of SBW into BioSPICE, which enables both frameworks to share tools and complement and strengthen each other's capabilities.
Linear stability of TCP/RED and a scalable control
https://resolver.caltech.edu/CaltechAUTHORS:20170810-140056148
Year: 2003
DOI: 10.1016/S1389-1286(03)00304-9
We demonstrate that the dynamic behavior of queue and average window is determined predominantly by the stability of TCP/RED, not by AIMD probing or noise traffic. We develop a general multi-link multi-source model for TCP/RED and derive a local stability condition in the case of a single link with heterogeneous sources. We validate our model with simulations and illustrate the stability region of TCP/RED. These results suggest that TCP/RED becomes unstable when delay increases, or more strikingly, when link capacity increases. The analysis illustrates the difficulty of setting RED parameters to stabilize TCP: they can be tuned to improve stability, but only at the cost of large queues even when they are dynamically adjusted. Finally, we present a simple distributed congestion control algorithm that maintains stability for arbitrary network delay, capacity, load and topology.
Model validation and robust stability analysis of the bacterial heat shock response using SOSTOOLS
https://resolver.caltech.edu/CaltechAUTHORS:20111010-100919353
Year: 2003
DOI: 10.1109/CDC.2003.1271735
The complexity inherent in gene regulatory network models, as well as their nonlinear nature, makes them difficult to analyze or validate/invalidate using conventional tools. Combining ideas from robust control theory, real algebraic geometry, optimization and semi-definite programming, SOSTOOLS provides a promising framework to answer these robustness and model validation questions algorithmically. We adopt these tools in the study of the heat shock response in bacteria. For this purpose, we use a reduced order model of the bacterial heat stress response. We study the robust stability properties of this system to parametric uncertainty, and address the model validation/invalidation problem by proving the necessity for the existence of certain feedback loops to reproduce the known time behavior of the system.https://resolver.caltech.edu/CaltechAUTHORS:20111010-100919353A First-Principles Approach to Understanding the Internet's Router-level Topology
https://resolver.caltech.edu/CaltechAUTHORS:20160715-133930786
Year: 2004
DOI: 10.1145/1015467.1015470
A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from nefarious intrusions and attacks, or for developing improved designs for resource provisioning. Previous studies of topology have focused on interpreting measurements or on phenomenological descriptions and evaluation of graph-theoretic properties of topology generators. We propose a complementary approach of combining a more subtle use of statistics and graph theory with a first-principles theory of router-level topology that reflects practical constraints and tradeoffs. While there is an inevitable tradeoff between model complexity and fidelity, a challenge is to distill from the seemingly endless list of potentially relevant technological and economic issues the features that are most essential to a solid understanding of the intrinsic fundamentals of network topology. We claim that very simple models that incorporate hard technological constraints on router and link bandwidth and connectivity, together with abstract models of user demand and network performance, can successfully address this challenge and further resolve much of the confusion and controversy that has surrounded topology generation and evaluation.https://resolver.caltech.edu/CaltechAUTHORS:20160715-133930786More "normal" than normal: scaling distributions and complex systems
https://resolver.caltech.edu/CaltechAUTHORS:WILwsc04
Year: 2004
DOI: 10.1109/WSC.2004.1371310
One feature of many naturally occurring or engineered complex systems is tremendous variability in event sizes. To account for it, the behavior of these systems is often described using power law relationships or scaling distributions, which tend to be viewed as "exotic" because of their unusual properties (e.g., infinite moments). An alternate view is based on mathematical, statistical, and data-analytic arguments and suggests that scaling distributions should be viewed as "more normal than normal". In support of this latter view that has been advocated by Mandelbrot for the last 40 years, we review in this paper some relevant results from probability theory and illustrate a powerful statistical approach for deciding whether the variability associated with observed event sizes is consistent with an underlying Gaussian-type (finite variance) or scaling-type (infinite variance) distribution. We contrast this approach with traditional model fitting techniques and discuss its implications for future modeling of complex systems.https://resolver.caltech.edu/CaltechAUTHORS:WILwsc04A Reynolds number independent model for turbulence in Couette flow
https://resolver.caltech.edu/CaltechAUTHORS:20130924-130804582
Year: 2004
In this paper we study theoretically and computationally the dynamics of linearized stream-wise constant Navier-Stokes equations, under external time-varying deterministic disturbances.https://resolver.caltech.edu/CaltechAUTHORS:20130924-130804582Evolving a lingua franca and associated software infrastructure for computational systems biology: the Systems Biology Markup Language (SBML) project
https://resolver.caltech.edu/CaltechAUTHORS:HUCieesb04
Year: 2004
DOI: 10.1049/sb:20045008
Biologists are increasingly recognising that computational modelling is crucial for making sense of the vast quantities of complex experimental data that are now being collected. The systems biology field needs agreed-upon information standards if models are to be shared, evaluated and developed cooperatively. Over the last four years, our team has been developing the Systems Biology Markup Language (SBML) in collaboration with an international community of modellers and software developers. SBML has become a de facto standard format for representing formal, quantitative and qualitative models at the level of biochemical reactions and regulatory networks. In this article, we summarise the current and upcoming versions of SBML and our efforts at developing software infrastructure for supporting and broadening its use. We also provide a brief overview of the many SBML-compatible software tools available today.https://resolver.caltech.edu/CaltechAUTHORS:HUCieesb04Methodological frameworks for large-scale network analysis and design
https://resolver.caltech.edu/CaltechAUTHORS:20161207-172521946
Year: 2004
DOI: 10.1145/1031134.1031138
This paper emphasizes the need for methodological frameworks for analysis and design of large scale networks which are independent of specific design innovations and their advocacy, with the aim of making networking a more systematic engineering discipline. Networking problems have largely confounded existing theory, and innovation based on intuition has dominated design. This paper will illustrate potential pitfalls of this practice. The general aim is to illustrate universal aspects of theoretical and methodological research that can be applied to network design and verification. The issues focused on will include the choice of models, including the relationship between flow and packet level descriptions, the need to account for uncertainty generated by modelling abstractions, and the challenges of dealing with network scale. The rigorous comparison of proposed schemes will be illustrated using various abstractions. While standard tools from robust control theory have been applied in this area, we will also illustrate how network-specific challenges can drive the development of new mathematics that expand their range of applicability, and how many enormous challenges remain.https://resolver.caltech.edu/CaltechAUTHORS:20161207-172521946Managing complexity and uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20190306-084855706
Year: 2004
Modern fields of science and engineering have evolved remarkably high degrees of specialization. The present division of intellectual labor is structured by the assumption that complex systems can be "vertically" decomposed into layers of materials and devices versus the systems they compose. A further assumption is that each layer is further "horizontally" divided into chemical, mechanical, and electrical materials/devices as well as processing, communication, computation, and control systems. A central cause of the fragmentation of complex systems into isolated subdisciplines has traditionally been the inherent intractability of problems that require integration of, say, communications, computation, and control. This has necessitated specialized and domain-specific assumptions and methods that can appear arbitrary and ad hoc to researchers in other subdomains. The power of this decomposition is that it has facilitated a massively parallel development of advanced technologies and the proliferation of sophisticated domain-specific theories, allowing each subdiscipline to function independently, with only higher-level system integrators required to be generalists. An increasingly troublesome side effect is a growing intellectual Tower of Babel, where experts within one subdiscipline rarely have meaningful contact with experts from other subdisciplines, and may even be largely unaware of their existence. For example, the term "information" is used by everyone, but often has not just different but almost opposite meanings in, say, communications, computing, or control systems, let alone between systems and devices.https://resolver.caltech.edu/CaltechAUTHORS:20190306-084855706A first-principles approach to understanding the internet's router-level topology
https://resolver.caltech.edu/CaltechAUTHORS:20170810-134129788
Year: 2004
DOI: 10.1145/1030194.1015470
A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from nefarious intrusions and attacks, or for developing improved designs for resource provisioning. Previous studies of topology have focused on interpreting measurements or on phenomenological descriptions and evaluation of graph-theoretic properties of topology generators. We propose a complementary approach of combining a more subtle use of statistics and graph theory with a first-principles theory of router-level topology that reflects practical constraints and tradeoffs. While there is an inevitable tradeoff between model complexity and fidelity, a challenge is to distill from the seemingly endless list of potentially relevant technological and economic issues the features that are most essential to a solid understanding of the intrinsic fundamentals of network topology. We claim that very simple models that incorporate hard technological constraints on router and link bandwidth and connectivity, together with abstract models of user demand and network performance, can successfully address this challenge and further resolve much of the confusion and controversy that has surrounded topology generation and evaluation.https://resolver.caltech.edu/CaltechAUTHORS:20170810-134129788Robustness of Cellular Functions
https://resolver.caltech.edu/CaltechAUTHORS:20170408-213739584
Year: 2004
DOI: 10.1016/j.cell.2004.09.008
Robustness, the ability to maintain performance in the face of perturbations and uncertainty, is a long-recognized key property of living systems. Owing to intimate links to cellular complexity, however, its molecular and cellular basis has only recently begun to be understood. Theoretical approaches to complex engineered systems can provide guidelines for investigating cellular robustness because biology and engineering employ a common set of basic mechanisms in different combinations. Robustness may be a key to understanding cellular complexity, elucidating design principles, and fostering closer interactions between experimentation and theory.https://resolver.caltech.edu/CaltechAUTHORS:20170408-213739584Imitation of life: How biology is inspiring computing [Book Review]
https://resolver.caltech.edu/CaltechAUTHORS:20150325-141647738
Year: 2004
DOI: 10.1038/431908a
Generations of engineers have recognized that, in many respects, biology does it better. Imitation of Life is a whirlwind history, richer even than its subtitle suggests, through various computational disciplines inspired by biology. This is an ambitious undertaking for such a short book but, although it ignores some important unifying principles, its brevity is also a virtue. The inspirations from biology are scattered throughout the book, and their collective impact is felt best when the book is digested whole, at one sitting. The early chapters on biology as a metaphor are the least satisfying, and any reader who stops there may never return for the genuine delights that follow.https://resolver.caltech.edu/CaltechAUTHORS:20150325-141647738MathSBML: a package for manipulating SBML-based biological models
https://resolver.caltech.edu/CaltechAUTHORS:20111004-150808819
Year: 2004
DOI: 10.1093/bioinformatics/bth271
PMCID: PMC1409765
Summary: MathSBML is a Mathematica package designed for manipulating Systems Biology Markup Language (SBML) models. It converts SBML models into Mathematica data structures and provides a platform for manipulating and evaluating these models. Once a model is read by MathSBML, it is fully compatible with standard Mathematica functions such as NDSolve (a differential-algebraic equations solver). MathSBML also provides an application programming interface for viewing, manipulating, and running numerical simulations; exporting SBML models; and converting SBML models into other formats, such as XPP, HTML and FORTRAN. By accessing the full breadth of Mathematica functionality, MathSBML is fully extensible to SBML models of any size or complexity.
Availability: Open Source (LGPL) at http://www.sbml.org and http://www.sf.net/projects/sbml.
Supplementary information: Extensive online documentation is available at http://www.sbml.org/mathsbml.html. Additional examples are provided at http://www.sbml.org/software/mathsbml/bioinformatics-application-notehttps://resolver.caltech.edu/CaltechAUTHORS:20111004-150808819FAST TCP: From Theory to Experiments
https://resolver.caltech.edu/CaltechCACR:2004.207
Year: 2004
We describe a variant of TCP, called FAST, that can sustain high throughput and utilization at multi-Gbps over large distance. We present the motivation, review the background theory, summarize key features of FAST TCP, and report our first experimental results.https://resolver.caltech.edu/CaltechCACR:2004.207Analysis of nonlinear delay differential equation models of TCP/AQM protocols using sums of squares
https://resolver.caltech.edu/CaltechAUTHORS:20110831-081433118
Year: 2004
DOI: 10.1109/CDC.2004.1429529
The simplest adequate models for congestion control for the Internet are in the form of deterministic nonlinear delay differential equations. However, the absence of efficient, algorithmic methodologies to analyze them at this modelling level usually results in the investigation of their linearizations including delays, or in the analysis of nonlinear yet undelayed models. In this paper we present an algorithmic methodology for efficient stability analysis of network congestion control schemes at the nonlinear delay-differential equation model level, using the Sum of Squares decomposition and SOSTOOLS.https://resolver.caltech.edu/CaltechAUTHORS:20110831-081433118FAST TCP: from theory to experiments
https://resolver.caltech.edu/CaltechAUTHORS:JINieeen05
Year: 2005
DOI: 10.1109/MNET.2005.1383434
We describe a variant of TCP, called FAST, that can sustain high throughput and utilization at multigigabits per second over large distances. We present the motivation, review the background theory, summarize key features of FAST TCP, and report our first experimental results.https://resolver.caltech.edu/CaltechAUTHORS:JINieeen05Congestion control for high performance, stability, and fairness in general networks
https://resolver.caltech.edu/CaltechAUTHORS:PAGieeeacmtn05
Year: 2005
DOI: 10.1109/TNET.2004.842216
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. The focus is on developing decentralized control laws at end-systems and routers at the level of fluid-flow models, that can provably satisfy such properties in arbitrary networks, and subsequently approximate these features through practical packet-level implementations. Two families of control laws are developed. The first "dual" control law is able to achieve the first three objectives for arbitrary networks and delays, but is forced to constrain the resource allocation policy. We subsequently develop a "primal-dual" law that overcomes this limitation and allows sources to match their steady-state preferences at a slower time-scale, provided a bound on round-trip-times is known. We develop two packet-level implementations of this protocol, using 1) ECN marking, and 2) queueing delay, as means of communicating the congestion measure from links to sources. We demonstrate using ns-2 simulations the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness, under a variety of scaling parameters.https://resolver.caltech.edu/CaltechAUTHORS:PAGieeeacmtn05Surviving heat shock: Control strategies for robustness and performance
https://resolver.caltech.edu/CaltechAUTHORS:ELSpnas05
Year: 2005
DOI: 10.1073/pnas.403510102
PMCID: PMC549435
Molecular biology studies the cause-and-effect relationships among microscopic processes initiated by individual molecules within a cell and observes their macroscopic phenotypic effects on cells and organisms. These studies provide a wealth of information about the underlying networks and pathways responsible for the basic functionality and robustness of biological systems. At the same time, these studies create exciting opportunities for the development of quantitative and predictive models that connect the mechanism to its phenotype and then examine various modular structures and the range of their dynamical behavior. The use of such models enables a deeper understanding of the design principles underlying biological organization and makes their reverse engineering and manipulation both possible and tractable. The heat shock response presents an interesting mechanism where such an endeavor is possible. Using a model of heat shock, we extract the design motifs in the system and justify their existence in terms of various performance objectives. We also offer a modular decomposition that parallels that of traditional engineering control architectures.https://resolver.caltech.edu/CaltechAUTHORS:ELSpnas05Joint congestion control and media access control design for ad hoc wireless networks
https://resolver.caltech.edu/CaltechAUTHORS:20170810-103408504
Year: 2005
DOI: 10.1109/INFCOM.2005.1498496
We present a model for the joint design of congestion control and media access control (MAC) for ad hoc wireless networks. Using contention graph and contention matrix, we formulate resource allocation in the network as a utility maximization problem with constraints that arise from contention for channel access. We present two algorithms that are not only distributed spatially, but more interestingly, they decompose vertically into two protocol layers where TCP and MAC jointly solve the system problem. The first is a primal algorithm where the MAC layer at the links generates congestion (contention) prices based on local aggregate source rates, and TCP sources adjust their rates based on the aggregate prices in their paths. The second is a dual subgradient algorithm where the MAC sub-algorithm is implemented through scheduling link-layer flows according to the congestion prices of the links. Global convergence properties of these algorithms are proved. This is a preliminary step towards a systematic approach to jointly design TCP congestion control algorithms and MAC algorithms, not only to improve performance, but more importantly, to make their interaction more transparent.https://resolver.caltech.edu/CaltechAUTHORS:20170810-103408504Plenary Panel Discussion: Challenges and opportunities for the future of control
https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc04
Year: 2005
This panel reflects the scope and diversity of the unprecedented challenges and opportunities for the systems and controls community that have been created by several research themes, from the basic sciences to advanced technologies. Connecting physical processes at multiple time and space scales in quantum, statistical, fluid, and solid mechanics remains not only a central scientific challenge but also one with increasing technological implications. This is particularly so in highly organized and nonequilibrium systems, as in biology and nanotechnology, where interconnection, feedback, and dynamics are playing an increasingly central role.https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc04Cross-layer optimization in TCP/IP networks
https://resolver.caltech.edu/CaltechAUTHORS:WANiatnet05
Year: 2005
DOI: 10.1109/TNET.2005.850219
TCP-AQM can be interpreted as distributed primal-dual algorithms to maximize aggregate utility over source rates. We show that an equilibrium of TCP/IP, if it exists, maximizes aggregate utility over both source rates and routes, provided congestion prices are used as link costs. An equilibrium exists if and only if this utility maximization problem and its Lagrangian dual have no duality gap. In this case, TCP/IP incurs no penalty in not splitting traffic across multiple paths. Such an equilibrium, however, can be unstable. It can be stabilized by adding a static component to link cost, but at the expense of a reduced utility in equilibrium. If link capacities are optimally provisioned, however, pure static routing, which is necessarily stable, is sufficient to maximize utility. Moreover, single-path routing again achieves the same utility as multipath routing at optimality.https://resolver.caltech.edu/CaltechAUTHORS:WANiatnet05Optimization model of internet protocols
https://resolver.caltech.edu/CaltechAUTHORS:20161129-162643807
Year: 2005
DOI: 10.1145/1064212.1064245
Layered architecture is one of the most fundamental and influential structures of network design. Can we integrate the various protocol layers into a single coherent theory by regarding them as carrying out an asynchronous distributed primal-dual computation over the network to implicitly solve a global optimization problem? Different layers iterate on different subsets of the decision variables using local information to achieve individual optimalities, but taken together, these local algorithms attempt to achieve a global objective. Such a theory will expose the interconnection between protocol layers and can be used to study rigorously the performance tradeoff in protocol layering as different ways to distribute a centralized computation. In this talk, we describe some preliminary work towards this goal and discuss some of the difficulties of this approach.https://resolver.caltech.edu/CaltechAUTHORS:20161129-162643807Highly optimized tolerance and power laws in dense and sparse resource regimes
https://resolver.caltech.edu/CaltechAUTHORS:MANpre05
Year: 2005
DOI: 10.1103/PhysRevE.72.016108
Power law cumulative frequency (P) versus event size (l) distributions P(>= l) ~ l^(-alpha) are frequently cited as evidence for complexity and serve as a starting point for linking theoretical models and mechanisms with observed data. Systems exhibiting this behavior present fundamental mathematical challenges in probability and statistics. The broad span of length and time scales associated with heavy tailed processes often requires special sensitivity to distinctions between discrete and continuous phenomena. A discrete highly optimized tolerance (HOT) model, referred to as the probability, loss, resource (PLR) model, gives the exponent alpha = 1/d as a function of the dimension d of the underlying substrate in the sparse resource regime. This agrees well with data for wildfires, web file sizes, and electric power outages. However, another HOT model, based on a continuous (dense) distribution of resources, predicts alpha = 1 + 1/d. In this paper we describe and analyze a third model, the cuts model, which exhibits both behaviors but in different regimes. We use the cuts model to show all three models agree in the dense resource limit. In the sparse resource regime, the continuum model breaks down, but in this case, the cuts and PLR models are described by the same exponent.https://resolver.caltech.edu/CaltechAUTHORS:MANpre05The "robust yet fragile" nature of the Internet
https://resolver.caltech.edu/CaltechAUTHORS:DOYpnas05
Year: 2005
DOI: 10.1073/pnas.0501426102
PMCID: PMC1240072
The search for unifying properties of complex networks is popular, challenging, and important. For modeling approaches that focus on robustness and fragility as unifying concepts, the Internet is an especially attractive case study, mainly because its applications are ubiquitous and pervasive, and widely available expositions exist at every level of detail. Nevertheless, alternative approaches to modeling the Internet often make extremely different assumptions and derive opposite conclusions about fundamental properties of one and the same system. Fortunately, a detailed understanding of Internet technology combined with a unique ability to measure the network means that these differences can be understood thoroughly and resolved unambiguously. This article aims to make recent results of this process accessible beyond Internet specialists to the broader scientific community and to clarify several sources of basic methodological differences that are relevant beyond either the Internet or the two specific approaches focused on here (i.e., scale-free networks and highly optimized tolerance networks).https://resolver.caltech.edu/CaltechAUTHORS:DOYpnas05Motifs, Control, and Stability
https://resolver.caltech.edu/CaltechAUTHORS:DOYplosb05
Year: 2005
DOI: 10.1371/journal.pbio.0030392
PMCID: PMC1283396
Many of the detailed mechanisms by which bacteria express genes in response to various environmental signals are well-known. The molecular players underlying these responses are part of a bacterial transcriptional regulatory network (BTN). To explore the properties and evolution of such networks and to extract general principles, biologists have looked for common themes or motifs, and their interconnections, such as reciprocal links or feedback loops. A BTN motif can be thought of as a directed graph with regulatory interactions connecting transcription factors to their operon targets (the set of related bacterial genes that are transcribed together). For example, Figure 1A shows a BTN motif that describes a part of the transcriptional response to heat (and other) stressors.
But biological networks are not just static physical constructs, and it is, in fact, their dynamical properties that determine their function. In this issue of PLoS Biology, Prill et al. [1] show that the relative abundance of small motifs in biological networks, including the BTN, may be explained by the stability of their dynamics across a wide range of cellular conditions. In a dynamical system, control engineers define "stability" as preservation of a specific behavior over time under some set of perturbations. The definitions of stability vary somewhat depending on the types of system, behavior, and perturbation specified [2]. For the BTN example, Prill et al. [1] study stability of gene expression levels, as modeled by a set of linear differential equations. Given interactions from a BTN motif, "structural stability" is robustness of stability to arbitrary signs and magnitudes of interactions. This is such a stringent notion of stability that it would be satisfied by few systems, yet Prill et al. [1] show that all BTN motifs are stable for all signs and magnitudes of interactions. For several other biological networks, they show a level of correlation between abundance and structural stability that is highly unlikely to occur at random. The significance of these results as well as those in recent related papers (see references in [1], particularly those of Alon and colleagues) can be better appreciated within the larger context of well-known concepts from biology and engineering, particularly control theory [3]. For additional mathematical details underlying the qualitative arguments presented here, see the online supplement (Text S1 and S2).https://resolver.caltech.edu/CaltechAUTHORS:DOYplosb05Three mechanisms for power laws on the Cayley tree
https://resolver.caltech.edu/CaltechAUTHORS:BROpre05
Year: 2005
DOI: 10.1103/PhysRevE.72.056120
We compare preferential growth, critical phase transitions, and highly optimized tolerance (HOT) as mechanisms for generating power laws in the familiar and analytically tractable context of lattice percolation and forest fire models on the Cayley tree. All three mechanisms have been widely discussed in the context of complexity in natural and technological systems. This parallel study enables direct comparison of the mechanisms and associated lattice solutions. Criticality fits most naturally into the category of random processes, where power laws are a consequence of fluctuations in an ensemble with no intrinsic scale. The power laws in preferential growth can be understood in the context of competing exponential growth and decay processes. HOT generalizes this functional mechanism involving exponentials of exponentials to a broader class of nonexponential functions, which arise from optimization.https://resolver.caltech.edu/CaltechAUTHORS:BROpre05Fundamental Limitations of Disturbance Attenuation in the Presence of Side Information
https://resolver.caltech.edu/CaltechAUTHORS:20190306-141450198
Year: 2005
DOI: 10.1109/CDC.2005.1582542
In this paper, we study fundamental limitations of disturbance attenuation of feedback systems, under the assumption that the controller has a finite horizon preview of the disturbance. In contrast with prior work, we extend Bode's integral equation for the case where the preview is made available to the controller via a general, finite capacity, communication system. Under asymptotic stationarity assumptions, our results show that the new fundamental limitation differs from Bode's only by a constant, which quantifies the information rate through the communication system. In the absence of stationarity, we derive a universal lower bound which uses entropy rates as a measure of performance.https://resolver.caltech.edu/CaltechAUTHORS:20190306-141450198Highly optimised global organisation of metabolic networks
https://resolver.caltech.edu/CaltechAUTHORS:20110815-134554423
Year: 2005
DOI: 10.1049/ip-syb:20050042
High-level, mathematically precise descriptions of the global organisation of complex metabolic networks are necessary for understanding the global structure of metabolic networks, the interpretation and integration of large amounts of biologic data (sequences, various -omics) and ultimately for rational design of therapies for disease processes. Metabolic networks are highly organised to execute their function efficiently while tolerating wide variation in their environment. These networks are constrained by physical requirements (e.g. conservation of energy, redox and small moieties) but are also remarkably robust and evolvable. The authors use well-known features of the stoichiometry of bacterial metabolic networks to demonstrate how network architecture facilitates such capabilities, and to develop a minimal abstract metabolism which incorporates the known features of the stoichiometry and respects the constraints on enzymes and reactions. This model shows that the essential functionality and constraints drive the tradeoffs between robustness and fragility, as well as the large-scale structure and organisation of the whole network, particularly high variability. The authors emphasise how domain-specific constraints and tradeoffs imposed by the environment are important factors in shaping stoichiometry. Importantly, the consequence of these highly organised tradeoffs and tolerances is an architecture that has a highly structured modularity that is self-dissimilar and scale-rich.https://resolver.caltech.edu/CaltechAUTHORS:20110815-134554423Understanding Internet topology: principles, models, and validation
https://resolver.caltech.edu/CaltechAUTHORS:ALDieeeacmtn05
Year: 2005
DOI: 10.1109/TNET.2005.861250
Building on a recent effort that combines a first-principles approach to modeling router-level connectivity with a more pragmatic use of statistics and graph theory, we show in this paper that for the Internet, an improved understanding of its physical infrastructure is possible by viewing the physical connectivity as an annotated graph that delivers raw connectivity and bandwidth to the upper layers in the TCP/IP protocol stack, subject to practical constraints (e.g., router technology) and economic considerations (e.g., link costs). More importantly, by relying on data from Abilene, a Tier-1 ISP, and the Rocketfuel project, we provide empirical evidence in support of the proposed approach and its consistency with networking reality. To illustrate its utility, we: 1) show that our approach provides insight into the origin of high variability in measured or inferred router-level maps; 2) demonstrate that it easily accommodates the incorporation of additional objectives of network design (e.g., robustness to router failure); and 3) discuss how it complements ongoing community efforts to reverse-engineer the Internet.https://resolver.caltech.edu/CaltechAUTHORS:ALDieeeacmtn05Wildfires, complexity, and highly optimized tolerance
https://resolver.caltech.edu/CaltechAUTHORS:MORpnas05
Year: 2005
DOI: 10.1073/pnas.0508985102
PMCID: PMC1312407
Recent, large fires in the western United States have rekindled debates about fire management and the role of natural fire regimes in the resilience of terrestrial ecosystems. This real-world experience parallels debates involving abstract models of forest fires, a central metaphor in complex systems theory. Both real and modeled fire-prone landscapes exhibit roughly power law statistics in fire size versus frequency. Here, we examine historical fire catalogs and a detailed fire simulation model; both are in agreement with a highly optimized tolerance model. Highly optimized tolerance suggests robustness tradeoffs underlie resilience in different fire-prone ecosystems. Understanding these mechanisms may provide new insights into the structure of ecological systems and be key in evaluating fire management strategies and sensitivities to climate change.https://resolver.caltech.edu/CaltechAUTHORS:MORpnas05Biological complexity and robustness
https://resolver.caltech.edu/CaltechAUTHORS:20110615-105907604
Year: 2006
DOI: 10.1109/BMN.2006.330919
This talk will describe qualitatively, in as much detail as time allows, these features of biological systems and their parallels in technology, using hopefully familiar and concrete examples. The aim is to be accessible to biologists, and not to depend critically on the mathematical framework. A crucial insight is that both evolution by natural selection and engineering design must produce high robustness to uncertain environments and components in order for systems to persist. Yet this allows and even facilitates severe fragility to novel perturbations, particularly those that exploit the very mechanisms providing robustness, and this "robust yet fragile" (RYF) feature must be exploited explicitly in any theory that hopes to scale to large systems. Time permitting, the mathematical research implications of this view of "organized complexity" in biology, technology, and mathematics will be sketched. This view contrasts sharply with that of "emergent complexity" popular in other areas of science, in a way that can now be made mathematically precise.https://resolver.caltech.edu/CaltechAUTHORS:20110615-105907604Complexity in Automation of SOS Proofs: An Illustrative Example
https://resolver.caltech.edu/CaltechAUTHORS:20110215-132231555
Year: 2006
DOI: 10.1109/CDC.2006.377629
We present a case study in proving invariance for a chaotic dynamical system, the logistic map, based on Positivstellensatz refutations, with the aim of studying the problems associated with developing a completely automated proof system. We derive the refutation using two different forms of the Positivstellensatz and compare the results to illustrate the challenges in defining and classifying the 'complexity' of such a proof. The results show the flexibility of the SOS framework in converting a dynamics problem into a semialgebraic one as well as in choosing the form of the proof. Yet it is this very flexibility that complicates the process of automating the proof system and classifying proof 'complexity.'https://resolver.caltech.edu/CaltechAUTHORS:20110215-132231555Cross-layer Congestion Control, Routing and Scheduling Design in Ad Hoc Wireless Networks
https://resolver.caltech.edu/CaltechAUTHORS:20110120-103630156
Year: 2006
DOI: 10.1109/INFOCOM.2006.142
This paper considers jointly optimal design of cross-layer congestion control, routing and scheduling for ad hoc wireless networks. We first formulate the rate constraint and scheduling constraint using multicommodity flow variables, and formulate resource allocation in networks with fixed wireless channels (or single-rate wireless devices that can mask channel variations) as a utility maximization problem with these constraints. By dual decomposition, the resource allocation problem naturally decomposes into three subproblems: congestion control, routing and scheduling that interact through congestion price. The global convergence property of this algorithm is proved. We next extend the dual algorithm to handle networks with time-varying channels and adaptive multi-rate devices. The stability of the resulting system is established, and its performance is characterized with respect to an ideal reference system which has the best feasible rate region at link layer.
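As a concrete illustration of the dual decomposition summarized in this abstract, the sketch below solves a toy network utility maximization problem by a subgradient price update: sources maximize utility minus congestion cost, and the link adjusts its price from excess demand. The single shared link, log utilities, step size, and iteration count are illustrative assumptions, not the paper's model.

```python
# Toy NUM: maximize log(x1) + log(x2) subject to x1 + x2 <= c,
# solved by dual decomposition with a subgradient price update.
# (Sketch only; capacity, step size, and utilities are assumed here.)

def solve_num(c=1.0, step=0.05, iters=2000):
    lam = 1.0  # congestion price on the shared link
    for _ in range(iters):
        # Each source solves max_x log(x) - lam * x, giving x = 1 / lam.
        x = [1.0 / lam, 1.0 / lam]
        # The link raises its price when demand exceeds capacity.
        lam = max(1e-6, lam + step * (sum(x) - c))
    return x, lam

x, lam = solve_num()
print(x, lam)  # rates approach c/2 = 0.5 each; price approaches 2.0
```

The price is the only coordination signal between the per-source subproblems, which is the sense in which the decomposition separates congestion control from resource allocation.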
We then generalize the aforementioned results to a general model of a queueing network served by a set of interdependent parallel servers with time-varying service capabilities, which models many design problems in communication networks. We show that for a general convex optimization problem where a subset of variables lie in a polytope and the rest in a convex set, the dual-based algorithm remains stable and optimal when the constraint set is modulated by an irreducible finite-state Markov chain. This paper thus presents a step toward a systematic way to carry out cross-layer design in the framework of "layering as optimization decomposition" for time-varying channel models.https://resolver.caltech.edu/CaltechAUTHORS:20110120-103630156An optimization-based approach to modeling internet topology
https://resolver.caltech.edu/CaltechAUTHORS:20110119-112858944
Year: 2006
DOI: 10.1007/0-387-29234-9_6
Over the last decade there has been significant interest and attention devoted towards understanding the complex structure of the Internet, particularly its topology and the large-scale properties that can be derived from it. While recent work by empiricists and theoreticians has emphasized certain statistical and mathematical properties of network structure, this article presents an optimization-based perspective that focuses on the objectives, constraints, and other drivers of engineering design. We argue that Internet topology at the router-level can be understood in terms of the tradeoffs between network performance and the technological and economic factors constraining design. Furthermore, we suggest that the formulation of corresponding optimization problems serves as a reasonable starting point for generating "realistic, yet fictitious" network topologies. Finally, we describe how this optimization-based perspective is being used in the development of a still-nascent theory for the Internet as a whole.https://resolver.caltech.edu/CaltechAUTHORS:20110119-112858944Disturbance attenuation bounds in the presence of a remote preview
https://resolver.caltech.edu/CaltechAUTHORS:20110810-103527394
Year: 2006
DOI: 10.1007/11533382_18
We study the fundamental limits of disturbance attenuation of a networked control scheme, where a remote preview of the disturbance is available. The preview information is conveyed to the controller, via an encoder and a finite capacity channel. In this article, we present an example where we design a remote preview system by means of an additive, white and Gaussian channel. The example is followed by a summary of our recent results on general performance bounds, which we use to prove optimality of the design method.https://resolver.caltech.edu/CaltechAUTHORS:20110810-103527394Software Infrastructure for Effective Communication and Reuse of Computational Models
https://resolver.caltech.edu/CaltechAUTHORS:20130107-161648673
Year: 2006
Until recently, the majority of computational models in biology were implemented in custom programs and published as statements of the underlying mathematics. However, to be useful as formal embodiments of our understanding of biological systems, computational models must be put into a consistent form that can be communicated more directly between the software tools used to work with them. In this chapter, we describe the Systems Biology Markup Language (SBML), a format for representing models in a way that can be used by different software systems to communicate and exchange those models. By supporting SBML as an input and output format, different software tools can all operate on an identical representation of a model, removing opportunities for errors in translation and assuring a common starting point for analyses and simulations. We also take this opportunity to discuss some of the resources available for working with SBML as well as ongoing efforts in SBML's continuing evolution.https://resolver.caltech.edu/CaltechAUTHORS:20130107-161648673Layering As Optimization Decomposition: Current Status and Open Issues
https://resolver.caltech.edu/CaltechAUTHORS:20170508-173109246
Year: 2006
DOI: 10.1109/CISS.2006.286492
Network protocols in layered architectures have historically been obtained on an ad-hoc basis, and much of the recent cross-layer designs are conducted through piecemeal approaches. Network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of generalized network utility maximization (NUM), providing insight on what they optimize and structures of the network protocol stack. This paper presents a short survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition", where the overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. Furthermore, there are many alternative decompositions, each leading to a different layering architecture. Industry adoption of this unifying framework has also started. Here we summarize the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding. Key messages and methodologies arising out of many recent work are listed. Then we present a list of challenging open issues in this area and the initial progress made on some of them.https://resolver.caltech.edu/CaltechAUTHORS:20170508-173109246Layering As Optimization Decomposition: Framework and Examples
https://resolver.caltech.edu/CaltechAUTHORS:20190306-130031273
Year: 2006
DOI: 10.1109/ITW.2006.1633780
Network protocols in layered architectures have historically been obtained primarily on an ad-hoc basis. Recent research has shown that network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of Network Utility Maximization (NUM), providing insight into what they optimize and structures of the network protocol stack. This paper presents a short survey of the recent efforts towards a systematic understanding of 'layering' as 'optimization decomposition', where the overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the sub-problems. Different decompositions lead to alternative layering architectures. We summarize several examples of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding.https://resolver.caltech.edu/CaltechAUTHORS:20190306-130031273Advanced methods and algorithms for biological networks analysis
https://resolver.caltech.edu/CaltechAUTHORS:ELSprocieee06
Year: 2006
DOI: 10.1109/JPROC.2006.871776
Modeling and analysis of complex biological networks presents a number of mathematical challenges. For the models to be useful from a biological standpoint, they must be systematically compared with data. Robustness is a key to biological understanding and proper feedback to guide experiments,including both the deterministic stability and performance properties of models in the presence of parametric uncertainties and their stochastic behavior in the presence of noise. In this paper, we present mathematical and algorithmic tools to address such questions for models that may be nonlinear, hybrid,and stochastic. These tools are rooted in solid mathematical theories, primarily from robust control and dynamical systems, but with important recent developments. They also have the potential for great practical relevance, which we explore through a series of biologically motivated examples.https://resolver.caltech.edu/CaltechAUTHORS:ELSprocieee06On Asymptotic Optimality of Dual Scheduling Algorithm In A Generalized Switch
https://resolver.caltech.edu/CaltechAUTHORS:20110203-100201578
Year: 2006
DOI: 10.1109/WIOPT.2006.1666500
The generalized switch is a model of a queueing system in which parallel servers are interdependent and have time-varying service capabilities. This paper considers the dual scheduling algorithm, which uses rate control and queue-length-based scheduling to allocate resources in a generalized switch. We consider a saturated system in which each user has an infinite amount of data to be served. We prove the asymptotic optimality of the dual scheduling algorithm for such a system: the vector of average service rates of the scheduling algorithm maximizes an aggregate concave utility function. As fairness objectives can be achieved by appropriately choosing utility functions, this asymptotic optimality establishes the fairness properties of the dual scheduling algorithm.
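The flavor of this scheme, queues acting as prices for rate control while the server follows a queue-length-based (max-weight) rule, can be sketched in a toy simulation. The two-queue, single-server topology, log utilities, and unit service rate below are hypothetical choices for illustration, not the paper's general switch model.

```python
# Toy sketch of dual scheduling: queue-driven rate control plus
# max-weight service in a two-queue, single-server "switch".
# Topology, log utilities, and unit service rate are assumptions.

def simulate(slots=2000):
    q = [1.0, 1.0]        # queue lengths (play the role of congestion prices)
    served = [0.0, 0.0]   # cumulative service delivered to each queue
    for _ in range(slots):
        # Rate control: with utility U(x) = log x, source i injects
        # x = 1/q_i per slot (capped at 1), so long queues throttle input.
        for i in range(2):
            q[i] += min(1.0, 1.0 / q[i])
        # Scheduling: serve the longest queue (max-weight) at unit rate.
        i = 0 if q[0] >= q[1] else 1
        amount = min(1.0, q[i])
        q[i] -= amount
        served[i] += amount
    return q, served

q, served = simulate()
share = served[0] / (served[0] + served[1])
print(q, share)  # queues stay bounded; each queue gets about half the service
```

In this symmetric setup the queues settle near a fixed level and the long-run service split is roughly fair, which mirrors how the choice of utility function shapes the fairness of the achieved rates.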
The dual scheduling algorithm motivates a new architecture for scheduling, in which an additional queue is introduced to interface the user data queue and the time-varying server and to modulate the scheduling process, so as to achieve different performance objectives. Further research includes scheduling with Quality-of-Service guarantees using the dual scheduler, and its application and implementation in various versions of the generalized switch model.https://resolver.caltech.edu/CaltechAUTHORS:20110203-100201578Module-Based Analysis of Robustness Tradeoffs in the Heat Shock Response System
https://resolver.caltech.edu/CaltechAUTHORS:KURploscompbio06
Year: 2006
DOI: 10.1371/journal.pcbi.0020059
PMCID: PMC1523291
Biological systems have evolved complex regulatory mechanisms, even in situations where much simpler designs seem to be sufficient for generating nominal functionality. Using module-based analysis coupled with rigorous mathematical comparisons, we propose that in analogy to control engineering architectures, the complexity of cellular systems and the presence of hierarchical modular structures can be attributed to the necessity of achieving robustness. We employ the Escherichia coli heat shock response system, a strongly conserved cellular mechanism, as an example to explore the design principles of such modular architectures. In the heat shock response system, the sigma-factor σ32 is a central regulator that integrates multiple feedforward and feedback modules. Each of these modules provides a different type of robustness with its inherent tradeoffs in terms of transient response and efficiency. We demonstrate how the overall architecture of the system balances such tradeoffs. An extensive mathematical exploration nevertheless points to the existence of an array of alternative strategies for the existing heat shock response that could exhibit similar behavior. We therefore deduce that the evolutionary constraints facing the system might have steered its architecture toward one of many robustly functional solutions.https://resolver.caltech.edu/CaltechAUTHORS:KURploscompbio06H∞ Control of Nonlinear Systems: A Convex Characterization
https://resolver.caltech.edu/CaltechCDSTR:1993.020
Year: 2006
The so-called nonlinear H∞-control problem in state space is considered with an emphasis on developing machinery with promising computational properties. Both state feedback and output feedback H∞-control problems for a class of nonlinear systems are characterized in terms of continuous positive definite solutions of algebraic nonlinear matrix inequalities which are convex feasibility problems. The existence of solutions to these nonlinear matrix inequalities (NLMIs) is also addressed.https://resolver.caltech.edu/CaltechCDSTR:1993.020Stabilization of Linear Systems with Structured Perturbations
https://resolver.caltech.edu/CaltechCDSTR:1993.014
Year: 2006
The problem of stabilization of linear systems with bounded structured uncertainties is considered in this paper. Two notions of stability, denoted quadratic stability (Q-stability) and μ-stability, are considered, and corresponding notions of stabilizability and detectability are defined. In both cases, the output feedback stabilization problem is reduced via a separation argument to two simpler problems: full information (FI) and full control (FC). The set of all stabilizing controllers can be parametrized as a linear fractional transformation (LFT) on a free stable parameter. For Q-stability, stabilizability and detectability can in turn be characterized by Linear Matrix Inequalities (LMIs), and the FI and FC Q-stabilization problems can be solved using the corresponding LMIs. In the standard one-dimensional case the results in this paper reduce to well-known results on controller parametrization using state-space methods, although the development here relies more heavily on elegant LFT machinery and avoids the need for coprime factorizations.https://resolver.caltech.edu/CaltechCDSTR:1993.014H∞ Control of Nonlinear Systems: A Class of Controllers
https://resolver.caltech.edu/CaltechCDSTR:1993.008
Year: 2006
The standard state space solutions to the H∞ control problem for linear time invariant systems are generalized to nonlinear time-invariant systems. A class of nonlinear H∞-controllers are parameterized as nonlinear fractional transformations on contractive, stable free nonlinear parameters. As in the linear case, the H∞ control problem is solved by its reduction to four simpler special state space problems, together with a separation argument. Another byproduct of this approach is that the sufficient conditions for H∞ control problem to be solved are also derived with this machinery. The solvability for nonlinear H∞-control problem requires positive definite solutions to two parallel decoupled Hamilton-Jacobi inequalities and these two solutions satisfy an additional coupling condition. An illustrative example, which deals with a passive plant, is given at the end.https://resolver.caltech.edu/CaltechCDSTR:1993.008A State-Space Approach to Robustness Analysis and Synthesis for Nonlinear Uncertain Systems
https://resolver.caltech.edu/CaltechCDSTR:1994.010
Year: 2006
A state-space characterization of stability and performance robustness analysis and synthesis with some computationally attractive properties for nonlinear uncertain systems is proposed. The robust stability and robust performances for a class of nonlinear systems subject to bounded structured uncertainties are characterized in terms of various types of nonlinear matrix inequalities (NLMIs), which are natural generalizations of the linear matrix inequalities (LMIs) that appear in linear robustness analysis. As in the linear case, scalings or multipliers are used to find storage functions that give sufficient conditions for robust performances; these are also necessary under certain assumptions about smoothness of the storage functions and structure of the uncertainty. The resulting NLMIs yield convex optimization problems. Unlike the linear case, these convex problems are not finite dimensional, so their computational benefits are far less immediate. Sufficient conditions for the solvability of robust synthesis problems are developed in terms of NLMIs as well. Some aspects of the computational issues are also discussed.https://resolver.caltech.edu/CaltechCDSTR:1994.010Attenuation of Persistent L∞-Bounded Disturbances for Nonlinear Systems
https://resolver.caltech.edu/CaltechCDSTR:1995.002
Year: 2006
A nonlinear generalization of the L1-control problem, which deals with the attenuation of persistent bounded disturbances in the L∞ sense, is investigated in this paper. The methods used here are motivated by [23]. The main idea in L1-performance analysis and synthesis is to construct a certain invariant subset of the state space such that achieving disturbance rejection is equivalent to restricting the state dynamics to this set. Concepts from viability theory, nonsmooth analysis, and set-valued analysis play important roles. In addition, the relation between the L1-control of a continuous-time system and the l1-control of its Euler-approximated discrete-time systems is established.https://resolver.caltech.edu/CaltechCDSTR:1995.002Rate Control for Multicast with Network Coding
https://resolver.caltech.edu/CaltechCDSTR:2006.004
Year: 2006
Recent advances in network coding have shown great potential for efficient information multicasting in communication networks, in terms of both network throughput and network management. In this paper, we address the problem of rate control at end-systems for network coding based multicast flows. We develop two adaptive rate control algorithms for the networks with given coding subgraphs and without given coding subgraphs, respectively. With random network coding, both algorithms can be implemented in a distributed manner, and work at transport layer to adjust source rates and at network layer to carry out network coding. We prove that the proposed algorithms converge to the globally optimal solutions. Some related issues are discussed, and numerical examples are provided to complement our theoretical analysis.https://resolver.caltech.edu/CaltechCDSTR:2006.004Layering as Optimization Decomposition: Questions and Answers
https://resolver.caltech.edu/CaltechAUTHORS:20170508-172152981
Year: 2006
DOI: 10.1109/MILCOM.2006.302293
Network protocols in layered architectures have historically been obtained on an ad-hoc basis, and much of the recent cross-layer designs are conducted through piecemeal approaches. Network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of generalized Network Utility Maximization (NUM), providing insight on what they optimize and on the structures of network protocol stacks. In the form of 10 Questions and Answers, this paper presents a short survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition". The overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. Furthermore, there are many alternative decompositions, each leading to a different layering architecture. Industry adoption of this unifying framework has also started. Here we summarize the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding. We also discuss under-explored future research directions in this area. More importantly than proposing any particular crosslayer design, this framework is working towards a mathematical foundation of network architectures and the design process of modularization.https://resolver.caltech.edu/CaltechAUTHORS:20170508-172152981Dual scheduling algorithm in a generalized switch: asymptotic optimality and throughput optimality
https://resolver.caltech.edu/CaltechAUTHORS:20170810-103710158
Year: 2007
DOI: 10.1007/1-84628-274-8_7
In this article, we consider the dual scheduling algorithm for a generalized switch. For a saturated system, we prove the asymptotic optimality of the dual scheduling algorithm and thus establish its fairness properties. For a system with exogenous arrivals, we propose a modified dual scheduling algorithm, which is throughput-optimal while providing some weighted fairness among the users at the level of flows.
The dual scheduling algorithm motivates a new architecture for scheduling, in which an additional queue is introduced to interface the user data queue and the time-varying server and to modulate the scheduling process, so as to achieve different performance objectives. Further research stemming from this article includes scheduling with Quality-of-Service guarantees using the dual scheduler, and its application and implementation in various versions of the generalized switch model.https://resolver.caltech.edu/CaltechAUTHORS:20170810-103710158The Statistical Mechanics of Fluctuation-Dissipation and Measurement Back Action
https://resolver.caltech.edu/CaltechAUTHORS:20101104-115927780
Year: 2007
DOI: 10.1109/ACC.2007.4282774
In this paper, we take a control-theoretic approach to answering some standard questions in statistical mechanics. A central problem is the relation between systems which appear macroscopically dissipative but are microscopically lossless. We show that a linear macroscopic system is dissipative if and only if it can be approximated by a linear lossless microscopic system, over arbitrarily long time intervals. As a by-product, we obtain mechanisms explaining Johnson-Nyquist noise as initial uncertainty in the lossless state, as well as measurement back action and a trade-off between process and measurement noise.https://resolver.caltech.edu/CaltechAUTHORS:20101104-115927780Optimization Based Rate Control for Multicast with Network Coding
https://resolver.caltech.edu/CaltechAUTHORS:20100826-092317616
Year: 2007
DOI: 10.1109/INFCOM.2007.139
Recent advances in network coding have shown great potential for efficient information multicasting in communication networks, in terms of both network throughput and network management. In this paper, we address the problem of rate control at end-systems for network coding based multicast flows. We develop two adaptive rate control algorithms for the networks with given coding subgraphs and without given coding subgraphs, respectively. With random network coding, both algorithms can be implemented in a distributed manner, and work at transport layer to adjust source rates and at network layer to carry out network coding. We prove that the proposed algorithms converge to the globally optimal solutions for intra-session network coding. Some related issues are discussed, and numerical examples are provided to complement our theoretical analysis.https://resolver.caltech.edu/CaltechAUTHORS:20100826-092317616Fundamental Limitations of Disturbance Attenuation in the Presence of Side Information
https://resolver.caltech.edu/CaltechAUTHORS:MARieeetac07
Year: 2007
DOI: 10.1109/TAC.2006.887898
In this paper, we study fundamental limitations of disturbance attenuation of feedback systems, under the assumption that the controller has a finite horizon preview of the disturbance. In contrast with prior work, we extend Bode's integral equation for the case where the preview is made available to the controller via a general, finite capacity, communication system. Under asymptotic stationarity assumptions, our results show that the new fundamental limitation differs from Bode's only by a constant, which quantifies the information rate through the communication system. In the absence of asymptotic stationarity, we derive a universal lower bound which uses Shannon's entropy rate as a measure of performance. By means of a case-study, we show that our main bounds may be achieved.https://resolver.caltech.edu/CaltechAUTHORS:MARieeetac07Layering as Optimization Decomposition: A Mathematical Theory of Network Architectures
https://resolver.caltech.edu/CaltechAUTHORS:20170810-104152899
Year: 2007
DOI: 10.1109/JPROC.2006.887322
Network protocols in layered architectures have historically been obtained on an ad hoc basis, and many of the recent cross-layer designs are also conducted through piecemeal approaches. Network protocol stacks may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems. This paper presents a survey of the recent efforts towards a systematic understanding of layering as optimization decomposition, where the overall communication network is modeled by a generalized network utility maximization problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. There can be many alternative decompositions, leading to a choice of different layering architectures. This paper surveys the current status of horizontal decomposition into distributed computation, and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and channel coding. Key messages and methods arising from many recent works are summarized, and open issues discussed. Through case studies, it is illustrated how layering as Optimization Decomposition provides a common language to think about modularization in the face of complex, networked interactions, a unifying, top-down approach to design protocol stacks, and a mathematical theory of network architectures.https://resolver.caltech.edu/CaltechAUTHORS:20170810-104152899Rules of engagement
https://resolver.caltech.edu/CaltechAUTHORS:20150317-084036709
Year: 2007
DOI: 10.1038/446860a
Complex engineered and biological systems share protocol-based architectures that make them robust and evolvable, but with hidden fragilities to rare perturbations.https://resolver.caltech.edu/CaltechAUTHORS:20150317-084036709Thermodynamics of linear systems
https://resolver.caltech.edu/CaltechAUTHORS:20190208-142209118
Year: 2007
DOI: 10.23919/ECC.2007.7068722
We rigorously derive the main results of thermo-dynamics, including Carnot's theorem, in the framework of time-varying linear systems.https://resolver.caltech.edu/CaltechAUTHORS:20190208-142209118Can complexity science support the engineering of critical network infrastructures?
https://resolver.caltech.edu/CaltechAUTHORS:20190304-105018143
Year: 2007
DOI: 10.1109/ICSMC.2007.4414241
Considerable attention is now being devoted to the study of "complexity science" with the intent of discovering and applying universal laws of highly interconnected and evolved systems. This paper considers several issues related to the use of these theories in the context of critical infrastructures, particularly the Internet. Specifically, we revisit the notion of "organized complexity" and suggest that it is fundamental to our ability to understand, operate, and design next-generation infrastructure networks. We comment on the role of engineering in defining an architecture to support networked infrastructures and highlight recent advances in the theory of distributed control driven by network technologies.https://resolver.caltech.edu/CaltechAUTHORS:20190304-105018143Linear-quadratic-Gaussian heat engines
https://resolver.caltech.edu/CaltechAUTHORS:20190304-104205307
Year: 2007
DOI: 10.1109/CDC.2007.4434789
In this paper, we study the problem of extracting work from heat flows. In thermodynamics, a device doing this is called a heat engine. A fundamental problem is to derive hard limits on the efficiency of heat engines. Here we construct a linear-quadratic-Gaussian optimal controller that estimates the states of a heated lossless system. The measurements cool the system, and the surplus energy can be extracted as work by the controller. Hence, the controller acts like a Maxwell's demon. We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits. Over infinite time horizons, the controller has the same efficiency as a Carnot heat engine, and thereby it respects the second law of thermodynamics. As illustration we use an electric circuit where an ideal current source extracts energy from resistors with Johnson-Nyquist noise.https://resolver.caltech.edu/CaltechAUTHORS:20190304-104205307Contention control: A game-theoretic approach
https://resolver.caltech.edu/CaltechAUTHORS:CHEcdc07
Year: 2007
DOI: 10.1109/CDC.2007.4435015
We present a game-theoretic approach to contention control. We define a game-theoretic model, called random access game, to capture the contention/interaction among wireless nodes in wireless networks with contention-based medium access. We characterize Nash equilibria of random access games, study their dynamics and propose distributed algorithms (strategy evolutions) to achieve the Nash equilibria. This provides a general analytical framework that is capable of modelling a large class of systemwide quality of service models via the specification of per-node utility functions, in which systemwide fairness or service differentiation can be achieved in a distributed manner as long as each node executes a contention resolution algorithm that is designed to achieve the Nash equilibrium. We thus design a medium access method according to a distributed strategy update mechanism that achieves the Nash equilibrium of the random access game. In addition to guiding medium access control design, the random access game model also provides an analytical framework to understand equilibrium and dynamic properties of different medium access protocols and their interactions.https://resolver.caltech.edu/CaltechAUTHORS:CHEcdc07Appreciation of the Machinations of the Blind Watchmaker
https://resolver.caltech.edu/CaltechAUTHORS:ARKieeetac08
Year: 2008
DOI: 10.1109/TAC.2007.913342
One danger in using the language of engineering to describe the patterns and operations of the evident products of natural selection is that invoking principles of design runs the risk of invoking a designer. But as we analyze the increasing amount of data on the genome and its organization across a wide array of organisms, we are discovering there are patterns and dynamics reminiscent of designs that we, as imperfect human designers, recognize as serving an engineering purpose, including the purpose to be designable and evolvable.
There is no doubt that biological artifacts are the product of Dawkins' Blind Watchmaker, natural selection. But natural selection has at its heart one of engineering's most prized principles, optimization. Survival of the fittest, while not directly specifying an objective function that an organism must meet, nonetheless provides a clear figure of merit for long-term biological success, persistence of lineages through reproduction of organisms, and is a well-formed if ever-changing specification. The mechanisms which provide the optimization algorithm for an organism to meet the demands of this changeable requirement, composed of a program subject to operations of mutation and interorganismal transfer and inheritance, are themselves under selection. Repeated rounds of this process lead, some argue, to architectures that facilitate evolution itself, the evolving of evolvability.https://resolver.caltech.edu/CaltechAUTHORS:ARKieeetac08Complexity and fragility in stability for linear systems
https://resolver.caltech.edu/CaltechAUTHORS:20100622-135915127
Year: 2008
DOI: 10.1109/ACC.2008.4586725
This paper presents a formal axiomatization of the notion that (proof) complexity implies (property) fragility and illustrates this framework in the context of the stability of both discrete-time and continuous-time linear systems.https://resolver.caltech.edu/CaltechAUTHORS:20100622-135915127A proposal for a coordinated effort for the determination of brainwide neuroanatomical connectivity in model organisms at a mesoscopic scale
https://resolver.caltech.edu/CaltechAUTHORS:20090717-142050047
Year: 2009
DOI: 10.1371/journal.pcbi.1000334
PMCID: PMC2655718
In this era of complete genomes, our knowledge of neuroanatomical circuitry remains surprisingly sparse. Such knowledge is critical, however, for both basic and clinical research into brain function. Here we advocate for a concerted effort to fill this gap, through systematic, experimental mapping of neural circuits at a mesoscopic scale of resolution suitable for comprehensive, brainwide coverage, using injections of tracers or viral vectors. We detail the scientific and medical rationale and briefly review existing knowledge and experimental techniques. We define a set of desiderata, including brainwide coverage; validated and extensible experimental techniques suitable for standardization and automation; centralized, open-access data repository; compatibility with existing resources; and tractability with current informatics technology. We discuss a hypothetical but tractable plan for mouse, additional efforts for the macaque, and technique development for human. We estimate that the mouse connectivity project could be completed within five years with a comparatively modest budget.https://resolver.caltech.edu/CaltechAUTHORS:20090717-142050047Fire in the Earth system
https://resolver.caltech.edu/CaltechAUTHORS:20090707-150808418
Year: 2009
DOI: 10.1126/science.1163886
Fire is a worldwide phenomenon that appears in the geological record soon after the appearance of terrestrial plants. Fire influences global ecosystem patterns and processes, including vegetation distribution and structure, the carbon cycle, and climate. Although humans and fire have always coexisted, our capacity to manage fire remains imperfect and may become more difficult in the future as climate change alters fire regimes. This risk is difficult to assess, however, because fires are still poorly represented in global models. Here, we discuss some of the most important issues involved in developing a better understanding of the role of fire in the Earth system.https://resolver.caltech.edu/CaltechAUTHORS:20090707-150808418Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
https://resolver.caltech.edu/CaltechAUTHORS:20090904-141033791
Year: 2009
Graph theory models the Internet mathematically, and a number of plausible mathematically interesting network models for the Internet have been developed and studied. Simultaneously, Internet researchers have developed methodology to use real data to validate, or invalidate, proposed Internet models. The authors look at these parallel developments, particularly as they apply to scale-free network models of the preferential attachment type.https://resolver.caltech.edu/CaltechAUTHORS:20090904-141033791Linear control analysis of the autocatalytic glycolysis system
https://resolver.caltech.edu/CaltechAUTHORS:20100507-144031986
Year: 2009
DOI: 10.1109/ACC.2009.5159925
Autocatalysis is necessary and ubiquitous in both engineered and biological systems but can aggravate control performance and cause instability. We analyze the properties of autocatalysis in the universal and well-studied glycolytic pathway. A simple two-state model incorporating ATP autocatalysis and inhibitory feedback control captures the essential dynamics, including limit cycle oscillations, observed experimentally. System performance is limited by the inherent autocatalytic stoichiometry, and higher levels of autocatalysis degrade stability and performance. We show that glycolytic oscillations are not merely a "frozen accident" but a result of the intrinsic stability tradeoffs emerging from the autocatalytic mechanism. This model has pedagogical value and appears to be the simplest and most complete illustration yet of Bode's integral formula.https://resolver.caltech.edu/CaltechAUTHORS:20100507-144031986On the graph of trees
https://resolver.caltech.edu/CaltechAUTHORS:20190226-100827411
Year: 2009
DOI: 10.1109/CCA.2009.5281136
We consider an "n-graph of trees" whose nodes are the set of trees of fixed order n, and in which two nodes are adjacent if one tree can be derived from the other through a single application of a local edge transformation rule. We derive an exact formula for the length of the shortest path from any node to any "canonical" node in the n-graph of trees. We use this result to derive upper and lower bounds on the diameter of the n-graph of trees. We then propose a coordinate system that is convenient for studying the structure of the n-graph of trees, and in which trees having the same degree sequence are projected onto a single point.https://resolver.caltech.edu/CaltechAUTHORS:20190226-100827411Solving large-scale linear circuit problems via convex optimization
https://resolver.caltech.edu/CaltechAUTHORS:20190226-085922106
Year: 2009
DOI: 10.1109/cdc.2009.5400690
A broad class of problems in circuits, electromagnetics, and optics can be expressed as finding some parameters of a linear system with a specific type. This paper is concerned with studying this type of circuit using the available control techniques. It is shown that the underlying problem can be recast as a rank minimization problem that is NP-hard in general. In order to circumvent this difficulty, the circuit problem is slightly modified so that the resulting optimization becomes convex. This interesting result is achieved at the cost of complicating the structure of the circuit, which introduces a trade-off between the design simplicity and the implementation complexity. When it is strictly required to solve the original circuit problem, the elegant structure of the proposed rank minimization problem allows for employing a celebrated heuristic method to solve it efficiently.https://resolver.caltech.edu/CaltechAUTHORS:20190226-085922106Congestion control algorithms from optimal control perspective
https://resolver.caltech.edu/CaltechAUTHORS:20170810-133936531
Year: 2009
DOI: 10.1109/CDC.2009.5399554
This paper is concerned with understanding the connection between the existing Internet congestion control algorithms and optimal control theory. The available resource allocation controllers are mainly devised to drive the state of the system to a desired equilibrium point and, therefore, they are oblivious to the transient behavior of the closed-loop system. This work aims to investigate what cost functionals the existing algorithms minimize (or maximize). In particular, it is shown that there exist meaningful cost functionals whose minimization leads to the celebrated primal and dual congestion algorithms. An implication of this result is that a real network problem may be solved by regarding it as an optimal control problem on which some practical constraints, such as a real-time link capacity constraint, are imposed.https://resolver.caltech.edu/CaltechAUTHORS:20170810-133936531File Fragmentation over an Unreliable Channel
https://resolver.caltech.edu/CaltechAUTHORS:20110401-160938362
Year: 2010
DOI: 10.1109/INFCOM.2010.5461953
It has been recently discovered that heavy-tailed file completion time can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the RESTART feature where if a file transfer is interrupted before it is completed, the transfer needs to restart from the beginning. In this paper, we show that independent or bounded fragmentation guarantees light-tailed file completion time as long as the file size is light-tailed, i.e., in this case, heavy-tailed file completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the file completion time is necessarily heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation, the completion time tail distribution function is asymptotically upper bounded by that of the original file size stretched by a constant factor. We then prove that if the failure distribution has non-decreasing failure rate, the expected completion time is minimized by dividing the file into equal sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy where the fragment sizes are constant and independent of the file size and prove that it is asymptotically optimal. Finally, we bound the error in expected completion time due to error in modeling of the failure process.https://resolver.caltech.edu/CaltechAUTHORS:20110401-160938362Quantitative Nonlinear Analysis of Autocatalytic Pathways with Applications to Glycolysis
https://resolver.caltech.edu/CaltechAUTHORS:20110412-112151713
Year: 2010
Autocatalytic pathways are frequently encountered in biological networks. One such pathway, the glycolytic pathway, is of special importance and has been studied extensively. Using tools from linear systems theory, our previous work on a simple two-dimensional model of glycolysis demonstrated that autocatalysis can aggravate control performance and contribute to instability. Here, we expand this work and study properties of nonlinear autocatalytic pathway models (of which glycolysis is an example). Changes in the concentration of metabolites and catalyzing enzymes during the lifetime of the cell can perturb the system from the nominal operating point of the pathway. We investigate effects of such perturbations through the estimation of invariant subsets of the region of attraction around nominal operating conditions (i.e., a measure of the set of perturbations from which the cell recovers). Numerical experiments demonstrate that systems that are robust with respect to perturbations in parameter space have easily "verifiable" region of attraction properties in terms of proof complexity.https://resolver.caltech.edu/CaltechAUTHORS:20110412-112151713Compositional analysis of autocatalytic networks in biology
https://resolver.caltech.edu/CaltechAUTHORS:20110412-111554044
Year: 2010
Autocatalytic pathways are a necessary part of core metabolism. Every cell consumes external food/resources to create components and energy, but does so using processes that also require those same components and energy. Here, we study effects of parameter variations on the stability properties of autocatalytic pathway models and the extent of the regions of attraction around nominal operating conditions. Motivated by the computational complexity of optimization-based methods for estimating regions of attraction for large pathways, we take a compositional approach and exploit a natural decomposition of the system, induced by the underlying biological structure, into a feedback interconnection of two input-output subsystems: a small subsystem with complicating nonlinearities and a large subsystem with simple dynamics. This decomposition simplifies the analysis of large pathways by assembling region of attraction certificates based on the input-output properties of the subsystems. It enables us to numerically construct block-diagonal Lyapunov functions for families of pathways that are not amenable to direct analysis. Furthermore, it leads to analytical construction of Lyapunov functions for a large family of autocatalytic pathways.https://resolver.caltech.edu/CaltechAUTHORS:20110412-111554044A streamwise constant model of turbulence in plane Couette flow
https://resolver.caltech.edu/CaltechAUTHORS:20110131-095901289
Year: 2010
DOI: 10.1017/S0022112010003861
Streamwise and quasi-streamwise elongated structures have been shown to play a significant role in turbulent shear flows. We model the mean behaviour of fully turbulent plane Couette flow using a streamwise constant projection of the Navier–Stokes equations. This results in a two-dimensional three-velocity-component (2D/3C) model. We first use a steady-state version of the model to demonstrate that its nonlinear coupling provides the mathematical mechanism that shapes the turbulent velocity profile. Simulations of the 2D/3C model under small-amplitude Gaussian forcing of the cross-stream components are compared to direct numerical simulation (DNS) data. The results indicate that a streamwise constant projection of the Navier–Stokes equations captures salient features of fully turbulent plane Couette flow at low Reynolds numbers. A systems-theoretic approach is used to demonstrate the presence of large input–output amplification through the forced 2D/3C model. It is this amplification coupled with the appropriate nonlinearity that enables the 2D/3C model to generate turbulent behaviour under the small-amplitude forcing employed in this study.https://resolver.caltech.edu/CaltechAUTHORS:20110131-095901289Finding globally optimum solutions in antenna optimization problems
https://resolver.caltech.edu/CaltechAUTHORS:20110425-110624129
Year: 2010
DOI: 10.1109/APS.2010.5561993
During the last decade, the unprecedented increase in affordable computational power has strongly supported the development of optimization techniques for designing antennas; among these are genetic algorithms [1] and particle swarm optimization [2]. Most of these techniques use physical dimensions of an antenna as the optimization variables, and require solving Maxwell's equations (numerically) at each optimization step. They are usually slow, unable to handle a large number of variables, and incapable of finding the globally optimum solutions. In this paper, we propose an antenna optimization technique that is orders of magnitude faster than the conventional schemes, can handle thousands of variables, and finds the globally optimum solutions for a broad range of antenna optimization problems. In the proposed scheme, termination impedances embedded on an antenna structure are used as the optimization variables. This is particularly useful in designing on-chip smart antennas, where thousands of transistors and variable passive elements can be employed to reconfigure an antenna. By varying these parasitic impedances, an antenna can vary its gain, bandwidth, pattern, and efficiency. The goal of this paper is to provide a systematic, numerically efficient approach for finding globally optimum solutions in designing smart antennas.https://resolver.caltech.edu/CaltechAUTHORS:20110425-110624129Utility Functionals Associated With Available Congestion Control Algorithms
https://resolver.caltech.edu/CaltechAUTHORS:20110406-104430537
Year: 2010
DOI: 10.1109/INFCOM.2010.5462103
This paper is concerned with understanding the connection between the existing Internet congestion control algorithms and optimal control theory. The available resource allocation controllers are mainly devised to drive the state of the system to a desired equilibrium point and, therefore, they are oblivious to the transient behavior of the closed-loop system. To take into account the real-time performance of the system, rather than merely its steady-state performance, the congestion control problem should be solved by maximizing a proper utility functional as opposed to a utility function. For this reason, this work aims to investigate what utility functionals the existing congestion control algorithms maximize. In particular, it is shown that there exist meaningful utility functionals whose maximization leads to the celebrated primal, dual and primal/dual algorithms. An implication of this result is that a real network problem may be solved by regarding it as an optimal control problem on which some practical constraints, such as a real-time link capacity constraint, are imposed.https://resolver.caltech.edu/CaltechAUTHORS:20110406-104430537Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures
https://resolver.caltech.edu/CaltechAUTHORS:20100709-151451379
Year: 2010
DOI: 10.1109/TSMCA.2010.2048027
There exists a widely recognized need to better understand and manage complex "systems of systems," ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms to create robustness. However, this complexity itself can be a source of new fragility, leading to "robust yet fragile" tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.https://resolver.caltech.edu/CaltechAUTHORS:20100709-151451379A Study of Near-Field Direct Antenna Modulation Systems Using Convex Optimization
https://resolver.caltech.edu/CaltechAUTHORS:20110412-144235909
Year: 2010
This paper studies the constellation diagram design for a class of communication systems known as near-field direct antenna modulation (NFDAM) systems. The modulation is carried out in an NFDAM system by means of a control unit that switches among a number of pre-designed passive controllers such that each controller generates a desired voltage signal at the far field. To find an optimal number of signals that can be transmitted and demodulated reliably in an NFDAM system, the coverage area of the signal at the far field should be identified. It is shown that this coverage area is a planar convex region in general and simply a circle in the case when no constraints are imposed on the input impedance of the antenna and the voltage received at the far field. A convex optimization method is then proposed to find a polygon that is able to approximate the coverage area of the signal constellation diagram satisfactorily. A similar analysis is provided for the identification of the coverage area of the antenna input impedance, which is beneficial for designing an energy-efficient NFDAM system.https://resolver.caltech.edu/CaltechAUTHORS:20110412-144235909Random Access Game and Medium Access Control Design
https://resolver.caltech.edu/CaltechAUTHORS:20100907-110446104
Year: 2010
DOI: 10.1109/TNET.2010.2041066
Motivated partially by a control-theoretic viewpoint, we propose a game-theoretic model, called random access game, for contention control. We characterize Nash equilibria of random access games, study their dynamics, and propose distributed algorithms (strategy evolutions) to achieve Nash equilibria. This provides a general analytical framework that is capable of modeling a large class of system-wide quality-of-service (QoS) models via the specification of per-node utility functions, in which system-wide fairness or service differentiation can be achieved in a distributed manner as long as each node executes a contention resolution algorithm that is designed to achieve the Nash equilibrium. We thus propose a novel medium access method derived from carrier sense multiple access/collision avoidance (CSMA/CA) according to a distributed strategy update mechanism that achieves the Nash equilibrium of the random access game. We present a concrete medium access method that adapts to a continuous contention measure called conditional collision probability, stabilizes the network into a steady state that achieves optimal throughput with targeted fairness (or service differentiation), and can decouple contention control from handling failed transmissions. In addition to guiding medium access control design, the random access game model also provides an analytical framework to understand equilibrium and dynamic properties of different medium access protocols.https://resolver.caltech.edu/CaltechAUTHORS:20100907-110446104Two Market Models for Demand Response in Power Networks
https://resolver.caltech.edu/CaltechAUTHORS:20170810-104228265
Year: 2010
DOI: 10.1109/SMARTGRID.2010.5622076
In this paper, we consider two abstract market models for designing demand response to match power supply and shape power demand, respectively. We characterize the resulting equilibria in competitive as well as oligopolistic markets, and propose distributed demand response algorithms to achieve the equilibria. The models serve as a starting point to include the appliance-level details and constraints for designing practical demand response schemes for smart power grids.https://resolver.caltech.edu/CaltechAUTHORS:20170810-104228265Performance limitations in autocatalytic networks in biology
https://resolver.caltech.edu/CaltechAUTHORS:20190226-081811238
Year: 2010
DOI: 10.1109/cdc.2010.5717362
Autocatalytic networks, where a member can stimulate its own production, can be unstable when not controlled by feedback. Even when such networks are stabilized by regulating control feedbacks, they tend to exhibit non-minimum phase behavior. In this paper, we study the hard limits of the ideal performance of such networks and the hard limit of their minimum output energy. We consider a simplified model of glycolysis as our motivating example. For the glycolysis model, we characterize hard limits on the minimum output energy by analyzing the limiting behavior of the optimal cheap control problem for two different interconnection topologies. We show that some network interconnection topologies result in zero hard limits. Then, we develop necessary tools and concepts to extend our results to a general class of autocatalytic networks.https://resolver.caltech.edu/CaltechAUTHORS:20190226-081811238Topological tradeoffs in autocatalytic metabolic pathways
https://resolver.caltech.edu/CaltechAUTHORS:20190226-072341107
Year: 2010
DOI: 10.1109/CDC.2010.5717490
Metabolic pathways in cells convert external food and resources into useful cell components and energy. In many cases the cell employs product inhibition to regulate and control these pathways. We investigate the performance of such regulation and control on certain autocatalytic pathways. Specifically, we examine how well the pathways can maintain the desired output concentrations in the presence of disturbances, such as perturbations in resources, enzyme concentrations and product demand. Using control theoretic tools, we show the effects of the pathway size, the reversibility of the intermediate reactions and the coupling of pathways through the consumption of intermediate metabolites on performance. In addition, we establish some necessary conditions on the existence of fixed points and their stability for such pathways.https://resolver.caltech.edu/CaltechAUTHORS:20190226-072341107Passively Controllable Smart Antennas
https://resolver.caltech.edu/CaltechAUTHORS:20110401-112525230
Year: 2010
DOI: 10.1109/GLOCOM.2010.5684358
We recently introduced passively controllable smart (PCS) antenna systems for efficient wireless transmission, with direct applications in wireless sensor networks. A PCS antenna system is accompanied by a tunable passive controller whose adjustment at every signal transmission generates a specific radiation pattern. To reduce co-channel interference and optimize the transmitted power, this antenna can be programmed to transmit data in a desired direction in such a way that no signal is transmitted (to the far field) at pre-specified undesired directions. The controller of a PCS antenna was assumed to be centralized in our previous work, which was an impediment to its implementation. In this work, we study the design of PCS antenna systems under decentralized controllers, which are both practically implementable and cost efficient. The PCS antenna proposed here is made of one active element and its programming needs solving second-order-cone optimizations. These properties differentiate a PCS antenna from the existing smart antennas, and make it possible to implement a PCS antenna on a small-sized, low-power silicon chip.https://resolver.caltech.edu/CaltechAUTHORS:20110401-112525230On Lossless Approximations, the Fluctuation-Dissipation Theorem, and Limitations of Measurements
https://resolver.caltech.edu/CaltechAUTHORS:20110309-120503093
Year: 2011
DOI: 10.1109/TAC.2010.2056450
In this paper, we take a control-theoretic approach to answering some standard questions in statistical mechanics, and use the results to derive limitations of classical measurements. A central problem is the relation between systems which appear macroscopically dissipative but are microscopically lossless. We show that a linear system is dissipative if, and only if, it can be approximated by a linear lossless system over arbitrarily long time intervals. Hence lossless systems are in this sense dense in dissipative systems. A linear active system can be approximated by a nonlinear lossless system that is charged with initial energy. As a by-product, we obtain mechanisms explaining the Onsager relations from time-reversible lossless approximations, and the fluctuation-dissipation theorem from uncertainty in the initial state of the lossless system. The results are applied to measurement devices and are used to quantify limits on the so-called observer effect, also called back action, which is the impact the measurement device has on the observed system. In particular, it is shown that deterministic back action can be compensated by using active elements, whereas stochastic back action is unavoidable and depends on the temperature of the measurement device.https://resolver.caltech.edu/CaltechAUTHORS:20110309-120503093Solving Large-Scale Hybrid Circuit-Antenna Problems
https://resolver.caltech.edu/CaltechAUTHORS:20110310-133536770
Year: 2011
DOI: 10.1109/TCSI.2010.2072010
Motivated by different applications in circuits, electromagnetics, and optics, this paper is concerned with the synthesis of a particular type of linear circuit, where the circuit is associated with a control unit. The objective is to design a controller for this control unit such that certain specifications on the parameters of the circuit are satisfied. It is shown that designing a control unit in the form of a switching network is an NP-complete problem that can be formulated as a rank-minimization problem. It is then proven that the underlying design problem can be cast as a semidefinite optimization if a passive network is designed instead of a switching network. Since the implementation of a passive network may need too many components, the design of a decoupled (sparse) passive network is subsequently studied. This paper introduces a tradeoff between design simplicity and implementation complexity for an important class of linear circuits. The superiority of the developed techniques is demonstrated by different simulations. In particular, for the first time in the literature, a wavelength-size passive antenna is designed, which has an excellent beamforming capability and which can concurrently make a null in at least eight directions.https://resolver.caltech.edu/CaltechAUTHORS:20110310-133536770Cross-layer design in multihop wireless networks
https://resolver.caltech.edu/CaltechAUTHORS:20110315-091124013
Year: 2011
DOI: 10.1016/j.comnet.2010.09.005
In this paper, we take a holistic approach to the protocol architecture design in multihop wireless networks. Our goal is to integrate various protocol layers into a rigorous framework, by regarding them as distributed computations over the network to solve some optimization problem. Different layers carry out distributed computation on different subsets of the decision variables using local information to achieve individual optimality. Taken together, these local algorithms (with respect to different layers) achieve a global optimality. Our current theory integrates three functions—congestion control, routing and scheduling—in transport, network and link layers into a coherent framework. These three functions interact through and are regulated by congestion price so as to achieve a global optimality, even in a time-varying environment. Within this context, this model allows us to systematically derive the layering structure of the various mechanisms of different protocol layers, their interfaces, and the control information that must cross these interfaces to achieve a certain performance and robustness.https://resolver.caltech.edu/CaltechAUTHORS:20110315-091124013Effect of buffers on stability of Internet congestion controllers
https://resolver.caltech.edu/CaltechAUTHORS:20120403-131857340
Year: 2011
Almost all existing fluid models of congestion control assume that the fluid flow at the output of a link is the same as the fluid flow at the input of the link. This means that all links in the path of a flow see the original source rate. In reality, a fluid flow is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate. In this paper, we propose a simple model that explicitly takes into account the effect of buffering on output flows. We study the dual and primal-dual algorithms that use implicit feedback and show that, while they are always asymptotically stable if feedback delay is ignored, they can be unstable in the new model.https://resolver.caltech.edu/CaltechAUTHORS:20120403-131857340Amplification and nonlinear mechanisms in plane Couette flow
https://resolver.caltech.edu/CaltechAUTHORS:20110722-112408053
Year: 2011
DOI: 10.1063/1.3599701
We study the input-output response of a streamwise constant projection of the Navier-Stokes equations for plane Couette flow, the so-called 2D/3C model. Study of a streamwise constant model is motivated by numerical and experimental observations that suggest the prevalence and importance of streamwise and quasi-streamwise elongated structures. Periodic spanwise/wall-normal (z–y) plane stream functions are used as input to develop a forced 2D/3C streamwise velocity field that is qualitatively similar to a fully turbulent spatial field of direct numerical simulation data. The input-output response associated with the 2D/3C nonlinear coupling is used to estimate the energy optimal spanwise wavelength over a range of Reynolds numbers. The results of the input-output analysis agree with previous studies of the linearized Navier-Stokes equations. The optimal energy corresponds to minimal nonlinear coupling. On the other hand, the nature of the forced 2D/3C streamwise velocity field provides evidence that the nonlinear coupling in the 2D/3C model is responsible for creating the well known characteristic "S" shaped turbulent velocity profile. This indicates that there is an important tradeoff between energy amplification, which is primarily linear, and the seemingly nonlinear momentum transfer mechanism that produces a turbulent-like mean profile.https://resolver.caltech.edu/CaltechAUTHORS:20110722-112408053Analysis of autocatalytic networks in biology
https://resolver.caltech.edu/CaltechAUTHORS:20110624-104150056
Year: 2011
DOI: 10.1016/j.automatica.2011.02.040
Autocatalytic networks, in particular the glycolytic pathway, constitute an important part of the cell metabolism. Changes in the concentration of metabolites and catalyzing enzymes during the lifetime of the cell can lead to perturbations from its nominal operating condition. We investigate the effects of such perturbations on stability properties, e.g., the extent of regions of attraction, of a particular family of autocatalytic network models. Numerical experiments demonstrate that systems that are robust with respect to perturbations in the parameter space have easily "verifiable" (in terms of proof complexity) region of attraction properties. Motivated by the computational complexity of optimization-based formulations, we take a compositional approach and exploit a natural decomposition of the system, induced by the underlying biological structure, into a feedback interconnection of two input–output subsystems: a small subsystem with complicating nonlinearities and a large subsystem with simple dynamics. This decomposition simplifies the analysis of large pathways by assembling region of attraction certificates based on the input–output properties of the subsystems. It enables numerical as well as analytical construction of block-diagonal Lyapunov functions for a large family of autocatalytic pathways.https://resolver.caltech.edu/CaltechAUTHORS:20110624-104150056Glycolytic Oscillations and Limits on Robust Efficiency
https://resolver.caltech.edu/CaltechAUTHORS:20110712-150344607
Year: 2011
DOI: 10.1126/science.1200705
Both engineering and evolution are constrained by trade-offs between efficiency and robustness, but theory that formalizes this fact is limited. For a simple two-state model of glycolysis, we explicitly derive analytic equations for hard trade-offs between robustness and efficiency with oscillations as an inevitable side effect. The model describes how the trade-offs arise from individual parameters, including the interplay of feedback control with autocatalysis of network products necessary to power and catalyze intermediate reactions. We then use control theory to prove that the essential features of these hard trade-off "laws" are universal and fundamental, in that they depend minimally on the details of this system and generalize to the robust efficiency of any autocatalytic network. The theory also suggests worst-case conditions that are consistent with initial experiments.https://resolver.caltech.edu/CaltechAUTHORS:20110712-150344607Robustness, Optimization, and Architectures
https://resolver.caltech.edu/CaltechAUTHORS:20120123-110052341
Year: 2011
DOI: 10.3166/ejc.17.472-482
This paper will review recent progress on developing a unified theory for complex networks from biological systems and physics to engineering and technology. Insights into what the potential universal laws, architecture, and organizational principles are can be drawn from three converging research themes: growing attention to complexity and robustness in systems biology, layering and organization in network technology, and new mathematical frameworks for the study of complex networks. We will illustrate how tools in robust control theory and optimization can be integrated towards such unified theory by focusing on their applications in biology, physics, network design, and electric grid.https://resolver.caltech.edu/CaltechAUTHORS:20120123-110052341Architecture, constraints, and behavior
https://resolver.caltech.edu/CaltechAUTHORS:20111003-134717951
Year: 2011
DOI: 10.1073/pnas.1103557108
PMCID: PMC3176601
This paper aims to bridge progress in neuroscience involving sophisticated quantitative analysis of behavior, including the use of robust control, with other relevant conceptual and theoretical frameworks from systems engineering, systems biology, and mathematics. Familiar and accessible case studies are used to illustrate concepts of robustness, organization, and architecture (modularity and protocols) that are central to understanding complex networks. These essential organizational features are hidden during normal function of a system but are fundamental for understanding the nature, design, and function of complex biologic and technologic systems.https://resolver.caltech.edu/CaltechAUTHORS:20111003-134717951On the structure of state-feedback LQG controllers for distributed systems with communication delays
https://resolver.caltech.edu/CaltechAUTHORS:20190220-111649177
Year: 2011
DOI: 10.1109/CDC.2011.6160767
This paper presents explicit solutions for a few distributed LQG problems in which players communicate their states with delays. The resulting control structure is reminiscent of a simple management hierarchy, in which a top level input is modified by newer, more localized information as it gets passed down the chain of command. It is hoped that the controller forms arising through optimization may lend insight into the control strategies of biological and social systems with communication delays.https://resolver.caltech.edu/CaltechAUTHORS:20190220-111649177The magnitude distribution of earthquakes near Southern California faults
https://resolver.caltech.edu/CaltechAUTHORS:20120123-130627739
Year: 2011
DOI: 10.1029/2010JB007933
We investigate seismicity near faults in the Southern California Earthquake Center Community Fault Model. We search for anomalously large events that might be signs of a characteristic earthquake distribution. We find that seismicity near major fault zones in Southern California is well modeled by a Gutenberg-Richter distribution, with no evidence of characteristic earthquakes within the resolution limits of the modern instrumental catalog. However, the b value of the locally observed magnitude distribution is found to depend on distance to the nearest mapped fault segment, which suggests that earthquakes nucleating near major faults are likely to have larger magnitudes relative to earthquakes nucleating far from major faults.https://resolver.caltech.edu/CaltechAUTHORS:20120123-130627739Sepsis: Something old, something new, and a systems view
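The Gutenberg-Richter b value analyzed in the preceding entry is commonly estimated with Aki's maximum-likelihood formula, b = log10(e) / (mean(M) − Mmin). A minimal sketch on synthetic magnitudes (the catalog here is illustrative only, not the Southern California data used in the paper):

```python
import math
import random

def b_value_mle(mags, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b value from magnitudes at or above the completeness threshold m_min."""
    sample = [m for m in mags if m >= m_min]
    return math.log10(math.e) / (sum(sample) / len(sample) - m_min)

# Synthetic catalog: under a Gutenberg-Richter law with b = 1, magnitudes
# above the threshold are exponentially distributed with rate b * ln(10).
random.seed(0)
catalog = [2.0 + random.expovariate(math.log(10)) for _ in range(100_000)]
b_hat = b_value_mle(catalog, 2.0)  # should recover a value near 1.0
```

With a large synthetic catalog the estimate converges to the b value used to generate it; on real data the estimate's distance-to-fault dependence is exactly what the paper investigates.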
https://resolver.caltech.edu/CaltechAUTHORS:20120702-142216264
Year: 2012
DOI: 10.1016/j.jcrc.2011.05.025
Sepsis is a clinical syndrome characterized by a multisystem response to a microbial pathogenic insult consisting of a mosaic of interconnected biochemical, cellular, and organ-organ interaction networks. A central thread that connects these responses is inflammation that, while attempting to defend the body and prevent further harm, causes further damage through the feed-forward, proinflammatory effects of damage-associated molecular pattern molecules. In this review, we address the epidemiology and current definitions of sepsis and focus specifically on the biologic cascades that comprise the inflammatory response to sepsis. We suggest that attempts to improve clinical outcomes by targeting specific components of this network have been unsuccessful due to the lack of an integrative, predictive, and individualized systems-based approach to define the time-varying, multidimensional state of the patient. We highlight the translational impact of computational modeling and other complex systems approaches as applied to sepsis, including in silico clinical trials, patient-specific models, and complexity-based assessments of physiology.https://resolver.caltech.edu/CaltechAUTHORS:20120702-142216264Dynamic Programming Solutions for Decentralized State-Feedback LQG Problems with Communication Delays
https://resolver.caltech.edu/CaltechAUTHORS:20121003-154617690
Year: 2012
DOI: 10.1109/ACC.2012.6315282
This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation.https://resolver.caltech.edu/CaltechAUTHORS:20121003-154617690Congestion Control for Multicast Flows With Network Coding
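The single algebraic Riccati equation mentioned in the abstract above is the same object that appears in centralized LQR. As a minimal illustration of the kind of computation involved (not the paper's decentralized construction), here is a scalar discrete ARE solved by fixed-point iteration:

```python
def solve_dare_scalar(a, b, q, r, tol=1e-12):
    """Solve the scalar discrete algebraic Riccati equation
        p = a*p*a - (a*p*b)**2 / (r + b*p*b) + q
    by fixed-point (value) iteration."""
    p = q
    while True:
        p_next = a * p * a - (a * p * b) ** 2 / (r + b * p * b) + q
        if abs(p_next - p) < tol:
            return p_next
        p = p_next

def lqr_gain(a, b, q, r):
    """Optimal state-feedback gain k, so that u = -k*x minimizes the LQR cost."""
    p = solve_dare_scalar(a, b, q, r)
    return b * p * a / (r + b * p * b)

# For a = b = q = r = 1 the DARE fixed point is the golden ratio phi,
# and the resulting gain is 1/phi = (sqrt(5) - 1) / 2.
k = lqr_gain(1.0, 1.0, 1.0, 1.0)
```

In the matrix case one would use a dedicated solver (e.g., SciPy's `solve_discrete_are`); the scalar version just makes the structure of the equation visible.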
https://resolver.caltech.edu/CaltechAUTHORS:20121008-112913029
Year: 2012
DOI: 10.1109/TIT.2012.2204170
Recent advances in network coding have shown great potential for efficient information multicasting in communication networks, in terms of both network throughput and network management. In this paper, the problem of flow control at end-systems for network-coding-based multicast flows is addressed. Optimization-based models are formulated for network resource allocation, based on which two sets of decentralized controllers at sources and links/nodes for congestion control are developed for wired networks with given coding subgraphs and without given coding subgraphs, respectively. With random network coding, both sets of controllers can be implemented in a distributed manner, and work at the transport layer to adjust source rates and at network layer to carry out network coding. The convergence of the proposed controllers to the desired equilibrium operating points is proved, and numerical examples are provided to complement the theoretical analysis. The extension to wireless networks is also briefly discussed.https://resolver.caltech.edu/CaltechAUTHORS:20121008-112913029Dynamic programming solutions for decentralized state-feedback LQG problems with communication delays
https://resolver.caltech.edu/CaltechAUTHORS:20121009-110029523
Year: 2012
This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation.https://resolver.caltech.edu/CaltechAUTHORS:20121009-110029523A dual problem in H_2 decentralized control subject to delays
https://resolver.caltech.edu/CaltechAUTHORS:20131219-095138236
Year: 2013
It has been shown that the decentralized H_2 model matching problem subject to delay can be solved by decomposing the controller into a centralized, but delayed, component and a decentralized FIR component, the latter of which can be solved for via a linearly constrained quadratic program. In this paper, we derive the dual to this optimization problem, show that strong duality holds, and exploit this to further analyze properties of the control problem. Namely, we determine a priori upper and lower bounds on the optimal H_2 cost, and obtain further insight into the structure of the optimal FIR component. Furthermore, we show how the optimal dual variables can be used to inform communication graph augmentation, and illustrate this idea with a routing problem.https://resolver.caltech.edu/CaltechAUTHORS:20131219-095138236A heuristic for sub-optimal H_2 decentralized control subject to delay in non-quadratically-invariant systems
https://resolver.caltech.edu/CaltechAUTHORS:20131219-094717913
Year: 2013
Inspired by potential applications to the smart grid, we develop a heuristic for sub-optimal, but acceptable, control of decentralized systems subject to non-quadratically invariant (non-QI) delay patterns. We do so by exploiting a recently developed solution to the decentralized H_2 model matching problem subject to delays, which decomposes the controller into a centralized, but delayed, component and a decentralized FIR component. In particular, we present an iterative procedure that exploits this decomposition to design a sub-optimal decentralized H_2 controller for non-QI systems that is guaranteed a priori to be stable, and to perform no worse than a controller computed with respect to a QI subset of the non-QI constraint set. We then apply this procedure to a smart-grid frequency regulation problem.https://resolver.caltech.edu/CaltechAUTHORS:20131219-094717913Output feedback H_2 model matching for decentralized systems with delays
https://resolver.caltech.edu/CaltechAUTHORS:20190213-081737120
Year: 2013
DOI: 10.1109/ACC.2013.6580743
This paper gives a new solution to the output feedback H_2 model matching problem for a large class of delayed information sharing patterns. Existing methods for similar problems typically reduce the decentralized problem to a centralized problem of higher state dimension. In contrast, this paper demonstrates that the decentralized model matching solution can be constructed from the original centralized solution via quadratic programming.https://resolver.caltech.edu/CaltechAUTHORS:20190213-081737120A multiscale modeling approach to inflammation: A case study in human endotoxemia
https://resolver.caltech.edu/CaltechAUTHORS:20130628-075744899
Year: 2013
DOI: 10.1016/j.jcp.2012.09.024
Inflammation is a critical component in the body's response to injury. A dysregulated inflammatory response, in which either the injury is not repaired or the inflammatory response does not appropriately self-regulate and end, is associated with a wide range of inflammatory diseases such as sepsis. Clinical management of sepsis is a significant problem, but progress in this area has been slow. This may be due to the inherent nonlinearities and complexities in the interacting multiscale pathways that are activated in response to systemic inflammation, motivating the application of systems biology techniques to better understand the inflammatory response. Here, we review our past work on a multiscale modeling approach applied to human endotoxemia, a model of systemic inflammation, consisting of a system of compartmentalized differential equations operating at different time scales and through a discrete model linking inflammatory mediators with changing patterns in the beating of the heart, which has been correlated with outcome and severity of inflammatory disease despite unclear mechanistic underpinnings. Working towards unraveling the relationship between inflammation and heart rate variability (HRV) may enable greater understanding of clinical observations as well as novel therapeutic targets.https://resolver.caltech.edu/CaltechAUTHORS:20130628-075744899In Darwinian evolution, feedback from natural selection leads to biased mutations
https://resolver.caltech.edu/CaltechAUTHORS:20140203-132837058
Year: 2013
DOI: 10.1111/nyas.12235
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory.https://resolver.caltech.edu/CaltechAUTHORS:20140203-132837058Resilience in Large Scale Distributed Systems
https://resolver.caltech.edu/CaltechAUTHORS:20151002-161607634
Year: 2014
DOI: 10.1016/j.procs.2014.03.036
Distributed systems are comprised of multiple subsystems that interact in two distinct ways: (1) physical interactions and (2) cyber interactions; i.e. sensors, actuators and computers controlling these subsystems, and the network over which they communicate. A broad class of cyber-physical systems (CPS) are described by such interactions, such as the smart grid, platoons of autonomous vehicles and the sensorimotor system. This paper will survey recent progress in developing a coherent mathematical framework that describes the rich CPS "design space" of fundamental limits and tradeoffs between efficiency, robustness, adaptation, verification and scalability. Whereas most research treats at most one of these issues, we attempt a holistic approach in examining these metrics. In particular, we will argue that a control architecture that emphasizes scalability leads to improvements in robustness, adaptation, and verification, all the while having only minor effects on efficiency – i.e. through the choice of a new architecture, we believe that we are able to bring a system closer to the true fundamental hard limits of this complex design space.https://resolver.caltech.edu/CaltechAUTHORS:20151002-161607634Localized distributed state feedback control with communication delays
https://resolver.caltech.edu/CaltechAUTHORS:20150320-125909654
Year: 2014
DOI: 10.1109/ACC.2014.6859440
This paper introduces the notion of localizable distributed systems. These are systems for which a distributed controller exists that limits the effect of each disturbance to some local subset of the entire plant, akin to spatio-temporal dead-beat control. We characterize distributed systems for which a localizing state-feedback controller exists in terms of the feasibility of a set of linear equations. We then show that when a feasible solution exists, it can be found in a distributed way, and used for the localized synthesis and implementation of controllers that lead to the desired closed loop response. In particular, by allowing controllers to exchange both state and control actions, the information needed by a particular controller is limited to a local subset of the system's state and control inputs.https://resolver.caltech.edu/CaltechAUTHORS:20150320-125909654Robust efficiency and actuator saturation explain healthy heart rate control and variability
https://resolver.caltech.edu/CaltechAUTHORS:20140805-091023982
Year: 2014
DOI: 10.1073/pnas.1401883111
PMCID: PMC4143073
The correlation of healthy states with heart rate variability (HRV) using time series analyses is well documented. Whereas these studies note the accepted proximal role of autonomic nervous system balance in HRV patterns, the responsible deeper physiological, clinically relevant mechanisms have not been fully explained. Using mathematical tools from control theory, we combine mechanistic models of basic physiology with experimental exercise data from healthy human subjects to explain causal relationships among states of stress vs. health, HR control, and HRV, and more importantly, the physiologic requirements and constraints underlying these relationships. Nonlinear dynamics play an important explanatory role––most fundamentally in the actuator saturations arising from unavoidable tradeoffs in robust homeostasis and metabolic efficiency. These results are grounded in domain-specific mechanisms, tradeoffs, and constraints, but they also illustrate important, universal properties of complex systems. We show that the study of complex biological phenomena like HRV requires a framework which facilitates inclusion of diverse domain specifics (e.g., due to physiology, evolution, and measurement technology) in addition to general theories of efficiency, robustness, feedback, dynamics, and supporting mathematical tools.https://resolver.caltech.edu/CaltechAUTHORS:20140805-091023982Study of the brain functional network using synthetic data
https://resolver.caltech.edu/CaltechAUTHORS:20190212-083440699
Year: 2014
DOI: 10.1109/ALLERTON.2014.7028476
The brain functional connectivity is usually assessed with the correlation coefficients of certain signals. The partial correlation matrix can reveal direct interactions between brain regions. However, computing this matrix is usually challenging due to the availability of only a limited number of samples. As an alternative, thresholding the sample correlation matrix is a common technique for the identification of the direct interactions. In this work, we investigate the performance of this method in addition to some other well-known techniques, namely graphical lasso and Chow-Liu algorithm. Our analysis is performed on some synthetic data produced by an electrical circuit model with certain structural properties. We show that the simple method of thresholding the correlation matrix and the graphical lasso algorithm would both create false positives and negatives that wrongly imply some network properties such as small-worldness. We also apply these techniques to some resting-state functional MRI (fMRI) data and show that similar observations can be made.https://resolver.caltech.edu/CaltechAUTHORS:20190212-083440699The mathematician's control toolbox for management of type 1 diabetes
https://resolver.caltech.edu/CaltechAUTHORS:20141211-084533887
Year: 2014
DOI: 10.1098/rsfs.2014.0042
PMCID: PMC4142019
Blood glucose levels are controlled by well-known physiological feedback loops: high glucose levels promote insulin release from the pancreas, which in turn stimulates cellular glucose uptake. Low blood glucose levels promote pancreatic glucagon release, stimulating glycogen breakdown to glucose in the liver. In healthy people, this control system is remarkably good at maintaining blood glucose in a tight range despite many perturbations to the system imposed by diet and fasting, exercise, medications and other stressors. Type 1 diabetes mellitus (T1DM) results from loss of the insulin-producing cells of the pancreas, the beta cells. These cells serve as both sensor (of glucose levels) and actuator (insulin/glucagon release) in a control physiological feedback loop. Although the idea of rebuilding this feedback loop seems intuitively easy, considerable control mathematics involving multiple types of control schema were necessary to develop an artificial pancreas that still does not function as well as evolved control mechanisms. Here, we highlight some tools from control engineering used to mimic normal glucose control in an artificial pancreas, and the constraints, trade-offs and clinical consequences inherent in various types of control schemes. T1DM can be viewed as a loss of normal physiologic controls, as can many other disease states. For this reason, we introduce basic concepts of control engineering applicable to understanding pathophysiology of disease and development of physiologically based control strategies for treatment.https://resolver.caltech.edu/CaltechAUTHORS:20141211-084533887Localized LQR optimal control
https://resolver.caltech.edu/CaltechAUTHORS:20190212-073103646
Year: 2014
DOI: 10.1109/CDC.2014.7039638
This paper introduces a receding horizon like control scheme for localizable distributed systems, in which the effect of each local disturbance is limited spatially and temporally. We characterize such systems by a set of linear equality constraints, and show that the resulting feasibility test can be solved in a localized and distributed way. We also show that the solution of the local feasibility tests can be used to synthesize a receding horizon like controller that achieves the desired closed loop response in a localized manner as well. Finally, we formulate the Localized LQR (LLQR) optimal control problem and derive an analytic solution for the optimal controller. Through a numerical example, we show that the LLQR optimal controller, with its constraints on locality, settling time, and communication delay, can achieve similar performance as an unconstrained ℋ_2 optimal controller, but can be designed and implemented in a localized and distributed way.https://resolver.caltech.edu/CaltechAUTHORS:20190212-073103646Buffering Dynamics and Stability of Internet Congestion Controllers
https://resolver.caltech.edu/CaltechAUTHORS:20150109-091116665
Year: 2014
DOI: 10.1109/TNET.2013.2287198
Many existing fluid-flow models of the Internet congestion control algorithms make simplifying assumptions on the effects of buffers on the data flows. In particular, they assume that the flow rate of a TCP flow at every link in its path is equal to the original source rate. However, a fluid flow in practice is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate. In this paper, a more accurate model is derived for the behavior of the network under a congestion controller, which takes into account the effect of buffering on output flows. It is shown how this model can be deployed for some well-known service disciplines such as first-in–first-out and generalized weighted fair queueing. Based on the derived model, the dual and primal-dual algorithms are studied under the common pricing mechanisms, and it is shown that these algorithms can become unstable. Sufficient conditions are provided to guarantee the stability of the dual and primal-dual algorithms. Finally, a new pricing mechanism is proposed under which these congestion control algorithms are both stable.https://resolver.caltech.edu/CaltechAUTHORS:20150109-091116665A Case Study in Network Architecture Tradeoffs
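For context, the classic fluid model that the preceding entry refines can be sketched in a few lines: a one-link dual algorithm in which the link prices excess demand and a log-utility source sets its rate from the price. This baseline deliberately omits the buffering dynamics the paper analyzes; all parameters are illustrative.

```python
def dual_congestion_control(capacity, gamma=0.1, steps=5000):
    """One-link dual algorithm (Kelly/Low-style fluid model): the link runs
    a gradient step on its congestion price, and a source with utility
    log(x) responds with its optimal rate x = 1/price."""
    price = 1.0
    rate = 1.0
    for _ in range(steps):
        rate = 1.0 / price                                    # source's best response
        price = max(1e-9, price + gamma * (rate - capacity))  # link price update
    return rate

# At equilibrium the source rate matches the link capacity.
rate = dual_congestion_control(capacity=2.0)
```

The paper's point is that once queueing modifies the flow seen downstream, this clean convergence can fail unless the pricing mechanism is chosen carefully.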
https://resolver.caltech.edu/CaltechAUTHORS:20150715-140940147
Year: 2015
DOI: 10.1145/2774993.2775011
Software defined networking (SDN) establishes a separation between the control plane and the data plane, allowing network intelligence and state to be centralized -- in this way the underlying network infrastructure is hidden from the applications. This is in stark contrast to existing distributed networking architectures, in which the control and data planes are vertically combined, and network intelligence and state, as well as applications, are distributed throughout the network. It is also conceivable that some elements of network functionality be implemented in a centralized manner via SDN, and that other components be implemented in a distributed manner. Further, distributed implementations can have varying levels of decentralization, ranging from myopic (in which local algorithms use only local information) to coordinated (in which local algorithms use both local and shared information). In this way, myopic distributed architectures and fully centralized architectures lie at the two extremes of a broader hybrid software defined networking (HySDN) design space.
Using admission control as a case study, we leverage recent developments in distributed optimal control to provide network designers with tools to quantitatively compare different architectures, allowing them to explore the relevant HySDN design space in a principled manner. In particular, we assume that routing is done at a slower timescale, and seek to stabilize the network around a desirable operating point despite physical communication delays imposed by the network and rapidly varying traffic demand. We show that there exist scenarios for which one architecture allows for fundamentally better performance than another, thus highlighting the usefulness of the approach proposed in this paper.https://resolver.caltech.edu/CaltechAUTHORS:20150715-140940147The ℋ_2 Control Problem for Quadratically Invariant Systems With Delays
https://resolver.caltech.edu/CaltechAUTHORS:20150716-123217938
Year: 2015
DOI: 10.1109/TAC.2014.2363917
This technical note gives a new solution to the output feedback ℋ_2 problem for quadratically invariant communication delay patterns. A characterization of all stabilizing controllers satisfying the delay constraints is given and the decentralized ℋ_2 problem is cast as a convex model matching problem. The main result shows that the model matching problem can be reduced to a finite-dimensional quadratic program. A recursive state-space method for computing the optimal controller based on vectorization is given.https://resolver.caltech.edu/CaltechAUTHORS:20150716-123217938Hard limits on robust control over delayed and quantized communication channels with applications to sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20160216-132314659
Year: 2015
DOI: 10.1109/CDC.2015.7403407
The modern view of the nervous system as layering distributed computation and communication for the purpose of sensorimotor control and homeostasis has much experimental evidence but little theoretical foundation, leaving unresolved the connection between diverse components and complex behavior. As a simple starting point, we address a fundamental tradeoff when robust control is done using communication with both delay and quantization error, which are both extremely heterogeneous and highly constrained in human and animal nervous systems. This yields surprisingly simple and tight analytic bounds with clear interpretations and insights regarding hard tradeoffs, optimal coding and control strategies, and their relationship with well known physiology and behavior. These results are similar to reasoning routinely used informally by experimentalists to explain their findings, but very different from those based on information theory and statistical physics (which have dominated theoretical neuroscience). The simple analytic results and their proofs extend to more general models at the expense of less insight and nontrivial (but still scalable) computation. They are also relevant, though less dramatically, to certain cyber-physical systems.https://resolver.caltech.edu/CaltechAUTHORS:20160216-132314659Primal robustness and semidefinite cones
https://resolver.caltech.edu/CaltechAUTHORS:20160217-091635251
Year: 2015
DOI: 10.1109/CDC.2015.7403199
This paper reformulates and streamlines the core tools of robust stability and performance for LTI systems using now-standard methods in convex optimization. In particular, robustness analysis can be formulated directly as a primal convex (semidefinite program, or SDP) optimization problem using sets of Gramians whose closure is a semidefinite cone. This allows various constraints such as structured uncertainty to be included directly, and worst-case disturbances and perturbations to be constructed directly from the primal variables. Well-known results such as the KYP lemma and various scaled small gain tests can also be obtained directly through standard SDP duality. To readers familiar with robustness and SDPs, the framework should appear obvious, if only in retrospect. But this is also part of its appeal: it should enhance pedagogy and, we hope, suggest new research. There is a key lemma proving closure of a Gramian that is also obvious, but our current proof appears unnecessarily cumbersome, and a final aim of this paper is to enlist the help of experts in robust control and convex optimization in finding simpler alternatives.https://resolver.caltech.edu/CaltechAUTHORS:20160217-091635251On Channel Failures, File Fragmentation Policies, and Heavy-Tailed Completion Times
https://resolver.caltech.edu/CaltechAUTHORS:20160317-122351101
Year: 2016
DOI: 10.1109/TNET.2014.2375920
It has been recently discovered that heavy-tailed completion times can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the use of a restart policy where if the file is interrupted before it is completed, it needs to restart from the beginning. In this paper, we show that fragmenting a file into pieces whose sizes are either bounded or independently chosen after each interruption guarantees light-tailed completion time as long as the file size is light-tailed; i.e., in this case, heavy-tailed completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the completion time is necessarily heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation, the completion time tail distribution function is asymptotically bounded above by that of the original file size stretched by a constant factor. We then prove that if the distribution of times between interruptions has nondecreasing failure rate, the expected completion time is minimized by dividing the file into equal-sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy where the fragment sizes are constant and independent of the file size and prove that it is asymptotically optimal. Both these policies are also shown to have desirable completion time tail behavior. Finally, we bound the error in expected completion time due to error in modeling of the failure process.https://resolver.caltech.edu/CaltechAUTHORS:20160317-122351101Even Noisy Responses Can Be Perfect If Integrated Properly
https://resolver.caltech.edu/CaltechAUTHORS:20160504-095129745
Year: 2016
DOI: 10.1016/j.cels.2016.02.012
Integral feedback for perfect adaptation is a ubiquitous strategy in engineering and biology. Long studied in deterministic settings, it can now be understood in the context of the fully stochastic systems that are prevalent in biology.https://resolver.caltech.edu/CaltechAUTHORS:20160504-095129745Localized LQR Control with Actuator Regularization
https://resolver.caltech.edu/CaltechAUTHORS:20160802-095819362
Year: 2016
DOI: 10.1109/ACC.2016.7526485
In previous work, we posed and solved the localized linear quadratic regulator (LLQR) problem - an LLQR controller is one that limits the propagation of dynamics to user-specified subsets of the global system. The advantages of taking this approach are tangible, as we show that this allows the controller to be synthesized and implemented in a scalable local manner. Implicit in this previous work was the existence of a feasible spatio-temporal constraint on the controller and closed loop response of the system that enforced these locality properties. This paper proposes and analyzes a procedure for designing such a spatio-temporal constraint, which can be interpreted as a measure of the implementation complexity of a controller, and a sparse actuation architecture that ensures that it is feasible. We show that the computational tasks involved can be suitably decomposed and solved using the alternating direction method of multipliers (ADMM), thus providing a scalable approach to designing an LLQR controller with a sparse actuation architecture.https://resolver.caltech.edu/CaltechAUTHORS:20160802-095819362A Theory of Dynamics, Control and Optimization in Layered Architectures
https://resolver.caltech.edu/CaltechAUTHORS:20160802-081108720
Year: 2016
DOI: 10.1109/ACC.2016.7525357
The controller of a large-scale distributed system (e.g., the internet, the power grid and automated highway systems) is often faced with two complementary tasks: (i) that of finding an optimal trajectory with respect to a functional or economic utility, and (ii) that of efficiently making the state of the system follow this trajectory despite model uncertainty, process and sensor noise and distributed information sharing constraints. While each of these tasks has been addressed individually, there exists as yet no controller synthesis framework that treats these two problems in a holistic manner. This paper proposes a unifying optimization-based methodology that jointly addresses these two tasks by leveraging the strengths of well established frameworks for distributed control: the Layering as Optimization (LAO) framework and the distributed optimal control framework. We show that our proposed control scheme has a natural layered architecture composed of a low-level tracking layer and a top-level planning layer. The tracking layer consists of a distributed optimal controller that takes as an input a reference trajectory generated by the top-level layer, where this top-level layer consists of a trajectory planning problem that optimizes a weighted sum of a utility function and a "tracking penalty" regularizer. We further provide an exact solution to the tracking layer problem under a broad range of information sharing constraints, discuss extensions to the proposed problem formulation, and demonstrate the effectiveness of our approach on a numerical example.https://resolver.caltech.edu/CaltechAUTHORS:20160802-081108720Evolutionary tradeoffs in cellular composition across diverse bacteria
https://resolver.caltech.edu/CaltechAUTHORS:20161117-075517086
Year: 2016
DOI: 10.1038/ismej.2016.21
One of the most important classic and contemporary interests in biology is the connection between cellular composition and physiological function. Decades of research have allowed us to understand the detailed relationship between various cellular components and processes for individual species, and have uncovered common functionality across diverse species. However, there still remains the need for frameworks that can mechanistically predict the tradeoffs between cellular functions and elucidate and interpret average trends across species. Here we provide a comprehensive analysis of how cellular composition changes across the diversity of bacteria as connected with physiological function and metabolism, spanning five orders of magnitude in body size. We present an analysis of the trends with cell volume that covers shifts in genomic, protein, cellular envelope, RNA and ribosomal content. We show that trends in protein content are more complex than a simple proportionality with the overall genome size, and that the number of ribosomes is simply explained by cross-species shifts in biosynthesis requirements. Furthermore, we show that the largest and smallest bacteria are limited by physical space requirements. At the lower end of size, cell volume is dominated by DNA and protein content—the requirement for which predicts a lower limit on cell size that is in good agreement with the smallest observed bacteria. At the upper end of bacterial size, we have identified a point at which the number of ribosomes required for biosynthesis exceeds available cell volume. Between these limits we are able to discuss systematic and dramatic shifts in cellular composition. Much of our analysis is connected with the basic energetics of cells where we show that the scaling of metabolic rate is surprisingly superlinear with all cellular components.https://resolver.caltech.edu/CaltechAUTHORS:20161117-075517086Teaching control theory in high school
https://resolver.caltech.edu/CaltechAUTHORS:20170106-115150650
Year: 2016
DOI: 10.1109/CDC.2016.7799181
Controls is increasingly central to technology, science, and society, yet remains the "hidden technology." Our appropriate emphasis on mathematical rigor and practical relevance in the past 40 years has not been similarly balanced with technical accessibility. The aim of this tutorial is to enlist the controls community in helping to radically rethink controls education. In addition to the brief 2-hour tutorial at CDC, we will have a website with additional materials, particularly extensive online videos with mathematical details and case studies. We will also have a booth in the exhibition area at CDC with live demos and engaging competitions throughout the conference.https://resolver.caltech.edu/CaltechAUTHORS:20170106-115150650Understanding robust control theory via stick balancing
https://resolver.caltech.edu/CaltechAUTHORS:20170106-124748759
Year: 2016
DOI: 10.1109/CDC.2016.7798480
Robust control theory studies the effect of noise, disturbances, and other uncertainty on system performance. Despite growing recognition across science and engineering that robustness and efficiency tradeoffs dominate the evolution and design of complex systems, the use of robust control theory remains limited, partly because the mathematics involved is relatively inaccessible to nonexperts, and the important concepts have been inexplicable without a fairly rich mathematics background. This paper aims to begin changing that by presenting the most essential concepts in robust control using human stick balancing, a simple case study that is popular in the sensorimotor control literature and extremely familiar to engineers. With minimal and familiar models and mathematics, we can explore the impact of unstable poles and zeros, delays, and noise, which can then be easily verified with simple experiments using a standard extensible pointer. Despite its simplicity, this case study has extremes of robustness and fragility that are initially counter-intuitive but for which simple mathematics and experiments are clear and compelling. The theory used here has been well-known for many decades, and the cart-pendulum example is a standard in undergrad controls courses, yet a careful reconsidering of both leads to striking new insights that we argue are of great pedagogical value.https://resolver.caltech.edu/CaltechAUTHORS:20170106-124748759Interpretation of the Precision Matrix and Its Application in Estimating Sparse Brain Connectivity during Sleep Spindles from Human Electrocorticography Recordings
https://resolver.caltech.edu/CaltechAUTHORS:20170118-142813403
Year: 2017
DOI: 10.1162/NECO_a_00936
PMCID: PMC5424817
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, brain regions might show very high correlation even when the two regions are not directly connected due to the strong interaction of the two regions with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix or precision matrix (SRPM) assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an 8 x 8 electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present.https://resolver.caltech.edu/CaltechAUTHORS:20170118-142813403Novel computational method for predicting polytherapy switching strategies to overcome tumor heterogeneity and evolution
https://resolver.caltech.edu/CaltechAUTHORS:20170206-090159752
Year: 2017
DOI: 10.1038/srep44206
PMCID: PMC5347024
The success of targeted cancer therapy is limited by drug resistance that can result from tumor genetic heterogeneity. The current approach to address resistance typically involves initiating a new treatment after clinical/radiographic disease progression, ultimately resulting in futility in most patients. Towards a potential alternative solution, we developed a novel computational framework that uses human cancer profiling data to systematically identify dynamic, pre-emptive, and sometimes non-intuitive treatment strategies that can better control tumors in real-time. By studying lung adenocarcinoma clinical specimens and preclinical models, our computational analyses revealed that the best anti-cancer strategies addressed existing resistant subpopulations as they emerged dynamically during treatment. In some cases, the best computed treatment strategy used unconventional therapy switching while the bulk tumor was responding, a prediction we confirmed in vitro. The new framework presented here could guide the principled implementation of dynamic molecular monitoring and treatment strategies to improve cancer control.https://resolver.caltech.edu/CaltechAUTHORS:20170206-09015975215th International Conference on Complex Acute Illness (ICCAI) Society for Complex Acute Illness: August 10–14, 2016 at the Beckman Institute, Caltech
https://resolver.caltech.edu/CaltechAUTHORS:20161214-135019937
Year: 2017
DOI: 10.1016/j.jcrc.2016.11.011
[no abstract]https://resolver.caltech.edu/CaltechAUTHORS:20161214-135019937System level parameterizations, constraints and synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20170531-104451494
Year: 2017
DOI: 10.23919/ACC.2017.7963133
We introduce the system level approach to controller synthesis, which is composed of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs) and System Level Synthesis (SLS) problems. SLPs provide a novel parameterization of all internally stabilizing controllers and the system responses that they achieve. These can be combined with SLCs to provide parameterizations of constrained stabilizing controllers. We provide a catalog of useful SLCs, and show that by using SLPs with SLCs, we can parameterize the largest known class of constrained stabilizing controllers that admit a convex characterization. Finally, we formulate the SLS problem, and show that it defines the broadest known class of constrained optimal control problems that can be solved using convex programming. We end by using the system level approach to computationally explore tradeoffs in controller performance, architecture cost, robustness and synthesis/implementation complexity.https://resolver.caltech.edu/CaltechAUTHORS:20170531-104451494HFTraC: High-Frequency Traffic Control
https://resolver.caltech.edu/CaltechAUTHORS:20170614-153521655
Year: 2017
DOI: 10.1145/3078505.3078557
We propose high-frequency traffic control (HFTraC), a rate control scheme that coordinates the transmission rates and buffer utilizations in routers network-wide at a fast timescale. HFTraC can effectively deal with traffic demand fluctuation by utilizing available buffer space in routers network-wide, and therefore lead to significant performance improvement in terms of the tradeoff between bandwidth utilization and queueing delay. We further note that the performance limit of HFTraC is determined by the network architecture used to implement it. We provide a trace-driven evaluation of the performance of HFTraC implemented in the proposed architectures, which vary from fully centralized to completely decentralized.https://resolver.caltech.edu/CaltechAUTHORS:20170614-153521655Effects of Delays, Poles, and Zeros on Time Domain Waterbed Tradeoffs and Oscillations
https://resolver.caltech.edu/CaltechAUTHORS:20170614-164911018
Year: 2017
DOI: 10.1109/LCSYS.2017.2710327
This letter aims for a simple and accessible explanation of why oscillations naturally arise due to tradeoffs in feedback systems, and how these can be aggravated by delays and unstable poles and zeros. Such results have been standard for decades using frequency domain methods, which yield a rich variety of familiar "waterbed" tradeoffs. While almost trivial for control experts, frequency domain methods are less familiar to many scientists and engineers who could benefit from the insights such tradeoffs can provide. So here we present an entirely time domain model using discrete time dynamics and l1 norm performance. A simple waterbed effect is that imposing a zero steady-state response to a step naturally creates oscillations that double the response to periodic disturbances. We also show how this tradeoff is further aggravated not only by unstable poles and zeros, but also by delays, in a way that is clearer than in the frequency domain versions.https://resolver.caltech.edu/CaltechAUTHORS:20170614-164911018System Level Synthesis: A Tutorial
https://resolver.caltech.edu/CaltechAUTHORS:20180126-080200870
Year: 2017
DOI: 10.1109/CDC.2017.8264074
This tutorial paper provides an overview of the System Level Approach to control synthesis; a scalable framework for large-scale distributed control. The system level approach is composed of three central components: System Level Parameterizations (SLPs), System Level Constraints (SLCs) and System Level Synthesis (SLS) problems. We describe how the combination of these elements parameterizes the largest known class of constrained controllers that admit a convex formulation.https://resolver.caltech.edu/CaltechAUTHORS:20180126-080200870Passive-Aggressive Learning and Control
https://resolver.caltech.edu/CaltechAUTHORS:20190205-082240504
Year: 2018
DOI: 10.23919/ACC.2018.8430904
In this work, we investigate the problem of simultaneously learning and controlling a system subject to adversarial choices of disturbances and system parameters. We study the problem for a scalar system with l∞-norm bounded disturbances and system parameters constrained to lie in a known bounded convex polytope. We present a controller that is globally stabilizing and gives continuously improving bounds on the worst case state deviation. The proposed controller simultaneously learns the system parameters and controls the system. The controller emerges naturally from an optimization problem, and balances exploration and exploitation in such a way that it is able to efficiently stabilize unstable and adversarial system dynamics. Specifically, if the controller is faced with large uncertainty, the initial focus is on exploration, retrieving information about the system by applying state-feedback controllers with varying gains and signs. In a prespecified bounded region around the origin, our control strategy can be seen as passive in the sense that it learns very little information. Only once the noise and/or system parameters act in an adversarial way, leading to the state exiting the aforementioned region for more than one time-step, does our proposed controller behave aggressively, in that it is guaranteed to learn enough about the system to subsequently robustly stabilize it. We end by demonstrating the efficiency of our methods via numerical simulations.https://resolver.caltech.edu/CaltechAUTHORS:20190205-082240504Fundamental limits and achievable performance in biomolecular control
https://resolver.caltech.edu/CaltechAUTHORS:20190205-081829763
Year: 2018
DOI: 10.23919/ACC.2018.8430933
Understanding how a biomolecular system achieves various control objectives via chemical reactions is of crucial importance in cell biology. However, unlike typical control problems where full information about the system is assumed to be known, typically only a small portion of the entire biomolecular system can be characterized with certainty. In order to gain insight in these situations, we use control and information theory to derive performance bounds when chemical species implement feedback control via the production rate or the degradation rate of chemical species. We expand the approach of the pioneering work of Lestas et al. to treat more general scenarios and derive explicit lower bounds on the achievable Fano factor of the controlled species. Our results suggest that control and sensing via the degradation rates, compared with those via the production rates, benefit from the additional design freedom to choose degradation efficiencies, in addition to the previously considered signal rate, which helps to lower the Fano factor of the controlled species. We compare our lower bounds with achievable performance via simulation of chemical master equations.https://resolver.caltech.edu/CaltechAUTHORS:20190205-081829763Hard Limits And Performance Tradeoffs In A Class Of Sequestration Feedback Systems
https://resolver.caltech.edu/CaltechAUTHORS:20180927-114225422
Year: 2018
DOI: 10.1101/222042
Feedback regulation is pervasive in biology at both the organismal and cellular level. In this article, we explore the properties of a particular biomolecular feedback mechanism implemented using the sequestration binding of two molecules. Our work develops an analytic framework for understanding the hard limits, performance tradeoffs, and architectural properties of this simple model of biological feedback control. Using tools from control theory, we show that there are simple parametric relationships that determine both the stability and the performance of these systems in terms of speed, robustness, steady-state error, and leakiness. These findings yield a holistic understanding of the behavior of sequestration feedback and contribute to a more general theory of biological control systems.https://resolver.caltech.edu/CaltechAUTHORS:20180927-114225422Evaluation of Hansen et al.: Nuance Is Crucial in Comparisons of Noise
https://resolver.caltech.edu/CaltechAUTHORS:20181024-161225614
Year: 2018
DOI: 10.1016/j.cels.2018.10.003
This is a first-round review of "Cytoplasmic Amplification of Transcriptional Noise Generates Substantial Cell-to-Cell Variability" by Leor Weinberger, Maike Hansen, and their colleagues; it was written for Cell Systems as part of the peer review process. We chose to feature it here because its nuanced treatment of noise in Hansen et al. (2018, this issue of Cell Systems) and Battich et al. (2015) exemplifies scholarship. The constructive critique it presents also improved Hansen et al. (2018) without imposing an agenda on its authors. After the first round of review, Hansen et al. (2018) was revised to take the reviewers' comments into account, re-submitted, re-reviewed, accepted for publication, and then published in this issue of Cell Systems. For comparison, an earlier version of Hansen et al. was deposited on bioRxiv ahead of review and can be found here: https://doi.org/10.1101/222901. Olsman et al. chose to reveal their identities during the peer review process within this peer review. Hansen et al. support the publication of this Peer Review; their permission to use it was obtained after their paper was officially accepted. This Peer Review was not itself peer reviewed. It has been lightly edited for stylistic polish and clarity. No scientific content has been substantively altered.https://resolver.caltech.edu/CaltechAUTHORS:20181024-161225614A Control-Theoretic Approach to In-Network Congestion Management
https://resolver.caltech.edu/CaltechAUTHORS:20181005-093939759
Year: 2018
DOI: 10.1109/tnet.2018.2866785
WANs are often over-provisioned to accommodate worst-case operating conditions, with many links typically running at only around 30% capacity. In this paper, we show that in-network congestion management can play an important role in increasing network utilization. To mitigate the effects of in-network congestion caused by rapid variations in traffic demand, we propose using high-frequency traffic control (HFTraC) algorithms that exchange real-time flow rate and buffer occupancy information between routers to dynamically coordinate their link-service rates. We show that the design of such dynamic link-service rate policies can be cast as a distributed optimal control problem that allows us to systematically explore an enlarged design space of in-network congestion management algorithms. This also provides a means of quantitatively comparing different controller architectures: we show, perhaps surprisingly, that centralized control is not always better. We implement and evaluate HFTraC in the face of rapidly varying UDP and TCP flows and in combination with AQM algorithms. Using a custom experimental testbed, a Mininet emulator, and a production WAN, we show that HFTraC leads to up to 66% decreases in packet loss rates at high link utilizations as compared to FIFO policies.https://resolver.caltech.edu/CaltechAUTHORS:20181005-093939759Separable and Localized System Level Synthesis for Large-Scale Systems
https://resolver.caltech.edu/CaltechAUTHORS:20180330-082054929
Year: 2018
DOI: 10.1109/TAC.2018.2819246
A major challenge faced in the design of large-scale cyber-physical systems, such as power systems, the Internet of Things or intelligent transportation systems, is that traditional distributed optimal control methods do not scale gracefully, neither in controller synthesis nor in controller implementation, to systems composed of a large number (e.g., on the order of billions) of interacting subsystems. This paper shows that this challenge can now be addressed by leveraging the recently introduced System Level Approach (SLA) to controller synthesis. In particular, in the context of the SLA, we define suitable notions of separability for control objective functions and system constraints such that the global optimization problem (or iterate update problems of a distributed optimization algorithm) can be decomposed into parallel subproblems. We then further show that if additional locality (i.e., sparsity) constraints are imposed, then these subproblems can be solved using local models and local decision variables. The SLA is essential to maintaining the convexity of the aforementioned problems under locality constraints. As a consequence, the resulting synthesis methods have O(1) complexity relative to the size of the global system. We further show that many optimal control problems of interest, such as (localized) LQR and LQG, H_2 optimal control with joint actuator and sensor regularization, and (localized) mixed H_2/L_1 optimal control problems, satisfy these notions of separability, and use these problems to explore tradeoffs in performance, actuator and sensing density, and average versus worst-case performance for a large-scale power inspired system.https://resolver.caltech.edu/CaltechAUTHORS:20180330-082054929Robust Perfect Adaptation in Biomolecular Reaction Networks
https://resolver.caltech.edu/CaltechAUTHORS:20181031-075024162
Year: 2018
DOI: 10.1109/CDC.2018.8619101
For control in biomolecular systems, the most basic objective of maintaining a small error in a target variable, say the expression level of some protein, is often difficult due to the presence of both large uncertainty of every type and intrinsic limitations on the controller's implementation. This paper explores the limits of biochemically plausible controller design for the problem of robust perfect adaptation (RPA), biologists' term for robust steady state tracking. It is well-known that for a large class of nonlinear systems, a system has RPA iff it has integral feedback control (IFC), which has been used extensively in real control systems to achieve RPA. However, we show that due to intrinsic physical limitations on the dynamics of chemical reaction networks (CRNs), cells cannot implement IFC directly in the concentration of a chemical species. This contrasts with electronic implementations, particularly digital, where it is trivial to implement IFC directly in a single state. Therefore, biomolecular systems have to achieve RPA by encoding the integral control variable into the network architecture of a CRN. We describe a general framework to implement RPA in CRNs and show that well-known network motifs that achieve RPA, such as (negative) integral feedback (IFB) and incoherent feedforward (IFF), are examples of such implementations. We also develop methods for designing integral feedback variables for unknown plants. This standard control notion is surprisingly nontrivial and relatively unstudied in biomolecular control. The methods developed here connect different existing fields and approaches to the problem of biomolecular control, and hold promise for systematic chemical reaction controller synthesis as well as analysis.https://resolver.caltech.edu/CaltechAUTHORS:20181031-075024162Architecture and Trade-offs in the Heat Shock Response System
https://resolver.caltech.edu/CaltechAUTHORS:20190201-143228822
Year: 2018
DOI: 10.1109/cdc.2018.8619129
Biological control systems often contain a wide variety of feedforward and feedback mechanisms that regulate a given process. While it is generally assumed that this apparent redundancy has evolved for a reason, it is often unclear how exactly the cell benefits from more complex circuit architectures. Here we study this problem in the context of a minimal model of the Heat Shock Response system in E. coli and show, through a combination of theory and simulation, that the complexity of the natural system outperforms hypothetical simpler architectures in a variety of robustness and efficiency tradeoffs. We have developed a significantly simplified model of the system that faithfully captures these rich issues. Because a great deal of biological detail is known about this particular system, we are able to compare simple models with more complete ones and obtain a level of theoretical and quantitative insight not generally feasible in the study of biological circuits. We primarily hope this will inform future analysis of both heat shock and newly studied biological complexity.https://resolver.caltech.edu/CaltechAUTHORS:20190201-143228822Impact of Stochasticity and Its Control on a Model of the Inflammatory Response
https://resolver.caltech.edu/CaltechAUTHORS:20190128-135436096
Year: 2019
DOI: 10.3390/computation7010003
The dysregulation of inflammation, normally a self-limited response that initiates healing, is a critical component of many diseases. Treatment of inflammatory disease is hampered by an incomplete understanding of the complexities underlying the inflammatory response, motivating the application of systems and computational biology techniques in an effort to decipher this complexity and ultimately improve therapy. Many mathematical models of inflammation are based on systems of deterministic equations that do not account for the biological noise inherent at multiple scales, and consequently the effect of such noise in regulating inflammatory responses has not been studied widely. In this work, noise was added to a deterministic system of the inflammatory response in order to account for biological stochasticity. Our results demonstrate that the inflammatory response is highly dependent on the balance between the concentration of the pathogen and the level of biological noise introduced to the inflammatory network. In cases where the pro- and anti-inflammatory arms of the response do not mount the appropriate defense to the inflammatory stimulus, inflammation transitions to a different state compared to cases in which pro- and anti-inflammatory agents are elaborated adequately and in a timely manner. In this regard, our results show that noise can be both beneficial and detrimental for the inflammatory endpoint. By evaluating the parametric sensitivity of noise characteristics, we suggest that the efficiency of inflammatory responses can be controlled. Interestingly, the time period over which parametric intervention can be introduced efficiently in the inflammatory system can also be adjusted by controlling noise.
These findings represent a novel understanding of inflammatory system dynamics and the potential role of stochasticity therein.https://resolver.caltech.edu/CaltechAUTHORS:20190128-135436096Architectural Principles for Characterizing the Performance of Antithetic Integral Feedback Networks
https://resolver.caltech.edu/CaltechAUTHORS:20180927-114225624
Year: 2019
DOI: 10.1016/j.isci.2019.04.004
PMCID: PMC6479019
As we begin to design increasingly complex synthetic biomolecular systems, it is essential to develop rational design methodologies that yield predictable circuit performance. Here we apply theoretical tools from the theory of control and dynamical systems to yield practical insights into the architecture and function of a particular class of biological feedback circuit. Specifically, we show that it is possible to analytically characterize both the operating regime and performance tradeoffs of a sequestration feedback circuit architecture. Further, we demonstrate how these principles can be applied to inform the design process of a particular synthetic feedback circuit.https://resolver.caltech.edu/CaltechAUTHORS:20180927-114225624System level synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190513-131853183
Year: 2019
DOI: 10.1016/j.arcontrol.2019.03.006
This article surveys the System Level Synthesis framework, which presents a novel perspective on constrained robust and optimal controller synthesis for linear systems. We show how SLS shifts the controller synthesis task from the design of a controller to the design of the entire closed loop system, and highlight the benefits of this approach in terms of scalability and transparency. We emphasize two particular applications of SLS, namely large-scale distributed optimal control and robust control. In the case of distributed control, we show how SLS allows for localized controllers to be computed, extending robust and optimal control methods to large-scale systems under practical and realistic assumptions. In the case of robust control, we show how SLS allows for novel design methodologies that, for the first time, quantify the degradation in performance of a robust controller due to model uncertainty – such transparency is key in allowing robust control methods to interact, in a principled way, with modern techniques from machine learning and statistical inference. Throughout, we emphasize practical and efficient computational solutions, and demonstrate our methods on easy-to-understand case studies.https://resolver.caltech.edu/CaltechAUTHORS:20190513-131853183Fitts' Law for speed-accuracy trade-off is a diversity sweet spot in sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20190618-151810576
Year: 2019
DOI: 10.48550/arXiv.1906.00905
Human sensorimotor control exhibits remarkable speed and accuracy, as celebrated in Fitts' law for reaching. Much less studied is how this is possible despite being implemented by neurons and muscle components with severe speed-accuracy tradeoffs (SATs). Here we develop a theory that connects the SATs at the system and hardware levels, and use it to explain Fitts' law for reaching and related laws. These results show that diversity between hardware components can be exploited to achieve both fast and accurate control performance using slow or inaccurate hardware. Such "diversity sweet spots" (DSSs) are ubiquitous in biology and technology, and explain why large heterogeneities exist in biological and technical components and how both engineers and natural selection routinely evolve fast and accurate systems from imperfect hardware.https://resolver.caltech.edu/CaltechAUTHORS:20190618-151810576Optimal Two Player LQR State Feedback With Varying Delay
https://resolver.caltech.edu/CaltechAUTHORS:20190621-082135981
Year: 2019
DOI: 10.48550/arXiv.1403.7790
This paper presents an explicit solution to a two player distributed LQR problem in which communication between controllers occurs across a communication link with varying delay. We extend known dynamic programming methods to accommodate this varying delay, and show that under suitable assumptions, the optimal control actions are linear in their information, and that the resulting controller has piecewise linear dynamics dictated by the current effective delay regime.https://resolver.caltech.edu/CaltechAUTHORS:20190621-082135981Towards a Theory of Scale-Free Graphs: Definition, Properties, and Implications (Extended Version)
https://resolver.caltech.edu/CaltechAUTHORS:20190627-100025027
Year: 2019
DOI: 10.48550/arXiv.0501169
Although the "scale-free" literature is large and growing, it gives neither a precise definition of scale-free graphs nor rigorous proofs of many of their claimed properties. In fact, it is easily shown that the existing theory has many inherent contradictions and verifiably false claims. In this paper, we propose a new, mathematically precise, and structural definition of the extent to which a graph is scale-free, and prove a series of results that recover many of the claimed properties while suggesting the potential for a rich and interesting theory. With this definition, scale-free (or its opposite, scale-rich) is closely related to other structural graph properties such as various notions of self-similarity (or respectively, self-dissimilarity). Scale-free graphs are also shown to be the likely outcome of random construction processes, consistent with the heuristic definitions implicit in existing random graph approaches. Our approach clarifies much of the confusion surrounding the sensational qualitative claims in the scale-free literature, and offers rigorous and quantitative alternatives.https://resolver.caltech.edu/CaltechAUTHORS:20190627-100025027Experimental and educational platforms for studying architecture and tradeoffs in human sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20190905-143550126
Year: 2019
This paper describes several surprisingly rich but simple demos and a new experimental platform for human sensorimotor control research and also controls education. The platform safely simulates a canonical sensorimotor task of riding a mountain bike down a steep, twisting, bumpy trail using a standard display and an inexpensive off-the-shelf gaming steering wheel with a force feedback motor. We use the platform to verify our theory, presented in a companion paper. The theory shows how component hardware speed-accuracy tradeoffs (SATs) in control loops impose corresponding SATs at the system level and how effective architectures mitigate the deleterious impact of hardware SATs through layering and "diversity sweet spots" (DSSs). Specifically, we measure the impacts on system performance of delays, quantization, and uncertainties in sensorimotor control loops, both within the subject's nervous system and added externally via software in the platform. This provides a remarkably rich test of the theory, which is consistent with all preliminary data. Moreover, as the theory predicted, subjects effectively multiplex specific higher layer planning/tracking of the trail using vision with lower layer rejection of unseen bump disturbances using reflexes. In contrast, humans multitask badly on tasks that do not naturally distribute across layers (e.g. texting and driving). The platform is cheap to build and easy to program for both research and education purposes, yet verifies our theory, which is aimed at closing a crucial gap between neurophysiology and sensorimotor control. The platform can be downloaded at https://github.com/Doyle-Lab/WheelCon.https://resolver.caltech.edu/CaltechAUTHORS:20190905-143550126Scalable Robust Adaptive Control from the System Level Perspective
https://resolver.caltech.edu/CaltechAUTHORS:20190617-112254520
Year: 2019
DOI: 10.48550/arXiv.1904.00077
We will present a new general framework for robust and adaptive control that allows for distributed and scalable learning and control of large systems of interconnected linear subsystems. The control method is demonstrated for a linear time-invariant system with bounded parameter uncertainties, disturbances and noise. The presented scheme continuously collects measurements to reduce the uncertainty about the system parameters and adapts dynamic robust controllers online in a stable and performance-improving way. A key enabler for our approach is choosing a time-varying dynamic controller implementation, inspired by recent work on System Level Synthesis [1]. We leverage a new robustness result for this implementation to propose a general robust adaptive control algorithm. In particular, the algorithm allows us to impose communication and delay constraints on the controller implementation and is formulated as a sequence of robust optimization problems that can be solved in a distributed manner. The proposed control methodology performs particularly well when the interconnection between systems is sparse and the dynamics of local regions of subsystems depend only on a small number of parameters. As we show for an exemplary five-dimensional chain system, the algorithm can utilize system structure to efficiently learn and control the entire system while respecting communication and implementation constraints. Moreover, although current theoretical results require the assumption of small initial uncertainties to guarantee robustness, we will present simulations that show good closed-loop performance even in the case of large uncertainties, which suggests that this assumption is not critical for the presented technique and future work will focus on providing less conservative guarantees.https://resolver.caltech.edu/CaltechAUTHORS:20190617-112254520Coupled Reaction Networks for Noise Suppression
https://resolver.caltech.edu/CaltechAUTHORS:20181030-075417310
Year: 2019
DOI: 10.1101/440453
Noise is intrinsic to many important regulatory processes in living cells, and often forms obstacles to be overcome for reliable biological functions. However, due to stochastic birth and death events of all components in biomolecular systems, suppression of noise of one component by another is fundamentally hard and costly. Quantitatively, a widely-cited severe lower bound on noise suppression in biomolecular systems was established by Lestas et al. in 2010, assuming that the plant and the controller have separate birth and death reactions. This makes the precision observed in several biological phenomena, e.g., cell fate decision making and cell cycle time ordering, seem impossible. We demonstrate that coupling, a mechanism widely observed in biology, could suppress noise lower than the bound of Lestas et al. with moderate energy cost. Furthermore, we systematically investigate the coupling mechanism in all two-node reaction networks, showing that negative feedback suppresses noise better than incoherent feedforward architectures, coupled systems have less noise than their decoupled version for a large class of networks, and coupling has its own fundamental limitations in noise suppression. Results in this work have implications for noise suppression in biological control and provide insight for a new efficient mechanism of noise suppression in biology.https://resolver.caltech.edu/CaltechAUTHORS:20181030-075417310Flexibility and Cost-Dependence in Quantized Control
https://resolver.caltech.edu/CaltechAUTHORS:20190905-145500772
Year: 2019
Layered control architectures in biology and neuroscience can be used to mitigate speed-accuracy tradeoffs, with low-layer quantized controllers carrying out time-sensitive tasks at reduced precision. Here, we describe and optimize the worst-case approximation loss for a quantized controller: the maximum control and state costs paid in the quantized case that would not be paid in the full-precision case. We show that the optimal design of a quantizer depends on the dynamics and the state and control costs, leading notably to cases in which systematically biased estimates of state are optimal for control. We further show that high-layer input can direct a low-layer controller to flexibly execute quantized control across context-related cost functions, with component-level mechanisms that are plausibly implementable in biological settings.https://resolver.caltech.edu/CaltechAUTHORS:20190905-145500772Measurement back action and a classical uncertainty principle: Heisenberg meets Kalman
https://resolver.caltech.edu/CaltechAUTHORS:20190905-145752040
Year: 2019
We study a measurement framework motivated by considering macroscopic (i.e. large, active, and with finite temperature) measurement of microscopic (i.e. small and lossless) but classical dynamics. This unavoidably leads to "measurement back action" on the microscopic dynamics that nevertheless still allows for optimal filtering to minimize estimation error, but with tradeoffs between errors due to estimation and errors due to the back action. We focus on a simple case in which the deterministic effects of the measurement process are completely canceled by active control, and the remaining (coupled) stochastic back action and measurement noise is optimally filtered to minimize estimation error. This leads to particularly interesting tradeoffs and limits on estimation and back action, analogous in many respects to the Heisenberg uncertainty principle but in an entirely classical framework.https://resolver.caltech.edu/CaltechAUTHORS:20190905-145752040Mathematical Models of Physiological Responses to Exercise
https://resolver.caltech.edu/CaltechAUTHORS:20190906-075423929
Year: 2019
This paper develops empirical mathematical models for physiological responses to exercise. We first find single-input single-output models describing heart rate variability, ventilation, oxygen consumption and carbon dioxide production in response to workload changes and then identify a single-input multi-output model from workload to these physiological variables. We also investigate the possibility of the existence of a universal model for physiological variability in different individuals during treadmill running. Simulations based on real data substantiate that the obtained models accurately capture the physiological responses to workload variations. In particular, it is observed that (i) different physiological responses to exercise can be captured by low-order linear or mildly nonlinear models; and (ii) there may exist a universal model for oxygen consumption that works for different individuals.https://resolver.caltech.edu/CaltechAUTHORS:20190906-075423929Hard Limits and Performance Tradeoffs in a Class of Antithetic Integral Feedback Networks
https://resolver.caltech.edu/CaltechAUTHORS:20190708-153642582
Year: 2019
DOI: 10.1016/j.cels.2019.06.001
Feedback regulation is pervasive in biology at both the organismal and cellular level. In this article, we explore the properties of a particular biomolecular feedback mechanism called antithetic integral feedback, which can be implemented using the binding of two molecules. Our work develops an analytic framework for understanding the hard limits, performance tradeoffs, and architectural properties of this simple model of biological feedback control. Using tools from control theory, we show that there are simple parametric relationships that determine both the stability and the performance of these systems in terms of speed, robustness, steady-state error, and leakiness. These findings yield a holistic understanding of the behavior of antithetic integral feedback and contribute to a more general theory of biological control systems.https://resolver.caltech.edu/CaltechAUTHORS:20190708-153642582A System Level Approach to Controller Synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190201-155312160
Year: 2019
DOI: 10.1109/tac.2018.2890753
Biological and advanced cyber-physical control systems often have limited, sparse, uncertain, and distributed communication and computing in addition to sensing and actuation. Fortunately, the corresponding plants and performance requirements are also sparse and structured, and this must be exploited to make constrained controller design feasible and tractable. We introduce a new "system level" (SL) approach involving three complementary SL elements. SL parameterizations (SLPs) provide an alternative to the Youla parameterization of all stabilizing controllers and the responses they achieve, and combine with SL constraints (SLCs) to parameterize the largest known class of constrained stabilizing controllers that admit a convex characterization, generalizing quadratic invariance. SLPs also lead to a generalization of detectability and stabilizability, suggesting the existence of a rich separation structure, that when combined with SLCs is naturally applicable to structurally constrained controllers and systems. We further provide a catalog of useful SLCs, most importantly including sparsity, delay, and locality constraints on both communication and computing internal to the controller, and external system performance. Finally, we formulate SL synthesis problems, which define the broadest known class of constrained optimal control problems that can be solved using convex programming.https://resolver.caltech.edu/CaltechAUTHORS:20190201-155312160Lessons from "a first-principles approach to understanding the internet's router-level topology"
https://resolver.caltech.edu/CaltechAUTHORS:20191111-134656474
Year: 2019
DOI: 10.1145/3371934.3371964
Our main purpose for this editorial is to reiterate the main message that we tried to convey in our SIGCOMM'04 paper but that got largely lost in all the hype surrounding the use of scale-free network models throughout the sciences in the last two decades. That message was that because of (1) the Internet's highly-engineered architecture, (2) a thorough understanding of its component technologies, and (3) the availability of extensive (but typically noisy) measurements, this complex man-made system affords unique opportunities to unambiguously resolve most claims about its properties, structure, and functionality. In the process, we point out the fallacy of popular approaches that consider complex systems such as the Internet from the perspective of disorganized complexity and argue for renewed efforts and increased focus on advancing an "architecture first" view with its emphasis on studying the organized complexity of systems such as the Internet.https://resolver.caltech.edu/CaltechAUTHORS:20191111-134656474Robust Model-Free Learning and Control without Prior Knowledge
https://resolver.caltech.edu/CaltechAUTHORS:20200911-071601902
Year: 2019
DOI: 10.1109/cdc40024.2019.9029986
We present a simple model-free control algorithm that is able to robustly learn and stabilize an unknown discrete-time linear system with full control and state feedback subject to arbitrary bounded disturbance and noise sequences. The controller does not require any prior knowledge of the system dynamics, disturbances or noise, yet can guarantee robust stability, uniform asymptotic bounds and uniform worst-case bounds on the state deviation. Rather than the algorithm itself, we would like to highlight the new approach taken towards robust stability analysis which served as a key enabler in providing the presented stability and performance guarantees. We will conclude with simulation results that show that despite the generality and simplicity, the controller demonstrates good closed-loop performance.https://resolver.caltech.edu/CaltechAUTHORS:20200911-071601902Fundamental Limits and Tradeoffs in Autocatalytic Pathways
https://resolver.caltech.edu/CaltechAUTHORS:20190613-142206629
Year: 2020
DOI: 10.1109/tac.2019.2921671
This paper develops some basic principles to study autocatalytic networks and exploit their structural properties in order to characterize their inherent fundamental limits and tradeoffs. In a dynamical system with autocatalytic structure, the system's output is necessary to catalyze its own production. Our study has been motivated by a simplified model of a glycolysis pathway. First, the properties of this class of pathways are investigated through a network model, which consists of a chain of enzymatically catalyzed intermediate reactions coupled with an autocatalytic component. We explicitly derive a hard limit on the minimum achievable L₂-gain disturbance attenuation and a hard limit on its minimum required output energy. Then, we show how these resulting hard limits lead to some fundamental tradeoffs between transient and steady-state behavior of the network and its net production.https://resolver.caltech.edu/CaltechAUTHORS:20190613-142206629Metabolic multi-stability and hysteresis in a model aerobe-anaerobe microbiome community
https://resolver.caltech.edu/CaltechAUTHORS:20200303-084602692
Year: 2020
DOI: 10.1101/2020.02.28.968941
Changes in the composition of the human microbiome are associated with health and disease. Some microbiome states persist in seemingly unfavorable conditions, e.g., the proliferation of aerobe-anaerobe communities in oxygen-exposed environments in wounds or small intestinal bacterial overgrowth. However, it remains unclear how different stable microbiome states can exist under the same conditions, or why some states persist under seemingly unfavorable conditions. Here, using two microbes relevant to the human microbiome, we combine genome-scale mathematical modeling, bioreactor experiments, transcriptomics, and dynamical systems theory, to show that multi-stability and hysteresis (MSH) is a mechanism that can describe the shift from an aerobe-dominated state to a resilient, paradoxically persistent aerobe-anaerobe state. We examine the impact of changing oxygen and nutrient regimes and identify factors, including changes in metabolism and gene expression, that lead to MSH. When analyzing the transitions between the two states in this system, the familiar conceptual connection between causation and correlation is broken and MSH must be used to interpret the dynamics. Using MSH to analyze microbiome dynamics will improve our conceptual understanding of the stability of microbiome states and the transitions among microbiome states.https://resolver.caltech.edu/CaltechAUTHORS:20200303-084602692The driver and the engineer: Reinforcement learning and robust control
https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072
Year: 2020
DOI: 10.23919/acc45564.2020.9147347
Reinforcement learning (RL) and other AI methods are exciting approaches to data-driven control design, but RL's emphasis on maximizing expected performance contrasts with robust control theory (RCT), which puts central emphasis on the impact of model uncertainty and worst case scenarios. This paper argues that these approaches are potentially complementary, with roles roughly analogous to those of a driver and an engineer in, say, Formula One racing. Each is indispensable but with radically different roles. If RL takes the driver seat in safety critical applications, RCT may still play a role in plant design, and also in diagnosing and mitigating the effects of performance degradation due to changes or failures in components or environments. While much RCT research emphasizes synthesis of controllers, as does RL, in practice RCT's impact has perhaps already been greater in using hard limits and tradeoffs on robust performance to provide insight into plant design, interpreted broadly as including sensor, actuator, communications, and computer selection and placement in addition to core plant dynamics. More automation may ultimately require more rigor and theory, not less, if our systems are going to be both more efficient and robust. Here we use the simplest possible toy model to illustrate how RCT can potentially augment RL in finding mechanistic explanations when control is not merely hard, but impossible, and issues in making them more compatibly data-driven. Despite the simplicity, questions abound. We also discuss the relevance of these ideas to more realistic challenges.https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072SBML Level 3: an extensible format for the exchange and reuse of biological models
https://resolver.caltech.edu/CaltechAUTHORS:20200827-073238775
Year: 2020
DOI: 10.15252/msb.20199110
Systems biology has experienced dramatic growth in the number, size, and complexity of computational models. To reproduce simulation results and reuse models, researchers must exchange unambiguous model descriptions. We review the latest edition of the Systems Biology Markup Language (SBML), a format designed for this purpose. A community of modelers and software authors developed SBML Level 3 over the past decade. Its modular form consists of a core suited to representing reaction‐based models and packages that extend the core with features suited to other model types including constraint‐based models, reaction‐diffusion models, logical network models, and rule‐based models. The format leverages two decades of SBML and a rich software ecosystem that transformed how systems biologists build and interact with models. More recently, the rise of multiscale models of whole cells and organs, and new data sources such as single‐cell measurements and live imaging, has precipitated new ways of integrating data with models. We provide our perspectives on the challenges presented by these developments and how SBML Level 3 provides the foundation needed to support this evolution.https://resolver.caltech.edu/CaltechAUTHORS:20200827-073238775Metabolic multistability and hysteresis in a model aerobe-anaerobe microbiome community
https://resolver.caltech.edu/CaltechAUTHORS:20200813-144241318
Year: 2020
DOI: 10.1126/sciadv.aba0353
PMCID: PMC7423363
Major changes in the microbiome are associated with health and disease. Some microbiome states persist despite seemingly unfavorable conditions, such as the proliferation of aerobe-anaerobe communities in oxygen-exposed environments in wound infections or small intestinal bacterial overgrowth. Mechanisms underlying transitions into and persistence of these states remain unclear. Using two microbial taxa relevant to the human microbiome, we combine genome-scale mathematical modeling, bioreactor experiments, transcriptomics, and dynamical systems theory to show that multistability and hysteresis (MSH) is a mechanism describing the shift from an aerobe-dominated state to a resilient, paradoxically persistent aerobe-anaerobe state. We examine the impact of changing oxygen and nutrient regimes and identify changes in metabolism and gene expression that lead to MSH and associated multi-stable states. In such systems, conceptual causation-correlation connections break and MSH must be used for analysis. Using MSH to analyze microbiome dynamics will improve our conceptual understanding of stability of microbiome states and transitions between states.https://resolver.caltech.edu/CaltechAUTHORS:20200813-144241318WheelCon: A Wheel Control-Based Gaming Platform for Studying Human Sensorimotor Control
https://resolver.caltech.edu/CaltechAUTHORS:20200909-133721965
Year: 2020
DOI: 10.3791/61092
WheelCon is a novel, free, and open-source platform for designing video games that noninvasively simulate mountain biking down a steep, twisting, bumpy trail. It contains components present in human sensorimotor control (delay, quantization, noise, disturbance, and multiple feedback loops) and allows researchers to study the layered architecture of sensorimotor control.https://resolver.caltech.edu/CaltechAUTHORS:20200909-133721965Heart rate and blood pressure decreases after a motor task in pre‐symptomatic AD
https://resolver.caltech.edu/CaltechAUTHORS:20201210-160240164
Year: 2020
DOI: 10.1002/alz.045521
Background. Understanding how cardiovascular health affects early Alzheimer's disease (AD) pathology is challenging because several variables can contribute to significant changes in blood pressure (BP) and heart rate (HR). Previous studies have suggested an association between Alzheimer's disease and HR, but little is known about this association in pre‐symptomatic AD. We aim to explore HR and BP changes before and after a motor task in a cognitively healthy population.
Method. Participants (aged 61‐95 years) were recruited from the local community, including cognitively healthy (CH) individuals who were further subdivided based on cerebrospinal fluid (CSF) classification: those with a normal amyloid/tau ratio (CH‐NAT, n = 11) or a pathological amyloid/tau ratio (CH‐PAT, n = 8). The two groups were matched for age, gender, BMI, and education. Participants were asked to use a steering wheel to follow a moving line presented on the monitor. A practice session was followed by three task‐identical sessions, 90 seconds per session. Each session consisted of 3 repeated blocks, 30 seconds per block: bump, trail, and bump & trail. Systolic pressure (SP), diastolic pressure (DP), pulse pressure (PP), and HR were measured before and after completing the whole task.
Result. No differences in BP and HR were found between CH‐NATs and CH‐PATs. We found a significant decrease in SP for both CH‐NATs (pre‐task 135.2±24.2, post‐task 127.2±18.8, reduced by 8±9.7, p = 0.0208) and CH‐PATs (pre‐task: 146.1±20.6, post‐task: 134.8±19.6, reduced by 11.3±5.2, p = 0.0004). Furthermore, CH‐PATs had a significant drop in HR (pre‐task: 71.4±10.7, post‐task: 66.3±8.2, reduced by 5.1±4.1 beats, p = 0.0098) compared to CH‐NATs (pre‐task: 70.8±11.8, post‐task: 69.2±13.3, reduced by 1.5±6.2 beats, p = 0.4302). CH‐PATs also had a decreased HR*SP (pre‐task: 10464±2375.6, post‐task: 8941.1±1843.1, reduced by 1522.9±775.8, p = 0.0009) compared to CH‐NATs (pre‐task: 9537.2±2098.3, post‐task: 8765.2±1769.2, reduced by 772.0±1153.5, p = 0.0507). No significant change was found in DP for either CH‐NATs or CH‐PATs.
Conclusion. Pre‐symptomatic AD participants had a significant drop in HR, SP, and SP*HR compared to CH‐NATs, indicating reduced sympathetic responses after the motor task. These changes in HR and SP may provide evidence of compromised cardiovascular health and autonomic regulation in pre‐symptomatic AD.https://resolver.caltech.edu/CaltechAUTHORS:20201210-160240164Descending Predictive Feedback: From Optimal Control to the Sensorimotor System
https://resolver.caltech.edu/CaltechAUTHORS:20210510-141350951
Year: 2021
DOI: 10.48550/arXiv.2103.16812
Descending predictive feedback (DPF) is a ubiquitous yet unexplained phenomenon in the central nervous system. Motivated by recent observations on motor-related signals in the visual system, we approach this problem from a sensorimotor standpoint and make use of optimal controllers to explain DPF. We define and analyze DPF in the optimal control context, revisiting several control problems (state feedback, full control, and output feedback) to explore conditions that necessitate DPF. We find that even small deviations from the unconstrained state feedback problem (e.g. incomplete sensing, communication delay) necessitate DPF in the optimal controller. We also discuss parallels between controller structure and observations from neuroscience. In particular, the System Level Synthesis (SLS) controller displays DPF patterns compatible with predictive coding theory and easily accommodates signaling restrictions (e.g. delay) typical to neurons, making it a candidate for use in sensorimotor modeling.https://resolver.caltech.edu/CaltechAUTHORS:20210510-141350951Online Robust Control of Nonlinear Systems with Large Uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20210510-094452379
Year: 2021
DOI: 10.48550/arXiv.2103.11055
Robust control is a core approach for controlling systems with performance guarantees that are robust to modeling error, and is widely used in real-world systems. However, current robust control approaches can only handle small system uncertainty, and thus require significant effort in system identification prior to controller design. We present an online approach that robustly controls a nonlinear system under large model uncertainty. Our approach is based on decomposing the problem into two sub-problems, "robust control design" (which assumes small model uncertainty) and "chasing consistent models", which can be solved using existing tools from control theory and online learning, respectively. We provide a learning convergence analysis that yields a finite mistake bound on the number of times performance requirements are not met and can provide strong safety guarantees, by bounding the worst-case state deviation. To the best of our knowledge, this is the first approach for online robust control of nonlinear systems with such learning theoretic and safety guarantees. We also show how to instantiate this framework for general robotic systems, demonstrating the practicality of our approach.https://resolver.caltech.edu/CaltechAUTHORS:20210510-094452379Control-theoretic immune tradeoffs explain SARS-CoV-2 virulence and transmission variation
https://resolver.caltech.edu/CaltechAUTHORS:20210429-095332336
Year: 2021
DOI: 10.1101/2021.04.25.441372
Dramatic variation in SARS-CoV-2 virulence and transmission between hosts has driven the COVID-19 pandemic. The complexity and dynamics of the immune response present a challenge to understanding variation in SARS-CoV-2 infections. To address this challenge, we apply control theory, a framework used to study complex feedback systems, to establish rigorous mathematical bounds on immune responses. Two mechanisms of SARS-CoV-2 biology are sufficient to create extreme variation between hosts: (1) a sparsely expressed host receptor and (2) potent, but not unique, suppression of interferon. The resulting model unifies disparate and unexplained features of the SARS-CoV-2 pandemic, predicts features of future viruses that threaten to cause pandemics, and identifies potential interventions.
Frontiers in Scalable Distributed Control: SLS, MPC, and Beyond
https://resolver.caltech.edu/CaltechAUTHORS:20210510-141354333
Year: 2021
DOI: 10.23919/ACC50511.2021.9483130
The System Level Synthesis (SLS) approach facilitates distributed control of large cyberphysical networks in an easy-to-understand, computationally scalable way. We present an overview of the SLS approach and its associated extensions in nonlinear control, MPC, adaptive control, and learning for control. To illustrate the effectiveness of SLS-based methods, we present a case study motivated by the power grid, with communication constraints, actuator saturation, disturbances, and changing setpoints. This simple but challenging case study necessitates the use of model predictive control (MPC); however, standard MPC techniques often scale poorly to large systems and incur a heavy computational burden. To address this challenge, we combine two SLS-based controllers to form a layered MPC-like controller. Our controller has constant computational complexity with respect to the system size, gives a 20-fold reduction in online computation requirements, and still achieves performance that is within 3% of the centralized MPC controller.
Stability and Control of Biomolecular Circuits through Structure
https://resolver.caltech.edu/CaltechAUTHORS:20201106-110344530
Year: 2021
DOI: 10.23919/ACC50511.2021.9483039
Due to omnipresent uncertainties and environmental disturbances, natural and engineered biological organisms face the challenging control problem of achieving robust performance using unreliable parts. The key to overcoming this challenge rests in identifying structures of biomolecular circuits that are largely invariant despite uncertainties, and building control through such structures. In this work, we show that log derivatives can capture the structural regimes of biocircuits in regulating the production and degradation rates of molecules. We show that log derivatives can establish stability of fixed points based on structure, despite large variations in rates and functional forms of models. Furthermore, we demonstrate how control objectives, such as robust perfect adaptation (i.e. step disturbance rejection), could be implemented through structure. Due to the method's simplicity, structural properties for analysis and design of biomolecular circuits can often be determined by a glance at the equations.
Diversity-enabled sweet spots in layered architectures and speed–accuracy trade-offs in sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20210510-141357701
Year: 2021
DOI: 10.1073/pnas.1916367118
PMCID: PMC8179159
Nervous systems sense, communicate, compute, and actuate movement using distributed components with severe trade-offs in speed, accuracy, sparsity, noise, and saturation. Nevertheless, brains achieve remarkably fast, accurate, and robust control performance due to a highly effective layered control architecture. Here, we introduce a driving task to study how a mountain biker mitigates the immediate disturbance of trail bumps and responds to changes in trail direction. We manipulated the time delays and accuracy of the control input from the wheel as a surrogate for manipulating the characteristics of neurons in the control loop. The observed speed–accuracy trade-offs motivated a theoretical framework consisting of two layers of control loops—a fast, but inaccurate, reflexive layer that corrects for bumps and a slow, but accurate, planning layer that computes the trajectory to follow—each with components having diverse speeds and accuracies within each physical level, such as nerve bundles containing axons with a wide range of sizes. Our model explains why the errors from two control loops are additive and shows how the errors in each control loop can be decomposed into the errors caused by the limited speeds and accuracies of the components. These results demonstrate that an appropriate diversity in the properties of neurons across layers helps to create "diversity-enabled sweet spots," so that both fast and accurate control is achieved using slow or inaccurate components.
Systems Level Model of Dietary Effects on Cognition via the Microbiome-Gut-Brain Axis
https://resolver.caltech.edu/CaltechAUTHORS:20220503-50866100
Year: 2021
DOI: 10.23919/ecc54610.2021.9655216
Intercommunication of the microbiome-gut-brain axis occurs through various signaling pathways including the vagus nerve, immune system, endocrine/paracrine, and bacteria-derived metabolites. But how these pathways integrate to influence cognition remains undefined. In this paper, we create a systems level mathematical framework composed of interconnected organ-level dynamical subsystems to increase conceptual understanding of how these subsystems contribute to cognitive performance. With this framework we propose that control of hippocampal long-term potentiation (LTP, hypothesized to correlate with cognitive performance) is influenced by interorgan signaling with diet as the external control input. Specifically, diet can influence synaptic strength (LTP) homeostatic conditions necessary for learning. The proposed model provides new qualitative information about the functional relationship between diet and cognitive performance. The results can give insight into the optimization of cognitive performance via diet in experimental animal models.
Flux exponent control predicts metabolic dynamics from network structure
https://resolver.caltech.edu/CaltechAUTHORS:20230327-442794000.3
Year: 2023
DOI: 10.1101/2023.03.23.533708
Metabolic dynamics such as stability of steady states, oscillations, lags and growth arrests in stress responses are important for microbial communities in human health, ecology, and metabolic engineering. Yet they are hard to model due to the sparse data available on trajectories of metabolic fluxes. For this reason, a constraint-based approach called flux control (e.g., flux balance analysis) was invented to split metabolic systems into known stoichiometry (plant) and unknown fluxes (controller), so that data can be incorporated as refined constraints, and optimization can be used to find behaviors in scenarios of interest. However, flux control can only capture steady-state fluxes well, limiting its application to scenarios with timescales of days or slower. To overcome this limitation and capture dynamic fluxes, this work proposes a novel constraint-based approach, flux exponent control (FEC). FEC uses a different plant-controller split between the activities of catalytic enzymes and their regulation through binding reactions. Since, as shown in previous works, binding reactions effectively regulate the exponents of fluxes, this yields the rule of FEC: cells regulate the exponents of fluxes, not the fluxes themselves as in flux control. In FEC, dynamic regulations of metabolic systems are solutions to optimal control problems that are computationally solvable via model predictive control. Glycolysis, which is known to have minute-timescale oscillations, is used as an example to demonstrate that FEC can capture metabolic dynamics from network structure. More generally, FEC brings metabolic dynamics to the realm of control system analysis and design.
Internal feedback in the cortical perception–action loop enables fast and accurate behavior
https://authors.library.caltech.edu/records/s2bj5-0ty91
Year: 2023
DOI: 10.1073/pnas.2300445120
PMCID: PMC10523540
Animals move smoothly and reliably in unpredictable environments. Models of sensorimotor control, drawing on control theory, have assumed that sensory information from the environment leads to actions, which then act back on the environment, creating a single, unidirectional perception–action loop. However, the sensorimotor loop contains internal delays in sensory and motor pathways, which can lead to unstable control. We show here that these delays can be compensated by internal feedback signals that flow backward, from motor toward sensory areas. This internal feedback is ubiquitous in neural sensorimotor systems, and we show how internal feedback compensates for internal delays. This is accomplished by filtering out self-generated and other predictable changes so that unpredicted, actionable information can be rapidly transmitted toward action by the fastest components, effectively compressing the sensory input to more efficiently use feedforward pathways: Tracts of fast, giant neurons necessarily convey less accurate signals than tracts with many smaller neurons, but they are crucial for fast and accurate behavior. We use a mathematically tractable control model to show that internal feedback has an indispensable role in achieving state estimation, localization of function (how different parts of the cortex control different parts of the body), and attention, all of which are crucial for effective sensorimotor control. This control model can explain anatomical, physiological, and behavioral observations, including motor signals in the visual cortex, heterogeneous kinetics of sensory receptors, and the presence of giant cells in the cortex of humans as well as internal feedback patterns and unexplained heterogeneity in neural systems.