CaltechAUTHORS: Book Chapter
https://feeds.library.caltech.edu/people/Doyle-J-C/book_section.rss
A Caltech Library Repository Feed (generated Thu, 13 Jun 2024 19:25:14 -0700)

Robustness with observers
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210784
Year: 1979
DOI: 10.1109/cdc.1978.267883
This paper describes an adjustment procedure for observer-based linear control systems which asymptotically achieves the same loop transfer functions (and hence the same relative stability, robustness, and disturbance rejection properties) as full-state feedback control implementations.

Robustness of multiloop linear feedback systems
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152210699
Year: 1979
DOI: 10.1109/cdc.1978.267885
This paper presents a new approach to the frequency-domain analysis of multiloop linear feedback systems. The properties of the return difference equation are examined using the concepts of singular values, singular vectors and the spectral norm of a matrix. A number of new tools for multiloop systems are developed which are analogous to those for scalar Nyquist and Bode analysis. These provide a generalization of scalar frequency-domain notions such as gain, bandwidth, stability margins and M-circles, and provide considerable insight into system robustness.

Performance and robustness analysis for structured uncertainty
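A hedged aside on the singular-value machinery described in "Robustness of multiloop linear feedback systems" above: the multiloop analogue of a scalar Bode magnitude plot is the pair of extreme singular values of the frequency response matrix, plotted over frequency. A minimal NumPy sketch with a made-up 2x2 plant (not taken from the paper):

```python
# Principal gains (extreme singular values) of a transfer matrix G(jw)
# on a frequency grid: the basic computation behind multiloop
# generalizations of Bode gain analysis.
import numpy as np

def principal_gains(G, omegas):
    """Return (sigma_max, sigma_min) of G(jw) at each frequency."""
    hi, lo = [], []
    for w in omegas:
        s = np.linalg.svd(G(1j * w), compute_uv=False)  # descending order
        hi.append(s[0])   # largest singular value: worst-case gain
        lo.append(s[-1])  # smallest singular value: best-case gain
    return np.array(hi), np.array(lo)

# Hypothetical plant: a simple 2x2 coupled first-order system.
def G(s):
    return np.array([[1.0 / (s + 1), 0.5 / (s + 2)],
                     [0.2 / (s + 1), 1.0 / (s + 3)]])

omegas = np.logspace(-2, 2, 50)
smax, smin = principal_gains(G, omegas)
```

Here the largest singular value bounds the gain over all input directions at each frequency and the smallest gives the minimum gain, generalizing the scalar notion of gain.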
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211137
Year: 1982
DOI: 10.1109/cdc.1982.268218
This paper introduces a nonconservative measure of performance for linear feedback systems in the face of structured uncertainty. This measure is based on a new matrix function, which we call the Structured Singular Value.

Synthesis of robust controllers and filters
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211312
Year: 1983
DOI: 10.1109/cdc.1983.269806
This paper outlines a general framework for analysis and synthesis of linear control systems and reports on a new solution to a very general L_∞/H_∞ optimal control problem.

On inner-outer and spectral factorizations
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211397
Year: 1984
DOI: 10.1109/cdc.1984.272438
This paper outlines methods for computing the key factorizations necessary to solve general H_2 and H_∞ linear optimal control problems.

Matrix interpolation and H_∞ performance bounds
https://resolver.caltech.edu/CaltechAUTHORS:20170724-174332489
Year: 1985
This paper introduces a methodology for obtaining bounds on the achievable performance of a multivariable control system involving tradeoffs between potentially conflicting performance requirements.

The general distance problem in H_∞ synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190308-152211485
Year: 1985
DOI: 10.1109/cdc.1985.268720
The general distance problem which arises in the general H_∞ optimal control problem is considered. The existence of an optimal solution is proved and the expression for the optimal norm γ_o is obtained from a somewhat abstract operator point of view. An iterative scheme, called γ-iteration, is introduced which reduces the general distance problem to a standard best approximation problem. Bounds for γ_o are also derived. The γ-iteration is viewed as a problem of finding the zero crossing of a function. This function is shown to be continuous, monotonically decreasing, convex, and bounded by some very simple functions. These properties make it possible to obtain very rapid convergence of the iterative process. The issue of model reduction in H_∞ synthesis is also addressed.

Structured uncertainty in control system design
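The γ-iteration described in "The general distance problem in H_∞ synthesis" above amounts to finding the zero crossing of a continuous, monotonically decreasing function of γ. As a hedged sketch of that outer loop only, the snippet below bisects a made-up surrogate function with those properties; in the actual algorithm each function evaluation would involve solving a best approximation problem:

```python
# Sketch of the zero-crossing search underlying a gamma-iteration:
# bisection on a continuous, monotonically decreasing function f.
# Here f(g) = 1/g - 1 is an invented surrogate with root at g = 1.
def find_zero_crossing(f, lo, hi, tol=1e-10):
    """Bisect for the root of a monotonically decreasing f on [lo, hi]."""
    assert f(lo) > 0 > f(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid   # root lies to the right
        else:
            hi = mid   # root lies to the left
    return 0.5 * (lo + hi)

gamma_opt = find_zero_crossing(lambda g: 1.0 / g - 1.0, 0.1, 10.0)
```

Monotonicity and convexity, as noted in the abstract, permit much faster root finders than plain bisection; bisection is shown only because it is the simplest correct choice.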
https://resolver.caltech.edu/CaltechAUTHORS:20170719-174055387
Year: 1985
DOI: 10.1109/CDC.1985.268842
This paper reviews control system analysis and synthesis techniques for robust performance with structured uncertainty in the form of multiple unstructured perturbations and parameter variations. The structured singular value, µ, plays a central role. The case where parameter variations are known to be real is considered.

Quantitative Feedback Theory (QFT) and Robust Control
https://resolver.caltech.edu/CaltechAUTHORS:20170718-173812529
Year: 1986
QFT, a theory developed by Horowitz [H3], is claimed by its advocates to provide a complete and general treatment of feedback design for highly uncertain multi-input multi-output (MIMO) systems. This paper reviews QFT and shows that while the philosophy behind QFT is attractive, the claims for the theory are unjustified. In particular, counterexamples are given for the main theorem of QFT on which the claims are based. This is in spite of the severe assumptions (no right-half-plane zeros and fixed relative degree) that QFT places on the plant model.

Design examples using µ-synthesis: Space shuttle lateral axis FCS during reentry
https://resolver.caltech.edu/CaltechAUTHORS:20190312-154519116
Year: 1986
DOI: 10.1109/CDC.1986.267482
This paper studies the application of Structured Singular Values (SSV or µ) for analysis and synthesis of the Space Shuttle lateral axis flight control system (FCS) during reentry. While this is a fairly standard FCS problem in most respects, the aircraft model is highly uncertain due to the poorly known aerodynamic characteristics (e.g. aero coefficients). Comparisons are made of the conventional FCS with alternatives based on H∞ optimal control and µ-synthesis. The problem as formulated is particularly interesting and challenging because the uncertainty is large and highly structured.

When is classical loop shaping H∞-optimal?
https://resolver.caltech.edu/CaltechAUTHORS:20190313-142557997
Year: 1987
DOI: 10.23919/ACC.1987.4789392
This paper examines conditions under which a given SISO LTI control system is H∞-optimal with respect to weighted combinations of its sensitivity function and its complementary sensitivity function. The specific weighting functions considered are defined in terms of the sensitivity and complementary sensitivity functions. We show that a large class of practical controllers are in fact H∞-optimal, including typical stable controllers.

Control of Plants with Input Saturation Nonlinearities
https://resolver.caltech.edu/CaltechAUTHORS:20190320-075948577
Year: 1987
DOI: 10.23919/ACC.1987.4789464
This paper considers control design for systems with input magnitude saturation. Four examples, two SISO and two MIMO, are used to illustrate the properties of several existing schemes. A new method based on a modification of conventional anti-windup compensation is introduced. It is assumed that the reader is familiar with the problem of integral windup for saturating plants and with conventional schemes for dealing with it.

Uncertain Multivariable Systems from a State Space Perspective
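A hedged sketch of the conventional back-calculation anti-windup scheme that "Control of Plants with Input Saturation Nonlinearities" above takes as its starting point. The plant (a pure integrator), gains, and saturation limit are all invented for illustration and are not from the paper:

```python
# PI control of an integrator plant dx/dt = sat(u) with back-calculation
# anti-windup: while the actuator is saturated, the term kaw*(u_sat - u)
# bleeds charge off the integrator so it cannot wind up.
def simulate_pi(kp, ki, kaw, r=1.0, umax=0.2, dt=0.01, steps=2000):
    """Simulate a step response; return (final state, peak state)."""
    x, integ, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = r - x
        u = kp * e + ki * integ
        u_sat = max(-umax, min(umax, u))       # actuator saturation
        integ += dt * (e + kaw * (u_sat - u))  # anti-windup correction
        x += dt * u_sat                        # integrator plant
        peak = max(peak, x)
    return x, peak

x_aw, peak_aw = simulate_pi(2.0, 1.0, kaw=5.0)  # with anti-windup
x_no, peak_no = simulate_pi(2.0, 1.0, kaw=0.0)  # plain PI, winds up
```

With kaw = 0 the integrator accumulates error during the long saturated ramp and produces a large overshoot; the back-calculation term suppresses most of it.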
https://resolver.caltech.edu/CaltechAUTHORS:20190319-110904871
Year: 1987
DOI: 10.23919/ACC.1987.4789666
This paper introduces some new extensions of μ analysis for LTI systems with structured uncertainty to time-varying and nonlinear systems.

Robust Control with an H_2 Performance Objective
https://resolver.caltech.edu/CaltechAUTHORS:20190315-134650754
Year: 1987
DOI: 10.23919/ACC.1987.4789665
This paper considers the problem of designing robust controllers with an H_2 performance objective. A modified version of μ-synthesis is proposed and compared with two alternative schemes.

Relations between H_∞ and risk sensitive controllers
https://resolver.caltech.edu/CaltechAUTHORS:20200930-113054991
Year: 1988
DOI: 10.1007/bfb0042196
The motivation for designing controllers to satisfy H_∞-norm bounds on specified closed-loop transfer functions is briefly discussed. The characterization of all such controllers is then described and it is shown that the controller that maximizes a corresponding entropy integral is in fact the steady state risk sensitive optimal controller. This gives a direct relation between robust and stochastic control.

Robustness in the Presence of Joint Parametric Uncertainty and Unmodeled Dynamics
https://resolver.caltech.edu/CaltechAUTHORS:20190313-074303113
Year: 1988
DOI: 10.23919/ACC.1988.4789902
It is shown that, in the case of joint real parametric and complex uncertainty, Doyle's structured singular value can be obtained as the solution of a smooth constrained optimization problem. While this problem may have local maxima, an improved computable upper bound to the structured singular value is derived, leading to a sufficient condition for robust stability and performance.

A General Statement of Structured Singular Value Concepts
https://resolver.caltech.edu/CaltechAUTHORS:20170712-164435070
Year: 1988
Some key concepts of structured singular value theory for the stability and performance-robustness analysis of linear time-invariant multivariable systems are stated. Using a set-invariance principle, the theory is then generalized to allow for nonlinear and/or time-varying nominal systems and uncertainties. The general theory is then re-specialized to the case of nominally linear time-invariant systems subject to L2-induced-norm bounded uncertainties.

State-space solutions to standard H_2 and H_∞ control problems
https://resolver.caltech.edu/CaltechAUTHORS:20170712-162848745
Year: 1988
DOI: 10.23919/ACC.1988.4789992
Simple state-space formulas are presented for a controller solving a standard H_∞-problem. The controller has the same state-dimension as the plant, its computation involves only two Riccati equations, and it has a separation structure reminiscent of classical LQG (i.e., H_2) theory. This paper is also intended to be of tutorial value, so a standard H_2-solution is developed in parallel.

Structured singular value with repeated scalar blocks
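The two-Riccati separation structure mentioned in "State-space solutions to standard H_2 and H_∞ control problems" above can be illustrated in its classical H_2 (LQG) special case: one Riccati equation gives the state-feedback gain, its dual gives the estimator gain. A minimal SciPy sketch with a made-up second-order plant (all matrices are invented for illustration):

```python
# Classical LQG separation: control Riccati -> gain K, filter Riccati -> L.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # hypothetical stable plant
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)   # state cost
R = np.eye(1)   # control cost
W = np.eye(2)   # process noise covariance
V = np.eye(1)   # measurement noise covariance

# Control Riccati equation: A'X + XA - XB R^-1 B'X + Q = 0.
X = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ X)           # state-feedback gain

# Filter Riccati equation (the dual problem) -> observer gain L.
Y = solve_continuous_are(A.T, C.T, W, V)
L = Y @ C.T @ np.linalg.inv(V)
```

The LQG controller is then the observer driven by L with the feedback u = -K x_hat; per the abstract, the H_∞ solution retains this same two-Riccati shape.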
https://resolver.caltech.edu/CaltechAUTHORS:20170712-154141091
Year: 1988
The structured singular value, μ, is an important linear algebra tool to study a class of matrix perturbation problems, [Doy]. It is useful for analyzing the robustness of stability and performance of dynamical systems [DoyWS]. This paper studies uncertainty structures involving repeated scalar parameters in more detail than in [Doy]. In [DoyP], it was shown that the frequency domain μ tests of [DoyWS] can conceptually be reduced to a single constant matrix μ test, but the uncertainty structure must be augmented with a large repeated scalar block. This paper studies the properties of μ and the upper bound with these types of uncertainty blocks, and compares the frequency domain vs. state space μ based tests, assuming that the upper bound is what can be reliably computed.

On the Caltech Experimental Large Space Structure
https://resolver.caltech.edu/CaltechAUTHORS:20190313-113336008
Year: 1988
DOI: 10.23919/ACC.1988.4789995
This paper focuses on a large space structure experiment developed at the California Institute of Technology. The main thrust of the experiment is to address the identification and robust control issues associated with large space structures by capturing their characteristics in the laboratory. The design, modeling, identification and control objectives are discussed within the paper.

Controller Order Reduction with Guaranteed Stability and Performance
https://resolver.caltech.edu/CaltechAUTHORS:20190315-152803266
Year: 1988
DOI: 10.23919/ACC.1988.4789993
In this paper we consider the problem of controller order reduction for control design for robust performance. In practical control design it may be important to have low-order controllers. For example, one may want to gain schedule a series of LTI (linear, time-invariant) controllers, or give simple physical interpretations to the control dynamics. When solving practical design problems using, say, H∞ software it is common to produce controllers of high order, equal to the sum of the orders of the plant and each of the weighting functions. However, there may be lower-order controllers which stabilize the plant and provide satisfactory H∞ closed-loop performance. The objectives of a method for controller order reduction within the H∞ framework, then, should be to find low-order controllers which stabilize a given plant and provide satisfactory H∞ performance. Ideally, the method should apply to a large class of problems, be easy to implement and be guaranteed to work.

A power method for the structured singular value
https://resolver.caltech.edu/CaltechAUTHORS:20170712-160003131
Year: 1988
DOI: 10.1109/CDC.1988.194710
An iterative algorithm is presented to compute lower bounds for the structured singular value (µ). The algorithm resembles a mixture of power methods for eigenvalues and singular values, since the structured singular value can be viewed as a generalization of both. If the algorithm converges, a lower bound for µ results. The authors prove that µ is always an equilibrium point of the algorithm. However, since in general there are many equilibrium points, some heuristic ideas to achieve convergence are presented. Extensive numerical experience with the algorithm is discussed.

Model Invalidation: A Connection between Robust Control and Identification
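The power algorithm in "A power method for the structured singular value" above generalizes power iteration for eigenvalues and singular values. As a hedged illustration of the simplest special case only, one full complex uncertainty block, where µ(M) reduces to the largest singular value of M, the sketch below alternates matrix-vector products with M and its conjugate transpose:

```python
# Power iteration for the largest singular value: the unstructured
# special case of a mu lower-bound power algorithm. Each step needs
# only matrix-vector products, as in the paper's general scheme.
import numpy as np

def sigma_max_power(M, iters=200, seed=0):
    """Estimate the largest singular value of M by power iteration."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal(M.shape[1])
    for _ in range(iters):
        a = M @ b
        a /= np.linalg.norm(a)
        b = M.conj().T @ a
        s = np.linalg.norm(b)   # current singular-value estimate
        b /= s
    return s

M = np.array([[3.0, 1.0], [0.0, 2.0]])  # made-up example matrix
est = sigma_max_power(M)
```

The structured algorithm replaces the normalization steps with block-wise alignments matching the uncertainty structure, which is why convergence is no longer guaranteed in general.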
https://resolver.caltech.edu/CaltechAUTHORS:20190313-075623254
Year: 1989
DOI: 10.23919/ACC.1989.4790413
This paper begins to address the gap between the models used in robust control theory and those obtained from identification experiments by considering the connection between uncertain models and data. The model invalidation problem considered here is: given experimental data and a model with both additive noise and norm-bounded perturbations, is it possible that the model could produce the input/output data?

Optimal control with mixed H_2 and H∞ performance objectives
https://resolver.caltech.edu/CaltechAUTHORS:20190313-095844804
Year: 1989
DOI: 10.23919/ACC.1989.4790529
This paper considers the analysis and synthesis of control systems subject to two types of disturbance signals: signals with bounded power spectral density and signals with bounded power. The resulting control problem involves minimizing a mixed H_2 and H∞ norm of the system. It is shown that the controller shares a separation property similar to that of the pure H_2 or H∞ controllers. It is also shown that the mixed problem reduces naturally to the pure H_2 or H∞ problem in special cases. Some necessary and sufficient conditions are obtained for the existence of a solution to the mixed problem. Explicit state-space formulae are given for the optimal controllers.

Identification for Robust Control of Flexible Structures
https://resolver.caltech.edu/CaltechAUTHORS:20190313-092315130
Year: 1989
DOI: 10.23919/ACC.1989.4790620
An accurate multivariable transfer function model of an experimental structure is required for research involving robust control of flexible structures. Initially, a multi-input/multi-output model of the structure is generated using the finite element method. This model was insufficient due to its variation from the experimental data. Therefore, Chebyshev polynomials are employed to fit the data with single-input/multi-output transfer function models. Combining these leads to a multivariable model with more modes than the original finite element model. To find a physically motivated model, an ad hoc model reduction technique which uses a priori knowledge of the structure is developed. The ad hoc approach is compared with balanced realization model reduction to determine its benefits. Plots of selected transfer function models and experimental data are included.

Vibration damping and robust control of the JPL/AFAL experiment using µ-synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20190313-102804933
Year: 1989
DOI: 10.1109/CDC.1989.70668
The technology for controlling elastic deformations of flexible structures is one of the key considerations for future space initiatives. A vital area needed to achieve this objective is the development of a control design methodology applicable to future structures. The µ-synthesis technique is employed to design a high-performance vibration attenuation controller for the JPL/AFAL experimental flexible antenna structure. The results presented deal primarily with control of the first two global flexible modes using only two hub actuators and two hub sensors. Implementation of the multivariable control laws based on a finite-element model is presented. All results are from actual implementation on the JPL/AFAL flexible structure testbed.

Mixed H_2 and H∞ control
https://resolver.caltech.edu/CaltechAUTHORS:20190313-085246171
Year: 1990
DOI: 10.23919/ACC.1990.4791177
Mixed H_2 and H∞ norm analysis and synthesis problems are considered in this paper. It is shown that the mixed norm analysis combined with structured uncertainty can be used to give bounds on robust H_2 and H∞ performance. It is also shown that the mixed norm controller shares a separation property similar to those of pure H_2 or H∞ controllers. The obvious advantage of a mixed norm is that it gives a natural trade-off between H_2 performance and H∞ performance, and provides a potential framework for extending the μ-synthesis framework to address robust H_2 performance. A simple example is used to motivate the possible advantages such a framework might have over a pure H∞ theory.

Collocated versus Non-collocated Multivariable Control for Flexible Structure
https://resolver.caltech.edu/CaltechAUTHORS:20190313-090053071
Year: 1990
DOI: 10.23919/ACC.1990.4791064
Future space structures have many closely spaced, lightly damped natural frequencies throughout the frequency domain. To achieve desired performance objectives, a number of these modes must be actively controlled. For control, a combination of collocated and noncollocated sensors and actuators will be employed. The control designs will be formulated based on models which have inaccuracies due to unmodeled dynamics, and variations in damping levels, natural frequencies and mode shapes. Therefore, along with achieving the performance objectives, the control design must be robust to a variety of uncertainties. This paper focuses on the benefits and limitations associated with multivariable control design using noncollocated versus collocated sensors and actuators. We address the question of whether performance is restricted due to the noncollocation of the sensors and actuators or the uncertainty associated with modeling of the flexible structures. Control laws are formulated based on models of the system and evaluated analytically and experimentally. Results of implementation of these control laws on the Caltech flexible structure are presented.

Towards a Methodology for Robust Parameter Identification
https://resolver.caltech.edu/CaltechAUTHORS:20190313-083856627
Year: 1990
DOI: 10.23919/ACC.1990.4791156
The paper considers the problem of estimating, from experimental data, real parameters for a model with uncertainty in the form of both additive noise and norm bounded perturbations. Such models frequently arise in robust control theory, and a framework is introduced for the consideration of experimental data in robust control analysis problems. If the analysis tools applied include robust stability tests for real parameter variations (real μ), then the framework can be used to address the problem of "robust" parameter identification. While the techniques discussed here can quickly become computationally overwhelming when applied to physical systems and real data, the approach introduces a new way of looking at the identification problem and may be helpful in arriving at a more tractable methodology.

Robustness and performance tradeoffs in control design for flexible structures
https://resolver.caltech.edu/CaltechAUTHORS:20190313-110937657
Year: 1990
DOI: 10.1109/CDC.1990.203334
The design of control laws for the Caltech flexible structure experiment using a nominal design model with varying levels of uncertainty is considered. A brief overview of structured singular value (µ) analysis, H∞ control design, and µ-synthesis design techniques is presented. Tradeoffs associated with uncertainty modeling of flexible structures are discussed. A series of controllers are synthesized based on different uncertainty descriptions. It is shown that an improper selection of nominal and uncertainty models may lead to unstable or poorly performing controllers on the actual system. In contrast, if descriptions of uncertainty are overly conservative, performance of the closed-loop system may be severely limited. Experimental results on control laws synthesized for different uncertainty levels on the Caltech structure are presented.

Computation of µ with real and complex uncertainties
https://resolver.caltech.edu/CaltechAUTHORS:20190313-112035048
Year: 1990
DOI: 10.1109/CDC.1990.203804
The robustness analysis of system performance is one of the key issues in control theory, and one approach is to reduce this problem to that of computing the structured singular value, µ. When real parametric uncertainty is included, then µ must be computed with respect to a block structure containing both real and complex uncertainties. It is shown that µ is equivalent to a real eigenvalue maximization problem, and a power algorithm is developed to solve this problem. The algorithm has the property that µ is (almost) always an equilibrium point of the algorithm, and that whenever the algorithm converges a lower bound for µ results. This scheme has been found to have fairly good convergence properties. Each iteration of the scheme is very cheap, requiring only such operations as matrix-vector multiplications and vector inner products, and the method is sufficiently general to handle arbitrary numbers of repeated real scalars, repeated complex scalars, and full complex blocks.

The Process of Control Design for the NASA Langley Minimast Structure
https://resolver.caltech.edu/CaltechAUTHORS:20170619-163711765
Year: 1991
DOI: 10.23919/ACC.1991.4791434
The NASA Langley Minimast Facility is an experimental flexible structure designed to emulate future large space structures. The Minimast system consists of an 18-bay, 20-meter-long truss beam structure which is cantilevered at its base from a rigid foundation. It is desired to use active control to attenuate the response of the structure at bays 10 and 18 due to impulse disturbances at bay 9 while minimizing actuator torque commanded from the torque wheel actuators. This paper details the design process used to select sensors for feedback and performance weights on the Minimast facility. Initially, a series of controllers are synthesized using H2 optimal control techniques for the given structural model and a variety of sensor locations and performance criteria, to determine the "best" displacement sensor and/or accelerometers to be used for feedback. Upon selection of the sensors, controllers are formulated to determine the effect of using a reduced-order model of the Minimast structure instead of the higher-order structural analysis model for control design, and the relationship between the actuator torque level and the closed-loop performance. Based on this information, controllers are designed using μ-synthesis techniques and implemented on the Minimast structure. Results of the implementation of these controllers on the Minimast experimental facility are presented.

Development of Advanced Control Design Software for Researchers and Engineers
https://resolver.caltech.edu/CaltechAUTHORS:20190313-084558863
Year: 1991
DOI: 10.23919/ACC.1991.4791529
This paper provides a brief description of The μ Analysis and Synthesis Toolbox (μ-Tools), an advanced control design toolbox to be used in conjunction with MATLAB.

Stabilization of LFT systems
https://resolver.caltech.edu/CaltechAUTHORS:20190315-095540812
Year: 1991
DOI: 10.1109/CDC.1991.261575
The problem of parametrizing all stabilizing controllers for general linear fractional transformation (LFT) systems is studied. The LFT systems can be variously interpreted as multidimensional systems or uncertain systems, and the controller is allowed to have the same dependence on the frequency/uncertainty structure as the plant. For multidimensional systems, this means that the controller is allowed dynamic feedback, while the uncertain system case can be given a gain scheduling interpretation. Both µ and Q stability are considered, although the latter is emphasized. In both cases, the output feedback problem is reduced by a separation argument to two simpler problems, involving the dual problems of full information and full control. For Q stability, these problems can be characterized completely in terms of linear matrix inequalities. In the standard 1D system case with no uncertainty, the results in the present work reduce to the standard parametrization of D.C. Youla, H.A. Jabr and J.J. Bongiorno (1976), although the development appears to be much simpler, and does not require coprime factorizations.

µ analysis with real parametric uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20190318-131438205
Year: 1991
DOI: 10.1109/CDC.1991.261579
The authors give a broad overview, from an LFT (linear fractional transformation) µ perspective, of some of the theoretical and practical issues associated with robustness in the presence of real parametric uncertainty, with a focus on computation. Recent results on the properties of µ in the mixed case are reviewed, including issues of NP-completeness, continuity, computation of bounds, the equivalence of µ and its bounds, and some direct comparisons with Kharitonov-type analysis methods. In addition, some advances in the computational aspects of the problem, including a branch-and-bound algorithm, are briefly presented. Although the mixed µ problem may have inherently combinatoric worst-case behavior, practical algorithms with modest computational requirements can be developed for problems of medium size (<100 parameters) that are of engineering interest.

Review of LFTs, LMIs, and μ
https://resolver.caltech.edu/CaltechAUTHORS:20170619-173047637
Year: 1991
DOI: 10.1109/CDC.1991.261572
The purpose of this paper is to present a tutorial overview of Linear Fractional Transformations (LFT) and the role of the Structured Singular Value, μ, and Linear Matrix Inequalities (LMI) in solving LFT problems.

Model reduction of LFT systems
https://resolver.caltech.edu/CaltechAUTHORS:20190319-091249008
Year: 1991
DOI: 10.1109/CDC.1991.261574
The notion of balanced realizations and balanced truncation model reduction, including guaranteed error bounds, is extended to general Q-stable linear fractional transformations (LFTs). Since both multidimensional and uncertain systems are naturally represented using LFTs, this can be interpreted either as doing state order reduction for multidimensional systems or as uncertainty simplification in the case of uncertain systems. The role of Lyapunov equations in the 1D theory is replaced by linear matrix inequalities (LMIs). All proofs are given in detail as they are very short and greatly simplify even the standard 1D case.

Practical computation of the mixed μ problem
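For the standard 1D case that "Model reduction of LFT systems" above generalizes, balanced truncation rests on the two Lyapunov equations the abstract mentions. A hedged SciPy sketch of the square-root variant, with an invented stable three-state system:

```python
# Square-root balanced truncation of a stable LTI system (A, B, C).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce to order r; return (Ar, Br, Cr, Hankel singular values)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)                    # Hankel singular values
    S1 = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S1                         # reduction map
    Ti = S1 @ U[:, :r].T @ Lo.T                    # its left inverse
    return Ti @ A @ T, Ti @ B, C @ T, hsv

# Made-up system: two dominant slow modes, one weakly coupled fast mode.
A = np.diag([-1.0, -2.0, -50.0])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 1.0, 0.1]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
```

The guaranteed error bound in the 1D theory is twice the sum of the truncated Hankel singular values; in the paper's LFT setting the gramian Lyapunov equations become LMIs.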
https://resolver.caltech.edu/CaltechAUTHORS:20190313-082229918
Year: 1992
DOI: 10.23919/ACC.1992.4792521
Upper and lower bounds for the mixed μ problem have recently been developed, and this paper examines the computational aspects of these bounds. In particular a practical algorithm is developed to compute the bounds. This has been implemented as a Matlab function (m-file), and will be available shortly in a test version in conjunction with the μ-Tools toolbox. The algorithm performance is very encouraging, both in terms of accuracy of the resulting bounds, and growth rate in required computation with problem size. In particular it appears that one can handle medium size problems (less than 100 perturbations) with reasonable computational requirements.

Synthesizing robust mode shapes with μ and implicit model following
https://resolver.caltech.edu/CaltechAUTHORS:20170606-173114371
Year: 1992
DOI: 10.1109/CCA.1992.269732
Control synthesis problems involving assignment of closed-loop mode shapes using an implicit model following (IMF) structure are considered in the context of H_2, H∞, and μ-synthesis theories. An extension to the dynamic output feedback case is given for the quadratic or H_2 IMF problem. The IMF problem is embedded within the framework of μ control theory, and extensions for including uncertainty are discussed. A robust synthesis methodology is presented using μ theory. An application of the robust IMF synthesis methodology to mode shape assignment for the longitudinal axis of a helicopter is demonstrated.

Mixed µ upper bound computation
https://resolver.caltech.edu/CaltechAUTHORS:20190319-085951997
Year: 1992
DOI: 10.1109/CDC.1992.371241
Computation of the mixed real and complex µ upper bound expressed in terms of linear matrix inequalities (LMIs) is considered. Two existing methods are used, the Osborne (1960) method for balancing matrices, and the method of centers as proposed by Boyd and El Ghaoui (1991). These methods are compared, and a hybrid algorithm that combines the best features of each is proposed. Numerical experiments suggest that this hybrid algorithm provides an efficient method to compute the upper bound for mixed µ.

H∞ control of LFT systems: an LMI approach
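A hedged sketch of the Osborne balancing ingredient referenced in "Mixed µ upper bound computation" above: a diagonal similarity scaling D M D^-1 chosen to equalize off-diagonal row and column norms, which tends to reduce norm-based upper bounds. The 2x2 example matrix is invented:

```python
# Osborne-style balancing: sweep over indices, rescaling row i and
# column i so their off-diagonal norms match. The result is similar
# to M (same eigenvalues) but typically has much smaller norm.
import numpy as np

def osborne_balance(M, sweeps=50):
    """Balance M by diagonal similarity; return (d, diag(d) @ M @ diag(1/d))."""
    M = M.astype(float)
    n = M.shape[0]
    d = np.ones(n)
    for _ in range(sweeps):
        for i in range(n):
            r = np.linalg.norm(np.delete(M[i, :], i))  # off-diag row norm
            c = np.linalg.norm(np.delete(M[:, i], i))  # off-diag col norm
            if r > 0 and c > 0:
                f = np.sqrt(r / c)
                M[i, :] /= f          # scale row i down ...
                M[:, i] *= f          # ... and column i up
                d[i] /= f
    return d, M

M0 = np.array([[1.0, 100.0], [0.01, 2.0]])  # badly scaled example
d, Mb = osborne_balance(M0)
```

In the µ upper bound, the free diagonal D plays exactly this role of a similarity scaling, so cheap balancing gives a good starting point for the LMI-based optimization.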
https://resolver.caltech.edu/CaltechAUTHORS:20190319-080533883
Year: 1992
DOI: 10.1109/CDC.1992.371449
The standard H∞ control problem for linear state-space systems is extended to general LFT systems, which involve an LFT (linear fractional transformation) on a structured free parameter Δ and can be interpreted as uncertain systems with structured perturbations. Two generalizations of H∞ performance are considered, referred to as µ-performance and Q-performance, with the latter implying the former. Necessary and sufficient conditions for a system to have Q-performance, and for the existence of a controller yielding Q-performance, can be expressed in terms of structured linear matrix inequalities (LMIs).

Overview of robust stability and performance methods of systems with structured mixed perturbations
https://resolver.caltech.edu/CaltechAUTHORS:20190315-160750146
Year: 1992
DOI: 10.1109/CDC.1992.371246
Robust stability and performance analysis results for systems in the presence of structured mixed perturbations are outlined. Attention is limited to scalar perturbations. The goal is to develop succinctly an overall description of state of the art techniques in analyzing systems with mixed perturbations, and to point the reader to sources in the literature where more details and proofs can be found.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-160750146
The parallel projection operators of a nonlinear feedback system
https://resolver.caltech.edu/CaltechAUTHORS:20190315-103331839
Year: 1992
DOI: 10.1109/CDC.1992.371556
The authors define and study a pair of nonlinear parallel projection operators associated with a nonlinear feedback system. The input-output L_2-stability of a feedback system is shown to be equivalent to a coordinatization of the input and output spaces, which is in turn equivalent to the existence of a pair of nonlinear parallel projection operators onto the graph of the plant and the inverse graph of the controller. These projections have equal norms whenever one of the feedback elements is linear. A bound on this norm is given in the case of passive systems with unity negative feedback.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-103331839
Computational complexity of μ calculation
https://resolver.caltech.edu/CaltechAUTHORS:20190320-132001216
Year: 1993
DOI: 10.23919/ACC.1993.4793162
The structured singular value μ measures the robustness of uncertain systems. Numerous researchers over the last decade have worked on developing efficient methods for computing μ. This paper considers the complexity of calculating μ with general mixed real/complex uncertainty in the framework of combinatorial complexity theory. In particular, it is proved that the μ recognition problem with either pure real or mixed real/complex uncertainty is NP-hard. This strongly suggests that it is futile to pursue exact methods for calculating μ of general systems with pure real or mixed uncertainty for other than small problems.
https://resolver.caltech.edu/CaltechAUTHORS:20190320-132001216
Model reduction of behavioural systems
https://resolver.caltech.edu/CaltechAUTHORS:20190320-142555896
Year: 1993
DOI: 10.1109/CDC.1993.325889
We consider model reduction of uncertain behavioural models. Machinery for gap-metric model reduction and multidimensional model reduction using linear matrix inequalities is extended to these behavioural models. The goal is a systematic method for reducing the complexity of uncertain components in hierarchically developed models which approximates the behavior of the full-order system. This paper focuses on component model reduction that preserves stability under interconnection.
https://resolver.caltech.edu/CaltechAUTHORS:20190320-142555896
H∞ control of nonlinear systems via output feedback: a class of controllers
https://resolver.caltech.edu/CaltechAUTHORS:20190319-084913906
Year: 1993
DOI: 10.1109/CDC.1993.325170
The standard state-space solutions to the H∞ control problem for linear time-invariant systems are generalized to nonlinear time-invariant systems. A class of nonlinear H∞ controllers is parametrized as nonlinear fractional transformations on contractive, stable free nonlinear parameters. As in the linear case, the H∞ control problem is solved by its reduction to state feedback and output injection problems, together with a separation argument. Sufficient conditions for the H∞ control problem to be solvable are also derived with this machinery. Solvability of the nonlinear H∞ control problem requires positive definite solutions to two decoupled Hamilton-Jacobi inequalities, and these two solutions must satisfy an additional coupling condition. An illustrative example, which deals with a passive plant, is given.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-084913906
From data to control
https://resolver.caltech.edu/CaltechAUTHORS:20201023-102058277
Year: 1994
DOI: 10.1007/bfb0036249
The basic control problem for a given process can be stated as follows: Given some prior information about the process and a set of finite data, design a feedback controller that meets given performance specifications. Traditionally, this problem has been tackled by the introduction of an intermediate step, namely finding a model which describes the process in some precise sense, and then designing a robust controller using the model as the nominal plant.
https://resolver.caltech.edu/CaltechAUTHORS:20201023-102058277
H∞ control of nonlinear systems: a convex characterization
https://resolver.caltech.edu/CaltechAUTHORS:20190319-113943191
Year: 1994
DOI: 10.1109/ACC.1994.752446
The so-called nonlinear H∞-control problem in state space is considered with an emphasis on developing machinery with promising computational properties. Both state feedback and output feedback H∞-control problems for a class of nonlinear systems are characterized in terms of continuous positive definite solutions of algebraic nonlinear matrix inequalities (NLMIs) which are convex feasibility problems.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-113943191
Behavioral approach to robustness analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190319-080011582
Year: 1994
DOI: 10.1109/ACC.1994.735075
This paper introduces a general and powerful framework for modeling and analysis of uncertain systems. One immediate concrete result of this work is a practical method for computing robust performance in the presence of norm-bounded perturbations and both norm-bounded and white-noise disturbances.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-080011582
Linear matrix inequalities in analysis with multipliers
https://resolver.caltech.edu/CaltechAUTHORS:20190318-152115268
Year: 1994
DOI: 10.1109/ACC.1994.752254
We show that a number of standard robustness tests can be reinterpreted as special cases of the application of the passivity theorem with the appropriate choice of multipliers. We show how these tests can be performed using convex optimization over linear matrix inequalities.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-152115268
Finite time horizon robust performance analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190315-155217994
Year: 1994
DOI: 10.1109/CDC.1994.411312
Robust performance problems for linear time-varying systems over a finite horizon are reduced to the computation of the structured singular value of a finite matrix. Connections are established between the time-domain and frequency-domain tests.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-155217994
Unifying robustness analysis and system ID
https://resolver.caltech.edu/CaltechAUTHORS:20190319-105001140
Year: 1994
DOI: 10.1109/CDC.1994.411725
A unified systems analysis framework is presented, which includes conventional robustness analysis, model validation, and system identification as special cases and thus shows them to be instances of the same fundamental problem. A concrete version of this framework is developed for the linear case, based on a generalized structured singular value. This unification forms the basis for the use of common computational tools and a more natural interplay between modeling, identification, and robustness analysis.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-105001140
Robustness analysis and synthesis for uncertain nonlinear systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-135325265
Year: 1994
DOI: 10.1109/CDC.1994.410855
Stability and performance robustness analysis for a class of uncertain nonlinear systems with bounded structured uncertainties is characterized in terms of various types of nonlinear matrix inequalities (NLMIs). As in the linear case, scalings or multipliers are used to find Lyapunov functions that give sufficient conditions, and the resulting NLMIs yield convex feasibility problems. For these problems, robustness analysis is essentially no harder than stability analysis of the system with no uncertainty. Sufficient conditions for the solvability of related robust synthesis problems are developed in terms of NLMIs as well.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-135325265
Analysis of implicitly defined systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-103730568
Year: 1994
DOI: 10.1109/CDC.1994.411726
An alternative paradigm is considered for robustness analysis, where systems are described in implicit form. The central role in this formulation is played by equations rather than input-output maps. The framework for robust stability analysis is appropriately extended, and a necessary and sufficient condition is proved for the case of arbitrary structured norm bounded perturbations. Finally, the constant matrix version of this framework is considered, leading to an extension of the structured singular value µ; the corresponding upper bound theory is developed fully.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-103730568
Application of multivariable feedback methods to intravenous anesthetic pharmacodynamics
https://resolver.caltech.edu/CaltechAUTHORS:20190318-102253057
Year: 1995
DOI: 10.1109/ACC.1995.529765
Continuous infusions of intravenous anesthetics are becoming increasingly popular during surgical procedures, largely because they allow relatively precise, consistent control of anesthetic depth compared with injection techniques. In this paper we investigate the main issues involved in the development of automatic intravenous anesthesia delivery systems in the context of robust multivariable control. We present a pharmacodynamic model that may be suitable for closed-loop control, and discuss clinical data collected from human subjects during actual surgical conditions with the anesthetic propofol.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-102253057
On design methods for sampled-data systems
https://resolver.caltech.edu/CaltechAUTHORS:20190318-132618553
Year: 1995
DOI: 10.1109/ACC.1995.531236
In this paper we compare, via example, the standard approaches to sampled-data design with recently developed direct design methods for these hybrid systems. Simple intuitive examples are used to show that traditional design heuristics provide no performance guarantees whatsoever. Even when the sampling rate is a design parameter that can be chosen as fast as desired, using design heuristics can lead to either severe performance degradation or extreme over-design. These effects are apparently well-known to practitioners, but may not be widely appreciated by the control community at large. The paper contains no new theoretical results and is intended to be of a tutorial nature.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-132618553
Realizations of uncertain systems and formal power series
https://resolver.caltech.edu/CaltechAUTHORS:20190315-102303166
Year: 1995
DOI: 10.1109/ACC.1995.520997
Rational functions of several noncommuting indeterminates arise naturally in robust control when studying systems with structured uncertainty. Linear fractional transformations (LFTs) provide a convenient way of obtaining realizations of such systems and a complete realization theory of LFTs is emerging. This paper establishes connections between a minimal LFT realization and minimal realizations of a formal power series, which have been studied extensively in a variety of disciplines. The result is a fairly complete generalization of standard minimal realization theory for linear systems to the formal power series and LFT setting.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-102303166
An efficient algorithm for performance analysis of nonlinear control systems
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359300
Year: 1995
DOI: 10.1109/acc.1995.532342
A numerical algorithm for computing necessary conditions for performance specifications is developed for nonlinear uncertain systems. The algorithm is similar in nature and behavior to the power algorithm for the µ lower bound, and does not rely on a descent method. The algorithm is applied to a practical example.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-142359300
Control problems and the polynomial time hierarchy
https://resolver.caltech.edu/CaltechAUTHORS:20190319-100707731
Year: 1995
DOI: 10.1109/CDC.1995.478587
This paper classifies control problems by exhibiting their alternating quantifier structure. This classification allows the authors to relate these control problems to the computational complexity classes of the polynomial-time hierarchy. A specific synthesis problem for uncertain systems is shown to be hard for the class Π^p_2.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-100707731
Full information and full control in a behavioral context
https://resolver.caltech.edu/CaltechAUTHORS:20190319-102058453
Year: 1996
DOI: 10.1109/CDC.1996.572834
In this paper, the concepts of full information (FI) and full control (FC), which arise in standard H∞ theory, are extended to the behavioral framework. The H∞ optimal interconnection problem formulation is outlined and the solution presented. The behavioral versions of the FI and FC problems are introduced, followed by connections with the input/output versions of the FI and FC problems and the associated Riccati equations. An illustrative example is presented.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-102058453
Approximate behaviors
https://resolver.caltech.edu/CaltechAUTHORS:20190319-110229272
Year: 1996
DOI: 10.1109/CDC.1996.574430
The motivation for this paper is to contribute to a unified approach to modeling, realization, approximation and analysis for systems with a rich class of uncertainty structures. The specific focus is on what is the appropriate framework to model components with uncertainty, and what is the appropriate notion of approximation for such components. Components and systems are conceptualized in terms of their behaviors, which can be specified by parametrized equations. More questions are posed than are answered.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-110229272
Uncertain hierarchical modeling
https://resolver.caltech.edu/CaltechAUTHORS:20190318-101348405
Year: 1996
DOI: 10.1109/CDC.1996.574346
For modeling complex systems, it is natural to decompose the system into subsystems and model each subsystem. The approach taken in this paper is that a model should be consistent with the modeling methodology. Further, it is important to explicitly represent the inaccuracies of the model as part of the model. Within this paper, uncertain hierarchical modeling is further motivated. A hierarchy, interconnection structure, and a fundamental component data type are proposed and the choices motivated. The framework is proposed with the intention of being implemented on a computer and having a family of models of different resolution representing a system.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-101348405
Robust simulation and nonlinear performance
https://resolver.caltech.edu/CaltechAUTHORS:20190318-134911414
Year: 1996
DOI: 10.1109/CDC.1996.573498
Robust simulation, defined as the simulation of sets, allows the computation of a system's global properties. By simulating entire sets, instead of individual points, performance guarantees can be made. While exact algorithms for robust simulation are computationally prohibitive, reasonable approximations which preserve performance guarantees exist. An approximate solution, which provides an upper bound on performance, is tested on a large number of systems. In general, the upper bound is close to the best lower bound computed by search. Furthermore, when the bounds differ, several techniques exist to improve the upper bound.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-134911414
Soft vs. hard bounds in probabilistic robustness analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190319-103641203
Year: 1996
DOI: 10.1109/CDC.1996.573688
The relationship between soft vs. hard bounds and probabilistic vs. worst-case problem formulations for robustness analysis has been a source of some apparent confusion in the control community, and this paper will attempt to clarify some of these issues. Essentially, worst-case analysis involves computing the maximum of a function which measures performance over some set of uncertainty. Probabilistic analysis assumes some distribution on the uncertainty and computes the resulting probability measure on performance. Exact computation in each case is intractable in general, and this paper explores the use of both soft and hard bounds for computing estimates of performance, including extensive numerical experimentation. We will focus on the simplest possible problem formulations that we believe reveal the difficulties associated with more general robustness analysis.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-103641203
Robust and optimal control
https://resolver.caltech.edu/CaltechAUTHORS:20190319-082820924
Year: 1996
DOI: 10.1109/CDC.1996.572756
This paper will very briefly review the history of the relationship between modern optimal control and robust control. The latter is commonly viewed as having arisen in reaction to certain perceived inadequacies of the former. More recently, the distinction has effectively disappeared. Once-controversial notions of robust control have become thoroughly mainstream, and optimal control methods permeate robust control theory. This has been especially true in H-infinity theory, the primary focus of this paper.
https://resolver.caltech.edu/CaltechAUTHORS:20190319-082820924
Reducing uncertain systems and behaviors
https://resolver.caltech.edu/CaltechAUTHORS:20190315-144945947
Year: 1996
DOI: 10.1109/CDC.1996.574435
This paper considers the problem of reducing the dimension of a model for an uncertain system whilst bounding the resulting error. Model reduction methods with guaranteed upper error bounds have previously been established for uncertain systems described by a state-space type realization; specifically, by a linear fractional transformation (LFT) of a constant realization matrix over a structured uncertainty operator. In contrast to traditional 1-D model reduction where upper bounds on reduction are matched with comparable lower bounds, in the uncertain system problem there have previously been no lower bounds established. The computation of both upper and lower bounds is discussed in this paper, including a discussion of the use of Hankel-like matrices. These model reduction methods and error bound computations are then discussed in the context of kernel representations of behavioral uncertain systems.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-144945947
Nonlinear Games: examples and counterexamples
https://resolver.caltech.edu/CaltechAUTHORS:20140527-071022483
Year: 1996
DOI: 10.1109/CDC.1996.577292
Popular nonlinear control methodologies are compared using benchmark examples generated with a "converse Hamilton-Jacobi-Bellman" method (CoHJB). Starting with the cost and optimal value function V, CoHJB solves HJB PDEs "backwards" algebraically to produce nonlinear dynamics and optimal controllers and disturbances. Although useless for design, it is great for generating benchmark examples. It is easy to use, computationally tractable, and can generate essentially all possible nonlinear optimal control problems. The optimal control and disturbance are then known and can be used to study actual design methods, which must start with the cost and dynamics without knowledge of V. This paper gives a brief introduction to the CoHJB method and some of the ground rules for comparing various methods. Some very simple examples are given to illustrate the main ideas. Both Jacobian linearization and feedback linearization combined with linear optimal control are used as "strawmen" design methods.
https://resolver.caltech.edu/CaltechAUTHORS:20140527-071022483
Genetic algorithms and simulated annealing for robustness analysis
https://resolver.caltech.edu/CaltechAUTHORS:20190318-095624277
Year: 1997
DOI: 10.1109/ACC.1997.609547
Genetic algorithms (GAs) and simulated annealing (SA) have been promoted as useful, general tools for nonlinear optimization. This paper explores their use in robustness analysis with real parameter variations, a known NP-hard problem which would appear to be ideally suited to demonstrate the power of GAs and SA. Numerical experiments show convincingly that they turn out to be poorer than existing branch and bound (B&B) approaches. While this may appear to shed doubt on some of the hype surrounding these stochastic optimization techniques, we find that they do have attractive features, which are also demonstrated in this study. For example, both GAs and SA are almost trivial to understand and program, so they require essentially no expertise, in sharp contrast to the B&B methods. They may be suitable for problems where programming effort is much more important than running time or the quality of the answer. Robustness analysis for engineering problems is not the best candidate in this respect, but it does provide an interesting test case for the evaluation of GAs and SA. A simple hill climbing algorithm is also studied for comparison.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-095624277
On receding horizon extensions and control Lyapunov functions
https://resolver.caltech.edu/CaltechAUTHORS:20190315-104922109
Year: 1998
DOI: 10.1109/ACC.1998.703180
Control Lyapunov functions (CLFs) are used in conjunction with receding horizon control (RHC) to develop a new class of control schemes. In the process, strong connections between the seemingly disparate approaches are revealed, leading to a unified picture that ties together the notions of pointwise min-norm, receding horizon, and optimal control. This framework is used to develop a control Lyapunov function based receding horizon scheme, of which a special case provides an appropriate extension of a variation on Sontag's formula. These schemes are shown to possess a number of desirable theoretical and implementation properties. An example is provided, demonstrating their application to a nonlinear control problem.
https://resolver.caltech.edu/CaltechAUTHORS:20190315-104922109
Performance validation of the Caltech ducted-fan at a fixed operating point
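For readers unfamiliar with the CLF machinery referenced in the abstract above, Sontag's formula (of which the paper develops an extension of a variation) gives an explicit control law from a control Lyapunov function. A minimal sketch for a single-input affine system x' = f(x) + g(x)u follows; the function name and the scalar example are ours, not the paper's.

```python
import numpy as np

def sontag_control(a, b):
    """Sontag's universal formula. Given a CLF V for x' = f(x) + g(x)u,
    let a = dV/dx . f(x) and b = dV/dx . g(x). When b != 0 the law below
    gives dV/dt = a + b*u = -sqrt(a^2 + b^4) < 0; the CLF property
    guarantees a < 0 whenever b = 0 and x != 0."""
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# Scalar example: x' = x + u with CLF V = x^2 / 2, so a = x^2, b = x.
x = 2.0
u = sontag_control(x * x, x)   # a = 4, b = 2
vdot = x * (x + u)             # closed-loop dV/dt, negative for x != 0
```

At x = 2 this gives vdot = -√32 ≈ -5.66, confirming that V decreases along closed-loop trajectories.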
https://resolver.caltech.edu/CaltechAUTHORS:20190318-100108768
Year: 1999
DOI: 10.1109/ACC.1999.783229
Using measured input and output data and a priori assumptions on a nominal model and a linear fractional transformation uncertainty structure, a family of model-validating uncertainty sets is constructed for robust control analysis and design of the Caltech ducted fan. Based on an identified uncertainty set, the predicted closed loop performance for any given controller is compared to the directly measured performance. The paper reports current status of the ongoing work at Caltech.
https://resolver.caltech.edu/CaltechAUTHORS:20190318-100108768
Robust control in the quantum domain
https://resolver.caltech.edu/CaltechAUTHORS:DOHcdc00
Year: 2000
DOI: 10.1109/CDC.2000.912895
Progress in quantum physics has made it possible to perform experiments in which individual quantum systems are monitored and manipulated in real time. The advent of such new technical capabilities provides strong motivation for the development of theoretical and experimental methodologies for quantum feedback control. The availability of such methods would enable radically new approaches to experimental physics in the quantum realm. Likewise, the investigation of quantum feedback control will introduce crucial new considerations to control theory, such as the uniquely quantum phenomena of entanglement and measurement back-action. The extension of established analysis techniques from control theory into the quantum domain may also provide new insight into the dynamics of complex quantum systems. We anticipate that the successful formulation of an input-output approach to the analysis and reduction of large quantum systems could have very general applications in nonequilibrium quantum statistical mechanics and in the nascent field of quantum information theory.
https://resolver.caltech.edu/CaltechAUTHORS:DOHcdc00
The ERATO Systems Biology Workbench: An Integrated Environment for Multiscale and Multitheoretic Simulations in Systems Biology
https://resolver.caltech.edu/CaltechAUTHORS:20130108-145620726
Year: 2001
Over the years, a variety of biochemical network modeling packages have been developed and used by researchers in biology. No single package currently answers all the needs of the biology community; nor is one likely to do so in the near future, because the range of tools needed is vast and
new techniques are emerging too rapidly. It seems unavoidable that, for the foreseeable future, systems biology researchers are likely to continue using multiple packages to carry out their work.
In this chapter, we describe the ERATO Systems Biology Workbench (SBW) and the Systems Biology Markup Language (SBML), two related efforts directed at the problems of software package interoperability. The goal of the SBW project is to create an integrated, easy-to-use software environment that enables sharing of models and resources between simulation and analysis tools for systems biology. SBW uses a modular, plug-in architecture that permits easy introduction of new components. SBML is a proposed standard XML-based language for representing models communicated between software packages; it is used as the format of models communicated between components in SBW.
https://resolver.caltech.edu/CaltechAUTHORS:20130108-145620726
Heavy Tails, Generalized Coding, and Optimal Web Layout
https://resolver.caltech.edu/CaltechAUTHORS:20111122-144035028
Year: 2001
DOI: 10.1109/INFCOM.2001.916658
This paper considers Web layout design in the spirit of source coding for data compression and rate distortion theory, with the aim of minimizing the average size of files downloaded during Web browsing sessions. The novel aspect here is that the object of design is layout rather than codeword selection, and is subject to navigability constraints. This produces statistics for file transfers that are heavy tailed, completely unlike standard Shannon theory, and provides a natural and plausible explanation for the origin of observed power laws in Web traffic. We introduce a series of theoretical and simulation models for optimal Web layout design with varying levels of analytic tractability and realism with respect to modeling of structure, hyperlinks, and user behavior. All models produce power laws which are striking both for their consistency with each other and with observed data, and their robustness to modeling assumptions. These results suggest that heavy tails are a permanent and ubiquitous feature of Internet traffic, and not an artifice of current applications or user behavior. They also suggest new ways of thinking about protocol design that combines insights from information and control theory with traditional networking.
https://resolver.caltech.edu/CaltechAUTHORS:20111122-144035028
An Overview of the ERATO Systems Biology Workbench Project
https://resolver.caltech.edu/CaltechAUTHORS:20130107-141403406
Year: 2001
The goal of the ERATO Systems Biology Workbench (SBW) project is to create an integrated,
easy-to-use software environment that enables sharing of resources for systems
biology. Our initial focus is on achieving interoperability between 7 existing simulators.
Our long-term goal is to develop a flexible and adaptable environment that provides the
ability to interact with a wide variety of software tools applicable to the systems biology
field including databases and experimental devices. We place high value on ease of use and
ease of extensibility as important qualities of software for use in biological investigations.
The software products of this project will be open source, and portable to Windows and
Linux. The Systems Biology Workbench is a vehicle for collaboration between developers
of bioinformatics technology. We are actively seeking other collaborators to extend the
workbench. The motivation is to reduce the time spent by developers both creating
software infrastructure and creating tools that exist in a similar form in other packages,
allowing developers to concentrate on new algorithm and model development.
https://resolver.caltech.edu/CaltechAUTHORS:20130107-141403406
The ERATO Systems Biology Workbench: Architectural Evolution
https://resolver.caltech.edu/CaltechAUTHORS:20130104-163929932
Year: 2001
Systems biology researchers make use of a large number of
different software packages for computational modeling and
analysis as well as data manipulation and visualization. To
help developers easily provide the ability for their applications
to communicate with other tools, we have developed a
simple, open-source, application integration framework, the
ERATO Systems Biology Workbench (SBW). In this paper,
we discuss the architecture of SBW, focusing on our motivations for various design decisions including the choice of the message-oriented communications infrastructure.
https://resolver.caltech.edu/CaltechAUTHORS:20130104-163929932
Scalable laws for stable network congestion control
https://resolver.caltech.edu/CaltechAUTHORS:PAGdcc01
Year: 2001
Discusses flow control in networks, in which sources control their rates based on feedback signals received from the network links, a feature present in current TCP protocols. We develop a congestion control system which is arbitrarily scalable, in the sense that its stability is maintained for arbitrary network topologies and arbitrary amounts of delay. Such a system can be implemented in a decentralized way with information currently available in networks plus a small amount of additional signaling.
https://resolver.caltech.edu/CaltechAUTHORS:PAGdcc01
Feedback regulation of the heat shock response in E. coli
https://resolver.caltech.edu/CaltechAUTHORS:20190304-154305641
Year: 2001
DOI: 10.1109/cdc.2001.980210
Survival of organisms in extreme conditions has necessitated the evolution of stress response networks that detect and respond to environmental changes. Among the extreme conditions that cells must face is the exposure to higher than normal temperatures. In this paper, we propose a detailed biochemical model that captures the dynamical nature of the heat-shock response in Escherichia coli. Using this model, we show that both feedback and feedforward control are utilized to achieve robustness, performance, and efficiency of the response to the heat stress. We discuss the evolutionary advantages that feedback confers to the system, as compared to other strategies that could have been implemented to get the same performance.
https://resolver.caltech.edu/CaltechAUTHORS:20190304-154305641
The ERATO Systems Biology Workbench: Enabling Interaction and Exchange Between Software Tools for Computational Biology
https://resolver.caltech.edu/CaltechAUTHORS:20130108-142104885
Year: 2002
Researchers in computational biology today make use of a large number of different software packages for modeling, analysis, and data manipulation and visualization.
In this paper, we describe the ERATO Systems Biology Workbench (SBW), a software framework that allows these heterogeneous application components--written in diverse programming languages and running on different platforms--to communicate and use each other's data and algorithmic capabilities. Our goal is to create a simple, open-source software infrastructure which is effective, easy to implement and easy to understand. SBW uses a broker-based architecture and enables applications (potentially running on separate, distributed computers) to communicate via a simple network protocol. The interfaces to the system are encapsulated in client-side libraries that we provide for different programming languages. We describe the SBW architecture and the current set of modules, as well as alternative implementation technologies.
https://resolver.caltech.edu/CaltechAUTHORS:20130108-142104885
A new bound of the ℒ2[0, T]-induced norm and applications to model reduction
https://resolver.caltech.edu/CaltechAUTHORS:SZNacc02
Year: 2002
DOI: 10.1109/ACC.2002.1023179
We present a simple bound on the finite horizon ℒ2[0, T]-induced norm of a linear time-invariant (LTI), not necessarily stable, system which can be efficiently computed by calculating the ℋ∞ norm of a shifted version of the original operator. As an application, we show how to use this bound to perform model reduction of unstable systems over a finite horizon. The technique is illustrated with a non-trivial physical example relevant to the appearance of time-irreversible phenomena in statistical physics.https://resolver.caltech.edu/CaltechAUTHORS:SZNacc02Robustness analysis of the heat shock response in E. coli
https://resolver.caltech.edu/CaltechAUTHORS:20190312-075150830
Year: 2002
DOI: 10.1109/ACC.2002.1023817
The bacterial heat shock response refers to the mechanism by which bacteria react to a sudden increase in the ambient temperature of growth. The consequences of such an unmediated temperature increase at the cellular level are the unfolding, misfolding, or aggregation of cell proteins, which threatens the life of the cell. Cells respond to the heat stress by initiating the production of heat-shock proteins whose function is to refold denatured proteins into their native states. The heat shock response, through the elevated synthesis of molecular chaperones and proteases, enables the repair of protein damage and the degradation of aggregated proteins. In previous work (Kurata et al., 2001), we devised a dynamic model for the heat shock response in E. coli. In the present paper, we provide a thorough discussion of the dynamical nature of this model. We use sensitivity analysis and simulation tools to illustrate the remarkable efficiency, robustness, and stability of the heat shock response system.https://resolver.caltech.edu/CaltechAUTHORS:20190312-075150830Dynamics of TCP/RED and a scalable control
https://resolver.caltech.edu/CaltechAUTHORS:20170810-135606639
Year: 2002
DOI: 10.1109/INFCOM.2002.1019265
We demonstrate that the dynamic behavior of the queue and the average window is determined predominantly by the stability of TCP/RED, not by AIMD probing or noise traffic. We develop a general multi-link multi-source model for TCP/RED and derive a local stability condition in the case of a single link with heterogeneous sources. We validate our model with simulations and illustrate the stability region of TCP/RED. These results suggest that TCP/RED becomes unstable when delay increases, or, more strikingly, when link capacity increases. The analysis illustrates the difficulty of setting RED parameters to stabilize TCP: they can be tuned to improve stability, but only at the cost of large queues, even when they are dynamically adjusted. Finally, we present a simple distributed congestion control algorithm that maintains stability for arbitrary network delay, capacity, load and topology.https://resolver.caltech.edu/CaltechAUTHORS:20170810-135606639Finite horizon model reduction and the appearance of dissipation in Hamiltonian systems
https://resolver.caltech.edu/CaltechAUTHORS:BARcdc02
Year: 2002
DOI: 10.1109/CDC.2002.1185095
An apparent paradox in classical statistical physics is the mechanism by which conservative, time-reversible microscopic dynamics can give rise to seemingly dissipative behavior. In this paper we use system theoretic tools to show that dissipation can arise as an artifact of incomplete observations over a finite horizon. In addition, this approach allows us to obtain finite-time, low order approximations of systems of moderate size and to establish how the approach to the thermodynamic limit depends on the different physical parameters.https://resolver.caltech.edu/CaltechAUTHORS:BARcdc02Highly optimized transitions to turbulence
https://resolver.caltech.edu/CaltechAUTHORS:20190306-081135649
Year: 2002
DOI: 10.1109/CDC.2002.1185094
We study the Navier-Stokes equations in a three-dimensional plane Couette flow geometry subject to stream-wise constant initial conditions and perturbations. The resulting two-dimensional/three-component (2D/3C) model has no bifurcations and is globally (nonlinearly) stable for all Reynolds numbers R, yet has a total transient energy amplification that scales like R³. These transients also have the particular dynamic flow structures known to play a central role in wall-bounded shear flow transition and turbulence. This suggests a highly optimized tolerance (HOT) model of shear flow turbulence, where streamlining eliminates the generic bifurcation cascade transitions that occur in bluff body flows, resulting in a flow which is stable to arbitrary changes in Reynolds number but highly fragile in amplifying arbitrarily small perturbations. This result indicates that transition and turbulence in special streamlined geometries is not a problem of linear or nonlinear instability, but rather a problem of robustness.https://resolver.caltech.edu/CaltechAUTHORS:20190306-081135649A new physics
https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc02
Year: 2002
DOI: 10.1109/CDC.2002.1185093
This session considers the application of mathematics from control theory to several persistent mysteries at the foundations of physics where interconnected, multiscale systems issues arise. In addition to the ubiquity of power laws in natural and man-made systems, these include a new view of turbulence in highly sheared flows that results from design for drag minimization, the origin of macroscopic dissipation and thermodynamic irreversibility in microscopically reversible dynamics, the universality of quantum gates for quantum computing, decoherence minimization in quantum systems, and entanglement witnessing. The latter are problems at the heart of several important tasks such as quantum computing, teleportation, and quantum key distribution. Much of the original motivation for a new science of complexity came from the hope that methods of theoretical physics could contribute to a theory of complex engineering and biological networks and systems. This collection of work shows that, apparently, exactly the opposite is true. The role that robust control methods play in this research will be the central theme of this paper, around which the other issues will be woven. The aim is not to provide a control-friendly rederivation of known results in physics, but rather to illustrate, through representative examples, how exciting new results and important insight, as assessed by physicists themselves, can be obtained through the mathematics and methods that the control community has developed. Since this work is largely being published in the scientific literature, the controls community may be largely unaware of these developments.https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc02Feedback Regulation of the Heat Shock Response in E. coli
https://resolver.caltech.edu/CaltechAUTHORS:20191009-114846374
Year: 2003
DOI: 10.1007/3-540-36589-3_9
Systems Biology is an emerging new field defined as the study of biology as an integrated system of components that act interdependently to accomplish certain functions. This approach holds the promise of offering precious insight into aspects of biological organization that cannot be identified through a reductionist approach concerned solely with the study of individual molecules. In this work, we illustrate this viewpoint through the example of the bacterial heat shock response. The heat shock response is an important mechanism that combats the harmful effects of an unmediated increase in temperature. Such an increase in temperature causes the unfolding or aggregation of cellular proteins, which imposes a tremendous amount of stress on the cell. The heat shock response is implemented through an elaborate system of controls whose purpose is to refold denatured proteins, thereby restoring their normal function. In this paper, we present a deterministic model for the heat shock response. We use this model to gain insight into the design and performance objectives of this response. We then provide a stochastic treatment based on the Stochastic Simulation Algorithm of Gillespie [18]. This stochastic investigation validates the use of the deterministic approach in modeling the heat shock response, and motivates the investigation of feedback structures that play a role in attenuating stochastic fluctuations.https://resolver.caltech.edu/CaltechAUTHORS:20191009-114846374Can shortest-path routing and TCP maximize utility?
https://resolver.caltech.edu/CaltechAUTHORS:20190306-132657017
Year: 2003
DOI: 10.1109/INFCOM.2003.1209226
TCP-AQM protocols can be interpreted as distributed primal-dual algorithms over the Internet that maximize aggregate utility. In this paper, we study whether TCP-AQM together with shortest-path routing can maximize utility, with appropriate choice of link cost, on a slower timescale, over both source rates and routes. We show that this is generally impossible because the addition of route maximization makes the problem NP-hard. We exhibit an inevitable tradeoff between routing instability and utility maximization. For the special case of a ring network, we prove rigorously that shortest-path routing based purely on congestion prices is unstable. Adding a sufficiently large static component to the link cost stabilizes it, but the maximum utility achievable by shortest-path routing decreases with the weight on the static component. We present simulation results to illustrate that these conclusions generalize to general network topologies, and that routing instability can reduce utility to less than that achievable by the necessarily stable static routing.https://resolver.caltech.edu/CaltechAUTHORS:20190306-132657017A new TCP/AQM for stable operation in fast networks
https://resolver.caltech.edu/CaltechAUTHORS:20170810-131233358
Year: 2003
DOI: 10.1109/INFCOM.2003.1208662
This paper is aimed at designing a congestion control system that scales gracefully with network capacity, providing high utilization, low queueing delay, dynamic stability, and fairness among users. In earlier work we developed fluid-level control laws that achieve the first three objectives for arbitrary networks and delays, but were forced to constrain the resource allocation policy. In this paper we extend the theory to include dynamics at TCP sources, preserving the earlier features at fast time scales, but permitting sources to match their steady-state preferences, provided a bound on round-trip times is known. We develop two packet-level implementations of this protocol, using (i) ECN marking and (ii) queueing delay as means of communicating the congestion measure from links to sources. We discuss parameter choices and demonstrate, using ns-2 simulations, the stability of the protocol and its equilibrium features in terms of utilization, queueing and fairness. We also demonstrate the scalability of these features to increases in capacity, delay, and load, in comparison with other deployed and proposed protocols.https://resolver.caltech.edu/CaltechAUTHORS:20170810-131233358A control theoretical look at internet congestion control
https://resolver.caltech.edu/CaltechAUTHORS:20170810-131419777
Year: 2003
DOI: 10.1007/3-540-36589-3_2
Congestion control mechanisms in today's Internet represent perhaps the largest artificial feedback system ever deployed, and yet one that has evolved mostly outside the scope of control theory. This can be explained by the tight constraints of decentralization and simplicity of implementation in this problem, which would appear to rule out most mathematically-based designs. Nevertheless, a recently developed framework based on fluid flow models has allowed for a belated injection of control theory into the area, with some pleasant surprises. As described in this chapter, there is enough special structure to allow us to "guess" designs with mathematically provable properties that hold in arbitrary networks, and which involve a modest complexity in implementation.https://resolver.caltech.edu/CaltechAUTHORS:20170810-131419777Model validation and robust stability analysis of the bacterial heat shock response using SOSTOOLS
https://resolver.caltech.edu/CaltechAUTHORS:20111010-100919353
Year: 2003
DOI: 10.1109/CDC.2003.1271735
The complexity inherent in gene regulatory network models, as well as their nonlinear nature, makes them difficult to analyze or validate/invalidate using conventional tools. Combining ideas from robust control theory, real algebraic geometry, optimization and semi-definite programming, SOSTOOLS provides a promising framework to answer these robustness and model validation questions algorithmically. We adopt these tools in the study of the heat shock response in bacteria. For this purpose, we use a reduced-order model of the bacterial heat stress response. We study the robust stability properties of this system with respect to parametric uncertainty, and address the model validation/invalidation problem by proving the necessity of certain feedback loops for reproducing the known time behavior of the system.https://resolver.caltech.edu/CaltechAUTHORS:20111010-100919353A First-Principles Approach to Understanding the Internet's Router-level Topology
https://resolver.caltech.edu/CaltechAUTHORS:20160715-133930786
Year: 2004
DOI: 10.1145/1015467.1015470
A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from nefarious intrusions and attacks, or for developing improved designs for resource provisioning. Previous studies of topology have focused on interpreting measurements or on phenomenological descriptions and evaluation of graph-theoretic properties of topology generators. We propose a complementary approach of combining a more subtle use of statistics and graph theory with a first-principles theory of router-level topology that reflects practical constraints and tradeoffs. While there is an inevitable tradeoff between model complexity and fidelity, a challenge is to distill from the seemingly endless list of potentially relevant technological and economic issues the features that are most essential to a solid understanding of the intrinsic fundamentals of network topology. We claim that very simple models that incorporate hard technological constraints on router and link bandwidth and connectivity, together with abstract models of user demand and network performance, can successfully address this challenge and further resolve much of the confusion and controversy that has surrounded topology generation and evaluation.https://resolver.caltech.edu/CaltechAUTHORS:20160715-133930786More "normal" than normal: scaling distributions and complex systems
https://resolver.caltech.edu/CaltechAUTHORS:WILwsc04
Year: 2004
DOI: 10.1109/WSC.2004.1371310
One feature of many naturally occurring or engineered complex systems is tremendous variability in event sizes. To account for it, the behavior of these systems is often described using power law relationships or scaling distributions, which tend to be viewed as "exotic" because of their unusual properties (e.g., infinite moments). An alternate view is based on mathematical, statistical, and data-analytic arguments and suggests that scaling distributions should be viewed as "more normal than normal". In support of this latter view, which Mandelbrot has advocated for the last 40 years, we review in this paper some relevant results from probability theory and illustrate a powerful statistical approach for deciding whether the variability associated with observed event sizes is consistent with an underlying Gaussian-type (finite variance) or scaling-type (infinite variance) distribution. We contrast this approach with traditional model fitting techniques and discuss its implications for future modeling of complex systems.https://resolver.caltech.edu/CaltechAUTHORS:WILwsc04A Reynolds number independent model for turbulence in Couette flow
https://resolver.caltech.edu/CaltechAUTHORS:20130924-130804582
Year: 2004
In this paper we study theoretically and computationally the dynamics of the linearized stream-wise constant Navier-Stokes equations under external time-varying deterministic disturbances.https://resolver.caltech.edu/CaltechAUTHORS:20130924-130804582Managing complexity and uncertainty
https://resolver.caltech.edu/CaltechAUTHORS:20190306-084855706
Year: 2004
Modern fields of science and engineering have evolved remarkably high degrees of specialization. The present division of intellectual labor is structured by the assumption that complex systems can be "vertically" decomposed into layers of materials and devices versus the systems they compose. A further assumption is that each layer is further "horizontally" divided into chemical, mechanical, and electrical materials/devices, as well as processing, communication, computation, and control systems. A central cause of the fragmentation of complex systems into isolated subdisciplines has traditionally been the inherent intractability of problems that require integration of, say, communications, computation, and control. This has necessitated specialized and domain-specific assumptions and methods that can appear arbitrary and ad hoc to researchers in other subdomains. The power of this decomposition is that it has facilitated a massively parallel development of advanced technologies and the proliferation of sophisticated domain-specific theories, allowing each subdiscipline to function independently, with only higher-level system integrators required to be generalists. An increasingly troublesome side effect is a growing intellectual Tower of Babel, where experts within one subdiscipline can rarely have meaningful contact with experts from other subdisciplines, and may even be largely unaware of their existence. For example, the term "information" is used by everyone, but often has not just different but almost opposite meanings in, say, communications, computing, or control systems, let alone between systems and devices.https://resolver.caltech.edu/CaltechAUTHORS:20190306-084855706A first-principles approach to understanding the internet's router-level topology
https://resolver.caltech.edu/CaltechAUTHORS:20170810-134129788
Year: 2004
DOI: 10.1145/1030194.1015470
A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from nefarious intrusions and attacks, or for developing improved designs for resource provisioning. Previous studies of topology have focused on interpreting measurements or on phenomenological descriptions and evaluation of graph-theoretic properties of topology generators. We propose a complementary approach of combining a more subtle use of statistics and graph theory with a first-principles theory of router-level topology that reflects practical constraints and tradeoffs. While there is an inevitable tradeoff between model complexity and fidelity, a challenge is to distill from the seemingly endless list of potentially relevant technological and economic issues the features that are most essential to a solid understanding of the intrinsic fundamentals of network topology. We claim that very simple models that incorporate hard technological constraints on router and link bandwidth and connectivity, together with abstract models of user demand and network performance, can successfully address this challenge and further resolve much of the confusion and controversy that has surrounded topology generation and evaluation.https://resolver.caltech.edu/CaltechAUTHORS:20170810-134129788Analysis of nonlinear delay differential equation models of TCP/AQM protocols using sums of squares
https://resolver.caltech.edu/CaltechAUTHORS:20110831-081433118
Year: 2004
DOI: 10.1109/CDC.2004.1429529
The simplest adequate models for congestion control for the Internet are deterministic nonlinear delay differential equations. However, the absence of efficient, algorithmic methodologies to analyze them at this modelling level usually results in the investigation of their linearizations (including delays), or in the analysis of nonlinear yet undelayed models. In this paper we present an algorithmic methodology for efficient stability analysis of network congestion control schemes at the nonlinear delay-differential equation model level, using the Sum of Squares decomposition and SOSTOOLS.https://resolver.caltech.edu/CaltechAUTHORS:20110831-081433118Joint congestion control and media access control design for ad hoc wireless networks
https://resolver.caltech.edu/CaltechAUTHORS:20170810-103408504
Year: 2005
DOI: 10.1109/INFCOM.2005.1498496
We present a model for the joint design of congestion control and media access control (MAC) for ad hoc wireless networks. Using a contention graph and a contention matrix, we formulate resource allocation in the network as a utility maximization problem with constraints that arise from contention for channel access. We present two algorithms that are not only distributed spatially but, more interestingly, decompose vertically into two protocol layers, where TCP and MAC jointly solve the system problem. The first is a primal algorithm where the MAC layer at the links generates congestion (contention) prices based on local aggregate source rates, and TCP sources adjust their rates based on the aggregate prices in their paths. The second is a dual subgradient algorithm where the MAC sub-algorithm is implemented by scheduling link-layer flows according to the congestion prices of the links. Global convergence properties of these algorithms are proved. This is a preliminary step towards a systematic approach to jointly design TCP congestion control algorithms and MAC algorithms, not only to improve performance, but more importantly, to make their interaction more transparent.https://resolver.caltech.edu/CaltechAUTHORS:20170810-103408504Plenary Panel Discussion: Challenges and opportunities for the future of control
https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc04
Year: 2005
This panel reflects the scope and diversity of the unprecedented challenges and opportunities for the systems and controls community created by several research themes, from the basic sciences to advanced technologies. Connecting physical processes at multiple time and space scales in quantum, statistical, fluid, and solid mechanics remains not only a central scientific challenge but also one with increasing technological implications. This is particularly so in highly organized and nonequilibrium systems, as in biology and nanotechnology, where interconnection, feedback, and dynamics play an increasingly central role.https://resolver.caltech.edu/CaltechAUTHORS:DOYcdc04Optimization model of internet protocols
https://resolver.caltech.edu/CaltechAUTHORS:20161129-162643807
Year: 2005
DOI: 10.1145/1064212.1064245
Layered architecture is one of the most fundamental and influential structures of network design. Can we integrate the various protocol layers into a single coherent theory by regarding them as carrying out an asynchronous distributed primal-dual computation over the network to implicitly solve a global optimization problem? Different layers iterate on different subsets of the decision variables using local information to achieve individual optimalities, but taken together, these local algorithms attempt to achieve a global objective. Such a theory will expose the interconnection between protocol layers and can be used to study rigorously the performance tradeoff in protocol layering as different ways to distribute a centralized computation. In this talk, we describe some preliminary work towards this goal and discuss some of the difficulties of this approach.https://resolver.caltech.edu/CaltechAUTHORS:20161129-162643807Fundamental Limitations of Disturbance Attenuation in the Presence of Side Information
https://resolver.caltech.edu/CaltechAUTHORS:20190306-141450198
Year: 2005
DOI: 10.1109/CDC.2005.1582542
In this paper, we study fundamental limitations of disturbance attenuation of feedback systems, under the assumption that the controller has a finite horizon preview of the disturbance. In contrast with prior work, we extend Bode's integral equation for the case where the preview is made available to the controller via a general, finite capacity, communication system. Under asymptotic stationarity assumptions, our results show that the new fundamental limitation differs from Bode's only by a constant, which quantifies the information rate through the communication system. In the absence of stationarity, we derive a universal lower bound which uses entropy rates as a measure of performance.https://resolver.caltech.edu/CaltechAUTHORS:20190306-141450198Cross-layer Congestion Control, Routing and Scheduling Design in Ad Hoc Wireless Networks
https://resolver.caltech.edu/CaltechAUTHORS:20110120-103630156
Year: 2006
DOI: 10.1109/INFOCOM.2006.142
This paper considers jointly optimal design of cross-layer congestion control, routing and scheduling for ad hoc wireless networks. We first formulate the rate constraint and scheduling constraint using multicommodity flow variables, and formulate resource allocation in networks with fixed wireless channels (or single-rate wireless devices that can mask channel variations) as a utility maximization problem with these constraints. By dual decomposition, the resource allocation problem naturally decomposes into three subproblems: congestion control, routing and scheduling, which interact through congestion price. The global convergence property of this algorithm is proved. We next extend the dual algorithm to handle networks with time-varying channels and adaptive multi-rate devices. The stability of the resulting system is established, and its performance is characterized with respect to an ideal reference system which has the best feasible rate region at the link layer. We then generalize the aforementioned results to a general model of a queueing network served by a set of interdependent parallel servers with time-varying service capabilities, which models many design problems in communication networks. We show that for a general convex optimization problem where a subset of variables lie in a polytope and the rest in a convex set, the dual-based algorithm remains stable and optimal when the constraint set is modulated by an irreducible finite-state Markov chain. This paper thus presents a step toward a systematic way to carry out cross-layer design in the framework of "layering as optimization decomposition" for time-varying channel models.https://resolver.caltech.edu/CaltechAUTHORS:20110120-103630156Software Infrastructure for Effective Communication and Reuse of Computational Models
https://resolver.caltech.edu/CaltechAUTHORS:20130107-161648673
Year: 2006
Until recently, the majority of computational models in biology were implemented in custom programs and published as statements of the underlying mathematics. However, to be useful as formal embodiments of our understanding of biological systems, computational models must be put into a consistent form that can be communicated more directly between the software tools used to work with them. In this chapter, we describe the Systems Biology Markup Language (SBML), a format for representing models in a way that can be used by different software systems to communicate and exchange those models. By supporting SBML as an input and output format, different software tools can all operate on an identical representation of a model, removing opportunities for errors in translation and assuring a common starting point for analyses and simulations. We also take this opportunity to discuss some of the resources available for working with SBML, as well as ongoing efforts in SBML's continuing evolution.https://resolver.caltech.edu/CaltechAUTHORS:20130107-161648673Biological complexity and robustness
https://resolver.caltech.edu/CaltechAUTHORS:20110615-105907604
Year: 2006
DOI: 10.1109/BMN.2006.330919
This talk will describe qualitatively, in as much detail as time allows, these features of biological systems and their parallels in technology, using hopefully familiar and concrete examples. The aim is to be accessible to biologists and not to depend critically on the mathematical framework. A crucial insight is that both evolution by natural selection and engineering design must produce high robustness to uncertain environments and components in order for systems to persist. Yet this allows and even facilitates severe fragility to novel perturbations, particularly those that exploit the very mechanisms providing robustness, and this "robust yet fragile" (RYF) feature must be exploited explicitly in any theory that hopes to scale to large systems. Time permitting, the mathematical research implications of this view of "organized complexity" in biology, technology, and mathematics will be sketched. This view contrasts sharply with that of "emergent complexity" popular in other areas of science, in a way that can now be made mathematically precise.https://resolver.caltech.edu/CaltechAUTHORS:20110615-105907604Disturbance attenuation bounds in the presence of a remote preview
https://resolver.caltech.edu/CaltechAUTHORS:20110810-103527394
Year: 2006
DOI: 10.1007/11533382_18
We study the fundamental limits of disturbance attenuation of a networked control scheme where a remote preview of the disturbance is available. The preview information is conveyed to the controller via an encoder and a finite-capacity channel. In this article, we present an example where we design a remote preview system by means of an additive white Gaussian channel. The example is followed by a summary of our recent results on general performance bounds, which we use to prove the optimality of the design method.https://resolver.caltech.edu/CaltechAUTHORS:20110810-103527394An optimization-based approach to modeling internet topology
https://resolver.caltech.edu/CaltechAUTHORS:20110119-112858944
Year: 2006
DOI: 10.1007/0-387-29234-9_6
Over the last decade there has been significant interest and attention devoted towards understanding the complex structure of the Internet, particularly its topology and the large-scale properties that can be derived from it. While recent work by empiricists and theoreticians has emphasized certain statistical and mathematical properties of network structure, this article presents an optimization-based perspective that focuses on the objectives, constraints, and other drivers of engineering design. We argue that Internet topology at the router level can be understood in terms of the tradeoffs between network performance and the technological and economic factors constraining design. Furthermore, we suggest that the formulation of corresponding optimization problems serves as a reasonable starting point for generating "realistic, yet fictitious" network topologies. Finally, we describe how this optimization-based perspective is being used in the development of a still-nascent theory for the Internet as a whole.https://resolver.caltech.edu/CaltechAUTHORS:20110119-112858944Complexity in Automation of SOS Proofs: An Illustrative Example
https://resolver.caltech.edu/CaltechAUTHORS:20110215-132231555
Year: 2006
DOI: 10.1109/CDC.2006.377629
We present a case study in proving invariance for a chaotic dynamical system, the logistic map, based on Positivstellensatz refutations, with the aim of studying the problems associated with developing a completely automated proof system. We derive the refutation using two different forms of the Positivstellensatz and compare the results to illustrate the challenges in defining and classifying the 'complexity' of such a proof. The results show the flexibility of the SOS framework in converting a dynamics problem into a semialgebraic one, as well as in choosing the form of the proof. Yet it is this very flexibility that complicates the process of automating the proof system and classifying proof 'complexity.'https://resolver.caltech.edu/CaltechAUTHORS:20110215-132231555Layering As Optimization Decomposition: Current Status and Open Issues
https://resolver.caltech.edu/CaltechAUTHORS:20170508-173109246
Year: 2006
DOI: 10.1109/CISS.2006.286492
Network protocols in layered architectures have historically been obtained on an ad-hoc basis, and many of the recent cross-layer designs have been conducted through piecemeal approaches. Network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of generalized network utility maximization (NUM), providing insight into what they optimize and into the structure of the network protocol stack. This paper presents a short survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition", where the overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. Furthermore, there are many alternative decompositions, each leading to a different layering architecture. Industry adoption of this unifying framework has also started. Here we summarize the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding. Key messages and methodologies arising out of much recent work are listed. Then we present a list of challenging open issues in this area and the initial progress made on some of them.https://resolver.caltech.edu/CaltechAUTHORS:20170508-173109246Layering As Optimization Decomposition: Framework and Examples
https://resolver.caltech.edu/CaltechAUTHORS:20190306-130031273
Year: 2006
DOI: 10.1109/ITW.2006.1633780
Network protocols in layered architectures have historically been obtained primarily on an ad-hoc basis. Recent research has shown that network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of Network Utility Maximization (NUM), providing insight into what they optimize and structures of the network protocol stack. This paper presents a short survey of the recent efforts towards a systematic understanding of 'layering' as 'optimization decomposition', where the overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the sub-problems. Different decompositions lead to alternative layering architectures. We summarize several examples of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding.https://resolver.caltech.edu/CaltechAUTHORS:20190306-130031273On Asymptotic Optimality of Dual Scheduling Algorithm In A Generalized Switch
https://resolver.caltech.edu/CaltechAUTHORS:20110203-100201578
Year: 2006
DOI: 10.1109/WIOPT.2006.1666500
The generalized switch is a model of a queueing system in which parallel servers are interdependent and have time-varying service capabilities. This paper considers the dual scheduling algorithm that uses rate control and queue-length based scheduling to allocate resources for a generalized switch. We consider a saturated system in which each user has an infinite amount of data to be served. We prove the asymptotic optimality of the dual scheduling algorithm for such a system, which says that the vector of average service rates of the scheduling algorithm maximizes an aggregate concave utility function. As fairness objectives can be achieved by appropriately choosing utility functions, the asymptotic optimality establishes the fairness properties of the dual scheduling algorithm.
The dual scheduling algorithm motivates a new architecture for scheduling, in which an additional queue is introduced to interface the user data queue and the time-varying server and to modulate the scheduling process, so as to achieve different performance objectives. Further research would include scheduling with Quality of Service guarantees using the dual scheduler, and its application and implementation in various versions of the generalized switch model.https://resolver.caltech.edu/CaltechAUTHORS:20110203-100201578Layering as Optimization Decomposition: Questions and Answers
https://resolver.caltech.edu/CaltechAUTHORS:20170508-172152981
Year: 2006
DOI: 10.1109/MILCOM.2006.302293
Network protocols in layered architectures have historically been obtained on an ad-hoc basis, and many of the recent cross-layer designs have been conducted through piecemeal approaches. Network protocols may instead be holistically analyzed and systematically designed as distributed solutions to some global optimization problems in the form of generalized Network Utility Maximization (NUM), providing insight into what they optimize and into the structures of network protocol stacks. In the form of 10 Questions and Answers, this paper presents a short survey of the recent efforts towards a systematic understanding of "layering" as "optimization decomposition". The overall communication network is modeled by a generalized NUM problem, each layer corresponds to a decomposed subproblem, and the interfaces among layers are quantified as functions of the optimization variables coordinating the subproblems. Furthermore, there are many alternative decompositions, each leading to a different layering architecture. Industry adoption of this unifying framework has also started. Here we summarize the current status of horizontal decomposition into distributed computation and vertical decomposition into functional modules such as congestion control, routing, scheduling, random access, power control, and coding. We also discuss under-explored future research directions in this area. More important than proposing any particular cross-layer design, this framework works towards a mathematical foundation of network architectures and the design process of modularization.https://resolver.caltech.edu/CaltechAUTHORS:20170508-172152981Optimization Based Rate Control for Multicast with Network Coding
https://resolver.caltech.edu/CaltechAUTHORS:20100826-092317616
Year: 2007
DOI: 10.1109/INFCOM.2007.139
Recent advances in network coding have shown great potential for efficient information multicasting in communication networks, in terms of both network throughput and network management. In this paper, we address the problem of rate control at end-systems for network coding based multicast flows. We develop two adaptive rate control algorithms, for networks with and without given coding subgraphs, respectively. With random network coding, both algorithms can be implemented in a distributed manner, and work at the transport layer to adjust source rates and at the network layer to carry out network coding. We prove that the proposed algorithms converge to the globally optimal solutions for intra-session network coding. Some related issues are discussed, and numerical examples are provided to complement our theoretical analysis.https://resolver.caltech.edu/CaltechAUTHORS:20100826-092317616Dual scheduling algorithm in a generalized switch: asymptotic optimality and throughput optimality
https://resolver.caltech.edu/CaltechAUTHORS:20170810-103710158
Year: 2007
DOI: 10.1007/1-84628-274-8_7
In this article, we consider the dual scheduling algorithm for a generalized switch. For a saturated system, we prove the asymptotic optimality of the dual scheduling algorithm and thus establish its fairness properties. For a system with exogenous arrivals, we propose a modified dual scheduling algorithm, which is throughput-optimal while providing some weighted fairness among the users at the level of flows.
The dual scheduling algorithm motivates a new architecture for scheduling, in which an additional queue is introduced to interface the user data queue and the time-varying server and to modulate the scheduling process, so as to achieve different performance objectives. Further research stemming from this article includes scheduling with Quality of Service guarantees using the dual scheduler, and its application and implementation in various versions of the generalized switch model.https://resolver.caltech.edu/CaltechAUTHORS:20170810-103710158The Statistical Mechanics of Fluctuation-Dissipation and Measurement Back Action
https://resolver.caltech.edu/CaltechAUTHORS:20101104-115927780
Year: 2007
DOI: 10.1109/ACC.2007.4282774
In this paper, we take a control-theoretic approach to answering some standard questions in statistical mechanics. A central problem is the relation between systems which appear macroscopically dissipative but are microscopically lossless. We show that a linear macroscopic system is dissipative if and only if it can be approximated by a linear lossless microscopic system, over arbitrarily long time intervals. As a by-product, we obtain mechanisms explaining Johnson-Nyquist noise as initial uncertainty in the lossless state, as well as measurement back action and a trade-off between process and measurement noise.https://resolver.caltech.edu/CaltechAUTHORS:20101104-115927780Thermodynamics of linear systems
https://resolver.caltech.edu/CaltechAUTHORS:20190208-142209118
Year: 2007
DOI: 10.23919/ECC.2007.7068722
We rigorously derive the main results of thermodynamics, including Carnot's theorem, in the framework of time-varying linear systems.https://resolver.caltech.edu/CaltechAUTHORS:20190208-142209118Can complexity science support the engineering of critical network infrastructures?
https://resolver.caltech.edu/CaltechAUTHORS:20190304-105018143
Year: 2007
DOI: 10.1109/ICSMC.2007.4414241
Considerable attention is now being devoted to the study of "complexity science" with the intent of discovering and applying universal laws of highly interconnected and evolved systems. This paper considers several issues related to the use of these theories in the context of critical infrastructures, particularly the Internet. Specifically, we revisit the notion of "organized complexity" and suggest that it is fundamental to our ability to understand, operate, and design next-generation infrastructure networks. We comment on the role of engineering in defining an architecture to support networked infrastructures and highlight recent advances in the theory of distributed control driven by network technologies.https://resolver.caltech.edu/CaltechAUTHORS:20190304-105018143Contention control: A game-theoretic approach
https://resolver.caltech.edu/CaltechAUTHORS:CHEcdc07
Year: 2007
DOI: 10.1109/CDC.2007.4435015
We present a game-theoretic approach to contention control. We define a game-theoretic model, called the random access game, to capture the contention/interaction among wireless nodes in wireless networks with contention-based medium access. We characterize Nash equilibria of random access games, study their dynamics and propose distributed algorithms (strategy evolutions) to achieve the Nash equilibria. This provides a general analytical framework that is capable of modelling a large class of system-wide quality of service models via the specification of per-node utility functions, in which system-wide fairness or service differentiation can be achieved in a distributed manner as long as each node executes a contention resolution algorithm that is designed to achieve the Nash equilibrium. We thus design a medium access method according to a distributed strategy update mechanism that achieves the Nash equilibrium of the random access game. In addition to guiding medium access control design, the random access game model also provides an analytical framework to understand the equilibrium and dynamic properties of different medium access protocols and their interactions.https://resolver.caltech.edu/CaltechAUTHORS:CHEcdc07Linear-quadratic-Gaussian heat engines
https://resolver.caltech.edu/CaltechAUTHORS:20190304-104205307
Year: 2007
DOI: 10.1109/CDC.2007.4434789
In this paper, we study the problem of extracting work from heat flows. In thermodynamics, a device doing this is called a heat engine. A fundamental problem is to derive hard limits on the efficiency of heat engines. Here we construct a linear-quadratic-Gaussian optimal controller that estimates the states of a heated lossless system. The measurements cool the system, and the surplus energy can be extracted as work by the controller. Hence, the controller acts like a Maxwell's demon. We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits. Over infinite time horizons, the controller has the same efficiency as a Carnot heat engine, and thereby it respects the second law of thermodynamics. As illustration we use an electric circuit where an ideal current source extracts energy from resistors with Johnson-Nyquist noise.https://resolver.caltech.edu/CaltechAUTHORS:20190304-104205307Complexity and fragility in stability for linear systems
https://resolver.caltech.edu/CaltechAUTHORS:20100622-135915127
Year: 2008
DOI: 10.1109/ACC.2008.4586725
This paper presents a formal axiomatization of the notion that (proof) complexity implies (property) fragility and illustrates this framework in the context of the stability of both discrete-time and continuous-time linear systems.https://resolver.caltech.edu/CaltechAUTHORS:20100622-135915127Linear control analysis of the autocatalytic glycolysis system
https://resolver.caltech.edu/CaltechAUTHORS:20100507-144031986
Year: 2009
DOI: 10.1109/ACC.2009.5159925
Autocatalysis is necessary and ubiquitous in both engineered and biological systems but can degrade control performance and cause instability. We analyze the properties of autocatalysis in the universal and well-studied glycolytic pathway. A simple two-state model incorporating ATP autocatalysis and inhibitory feedback control captures the essential dynamics, including limit cycle oscillations, observed experimentally. System performance is limited by the inherent autocatalytic stoichiometry, and higher levels of autocatalysis further degrade stability and performance. We show that glycolytic oscillations are not merely a "frozen accident" but a result of the intrinsic stability tradeoffs emerging from the autocatalytic mechanism. This model has pedagogical value as well, appearing to be the simplest and most complete illustration yet of Bode's integral formula.https://resolver.caltech.edu/CaltechAUTHORS:20100507-144031986On the graph of trees
https://resolver.caltech.edu/CaltechAUTHORS:20190226-100827411
Year: 2009
DOI: 10.1109/CCA.2009.5281136
We consider an "n-graph of trees" whose nodes are the set of trees of fixed order n, and in which two nodes are adjacent if one tree can be derived from the other through a single application of a local edge transformation rule. We derive an exact formula for the length of the shortest path from any node to any "canonical" node in the n-graph of trees. We use this result to derive upper and lower bounds on the diameter of the n-graph of trees. We then propose a coordinate system that is convenient for studying the structure of the n-graph of trees, and in which trees having the same degree sequence are projected onto a single point.https://resolver.caltech.edu/CaltechAUTHORS:20190226-100827411Congestion control algorithms from optimal control perspective
https://resolver.caltech.edu/CaltechAUTHORS:20170810-133936531
Year: 2009
DOI: 10.1109/CDC.2009.5399554
This paper is concerned with understanding the connection between the existing Internet congestion control algorithms and optimal control theory. The available resource allocation controllers are mainly devised to drive the state of the system to a desired equilibrium point and, therefore, they are oblivious to the transient behavior of the closed-loop system. This work aims to investigate what cost functionals the existing algorithms optimize. In particular, it is shown that there exist meaningful cost functionals whose minimization leads to the celebrated primal and dual congestion algorithms. An implication of this result is that a real network problem may be solved by regarding it as an optimal control problem on which some practical constraints, such as a real-time link capacity constraint, are imposed.https://resolver.caltech.edu/CaltechAUTHORS:20170810-133936531Solving large-scale linear circuit problems via convex optimization
https://resolver.caltech.edu/CaltechAUTHORS:20190226-085922106
Year: 2009
DOI: 10.1109/cdc.2009.5400690
A broad class of problems in circuits, electromagnetics, and optics can be expressed as finding some parameters of a linear system of a specific type. This paper is concerned with studying this type of circuit using available control techniques. It is shown that the underlying problem can be recast as a rank minimization problem that is NP-hard in general. In order to circumvent this difficulty, the circuit problem is slightly modified so that the resulting optimization becomes convex. This interesting result is achieved at the cost of complicating the structure of the circuit, which introduces a trade-off between design simplicity and implementation complexity. When it is strictly required to solve the original circuit problem, the elegant structure of the proposed rank minimization problem allows for employing a celebrated heuristic method to solve it efficiently.https://resolver.caltech.edu/CaltechAUTHORS:20190226-085922106Finding globally optimum solutions in antenna optimization problems
https://resolver.caltech.edu/CaltechAUTHORS:20110425-110624129
Year: 2010
DOI: 10.1109/APS.2010.5561993
During the last decade, the unprecedented increase in affordable computational power has strongly supported the development of optimization techniques for designing antennas; genetic algorithms [1] and particle swarm optimization [2] are notable examples. Most of these techniques use the physical dimensions of an antenna as the optimization variables, and require solving Maxwell's equations (numerically) at each optimization step. They are usually slow, unable to handle a large number of variables, and incapable of finding the globally optimum solutions. In this paper, we propose an antenna optimization technique that is orders of magnitude faster than the conventional schemes, can handle thousands of variables, and finds the globally optimum solutions for a broad range of antenna optimization problems. In the proposed scheme, termination impedances embedded on an antenna structure are used as the optimization variables. This is particularly useful in designing on-chip smart antennas, where thousands of transistors and variable passive elements can be employed to reconfigure an antenna. By varying these parasitic impedances, an antenna can vary its gain, bandwidth, pattern, and efficiency. The goal of this paper is to provide a systematic, numerically efficient approach for finding globally optimum solutions in designing smart antennas.https://resolver.caltech.edu/CaltechAUTHORS:20110425-110624129File Fragmentation over an Unreliable Channel
https://resolver.caltech.edu/CaltechAUTHORS:20110401-160938362
Year: 2010
DOI: 10.1109/INFCOM.2010.5461953
It has been recently discovered that heavy-tailed file completion time can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the RESTART feature, where if a file transfer is interrupted before it is completed, the transfer needs to restart from the beginning. In this paper, we show that independent or bounded fragmentation guarantees light-tailed file completion time as long as the file size is light-tailed; i.e., in this case, heavy-tailed file completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the file completion time is necessarily heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation, the completion time tail distribution function is asymptotically upper bounded by that of the original file size stretched by a constant factor. We then prove that if the failure distribution has a non-decreasing failure rate, the expected completion time is minimized by dividing the file into equal-sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy where the fragment sizes are constant and independent of the file size, and prove that it is asymptotically optimal. Finally, we bound the error in expected completion time due to error in modeling of the failure process.https://resolver.caltech.edu/CaltechAUTHORS:20110401-160938362Quantitative Nonlinear Analysis of Autocatalytic Pathways with Applications to Glycolysis
https://resolver.caltech.edu/CaltechAUTHORS:20110412-112151713
Year: 2010
Autocatalytic pathways are frequently encountered in biological networks. One such pathway, the glycolytic pathway, is of special importance and has been studied extensively. Using tools from linear systems theory, our previous work on a simple two-dimensional model of glycolysis demonstrated that autocatalysis can degrade control performance and contribute to instability. Here, we expand this work and study properties of nonlinear autocatalytic pathway models (of which glycolysis is an example). Changes in the concentration of metabolites and catalyzing enzymes during the lifetime of the cell can perturb the system from the nominal operating point of the pathway. We investigate the effects of such perturbations through the estimation of invariant subsets of the region of attraction around nominal operating conditions (i.e., a measure of the set of perturbations from which the cell recovers). Numerical experiments demonstrate that systems that are robust with respect to perturbations in parameter space have easily "verifiable" region of attraction properties in terms of proof complexity.https://resolver.caltech.edu/CaltechAUTHORS:20110412-112151713Compositional analysis of autocatalytic networks in biology
https://resolver.caltech.edu/CaltechAUTHORS:20110412-111554044
Year: 2010
Autocatalytic pathways are a necessary part of core metabolism. Every cell consumes external food/resources to create components and energy, but does so using processes that also require those same components and energy. Here, we study effects of parameter variations on the stability properties of autocatalytic pathway models and the extent of the regions of attraction around nominal operating conditions. Motivated by the computational complexity of optimization-based methods for estimating regions of attraction for large pathways, we take a compositional approach and exploit a natural decomposition of the system, induced by the underlying biological structure, into a feedback interconnection of two input-output subsystems: a small subsystem with complicating nonlinearities and a large subsystem with simple dynamics. This decomposition simplifies the analysis of large pathways by assembling region of attraction certificates based on the input-output properties of the subsystems. It enables us to numerically construct block-diagonal Lyapunov functions for families of pathways that are not amenable to direct analysis. Furthermore, it leads to analytical construction of Lyapunov functions for a large family of autocatalytic pathways.https://resolver.caltech.edu/CaltechAUTHORS:20110412-111554044Utility Functionals Associated With Available Congestion Control Algorithms
https://resolver.caltech.edu/CaltechAUTHORS:20110406-104430537
Year: 2010
DOI: 10.1109/INFCOM.2010.5462103
This paper is concerned with understanding the connection between the existing Internet congestion control algorithms and optimal control theory. The available resource allocation controllers are mainly devised to drive the state of the system to a desired equilibrium point and, therefore, they are oblivious to the transient behavior of the closed-loop system. To take into account the real-time performance of the system, rather than merely its steady-state performance, the congestion control problem should be solved by maximizing a proper utility functional as opposed to a utility function. For this reason, this work aims to investigate what utility functionals the existing congestion control algorithms maximize. In particular, it is shown that there exist meaningful utility functionals whose maximization leads to the celebrated primal, dual and primal/dual algorithms. An implication of this result is that a real network problem may be solved by regarding it as an optimal control problem on which some practical constraints, such as a real-time link capacity constraint, are imposed.https://resolver.caltech.edu/CaltechAUTHORS:20110406-104430537A Study of Near-Field Direct Antenna Modulation Systems Using Convex Optimization
https://resolver.caltech.edu/CaltechAUTHORS:20110412-144235909
Year: 2010
This paper studies the constellation diagram design for a class of communication systems known as near-field direct antenna modulation (NFDAM) systems. The modulation is carried out in an NFDAM system by means of a control unit that switches among a number of pre-designed passive controllers such that each controller generates a desired voltage signal at the far field. To find an optimal number of signals that can be transmitted and demodulated reliably in an NFDAM system, the coverage area of the signal at the far field should be identified. It is shown that this coverage area is a planar convex region in general, and simply a circle in the case when no constraints are imposed on the input impedance of the antenna and the voltage received at the far field. A convex optimization method is then proposed to find a polygon that is able to approximate the coverage area of the signal constellation diagram satisfactorily. A similar analysis is provided for the identification of the coverage area of the antenna input impedance, which is beneficial for designing an energy-efficient NFDAM system.https://resolver.caltech.edu/CaltechAUTHORS:20110412-144235909Two Market Models for Demand Response in Power Networks
https://resolver.caltech.edu/CaltechAUTHORS:20170810-104228265
Year: 2010
DOI: 10.1109/SMARTGRID.2010.5622076
In this paper, we consider two abstract market models for designing demand response to match power supply and shape power demand, respectively. We characterize the resulting equilibria in competitive as well as oligopolistic markets, and propose distributed demand response algorithms to achieve the equilibria. The models serve as a starting point to include the appliance-level details and constraints for designing practical demand response schemes for smart power grids.https://resolver.caltech.edu/CaltechAUTHORS:20170810-104228265Performance limitations in autocatalytic networks in biology
https://resolver.caltech.edu/CaltechAUTHORS:20190226-081811238
Year: 2010
DOI: 10.1109/cdc.2010.5717362
Autocatalytic networks, where a member can stimulate its own production, can be unstable when not controlled by feedback. Even when such networks are stabilized by regulating feedback control, they tend to exhibit non-minimum phase behavior. In this paper, we study the hard limits on the ideal performance of such networks, in particular the hard limit on their minimum output energy. We consider a simplified model of glycolysis as our motivating example. For the glycolysis model, we characterize hard limits on the minimum output energy by analyzing the limiting behavior of the optimal cheap control problem for two different interconnection topologies. We show that some network interconnection topologies result in zero hard limits. Then, we develop the necessary tools and concepts to extend our results to a general class of autocatalytic networks.https://resolver.caltech.edu/CaltechAUTHORS:20190226-081811238Topological tradeoffs in autocatalytic metabolic pathways
https://resolver.caltech.edu/CaltechAUTHORS:20190226-072341107
Year: 2010
DOI: 10.1109/CDC.2010.5717490
Metabolic pathways in cells convert external food and resources into useful cell components and energy. In many cases the cell employs product inhibition to regulate and control these pathways. We investigate the performance of such regulation and control on certain autocatalytic pathways. Specifically, we examine how well the pathways can maintain the desired output concentrations in the presence of disturbances, such as perturbations in resources, enzyme concentrations and product demand. Using control theoretic tools, we show the effects of the pathway size, the reversibility of the intermediate reactions and the coupling of pathways through the consumption of intermediate metabolites on performance. In addition, we establish some necessary conditions on the existence of fixed points and their stability for such pathways.https://resolver.caltech.edu/CaltechAUTHORS:20190226-072341107Passively Controllable Smart Antennas
https://resolver.caltech.edu/CaltechAUTHORS:20110401-112525230
Year: 2010
DOI: 10.1109/GLOCOM.2010.5684358
We recently introduced passively controllable smart (PCS) antenna systems for efficient wireless transmission, with direct applications in wireless sensor networks. A PCS antenna system is accompanied by a tunable passive controller whose adjustment at every signal transmission generates a specific radiation pattern. To reduce co-channel interference and optimize the transmitted power, this antenna can be programmed to transmit data in a desired direction in such a way that no signal is transmitted (to the far field) in pre-specified undesired directions. The controller of a PCS antenna was assumed to be centralized in our previous work, which was an impediment to its implementation. In this work, we study the design of PCS antenna systems under decentralized controllers, which are both practically implementable and cost-efficient. The PCS antenna proposed here is made of one active element, and its programming requires solving second-order cone optimizations. These properties differentiate a PCS antenna from existing smart antennas, and make it possible to implement a PCS antenna on a small-sized, low-power silicon chip.https://resolver.caltech.edu/CaltechAUTHORS:20110401-112525230Effect of buffers on stability of Internet congestion controllers
https://resolver.caltech.edu/CaltechAUTHORS:20120403-131857340
Year: 2011
Almost all existing fluid models of congestion control assume that the fluid flow at the output of a link is the same as the fluid flow at the input of the link. This means that all links in the path of a flow see the original source rate. In reality, a fluid flow is modified by the queueing processes on its path, so that an intermediate link will generally not see the original source rate. In this paper, we propose a simple model that explicitly takes into account the effect of buffering on output flows. We study the dual and primal-dual algorithms that use implicit feedback and show that, while they are always asymptotically stable if feedback delay is ignored, they can be unstable in the new model.https://resolver.caltech.edu/CaltechAUTHORS:20120403-131857340On the structure of state-feedback LQG controllers for distributed systems with communication delays
https://resolver.caltech.edu/CaltechAUTHORS:20190220-111649177
Year: 2011
DOI: 10.1109/CDC.2011.6160767
This paper presents explicit solutions for a few distributed LQG problems in which players communicate their states with delays. The resulting control structure is reminiscent of a simple management hierarchy, in which a top level input is modified by newer, more localized information as it gets passed down the chain of command. It is hoped that the controller forms arising through optimization may lend insight into the control strategies of biological and social systems with communication delays.
Dynamic Programming Solutions for Decentralized State-Feedback LQG Problems with Communication Delays
https://resolver.caltech.edu/CaltechAUTHORS:20121003-154617690
Year: 2012
DOI: 10.1109/ACC.2012.6315282
This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation.
Dynamic programming solutions for decentralized state-feedback LQG problems with communication delays
https://resolver.caltech.edu/CaltechAUTHORS:20121009-110029523
Year: 2012
This paper presents explicit solutions for a class of decentralized LQG problems in which players communicate their states with delays. A method for decomposing the Bellman equation into a hierarchy of independent subproblems is introduced. Using this decomposition, all of the gains for the optimal controller are computed from the solution of a single algebraic Riccati equation.
A dual problem in H_2 decentralized control subject to delays
https://resolver.caltech.edu/CaltechAUTHORS:20131219-095138236
Year: 2013
It has been shown that the decentralized H_2 model matching problem subject to delay can be solved by decomposing the controller into a centralized, but delayed, component and a decentralized FIR component, the latter of which can be solved for via a linearly constrained quadratic program. In this paper, we derive the dual to this optimization problem, show that strong duality holds, and exploit this to further analyze properties of the control problem. Namely, we determine a priori upper and lower bounds on the optimal H_2 cost, and obtain further insight into the structure of the optimal FIR component. Furthermore, we show how the optimal dual variables can be used to inform communication graph augmentation, and illustrate this idea with a routing problem.
A heuristic for sub-optimal H_2 decentralized control subject to delay in non-quadratically-invariant systems
https://resolver.caltech.edu/CaltechAUTHORS:20131219-094717913
Year: 2013
Inspired by potential applications to the smart grid, we develop a heuristic for sub-optimal, but acceptable, control of decentralized systems subject to non-quadratically invariant (non-QI) delay patterns. We do so by exploiting a recently developed solution to the decentralized H_2 model matching problem subject to delays, which decomposes the controller into a centralized, but delayed, component and a decentralized FIR component. In particular, we present an iterative procedure that exploits this decomposition to design a sub-optimal decentralized H_2 controller for non-QI systems that is guaranteed a priori to be stable, and to perform no worse than a controller computed with respect to a QI subset of the non-QI constraint set. We then apply this procedure to a smart-grid frequency regulation problem.
Output feedback H_2 model matching for decentralized systems with delays
https://resolver.caltech.edu/CaltechAUTHORS:20190213-081737120
Year: 2013
DOI: 10.1109/ACC.2013.6580743
This paper gives a new solution to the output feedback H_2 model matching problem for a large class of delayed information sharing patterns. Existing methods for similar problems typically reduce the decentralized problem to a centralized problem of higher state dimension. In contrast, this paper demonstrates that the decentralized model matching solution can be constructed from the original centralized solution via quadratic programming.
Localized distributed state feedback control with communication delays
https://resolver.caltech.edu/CaltechAUTHORS:20150320-125909654
Year: 2014
DOI: 10.1109/ACC.2014.6859440
This paper introduces the notion of localizable distributed systems. These are systems for which a distributed controller exists that limits the effect of each disturbance to some local subset of the entire plant, akin to spatio-temporal dead-beat control. We characterize distributed systems for which a localizing state-feedback controller exists in terms of the feasibility of a set of linear equations. We then show that when a feasible solution exists, it can be found in a distributed way, and used for the localized synthesis and implementation of controllers that lead to the desired closed loop response. In particular, by allowing controllers to exchange both state and control actions, the information needed by a particular controller is limited to a local subset of the system's state and control inputs.
Study of the brain functional network using synthetic data
https://resolver.caltech.edu/CaltechAUTHORS:20190212-083440699
Year: 2014
DOI: 10.1109/ALLERTON.2014.7028476
Brain functional connectivity is usually assessed with the correlation coefficients of certain signals. The partial correlation matrix can reveal direct interactions between brain regions. However, computing this matrix is usually challenging because only a limited number of samples is available. As an alternative, thresholding the sample correlation matrix is a common technique for the identification of the direct interactions. In this work, we investigate the performance of this method in addition to some other well-known techniques, namely the graphical lasso and Chow-Liu algorithms. Our analysis is performed on synthetic data produced by an electrical circuit model with certain structural properties. We show that the simple method of thresholding the correlation matrix and the graphical lasso algorithm both create false positives and false negatives that wrongly imply network properties such as small-worldness. We also apply these techniques to resting-state functional MRI (fMRI) data and show that similar observations can be made.
Localized LQR optimal control
https://resolver.caltech.edu/CaltechAUTHORS:20190212-073103646
Year: 2014
DOI: 10.1109/CDC.2014.7039638
This paper introduces a receding horizon like control scheme for localizable distributed systems, in which the effect of each local disturbance is limited spatially and temporally. We characterize such systems by a set of linear equality constraints, and show that the resulting feasibility test can be solved in a localized and distributed way. We also show that the solution of the local feasibility tests can be used to synthesize a receding horizon like controller that achieves the desired closed loop response in a localized manner as well. Finally, we formulate the Localized LQR (LLQR) optimal control problem and derive an analytic solution for the optimal controller. Through a numerical example, we show that the LLQR optimal controller, with its constraints on locality, settling time, and communication delay, can achieve similar performance as an unconstrained ℋ_2 optimal controller, but can be designed and implemented in a localized and distributed way.
A Case Study in Network Architecture Tradeoffs
https://resolver.caltech.edu/CaltechAUTHORS:20150715-140940147
Year: 2015
DOI: 10.1145/2774993.2775011
Software defined networking (SDN) establishes a separation between the control plane and the data plane, allowing network intelligence and state to be centralized -- in this way the underlying network infrastructure is hidden from the applications. This is in stark contrast to existing distributed networking architectures, in which the control and data planes are vertically combined, and network intelligence and state, as well as applications, are distributed throughout the network. It is also conceivable that some elements of network functionality be implemented in a centralized manner via SDN, and that other components be implemented in a distributed manner. Further, distributed implementations can have varying levels of decentralization, ranging from myopic (in which local algorithms use only local information) to coordinated (in which local algorithms use both local and shared information). In this way, myopic distributed architectures and fully centralized architectures lie at the two extremes of a broader hybrid software defined networking (HySDN) design space.
Using admission control as a case study, we leverage recent developments in distributed optimal control to provide network designers with tools to quantitatively compare different architectures, allowing them to explore the relevant HySDN design space in a principled manner. In particular, we assume that routing is done at a slower timescale, and seek to stabilize the network around a desirable operating point despite physical communication delays imposed by the network and rapidly varying traffic demand. We show that there exist scenarios for which one architecture allows for fundamentally better performance than another, thus highlighting the usefulness of the approach proposed in this paper.
Primal robustness and semidefinite cones
https://resolver.caltech.edu/CaltechAUTHORS:20160217-091635251
Year: 2015
DOI: 10.1109/CDC.2015.7403199
This paper reformulates and streamlines the core tools of robust stability and performance for LTI systems using now-standard methods in convex optimization. In particular, robustness analysis can be formulated directly as a primal convex optimization problem (a semidefinite program, or SDP) using sets of Gramians whose closure is a semidefinite cone. This allows various constraints such as structured uncertainty to be included directly, and worst-case disturbances and perturbations to be constructed directly from the primal variables. Well known results such as the KYP lemma and various scaled small gain tests can also be obtained directly through standard SDP duality. To readers familiar with robustness and SDPs, the framework should appear obvious, if only in retrospect. But this is also part of its appeal: it should enhance pedagogy and, we hope, suggest new research. There is a key lemma proving closure of a Gramian that is also obvious, but our current proof appears unnecessarily cumbersome, and a final aim of this paper is to enlist the help of experts in robust control and convex optimization in finding simpler alternatives.
Hard limits on robust control over delayed and quantized communication channels with applications to sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20160216-132314659
Year: 2015
DOI: 10.1109/CDC.2015.7403407
The modern view of the nervous system as layering distributed computation and communication for the purpose of sensorimotor control and homeostasis has much experimental evidence but little theoretical foundation, leaving unresolved the connection between diverse components and complex behavior. As a simple starting point, we address a fundamental tradeoff when robust control is done using communication with both delay and quantization error, which are both extremely heterogeneous and highly constrained in human and animal nervous systems. This yields surprisingly simple and tight analytic bounds with clear interpretations and insights regarding hard tradeoffs, optimal coding and control strategies, and their relationship with well known physiology and behavior. These results are similar to reasoning routinely used informally by experimentalists to explain their findings, but very different from those based on information theory and statistical physics (which have dominated theoretical neuroscience). The simple analytic results and their proofs extend to more general models at the expense of less insight and nontrivial (but still scalable) computation. They are also relevant, though less dramatically, to certain cyber-physical systems.
A Theory of Dynamics, Control and Optimization in Layered Architectures
https://resolver.caltech.edu/CaltechAUTHORS:20160802-081108720
Year: 2016
DOI: 10.1109/ACC.2016.7525357
The controller of a large-scale distributed system (e.g., the internet, the power-grid and automated highway systems) is often faced with two complementary tasks: (i) that of finding an optimal trajectory with respect to a functional or economic utility, and (ii) that of efficiently making the state of the system follow this trajectory despite model uncertainty, process and sensor noise and distributed information sharing constraints. While each of these tasks has been addressed individually, there exists as of yet no controller synthesis framework that treats these two problems in a holistic manner. This paper proposes a unifying optimization based methodology that jointly addresses these two tasks by leveraging the strengths of well established frameworks for distributed control: the Layering as Optimization (LAO) framework and the distributed optimal control framework. We show that our proposed control scheme has a natural layered architecture composed of a low-level tracking layer and a top-level planning layer. The tracking layer consists of a distributed optimal controller that takes as an input a reference trajectory generated by the top-level layer, where this top-level layer consists of a trajectory planning problem that optimizes a weighted sum of a utility function and a "tracking penalty" regularizer. We further provide an exact solution to the tracking layer problem under a broad range of information sharing constraints, discuss extensions to the proposed problem formulation, and demonstrate the effectiveness of our approach on a numerical example.
Localized LQR Control with Actuator Regularization
https://resolver.caltech.edu/CaltechAUTHORS:20160802-095819362
Year: 2016
DOI: 10.1109/ACC.2016.7526485
In previous work, we posed and solved the localized linear quadratic regulator (LLQR) problem: an LLQR controller is one that limits the propagation of dynamics to user-specified subsets of the global system. The advantages of taking this approach are tangible, as we show that this allows the controller to be synthesized and implemented in a scalable local manner. Implicit in this previous work was the existence of a feasible spatio-temporal constraint on the controller and closed loop response of the system that enforced these locality properties. This paper proposes and analyzes a procedure for designing such a spatio-temporal constraint, which can be interpreted as a measure of the implementation complexity of a controller, and a sparse actuation architecture that ensures that it is feasible. We show that the computational tasks involved can be suitably decomposed and solved using the alternating direction method of multipliers (ADMM), thus providing a scalable approach to designing an LLQR controller with a sparse actuation architecture.
Teaching control theory in high school
https://resolver.caltech.edu/CaltechAUTHORS:20170106-115150650
Year: 2016
DOI: 10.1109/CDC.2016.7799181
Controls is increasingly central to technology, science, and society, yet remains the "hidden technology." Our appropriate emphasis on mathematical rigor and practical relevance in the past 40 years has not been similarly balanced with technical accessibility. The aim of this tutorial is to enlist the controls community in helping to radically rethink controls education. In addition to the brief 2-hour tutorial at CDC, we will have a website with additional materials, particularly extensive online videos with mathematical details and case studies. We will also have a booth in the exhibition area at CDC with live demos and engaging competitions throughout the conference.
Understanding robust control theory via stick balancing
https://resolver.caltech.edu/CaltechAUTHORS:20170106-124748759
Year: 2016
DOI: 10.1109/CDC.2016.7798480
Robust control theory studies the effect of noise, disturbances, and other uncertainty on system performance. Despite growing recognition across science and engineering that robustness and efficiency tradeoffs dominate the evolution and design of complex systems, the use of robust control theory remains limited, partly because the mathematics involved is relatively inaccessible to nonexperts, and the important concepts have been inexplicable without a fairly rich mathematics background. This paper aims to begin changing that by presenting the most essential concepts in robust control using human stick balancing, a simple case study popular in the sensorimotor control literature and extremely familiar to engineers. With minimal and familiar models and mathematics, we can explore the impact of unstable poles and zeros, delays, and noise, which can then be easily verified with simple experiments using a standard extensible pointer. Despite its simplicity, this case study has extremes of robustness and fragility that are initially counter-intuitive but for which simple mathematics and experiments are clear and compelling. The theory used here has been well-known for many decades, and the cart-pendulum example is a standard in undergrad controls courses, yet a careful reconsidering of both leads to striking new insights that we argue are of great pedagogical value.
System level parameterizations, constraints and synthesis
https://resolver.caltech.edu/CaltechAUTHORS:20170531-104451494
Year: 2017
DOI: 10.23919/ACC.2017.7963133
We introduce the system level approach to controller synthesis, which is composed of three elements: System Level Parameterizations (SLPs), System Level Constraints (SLCs) and System Level Synthesis (SLS) problems. SLPs provide a novel parameterization of all internally stabilizing controllers and the system responses that they achieve. These can be combined with SLCs to provide parameterizations of constrained stabilizing controllers. We provide a catalog of useful SLCs, and show that by using SLPs with SLCs, we can parameterize the largest known class of constrained stabilizing controllers that admit a convex characterization. Finally, we formulate the SLS problem, and show that it defines the broadest known class of constrained optimal control problems that can be solved using convex programming. We end by using the system level approach to computationally explore tradeoffs in controller performance, architecture cost, robustness and synthesis/implementation complexity.
HFTraC: High-Frequency Traffic Control
https://resolver.caltech.edu/CaltechAUTHORS:20170614-153521655
Year: 2017
DOI: 10.1145/3078505.3078557
We propose high-frequency traffic control (HFTraC), a rate control scheme that coordinates the transmission rates and buffer utilizations in routers network-wide at fast timescale. HFTraC can effectively deal with traffic demand fluctuation by utilizing available buffer space in routers network-wide, and therefore lead to significant performance improvement in terms of the tradeoff between bandwidth utilization and queueing delay. We further note that the performance limit of HFTraC is determined by the network architecture used to implement it. We provide a trace-driven evaluation of the performance of HFTraC implemented in architectures that vary from fully centralized to completely decentralized.
System Level Synthesis: A Tutorial
https://resolver.caltech.edu/CaltechAUTHORS:20180126-080200870
Year: 2017
DOI: 10.1109/CDC.2017.8264074
This tutorial paper provides an overview of the System Level Approach to control synthesis: a scalable framework for large-scale distributed control. The system level approach is composed of three central components: System Level Parameterizations (SLPs), System Level Constraints (SLCs) and System Level Synthesis (SLS) problems. We describe how the combination of these elements parameterizes the largest known class of constrained controllers that admit a convex formulation.
Fundamental limits and achievable performance in biomolecular control
https://resolver.caltech.edu/CaltechAUTHORS:20190205-081829763
Year: 2018
DOI: 10.23919/ACC.2018.8430933
Understanding how a biomolecular system achieves various control objectives via chemical reactions is of crucial importance in cell biology. However, unlike typical control problems where full information about the system is assumed to be known, typically only a small portion of the entire biomolecular system can be characterized with certainty. In order to gain insights in these situations, we use control and information theory to derive performance bounds when chemical species implement feedback control via the production or degradation rates of other chemical species. We expand the approach of the pioneering work of Lestas et al. to treat more general scenarios and derive explicit lower bounds on the achievable Fano factor of the controlled species. Our results suggest that control and sensing via the degradation rates, compared with those via the production rates, benefit from the additional design freedom to choose degradation efficiencies, in addition to the previously considered signal rate, which helps to lower the Fano factor of the controlled species. We compare our lower bounds with achievable performance via simulation of chemical master equations.
Passive-Aggressive Learning and Control
https://resolver.caltech.edu/CaltechAUTHORS:20190205-082240504
Year: 2018
DOI: 10.23919/ACC.2018.8430904
In this work, we investigate the problem of simultaneously learning and controlling a system subject to adversarial choices of disturbances and system parameters. We study the problem for a scalar system with ℓ∞-norm bounded disturbances and system parameters constrained to lie in a known bounded convex polytope. We present a controller that is globally stabilizing and gives continuously improving bounds on the worst case state deviation. The proposed controller simultaneously learns the system parameters and controls the system. The controller emerges naturally from an optimization problem, and balances exploration and exploitation in such a way that it is able to efficiently stabilize unstable and adversarial system dynamics. Specifically, if the controller is faced with large uncertainty, the initial focus is on exploration, retrieving information about the system by applying state-feedback controllers with varying gains and signs. In a prespecified bounded region around the origin, our control strategy can be seen as passive in the sense that it learns very little information. Only once the noise and/or system parameters act in an adversarial way, leading to the state exiting the aforementioned region for more than one time-step, does our proposed controller behave aggressively, in that it is guaranteed to learn enough about the system to subsequently robustly stabilize it. We end by demonstrating the efficiency of our methods via numerical simulations.
Architecture and Trade-offs in the Heat Shock Response System
https://resolver.caltech.edu/CaltechAUTHORS:20190201-143228822
Year: 2018
DOI: 10.1109/cdc.2018.8619129
Biological control systems often contain a wide variety of feedforward and feedback mechanisms that regulate a given process. While it is generally assumed that this apparent redundancy has evolved for a reason, it is often unclear how exactly the cell benefits from more complex circuit architectures. Here we study this problem in the context of a minimal model of the Heat Shock Response system in E. coli and show, through a combination of theory and simulation, that the complexity of the natural system outperforms hypothetical simpler architectures in a variety of robustness and efficiency tradeoffs. We have developed a significantly simplified model of the system that faithfully captures these rich issues. Because a great deal of biological detail is known about this particular system, we are able to compare simple models with more complete ones and obtain a level of theoretical and quantitative insight not generally feasible in the study of biological circuits. We primarily hope this will inform future analysis of both heat shock and newly studied biological complexity.
Robust Perfect Adaptation in Biomolecular Reaction Networks
https://resolver.caltech.edu/CaltechAUTHORS:20181031-075024162
Year: 2018
DOI: 10.1109/CDC.2018.8619101
For control in biomolecular systems, the most basic objective of maintaining a small error in a target variable, say the expression level of some protein, is often difficult due to the presence of both large uncertainty of every type and intrinsic limitations on the controller's implementation. This paper explores the limits of biochemically plausible controller design for the problem of robust perfect adaptation (RPA), biologists' term for robust steady state tracking. It is well-known that for a large class of nonlinear systems, a system has RPA iff it has integral feedback control (IFC), which has been used extensively in real control systems to achieve RPA. However, we show that due to intrinsic physical limitations on the dynamics of chemical reaction networks (CRNs), cells cannot implement IFC directly in the concentration of a chemical species. This contrasts with electronic implementations, particularly digital, where it is trivial to implement IFC directly in a single state. Therefore, biomolecular systems have to achieve RPA by encoding the integral control variable into the network architecture of a CRN. We describe a general framework to implement RPA in CRNs and show that well-known network motifs that achieve RPA, such as (negative) integral feedback (IFB) and incoherent feedforward (IFF), are examples of such implementations. We also develop methods for designing integral feedback variables for unknown plants. This standard control notion is surprisingly nontrivial and relatively unstudied in biomolecular control. The methods developed here connect different existing fields and approaches to the problem of biomolecular control, and hold promise for systematic chemical reaction controller synthesis as well as analysis.
Scalable Robust Adaptive Control from the System Level Perspective
https://resolver.caltech.edu/CaltechAUTHORS:20190617-112254520
Year: 2019
DOI: 10.48550/arXiv.1904.00077
We present a new general framework for robust and adaptive control that allows for distributed and scalable learning and control of large systems of interconnected linear subsystems. The control method is demonstrated for a linear time-invariant system with bounded parameter uncertainties, disturbances and noise. The presented scheme continuously collects measurements to reduce the uncertainty about the system parameters and adapts dynamic robust controllers online in a stable and performance-improving way. A key enabler for our approach is choosing a time-varying dynamic controller implementation, inspired by recent work on System Level Synthesis [1]. We leverage a new robustness result for this implementation to propose a general robust adaptive control algorithm. In particular, the algorithm allows us to impose communication and delay constraints on the controller implementation and is formulated as a sequence of robust optimization problems that can be solved in a distributed manner. The proposed control methodology performs particularly well when the interconnection between systems is sparse and the dynamics of local regions of subsystems depend only on a small number of parameters. As we show on an exemplary five-dimensional chain system, the algorithm can utilize system structure to efficiently learn and control the entire system while respecting communication and implementation constraints. Moreover, although current theoretical results require the assumption of small initial uncertainties to guarantee robustness, we present simulations that show good closed-loop performance even in the case of large uncertainties, which suggests that this assumption is not critical for the presented technique; future work will focus on providing less conservative guarantees.
Experimental and educational platforms for studying architecture and tradeoffs in human sensorimotor control
https://resolver.caltech.edu/CaltechAUTHORS:20190905-143550126
Year: 2019
This paper describes several surprisingly rich but simple demos and a new experimental platform for human sensorimotor control research and controls education. The platform safely simulates a canonical sensorimotor task, riding a mountain bike down a steep, twisting, bumpy trail, using a standard display and an inexpensive off-the-shelf gaming steering wheel with a force feedback motor. We use the platform to verify our theory, presented in a companion paper. The theory describes how component hardware speed-accuracy tradeoffs (SATs) in control loops impose corresponding SATs at the system level, and how effective architectures mitigate the deleterious impact of hardware SATs through layering and "diversity sweet spots" (DSSs). Specifically, we measure the impacts on system performance of delays, quantization, and uncertainties in sensorimotor control loops, both within the subject's nervous system and added externally via software in the platform. This provides a remarkably rich test of the theory, which is consistent with all preliminary data. Moreover, as the theory predicted, subjects effectively multiplex specific higher layer planning/tracking of the trail using vision with lower layer rejection of unseen bump disturbances using reflexes. In contrast, humans multitask badly on tasks that do not naturally distribute across layers (e.g. texting and driving). The platform is cheap to build and easy to program for both research and education purposes, yet verifies our theory, which is aimed at closing a crucial gap between neurophysiology and sensorimotor control. The platform can be downloaded at https://github.com/Doyle-Lab/WheelCon.
Mathematical Models of Physiological Responses to Exercise
https://resolver.caltech.edu/CaltechAUTHORS:20190906-075423929
Year: 2019
This paper develops empirical mathematical models for physiological responses to exercise. We first find single-input single-output models describing heart rate variability, ventilation, oxygen consumption and carbon dioxide production in response to workload changes, and then identify a single-input multi-output model from workload to these physiological variables. We also investigate the possibility of the existence of a universal model for physiological variability in different individuals during treadmill running. Simulations based on real data substantiate that the obtained models accurately capture the physiological responses to workload variations. In particular, it is observed that (i) different physiological responses to exercise can be captured by low-order linear or mildly nonlinear models; and (ii) there may exist a universal model for oxygen consumption that works for different individuals.
Measurement back action and a classical uncertainty principle: Heisenberg meets Kalman
https://resolver.caltech.edu/CaltechAUTHORS:20190905-145752040
Year: 2019
We study a measurement framework motivated by considering macroscopic (i.e. large, active, and finite-temperature) measurement of microscopic (i.e. small and lossless) but classical dynamics. This unavoidably leads to "measurement back action" on the microscopic dynamics that nevertheless still allows for optimal filtering to minimize estimation error, but with tradeoffs between errors due to estimation and errors due to the back action. We focus on a simple case in which the deterministic effects of the measurement process are completely canceled by active control, and the remaining (coupled) stochastic back action and measurement noise are optimally filtered to minimize estimation error. This leads to particularly interesting tradeoffs and limits on estimation and back action, analogous in many respects to the Heisenberg uncertainty principle but in an entirely classical framework.
https://resolver.caltech.edu/CaltechAUTHORS:20190905-145752040

Flexibility and Cost-Dependence in Quantized Control
https://resolver.caltech.edu/CaltechAUTHORS:20190905-145500772
Year: 2019
Layered control architectures in biology and neuroscience can be used to mitigate speed-accuracy tradeoffs, with low-layer quantized controllers carrying out time-sensitive tasks at reduced precision. Here, we describe and optimize the worst-case approximation loss for a quantized controller: the maximum control and state costs paid in the quantized case that would not be paid in the full-precision case. We show that the optimal design of a quantizer depends on the dynamics and the state and control costs, leading notably to cases in which systematically biased estimates of state are optimal for control. We further show that high-layer input can direct a low-layer controller to flexibly execute quantized control across context-related cost functions, with component-level mechanisms that are plausibly implementable in biological settings.
https://resolver.caltech.edu/CaltechAUTHORS:20190905-145500772

Coupled Reaction Networks for Noise Suppression
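A minimal numerical sketch of the worst-case flavor of the quantized-control problem above (not the paper's formulation; the scalar plant, cost weights, and quantizer below are assumptions chosen for illustration):

```python
import numpy as np

# Sketch: worst-case one-step excess cost of a quantized control law
# u = -k*q(x) for the scalar plant x+ = a*x + u, compared with the
# full-precision law u = -k*x.  All names and values are illustrative.
a, k = 1.2, 1.2          # unstable plant; a - k = 0, so full precision deadbeats
Q, R = 1.0, 0.1          # state and control cost weights

def excess_cost(x, rep):
    """Cost paid with quantized state estimate `rep` minus full-precision cost."""
    u_q, u = -k * rep, -k * x
    cost = lambda xn, uu: Q * xn**2 + R * uu**2
    return cost(a * x + u_q, u_q) - cost(a * x + u, u)

def worst_case(reps, edges, xs):
    """Max excess cost over states xs for a quantizer with the given cells."""
    idx = np.searchsorted(edges, xs)          # which cell each state falls in
    return max(excess_cost(x, reps[i]) for x, i in zip(xs, idx))

xs = np.linspace(-1, 1, 401)
edges = np.array([-0.5, 0.0, 0.5])            # 4 uniform cells on [-1, 1]
midpoints = np.array([-0.75, -0.25, 0.25, 0.75])

# Shrink the cell representatives toward zero: with control penalty R > 0,
# such systematically "biased" state estimates can lower the worst-case
# excess cost, echoing the paper's observation that bias can be optimal.
best = min((worst_case(midpoints * s, edges, xs), s)
           for s in np.linspace(0.8, 1.1, 31))
print(best)   # the optimal shrink factor s comes out below 1
```

With these assumed weights the optimum lands at a shrink factor below 1, i.e. the unbiased midpoint quantizer is not worst-case optimal.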
https://resolver.caltech.edu/CaltechAUTHORS:20181030-075417310
Year: 2019
DOI: 10.1101/440453
Noise is intrinsic to many important regulatory processes in living cells and often forms an obstacle to be overcome for reliable biological function. However, due to the stochastic birth and death events of all components in biomolecular systems, suppression of the noise of one component by another is fundamentally hard and costly. Quantitatively, a widely cited, severe lower bound on noise suppression in biomolecular systems was established by Lestas et al. in 2010, assuming that the plant and the controller have separate birth and death reactions. This makes the precision observed in several biological phenomena, e.g., cell-fate decision making and cell-cycle time ordering, seem impossible. We demonstrate that coupling, a mechanism widely observed in biology, can suppress noise below the bound of Lestas et al. at moderate energy cost. Furthermore, we systematically investigate the coupling mechanism in all two-node reaction networks, showing that negative feedback suppresses noise better than incoherent feedforward architectures, that coupled systems have less noise than their decoupled versions for a large class of networks, and that coupling has its own fundamental limitations in noise suppression. The results in this work have implications for noise suppression in biological control and provide insight into a new, efficient mechanism of noise suppression in biology.
https://resolver.caltech.edu/CaltechAUTHORS:20181030-075417310

Robust Model-Free Learning and Control without Prior Knowledge
https://resolver.caltech.edu/CaltechAUTHORS:20200911-071601902
Year: 2019
DOI: 10.1109/cdc40024.2019.9029986
We present a simple model-free control algorithm that robustly learns and stabilizes an unknown discrete-time linear system, with full control and state feedback, subject to arbitrary bounded disturbance and noise sequences. The controller does not require any prior knowledge of the system dynamics, disturbances, or noise, yet can guarantee robust stability, uniform asymptotic bounds, and uniform worst-case bounds on the state deviation. Rather than the algorithm itself, we would like to highlight the new approach taken towards robust stability analysis, which served as a key enabler in providing the presented stability and performance guarantees. We conclude with simulation results showing that, despite its generality and simplicity, the controller demonstrates good closed-loop performance.
https://resolver.caltech.edu/CaltechAUTHORS:20200911-071601902

The driver and the engineer: Reinforcement learning and robust control
https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072
Year: 2020
DOI: 10.23919/acc45564.2020.9147347
Reinforcement learning (RL) and other AI methods are exciting approaches to data-driven control design, but RL's emphasis on maximizing expected performance contrasts with robust control theory (RCT), which puts central emphasis on the impact of model uncertainty and worst-case scenarios. This paper argues that these approaches are potentially complementary, with roles roughly analogous to those of a driver and an engineer in, say, Formula One racing: each is indispensable, but with radically different responsibilities. If RL takes the driver's seat in safety-critical applications, RCT may still play a role in plant design, and also in diagnosing and mitigating the effects of performance degradation due to changes or failures in components or environments. While much RCT research emphasizes synthesis of controllers, as does RL, in practice RCT's impact has perhaps already been greater in using hard limits and tradeoffs on robust performance to provide insight into plant design, interpreted broadly as including sensor, actuator, communications, and computer selection and placement in addition to core plant dynamics. More automation may ultimately require more rigor and theory, not less, if our systems are to be both more efficient and robust. Here we use the simplest possible toy model to illustrate how RCT can potentially augment RL in finding mechanistic explanations when control is not merely hard but impossible, and we discuss issues in making these approaches more compatibly data-driven. Despite the simplicity, questions abound. We also discuss the relevance of these ideas to more realistic challenges.
https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072

Frontiers in Scalable Distributed Control: SLS, MPC, and Beyond
https://resolver.caltech.edu/CaltechAUTHORS:20210510-141354333
Year: 2021
DOI: 10.23919/ACC50511.2021.9483130
The System Level Synthesis (SLS) approach facilitates distributed control of large cyberphysical networks in an easy-to-understand, computationally scalable way. We present an overview of the SLS approach and its associated extensions in nonlinear control, MPC, adaptive control, and learning for control. To illustrate the effectiveness of SLS-based methods, we present a case study motivated by the power grid, with communication constraints, actuator saturation, disturbances, and changing setpoints. This simple but challenging case study necessitates the use of model predictive control (MPC); however, standard MPC techniques often scale poorly to large systems and incur a heavy computational burden. To address this challenge, we combine two SLS-based controllers to form a layered MPC-like controller. Our controller has constant computational complexity with respect to the system size, gives a 20-fold reduction in online computation requirements, and still achieves performance within 3% of the centralized MPC controller.
https://resolver.caltech.edu/CaltechAUTHORS:20210510-141354333

Stability and Control of Biomolecular Circuits through Structure
https://resolver.caltech.edu/CaltechAUTHORS:20201106-110344530
Year: 2021
DOI: 10.23919/ACC50511.2021.9483039
Due to omnipresent uncertainties and environmental disturbances, natural and engineered biological organisms face the challenging control problem of achieving robust performance using unreliable parts. The key to overcoming this challenge rests in identifying structures of biomolecular circuits that are largely invariant despite uncertainties, and in building control through such structures. In this work, we show that log derivatives can capture the structural regimes of biocircuits in regulating the production and degradation rates of molecules. We show that log derivatives can establish the stability of fixed points based on structure, despite large variations in the rates and functional forms of models. Furthermore, we demonstrate how control objectives, such as robust perfect adaptation (i.e. step disturbance rejection), can be implemented through structure. Owing to the method's simplicity, structural properties for the analysis and design of biomolecular circuits can often be determined at a glance from the equations.
https://resolver.caltech.edu/CaltechAUTHORS:20201106-110344530

Systems Level Model of Dietary Effects on Cognition via the Microbiome-Gut-Brain Axis
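A small numerical sketch of the log-derivative idea behind the stability result above (not the paper's formalism; the rate functions below are assumed examples). For a scalar birth/death system dx/dt = f(x) - g(x) with positive rates, f(x*) = g(x*) at a fixed point, so f'(x*) - g'(x*) = (f(x*)/x*)(H_f - H_g), where H_f = d ln f / d ln x and H_g = d ln g / d ln x; the fixed point is therefore stable exactly when degradation is steeper than production in log scale, H_g > H_f:

```python
import numpy as np

def logder(rate, x, eps=1e-6):
    """Numerical log derivative d ln(rate)/d ln(x) at x."""
    return (np.log(rate(x * (1 + eps))) - np.log(rate(x))) / np.log(1 + eps)

# Assumed example rates: repressive Hill production, first-order degradation.
f = lambda x: 4.0 / (1.0 + x**2)   # production, H_f in (-2, 0)
g = lambda x: x                    # degradation, H_g = 1

# Locate the fixed point f(x*) = g(x*) by bisection on f(x) - g(x).
lo, hi = 1e-3, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) - g(mid) > 0 else (lo, mid)
xstar = 0.5 * (lo + hi)

# Structural stability test: degradation steeper than production in log scale.
stable = logder(g, xstar) > logder(f, xstar)
print(xstar, stable)   # H_f < 0 < H_g = 1, so the fixed point is stable
```

Note that the verdict depends only on the log-derivative regimes (H_f negative, H_g = 1), not on the particular rate constants, which is the structural flavor of the abstract's claim.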
https://resolver.caltech.edu/CaltechAUTHORS:20220503-50866100
Year: 2021
DOI: 10.23919/ecc54610.2021.9655216
Intercommunication along the microbiome-gut-brain axis occurs through various signaling pathways, including the vagus nerve, the immune system, endocrine/paracrine signaling, and bacteria-derived metabolites. But how these pathways integrate to influence cognition remains undefined. In this paper, we create a systems-level mathematical framework composed of interconnected organ-level dynamical subsystems to increase conceptual understanding of how these subsystems contribute to cognitive performance. With this framework, we propose that control of hippocampal long-term potentiation (hypothesized to correlate with cognitive performance) is influenced by interorgan signaling, with diet as the external control input. Specifically, diet can influence the synaptic-strength (LTP) homeostatic conditions necessary for learning. The proposed model provides new qualitative information about the functional relationship between diet and cognitive performance. The results can give insight into the optimization of cognitive performance via diet in experimental animal models.
https://resolver.caltech.edu/CaltechAUTHORS:20220503-50866100