Committee Feed
https://feeds.library.caltech.edu/people/Owhadi-H/committee.rss
A Caltech Library Repository Feed
Specification: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build date: Sat, 13 Apr 2024 01:42:24 +0000

Upscaling for Two-Phase Flows in Porous Media
https://resolver.caltech.edu/CaltechETD:etd-05252005-085744
Authors: Andrew Neil Westhead
Year: 2005
DOI: 10.7907/T7Q0-FG76
<p>The understanding and modeling of flow through porous media is an important issue in several branches of engineering. In petroleum engineering, for instance, one wishes to model the "enhanced oil recovery" process, whereby water or steam is injected into an oil-saturated porous medium in an attempt to displace the oil so that it can be collected. In groundwater contaminant studies, the transport of dissolved material, such as toxic metals or radioactive waste, and how it affects drinking water supplies, is of interest.</p>
<p>Numerical simulation of these flows is generally difficult. The principal reason for this is the presence of many different length scales in the physical problem, and resolving all of these is computationally expensive. To circumvent these difficulties, a class of methods known as upscaling methods has been developed, in which one attempts to solve only for the large-scale features of interest and to model the effect of the small-scale features.</p>
<p>In this thesis, we review some of the previous efforts in upscaling and introduce a new scheme that attempts to overcome some of the existing shortcomings of these methods. In our analysis, we consider the flow problem in two distinct stages: the first is the determination of the velocity field which gives rise to an elliptic partial differential equation (PDE) and the second is a transport problem which gives rise to a hyperbolic PDE.</p>
<p>For the elliptic part, we make use of existing upscaling methods for elliptic equations. In particular, we use the multi-scale finite element method of Hou et al. to solve for the velocity field on a coarse grid, and yet still be able to obtain fine scale information through a special means of interpolation.</p>
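The one-dimensional analogue of this elliptic upscaling step can be sketched in a few lines. The sketch below is illustrative and not from the thesis: it solves -(a(x)u')' = 1 with a rapidly oscillating coefficient (an arbitrary choice) on ten coarse elements, using basis functions obtained from local cell problems, and compares the coarse nodal values against a fine-scale reference.

```python
import numpy as np

# Model problem: -(a(x) u')' = 1 on (0,1), u(0) = u(1) = 0, with an
# oscillatory coefficient.  Coefficient and grid sizes are arbitrary
# illustrative choices, not taken from the thesis.
eps = 0.01
def a(x):
    return 1.0 / (2.0 + 1.8*np.sin(2*np.pi*x/eps))

# Fine grid; F(x) = int_0^x 1/a and G(x) = int_0^x t/a(t) dt by trapezoid rule
nf = 200000
xf = np.linspace(0.0, 1.0, nf + 1)
h = xf[1] - xf[0]
inva = 1.0/a(xf)
F = np.concatenate(([0.0], np.cumsum((inva[1:] + inva[:-1])/2)*h))
ti = xf*inva
G = np.concatenate(([0.0], np.cumsum((ti[1:] + ti[:-1])/2)*h))

# Exact 1D solution: a u' = A - x  =>  u = A*F - G, with u(1) = 0 fixing A
A = G[-1]/F[-1]
u_exact = A*F - G

# Multiscale FEM: 10 coarse elements; on each element the basis solves the
# local cell problem (a phi')' = 0, i.e. phi is linear in F(x)
nc = 10
idx = np.arange(nc + 1)*(nf // nc)      # fine-grid indices of coarse nodes
dF = np.diff(F[idx])                    # int_e 1/a per element
ke = 1.0/dF                             # element stiffness (harmonic average)

K = np.zeros((nc - 1, nc - 1))
b = np.zeros(nc - 1)
for j in range(1, nc):                  # assemble over interior coarse nodes
    K[j-1, j-1] = ke[j-1] + ke[j]
    if j > 1:
        K[j-1, j-2] = -ke[j-1]
        K[j-2, j-1] = -ke[j-1]
    # load vector: integral of the multiscale hat function (f = 1)
    l, m, r = idx[j-1], idx[j], idx[j+1]
    up = (F[l:m+1] - F[l])/dF[j-1]      # rising branch on left element
    dn = (F[r] - F[m:r+1])/dF[j]        # falling branch on right element
    b[j-1] = (np.sum(up[1:] + up[:-1]) + np.sum(dn[1:] + dn[:-1]))/2*h

u_coarse = np.concatenate(([0.0], np.linalg.solve(K, b), [0.0]))
err = np.max(np.abs(u_coarse - u_exact[idx])) / np.max(np.abs(u_exact))
```

In one dimension the cell problems can be solved in closed form through F(x), which is what makes the sketch so short; the coarse solution then captures the fine-scale solution at the nodes despite using only nine unknowns.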
<p>The analysis of the hyperbolic part forms the main contribution of this thesis. We first analyze the problem by restricting ourselves to the case where the small scales have a periodic structure. With this assumption, we are able to derive a coupled set of equations for the large-scale average and the small-scale fluctuations about this average. This is achieved by a special averaging along the fine-scale streamlines. This coupled set of equations provides a better starting point for both the modeling of the large-scale/small-scale interactions and the numerical implementation of any scheme. We derive an upscaling scheme from this by tracking only a subset of the fluctuations, which are used to approximate the scale interactions. Once this model has been derived, we discuss and present a means to extend it to the case where the fluctuations are more general than periodic.</p>
<p>In the sections that follow we provide the details of the numerical implementation, which is a very significant part of any practical method. Finally, we present numerical results using the new scheme and compare this with both resolved computations and some existing upscaling schemes.</p>
https://thesis.library.caltech.edu/id/eprint/2050

A Dynamical Systems Approach to Unsteady Systems
https://resolver.caltech.edu/CaltechETD:etd-05122006-083011
Authors: Shawn Christopher Shadden (shadden@berkeley.edu; ORCID: 0000-0001-7561-1568)
Year: 2006
DOI: 10.7907/BG86-YB12
<p>For steady systems, interpreting the flow structure is typically straightforward because streamlines and trajectories coincide. Therefore the velocity field, or quantities derived from it, provide a clear description of the flow geometry. For unsteady flows, this is often not the case. A more natural choice is to understand the flow in terms of particle trajectories, i.e., the Lagrangian viewpoint. While the chaotic behavior of trajectories of unsteady systems makes direct interpretation difficult, more structured and frame-independent techniques have been developed. The method presented here uses finite-time Lyapunov exponent (FTLE) fields to locate Lagrangian Coherent Structures (LCS). LCS are co-dimension 1 separatrices that partition regions in phase space with dynamically different behavior. This method enables the detection of often non-obvious, time-dependent boundaries in complicated flows, which greatly elucidates the transport and mixing geometry.</p>
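As an illustration of the FTLE computation described above, the following sketch (the standard double-gyre test flow with its commonly used parameter values; not code from the thesis) integrates a grid of trajectories, forms the Cauchy-Green tensor from the numerical flow map, and evaluates the FTLE field whose ridges indicate candidate LCS.

```python
import numpy as np

# Time-dependent double gyre on [0,2]x[0,1], a standard FTLE/LCS test flow
A, e, om = 0.1, 0.25, 2*np.pi/10

def vel(t, x, y):
    s = e*np.sin(om*t)
    f  = s*x**2 + (1 - 2*s)*x
    df = 2*s*x + (1 - 2*s)
    return (-np.pi*A*np.sin(np.pi*f)*np.cos(np.pi*y),
             np.pi*A*np.cos(np.pi*f)*np.sin(np.pi*y)*df)

def flow_map(x, y, t0=0.0, T=10.0, steps=250):
    h = T/steps                            # RK4 on all grid points at once
    for i in range(steps):
        t = t0 + i*h
        k1u, k1v = vel(t, x, y)
        k2u, k2v = vel(t + h/2, x + h/2*k1u, y + h/2*k1v)
        k3u, k3v = vel(t + h/2, x + h/2*k2u, y + h/2*k2v)
        k4u, k4v = vel(t + h,   x + h*k3u,   y + h*k3v)
        x = x + h/6*(k1u + 2*k2u + 2*k3u + k4u)
        y = y + h/6*(k1v + 2*k2v + 2*k3v + k4v)
    return x, y

T = 10.0
X, Y = np.meshgrid(np.linspace(0, 2, 201), np.linspace(0, 1, 101))
fx, fy = flow_map(X, Y, T=T)
dx, dy = X[0, 1] - X[0, 0], Y[1, 0] - Y[0, 0]

# Flow-map gradient by central differences, Cauchy-Green tensor C, and
# FTLE = log(lambda_max(C)) / (2|T|)
J11 = np.gradient(fx, dx, axis=1); J12 = np.gradient(fx, dy, axis=0)
J21 = np.gradient(fy, dx, axis=1); J22 = np.gradient(fy, dy, axis=0)
C11, C12, C22 = J11**2 + J21**2, J11*J12 + J21*J22, J12**2 + J22**2
lmax = 0.5*(C11 + C22) + np.sqrt(0.25*(C11 - C22)**2 + C12**2)
ftle = np.log(np.maximum(lmax, 1e-30))/(2*T)
```

Plotting `ftle` (e.g. with a contour map) reveals the ridge that separates the two gyres, which cannot be seen from any instantaneous streamline plot of this unsteady field.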
<p>The first portion of this thesis deals with the theoretical development of LCS for two- and then n-dimensional systems, where n > 2. Based on the definitions presented, some important properties of these structures are proven. It is shown that the flux across an LCS is typically very small and depends on the relative strength of the structure, the difference between the local rotation rate of the LCS and that of the Eulerian velocity field, and the integration time used to compute the FTLE field.</p>
<p>The second portion of the thesis presents a series of numerical studies in which LCS are used to examine a range of interesting applications. This portion is bridged with the theoretical development presented in the first half by a brief chapter describing the numerical computation of FTLE fields and LCS. Applications presented in the second half of the thesis include the study of vortex rings in which LCS are used to define the unsteady vortex boundary to clarify the entrainment and detrainment processes; the computation of LCS in the ocean to provide mesoscale separatrices that help characterize the flow conditions and help navigate gliders or drifters used for sampling; flow over an airfoil where an LCS captures the unsteady separation profile; flow through a micro-mixing channel where LCS reveal the mechanism and geometry of chaotic mixing.</p>
https://thesis.library.caltech.edu/id/eprint/1755

Curvelets, Wave Atoms, and Wave Equations
https://resolver.caltech.edu/CaltechETD:etd-05262006-133555
Authors: Laurent Demanet (ORCID: 0000-0001-7052-5097)
Year: 2006
DOI: 10.7907/1TEF-RQ51
<p>We argue that two specific wave packet families---curvelets and wave atoms---provide powerful tools for representing linear systems of hyperbolic differential equations with smooth, time-independent coefficients. In both cases, we prove that the matrix representation of the Green's function is sparse in the sense that the matrix entries decay nearly exponentially fast (i.e., faster than any negative polynomial), and well organized in the sense that the very few nonnegligible entries occur near a few shifted diagonals, whose location is predicted by geometrical optics.</p>
<p>This result holds only when the basis elements obey a precise parabolic balance between oscillations and support size, shared by curvelets and wave atoms but not wavelets, Gabor atoms, or any other such transform.</p>
<p>A physical interpretation of this result is that curvelets may be viewed as coherent waveforms with enough frequency localization to behave like waves and, at the same time, enough spatial localization to behave like particles.</p>
<p>We also provide fast digital implementations of tight frames of curvelets and wave atoms in two dimensions. In both cases the complexity is O(N² log N) flops for N-by-N Cartesian arrays, for forward as well as inverse transforms.</p>
<p>Finally, we present a geometric strategy based on wave atoms for the numerical solution of wave equations in smoothly varying, 2D time-independent periodic media. Our algorithm is based on the sparsity of the matrix representation of the Green's function, as above, and also exploits its low-rank block structure after separation of the spatial indices. As a result, it becomes realistic to accurately build the full matrix exponential using repeated squaring, up to some time which is much larger than the CFL timestep. Once available, the wave atom representation of the Green's function can be used to perform 'upscaled' timestepping.</p>
<p>We show numerical examples and prove complexity results based on a priori estimates of sparsity and separation ranks. They beat the O(N³) bottleneck on an N-by-N grid, for a wide range of physically relevant situations. In practice, the current wave atom solver can become competitive with a pseudospectral method in the regime where the wave equation must be solved several times with different initial conditions, as in reflection seismology.</p>
https://thesis.library.caltech.edu/id/eprint/2112

Wiener Chaos Expansion and Numerical Solutions of Stochastic Partial Differential Equations
https://resolver.caltech.edu/CaltechETD:etd-05182006-173710
Authors: Wuan Luo
Year: 2006
DOI: 10.7907/RPKX-BN02
<p>Stochastic partial differential equations (SPDEs) are important tools in modeling complex phenomena, and they arise in many physics and engineering applications. Developing efficient numerical methods for simulating SPDEs is an important yet challenging research topic. In this thesis, we study a numerical method based on the Wiener chaos expansion (WCE) for solving SPDEs driven by Brownian motion forcing. WCE represents a stochastic solution as a spectral expansion with respect to a set of random basis functions. By deriving a governing equation for the expansion coefficients, we can reduce a stochastic PDE to a system of deterministic PDEs and separate the randomness from the computation. All the statistical information of the solution can be recovered from the deterministic coefficients using very simple formulae.</p>
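The Galerkin mechanics behind such chaos expansions can be illustrated with a single Gaussian variable in place of Brownian forcing. The sketch below (an assumed model problem, not from the thesis) expands the solution of du/dt = -(κ₀ + σξ)u in probabilists' Hermite polynomials of ξ, integrates the resulting deterministic coefficient system, and recovers the mean and variance from the coefficients, checking against the closed-form answers.

```python
import math
import numpy as np

# Hermite (polynomial chaos) Galerkin for the random ODE
#   du/dt = -(kappa0 + sigma*xi) u,  u(0) = 1,  xi ~ N(0,1).
# A one-variable analogue of the Wiener chaos idea: expand
# u(t) = sum_k u_k(t) He_k(xi) and derive deterministic ODEs for the u_k.
# Parameter values are illustrative, not from the thesis.
kappa0, sigma, K, T = 1.0, 0.2, 8, 1.0

def rhs(u):
    # Galerkin projection, using xi*He_k = He_{k+1} + k*He_{k-1}
    du = -kappa0*u
    du[1:]  -= sigma*u[:-1]                       # u_{k-1} contribution
    du[:-1] -= sigma*np.arange(1, K + 1)*u[1:]    # (k+1) u_{k+1} contribution
    return du

u = np.zeros(K + 1)
u[0] = 1.0
h, nsteps = 1e-3, 1000
for _ in range(nsteps):                            # RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + h/2*k1)
    k3 = rhs(u + h/2*k2); k4 = rhs(u + h*k3)
    u = u + h/6*(k1 + 2*k2 + 2*k3 + k4)

# Statistics recovered directly from the deterministic coefficients,
# using E[He_j He_k] = k! delta_jk
mean = u[0]
var = sum(math.factorial(k)*u[k]**2 for k in range(1, K + 1))
exact_mean = np.exp(-kappa0*T + 0.5*(sigma*T)**2)
exact_var = np.exp(-2*kappa0*T)*(np.exp(2*(sigma*T)**2) - np.exp((sigma*T)**2))
```

The randomness never enters the time stepping: only the small deterministic system for the coefficients is integrated, and the mean and variance fall out of simple formulae in the coefficients, mirroring the WCE workflow described above.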
<p>We apply the WCE-based method to solve stochastic Burgers equations, Navier-Stokes equations and nonlinear reaction-diffusion equations with either additive or multiplicative random forcing. Our numerical results demonstrate convincingly that the new method is much more efficient and accurate than MC simulations for solutions in short to moderate time. For a class of model equations, we prove the convergence rate of the WCE method. The analysis also reveals precisely how the convergence constants depend on the size of the time intervals and the variability of the random forcing. Based on the error analysis, we design a sparse truncation strategy for the Wiener chaos expansion. The sparse truncation can reduce the dimension of the resulting PDE system substantially while retaining the same asymptotic convergence rates.</p>
<p>For long time solutions, we propose a new computational strategy where MC simulations are used to correct the unresolved small scales in the sparse Wiener chaos solutions. Numerical experiments demonstrate that the WCE-MC hybrid method can handle SPDEs in much longer time intervals than the direct WCE method can. The new method is shown to be much more efficient than the WCE method or the MC simulation alone in relatively long time intervals. However, the limitation of this method is also pointed out.</p>
<p>Using the sparse WCE truncation, we can resolve the probability distributions of a stochastic Burgers equation numerically and provide direct evidence for the existence of a unique stationary measure. Using the WCE-MC hybrid method, we can simulate the long time front propagation for a reaction-diffusion equation in random shear flows. Our numerical results confirm the conjecture by Jack Xin that the front propagation speed obeys a quadratic enhancing law.</p>
<p>Using the machinery we have developed for the Wiener chaos method, we resolve a few technical difficulties in solving stochastic elliptic equations by the Karhunen-Loève-based polynomial chaos method. We further derive an upscaling formulation for the elliptic system of the Wiener chaos coefficients. Finally, we apply the upscaled Wiener chaos method for uncertainty quantification in subsurface modeling, combined with a two-stage Markov chain Monte Carlo sampling method we have developed recently.</p>
https://thesis.library.caltech.edu/id/eprint/1861

Structure and Evolution of Martensitic Phase Boundaries
https://resolver.caltech.edu/CaltechETD:etd-05292007-211950
Authors: Patrick Werner Dondl (patrick.dondl@mathematik.uni-freiburg.de; ORCID: 0000-0003-3035-7230)
Year: 2007
DOI: 10.7907/89AW-3S87
<p>This work examines two major aspects of martensitic phase boundaries. The first part studies numerically the deformation of thin films of shape memory alloys by using subdivision surfaces for discretization. These films have gained interest for their possible use as actuators in microscale electro-mechanical systems, specifically in a pyramid-shaped configuration. The study of such configurations requires adequate resolution of the regions of high strain gradient that emerge from the interplay of the multi-well strain energy and the penalization of the strain gradient through a surface energy term. This surface energy term also requires the spatial numerical discretization to be of higher regularity, i.e., it needs to be continuously differentiable. This excludes the use of a piecewise linear approximation. It is shown in this thesis that subdivision surfaces provide an attractive tool for the numerical examination of thin phase-transforming structures. We also provide insight into the properties of such tent-like structures.</p>
<p>The second part of this thesis examines the question of how the rate-independent hysteresis that is observed in martensitic phase transformations can be reconciled with the linear kinetic relation linking the evolution of domains to the thermodynamic driving force on a microscopic scale. A sharp interface model for the evolution of martensitic phase boundaries, including full elasticity, is proposed. The existence of a solution for this coupled problem, a free discontinuity evolution coupled to an elliptic equation, is proved. Numerical studies using this model show the pinning of a phase boundary by precipitates of non-transforming material. This pinning is the first step in a stick-slip behavior and therefore in a rate-independent hysteresis.</p>
<p>In an approximate model, the existence of a critical pinning force as well as the existence of solutions traveling with an average velocity are proved rigorously. For this shallow phase boundary approximation, the depinning behavior is studied numerically. We find a universal power-law linking the driving force to the average velocity of the interface. For a smooth local force due to an inhomogeneous but periodic environment we find a critical exponent of 1/2.</p>
https://thesis.library.caltech.edu/id/eprint/2251

Metric Based Upscaling for Partial Differential Equations with a Continuum of Scales
https://resolver.caltech.edu/CaltechETD:etd-05162007-172755
Authors: Lei Zhang (mail4lei@gmail.com; ORCID: 0000-0002-2917-9652)
Year: 2007
DOI: 10.7907/AZ06-4B54
<p>Numerical upscaling of problems with multiple-scale structures has attracted increasing attention in recent years. In particular, problems with non-separable scales pose a great challenge to mathematical analysis and simulation. Most existing methods are based either on the assumption of scale separation or on heuristic arguments.</p>
<p>In this thesis, we present rigorous results on homogenization of partial differential equations with L<sup>∞</sup> coefficients which allow for a continuum of spatial and temporal scales. We propose a new type of compensation phenomena for elliptic, parabolic, and hyperbolic equations. The main idea is the use of the so-called "harmonic coordinates" ("caloric coordinates" in the parabolic case). In these coordinates, the solutions of these differential equations have one more degree of differentiability. It follows from this compensation phenomenon that numerical homogenization methods formulated as oscillating finite elements can converge in the presence of a continuum of scales, if one uses global caloric coordinates to obtain the test functions instead of using solutions of a local cell problem.</p>
https://thesis.library.caltech.edu/id/eprint/1841

Hamilton-Pontryagin Integrators on Lie Groups
https://resolver.caltech.edu/CaltechETD:etd-06052007-153115
Authors: Nawaf Mohammed Bou-Rabee (nawaf.bourabee@rutgers.edu; ORCID: 0000-0001-9280-9808)
Year: 2007
DOI: 10.7907/0EC4-2042
<p>In this thesis structure-preserving time integrators for mechanical systems whose configuration space is a Lie group are derived from a Hamilton-Pontryagin (HP) variational principle. In addition to its attractive properties for degenerate mechanical systems, the HP viewpoint also affords a practical way to design discrete Lagrangians, which are the cornerstone of variational integration theory. The HP principle states that a mechanical system traverses a path that extremizes an HP action integral. The integrand of the HP action integral consists of two terms: the Lagrangian and a kinematic constraint paired with a Lagrange multiplier (the momentum). The kinematic constraint relates the velocity of the mechanical system to a curve on the tangent bundle. This form of the action integral makes it amenable to discretization.</p>
<p>In particular, our strategy is to implement an s-stage Runge-Kutta-Munthe-Kaas (RKMK) discretization of the kinematic constraint. We are motivated by the fact that the theory, order conditions, and implementation of such methods are mature. In analogy with the continuous system, the discrete HP action sum consists of two parts: a weighted sum of the Lagrangian using the weights from the Butcher tableau of the RKMK scheme, and a pairing between a discrete Lagrange multiplier (the discrete momentum) and the discretized kinematic constraint. In the vector space context, it is shown that this strategy yields a well-known class of symplectic partitioned Runge-Kutta methods including the Lobatto IIIA-IIIB pair which generalize to higher-order accuracy.</p>
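The 2-stage member of that Lobatto IIIA-IIIB family is the classical Störmer-Verlet scheme. A minimal sketch (harmonic oscillator with an assumed step size; an illustration, not from the thesis) shows the qualitative difference its symplecticity makes compared with explicit Euler:

```python
import numpy as np

# Stormer-Verlet (the 2-stage Lobatto IIIA-IIIB partitioned RK pair)
# versus non-symplectic explicit Euler on H = (p^2 + q^2)/2.
def grad(q):
    return q                       # dV/dq for V = q^2/2

def verlet(q, p, h, n):
    for _ in range(n):
        p = p - h/2*grad(q)        # half kick
        q = q + h*p                # drift
        p = p - h/2*grad(q)        # half kick
    return q, p

def explicit_euler(q, p, h, n):
    for _ in range(n):
        q, p = q + h*p, p - h*grad(q)
    return q, p

def H(q, p):
    return 0.5*(p*p + q*q)

q0, p0, h, n = 1.0, 0.0, 0.1, 10000
qv, pv = verlet(q0, p0, h, n)
qe, pe = explicit_euler(q0, p0, h, n)
```

After 10,000 steps the Verlet energy still oscillates within O(h²) of its initial value 0.5, while the explicit Euler energy grows by a factor (1 + h²) every step and blows up; this bounded energy error is the hallmark of the symplectic partitioned Runge-Kutta methods discussed above.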
<p>In the Lie group context, the strategy yields an interesting and novel family of variational partitioned Runge-Kutta methods. Specifically, for mechanical systems on Lie groups we analyze the ideal context of Euler-Poincaré (EP) systems. For such systems the HP principle can be transformed from the Pontryagin bundle to a reduced space. To set up the discrete theory, a continuous reduced HP principle is also analyzed. It is this reduced HP principle that we apply our discretization strategy to. The resulting integrator describes an update scheme on the reduced space. As in RKMK, we parametrize the Lie group using coordinate charts whose model space is the Lie algebra and that approximate the exponential map. Since the Lie group is non-abelian, the structure of these integrators is not the same as in the vector space context.</p>
<p>We carry out an in-depth study of the simplest integrators within this family that we call variational Euler integrators; specifically we analyze the integrator's efficiency, global error, and geometric properties. Because of their variational character, the variational Euler integrators preserve a discrete momentum map and symplectic form. Moreover, since the update on the configuration space is explicit, the configuration updates exhibit no drift from the Lie group. We also prove that the global error of these methods is second order. Numerical experiments on the free rigid body and the chaotic dynamics of an underwater vehicle reveal that these reduced variational integrators possess structure-preserving properties that methods designed to preserve momentum (using the coadjoint action of the Lie group) and energy (for example, by projection) lack.</p>
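A minimal first-order sketch in this spirit (not the thesis's variational Euler scheme; the inertia tensor and step size are assumed values) updates the free rigid body's momentum by the rotation exp(-h·ω̂), which reproduces the Euler equations π̇ = π × ω to first order while preserving the coadjoint orbit |π| exactly:

```python
import numpy as np

# Lie-Poisson sketch for the free rigid body: pi' = pi x omega,
# omega = I^{-1} pi.  Each step applies the exact rotation
# pi <- exp(-h*hat(omega)) pi, so the momentum-sphere radius |pi|
# (the coadjoint orbit) never drifts, in contrast to a generic ODE solver.
I = np.array([1.0, 2.0, 3.0])          # assumed principal moments of inertia

def rotate(axis, angle, v):
    # Rodrigues' formula: rotate v about the unit vector `axis`
    return (v*np.cos(angle) + np.cross(axis, v)*np.sin(angle)
            + axis*np.dot(axis, v)*(1 - np.cos(angle)))

def step(pi, h):
    w = pi/I                           # body angular velocity
    n = np.linalg.norm(w)
    return pi if n < 1e-15 else rotate(w/n, -h*n, pi)

pi = np.array([0.2, 1.0, 0.5])
L0 = np.linalg.norm(pi)
h = 0.01
for _ in range(20000):
    pi = step(pi, h)
drift = abs(np.linalg.norm(pi) - L0)
```

A Runge-Kutta method applied to the same equations would let the trajectory leave the momentum sphere; here the update is a rotation by construction, so the only drift in |π| is floating-point roundoff, illustrating the structure-preservation theme of this chapter.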
<p>In addition we discuss how the HP integrators extend to a wider class of mechanical systems with, e.g., configuration-dependent potentials and nontrivial shape-space dynamics.</p>
https://thesis.library.caltech.edu/id/eprint/2465

Low-Dimensional Representations of Transitions in Molecular Systems
https://resolver.caltech.edu/CaltechETD:etd-06062008-171210
Authors: Katalin Anna Grubits
Year: 2008
DOI: 10.7907/YMTC-WQ16
<p>A major difficulty in modeling molecular systems is that the number of dimensions, even for a small system, is commonly too large for computation to be feasible. To overcome this challenge, a combination of lower-dimensional representations of the system and improved computational methods are required. In this thesis, we investigate techniques to achieve both of these aims via three model problems.</p>
<p>By exploiting an understanding of the mechanism and dynamics of reaction in the systems considered, we attain a low-dimensional description of the transition that captures the essential dynamics. For the ionization of a Rydberg atom we utilize concepts from dynamical systems theory that reveal the geometric structures in phase space that mediate the reaction. The gyration radius formalism captures the kinematic effects of the secondary particles in a coarse variable that reduces the number of dimensions of the model, thereby providing a simple description of our methane and oxygen dissociation example. These methods are applicable more generally and provide a coarse model of chemical reactions that can be combined with efficient computational tools, such as the set-oriented method employed in our Rydberg example, to efficiently compute reaction rates of previously difficult problems.</p>
<p>The third model problem considered is the self-assembly of particles into an ordered lattice configuration under the influence of an isotropic inter-particle potential. With the aim of characterizing the transition from a disordered to an ordered state, we develop metrics that assess the quality of the lattice with respect to the target lattice configuration. The five metrics presented use a single number to quantify the order within this large system of particles. We explore numerous applications of these quality assessment tools, in particular, finding the optimal potential for self-assembly. The very noisy, highly variable nature of our expensive-to-evaluate objective function prompted the development of a trend optimization algorithm that efficiently minimizes the objective function, using upper and lower envelopes that are responsible for the robustness of the method and the solution. This trend optimization scheme is widely applicable to problems in other fields.</p>
https://thesis.library.caltech.edu/id/eprint/5223

A Super-Algebraically Convergent, Windowing-Based Approach to the Evaluation of Scattering from Periodic Rough Surfaces
https://resolver.caltech.edu/CaltechETD:etd-01032008-222910
Authors: John Anderson Monro
Year: 2008
DOI: 10.7907/F9VM-JP39
<p>We introduce a new second-kind integral equation method to solve direct rough surface scattering problems in two dimensions. This approach is based, in part, upon the bounded obstacle scattering method that was originally presented in Bruno et al. [2004] and is discussed in an appendix of this thesis. We restrict our attention to problems in which time-harmonic acoustic or electromagnetic plane waves scatter from rough surfaces that are perfectly reflecting, periodic and at least twice continuously differentiable; both sound-soft and sound-hard type acoustic scattering cases---correspondingly, transverse-electric and transverse-magnetic electromagnetic scattering cases---are treated. Key elements of our algorithm include the use of infinitely continuously differentiable windowing functions that comprise partitions of unity, analytical representations of the integral equation’s solution (taking into account either the absence or presence of multiple scattering) and spectral quadrature formulas. Together, they provide an efficient alternative to the use of the periodic Green’s function found in the kernel of most solvers’ integral operators, and they strongly mitigate the rapidly increasing computational complexity that is typically borne as the frequency of the incident field increases.</p>
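The effect of such windowing can be seen on a scalar model problem. The sketch below (an illustration of the windowing idea on ∫₀^∞ sin(x)/x dx = π/2, not the solver itself) compares abrupt truncation with truncation by an infinitely differentiable window of the bump-function type:

```python
import numpy as np

# Smooth windowed truncation vs. abrupt truncation of the slowly convergent
# oscillatory integral  int_0^inf sin(x)/x dx = pi/2.  The C-infinity step
# below is one standard bump-based choice; all sizes are assumed values.
def smooth_step(x, a, b):
    # equals 1 for x <= a, 0 for x >= b, infinitely differentiable throughout
    w = np.ones_like(x)
    w[x >= b] = 0.0
    m = (x > a) & (x < b)
    u = (x[m] - a)/(b - a)
    w[m] = np.exp(2.0*np.exp(-1.0/u)/(u - 1.0))
    return w

A = 200.0
x = np.linspace(0.0, A, 400001)
h = x[1] - x[0]
f = np.sinc(x/np.pi)                 # sin(x)/x, with the value 1 at x = 0

def trap(g):                         # uniform-grid trapezoid rule
    return (np.sum(g) - 0.5*(g[0] + g[-1]))*h

plain = trap(f)                      # abrupt truncation at x = A
windowed = trap(f*smooth_step(x, A/2, A))
err_plain = abs(plain - np.pi/2)
err_win = abs(windowed - np.pi/2)
```

Cutting the integrand off abruptly leaves an error of order 1/A from the boundary term, while blending it to zero with the smooth window kills every term of the asymptotic expansion, so the windowed error decays faster than any negative power of the window width; this is the super-algebraic behaviour the solver exploits in place of the slowly convergent periodic Green's function.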
<p>After providing a complete description of our solver and illustrating its usefulness through some preliminary examples, we rigorously prove its convergence. In particular, the super-algebraic convergence of the method is established for problems with infinitely continuously differentiable scattering surfaces. We additionally show that accuracies within prescribed tolerances are achieved with fixed computational cost as the frequency increases without bound for cases in which no multiple reflections occur.</p>
<p>We present extensive numerical data demonstrating the convergence, accuracy and efficiency of our computational approach for a wide range of scattering configurations (sinusoidal, multi-scale and simulated ocean surfaces are considered). These results include favorable comparisons with other leading integral equation methods as well as the non-convergent Kirchhoff approximation. They also contain analyses of sets of cases in which the major physical parameters associated with these problems (i.e., surface height, wavenumber and incidence angle) are systematically varied. As a result of these tests, we conclude that the proposed algorithm is highly competitive and robust: it significantly outperforms other leading numerical methods in many cases of scientific and practical relevance, and it facilitates rapid analyses of a wide variety of scattering configurations.</p>
https://thesis.library.caltech.edu/id/eprint/19

Discrete Geometric Homogenisation and Inverse Homogenisation of an Elliptic Operator
https://resolver.caltech.edu/CaltechETD:etd-05212008-164705
Authors: Roger David Donaldson
Year: 2008
DOI: 10.7907/S4S7-8T31
We show how to parameterise a homogenised conductivity in R² by a scalar function s(x), despite the fact that the conductivity parameter in the related up-scaled elliptic operator is typically tensor valued. Ellipticity of the operator is equivalent to strict convexity of s(x), and with consideration to mesh connectivity, this equivalence extends to discrete parameterisations over triangulated domains. We apply the parameterisation in three contexts: (i) sampling s(x) produces a family of stiffness matrices representing the elliptic operator over a hierarchy of scales; (ii) the curvature of s(x) directs the construction of meshes well-adapted to the anisotropy of the operator, improving the conditioning of the stiffness matrix and the interpolation properties of the mesh; and (iii) using electric impedance tomography to reconstruct s(x) recovers the up-scaled conductivity, which, while anisotropic, is unique. Extensions of the parameterisation to R³ are introduced.
https://thesis.library.caltech.edu/id/eprint/1928

Nonparametric Detection and Estimation of Highly Oscillatory Signals
https://resolver.caltech.edu/CaltechETD:etd-05112008-152328
Authors: Hannes Helgason (hannes.helgason@gmail.com)
Year: 2008
DOI: 10.7907/SAEK-MV13
<p>This thesis considers the problem of detecting and estimating highly oscillatory signals from noisy measurements. These signals are often referred to as chirps in the literature; they are found everywhere in nature, and frequently arise in scientific and engineering problems. Mathematically, they can be written in the general form A(t)·exp(iλφ(t)), where λ is a large constant base frequency, the phase φ(t) is time-varying, and the envelope A(t) is slowly varying. Given a sequence of noisy measurements, we study two problems separately: 1) the problem of testing whether or not there is a chirp hidden in the noisy data, and 2) the problem of estimating this chirp from the data.</p>
<p>This thesis introduces novel, flexible, and practical strategies for addressing these important nonparametric statistical problems. The main idea is to calculate correlations of the data with a rich family of local templates, the multiscale chirplets, in a first step, and in a second step to search for meaningful aggregations or chains of chirplets which provide a good global fit to the data. From a physical viewpoint, these chains correspond to realistic signals since they model arbitrary chirps. From an algorithmic viewpoint, these chains are identified as paths in a convenient graph. The key point is that this underlying graph structure makes it possible to apply very effective algorithms, such as network flow algorithms, to find those chains which achieve a near-optimal trade-off between goodness of fit and complexity.</p>
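A toy version of this pipeline (simplified scoring and graph, assumed signal parameters; not the thesis's statistics) correlates the data with local chirplet templates and then chains templates across windows by dynamic programming, requiring the instantaneous frequency to connect from one window to the next:

```python
import numpy as np

# Detect a linear chirp (50 Hz -> 250 Hz over one second at 1024 Hz)
# by chirplet correlations per window plus a best-chain search.
rng = np.random.default_rng(0)
n, W = 1024, 8                        # samples, analysis windows
L = n // W
t = np.arange(n)/n
chirp = np.cos(2*np.pi*(50.0*t + 0.5*200.0*t**2))
noise = rng.standard_normal(n)

freqs = np.arange(0.0, 401.0, 25.0)   # template start frequencies
slopes = np.arange(-400.0, 401.0, 100.0)
tau = np.arange(L)/n                  # local time within one window
dt = L/n

def scores(y):
    # correlation of each data window with every chirplet template
    S = np.zeros((W, freqs.size, slopes.size))
    for w in range(W):
        seg = y[w*L:(w + 1)*L]
        for i, f in enumerate(freqs):
            for j, b in enumerate(slopes):
                tmpl = np.exp(2j*np.pi*(f*tau + 0.5*b*tau**2))
                S[w, i, j] = abs(np.vdot(tmpl, seg))**2/L
    return S

def best_chain(S):
    # dynamic programming over the chirplet graph: a chirplet in window w+1
    # may follow one in window w if its start frequency matches the
    # predecessor's end frequency to within one grid step
    fend = freqs[:, None] + slopes[None, :]*dt
    best = S[0].copy()
    for w in range(1, W):
        new = np.full_like(best, -np.inf)
        for i, f in enumerate(freqs):
            mask = np.abs(fend - f) <= 25.0
            if mask.any():
                new[i, :] = S[w, i, :] + best[mask].max()
        best = new
    return best.max()

stat_signal = best_chain(scores(chirp + noise))
stat_noise = best_chain(scores(noise))
```

The chained statistic for the chirp-plus-noise data far exceeds that of pure noise, because only a genuine chirp offers a frequency-continuous path of high correlations; the thesis replaces this brute-force chaining with network flow algorithms and a complexity-penalized fit.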
<p>Our estimation procedures provide provably near-optimal performance over a wide range of chirps, and numerical experiments show that both our detection and estimation procedures perform exceptionally well over a broad class of chirps. This thesis also introduces general strategies for extracting signals of unknown duration from long streams of data when we have no idea where these signals may be. The approach leverages testing methods designed to detect the presence of signals with known time support.</p>
<p>Underlying our methods is a general abstraction which postulates an abstract statistical problem of detecting paths in graphs which have random variables attached to their vertices. The formulation of this problem was inspired by our chirp detection methods and is of great independent interest.</p>
https://thesis.library.caltech.edu/id/eprint/1726

A Subdivision Approach to the Construction of Smooth Differential Forms
https://resolver.caltech.edu/CaltechETD:etd-02282008-112022
Authors: Ke Wang
Year: 2008
DOI: 10.7907/K0DE-9P44
<p>Vertex- and face-based subdivision schemes are now routinely used in geometric modeling and computational science, and their primal/dual relationships are well studied. In this thesis we interpret these schemes as defining bases for discrete differential 0- and 2-forms, respectively, and present a novel subdivision-based method of constructing smooth differential forms on simplicial surfaces. It completes the picture of classic primal/dual subdivision by introducing a new concept, r-cochain subdivision. Such subdivision schemes map scalar coefficients on r-simplexes from the coarse to the refined mesh and converge to r-forms on the mesh. We perform convergence and smoothness analysis in an arbitrary-topology setting by utilizing the techniques of matrix subdivision and the subdivision differential structure.</p>
<p>The other significance of our method is its preserving exactness of differential forms. We prove that exactness preserving is equivalent to the commutative relations between the subdivision schemes and the topological exterior derivative. Our construction is based on treating r- and (r+1)-cochain subdivision schemes as a pair and enforcing the commutative relations. As a result, our low-order construction recovers classic Whitney forms, while the high-order construction yields a new class of high order Whitney forms. The 1-form bases are C^1, except at irregular vertices where they are C^0. We also demonstrate extensions to three-dimensional subdivision schemes and non-simplicial meshes as well, such as quadrilaterals and octahedra.</p>
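The commutative relation can be checked directly in the simplest setting of closed polygons, where cubic B-spline subdivision of vertex values induces a halved Chaikin scheme on edge differences (a 1D curve analogue for illustration, not the thesis's surface construction):

```python
import numpy as np

# Exactness preservation in 1D: the 0-cochain scheme S0 (cubic B-spline
# subdivision of vertex values on a closed polygon) and the induced
# 1-cochain scheme S1 on edge differences commute with the topological
# difference operator d, i.e. d(S0 v) = S1(d v).
def S0(v):
    even = (np.roll(v, 1) + 6*v + np.roll(v, -1))/8.0   # new vertex points
    odd = (v + np.roll(v, -1))/2.0                      # new edge points
    out = np.empty(2*v.size)
    out[0::2], out[1::2] = even, odd
    return out

def S1(d):
    # quadratic B-spline (Chaikin) weights, scaled by 1/2 for the finer edges
    a = (np.roll(d, 1) + 3*d)/8.0
    b = (3*d + np.roll(d, -1))/8.0
    out = np.empty(2*d.size)
    out[0::2], out[1::2] = a, b
    return out

def d(v):                                               # cyclic difference
    return np.roll(v, -1) - v

rng = np.random.default_rng(1)
v = rng.standard_normal(12)
commute_err = np.max(np.abs(d(S0(v)) - S1(d(v))))
```

Because d∘S0 = S1∘d holds exactly, an exact 1-cochain (one arising as d of vertex values) stays exact under refinement; in particular its cyclic sum remains zero at every subdivision level, which is the 1D shadow of the exactness-preserving pairing discussed above.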
<p>Our construction is seamlessly integrated with surface subdivision. Once a metric is supplied, the scalar 1-form coefficients define a smooth tangent vector field on the underlying subdivision surface. Design of tangent vector fields is made particularly easy with this machinery, as we demonstrate. The subdivision r-forms can also be used as finite element bases for physical simulations on curved surfaces. We demonstrate the optimal rate of convergence in solving the Laplace and bi-Laplace equations of 1-forms.</p>
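The coarse-to-fine mapping of scalar coefficients that underlies such schemes can be sketched with the classic cubic B-spline subdivision rules on a closed polygon (a standard vertex/0-form scheme with a C^2 limit curve, used here purely as an illustration; the r-cochain constructions of the thesis are more elaborate):

```python
import numpy as np

# Cubic B-spline subdivision on a closed polygon: each pass maps coarse
# vertex coefficients to a refined set, and the limit curve is C^2.
def subdivide(pts):
    n = len(pts)
    refined = []
    for i in range(n):
        prev, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
        refined.append((prev + 6 * cur + nxt) / 8)  # repositioned vertex
        refined.append((cur + nxt) / 2)             # new edge midpoint
    return np.array(refined)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # a square
for _ in range(4):
    pts = subdivide(pts)
print(len(pts), pts.mean(axis=0))  # 64 points; centroid stays at (0.5, 0.5)
```

Each pass doubles the number of control points while preserving their centroid, and the polygon converges to a smooth closed curve.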
https://thesis.library.caltech.edu/id/eprint/812The Optimal Transportation Meshfree Method for General Fluid Flows and Strongly Coupled Fluid-Structure Interaction Problems
https://resolver.caltech.edu/CaltechETD:etd-06012009-104937
Authors: {'items': [{'email': 'fhabbal@ices.utexas.edu', 'id': 'Habbal-Feras', 'name': {'family': 'Habbal', 'given': 'Feras'}, 'show_email': 'NO'}]}
Year: 2009
DOI: 10.7907/MHQX-3Z52
This thesis develops a novel meshfree numerical method for simulating general fluid flows. Drawing on concepts from optimal mass transport theory, in combination with material point sampling and meshfree interpolation, the optimal transport meshfree (OTM) method provides a rigorous mathematical framework for numerically simulating three-dimensional general fluid flows with general, and possibly moving boundaries (as in fluid-structure interaction simulations). Specifically, the proposed OTM method generalizes the Benamou-Brenier differential formulation of optimal mass transportation problems, which leads to a multi-field variational characterization of general fluid flows including viscosity, equations of state and general geometries and boundary conditions. With the use of material point sampling in conjunction with local max-entropy shape functions, the OTM method leads to a meshfree formulation bearing a number of salient features. Compared with other meshfree methods, which face significant challenges in enforcing essential boundary conditions and in coupling to other methods such as the finite element method, the OTM method provides a new paradigm in meshfree methods. The OTM method is numerically validated by simulating the classical Riemann benchmark example for Euler flow. Furthermore, in order to highlight the ability of the OTM to simulate Navier-Stokes flows within general, moving three-dimensional domains, and naturally couple with finite elements, an illustrative strongly coupled FSI example is simulated. This illustrative FSI example, consisting of a gas-inflated sphere impacting the ground, is simulated as a toy model of the final phase of NASA's landing scheme devised for Mars missions, where a network of airbags is deployed to dissipate the energy of impact.
https://thesis.library.caltech.edu/id/eprint/5220Sparse Recovery via Convex Optimization
https://resolver.caltech.edu/CaltechETD:etd-05292009-152544
Authors: {'items': [{'email': 'paige@caltech.edu', 'id': 'Randall-Paige-Alicia', 'name': {'family': 'Randall', 'given': 'Paige Alicia'}, 'show_email': 'NO'}]}
Year: 2009
DOI: 10.7907/3Z65-A925
<p>This thesis considers the problem of estimating a sparse signal from a few (possibly noisy) linear measurements. In other words, we have y = Ax + z where A is a measurement matrix with more columns than rows, x is a sparse signal to be estimated, z is a noise vector, and y is a vector of measurements. This setup arises frequently in many problems ranging from magnetic resonance imaging to genomics to compressed sensing.</p>
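As a minimal, hypothetical instance of this setup (noiseless z = 0, Gaussian A), the classical basis pursuit program min ||x||_1 subject to Ax = y can be recast as a linear program and solved with SciPy; this is a generic illustration, not the specific programs analyzed in the thesis:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, s = 40, 100, 5                      # measurements, signal length, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x_true                            # noiseless measurements (z = 0)

# Basis pursuit min ||x||_1 s.t. Ax = y, as an LP in (u, v) with x = u - v,
# u, v >= 0 and objective sum(u) + sum(v).
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(x_hat - x_true))     # small: recovery succeeds here
```

With this many Gaussian measurements relative to the sparsity, the l_1 program recovers the signal exactly up to solver tolerance.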
<p>We begin by relating our setup to an error correction problem over the reals, where a received encoded message is corrupted by a few arbitrary errors, as well as smaller dense errors. We show that under suitable conditions on the encoding matrix and on the number of arbitrary errors, one is able to accurately recover the message.</p>
<p>We next show that we are able to achieve oracle optimality for x, up to a log factor and a factor of sqrt{s}, when we require the matrix A to obey an incoherence property. The incoherence property is novel in that it allows the coherence of A to be as large as O(1/log n) and still allows sparsities as large as O(m/log n). This is in contrast to other existing results involving coherence where the coherence can only be as large as O(1/sqrt{m}) to allow sparsities as large as O(sqrt{m}). We also do not make the common assumption that the matrix A obeys a restricted eigenvalue condition.</p>
<p>We then show that we can recover a (non-sparse) signal from a few linear measurements when the signal has an exactly sparse representation in an overcomplete dictionary. We again only require that the dictionary obey an incoherence property.</p>
<p>Finally, we introduce the method of l_1 analysis and show that it is guaranteed to give good recovery of a signal from a few measurements, when the signal can be well represented in a dictionary. We require that the combined measurement/dictionary matrix satisfies a uniform uncertainty principle and we compare our results with the more standard l_1 synthesis approach.</p>
<p>All our methods involve solving an l_1 minimization program which can be written as either a linear program or a second-order cone program, and the well-established machinery of convex optimization can be used to solve it rapidly.</p>https://thesis.library.caltech.edu/id/eprint/2279Uncertainty Quantification Using Concentration-of-Measure Inequalities
https://resolver.caltech.edu/CaltechETD:etd-05292009-165215
Authors: {'items': [{'email': 'lenny.lucas@gmail.com', 'id': 'Lucas-Leonard-Joseph', 'name': {'family': 'Lucas', 'given': 'Leonard Joseph'}, 'show_email': 'YES'}]}
Year: 2009
DOI: 10.7907/DRAM-H941
This work introduces a rigorous uncertainty quantification framework that exploits concentration-of-measure inequalities to bound the failure probabilities of engineering systems through a well-defined certification campaign. The framework is constructed to be used as a tool for deciding whether a system is likely to perform safely and reliably within design specifications. Concentration-of-measure inequalities rigorously bound probabilities of failure and thus supply conservative certification criteria, in addition to supplying unambiguous quantitative definitions of terms such as margins, epistemic and aleatoric uncertainties, verification and validation measures, and confidence factors. This methodology unveils clear procedures for computing the latter quantities by means of concerted simulation and experimental campaigns. Extensions to the theory include hierarchical uncertainty quantification, and validation with experimentally uncontrollable random variables.https://thesis.library.caltech.edu/id/eprint/2282Geometric Discretization of Lagrangian Mechanics and Field Theories
https://resolver.caltech.edu/CaltechETD:etd-12312008-173851
Authors: {'items': [{'email': 'astern@acm.caltech.edu', 'id': 'Stern-Ari-Joshua', 'name': {'family': 'Stern', 'given': 'Ari Joshua'}, 'show_email': 'NO'}]}
Year: 2009
DOI: 10.7907/K943-VJ44
This thesis presents a unified framework for geometric discretization of highly oscillatory mechanics and classical field theories, based on Lagrangian variational principles and discrete differential forms. For highly oscillatory problems in mechanics, we present a variational approach to two families of geometric numerical integrators: implicit-explicit (IMEX) and trigonometric methods. Next, we show how discrete differential forms in spacetime can be used to derive a structure-preserving discretization of Maxwell's equations, with applications to computational electromagnetics. Finally, we sketch out some future directions in discrete gauge theory, providing foundations based on fiber bundles and Lie groupoids, as well as discussing applications to discrete Riemannian geometry and numerical general relativity.
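As a small illustration of why variational integrators are attractive for oscillatory mechanics, the classical Störmer-Verlet scheme (a basic variational/symplectic integrator, not the IMEX or trigonometric methods of the thesis) keeps the energy error of a harmonic oscillator bounded over long times:

```python
import numpy as np

# Stormer-Verlet for H = p^2/2 + q^2/2: a symplectic (variational) integrator.
# Unlike explicit Euler, its energy error stays bounded instead of drifting.
def verlet(q, p, h, steps, grad_V=lambda q: q):
    for _ in range(steps):
        p -= 0.5 * h * grad_V(q)   # half kick
        q += h * p                 # drift
        p -= 0.5 * h * grad_V(q)   # half kick
    return q, p

q0, p0 = 1.0, 0.0
E0 = 0.5 * (p0**2 + q0**2)
q, p = verlet(q0, p0, h=0.1, steps=10_000)   # integrate to t = 1000
E = 0.5 * (p**2 + q**2)
print(abs(E - E0))   # small and bounded (O(h^2)) despite the long time span
```

The same qualitative behavior, bounded energy oscillation rather than secular drift, is what the structure-preserving discretizations above aim to retain in far more general settings.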
https://thesis.library.caltech.edu/id/eprint/5173Algorithms for Mapping Nucleic Acid Free Energy Landscapes
https://resolver.caltech.edu/CaltechETD:etd-12312008-153810
Authors: {'items': [{'email': 'othmer@acm.caltech.edu', 'id': 'Othmer-Jonathan-Andrew', 'name': {'family': 'Othmer', 'given': 'Jonathan Andrew'}, 'show_email': 'NO'}]}
Year: 2009
DOI: 10.7907/VJX1-6376
To complement the utility of thermodynamic calculations in the design and analysis of nucleic acid secondary structures, we seek to develop efficient and scalable algorithms for the analysis of secondary structure kinetics. Secondary structure kinetics are modeled by a first-order master equation, but the number of secondary structures for a sequence grows exponentially with the length of the sequence, meaning that for systems of interest, we cannot write down the rate matrix, much less solve the master equation. To address these difficulties, we develop a method to construct macrostate maps of nucleic acid free energy landscapes based on simulating the continuous-time Markov chain associated with the microstate master equation. The method relies on the careful combination of several elements: a novel procedure to explicitly identify transitions between macrostates in the simulation, a goodness-of-clustering test specific to secondary structures, an algorithm to find the centroid secondary structure for each macrostate, a method to compute macrostate partition functions from short simulations, and a framework for computing transition rates with confidence intervals. We use this method to study several experimental systems from our laboratory with system sizes in the hundreds of nucleotides, and develop a model problem, the d-cube, for which we can control all of the relevant parameters and analyze our method's error behavior. Our results and analysis suggest that this method will be useful not only in the analysis and design of nucleic acid mechanical devices, but also in wider applications of molecular simulation and simulation-based model reduction.https://thesis.library.caltech.edu/id/eprint/5172Geometric Interpretation of Physical Systems for Improved Elasticity Simulations
https://resolver.caltech.edu/CaltechTHESIS:11172009-224005473
Authors: {'items': [{'id': 'Kharevych-Liliya', 'name': {'family': 'Kharevych', 'given': 'Liliya'}, 'show_email': 'NO'}]}
Year: 2010
DOI: 10.7907/8ZF3-XN72
<p>The physics of most mechanical systems can be described from a geometric viewpoint; i.e., by defining variational principles that the system obeys and the properties that are being preserved (often referred to as invariants). The methods that arise from properly discretizing such principles preserve corresponding discrete invariants of the mechanical system, even at very coarse resolutions, yielding robust and efficient algorithms. In this thesis geometric interpretations of physical systems are used to develop algorithms for discretization of both space (including proper material discretization) and time. The effectiveness of these algorithms is demonstrated by their application to the simulation of elastic bodies.</p>
<p>Time discretization is performed using variational time integrators that, unlike many of the standard integrators (e.g., Explicit Euler, Implicit Euler, Runge-Kutta), do not introduce artificial numerical energy decrease (damping) or increase. A new physical damping model that does not depend on timestep size is proposed for finite viscoelasticity simulation. When used in conjunction with variational time integrators, this model yields simulations that physically damp the energy of the system, even when timesteps of different sizes are used. The usual root-finding procedure for time update is replaced with an energy minimization procedure, allowing for more precise step size control inside a non-linear solver. Additionally, a study of variational and time-reversible methods for adapting timestep size during the simulation is presented.</p>
<p>Spatial discretization is performed using a finite element approach for finite (non-linear) or linear elasticity. A new method for the coarsening of elastic properties of heterogeneous linear materials is proposed. The coarsening is accomplished through a precomputational procedure that converts the heterogeneous elastic coefficients of the very fine mesh into anisotropic elastic coefficients of the coarse mesh. This method does not depend on the material structure of objects, allowing for complex and non-uniform material structures. Simulation on the coarse mesh, equipped with the resulting elastic coefficients, can then be performed at interactive rates using existing linear elasticity solvers and, if desired, co-rotational methods. A time-reversible integrator is used to improve time integration of co-rotated linear elasticity.</p>
https://thesis.library.caltech.edu/id/eprint/5379Multiscale Geometric Integration of Deterministic and Stochastic Systems
https://resolver.caltech.edu/CaltechTHESIS:05262011-171044915
Authors: {'items': [{'email': 't.t.snail@gmail.com', 'id': 'Tao-Molei', 'name': {'family': 'Tao', 'given': 'Molei'}, 'show_email': 'NO'}]}
Year: 2011
DOI: 10.7907/6J83-7C18
<p>In order to accelerate computations and improve long time accuracy of numerical simulations, this thesis develops multiscale geometric integrators.</p>
<p>For general multiscale stiff ODEs, SDEs, and PDEs, FLow AVeraging integratORs (FLAVORs) have been proposed for the coarse time-stepping without any identification of the slow or the fast variables. In the special case of deterministic and stochastic mechanical systems, symplectic, multisymplectic, and quasi-symplectic multiscale integrators are easily obtained using this strategy.</p>
<p>For highly oscillatory mechanical systems (with quasi-quadratic stiff potentials, possibly high-dimensional), a specialized symplectic method has been devised to provide improved efficiency and accuracy. This method is based on the introduction of two highly nontrivial matrix exponentiation algorithms, which are generic, efficient, and symplectic (if the exact exponential is symplectic).</p>
<p>For multiscale systems with Dirac-distributed fast processes, a family of symplectic, linearly-implicit and stable integrators has been designed for coarse step simulations. An application is the fast and accurate integration of constrained dynamics.</p>
<p>In addition, if one cares about statistical properties of an ensemble of trajectories, but not the numerical accuracy of a single trajectory, we suggest tuning friction and annealing temperature in a Langevin process to accelerate its convergence.</p>
<p>Other works include variational integration of circuits, efficient simulation of a nonlinear wave, and finding optimal transition pathways in stochastic dynamical systems (with a demonstration of mass effects in molecular dynamics).</p>https://thesis.library.caltech.edu/id/eprint/6457Network Coding for Error Correction
https://resolver.caltech.edu/CaltechTHESIS:06032011-153909265
Authors: {'items': [{'email': 'svitlana@caltech.edu', 'id': 'Vyetrenko-Svitlana-S', 'name': {'family': 'Vyetrenko', 'given': 'Svitlana S.'}, 'show_email': 'NO'}]}
Year: 2011
DOI: 10.7907/D2ZM-V541
<p>In this thesis, network error correction is considered from both theoretical and practical viewpoints. Theoretical parameters such as network structure and type of connection (multicast vs. nonmulticast) have a profound effect on network error correction capability. This work is also motivated by practical network issues that arise in wireless ad-hoc networks, networks with limited computational power (e.g., sensor networks) and real-time data streaming systems (e.g., video/audio conferencing or media streaming).</p>
<p>Firstly, multicast network scenarios with probabilistic error and erasure occurrence are considered. In particular, it is shown that in networks with both random packet erasures and errors, increasing the relative occurrence of erasures compared to errors favors network coding over forwarding at network nodes, and vice versa. Also, fountain-like error-correcting codes, for which redundancy is incrementally added until decoding succeeds, are constructed. These codes are appropriate for use in scenarios where the upper bound on the number of errors is unknown a priori.</p>
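The fountain-like idea of adding redundancy until decoding succeeds can be sketched with a toy random linear code over GF(2) (a generic, network-agnostic illustration; the thesis's constructions are more refined):

```python
import numpy as np

# Toy fountain-style random linear code over GF(2): the sender keeps emitting
# random XOR combinations of k source packets; the receiver decodes by
# Gaussian elimination once the coefficient matrix reaches full rank.
def gf2_rref(M, ncols):
    """Row-reduce a binary matrix over its first ncols columns; return (R, rank)."""
    M = M.copy()
    rank = 0
    for col in range(ncols):
        hits = np.nonzero(M[rank:, col])[0]
        if hits.size == 0:
            continue
        piv = rank + hits[0]
        M[[rank, piv]] = M[[piv, rank]]          # move pivot row up
        for r in range(M.shape[0]):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]                  # eliminate over GF(2)
        rank += 1
    return M, rank

rng = np.random.default_rng(5)
k, plen = 8, 16
src = rng.integers(0, 2, (k, plen), dtype=np.uint8)   # k source packets

coeffs = np.zeros((0, k), dtype=np.uint8)
coded = np.zeros((0, plen), dtype=np.uint8)
while gf2_rref(coeffs, k)[1] < k:                     # add redundancy until decodable
    c = rng.integers(0, 2, (1, k), dtype=np.uint8)
    coeffs = np.vstack([coeffs, c])
    coded = np.vstack([coded, (c @ src) % 2])

R, rank = gf2_rref(np.hstack([coeffs, coded]), k)
decoded = R[:k, k:]
print(rank, np.array_equal(decoded, src))
```

Because the number of combinations sent adapts to when decoding first succeeds, no a priori bound on the number of losses is needed, which is the property the abstract highlights.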
<p>Secondly, network error correction in multisource multicast and nonmulticast network scenarios is discussed. Capacity regions for multisource multicast network error correction with both known and unknown topologies (coherent and noncoherent network coding) are derived. Several approaches to lower- and upper-bounding error-correction capacity regions of general nonmulticast networks are given. For 3-layer two-sink and nested-demand nonmulticast network topologies some of the given lower and upper bounds match. For these network topologies, code constructions that employ only intrasession coding are designed. These designs can be applied to streaming erasure correction code constructions.</p>https://thesis.library.caltech.edu/id/eprint/6497Compressed Sensing, Sparse Approximation, and Low-Rank Matrix Estimation
https://resolver.caltech.edu/CaltechTHESIS:02272011-233144146
Authors: {'items': [{'email': 'yanivplan@gmail.com', 'id': 'Plan-Yaniv', 'name': {'family': 'Plan', 'given': 'Yaniv'}, 'show_email': 'NO'}]}
Year: 2011
DOI: 10.7907/K8W9-RS71
<p>The importance of sparse signal structures has been recognized in a plethora of applications ranging from medical imaging to group disease testing to radar technology. It has been shown in practice that various signals of interest may be (approximately) sparsely modeled, and that sparse modeling is often beneficial, or even indispensable to signal recovery. Alongside an increase in applications, a rich theory of sparse and compressible signal recovery has recently been developed under the names compressed sensing (CS) and sparse approximation (SA). This revolutionary research has demonstrated that many signals can be recovered from severely undersampled measurements by taking advantage of their inherent low-dimensional structure. More recently, an offshoot of CS and SA has been a focus of research on other low-dimensional signal structures such as matrices of low rank. Low-rank matrix recovery (LRMR) is finding a rapidly growing array of important applications such as quantum state tomography, triangulation from incomplete distance measurements, recommender systems (e.g., the Netflix problem), and system identification and control.</p>
<p>In this dissertation, we examine CS, SA, and LRMR from a theoretical perspective. We consider a variety of different measurement and signal models, both random and deterministic, and mainly ask two questions.</p>
<p>How many measurements are necessary? How large is the recovery error?</p>
<p>We give theoretical lower bounds for both of these questions, including oracle and minimax lower bounds for the error. However, the main emphasis of the thesis is to demonstrate the efficacy of convex optimization---in particular l1 and nuclear-norm minimization based programs---in CS, SA, and LRMR. We derive upper bounds for the number of measurements required and the error achieved by convex optimization, which in many cases match the lower bounds up to constant or logarithmic factors. The majority of these results do not require the restricted isometry property (RIP), a ubiquitous condition in the literature.</p>https://thesis.library.caltech.edu/id/eprint/6259Simulation Capabilities for Challenging Medical Imaging and Treatment Planning Problems
https://resolver.caltech.edu/CaltechTHESIS:05272011-085600111
Authors: {'items': [{'email': 'beni@caltech.edu', 'id': 'Beni-Catherine-Elizabeth', 'name': {'family': 'Beni', 'given': 'Catherine Elizabeth'}, 'show_email': 'NO'}]}
Year: 2011
DOI: 10.7907/8PBA-RN43
Advanced numerical solvers and associated simulation tools, such as numerical algorithms based on novel spectral methods, efficient time-stepping and domain meshing techniques for solution of Partial Differential Equations (PDEs) (enabling, in particular, effective resolution of extremely steep boundary layers in short computing times), can have a significant impact in the design of medical procedures. In this thesis we present three recently introduced numerical algorithms for medical problems whose performance improves significantly over those of earlier counterparts, and which can thereby provide solutions to a range of challenging computational problems for planning and design of medical treatments.https://thesis.library.caltech.edu/id/eprint/6463Multiscale Modeling and Computation of 3D Incompressible Turbulent Flows
https://resolver.caltech.edu/CaltechTHESIS:05302012-081356007
Authors: {'items': [{'email': 'lanxin0106@gmail.com', 'id': 'Hu-Xin', 'name': {'family': 'Hu', 'given': 'Xin'}, 'show_email': 'YES'}]}
Year: 2012
DOI: 10.7907/K1RZ-1H07
<p>In the first part, we present a mathematical derivation of a closure relating the Reynolds stress to the mean strain rate for incompressible turbulent flows. This derivation is based on a systematic multiscale analysis that expresses the Reynolds stress in terms of the solutions of local periodic cell problems. We reveal an asymptotic structure of the Reynolds stress by invoking the frame invariant property of the cell problems and an iterative dynamic homogenization of large- and small-scale solutions. The Smagorinsky model for homogeneous turbulence is recovered as an example to illustrate our mathematical derivation. Another example is turbulent channel flow, where we derive a simplified turbulence model based on the asymptotic flow structure near the wall. Additionally, we obtain a nonlinear model by using a second order approximation of the inverse flow map function. This nonlinear model captures the effects of the backscatter of kinetic energy and dispersion and is consistent with other models, such as a mixed model that combines the Smagorinsky and gradient models, and the generic nonlinear model of Lund and Novikov.</p>
<p>Numerical simulation results at two Reynolds numbers using our simplified turbulence model are in good agreement with both experiments and direct numerical simulations in turbulent channel flow. However, due to experimental and modeling errors, we do observe some noticeable differences, e.g., in the root mean square velocity fluctuations at Re<sub>τ</sub> = 180.</p>
<p>In the second part, we present a new perspective on calculating fully developed turbulent flows using a data-driven stochastic method. General polynomial chaos (gPC) bases are obtained based on the mean velocity profile of turbulent channel flow in the offline part. The velocity fields are projected onto the subspace spanned by these gPC bases and a coupled system of equations is solved to compute the velocity components in the Karhunen-Loeve expansion in the online part. Our numerical results have shown that the data-driven stochastic method for fully developed turbulence offers decent approximations of statistical quantities with a coarse grid and a relatively small number of gPC basis elements.</p>https://thesis.library.caltech.edu/id/eprint/7093High-Order Integral Equation Methods for Diffraction Problems Involving Screens and Apertures
https://resolver.caltech.edu/CaltechTHESIS:06072012-004925615
Authors: {'items': [{'email': 'lintner@caltech.edu', 'id': 'Lintner-Stéphane-Karl', 'name': {'family': 'Lintner', 'given': 'Stéphane Karl'}, 'show_email': 'NO'}]}
Year: 2012
DOI: 10.7907/VP8P-DP74
This thesis presents a novel approach for the numerical solution of problems of diffraction by infinitely thin screens and apertures. The new methodology relies on a combination of weighted versions of the classical operators associated with the Dirichlet and Neumann open-surface problems. In the two-dimensional case, a rigorous proof is presented, establishing that the new weighted formulations give rise to second-kind Fredholm integral equations, thus providing a generalization to open surfaces of the classical closed-surface Calderon formulae. High-order quadrature rules are introduced for the new weighted operators, both in the two-dimensional case as well as the scalar three-dimensional case. Used in conjunction with Krylov subspace iterative methods, these rules give rise to efficient and accurate numerical solvers which produce highly accurate solutions in small numbers of iterations, and whose performance is comparable to that arising from efficient high-order integral solvers recently introduced for closed-surface problems. Numerical results are presented for a wide range of frequencies and a variety of geometries in two- and three-dimensional space, including complex resonating structures as well as, for the first time, accurate numerical solutions of classical diffraction problems considered by the 19th-century pioneers: diffraction of high-frequency waves by the infinitely thin disc, the circular aperture, and the two-hole geometry inherent in Young's experiment.
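For readers unfamiliar with second-kind Fredholm equations, a textbook Nyström discretization illustrates why such formulations are attractive: with a smooth periodic kernel, even simple trapezoid quadrature yields near machine-precision solutions. The kernel and right-hand side below are illustrative, unrelated to the weighted open-surface operators of the thesis:

```python
import numpy as np

# Nystrom discretization of  u(x) - (0.5/pi) * int_0^{2pi} cos(x - t) u(t) dt
# = 0.5 sin(x),  whose exact solution is u(x) = sin(x). The periodic trapezoid
# rule is spectrally accurate here, so the error is near machine precision.
n = 32
x = 2 * np.pi * np.arange(n) / n
w = 2 * np.pi / n                          # trapezoid weight (periodic rule)
K = np.cos(x[:, None] - x[None, :])        # smooth kernel on the nodes
A = np.eye(n) - (0.5 / np.pi) * w * K      # second-kind structure: I - lambda*K
u = np.linalg.solve(A, 0.5 * np.sin(x))
err = np.abs(u - np.sin(x)).max()
print(err)   # near machine precision
```

The identity-plus-compact structure of second-kind equations is also what keeps the linear systems well conditioned, so Krylov iterations converge in few steps.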
https://thesis.library.caltech.edu/id/eprint/7143Elliptic Combinatorics and Markov Processes
https://resolver.caltech.edu/CaltechTHESIS:05312012-201348939
Authors: {'items': [{'email': 'dan.betea@gmail.com', 'id': 'Betea-Dan-Dumitru', 'name': {'family': 'Betea', 'given': 'Dan Dumitru'}, 'show_email': 'NO'}]}
Year: 2012
DOI: 10.7907/MMZG-5G61
We present combinatorial and probabilistic interpretations of recent results in the theory of elliptic special functions (due to, among many others, Frenkel, Turaev, Spiridonov, and Zhedanov in the case of univariate functions, and Rains in the multivariate case). We focus on elliptically distributed random lozenge tilings of the hexagon which we analyze from several perspectives. We compute the N-point function for the associated process, and show the process as a whole is determinantal with correlation kernel given by elliptic biorthogonal functions. We furthermore compute transition probabilities for the Markov processes involved and show they come from the multivariate elliptic difference operators of Rains. Properties of difference operators yield an efficient sampling algorithm for such random lozenge tilings. Simulations of said algorithm lead to new arctic circle behavior. Finally we introduce elliptic Schur processes on bounded partitions analogous to the Schur process of Reshetikhin and Okounkov (and to the Macdonald processes of Vuletic, Borodin, and Corwin). These give a somewhat different (and faster) sampling algorithm from these elliptic distributions, but in principle should encompass more than just tilings of a hexagon.https://thesis.library.caltech.edu/id/eprint/7115Geometric Descriptions of Couplings in Fluids and Circuits
https://resolver.caltech.edu/CaltechTHESIS:04302012-142612208
Authors: {'items': [{'email': 'hoj201@gmail.com', 'id': 'Jacobs-Henry-Ochi', 'name': {'family': 'Jacobs', 'given': 'Henry Ochi'}, 'show_email': 'NO'}]}
Year: 2012
DOI: 10.7907/AZXE-PH33
<p>Geometric mechanics is often commended for its breadth (e.g., fluids, circuits, controls) and depth (e.g., identification of stability criteria, controllability criteria, conservation laws). However, on the interface between disciplines it is commonplace for the analysis previously done on each discipline in isolation to break down. For example, when a solid is immersed in a fluid, the particle relabeling symmetry is broken because particles in the fluid behave differently from particles in the solid. This breaks conservation laws, and even changes the configuration manifolds. A second example is that of the interconnection of circuits. It has been verified that LC-circuits satisfy a variational principle. However, when two circuits are soldered together this variational principle must transform to accommodate the interconnection.</p>
<p>Motivated by these difficulties, this thesis analyzes the following couplings: fluid-particle, fluid-structure, and circuit-circuit. For the case of fluid-particle interactions we understand the system as a Lagrangian system evolving on a Lagrange-Poincare bundle. We leverage this interpretation to propose a class of particle methods by "ignoring" the vertical Lagrange-Poincare equation. In a similar vein, we can analyze fluids interacting with a rigid body. We then generalize this analysis to view fluid-structure problems as Lagrangian systems on a Lie algebroid. The simplicity of the reduction process for Lie algebroids allows us to propose a mechanism in which swimming corresponds to a limit-cycle in a reduced Lie algebroid. In the final section we change gears and understand non-energetic interconnection as Dirac structures. In particular we find that any (linear) non-energetic interconnection is equivalent to some Dirac structure. We then explore what this insight has to say about variational principles, using interconnection of LC-circuits as a guiding example.</p>https://thesis.library.caltech.edu/id/eprint/6991Adaptive Methods Exploring Intrinsic Sparse Structures of Stochastic Partial Differential Equations
https://resolver.caltech.edu/CaltechTHESIS:09182012-175436855
Authors: {'items': [{'email': 'mulin.cheng@gmail.com', 'id': 'Cheng-Mulin', 'name': {'family': 'Cheng', 'given': 'Mulin'}, 'show_email': 'YES'}]}
Year: 2013
DOI: 10.7907/V638-V403
Many physical and engineering problems involving uncertainty enjoy certain low-dimensional structures, e.g., in the sense of Karhunen-Loeve expansions (KLEs), which in turn indicate the existence of reduced-order models and better formulations for efficient numerical simulations. In this thesis, we target a class of time-dependent stochastic partial differential equations whose solutions enjoy such structures at any time and propose a new methodology (DyBO) to derive equivalent systems whose solutions closely follow KL expansions of the original stochastic solutions. KL expansions are known to be the most compact representations of stochastic processes in an L<sup>2</sup> sense. Our methods explore such sparsity and offer great computational benefits compared to other popular generic methods, such as the traditional Monte Carlo (MC) method, the generalized Polynomial Chaos (gPC) method, and the generalized Stochastic Collocation (gSC) method. Such benefits are demonstrated through various numerical examples ranging from spatially one-dimensional examples, such as stochastic Burgers' equations and stochastic transport equations, to spatially two-dimensional examples, such as stochastic flows in a 2D unit square. Parallelization is also discussed, aiming toward future industrial-scale applications. In addition to numerical examples, theoretical aspects of DyBO are also carefully analyzed, such as preservation of bi-orthogonality, error propagation, and computational complexity. Based on theoretical analysis, strategies are proposed to overcome difficulties in numerical implementations, such as eigenvalue crossing and adaptively adding or removing mode pairs. The effectiveness of the proposed strategies is numerically verified. Generalization to a system of SPDEs is considered as well in the thesis, and its success is demonstrated by applications to stochastic Boussinesq convection problems.
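The kind of low-dimensional structure a KL expansion exposes can be seen in a few lines: an empirical KL (eigendecomposition of the sample covariance) of Brownian-motion paths concentrates almost all of the L^2 energy in a handful of modes. The process and sizes here are arbitrary illustrations, not DyBO itself:

```python
import numpy as np

# Empirical Karhunen-Loeve expansion of Brownian-motion sample paths: the
# eigenvectors of the sample covariance are the KL modes, and a handful of
# modes captures almost all of the L^2 energy.
rng = np.random.default_rng(1)
n_t, n_samples = 200, 2000
dW = rng.standard_normal((n_samples, n_t)) / np.sqrt(n_t)
W = np.cumsum(dW, axis=1)                       # Brownian paths on [0, 1]

Wc = W - W.mean(axis=0)
C = Wc.T @ Wc / n_samples                       # sample covariance matrix
evals = np.linalg.eigvalsh(C)[::-1]             # eigenvalues, descending
energy = np.cumsum(evals) / evals.sum()
print(energy[9])   # ~0.98: ten KL modes carry almost all of the variance
```

Evolving only a few such mode pairs in time, rather than the full random field, is the source of the computational savings described above.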
Other generalizations, such as a generalized stochastic collocation formulation of the DyBO method, are also discussed.https://thesis.library.caltech.edu/id/eprint/7207Topics in Randomized Numerical Linear Algebra
https://resolver.caltech.edu/CaltechTHESIS:06102013-100609092
Authors: {'items': [{'email': 'swiftset@gmail.com', 'id': 'Gittens-Alex-A', 'name': {'family': 'Gittens', 'given': 'Alex A.'}, 'show_email': 'NO'}]}
Year: 2013
DOI: 10.7907/3K1S-R458
<p>This thesis studies three classes of randomized numerical linear algebra algorithms, namely: (i) randomized matrix sparsification algorithms, (ii) low-rank approximation algorithms that use randomized unitary transformations, and (iii) low-rank approximation algorithms for positive-semidefinite (PSD) matrices. </p>
<p>Randomized matrix sparsification algorithms set randomly chosen entries of the input matrix to zero. When the approximant is substituted for the original matrix in computations, its sparsity allows one to employ faster sparsity-exploiting algorithms. This thesis contributes bounds on the approximation error of nonuniform randomized sparsification schemes, measured in the spectral norm and two NP-hard norms that are of interest in computational graph theory and subset selection applications.</p>
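A uniform-probability variant of such a sparsification scheme can be written in a few lines (the thesis studies nonuniform schemes; this simplified sketch only shows the unbiased rescaling and the resulting spectral-norm error):

```python
import numpy as np

# Uniform randomized sparsification: keep each entry independently with
# probability p, rescaled by 1/p so the sparsifier is unbiased (E[S] = A).
rng = np.random.default_rng(2)
n, p = 300, 0.3
A = np.ones((n, n))                  # matrix with a large leading singular value
mask = rng.random((n, n)) < p
S = np.where(mask, A / p, 0.0)       # ~70% of entries are exactly zero

err = np.linalg.norm(S - A, 2) / np.linalg.norm(A, 2)
print(mask.mean(), err)              # ~0.3 kept; modest relative spectral error
```

Roughly 70% of the entries are zeroed, yet the spectral-norm perturbation remains a small fraction of the matrix norm, which is what makes the sparse surrogate usable downstream.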
<p> Low-rank approximations based on randomized unitary transformations have several desirable properties: they have low communication costs, are amenable to parallel implementation, and exploit the existence of fast transform algorithms. This thesis investigates the tradeoff between the accuracy and cost of generating such approximations. State-of-the-art spectral and Frobenius-norm error bounds are provided. </p>
<p> The last class of algorithms considered comprises "sketching" algorithms for PSD matrices. Such sketches can be computed faster than approximations based on projecting onto mixtures of the columns of the matrix. The performance of several such sketching schemes is empirically evaluated using a suite of canonical matrices drawn from machine learning and data analysis applications, and a framework is developed for establishing theoretical error bounds. </p>
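One member of this family of sketches is the classical Nyström approximation, sketched below with uniformly sampled columns (the sampling scheme and the matrix are assumptions for illustration; the thesis evaluates several schemes): when the sketch size exceeds the true rank, the approximation is exact up to roundoff.

```python
import numpy as np

# Nystrom-type sketch of a PSD matrix from a subset of its columns.
rng = np.random.default_rng(2)
n, k, r = 500, 20, 60            # size, true rank, sketch size (assumed)

G = rng.standard_normal((n, k))
A = G @ G.T                      # exactly rank-k PSD matrix

idx = rng.choice(n, size=r, replace=False)
C = A[:, idx]                    # sampled columns
W = A[np.ix_(idx, idx)]          # intersection block
A_nys = C @ np.linalg.pinv(W) @ C.T

err = np.linalg.norm(A - A_nys) / np.linalg.norm(A)
print(f"relative Frobenius error of the Nystrom sketch: {err:.2e}")
```

Only r columns of A are touched, which is what makes such sketches cheaper than projection-based low-rank approximations.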
<p> In addition to studying these algorithms, this thesis extends the Matrix Laplace Transform framework to derive Chernoff and Bernstein inequalities that apply to all the eigenvalues of certain classes of random matrices. These inequalities are used to investigate the behavior of the singular values of a matrix under random sampling, and to derive convergence rates for each individual eigenvalue of a sample covariance matrix.</p>https://thesis.library.caltech.edu/id/eprint/7880GPU-Accelerated Fourier-Continuation Solvers and Physically Exact Computational Boundary Conditions for Wave Scattering Problems
https://resolver.caltech.edu/CaltechTHESIS:07092012-144406693
Authors: {'items': [{'email': 'elling@gmail.com', 'id': 'Elling-Timothy-James', 'name': {'family': 'Elling', 'given': 'Timothy James'}, 'show_email': 'NO'}]}
Year: 2013
DOI: 10.7907/A5ZM-NK18
<p>Many important engineering problems, ranging from antenna design to seismic imaging, require the numerical solution of problems of time-domain propagation and scattering of acoustic, electromagnetic, elastic waves, etc. These problems present several key difficulties, including numerical dispersion, the need for computational boundary conditions, and the extensive computational cost that arises from the extremely large number of unknowns that are often required for adequate spatial resolution of the underlying three-dimensional space. In this thesis a new class of numerical methods is developed. Based on the recently introduced Fourier continuation (FC) methodology (which eliminates the Gibbs phenomenon and thus facilitates accurate Fourier expansion of nonperiodic functions), these new methods enable fast spectral solution of wave propagation problems in the time domain. In particular, unlike finite difference or finite element approaches, these methods are very nearly dispersionless---a highly desirable property indeed, which guarantees that fixed numbers of points per wavelength suffice to solve problems of arbitrarily large extent. This thesis further puts forth the mathematical and algorithmic elements necessary to produce highly scalable implementations of these algorithms in challenging parallel computing environments---such as those arising in GPU architectures---while preserving their useful properties regarding convergence and dispersion.</p>
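The Gibbs phenomenon that Fourier continuation eliminates can be seen in miniature below (this illustrates only the motivating problem, not the FC algorithm): FFT-based spectral differentiation is accurate to machine precision for periodic data, but fails badly on a nonperiodic function sampled on the same grid.

```python
import numpy as np

# Spectral (FFT) differentiation on [0, 2*pi).
n = 256
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=1.0 / n) * 1j      # i * integer wavenumbers

def fft_derivative(f):
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

# Periodic case: essentially exact.
err_periodic = np.max(np.abs(fft_derivative(np.sin(3 * x)) - 3 * np.cos(3 * x)))

# Nonperiodic case: f(x) = x has a jump at the domain edge, so the FFT
# derivative oscillates wildly instead of returning 1 (Gibbs phenomenon).
err_nonperiodic = np.max(np.abs(fft_derivative(x) - 1.0))

print(err_periodic, err_nonperiodic)
```

FC recovers the periodic-case behavior for nonperiodic data by constructing a smooth periodic extension before applying the FFT, which is what makes the solvers described here nearly dispersionless.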
<p>Additionally, this thesis develops a fast method for evaluation of computational boundary conditions which is based on Kirchhoff's integral formula in conjunction with the FC methodology and an accelerated equivalent source integration method introduced recently for solution of integral equation problems. The combination of these ideas gives rise to a physically exact radiating boundary condition that is nonlocal but fast. The only known alternatives that provide all three of these features are only applicable to a highly restrictive class of domains such as spheres or cylinders, whereas the Kirchhoff-based approach considered here only requires a bounded domain with nonvanishing thickness. As is the case with the FC scattering solvers mentioned above, the boundary-conditions algorithm is modified into a formulation that admits efficient implementation in GPU and other parallel infrastructures.</p>
<p>Finally, this thesis illustrates the character of the newly developed algorithms, in both GPU and parallel CPU infrastructures, with a variety of numerical examples. In particular, it is shown that the GPU implementations result in thirty- to fiftyfold speedups over the corresponding single CPU implementations. An extension of the boundary-condition algorithm, further, is demonstrated, which enables for propagation of time-domain solutions over arbitrarily large spans of empty space at essentially null computational cost. Finally, a hybridization of the FC and boundary condition algorithm is presented, which is also part of this thesis work, and which provides an interface of the newly developed algorithms with legacy finite-element representations of geometries and engineering structures. Thus, combining spectral and classical PDE solvers and propagation methods with novel GPU and parallel CPU implementations, this thesis demonstrates a computational capability that enables solution, in novel computational architectures, of some of the most challenging problems in the broad field of computational wave propagation and scattering.</p>
https://thesis.library.caltech.edu/id/eprint/7174Geometric Integration Applied to Moving Mesh Methods and Degenerate Lagrangians
https://resolver.caltech.edu/CaltechTHESIS:12042013-185815472
Authors: {'items': [{'email': 'tomasz.tyranowski@gmail.com', 'id': 'Tyranowski-Tomasz-Michal', 'name': {'family': 'Tyranowski', 'given': 'Tomasz Michal'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/PH3X-YH23
<p>Moving mesh methods (also called r-adaptive methods) are space-adaptive strategies used for the numerical simulation of time-dependent partial differential equations. These methods keep the total number of mesh points fixed during the simulation, but redistribute them over time to follow the areas where a higher mesh point density is required. Only a limited number of moving mesh methods have been designed for solving field-theoretic partial differential equations, and the numerical analysis of the resulting schemes is challenging. In this thesis we present two ways to construct r-adaptive variational and multisymplectic integrators for (1+1)-dimensional Lagrangian field theories. The first method uses a variational discretization of the physical equations and the mesh equations are then coupled in a way typical of the existing r-adaptive schemes. The second method treats the mesh points as pseudo-particles and incorporates their dynamics directly into the variational principle. A user-specified adaptation strategy is then enforced through Lagrange multipliers as a constraint on the dynamics of both the physical field and the mesh points. We discuss the advantages and limitations of our methods. The proposed methods are readily applicable to (weakly) non-degenerate field theories---numerical results for the sine-Gordon equation are presented.</p>
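The redistribution idea underlying r-adaptivity, in its simplest static form (a generic equidistribution sketch with an assumed monitor function, not one of the thesis's variational integrators): place a fixed number of points so that a monitor function m(x), large where resolution is needed, has equal integral between neighboring mesh points.

```python
import numpy as np

# Equidistribute a monitor function peaked at x = 0 over n mesh points.
n = 41
xs = np.linspace(-1.0, 1.0, 2001)                 # fine background grid
monitor = 1.0 + 50.0 * np.exp(-100.0 * xs**2)     # large near x = 0

# Cumulative "mass" M(x) of the monitor (trapezoid rule), then invert it.
M = np.concatenate([[0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1])
                                     * np.diff(xs))])
targets = np.linspace(0.0, M[-1], n)              # equal mass per interval
mesh = np.interp(targets, M, xs)

# The adapted mesh clusters points where the monitor is large.
spacing_center = np.diff(mesh)[n // 2 - 1]
spacing_edge = np.diff(mesh)[0]
print(spacing_center, spacing_edge)
```

In a moving mesh method this redistribution is re-enforced as the solution evolves; the thesis's second approach builds exactly such an adaptation constraint into the variational principle via Lagrange multipliers.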
<p>In an attempt to extend our approach to degenerate field theories, in the last part of this thesis we construct higher-order variational integrators for a class of degenerate systems described by Lagrangians that are linear in velocities. We analyze the geometry underlying such systems and develop the appropriate theory for variational integration. Our main observation is that the evolution takes place on the primary constraint and the 'Hamiltonian' equations of motion can be formulated as an index 1 differential-algebraic system. We then proceed to construct variational Runge-Kutta methods and analyze their properties. The general properties of Runge-Kutta methods depend on the 'velocity' part of the Lagrangian. If the 'velocity' part is also linear in the position coordinate, then we show that non-partitioned variational Runge-Kutta methods are equivalent to integration of the corresponding first-order Euler-Lagrange equations, which have the form of a Poisson system with a constant structure matrix, and the classical properties of the Runge-Kutta method are retained. If the 'velocity' part is nonlinear in the position coordinate, we observe a reduction of the order of convergence, which is typical of numerical integration of DAEs. We also apply our methods to several models and present the results of our numerical experiments.</p>https://thesis.library.caltech.edu/id/eprint/8038Sparse Time-Frequency Data Analysis: A Multi-Scale Approach
https://resolver.caltech.edu/CaltechTHESIS:05152014-141711934
Authors: {'items': [{'email': 'tavallali@gmail.com', 'id': 'Tavallali-Peyman', 'name': {'family': 'Tavallali', 'given': 'Peyman'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/Z9TT4NXD
In this work, we further extend the recently developed adaptive data analysis method, the Sparse Time-Frequency Representation (STFR) method. This method is based on the assumption that many physical signals inherently contain AM-FM representations. We propose a sparse optimization method to extract the AM-FM representations of such signals. We prove the convergence of the method for periodic signals under certain assumptions and provide practical algorithms specifically for the non-periodic STFR, which extends the method to problems that earlier STFR methods could not handle, including robustness to noise and non-periodic data analysis. This is a significant improvement, since many adaptive and non-adaptive signal processing methods are not fully capable of handling non-periodic signals. Moreover, we propose a new STFR algorithm to study intrawave signals with strong frequency modulation and analyze the convergence of this new algorithm for periodic signals. Such signals have previously remained a bottleneck for all signal processing methods. Furthermore, we propose a modified version of STFR that facilitates the extraction of intrawaves that have overlapping frequency content. We show that the STFR methods can be applied to the realm of dynamical systems and cardiovascular signals. In particular, we present a simplified and modified version of the STFR algorithm that is potentially useful for the diagnosis of some cardiovascular diseases. We further explain some preliminary work on the nature of Intrinsic Mode Functions (IMFs) and how they can have different representations in different phase coordinates. This analysis shows that the uncertainty principle is fundamental to all oscillating signals.https://thesis.library.caltech.edu/id/eprint/8236Multiscale Model Reduction Methods for Deterministic and Stochastic Partial Differential Equations
https://resolver.caltech.edu/CaltechTHESIS:03312014-014047677
Authors: {'items': [{'email': 'cimaolin@gmail.com', 'id': 'Ci-Maolin', 'name': {'family': 'Ci', 'given': 'Maolin'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/06ND-CY07
<p>Partial differential equations (PDEs) with multiscale coefficients are very difficult to solve due to the wide range of scales in the solutions. In the thesis, we propose some efficient numerical methods for both deterministic and stochastic PDEs based on the model reduction technique.</p>
<p>For the deterministic PDEs, the main purpose of our method is to derive an effective equation for the multiscale problem. An essential ingredient is to decompose the harmonic coordinate into a smooth part and a highly oscillatory part of which the magnitude is small. Such a decomposition plays a key role in our construction of the effective equation. We show that the solution to the effective equation is smooth, and could be resolved on a regular coarse mesh grid. Furthermore, we provide error analysis and show that the solution to the effective equation plus a correction term is close to the original multiscale solution.</p>
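The effective-equation idea can be seen in the classical 1D setting (a textbook homogenization sketch, not the thesis's harmonic-coordinate decomposition): for -(a(x/&epsilon;) u')' = f, the correct effective coefficient is the harmonic mean of a, and the effective solution tracks the oscillatory one up to O(&epsilon;).

```python
import numpy as np

# Finite-difference solve of -(a u')' = f on (0,1) with u(0) = u(1) = 0.
def solve(a_mid, f, h):
    # a_mid holds the coefficient at cell midpoints; unknowns are interior.
    main = (a_mid[:-1] + a_mid[1:]) / h**2
    off = -a_mid[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

eps = 0.02
N = 1000                            # fine grid resolving the eps-scale
h = 1.0 / N
xm = (np.arange(N) + 0.5) * h       # cell midpoints
a = 1.0 / (2.0 + np.cos(2 * np.pi * xm / eps))   # oscillatory coefficient
f = np.ones(N - 1)

u_fine = solve(a, f, h)             # resolves all the small scales

a_harm = 1.0 / np.mean(1.0 / a)     # harmonic mean -> effective coefficient
u_eff = solve(np.full(N, a_harm), f, h)   # smooth effective equation

err = np.max(np.abs(u_fine - u_eff)) / np.max(np.abs(u_fine))
print(f"relative error of the effective (homogenized) solution: {err:.3f}")
```

The effective solution is smooth and could equally well have been computed on a coarse grid, which is precisely the computational payoff of an effective equation.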
<p>For the stochastic PDEs, we propose a model-reduction-based data-driven stochastic method and a multilevel Monte Carlo method. In the multi-query setting, and under the assumption that the ratio of the smallest scale to the largest scale is not too small, we propose the multiscale data-driven stochastic method. We construct a data-driven stochastic basis and solve the coupled deterministic PDEs to obtain the solutions. For more challenging problems, we propose the multiscale multilevel Monte Carlo method. We apply the multilevel scheme to the effective equations and assemble the stiffness matrices efficiently on each coarse mesh grid. In both methods, the Karhunen-Loève (KL) expansion plays an important role in extracting the main parts of some stochastic quantities.</p>
<p>For both the deterministic and stochastic PDEs, numerical results are presented to demonstrate the accuracy and robustness of the methods. We also show the computational time cost reduction in the numerical examples.</p>https://thesis.library.caltech.edu/id/eprint/8174Geometric Discretization through Primal-Dual Meshes
https://resolver.caltech.edu/CaltechTHESIS:05222014-134831171
Authors: {'items': [{'email': 'fernando.goes@gmail.com', 'id': 'Ferrari-de-Goes-Fernando-Ferrari', 'name': {'family': 'Ferrari de Goes', 'given': 'Fernando'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/32CA-7376
This thesis introduces new tools for geometric discretization in computer graphics and computational physics. Our work builds upon the duality between weighted triangulations and power diagrams to provide concise, yet expressive discretization of manifolds and differential operators. Our exposition begins with a review of the construction of power diagrams, followed by novel optimization procedures to fully control the local volume and spatial distribution of power cells. Based on this power diagram framework, we develop a new family of discrete differential operators, an effective stippling algorithm, as well as a new fluid solver for Lagrangian particles. We then turn our attention to applications in geometry processing. We show that orthogonal primal-dual meshes augment the notion of local metric in non-flat discrete surfaces. In particular, we introduce a reduced set of coordinates for the construction of orthogonal primal-dual structures of arbitrary topology, and provide alternative metric characterizations through convex optimizations. We finally leverage these novel theoretical contributions to generate well-centered primal-dual meshes, sphere packing on surfaces, and self-supporting triangulations.https://thesis.library.caltech.edu/id/eprint/8258Random Propagation in Complex Systems: Nonlinear Matrix Recursions and Epidemic Spread
https://resolver.caltech.edu/CaltechTHESIS:05232014-172754261
Authors: {'items': [{'email': 'ctznahj@gmail.com', 'id': 'Ahn-Hyoung-Jun', 'name': {'family': 'Ahn', 'given': 'Hyoung Jun'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/MC7M-EE22
This dissertation studies the long-term behavior of random Riccati recursions and a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random, independent and identically distributed according to a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution in which all the nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to its stationary distribution, this model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal probabilities of infection of each node in the network at that time. Convergence to the origin of the epidemic map implies the extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. The nonlinear model has a second fixed point when the linear upper bound is unstable. We carry out stability analysis of the second fixed point for both discrete-time and continuous-time models.
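The linear-upper-bound argument can be sketched with one common SIS-type mean-field map (the specific map, graph, and rates below are assumptions for illustration; the thesis treats such models and their Markov chain counterparts rigorously): the map p(t+1) = (1 - &delta;)p(t) + &beta;(1 - p(t)) &#8728; (Ap(t)) is bounded elementwise by its linearization M = (1 - &delta;)I + &beta;A at the origin, so &rho;(M) < 1 forces extinction.

```python
import numpy as np

# SIS-type mean-field epidemic map on a random undirected graph.
rng = np.random.default_rng(3)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops

delta, beta = 0.4, 0.03                       # recovery / infection rates
M = (1 - delta) * np.eye(n) + beta * A        # linearization at the origin
rho = max(abs(np.linalg.eigvals(M)))

p = np.full(n, 0.5)                           # initial infection probabilities
for _ in range(1000):
    p = (1 - delta) * p + beta * (1 - p) * (A @ p)

print(f"rho(M) = {rho:.3f}, max marginal infection prob = {p.max():.2e}")
```

With these rates the linear bound is stable, and the marginal infection probabilities indeed collapse to the origin, the disease-free fixed point.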
Returning to the Markov chain model, we show that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.https://thesis.library.caltech.edu/id/eprint/8391A New High-Order Fourier Continuation-Based Elasticity Solver for Complex Three-Dimensional Geometries
https://resolver.caltech.edu/CaltechTHESIS:10082013-093825165
Authors: {'items': [{'email': 'fpamlani@outlook.com', 'id': 'Amlani-Faisal', 'name': {'family': 'Amlani', 'given': 'Faisal'}, 'show_email': 'YES'}]}
Year: 2014
DOI: 10.7907/V9DQ-P103
This thesis presents a new approach for the numerical solution of three-dimensional problems in elastodynamics. The new methodology, which is based on a recently introduced Fourier continuation (FC) algorithm for the solution of Partial Differential Equations on the basis of accurate Fourier expansions of possibly non-periodic functions, enables fast, high-order solutions of the time-dependent elastic wave equation in a nearly dispersionless manner, and it requires use of CFL constraints that scale only linearly with spatial discretizations. A new FC operator is introduced to treat Neumann and traction boundary conditions, and a block-decomposed (sub-patch) overset strategy is presented for implementation of general, complex geometries in distributed-memory parallel computing environments. Our treatment of the elastic wave equation, which is formulated as a complex system of variable-coefficient PDEs that includes possibly heterogeneous and spatially varying material constants, represents the first fully-realized three-dimensional extension of FC-based solvers to date. Challenges for three-dimensional elastodynamics simulations such as treatment of corners and edges in three-dimensional geometries, the existence of variable coefficients arising from physical configurations and/or use of curvilinear coordinate systems and treatment of boundary conditions, are all addressed. The broad applicability of our new FC elasticity solver is demonstrated through application to realistic problems concerning seismic wave motion on three-dimensional topographies as well as applications to non-destructive evaluation where, for the first time, we present three-dimensional simulations for comparison to experimental studies of guided-wave scattering by through-thickness holes in thin plates.https://thesis.library.caltech.edu/id/eprint/7974Geometric Elasticity for Graphics, Simulation, and Computation
https://resolver.caltech.edu/CaltechTHESIS:12052013-121547860
Authors: {'items': [{'email': 'patrick.sanan@gmail.com', 'id': 'Sanan-Patrick-David', 'name': {'family': 'Sanan', 'given': 'Patrick David'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/DF7X-F354
We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivative, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy. https://thesis.library.caltech.edu/id/eprint/8039Optimal Uncertainty Quantification via Convex Optimization and Relaxation
https://resolver.caltech.edu/CaltechTHESIS:10162013-111333269
Authors: {'items': [{'email': 'hanshuo99@gmail.com', 'id': 'Han-Shuo', 'name': {'family': 'Han', 'given': 'Shuo'}, 'show_email': 'NO'}]}
Year: 2014
DOI: 10.7907/X00K-T615
<p>Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.</p>
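A toy OUQ problem makes the "best bound given partial information" idea concrete (the problem instance and the brute-force search are assumptions for illustration): bound P(X &#8805; a) over all distributions on [0, 1] with known mean m. Reduction theorems for such moment problems say the optimum is attained at a distribution supported on at most two points, so a grid search over two-point distributions suffices; the tight answer here is Markov's bound m/a.

```python
import numpy as np

# Maximize P(X >= a) over distributions on [0, 1] with mean m,
# searching only over two-point distributions (the extreme points).
m, a = 0.2, 0.5
best = 0.0
grid = np.linspace(0.0, 1.0, 201)
for x1 in grid:
    for x2 in grid:
        if abs(x2 - x1) < 1e-12:
            continue
        w = (m - x1) / (x2 - x1)        # weight on x2 forcing mean m
        if 0.0 <= w <= 1.0:
            prob = w * (x2 >= a) + (1 - w) * (x1 >= a)
            best = max(best, prob)

print(f"best two-point bound: {best:.4f}  (Markov: {m / a:.4f})")
```

The optimizer puts mass m/a exactly at the threshold a and the rest at 0; incorporating further information (variance, independence, etc.) shrinks the feasible set and tightens the bound, which is the OUQ program in miniature.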
<p>This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.</p>
<p>When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.</p>https://thesis.library.caltech.edu/id/eprint/7991General-Domain Compressible Navier-Stokes Solvers Exhibiting Quasi-Unconditional Stability and High Order Accuracy in Space and Time
https://resolver.caltech.edu/CaltechTHESIS:05082015-184801592
Authors: {'items': [{'email': 'max.cubillos@gmail.com', 'id': 'Cubillos-Moraga-Max-Anton', 'name': {'family': 'Cubillos-Moraga', 'given': 'Max Anton'}, 'show_email': 'NO'}]}
Year: 2015
DOI: 10.7907/Z9WW7FKW
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places on a solid theoretical basis the observed quasi-unconditional stability of the methods of orders two through six. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at Reynolds number equal to one million and Mach number 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall of the order of one hundred-thousandth the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. 
As further demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.https://thesis.library.caltech.edu/id/eprint/8851Full and Model-Reduced Structure-Preserving Simulation of Incompressible Fluids
https://resolver.caltech.edu/CaltechTHESIS:05312015-134909133
Authors: {'items': [{'email': 'gemmaellen@gmail.com', 'id': 'Mason-Gemma-Ellen', 'name': {'family': 'Mason', 'given': 'Gemma Ellen'}, 'show_email': 'NO'}]}
Year: 2015
DOI: 10.7907/Z9KK98QG
<p>This thesis outlines the construction of several types of structured integrators for incompressible fluids. We first present a vorticity integrator, which is the Hamiltonian counterpart of the existing Lagrangian-based fluid integrator. We next present a model-reduced variational Eulerian integrator for incompressible fluids, which combines the efficiency gains of dimension reduction, the qualitative robustness to coarse spatial and temporal resolutions of geometric integrators, and the simplicity of homogenized boundary conditions on regular grids to deal with arbitrarily-shaped domains with sub-grid accuracy.</p>
<p>Both these numerical methods involve approximating the Lie group of volume-preserving diffeomorphisms by a finite-dimensional Lie group and then restricting the resulting variational principle by means of a non-holonomic constraint. Advantages and limitations of this discretization method will be outlined. It will be seen that these derivation techniques are unable to yield symplectic integrators, but that energy conservation is easily obtained, as is a discretized version of Kelvin's circulation theorem.</p>
<p>Finally, we outline the basis of a spectral discrete exterior calculus, which may be a useful element in producing structured numerical methods for fluids in the future.</p>https://thesis.library.caltech.edu/id/eprint/8948Compressing Positive Semidefinite Operators with Sparse/Localized Bases
https://resolver.caltech.edu/CaltechTHESIS:05312017-000636495
Authors: {'items': [{'email': 'zhangjiahuah@gmail.com', 'id': 'Zhang-Pengchuan', 'name': {'family': 'Zhang', 'given': 'Pengchuan'}, 'orcid': '0000-0003-1155-9507', 'show_email': 'NO'}]}
Year: 2017
DOI: 10.7907/Z91N7Z5J
<p>Given a positive semidefinite (PSD) operator, such as a PSD matrix, an elliptic operator with rough coefficients, a covariance operator of a random field, or the Hamiltonian of a quantum system, we would like to find its best finite rank approximation with a given rank. One way to achieve this objective is to project the operator to its eigenspace that corresponds to the smallest or largest eigenvalues, depending on the setting. The eigenfunctions are typically global, i.e. nonzero almost everywhere, but our interest is to find the sparsest or most localized bases for these subspaces. The sparse/localized basis functions lead to better physical interpretation and preserve some sparsity structure in the original operator. Moreover, sparse/localized basis functions also enable us to develop more efficient numerical algorithms to solve these problems.</p>
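The baseline this compares against is the eigenspace projection itself, sketched below on an assumed generic PSD matrix: by the Eckart-Young theorem, projecting onto the dominant eigenspace gives the best rank-k approximation, but its eigenvectors are dense and global, which is exactly the shortcoming the sparse/localized bases address.

```python
import numpy as np

# Best rank-k approximation of a PSD matrix via its dominant eigenspace.
rng = np.random.default_rng(4)
n, k = 200, 10
G = rng.standard_normal((n, n))
A = G @ G.T / n                      # a generic PSD matrix

vals, vecs = np.linalg.eigh(A)       # eigenvalues in ascending order
Vk = vecs[:, -k:]                    # dominant eigenspace (dense, global)
A_k = Vk @ np.diag(vals[-k:]) @ Vk.T

# Eckart-Young: the spectral-norm error equals the (k+1)-st largest
# eigenvalue, and no rank-k approximation can do better.
err = np.linalg.norm(A - A_k, 2)
print(err, vals[-(k + 1)])
```

The thesis's question is then: how sparse or localized can a basis for (nearly) this same subspace be made, without giving up the approximation rate?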
<p>In this thesis, we present two methods for this purpose, namely the sparse operator compression (Sparse OC) and the intrinsic sparse mode decomposition (ISMD). The Sparse OC is a general strategy to construct finite rank approximations to PSD operators, for which the range space of the finite rank approximation is spanned by a set of sparse/localized basis functions. The basis functions are energy minimizing functions on local patches. When applied to approximate the solution operator of elliptic operators with rough coefficients and various homogeneous boundary conditions, the Sparse OC achieves the optimal convergence rate with nearly optimally localized basis functions. Our localized basis functions can be used as multiscale basis functions to solve elliptic equations with multiscale coefficients and provide the optimal convergence rate <i>O</i>(<i>h</i><sup>k</sup>) for 2<i>k</i>'th order elliptic problems in the energy norm. From the perspective of operator compression, these localized basis functions provide an efficient and optimal way to approximate the principal eigenspace of the elliptic operators. From the perspective of sparse PCA, we can approximate a large set of covariance functions by a rank-<i>n</i> operator with a localized basis and with the optimal accuracy. While the Sparse OC works well on the solution operator of elliptic operators, we also propose the ISMD that works well on low-rank or nearly low-rank PSD operators. Given a rank-<i>n</i> PSD operator, say an <i>N</i>-by-<i>N</i> PSD matrix <i>A</i> (<i>n</i> ≤ <i>N</i>), the ISMD <i>decomposes</i> it into <i>n</i> rank-one matrices Σ<sup><i>n</i></sup><sub><i>i=1</i></sub><i>g</i><sub><i>i</i></sub><i>g</i><sup><i>T</i></sup><sub><i>i</i></sub> where the modes {<i>g</i><sub><i>i</i></sub>}<sup><i>n</i></sup><sub><i>i=1</i></sub> are required to be as sparse as possible.
Under the regular-sparse assumption (see Definition 1.3.2), we have proved that the ISMD gives the optimal patchwise sparse decomposition, and is stable to small perturbations in the matrix to be decomposed. We provide several applications in both the physical and data sciences to demonstrate the effectiveness of the proposed strategies.</p>https://thesis.library.caltech.edu/id/eprint/10228Spatial Profiles in the Singular Solutions of the 3D Euler Equations and Simplified Models
https://resolver.caltech.edu/CaltechTHESIS:09092016-000915850
Authors: {'items': [{'email': 'pengfeiliuc@gmail.com', 'id': 'Liu-Pengfei', 'name': {'family': 'Liu', 'given': 'Pengfei'}, 'orcid': '0000-0002-6714-7387', 'show_email': 'YES'}]}
Year: 2017
DOI: 10.7907/Z9V9862G
<p>The partial differential equations (PDEs) governing the motion of an incompressible ideal fluid in three-dimensional (3D) space are among the most fundamental nonlinear PDEs in nature and have many important applications. Due to the presence of the supercritical nonlinearity, the fundamental question of global well-posedness remains open and is generally viewed as one of the most outstanding open questions in mathematics. In this thesis, we investigate potential finite-time singularity formation in the 3D Euler equations and simplified models by studying the self-similar spatial profiles in the potentially singular solutions.</p>
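The notion of a self-similar spatial profile used throughout can be summarized by the standard blowup ansatz (written generically here; the scaling exponents and variables differ among the models studied):

```latex
u(x,t) \;=\; (T - t)^{-c_u}\,
U\!\left(\frac{x - x_0}{(T - t)^{\,c_l}}\right),
\qquad t \to T^{-},
```

where T is the potential blowup time, x<sub>0</sub> the blowup location, U the spatial profile, and c<sub>u</sub>, c<sub>l</sub> scaling exponents. The dynamic rescaling formulation mentioned below evolves the solution in such rescaled variables, so convergence of the rescaled solution to a steady profile is the signature of self-similar singularity formation.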
<p>In the first part, we study the self-similar singularity of two 1D models, the CKY model and the HL model, which approximate the dynamics of the 3D axisymmetric Euler equations on the solid boundary of a cylindrical domain. Both models are numerically observed to develop self-similar singularities. We prove the existence of a discrete family of self-similar profiles for the CKY model, using a combination of analysis and computer-aided verification. Then we employ a dynamic rescaling formulation to numerically study the evolution of the spatial profiles for the two 1D models, and demonstrate the stability of the self-similar singularity. We also study a singularity scenario for the HL model with multi-scale features.</p>
<p>In the second part, we study the self-similar singularity of the 3D axisymmetric Euler equations. We first prove the local existence of a family of analytic self-similar profiles using a modified Cauchy-Kowalevski majorization argument. We then use the dynamic rescaling formulation to investigate two types of initial data with different leading-order properties. The first initial data correspond to the singularity scenario reported by Luo and Hou. We demonstrate that the self-similar profiles enjoy a certain stability, which confirms the finite-time singularity reported by Luo and Hou. For the second initial data, we show that the solutions develop a singularity in a manner different from the first case, which was previously unknown: the spatial profiles in the solutions become singular themselves, meaning that the solutions to the Euler equations develop singularities at multiple spatial scales.</p>
<p>In the third part, we propose a family of 3D models for the 3D axisymmetric Euler and Navier-Stokes equations by modifying the amplitude of the convection terms. The models share several regularity results with the original Euler and Navier-Stokes equations, and we study their potential finite-time singularity numerically. We show that for small convection, the solutions of the inviscid model develop a self-similar singularity and the profiles behave like travelling waves. As we increase the amplitude of the velocity field, we find a critical value beyond which the travelling-wave self-similar singularity scenario disappears. Our numerical results reveal the potential stabilizing effect of the convection terms.</p>https://thesis.library.caltech.edu/id/eprint/9920Concentration Inequalities of Random Matrices and Solving Ptychography with a Convex Relaxation
https://resolver.caltech.edu/CaltechTHESIS:09022016-135721172
Authors: {'items': [{'email': 'richardchen100@gmail.com', 'id': 'Chen-Yuhua-Richard', 'name': {'family': 'Chen', 'given': 'Yuhua Richard'}, 'show_email': 'NO'}]}
Year: 2017
DOI: 10.7907/Z9M906MF
<p>Random matrix theory has seen rapid development in recent years. In particular, researchers have developed many non-asymptotic matrix concentration inequalities that parallel powerful scalar concentration inequalities. In this thesis, we focus on three topics: 1) estimating sparse covariance matrices using matrix concentration inequalities, 2) constructing a matrix phi-entropy to derive matrix concentration inequalities, and 3) developing scalable algorithms to solve the phase-recovery problem of ptychography based on low-rank matrix factorization.</p>
<p>Estimation of covariance matrices is an important subject. In the setting of high-dimensional statistics, the number of samples can be small in comparison to the dimension of the problem, so estimating the complete covariance matrix is infeasible. By assuming that the covariance matrix satisfies certain sparsity assumptions, prior work has shown that it is feasible to estimate the sparse covariance matrix of a Gaussian distribution using the masked sample covariance estimator. In this thesis, we take a new approach and apply non-asymptotic matrix concentration inequalities to obtain tight sample bounds for estimating the sparse covariance matrix of subgaussian distributions.</p>
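The masked estimator discussed above is simple to state: multiply the sample covariance entrywise by the known sparsity pattern. A rough numerical sketch (our own toy with an AR(1)-type covariance and a banded mask, not the thesis's subgaussian analysis; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, bandwidth = 100, 50, 3   # high dimension, few samples

# AR(1)-type covariance: approximately sparse, entries decay off the diagonal.
idx = np.arange(p)
true_cov = 0.4 ** np.abs(idx[:, None] - idx[None, :])
mask = (np.abs(idx[:, None] - idx[None, :]) <= bandwidth).astype(float)

# Sample covariance from n << p draws, then the masked estimator.
samples = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
sample_cov = samples.T @ samples / n
masked_cov = mask * sample_cov

# Masking trades a small bias for a large variance reduction.
err_full = np.linalg.norm(sample_cov - true_cov, 2)
err_masked = np.linalg.norm(masked_cov - true_cov, 2)
```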
<p>The entropy method is a powerful approach to developing scalar concentration inequalities. The key ingredient is the subadditivity property that the scalar entropy function exhibits. In this thesis, we construct a new concept of matrix phi-entropy and prove that it satisfies a subadditivity property similar to the scalar form. We apply this new concept to derive non-asymptotic matrix concentration inequalities.</p>
<p>Ptychography is a computational imaging technique that transforms low-resolution intensity-only images into a high-resolution complex-valued recovery of the signal. Conventional algorithms are based on alternating projection and lack theoretical guarantees for their performance. In this thesis, we construct two new algorithms. The first relies on a convex formulation of the ptychography problem and on low-rank matrix recovery; it improves on the performance of traditional approaches but has high computational cost. The second achieves near-linear runtime and memory complexity by factorizing the objective matrix into its low-rank components, and approximates the imaging quality of the first.</p>https://thesis.library.caltech.edu/id/eprint/9911Fluid Dynamics with Incompressible Schrödinger Flow
https://resolver.caltech.edu/CaltechTHESIS:06052017-102338732
Authors: {'items': [{'email': 'albert123chern@gmail.com', 'id': 'Chern-Albert-Ren-Haur', 'name': {'family': 'Chern', 'given': 'Albert Ren-Haur'}, 'orcid': '0000-0002-9802-3619', 'show_email': 'NO'}]}
Year: 2017
DOI: 10.7907/Z98050N3
<p>This thesis introduces a new way of looking at incompressible fluid dynamics. Specifically, we formulate and simulate classical fluids using a Schrödinger equation subject to an incompressibility constraint. We call such a fluid flow an incompressible Schrödinger flow (ISF). The approach is motivated by Madelung's hydrodynamical form of quantum mechanics, and we show that it can simulate classical fluids with particular advantages in its simplicity and its ability to capture thin vortex dynamics. The effective dynamics of an ISF are shown to be an Euler equation modified with a Landau-Lifshitz term. We show that the modifying term not only enhances the dynamics of vortex filaments but also regularizes the potentially singular behavior of incompressible flows.</p>
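The Madelung correspondence underlying ISF can be sketched in a few lines (a 1D toy of ours, not the thesis's solver): the fluid velocity is recovered from the wavefunction ψ as u = ħ Im(ψ̄ ∂ₓψ)/|ψ|², so a plane wave exp(ikx) carries the constant velocity ħk:

```python
import numpy as np

# Madelung velocity: u = hbar * Im(conj(psi) * d(psi)/dx) / |psi|^2.
hbar, k = 1.0, 3.0
x = np.linspace(0, 2 * np.pi, 512, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(1j * k * x)                     # plane-wave wavefunction

grad_psi = np.gradient(psi, dx)              # central finite differences
u = hbar * np.imag(np.conj(psi) * grad_psi) / np.abs(psi) ** 2
# In the interior, u equals hbar * k up to discretization error.
```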
<p>Another contribution of this thesis is the elucidation of a general, geometric notion of Clebsch variables. A geometric Clebsch variable is useful for analyzing the dynamics of an ISF, as well as for representing vortical structures in a general flow field. We also develop an algorithm for approximating a "spherical" Clebsch map for an arbitrary given flow field, which leads to a new tool for visualizing, analyzing, and processing the vortex structure in fluid data.</p>https://thesis.library.caltech.edu/id/eprint/10278Windowed Integral Equation Methods for Problems of Scattering by Defects and Obstacles in Layered Media
https://resolver.caltech.edu/CaltechTHESIS:08182016-124629380
Authors: {'items': [{'email': 'caperezar@gmail.com', 'id': 'Perez-Arancibia-Carlos-Andrés', 'name': {'family': 'Pérez Arancibia', 'given': 'Carlos Andrés'}, 'orcid': '0000-0003-1647-4019', 'show_email': 'YES'}]}
Year: 2017
DOI: 10.7907/Z9GQ6VQT
<p>This thesis concerns development of efficient high-order boundary integral equation methods for the numerical solution of problems of acoustic and electromagnetic scattering in the presence of planar layered media in two and three spatial dimensions. The interest in such problems arises from application areas that benefit from accurate numerical modeling of the layered media scattering phenomena, such as electronics, near-field optics, plasmonics and photonics as well as communications, radar and remote sensing.</p>
<p>A number of efficient algorithms applicable to various problems in these areas are presented in this thesis, including (i) a Sommerfeld-integral-based high-order integral equation method for problems of scattering by defects in the presence of an infinite ground plane and other layered media, (ii) studies of resonances and near resonances and their impact on the absorptive properties of rough surfaces, and (iii) a novel <i>Window Green Function Method</i> (WGF) for problems of scattering by obstacles and defects in the presence of layered media. The WGF approach makes it possible to completely avoid the expensive Sommerfeld integrals typically utilized in layered-media simulations. In fact, the methods and studies referred to in points (i) and (ii) above motivated the development of the markedly more efficient WGF alternative.</p>https://thesis.library.caltech.edu/id/eprint/9902Geometry-Driven Model Reduction
https://resolver.caltech.edu/CaltechTHESIS:12102018-011614041
Authors: {'items': [{'email': 'max.budninskiy@gmail.com', 'id': 'Budninskiy-Maxim-A', 'name': {'family': 'Budninskiy', 'given': 'Maxim A.'}, 'orcid': '0000-0002-9288-0249', 'show_email': 'NO'}]}
Year: 2019
DOI: 10.7907/0RCX-0369
<p>In this thesis we bring discrete differential geometry to bear on model reduction, both in the context of data analysis and numerical simulation of physical phenomena.</p>
<p>First, we present a novel controllable as-isometric-as-possible embedding method for low- and high-dimensional geometric datasets through sparse matrix eigenanalysis. This approach is equally suitable for performing nonlinear dimensionality reduction on big data and nonlinear shape editing of 3D meshes and pointsets. At the core of our approach is the construction of a "multi-Laplacian" quadratic form that is assembled from local operators whose kernels only contain locally affine functions. Minimizing this quadratic form produces an embedding that best preserves all relative coordinates of points within their local neighborhoods. We demonstrate the improvements that our approach brings over existing nonlinear local manifold learning methods on a number of datasets, and formulate the first eigen-based as-rigid-as-possible shape deformation technique by applying our affine-kernel embedding approach to 3D data augmented with user-imposed constraints on select vertices.</p>
<p>Second, we introduce a new global manifold learning approach based on metric connection for generating a quasi-isometric, low-dimensional mapping from a sparse and irregular sampling of an arbitrary low-dimensional manifold embedded in a high-dimensional space. Our geometric procedure computes a low-dimensional embedding that best preserves all pairwise geodesic distances over the input pointset similarly to one of the staples of manifold learning, the Isomap algorithm, and exhibits the same strong resilience to noise. While Isomap relies on Dijkstra's shortest path algorithm to approximate geodesic distances over the input pointset, we instead propose to compute them through "parallel transport unfolding," a discrete form of Cartan's development, to offer robustness to poor sampling and arbitrary topology. Our novel approach to evaluating geodesic distances using discrete differential geometry results in a markedly improved robustness to irregularities and sampling voids. In particular, it does not suffer from Isomap's limitation to geodesically convex sampled domains. Moreover, it involves only simple linear algebra, significantly improves the accuracy of all pairwise geodesic distance approximations, and has the same computational complexity as Isomap. We also show that our connection-based distance estimation can be used for faster variants of Isomap such as Landmark-Isomap.</p>
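Isomap and the proposed parallel-transport variant share a final classical-MDS step that turns a matrix of (approximate) geodesic distances into a low-dimensional embedding; they differ only in how those distances are estimated. A minimal sketch of that shared step alone, assuming exact geodesic distances along a curve (the helper name `classical_mds` is ours):

```python
import numpy as np

def classical_mds(dist, dim):
    """Embed points so pairwise distances match `dist` as well as possible."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]       # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

# Samples along a curve; geodesic distances are arclength differences.
t = np.linspace(0.0, 1.5, 40)
geo = np.abs(t[:, None] - t[None, :])
emb = classical_mds(geo, 1)

# A line metric is exactly embeddable in 1D: the embedding reproduces geo.
recon = np.abs(emb[:, 0][:, None] - emb[:, 0][None, :])
```

Parallel transport unfolding changes only the estimation of the geodesic distance matrix; the embedding stage above is unchanged.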
<p>Finally, we introduce an operator-adapted multiresolution analysis for finite-element differential forms. From a given continuous, linear, bijective, and self-adjoint positive-definite operator <i>L</i>, a hierarchy of basis functions and associated wavelets for discrete differential forms is constructed in a fine-to-coarse fashion and in quasilinear time. The resulting wavelets are <i>L</i>-orthogonal across all scales, and can be used to obtain a Galerkin discretization of the operator with a block diagonal stiffness matrix composed of uniformly well-conditioned and sparse blocks. Because our approach applies to arbitrary differential <i>p</i>-forms, we can derive both scalar-valued and vector-valued wavelets that block diagonalize a prescribed operator. Our construction applies to various types of computational grids, offers arbitrary smoothness orders of basis functions and wavelets, and can accommodate linear differential constraints such as divergence-freeness. We also demonstrate the benefits of the operator-adapted multiresolution decomposition for coarse-graining and model reduction of linear and nonlinear partial differential equations.</p>
<p>We conclude with a short discussion on how future work in geometric model reduction may impact other related topics such as semi-supervised learning.</p>https://thesis.library.caltech.edu/id/eprint/11303Wave-Scattering by Periodic Media
https://resolver.caltech.edu/CaltechTHESIS:08252019-170117522
Authors: {'items': [{'email': 'fernandezlado@gmail.com', 'id': 'Fernandez-Lado-Agustin-Gabriel', 'name': {'family': 'Fernandez-Lado', 'given': 'Agustin Gabriel'}, 'orcid': '0000-0002-8141-3792', 'show_email': 'NO'}]}
Year: 2020
DOI: 10.7907/G7XQ-RT85
<p>This thesis presents a full-spectrum, well-conditioned, Green-function methodology for the evaluation of scattering by general periodic structures, which remains applicable on a set of challenging singular configurations, usually called Rayleigh-Wood (RW) anomalies, where most existing methods break down. After reviewing a variety of existing fast-converging numerical procedures commonly used to compute the classical quasi-periodic Green function, the present work explores the difficulties they present around RW anomalies and introduces the concept of hybrid "spatial/spectral" representations. Such expressions allow both the modification of existing methods to obtain convergence at RW anomalies and the application of a slight generalization of the Woodbury-Sherman-Morrison formulae, together with a limiting procedure, to bypass the singularities. Although, for definiteness, the overall approach is applied to the scalar (acoustic) wave-scattering problem in the frequency domain, it can be extended in a straightforward manner to the harmonic Maxwell and elasticity equations. Ultimately, the thorough understanding of RW anomalies this thesis provides yields fast and highly accurate solvers, which are demonstrated with a variety of simulations of wave-scattering phenomena by arrays of particles, crossed impenetrable and penetrable diffraction gratings, and other related structures. In particular, the methods developed in this thesis can be used to "upgrade" classical approaches, resulting in algorithms that are applicable throughout the spectrum, and they provide new methods for cases where previous approaches are either costly or fail altogether. Indeed, it is suggested that the proposed shifted Green function approach may provide the only viable alternative for the treatment of three-dimensional high-frequency configurations. A variety of computational examples are presented which demonstrate the flexibility of the overall approach, including, in particular, a problem of diffraction by a double-helix structure, for which numerical simulations did not previously exist, and for which the scattering pattern presented in this thesis closely resembles those obtained in crystallography experiments for DNA molecules.</p>
https://thesis.library.caltech.edu/id/eprint/11764Positive Definite Matrices: Compression, Decomposition, Eigensolver, and Concentration
https://resolver.caltech.edu/CaltechTHESIS:05222020-162227420
Authors: {'items': [{'email': 'huangde0123@gmail.com', 'id': 'Huang-De', 'name': {'family': 'Huang', 'given': 'De'}, 'orcid': '0000-0003-4023-9895', 'show_email': 'YES'}]}
Year: 2020
DOI: 10.7907/g2nt-yy27
<p>For many decades, the study of positive-definite (PD) matrices has been one of the most popular subjects across a wide range of scientific research. A wealth of successful models involving PD matrices has been proposed and developed in mathematics, physics, biology, and other fields, leading to a celebrated richness of theories and algorithms. In this thesis, we turn our attention to a general class of PD matrices that can be decomposed as the sum of a sequence of positive-semidefinite matrices. For this class of PD matrices, we develop theories and algorithms for operator compression, multilevel decomposition, eigenpair computation, and spectrum concentration. We divide these contents into three main parts.</p>
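A concrete member of this class (our illustration; the thesis treats the class abstractly) is a graph Laplacian, which is assembled as a sum of rank-one positive-semidefinite edge matrices w_e b_e b_eᵀ:

```python
import numpy as np

# Weighted graph on 4 vertices; each edge contributes one PSD rank-one term.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
weights = [1.0, 2.0, 0.5, 1.5, 1.0]
n = 4

laplacian = np.zeros((n, n))
for (i, j), w in zip(edges, weights):
    b = np.zeros(n)
    b[i], b[j] = 1.0, -1.0            # incidence vector of the edge
    laplacian += w * np.outer(b, b)   # PSD contribution of one edge

eigvals = np.linalg.eigvalsh(laplacian)   # ascending; all nonnegative
```

Adding a small multiple of the identity (grounding a node) turns such a PSD sum into a PD matrix of the kind studied here.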
<p>In the first part, we propose an adaptive fast solver for the preceding class of PD matrices, which includes the well-known graph Laplacians. We achieve this by establishing an adaptive operator compression scheme and a multiresolution matrix factorization algorithm, both of which have nearly optimal performance in complexity and well-posedness. To develop our methods, we introduce a novel notion of energy decomposition for PD matrices and two important local measurement quantities, which provide theoretical guarantees and computational guidance for the construction of an appropriate partition and a nested adaptive basis.</p>
<p>In the second part, we propose a new iterative method to hierarchically compute a relatively large number of leftmost eigenpairs of a sparse PD matrix under the multiresolution matrix compression framework. We exploit the well-conditionedness of each component of the decomposition by integrating the multiresolution framework into the implicitly restarted Lanczos method. We achieve this combination through an extension-refinement iterative scheme, whose intrinsic idea is to decompose the target spectrum into several segments such that the corresponding eigenproblem in each segment is well-conditioned.</p>
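The building block behind the implicitly restarted method is the basic Lanczos iteration, which tridiagonalizes a symmetric matrix in a Krylov basis so that the tridiagonal eigenvalues (Ritz values) approximate extremal eigenvalues. A self-contained sketch with full reorthogonalization (our toy; the restarting and multiresolution machinery of the thesis is not shown):

```python
import numpy as np

def lanczos(a, m, rng):
    """m-step Lanczos: return the Ritz values of the symmetric matrix a."""
    n = a.shape[0]
    q = np.zeros((n, m + 1))
    alpha, beta = np.zeros(m), np.zeros(m)
    q[:, 0] = rng.standard_normal(n)
    q[:, 0] /= np.linalg.norm(q[:, 0])
    for k in range(m):
        w = a @ q[:, k] - (beta[k - 1] * q[:, k - 1] if k > 0 else 0)
        alpha[k] = q[:, k] @ w
        w = w - alpha[k] * q[:, k]
        w -= q[:, :k + 1] @ (q[:, :k + 1].T @ w)   # full reorthogonalization
        beta[k] = np.linalg.norm(w)
        q[:, k + 1] = w / beta[k]
    t = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(t)

rng = np.random.default_rng(1)
mat = rng.standard_normal((200, 60))
a = mat @ mat.T + np.eye(200)        # symmetric positive definite
ritz = lanczos(a, 40, rng)
exact = np.linalg.eigvalsh(a)
# The extremal Ritz values closely match the extremal eigenvalues of a.
```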
<p>In the third part, we derive concentration inequalities on partial sums of eigenvalues of random PD matrices by introducing the notion of <i>k</i>-trace. For this purpose, we establish a generalized Lieb's concavity theorem, which extends the original Lieb's concavity theorem from the normal trace to <i>k</i>-traces. Our argument employs a variety of matrix techniques and concepts, including exterior algebra, mixed discriminant, and operator interpolation.</p>https://thesis.library.caltech.edu/id/eprint/13715Boundary Integral Equation Methods for Simulation and Design of Photonic Devices
https://resolver.caltech.edu/CaltechTHESIS:01092020-141256074
Authors: {'items': [{'email': 'emgg90@gmail.com', 'id': 'Garza-Gonzalez-Emmanuel', 'name': {'family': 'Garza Gonzalez', 'given': 'Emmanuel'}, 'orcid': '0000-0003-1687-8216', 'show_email': 'NO'}]}
Year: 2020
DOI: 10.7907/XXPX-9H78
<p>This thesis presents novel boundary integral equation (BIE) and associated optimization methodologies for photonic devices. The simulation and optimization of such structures is a vast and rapidly growing engineering area, which impacts the design of optical devices such as waveguide splitters, tapers, grating couplers, and metamaterial structures, all of which are commonly used as elements in the field of integrated photonics. The design process has been significantly facilitated in recent years on the basis of a variety of methods in computational electromagnetic (EM) simulation and design. Unfortunately, however, the expense required by previous simulation tools has limited the extent and complexity of the structures that can be treated. The methods presented in this thesis represent the results of our efforts towards accomplishing the dual goals of 1) accurate and efficient EM simulation for general, highly complex three-dimensional problems, and 2) development of effective optimization methods leading to an improved state of the art in EM design.</p>
<p>One of the main proposed elements utilizes BIEs in conjunction with a modified-search algorithm to obtain the modes of uniform waveguides with arbitrary cross sections. This method avoids spurious solutions by means of a certain normalization procedure for the fields within the waveguides. In order to handle problems including nonuniform waveguide structures, we introduce the windowed Green function (WGF) method, which, used in conjunction with auxiliary integral representations for bound-mode excitations, has enabled accurate simulation of a wide variety of waveguide problems on the basis of highly accurate and efficient BIEs, in two and three spatial dimensions. The "rectangular-polar" method provides the basic high-order singular-integration engine. Based on non-overlapping Chebyshev-discretized patches, the rectangular-polar method underlies the accuracy and efficiency of the proposed general-geometry three-dimensional BIE approach. Finally, we introduce a three-dimensional BIE framework for the efficient computation of sensitivities — i.e., gradients with respect to design parameters — via adjoint techniques. This methodology is then applied to the design of metalenses including up to a thousand parameters, where the overall optimization process takes on the order of three hours using five hundred computing cores. Forthcoming work along the lines of this effort seeks to extend and apply these methodologies to some of the most challenging and exciting design problems in electromagnetics in general, and photonics in particular.</p>https://thesis.library.caltech.edu/id/eprint/13613Hybrid Frequency-Time Analysis and Numerical Methods for Time-Dependent Wave Propagation
https://resolver.caltech.edu/CaltechTHESIS:09042020-172204130
Authors: {'items': [{'email': 'thomas.geoff.anderson@gmail.com', 'id': 'Anderson-Thomas-Geoffrey', 'name': {'family': 'Anderson', 'given': 'Thomas Geoffrey'}, 'orcid': '0000-0002-0643-2571', 'show_email': 'NO'}]}
Year: 2021
DOI: 10.7907/hmv1-r869
<p>This thesis focuses on the solution of causal, time-dependent wave propagation and scattering problems in two- and three-dimensional spatial domains. This important and long-standing problem has attracted a great deal of interest, reflecting not only its use as a model problem but also the prevalence of wave phenomena in diverse areas of modern science, technology, and engineering. Essentially all prior methods rely on "time-stepping" in one form or another, which involves local-in-time approximation of the evolution of the solution of the partial differential equation (PDE) based on the immediate time history and temporal finite-difference approximation. In addition to the need to manage the accumulation of (dispersion) error and the burdensome increase in computational cost over time, difficult issues of stability, time-domain boundary conditions, and absorbing boundary conditions often need to be addressed.</p>
<p>To sidestep many of these problems, this thesis develops a novel, highly efficient approach for time-dependent wave scattering problems employing the global-in-time techniques of Fourier transformation, leading to a frequency/time hybrid method for the time-dependent wave equation. Relying on Fourier transformation in time and utilizing a fixed (time-independent) number of frequency-domain solutions, the method evaluates the desired time-domain evolution with errors that decay faster than any negative power of the temporal sampling rate and that, for a given sampling rate, are additionally uniform for all time. The fast error decay guarantees that high accuracies can be attained on the basis of relatively coarse temporal and frequency discretizations. The uniformity of the error for all time at a fixed sampling rate, a property known as dispersionlessness, plays a crucial role, together with other properties of the Fourier transform, in enabling the evaluation of solutions for long times at <i>O</i>(1) cost. In particular, this thesis demonstrates the significant advantages enjoyed by the proposed methods over alternative approaches based on volumetric discretizations, time-domain integral equations, and convolution quadrature.</p>
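The core frequency/time principle can be caricatured in a few lines (a toy of ours, not the thesis's scattering solver): Fourier-transform a smooth, effectively windowed incident signal, apply a per-frequency solution operator (here simply the free-space delay exp(−iωτ)), and transform back to obtain the time-domain response:

```python
import numpy as np

n, dt, tau = 1024, 0.01, 1.0
t = np.arange(n) * dt
signal = np.exp(-((t - 2.0) / 0.2) ** 2)   # smooth, effectively time-windowed

# Multiply each frequency component by the "solution operator" exp(-i*omega*tau).
omega = 2 * np.pi * np.fft.fftfreq(n, dt)
response = np.fft.ifft(np.fft.fft(signal) * np.exp(-1j * omega * tau)).real

# The response is the incident pulse delayed by tau.
expected = np.exp(-((t - 3.0) / 0.2) ** 2)
```

In the actual method, the delay factor is replaced by genuine frequency-domain scattering solutions, and the windowing-and-partitioning machinery keeps the error uniform for all time.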
<p>The approach relies on two main elements, namely, 1) A smooth time-windowing methodology that enables accurate band-limited representations for arbitrarily-long time signals, and 2) A novel Fourier transform approach which, in a time-parallel manner and without causing spurious periodicity effects, delivers numerically dispersionless spectrally-accurate solutions. A similar hybrid technique can be obtained on the basis of Laplace transforms instead of Fourier transforms, but we do not consider the Laplace-based method in detail, only briefly pointing out its essential features and associated challenges.</p>
<p>The proposed frequency/time Fourier-transform methods for obstacle scattering problems are easily generalizable to any linear partial differential equation in the time domain for which frequency-domain solutions can readily be obtained, including e.g. the time-domain Maxwell equations, the linear elasticity equations, inhomogeneous and/or frequency-dependent dispersive media, etc. Further, the proposed approach can tackle complex physical structures, it enables parallelization in time in a straightforward manner, and it allows for time leaping—that is, solution sampling at any given time <i>T</i> at <i>O</i>(1)-bounded sampling cost, for arbitrarily large values of <i>T</i>, and without requirement of evaluation of the solution at intermediate times. In particular, effective algorithms are introduced that, relying on use of time-asymptotics, compute two-dimensional solutions at <i>O</i>(1) cost despite the very slow time-decay that takes place in the two-dimensional case.</p>
<p>A significant portion of this thesis is devoted to a theoretical study of the validity of a certain stopping criterion used by the algorithm, which guarantees that certain field contributions can safely be neglected after certain stopping times. Roughly speaking, the theoretical results guarantee that, after the incident field is turned off, the magnitude of the future scattering density (and thus the magnitudes of the fields) can be estimated by the magnitude of the integral density <i>over a time period comparable to the time required by a wave to travel a distance equal to the diameter of the scatterer</i>. The criterion, which is crucial in ensuring the <i>O</i>(1) computational cost of the algorithm, is closely related to the well-known scattering theory developed in the 1960s and '70s by Lax, Morawetz, Phillips, Strauss and others. Our approach to the decay problem is based on use of frequency-domain estimates (developed previously in the context of numerical analysis of frequency-domain problems) on integral operators in the high-frequency regime for obstacles of various trapping classes. In particular, our theory yields, for the first time, decay estimates for a class of connected trapping obstacles: all previous estimates of scattered-field decay for connected obstacles are restricted to nontrapping structures.</p>
<p>In all, the proposed approach leverages the power of the Fourier transformation together with a range of newly developed spectrally convergent numerical methods in both the frequency and time domain and a variety of novel theoretical results in the general area of scattering theory to produce a radically-new framework for the solution of time-dependent wave propagation and scattering problems.</p>https://thesis.library.caltech.edu/id/eprint/13864Learning Patterns with Kernels and Learning Kernels from Patterns
https://resolver.caltech.edu/CaltechTHESIS:09062020-172055989
Authors: {'items': [{'email': 'ryanyoo2008@gmail.com', 'id': 'Yoo-Gene-Ryan', 'name': {'family': 'Yoo', 'given': 'Gene Ryan'}, 'orcid': '0000-0002-5319-5599', 'show_email': 'NO'}]}
Year: 2021
DOI: 10.7907/c5fn-ac81
<p>A major technique in learning involves the identification of patterns and their use to make predictions. In this work, we examine the symbiotic relationship between patterns and Gaussian process regression (GPR), which is mathematically equivalent to kernel interpolation. We introduce techniques in which GPR can be used to learn patterns in denoising and mode (signal) decomposition. Additionally, we present the kernel flow (KF) algorithm, which learns a kernel from patterns in the data with a methodology inspired by cross-validation. We further show how the KF algorithm can be applied to artificial neural networks (ANNs) to improve the learning of patterns in images.</p>
<p>In our denoising and mode decomposition examples, we show how kernels can be constructed to estimate patterns that may be hidden due to data corruption. In other words, we demonstrate how to learn patterns with kernels. Donoho and Johnstone proposed a near-minimax method for reconstructing an unknown smooth function <i>u</i> from noisy data <i>u</i> + ζ by translating the empirical wavelet coefficients of <i>u</i> + ζ towards zero. We consider the situation where the prior information on the unknown function <i>u</i> may not be the regularity of <i>u</i>, but that of ℒ<i>u</i> where ℒ is a linear operator, such as a partial differential equation (PDE) or a graph Laplacian. We show that a near-minimax approximation of <i>u</i> can be obtained by truncating the ℒ-gamblet (operator-adapted wavelet) coefficients of <i>u</i> + ζ. The recovery of <i>u</i> can be seen to be precisely a Gaussian conditioning of <i>u</i> + ζ on measurement functions with length scale dependent on the signal-to-noise ratio.</p>
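The classical recipe this generalizes can be sketched with a single Haar level and soft thresholding (a toy of ours assuming a known noise level sigma; the thesis truncates operator-adapted gamblet coefficients instead of Haar coefficients):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 1024, 0.3
t = np.linspace(0, 1, n)
clean = np.sin(2 * np.pi * t)
noisy = clean + sigma * rng.standard_normal(n)

# One-level Haar transform: pairwise averages (coarse) and differences (detail).
coarse = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)

# Translate the detail coefficients towards zero (soft thresholding).
thresh = sigma * np.sqrt(2 * np.log(n))
detail = np.sign(detail) * np.maximum(np.abs(detail) - thresh, 0)

# Invert the Haar transform to obtain the denoised signal.
denoised = np.empty(n)
denoised[0::2] = (coarse + detail) / np.sqrt(2)
denoised[1::2] = (coarse - detail) / np.sqrt(2)
```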
<p>We next introduce kernel mode decomposition (KMD), which has been designed to learn the modes <i>v<sub>i</sub></i> = <i>a<sub>i</sub></i>(<i>t</i>)<i>y<sub>i</sub></i>(<i>θ<sub>i</sub></i>(<i>t</i>)) of a (possibly noisy) signal Σ<i><sub>i</sub>v<sub>i</sub></i> when the amplitudes <i>a<sub>i</sub></i>, instantaneous phases <i>θ<sub>i</sub></i>, and periodic waveforms <i>y<sub>i</sub></i> may all be unknown. GPR with Gabor wavelet-inspired kernels is used to estimate <i>a<sub>i</sub></i>, <i>θ<sub>i</sub></i>, and <i>y<sub>i</sub></i>. We show near machine precision recovery under regularity and separation assumptions on the instantaneous amplitudes <i>a<sub>i</sub></i> and frequencies <i>θ̇<sub>i</sub></i>.</p>
<p>GPR and kernel interpolation require the selection of an appropriate kernel modeling the data. We present the KF algorithm, which is a numerical-approximation approach to this selection. The main principle the method utilizes is that a "good" kernel is able to make accurate predictions with small subsets of a training set. In this way, we learn a kernel from patterns. In image classification, we show that the learned kernels are able to classify accurately using only one training image per class and show signs of unsupervised learning. Furthermore, we introduce the combination of the KF algorithm with conventional neural-network training. This combination is able to train the intermediate-layer outputs of the network simultaneously with the final-layer output. We test the proposed method on Convolutional Neural Networks (CNNs) and Wide Residual Networks (WRNs) without alteration of their structure or their output classifier. We report reduced test errors, decreased generalization gaps, and increased robustness to distribution shift without significant increase in computational complexity relative to standard CNN and WRN training (with Dropout and Batch Normalization).</p>
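The selection principle can be sketched directly (our simplified rendering with an RBF kernel and a single random half-split; the function kf_rho and the lengthscales are illustrative choices): the loss ρ = 1 − (y_cᵀK_c⁻¹y_c)/(yᵀK⁻¹y) is small precisely when interpolation from half the data already explains the whole set:

```python
import numpy as np

def rbf(x1, x2, lengthscale):
    return np.exp(-((x1[:, None] - x2[None, :]) / lengthscale) ** 2)

def kf_rho(x, y, lengthscale, rng):
    """Kernel-flow-style loss: 1 - |interpolant from half|^2 / |interpolant|^2."""
    idx = rng.permutation(len(x))[: len(x) // 2]
    k_full = rbf(x, x, lengthscale) + 1e-8 * np.eye(len(x))
    k_half = rbf(x[idx], x[idx], lengthscale) + 1e-8 * np.eye(len(idx))
    num = y[idx] @ np.linalg.solve(k_half, y[idx])
    den = y @ np.linalg.solve(k_full, y)
    return 1 - num / den

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 80)
y = np.sin(6 * x)
rho_good = kf_rho(x, y, 0.2, rng)     # well-matched lengthscale: small loss
rho_bad = kf_rho(x, y, 0.001, rng)    # near-white kernel: large loss
```

The KF algorithm descends this loss (averaged over random splits) with respect to the kernel parameters.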
<p>As a whole, this work highlights the interplay of kernel techniques with pattern recognition and numerical approximation.</p> https://thesis.library.caltech.edu/id/eprint/13868Inference, Computation, and Games
https://resolver.caltech.edu/CaltechTHESIS:06082021-005706263
Authors: {'items': [{'email': 'flotosch@gmail.com', 'id': 'Schäfer-Florian-Tobias', 'name': {'family': 'Schäfer', 'given': 'Florian Tobias'}, 'orcid': '0000-0002-4891-0172', 'show_email': 'YES'}]}
Year: 2021
DOI: 10.7907/esyv-2181
<p>In this thesis, we use statistical inference and competitive games to design algorithms for computational mathematics.</p>
<p> In the first part, comprising chapters two through six, we use ideas from Gaussian process statistics to obtain fast solvers for differential and integral equations. We begin by observing the equivalence of conditional (near-)independence of Gaussian processes and the (near-)sparsity of the Cholesky factors of their precision and covariance matrices. This implies the existence of a large class of <em>dense</em> matrices with almost <em>sparse</em> Cholesky factors, thereby greatly increasing the scope of application of sparse Cholesky factorization. Using an elimination ordering and sparsity pattern motivated by the <em>screening effect</em> in spatial statistics, we can compute approximate Cholesky factors of the covariance matrices of Gaussian processes admitting a screening effect in near-linear computational complexity. These include many popular smoothness priors such as the Matérn class of covariance functions.
In the special case of Green's matrices of elliptic boundary value problems (with possibly unknown elliptic operators of arbitrarily high order, with possibly rough coefficients), we can use tools from numerical homogenization to prove the exponential accuracy of our method. This result improves the state-of-the-art for solving general elliptic integral equations and provides the first proof of an exponential screening effect. We also derive a fast solver for elliptic partial differential equations, with accuracy-vs-complexity guarantees that improve upon the state-of-the-art. Furthermore, the resulting solver is performant in practice, frequently beating established algebraic multigrid libraries such as AMGCL and Trilinos on a series of challenging problems in two and three dimensions.
Finally, for any given covariance matrix, we obtain a closed-form expression for its <em>optimal</em> (in terms of Kullback-Leibler divergence) approximate inverse-Cholesky factorization subject to a sparsity constraint, recovering Vecchia approximation and factorized sparse approximate inverses. Our method is highly robust, embarrassingly parallel, and further improves our asymptotic results on the solution of elliptic integral equations. We also provide a way to apply our techniques to sums of independent Gaussian processes, resolving a major limitation of existing methods based on the screening effect. As a result, we obtain fast algorithms for large-scale Gaussian process regression problems with possibly noisy measurements.</p>
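The per-column closed form for the KL-optimal sparse inverse-Cholesky factor can be sketched as follows. This is a toy illustration from the Vecchia/KL-minimization literature: the sparsity sets and ordering below (a dense lower-triangular pattern, used so that exactness can be checked) are illustrative assumptions, not the screening-based pattern of the thesis:

```python
import numpy as np

def kl_optimal_L(Theta, pattern):
    """Closed-form KL-optimal sparse factor L with Theta^{-1} ~ L @ L.T.
    pattern[i] is a sorted list of row indices with pattern[i][0] == i
    (the lower-triangular sparsity set of column i)."""
    n = Theta.shape[0]
    L = np.zeros((n, n))
    for i, s in enumerate(pattern):
        Ts = Theta[np.ix_(s, s)]                 # restrict Theta to s_i
        e1 = np.zeros(len(s)); e1[0] = 1.0
        col = np.linalg.solve(Ts, e1)            # Theta_{s,s}^{-1} e_1
        L[s, i] = col / np.sqrt(col[0])          # normalize the column
    return L

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
Theta = A @ A.T + n * np.eye(n)                  # SPD "covariance" matrix
full_pattern = [list(range(i, n)) for i in range(n)]
L = kl_optimal_L(Theta, full_pattern)
# with the full lower-triangular pattern, L @ L.T recovers Theta^{-1} exactly;
# restricting each s_i to a sparse neighborhood gives the fast approximation
```

Each column is computed independently from a small subblock of Θ, which is why the method is embarrassingly parallel.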
<p>In the second part of this thesis, comprising chapters seven through nine, we study continuous optimization through the lens of competitive games. In particular, we consider <em>competitive optimization</em>, where multiple agents attempt to minimize conflicting objectives. In the single-agent case, the updates of gradient descent are minimizers of quadratically regularized linearizations of the loss function. We propose to generalize this idea by using the Nash equilibria of quadratically regularized linearizations of the competitive game as updates (<em>linearize the game</em>). We provide fundamental reasons why the natural notion of linearization for competitive optimization problems is given by the <em>multilinear</em> (as opposed to linear) approximation of the agents' loss functions. The resulting algorithm, which we call <em>competitive gradient descent</em>, thus provides a natural generalization of gradient descent to competitive optimization. By using ideas from information geometry, we extend CGD to competitive mirror descent (CMD) that can be applied to a vast range of constrained competitive optimization problems. CGD and CMD resolve the cycling problem of simultaneous gradient descent and show promising results on problems arising in constrained optimization, robust control theory, and generative adversarial networks. Finally, we point out the <em>GAN-dilemma</em> that refutes the common interpretation of GANs as approximate minimizers of a divergence obtained in the limit of a fully trained discriminator. Instead, we argue that GAN performance relies on the <em>implicit competitive regularization</em> (ICR) due to the simultaneous optimization of generator and discriminator and support this hypothesis with results on low-dimensional model problems and GANs on CIFAR10.</p>https://thesis.library.caltech.edu/id/eprint/14261Singularity Formation in Incompressible Fluids and Related Models
https://resolver.caltech.edu/CaltechTHESIS:05172022-223804694
Authors: {'items': [{'email': 'cjiajie26@126.com', 'id': 'Chen-Jiajie', 'name': {'family': 'Chen', 'given': 'Jiajie'}, 'orcid': '0000-0002-0194-1975', 'show_email': 'NO'}]}
Year: 2022
DOI: 10.7907/nqff-dh92
<p>Whether the three-dimensional (3D) incompressible Euler equations can develop a finite-time singularity from smooth initial data with finite energy is a major open problem in partial differential equations. A few years ago, Tom Hou and Guo Luo obtained strong numerical evidence of a potential finite time singularity of the 3D axisymmetric Euler equations with boundary from smooth initial data. So far, there is no rigorous justification. In this thesis, we develop a framework to study the Hou-Luo blowup scenario and singularity formation in related equations and models. In addition, we analyze the obstacle to singularity formation.</p>
<p>In the first part, we propose a novel framework of analysis based on the dynamic rescaling formulation to study singularity formation. Our strategy is to reformulate the problem of proving finite time blowup into the problem of establishing the nonlinear stability of an approximate self-similar blowup profile using the dynamic rescaling equations. Then we prove finite time blowup of the 2D Boussinesq and the 3D Euler equations with C<sup>1,α</sup> velocity and boundary. This result provides the first rigorous justification of the Hou-Luo scenario using C<sup>1,α</sup> velocity.</p>
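In schematic form — for a 1D model in which the velocity is recovered from the vorticity through the Hilbert transform, and with the caveat that the exact form and normalization conditions vary by equation — the dynamic rescaling formulation looks like:

```latex
\omega(x,t) \;=\; C_\omega(\tau)\,\Omega\big(x/C_l(\tau),\,\tau\big),
\qquad
\Omega_\tau + \big(c_l(\tau)\,x + u\big)\,\Omega_x
  \;=\; \big(c_\omega(\tau) + u_x\big)\,\Omega,
\qquad u_x = H\Omega ,
```

where the scalars c_l(τ), c_ω(τ) are fixed by normalization conditions. A steady state (Ω̄, c̄_l, c̄_ω) of the rescaled equations with c̄_ω < 0 corresponds, after undoing the rescaling, to a finite-time self-similar singularity of the original equation; proving nonlinear stability of an approximate steady state is then the strategy described above.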
<p>In the second part, we further develop the framework for smooth data. The method in the first part relies crucially on the low regularity of the data, and there are several essential difficulties in generalizing it to study the Hou-Luo scenario with smooth data. We demonstrate that some of the challenges can be overcome by proving the asymptotically self-similar blowup of the Hou-Luo model. Applying this framework, we establish the finite time blowup of the De Gregorio (DG) model on the real line (ℝ) with smooth data. Our result resolves the long-standing open problem on the regularity of this model on ℝ.</p>
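For orientation, the DG model can be written ω_t + uω_x = u_xω with u_x = Hω (H the Hilbert transform). A minimal pseudo-spectral sketch on the periodic domain is given below; the Fourier convention H e^{ikx} = -i sgn(k) e^{ikx} and the use of sin x (a known equilibrium of the model on S¹) as a check are standard facts about this model, not details taken from the abstract:

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N          # periodic grid on [0, 2*pi)
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers

def hilbert(w):
    # periodic Hilbert transform: Fourier symbol -i * sgn(k)
    return np.real(np.fft.ifft(-1j * np.sign(k) * np.fft.fft(w)))

def dg_rhs(w):
    # De Gregorio model: w_t = -u w_x + u_x w, with u_x = H w, mean-zero u
    ux_hat = -1j * np.sign(k) * np.fft.fft(w)
    u_hat = np.zeros_like(ux_hat)
    nz = k != 0
    u_hat[nz] = ux_hat[nz] / (1j * k[nz])   # integrate u_x in Fourier space
    u = np.real(np.fft.ifft(u_hat))
    ux = np.real(np.fft.ifft(ux_hat))
    wx = np.real(np.fft.ifft(1j * k * np.fft.fft(w)))
    return -u * wx + ux * w

# sanity checks: H(sin) = -cos, and sin x is a steady state of the model
resid = dg_rhs(np.sin(x))
```

Time-stepping `dg_rhs` with, e.g., RK4 gives a quick way to experiment with the advection/stretching balance the third part of the thesis analyzes.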
<p>In the third part, we investigate the competition between advection and vortex stretching, an essential difficulty in studying the regularity of the 3D Euler equations. This competition can be modeled by the DG model on S<sup>1</sup>. We consider odd initial data with a specific sign property and show that the regularity of the initial data in this class determines the competition between advection and vortex stretching. For any 0 < α < 1, we construct a finite time blowup solution from some C<sup>α</sup> initial data. On the other hand, we prove that the solution exists globally for C<sup>1</sup> initial data. Our results resolve a conjecture on the finite time blowup of this model and imply that singularities developed in the DG model and the generalized Constantin-Lax-Majda model on S<sup>1</sup> can be prevented by stronger advection.</p>https://thesis.library.caltech.edu/id/eprint/14584Machine Learning and Scientific Computing
https://resolver.caltech.edu/CaltechTHESIS:05252022-180406320
Authors: {'items': [{'email': 'kovachki93@gmail.com', 'id': 'Kovachki-Nikola-Borislavov', 'name': {'family': 'Kovachki', 'given': 'Nikola Borislavov'}, 'orcid': '0000-0002-3650-2972', 'show_email': 'NO'}]}
Year: 2022
DOI: 10.7907/8nc5-cc67
<p>The remarkable success of machine learning methods for tackling problems in computer vision and natural language processing has made them promising tools for applications to scientific computing tasks. The present work both advances machine learning techniques, using ideas from numerical analysis, inverse problems, and data assimilation, and introduces new machine-learning-based tools for accurate and computationally efficient scientific computing. Chapters 2 and 3 introduce new methods and analyze existing methods for the optimization of deep neural networks. Chapters 4 and 5 formulate approximation architectures acting between infinite-dimensional function spaces for applications to parametric PDE problems. Chapter 6 demonstrates how to reformulate GANs so that they can condition on continuous data and exhibits applications to Bayesian inverse problems. In Chapter 7, we present a novel regression-clustering method and apply it to the problem of predicting molecular activation energies.</p>https://thesis.library.caltech.edu/id/eprint/14621The "Interpolated Factored Green Function" Method
https://resolver.caltech.edu/CaltechTHESIS:07072022-003500251
Authors: {'items': [{'email': 'christoph.bauinger@gmail.com', 'id': 'Bauinger-Christoph', 'name': {'family': 'Bauinger', 'given': 'Christoph'}, 'show_email': 'NO'}]}
Year: 2023
DOI: 10.7907/1cnc-s558
<p>This thesis presents a novel <i>Interpolated Factored Green Function</i> (IFGF) method for the accelerated evaluation of the integral operators in scattering theory and other areas. Like existing acceleration methods in these fields, the IFGF algorithm evaluates the action of Green function-based integral operators at a cost of <i>O</i>(<i>N</i> log <i>N</i>) operations for an <i>N</i>-point surface mesh. The IFGF strategy capitalizes on slow variations inherent in a certain Green function <i>analytic factor</i>, which is analytic up to and including infinity, and which therefore allows for accelerated evaluation of fields produced by groups of sources on the basis of a recursive application of classical interpolation methods. Unlike other approaches, the IFGF method does not utilize the Fast Fourier Transform (FFT), and it is thus better suited than other methods for efficient parallelization in distributed-memory computer systems. In fact, a (hybrid MPI-OpenMP) parallel implementation of the IFGF algorithm is proposed in this thesis which results in highly efficient data communication, and which exhibits in practice excellent parallel scaling up to large numbers of cores -- without any hard limitations on the number of cores concurrently employed with high efficiency. Moreover, on any given number of cores, the proposed parallel approach preserves the linearithmic (<i>O</i>(<i>N</i> log <i>N</i>)) computing cost inherent in the sequential version of the IFGF algorithm. This thesis additionally introduces a complete acoustic scattering solver that incorporates the IFGF method in conjunction with a suitable singular integration scheme. A variety of numerical results presented in this thesis illustrate the character of the proposed parallel IFGF-accelerated acoustic solver. 
These results include applications to several highly relevant engineering problems, e.g., problems concerning acoustic scattering by structures such as a submarine and an aircraft-nacelle geometry, thus establishing the suitability of the IFGF method in the context of real-world engineering problems. The theoretical properties of the IFGF method, finally, are demonstrated by means of a variety of numerical experiments which display the method's serial and parallel linearithmic scaling as well as its excellent weak and strong parallel scaling -- for problems of up to 4,096 wavelengths in acoustic size, and scaling tests spanning from 1 compute core to all 1,680 cores available in the High Performance Computing cluster used.</p>https://thesis.library.caltech.edu/id/eprint/14968Machine Learning and Data Assimilation for Blending Incomplete Models and Noisy Data
https://resolver.caltech.edu/CaltechTHESIS:06012023-213052258
Authors: {'items': [{'email': 'mattlevine22@gmail.com', 'id': 'Levine-Matthew-Emanuel', 'name': {'family': 'Levine', 'given': 'Matthew Emanuel'}, 'orcid': '0000-0002-5627-3169', 'show_email': 'NO'}]}
Year: 2023
DOI: 10.7907/b82h-ye78
<p>The prediction and inference of dynamical systems is of widespread interest across scientific and engineering disciplines. Data assimilation (DA) offers a well-established and successful paradigm for blending such models with noisy observational data. However, traditional DA-based inference often fails when available data are insufficiently informative. Chapter 2 copes with this challenge by introducing constraints into Ensemble Kalman Filtering, which is shown to improve forecasting of glucose dynamics in real patient-level clinical data. Chapter 3 addresses this identifiability challenge by instead developing a simplified, reduced-order stochastic model for glucose dynamics that is more easily identified from patient data. Despite these successes, the forecasting performance of the methods is fundamentally limited by the fidelity of the employed model, which is often not fully understood a priori.</p>
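For readers unfamiliar with the building block being constrained, a generic (unconstrained) ensemble Kalman analysis step is sketched below; the constrained variant of Chapter 2 additionally restricts the updated states, which is not shown here, and all dimensions and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ens = 2, 500
H = np.array([[1.0, 0.0]])                 # observe the first state component
R = np.array([[0.01]])                     # observation-noise covariance
truth = np.array([1.0, 0.5])               # hypothetical true state

X = rng.standard_normal((n_state, n_ens))  # prior ensemble ~ N(0, I)
y = H @ truth + rng.normal(0.0, 0.1)       # one noisy observation of the truth

# sample covariance of the ensemble and the ensemble Kalman gain
Xc = X - X.mean(axis=1, keepdims=True)
C = Xc @ Xc.T / (n_ens - 1)
K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)

# perturbed-observation analysis update of every ensemble member
Y = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), (1, n_ens))
Xa = X + K @ (Y - H @ X)
```

The analysis mean of the observed component moves from the prior mean toward the observation, weighted by the gain K; a constrained filter would, e.g., project `Xa` back onto a physically admissible set after this step.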
<p>Chapter 4 presents a general picture of how noisy, partially-observed time-series data can be used to learn flexible (e.g., neural network-based) corrections to a pre-specified mechanistic model. In Chapter 5, the proposed methodology is then validated in simulated settings for glucose-insulin models. Chapter 6 provides further perspective on learning flexible model corrections, comparing approaches that use i) gradient-based or gradient-free optimization, ii) temporal or time-averaged data, iii) different model parameterizations, iv) deterministic and stochastic corrections, and v) physical conservation laws to constrain inference.</p>
<p>Chapter 7 studies how these perspectives on machine learning and dynamical systems can help us understand the roles of biochemical networks. In particular, it considers protein dimerization networks from the lens of approximation theory and evaluates how the equilibria of these networks can be fine-tuned to perform a variety of biological computations.</p>https://thesis.library.caltech.edu/id/eprint/15264Low-Rank Matrix Recovery: Manifold Geometry and Global Convergence
https://resolver.caltech.edu/CaltechTHESIS:05302023-222447373
Authors: {'items': [{'email': 'zyzhang0907@gmail.com', 'id': 'Zhang-Ziyun', 'name': {'family': 'Zhang', 'given': 'Ziyun'}, 'orcid': '0000-0002-5794-2387', 'show_email': 'YES'}]}
Year: 2023
DOI: 10.7907/hd6q-g460
<p>Low-rank matrix recovery problems are prevalent in modern data science, machine learning, and artificial intelligence, and the low-rank property of matrices is widely exploited to extract the hidden low-complexity structure in massive datasets. Compared with Burer-Monteiro factorization in the Euclidean space, using the low-rank matrix manifold has its unique advantages, as it eliminates duplicated spurious points and reduces the polynomial order of the objective function. Yet a few fundamental questions have remained unanswered until recently. We highlight two problems here in particular, which are the global geometry of the manifold and the global convergence guarantee.</p>
<p>As for the global geometry, we point out that there exist some spurious critical points on the boundary of the low-rank matrix manifold Mᵣ, which have rank smaller than r but can serve as limit points of iterative sequences in the manifold Mᵣ. For the least squares loss function, the spurious critical points are rank-deficient matrices that capture part of the eigenspaces of the ground truth. Unlike classical strict saddle points, their Riemannian gradient is singular and their Riemannian Hessian is unbounded.</p>
<p>We show that randomly initialized Riemannian gradient descent almost surely escapes some of the spurious critical points. To prove this result, we first establish the asymptotic escape of classical strict saddle sets consisting of non-isolated strict critical submanifolds on Riemannian manifolds. We then use a dynamical low-rank approximation to parameterize the manifold Mᵣ and map the spurious critical points to strict critical submanifolds in the classical sense in the parameterized domain, which leads to the desired result. Our result is the first to partially overcome the nonclosedness of the low-rank matrix manifold without altering the vanilla gradient descent algorithm. Numerical experiments are provided to support our theoretical findings.</p>
<p>As for the global convergence guarantee, we point out that earlier approaches to many of the low-rank recovery problems only imply a geometric convergence rate toward a second-order stationary point. This is in contrast to the numerical evidence, which suggests a nearly linear convergence rate starting from a global random initialization. To establish the nearly linear convergence guarantee, we propose a unified framework for a class of low-rank matrix recovery problems including matrix sensing, matrix completion, and phase retrieval. All of them can be considered as random sensing problems of low-rank matrices with a linear measurement operator from some random ensembles. These problems share similar population loss functions that are either least squares or its variant.</p>
<p>We show that under some assumptions, for the population loss function, Riemannian gradient descent starting from a random initialization converges with high probability to the ground truth at a nearly linear convergence rate, i.e., it takes O(log 1/ϵ + log n) iterations to reach an ϵ-accurate solution. The key to establishing this nearly optimal convergence guarantee is closely intertwined with the analysis of the spurious critical points S_# on Mᵣ. Outside the local neighborhoods of the spurious critical points, we use the Łojasiewicz inequality, a fundamental convergence tool, to derive a linear convergence rate. In the spurious regions near the spurious critical points, the Riemannian gradient becomes degenerate and the Łojasiewicz inequality can fail. By tracking the dynamics of the trajectory in three stages, we show that with high probability, Riemannian gradient descent escapes the spurious regions in a small number of steps.</p>
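As a toy illustration of gradient descent on the rank-r manifold for a population-type loss, the sketch below uses a truncated-SVD retraction; the step size, the symmetric toy loss ½‖X − M‖²_F, and the random initialization are illustrative assumptions, and the thesis's sensing operators and analysis are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 30, 3
B = rng.standard_normal((n, r))
M = B @ B.T / n                            # rank-r ground truth matrix

def retract(Y, r):
    # map a matrix back to the rank-r manifold via truncated SVD
    u, s, vt = np.linalg.svd(Y, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r, :]

X = retract(0.1 * rng.standard_normal((n, n)), r)  # random rank-r init
eta = 0.7
for _ in range(60):
    G = X - M                              # gradient of 0.5 * ||X - M||_F^2
    X = retract(X - eta * G, r)            # gradient step + retraction

rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

In this benign setting the iteration contracts toward M at a linear rate; the interesting regime analyzed in the thesis is precisely when the trajectory passes near rank-deficient spurious critical points, where the retraction and gradient degenerate.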
<p>After addressing the two problems of global geometry and global convergence guarantee, we use two applications to demonstrate the broad applicability of our analytical tools. The first is the robust principal component analysis problem on the manifold Mᵣ with the Riemannian subgradient method. The second application is the convergence rate analysis of the Sobolev gradient descent method for the nonlinear Gross-Pitaevskii eigenvalue problem on the infinite dimensional sphere manifold. These two examples demonstrate that the analysis of manifold first-order algorithms can be extended beyond the previous framework, to nonsmooth functions and subgradient methods, and to infinite dimensional Hilbert manifolds. This exemplifies that the insights gained and tools developed for the low-rank matrix manifold Mᵣ can be extended to broader scientific and technological fields.</p>https://thesis.library.caltech.edu/id/eprint/15236On Multiscale and Statistical Numerical Methods for PDEs and Inverse Problems
https://resolver.caltech.edu/CaltechTHESIS:05292023-175108484
Authors: {'items': [{'email': 'yifanc96@gmail.com', 'id': 'Chen-Yifan', 'name': {'family': 'Chen', 'given': 'Yifan'}, 'orcid': '0000-0001-5494-4435', 'show_email': 'YES'}]}
Year: 2023
DOI: 10.7907/83p4-c644
<p> This thesis focuses on numerical methods for scientific computing and scientific machine learning, specifically on solving partial differential equations and inverse problems. The design of numerical algorithms usually encompasses a spectrum that ranges from specialization to generality. Classical approaches, such as finite element methods, and contemporary scientific machine learning approaches, like neural nets, can be viewed as lying at relatively opposite ends of this spectrum. Throughout this thesis, we tackle mathematical challenges associated with both ends by advancing rigorous multiscale and statistical numerical methods. </p>
<p>Regarding the multiscale numerical methods, we present an exponentially convergent multiscale finite element method for solving the high-frequency Helmholtz equation with rough coefficients. To achieve this, we first identify the local low-complexity structure of the Helmholtz equation when the resolution is smaller than the wavelength. Then, we construct local basis functions by solving local spectral problems and couple them globally through non-overlapping domain decomposition and the Galerkin method. This results in a numerical method that achieves nearly exponential accuracy with respect to the number of local basis functions, even when the solution is highly non-smooth. We also analyze the role of a subsampled lengthscale in variational multiscale methods, characterizing the tradeoff between accuracy and efficiency in the numerical upscaling of heterogeneous PDEs and scattered data approximation.</p>
<p>As for the statistical numerical methods, we discuss using Gaussian processes and kernel methods to solve nonlinear PDEs and inverse problems. This framework incorporates the flavor of scientific machine learning automation and extends classical meshless solvers. It transforms general PDE problems into quadratic optimization with nonlinear constraints. We present the theoretical underpinning of the methodology. For the scalability of the method, we develop state-of-the-art algorithms to handle dense kernel matrices in both low and high-dimensional scientific problems. For adaptivity, we analyze the convergence and consistency of hierarchical learning algorithms that adaptively select kernel functions. Additionally, we note that statistical numerical methods offer natural uncertainty quantification within the Bayesian framework. In this regard, our further work contributes to some new understanding of efficient statistical sampling techniques based on gradient flows.</p>https://thesis.library.caltech.edu/id/eprint/15224Singularity Formation in the High-Dimensional Euler Equations and Sampling of High-Dimensional Distributions by Deep Generative Networks
https://resolver.caltech.edu/CaltechTHESIS:09202022-034157716
Authors: {'items': [{'email': 'zhangsm1995@gmail.com', 'id': 'Zhang-Shumao', 'name': {'family': 'Zhang', 'given': 'Shumao'}, 'orcid': '0000-0003-3071-3362', 'show_email': 'NO'}]}
Year: 2023
DOI: 10.7907/8had-3a90
<p>High dimensionality brings both opportunities and challenges to the study of applied mathematics. This thesis consists of two parts. The first part explores the singularity formation of the axisymmetric incompressible Euler equations with no swirl in ℝⁿ, which is closely related to the Millennium Prize Problem on the global singularity of the Navier-Stokes equations. In this part, the high dimensionality contributes to the singularity formation in finite time by enhancing the strength of the vortex stretching term. The second part focuses on sampling from a high-dimensional distribution using deep generative networks, which has wide applications in the Bayesian inverse problem and the image synthesis task. The high dimensionality in this part becomes a significant challenge to the numerical algorithms, known as the curse of dimensionality.</p>
<p>In the first part of this thesis, we consider the singularity formation in two scenarios. In the first scenario, for the axisymmetric Euler equations with no swirl, we consider the case when the initial condition for the angular vorticity is C<sup>α</sup> Hölder continuous. We provide convincing numerical examples where the solutions develop potential self-similar blow-up in finite time when the Hölder exponent α < α*, and this upper bound α* can asymptotically approach 1 - 2/n. This result supports a conjecture from Drivas and Elgindi [37], and generalizes it to the high-dimensional case. This potential blow-up is insensitive to the perturbation of initial data. Based on assumptions summarized from numerical experiments, we study a limiting case of the Euler equations, and obtain α* = 1 - 2/n which agrees with the numerical result. For the general case, we propose a relatively simple one-dimensional model and numerically verify its approximation to the Euler equations. This one-dimensional model might suggest a possible way to show this finite-time blow-up scenario analytically. Compared to the first proved blow-up result of the 3D axisymmetric Euler equations with no swirl and Hölder continuous initial data by Elgindi in [40], our potential blow-up scenario has completely different scaling behavior and regularity of the initial condition. In the second scenario, we consider using smooth initial data, but modify the Euler equations by adding a factor ε as the coefficient of the convection terms to weaken the convection effect. The new model is called the weak convection model. We provide convincing numerical examples of the weak convection model where the solutions develop potential self-similar blow-up in finite time when the convection strength ε < ε*, and this upper bound ε* should be close to 1 - 2/n. This result is closely related to the infinite-dimensional case of an open question [37] stated by Drivas and Elgindi. 
Our numerical observations also inspire us to approximate the weak convection model with a one-dimensional model. We give a rigorous proof that the one-dimensional model will develop finite-time blow-up if ε < 1 - 2/n, and study the approximation quality of the one-dimensional model to the weak convection model numerically, which could be beneficial to a rigorous proof of the potential finite-time blow-up.</p>
<p>In the second part of the thesis, we propose the Multiscale Invertible Generative Network (MsIGN) to sample from high-dimensional distributions by exploring the low-dimensional structure in the target distribution. The MsIGN models a transport map from a known reference distribution to the target distribution, and thus is very efficient in generating uncorrelated samples compared to MCMC-type methods. The MsIGN captures multiple modes in the target distribution by generating new samples hierarchically from a coarse scale to a fine scale with the help of a novel prior conditioning layer. The hierarchical structure of the MsIGN also allows training in a coarse-to-fine scale manner. The Jeffreys divergence is used as the objective function in training to avoid mode collapse. Importance sampling based on the prior conditioning layer is leveraged to estimate the Jeffreys divergence, which is intractable in previous deep generative networks. Numerically, when applied to two Bayesian inverse problems, the MsIGN clearly captures multiple modes in the high-dimensional posterior and approximates the posterior accurately, demonstrating its superior performance compared with previous methods. We also provide an ablation study to show the necessity of our proposed network architecture and training algorithm for the good numerical performance. Moreover, we also apply the MsIGN to the image synthesis task, where it achieves superior performance in terms of bits-per-dimension value over other flow-based generative models and yields very good interpretability of its neurons in intermediate layers.</p>https://thesis.library.caltech.edu/id/eprint/15033General Domain FC-Based Shock Dynamics Solver
https://resolver.caltech.edu/CaltechTHESIS:03152024-221312028
Authors: {'items': [{'email': 'dleibovi@gmail.com', 'id': 'Leibovici-Daniel-Victor', 'name': {'family': 'Leibovici', 'given': 'Daniel Victor'}, 'orcid': '0009-0007-8267-4250', 'show_email': 'YES'}]}
Year: 2024
DOI: 10.7907/bd5r-4q30
This thesis presents a novel FC-SDNN (Fourier Continuation Shock-detecting Neural Network) spectral scheme for the numerical solution of nonlinear conservation laws in general domains and under arbitrary boundary conditions, without the limiting CFL constraints inherent in other spectral schemes for general domains. The approach relies on the use of the Fourier Continuation (FC) method for spectral representation of non-periodic functions in conjunction with smooth artificial viscosity assignments localized in regions detected by means of a Shock-Detecting Neural Network (SDNN). Like previous shock capturing schemes and artificial viscosity techniques, the combined FC-SDNN strategy effectively controls spurious oscillations in the proximity of discontinuities. Thanks to its use of a localized but smooth artificial viscosity term, whose support is restricted to a vicinity of flow-discontinuity points, the algorithm enjoys spectral accuracy and low dissipation away from flow discontinuities, and, in such regions, it produces smooth numerical solutions—as evidenced by an essential absence of spurious oscillations in contour levels. The FC-SDNN viscosity assignment, which does not require use of problem-dependent algorithmic parameters, induces a significantly lower overall dissipation than other methods, including the Fourier-spectral versions of the previous entropy viscosity method, especially in the vicinity of contact discontinuities. The approach, which does not require the use of otherwise ubiquitous positivity-preserving limiters, enjoys great geometric flexibility on the basis of an overlapping-patch discretization. This allows its application to the simulation of supersonic and hypersonic flows and shocks, including Euler simulations at significantly higher speeds than previously achieved, e.g., Mach 25 re-entry flow speeds, impinging upon complex physical obstacles. 
This multi-domain approach is suitable for efficient parallelization on large computer clusters, and the MPI implementation proposed in this thesis enjoys high parallel scalability and in particular perfect weak scaling, as demonstrated by simulations on general complex domains. The character of the proposed algorithm is demonstrated through a variety of numerical tests for the linear advection, Burgers and Euler equations in one and two-dimensional non-periodic spatial domains, with results in accordance with physical theory and prior experimental and computational results up to and including both supersonic and hypersonic regimes.https://thesis.library.caltech.edu/id/eprint/16329