Book Section records
https://feeds.library.caltech.edu/people/Tropp-J-A/book_section.rss
A Caltech Library Repository Feed
Format: RSS 2.0 (http://www.rssboard.org/rss-specification); generator: python-feedgen; language: en
Last built: Tue, 16 Apr 2024 14:24:35 +0000

Improved sparse approximation over quasi-incoherent dictionaries
https://resolver.caltech.edu/CaltechAUTHORS:TROicip03
Authors: J. A. Tropp (ORCID: 0000-0003-1024-1791); A. C. Gilbert; S. Muthukrishnan; M. J. Strauss
Year: 2003
This paper discusses a new greedy algorithm for solving the sparse approximation problem over quasi-incoherent dictionaries. These dictionaries consist of waveforms that are uncorrelated "on average," and they provide a natural generalization of incoherent dictionaries. The algorithm provides strong guarantees on the quality of the approximations it produces, unlike most other methods for sparse approximation. Moreover, very efficient implementations are possible via approximate nearest-neighbor data structures.
https://authors.library.caltech.edu/records/r2p14-c5f08

Optimal CDMA signature sequences, inverse eigenvalue problems and alternating minimization
https://resolver.caltech.edu/CaltechAUTHORS:TROisit03
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791); Robert W. Heath, Jr.; Thomas Strohmer
Year: 2003
DOI: 10.1109/ISIT.2003.1228424
Optimal signature sequences for the synchronous code-division multiple-access (S-CDMA) channel with white noise and uniform received powers are known to be Welch-bound-equality sequences, also called unit-norm tight frames, which maximize the sum capacity of the channel. This paper casts the construction of such sequences as an inverse eigenvalue problem and, drawing on matrix-theoretic ideas, solves it by alternately minimizing the total squared correlation.
https://authors.library.caltech.edu/records/jck9d-nk170

CDMA signature sequences with low peak-to-average-power ratio via alternating projection
https://resolver.caltech.edu/CaltechAUTHORS:TROasilo03
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791); Inderjit S. Dhillon; Robert W. Heath, Jr.; Thomas Strohmer
Year: 2004
DOI: 10.1109/ACSSC.2003.1291956
Several algorithms have been proposed to construct optimal signature sequences that maximize the sum capacity of the uplink in a direct-spread synchronous code division multiple access (CDMA) system. These algorithms produce signatures with real-valued or complex-valued entries that generally have a large peak-to-average power ratio (PAR). This paper presents an alternating projection algorithm that can design optimal signature sequences that satisfy PAR side constraints. This algorithm converges to a fixed point, and these fixed points are partially characterized.
https://authors.library.caltech.edu/records/4g88j-99p44

Optimal CDMA signatures: a finite-step approach
https://resolver.caltech.edu/CaltechAUTHORS:TROissta04
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791); Inderjit S. Dhillon; Robert W. Heath, Jr.
Year: 2005
A description of optimal sequences for direct-sequence code division multiple access is a byproduct of recent characterizations of the sum capacity. The paper restates the sequence design problem as an inverse singular value problem and shows that it can be solved with finite-step algorithms from matrix analysis. Relevant algorithms are reviewed, and a new one-sided construction is proposed that obtains the sequences directly instead of computing the Gram matrix of the optimal signatures.
https://authors.library.caltech.edu/records/g9q7e-wkc24

Construction of equiangular signatures for synchronous CDMA systems
https://resolver.caltech.edu/CaltechAUTHORS:HEAissta04
Authors: Robert W. Heath, Jr.; Joel A. Tropp (ORCID: 0000-0003-1024-1791); Inderjit S. Dhillon; Thomas Strohmer
Year: 2005
Welch bound equality (WBE) signature sequences maximize the uplink sum capacity in direct-spread synchronous code division multiple access (CDMA) systems. WBE sequences have a nice interference invariance property that typically holds only when the system is fully loaded, and, to maintain this property, the signature set must be redesigned and reassigned as the number of active users changes. An additional equiangular constraint on the signature set, however, maintains interference invariance. Finding such signatures requires equiangular side constraints to be imposed on an inverse eigenvalue problem. The paper presents an alternating projection algorithm that can design WBE sequences that satisfy equiangular side constraints. The proposed algorithm can be used to find Grassmannian frames as well as equiangular tight frames. Though one projection is onto a closed, but non-convex, set, it is shown that this algorithm converges to a fixed point, and these fixed points are partially characterized.
https://authors.library.caltech.edu/records/cr2rp-9ma37

Simultaneous sparse approximation via greedy pursuit
https://resolver.caltech.edu/CaltechAUTHORS:TROicassp05
Authors: J. A. Tropp (ORCID: 0000-0003-1024-1791); A. C. Gilbert; M. J. Strauss
Year: 2005
DOI: 10.1109/ICASSP.2005.1416405
A simple sparse approximation problem requests an approximation of a given input signal as a linear combination of T elementary signals drawn from a large, linearly dependent collection. An important generalization is simultaneous sparse approximation. Now one must approximate several input signals at once using different linear combinations of the same T elementary signals. This formulation appears, for example, when analyzing multiple observations of a sparse signal that have been contaminated with noise. A new approach to this problem is presented here: a greedy pursuit algorithm called simultaneous orthogonal matching pursuit. The paper proves that the algorithm calculates simultaneous approximations whose error is within a constant factor of the optimal simultaneous approximation error. This result requires that the collection of elementary signals be weakly correlated, a property that is also known as incoherence. Numerical experiments demonstrate that the algorithm often succeeds, even when the inputs do not meet the hypotheses of the proof.
https://authors.library.caltech.edu/records/22hc7-s3g15

Applications of sparse approximation in communications
https://resolver.caltech.edu/CaltechAUTHORS:GILisit05
Authors: A. C. Gilbert; J. A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2005
DOI: 10.1109/ISIT.2005.1523488
Sparse approximation problems abound in many scientific, mathematical, and engineering applications. These problems are defined by two competing notions: we approximate a signal vector as a linear combination of elementary atoms and we require that the approximation be both as accurate and as concise as possible. We introduce two natural and direct applications of these problems and algorithmic solutions in communications. We do so by constructing enhanced codebooks from base codebooks. We show that we can decode these enhanced codebooks in the presence of Gaussian noise. For MIMO wireless communication channels, we construct simultaneous sparse approximation problems and demonstrate that our algorithms can both decode the transmitted signals and estimate the channel parameters.
https://authors.library.caltech.edu/records/yhdsp-53481

Sparse Approximation Via Iterative Thresholding
https://resolver.caltech.edu/CaltechAUTHORS:HERicassp06.876
Authors: Kyle K. Herrity; Anna C. Gilbert; Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2006
DOI: 10.1109/ICASSP.2006.1660731
The well-known shrinkage technique is still relevant for contemporary signal processing problems over redundant dictionaries. We present theoretical and empirical analyses for two iterative algorithms for sparse approximation that use shrinkage. The General IT algorithm amounts to a Landweber iteration with nonlinear shrinkage at each iteration step. The Block IT algorithm arises in morphological components analysis. A sufficient condition under which General IT exactly recovers a sparse signal is presented, in which the cumulative coherence function naturally arises. This analysis extends previous results concerning the Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP) algorithms to IT algorithms.
https://authors.library.caltech.edu/records/5gpyq-hz670

Row-Action Methods for Compressed Sensing
https://resolver.caltech.edu/CaltechAUTHORS:SRAicassp06.963
Authors: Suvrit Sra; Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2006
DOI: 10.1109/ICASSP.2006.1660792
Compressed Sensing uses a small number of random, linear measurements to acquire a sparse signal. Nonlinear algorithms, such as l1-minimization, are used to reconstruct the signal from the measured data. This paper proposes row-action methods as a computational approach to solving the l1-optimization problem. It presents a specific row-action method and provides extensive empirical evidence that it is an effective technique for signal reconstruction. This approach offers several advantages over interior-point methods, including minimal storage and computational requirements, scalability, and robustness.
https://authors.library.caltech.edu/records/900p1-ymh12

Random Filters for Compressive Sampling and Reconstruction
https://resolver.caltech.edu/CaltechAUTHORS:TROicassp06.977
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791); Michael B. Wakin; Marco F. Duarte; Dror Baron; Richard G. Baraniuk (ORCID: 0000-0002-0721-8999)
Year: 2006
DOI: 10.1109/ICASSP.2006.1660793
We propose and study a new technique for efficiently acquiring and reconstructing signals based on convolution with a fixed FIR filter having random taps. The method is designed for sparse and compressible signals, i.e., ones that are well approximated by a short linear combination of vectors from an orthonormal basis. Signal reconstruction involves a non-linear Orthogonal Matching Pursuit algorithm that we implement efficiently by exploiting the nonadaptive, time-invariant structure of the measurement process. While simpler and more efficient than other random acquisition techniques like Compressed Sensing, random filtering is sufficiently generic to summarize many types of compressible signals and generalizes to streaming and continuous-time signals. Extensive numerical experiments demonstrate its efficacy for acquiring and reconstructing signals sparse in the time, frequency, and wavelet domains, as well as piecewise smooth signals and Poisson processes.
https://authors.library.caltech.edu/records/hp8sm-g0q87

Random Filters for Compressive Sampling
https://resolver.caltech.edu/CaltechAUTHORS:TROciss06.975
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2007
DOI: 10.1109/CISS.2006.286465
This paper discusses random filtering, a recently proposed method for directly acquiring a compressed version of a digital signal. The technique is based on convolution of the signal with a fixed FIR filter having random taps, followed by downsampling. Experiments show that random filtering is effective at acquiring sparse and compressible signals. This process has the potential for implementation in analog hardware, and so it may have a role to play in new types of analog/digital converters.
https://authors.library.caltech.edu/records/sf41z-mfp58

Greedy Signal Recovery Review
https://resolver.caltech.edu/CaltechAUTHORS:20180831-112109709
Authors: Deanna Needell (ORCID: 0000-0002-8058-8638); Joel Tropp (ORCID: 0000-0003-1024-1791); Roman Vershynin
Year: 2008
DOI: 10.1109/ACSSC.2008.5074572
The two major approaches to sparse recovery are L_1-minimization and greedy methods. Recently, Needell and Vershynin developed regularized orthogonal matching pursuit (ROMP), which bridges the gap between these two approaches. ROMP is the first stable greedy algorithm to provide uniform guarantees.
Even more recently, Needell and Tropp developed the stable greedy algorithm compressive sampling matching pursuit (CoSaMP). CoSaMP provides uniform guarantees and improves upon the stability bounds and RIC requirements of ROMP. CoSaMP offers rigorous bounds on computational cost and storage. In many cases, the running time is just O(N log N), where N is the ambient dimension of the signal. This review summarizes these major advances.
https://authors.library.caltech.edu/records/j70zc-y3276

Efficient Sampling of Sparse Wideband Analog Signals
https://resolver.caltech.edu/CaltechAUTHORS:20180831-112113124
Authors: Moshe Mishali; Yonina C. Eldar; Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2008
DOI: 10.1109/EEEI.2008.4736707
Periodic nonuniform sampling is a known method to sample spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range. This assumption becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and lowpass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis.
https://authors.library.caltech.edu/records/mrem4-vrd44

Column Subset Selection, Matrix Factorization, and Eigenvalue Optimization
https://resolver.caltech.edu/CaltechAUTHORS:20100921-101535590
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2009
DOI: 10.48550/arXiv.0806.4404
Given a fixed matrix, the problem of column subset selection requests a column submatrix that has favorable spectral properties. Most research from the algorithms and numerical linear algebra communities focuses on a variant called rank-revealing QR, which seeks a well-conditioned collection of columns that spans the (numerical) range of the matrix. The functional analysis literature contains another strand of work on column selection whose algorithmic implications have not been explored. In particular, a celebrated result of Bourgain and Tzafriri demonstrates that each matrix with normalized columns contains a large column submatrix that is exceptionally well conditioned. Unfortunately, standard proofs of this result cannot be regarded as algorithmic. This paper presents a randomized, polynomial-time algorithm that produces the submatrix promised by Bourgain and Tzafriri. The method involves random sampling of columns, followed by a matrix factorization that exposes the well-conditioned subset of columns. This factorization, which is due to Grothendieck, is regarded as a central tool in modern functional analysis. The primary novelty in this work is an algorithm, based on eigenvalue minimization, for constructing the Grothendieck factorization. These ideas also result in an approximation algorithm for the (∞, 1) norm of a matrix, which is generally NP-hard to compute exactly. As an added bonus, this work reveals a surprising connection between matrix factorization and the famous max-cut semidefinite program.
https://authors.library.caltech.edu/records/1kq65-m1r89

Practical Large-Scale Optimization for Max-Norm Regularization
https://resolver.caltech.edu/CaltechAUTHORS:20160331-164724199
Authors: Jason Lee; Benjamin Recht; Ruslan R. Salakhutdinov; Nathan Srebro; Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2010
The max-norm was proposed as a convex matrix regularizer in [1] and was shown to be empirically superior to the trace-norm for collaborative filtering problems. Although the max-norm can be computed in polynomial time, there are currently no practical algorithms for solving large-scale optimization problems that incorporate the max-norm. The present work uses a factorization technique of Burer and Monteiro [2] to devise scalable first-order algorithms for convex programs involving the max-norm. These algorithms are applied to solve huge collaborative filtering, graph cut, and clustering problems. Empirically, the new methods outperform mature techniques from all three areas.
https://authors.library.caltech.edu/records/bcew3-g2m51

The sparsity gap: Uncertainty principles proportional to dimension
https://resolver.caltech.edu/CaltechAUTHORS:20180831-112116678
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2010
DOI: 10.1109/CISS.2010.5464824
In an incoherent dictionary, most signals that admit a sparse representation admit a unique sparse representation. In other words, there is no way to express the signal without using strictly more atoms. This work demonstrates that sparse signals typically enjoy a higher privilege: each nonoptimal representation of the signal requires far more atoms than the sparsest representation, unless it contains many of the same atoms as the sparsest representation. One impact of this finding is to confer a certain degree of legitimacy on the particular atoms that appear in a sparse representation. This result can also be viewed as an uncertainty principle for random sparse signals over an incoherent dictionary.
https://authors.library.caltech.edu/records/sm6cv-f2606

Factoring nonnegative matrices with linear programs
https://resolver.caltech.edu/CaltechAUTHORS:20160401-165447853
Authors: Victor Bittorf; Benjamin Recht; Christopher Ré; Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2012
DOI: 10.48550/arXiv.1206.1270
This paper describes a new approach, based on linear programming, for computing nonnegative matrix factorizations (NMFs). The key idea is a data-driven model for the factorization where the most salient features in the data are used to express the remaining features. More precisely, given a data matrix X, the algorithm identifies a matrix C that satisfies X ≈ CX and some linear constraints. The constraints are chosen to ensure that the matrix C selects features; these features can then be used to find a low-rank NMF of X. A theoretical analysis demonstrates that this approach has guarantees similar to those of the recent NMF algorithm of Arora et al. (2012). In contrast with this earlier work, the proposed method extends to more general noise models and leads to efficient, scalable algorithms. Experiments with synthetic and real datasets provide evidence that the new approach is also superior in practice. An optimized C++ implementation can factor a multigigabyte matrix in a matter of minutes.
https://authors.library.caltech.edu/records/4tr2r-xse68

Time–Data Tradeoffs by Aggressive Smoothing
https://resolver.caltech.edu/CaltechAUTHORS:20160401-170735760
Authors: John J. Bruer; Joel A. Tropp (ORCID: 0000-0003-1024-1791); Volkan Cevher; Stephen R. Becker
Year: 2014
This paper proposes a tradeoff between sample complexity and computation time that applies to statistical estimators based on convex optimization. As the amount of data increases, we can smooth optimization problems more and more aggressively to achieve accurate estimates more quickly. This work provides theoretical and experimental evidence of this tradeoff for a class of regularized linear inverse problems.
https://authors.library.caltech.edu/records/ez84w-x7c72

Convex Recovery of a Structured Signal from Independent Random Linear Measurements
https://resolver.caltech.edu/CaltechAUTHORS:20160818-084011981
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2015
DOI: 10.1007/978-3-319-19749-4_2
This chapter develops a theoretical analysis of the convex programming method for recovering a structured signal from independent random linear measurements. This technique delivers bounds for the sampling complexity that are similar to recent results for standard Gaussian measurements, but the argument applies to a much wider class of measurement ensembles. To demonstrate the power of this approach, the chapter presents a short analysis of phase retrieval by trace-norm minimization. The key technical tool is a framework, due to Mendelson and coauthors, for bounding a nonnegative empirical process.
https://authors.library.caltech.edu/records/1vmc9-r4269

The Expected Norm of a Sum of Independent Random Matrices: An Elementary Approach
https://resolver.caltech.edu/CaltechAUTHORS:20170214-075417526
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2016
DOI: 10.1007/978-3-319-40519-3_8
In contemporary applied and computational mathematics, a frequent challenge is to bound the expectation of the spectral norm of a sum of independent random matrices. This quantity is controlled by the norm of the expected square of the random matrix and the expectation of the maximum squared norm achieved by one of the summands; there is also a weak dependence on the dimension of the random matrix. The purpose of this paper is to give a complete, elementary proof of this important inequality.
https://authors.library.caltech.edu/records/bemq2-qw914

Sketchy decisions: convex optimization with optimal storage (Conference Presentation)
https://resolver.caltech.edu/CaltechAUTHORS:20190826-161640394
Authors: Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2017
DOI: 10.1117/12.2281058
This recording is of the presentation "Sketchy decisions: convex optimization with optimal storage," delivered as part of the SPIE symposium "SPIE Optical Engineering + Applications."
https://authors.library.caltech.edu/records/ejhk7-npe53

Concentration of the Intrinsic Volumes of a Convex Body
https://resolver.caltech.edu/CaltechAUTHORS:20200821-083421883
Authors: Martin Lotz (ORCID: 0000-0001-8500-864X); Michael B. McCoy; Ivan Nourdin (ORCID: 0000-0002-8742-0723); Giovanni Peccati; Joel A. Tropp (ORCID: 0000-0003-1024-1791)
Year: 2020
DOI: 10.1007/978-3-030-46762-3_6
The intrinsic volumes are measures of the content of a convex body. This paper applies probabilistic and information-theoretic methods to study the sequence of intrinsic volumes. The main result states that the intrinsic volume sequence concentrates sharply around a specific index, called the central intrinsic volume. Furthermore, among all convex bodies whose central intrinsic volume is fixed, an appropriately scaled cube has the intrinsic volume sequence with maximum entropy.
https://authors.library.caltech.edu/records/xgnz6-yxr86

Inference of Black Hole Fluid-Dynamics from Sparse Interferometric Measurements
https://resolver.caltech.edu/CaltechAUTHORS:20220307-188412000
Authors: Aviad Levis (ORCID: 0000-0001-7307-632X); Daeyoung Lee; Joel A. Tropp (ORCID: 0000-0003-1024-1791); Charles F. Gammie (ORCID: 0000-0001-7451-8935); Katherine L. Bouman (ORCID: 0000-0003-0077-4367)
Year: 2021
DOI: 10.1109/iccv48922.2021.00234
We develop an approach to recover the underlying properties of fluid-dynamical processes from sparse measurements. We are motivated by the task of imaging the stochastically evolving environment surrounding black holes, and demonstrate how flow parameters can be estimated from sparse interferometric measurements used in radio astronomical imaging. To model the stochastic flow we use spatio-temporal Gaussian Random Fields (GRFs). The high dimensionality of the underlying source video makes direct representation via a GRF's full covariance matrix intractable. In contrast, stochastic partial differential equations are able to capture correlations at multiple scales by specifying only local interaction coefficients. Our approach estimates the coefficients of a space-time diffusion equation that dictates the stationary statistics of the dynamical process. We analyze our approach on realistic simulations of black hole evolution and demonstrate its advantage over state-of-the-art dynamic black hole imaging techniques.
https://authors.library.caltech.edu/records/wbj75-1gv55