Book Section records
https://feeds.library.caltech.edu/people/Chandrasekaran-V/book_section.rss
A Caltech Library Repository Feed
Spec: http://www.rssboard.org/rss-specification
Generator: python-feedgen
Language: en
Last build: Thu, 30 Nov 2023 17:53:25 +0000

Complexity of Inference in Graphical Models
https://resolver.caltech.edu/CaltechAUTHORS:20121008-154959981
Authors: Chandrasekaran, Venkat; Srebro, Nathan; Harsha, Prahladh
Year: 2008
DOI: 10.48550/arXiv.1206.3240
It is well-known that inference in graphical models is hard in the worst case, but tractable for models with bounded treewidth. We ask whether treewidth is the only structural criterion of the underlying graph that enables tractable inference. In other words, is there some class of structures with unbounded treewidth in which inference is tractable? Subject to a combinatorial hypothesis due to Robertson et al. (1994), we show that low treewidth is indeed the only structural restriction that can ensure tractability. Thus, even for the "best case" graph structure, there is no inference algorithm with complexity polynomial in the treewidth.
https://authors.library.caltech.edu/records/kj6yy-vpj18

Exploiting Sparse Markov and Covariance Structure in Multiresolution Models
https://resolver.caltech.edu/CaltechAUTHORS:20121008-111307673
Authors: Choi, Myung Jin; Chandrasekaran, Venkat; Willsky, Alan S.
Year: 2009
DOI: 10.1145/1553374.1553397
We consider Gaussian multiresolution (MR) models in which coarser, hidden variables serve to capture statistical dependencies among the finest scale variables. Tree-structured MR models have limited modeling capabilities, as variables at one scale are forced to be uncorrelated with each other conditioned on other scales. We propose a new class of Gaussian MR models that capture the residual correlations within each scale using sparse covariance structure. Our goal is to learn a tree-structured graphical model connecting variables across different scales, while at the same time learning sparse structure for the conditional covariance within each scale conditioned on other scales. This model leads to an efficient, new inference algorithm that is similar to multipole methods in computational physics.
https://authors.library.caltech.edu/records/ggrm5-wc586

Sparse and low-rank matrix decompositions
https://resolver.caltech.edu/CaltechAUTHORS:20121008-082643123
Authors: Chandrasekaran, Venkat; Sanghavi, Sujay; Parrilo, Pablo A.; Willsky, Alan S.
Year: 2009
DOI: 10.1109/ALLERTON.2009.5394889
We consider the following fundamental problem: given a matrix that is the sum of an unknown sparse matrix and an unknown low-rank matrix, is it possible to exactly recover the two components? Such a capability enables a considerable number of applications, but the goal is both ill-posed and NP-hard in general. In this paper we develop (a) a new uncertainty principle for matrices, and (b) a simple method for exact decomposition based on convex optimization. Our uncertainty principle is a quantification of the notion that a matrix cannot be sparse while having diffuse row/column spaces. It characterizes when the decomposition problem is ill-posed, and forms the basis for our decomposition method and its analysis. We provide deterministic conditions on the sparse and low-rank components under which our method guarantees exact recovery.
https://authors.library.caltech.edu/records/8ea39-50s38

Feedback Message Passing for Inference in Gaussian Graphical Models
https://resolver.caltech.edu/CaltechAUTHORS:20121004-153356182
Authors: Liu, Ying; Chandrasekaran, Venkat; Anandkumar, Animashree; Willsky, Alan S.
Year: 2010
DOI: 10.1109/ISIT.2010.5513321
For Gaussian graphical models with cycles, loopy belief propagation often performs reasonably well, but its convergence is not guaranteed and the computation of variances is generally incorrect. In this paper, we identify a set of special vertices called a feedback vertex set whose removal results in a cycle-free graph. We propose a feedback message passing algorithm in which non-feedback nodes send out one set of messages while the feedback nodes use a different message update scheme. Exact inference results can be obtained in O(k^2 n) time, where k is the number of feedback nodes and n is the total number of nodes. For graphs with large feedback vertex sets, we describe a tractable approximate feedback message passing algorithm. Experimental results show that this procedure converges more often and faster than loopy belief propagation, and provides better results.
https://authors.library.caltech.edu/records/49gfm-x6q43

Adaptive Embedded Subgraph Algorithms using Walk-Sum Analysis
https://resolver.caltech.edu/CaltechAUTHORS:20121008-155412807
Authors: Chandrasekaran, Venkat; Johnson, Jason K.; Willsky, Alan S.
Year: 2012
We consider the estimation problem in Gaussian graphical models with arbitrary structure. We analyze the Embedded Trees algorithm, which solves a sequence of problems on tractable subgraphs, thereby leading to the solution of the estimation problem on an intractable graph. Our analysis is based on the recently developed walk-sum interpretation of Gaussian estimation. We show that non-stationary iterations of the Embedded Trees algorithm using any sequence of subgraphs converge in walk-summable models. Based on walk-sum calculations, we develop adaptive methods that optimize the choice of subgraphs used at each iteration with a view to achieving maximum reduction in error. These adaptive procedures provide a significant speedup in convergence over stationary iterative methods, and also appear to converge in a larger class of models.
https://authors.library.caltech.edu/records/e0hnq-s2x10

Maximum Entropy Relaxation for Graphical Model Selection given Inconsistent Statistics
https://resolver.caltech.edu/CaltechAUTHORS:20121009-080604998
Authors: Chandrasekaran, Venkat; Johnson, Jason K.; Willsky, Alan S.
Year: 2012
DOI: 10.1109/SSP.2007.4301334
We develop a novel approach to approximate a specified collection of marginal distributions on subsets of variables by a globally consistent distribution on the entire collection of variables. In general, the specified marginal distributions may be inconsistent on overlapping subsets of variables. Our method is based on maximizing entropy over an exponential family of graphical models, subject to divergence constraints on small subsets of variables that enforce closeness to the specified marginals. The resulting optimization problem is convex, and can be solved efficiently using a primal-dual interior-point algorithm. Moreover, this framework leads naturally to a solution that is a sparse graphical model.
https://authors.library.caltech.edu/records/30y4w-z3y06

Surflets: a sparse representation for multidimensional functions containing smooth discontinuities
https://resolver.caltech.edu/CaltechAUTHORS:20121011-131626671
Authors: Chandrasekaran, Venkat; Wakin, Michael B.; Baron, Dror; Baraniuk, Richard G.
Year: 2012
DOI: 10.1109/ISIT.2004.1365602
Discontinuities in data often provide vital information, and representing these discontinuities sparsely is an important goal for approximation and compression algorithms. Little work has been done on efficient representations for higher dimensional functions containing arbitrarily smooth discontinuities. We consider the N-dimensional Horizon class: N-dimensional functions containing a C^K smooth (N-1)-dimensional singularity separating two constant regions. We derive the optimal rate-distortion function for this class and introduce the multiscale surflet representation for sparse piecewise approximation of these functions. We propose a compression algorithm using surflets that achieves the optimal asymptotic rate-distortion performance for Horizon functions. This algorithm can be implemented using knowledge of only the N-dimensional function, without explicitly estimating the (N-1)-dimensional discontinuity.
https://authors.library.caltech.edu/records/1jpa7-xjq41

Recovery of Sparse Probability Measures via Convex Programming
https://resolver.caltech.edu/CaltechAUTHORS:20130107-155151292
Authors: Pilanci, Mert; El Ghaoui, Laurent; Chandrasekaran, Venkat
Year: 2012
We consider the problem of cardinality penalized optimization of a convex function over the probability simplex with additional convex constraints. The classical ℓ_1 regularizer fails to promote sparsity on the probability simplex, since the ℓ_1 norm on the probability simplex is trivially constant. We propose a direct relaxation of the minimum cardinality problem and show that it can be efficiently solved using convex programming. As a first application we consider recovering a sparse probability measure given moment constraints, in which case our formulation becomes a linear program and hence can be solved very efficiently. A sufficient condition for exact recovery of the minimum cardinality solution is derived for arbitrary affine constraints. We then develop a penalized version for the noisy setting which can be solved using second-order cone programs. The proposed method outperforms known rescaling heuristics based on the ℓ_1 norm. As a second application we consider convex clustering using a sparse Gaussian mixture and compare our results with the well-known soft k-means algorithm.
https://authors.library.caltech.edu/records/2r1kp-2rd47

Conic Geometric Programming
https://resolver.caltech.edu/CaltechAUTHORS:20150430-092047133
Authors: Chandrasekaran, Venkat; Shah, Parikshit
Year: 2014
DOI: 10.1109/CISS.2014.6814151
This invited submission summarizes recent work by the authors on conic geometric programs (CGPs), which are convex optimization problems obtained by blending geometric programs (GPs) and conic optimization problems such as semidefinite programs (SDPs). GPs and SDPs are two prominent families of structured convex programs that each generalize linear programs (LPs) in different ways, and that are both employed in a broad range of applications. This submission provides a summary of a unified mathematical and algorithmic treatment of GPs and SDPs under the framework of CGPs. Although CGPs contain GPs and SDPs as special instances, computing global optima of CGPs is not much harder than solving GPs and SDPs. More broadly, the CGP framework facilitates a range of new applications (permanent maximization, hitting-time estimation in dynamical systems, the computation of the capacity of channels transmitting quantum information, and robust optimization formulations of GPs) that fall outside the scope of SDPs and GPs alone.
https://authors.library.caltech.edu/records/13fkx-c7p18

High-dimensional change-point estimation: Combining filtering with convex optimization
https://resolver.caltech.edu/CaltechAUTHORS:20151007-111059602
Authors: Soh, Yong Sheng; Chandrasekaran, Venkat
Year: 2015
DOI: 10.1109/ISIT.2015.7282435
We consider change-point estimation in a sequence of high-dimensional signals given noisy observations. Classical approaches to this problem such as the filtered derivative method are useful for sequences of scalar-valued signals, but they have undesirable scaling behavior in the high-dimensional setting. However, many high-dimensional signals encountered in practice frequently possess latent low-dimensional structure. Motivated by this observation, we propose a technique for high-dimensional change-point estimation that combines the filtered derivative approach from previous work with convex optimization methods based on atomic norm regularization, which are useful for exploiting structure in high-dimensional data. Our algorithm is applicable in online settings as it operates on small portions of the sequence of observations at a time, and it is well-suited to the high-dimensional setting both in terms of computational scalability and of statistical efficiency. The main result of this paper shows that our method performs change-point estimation reliably as long as the product of the smallest-sized change (the Euclidean-norm-squared of the difference between signals at a change-point) and the smallest distance between change-points (number of time instances) is larger than a Gaussian width parameter that characterizes the low-dimensional complexity of the underlying signal sequence. A full version of this paper is available online [1].
https://authors.library.caltech.edu/records/gma0a-nfa97
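As a rough illustration of the filtered derivative idea underlying the change-point abstract above, the sketch below computes a plain filtered-derivative statistic (the norm of the difference between windowed means before and after each time index) on a synthetic high-dimensional sequence. The synthetic data, window size, and thresholds are illustrative assumptions; the atomic-norm/convex-optimization component of the paper's actual method is omitted here.

```python
import numpy as np

def filtered_derivative(X, w):
    """Filtered-derivative statistic for a sequence X of shape (T, d):
    at each time t, the Euclidean norm of the difference between the
    mean of the w samples after t and the mean of the w samples
    before t. Peaks in the statistic suggest change-points."""
    T = X.shape[0]
    stats = np.zeros(T)
    for t in range(w, T - w):
        before = X[t - w:t].mean(axis=0)
        after = X[t:t + w].mean(axis=0)
        stats[t] = np.linalg.norm(after - before)
    return stats

# Synthetic high-dimensional sequence with one change at t = 100;
# the mean shift is confined to a few coordinates (low-dimensional
# structure, as in the abstract's motivating setting).
rng = np.random.default_rng(0)
d, T, change = 50, 200, 100
signal = np.zeros((T, d))
signal[change:, :5] = 3.0
X = signal + 0.5 * rng.standard_normal((T, d))

stats = filtered_derivative(X, w=20)
estimate = int(np.argmax(stats))
print(estimate)  # peaks near t = 100
```

In high dimensions, noise inflates this raw statistic in every coordinate, which is the scaling problem the paper addresses by denoising each windowed average via atomic norm regularization before differencing.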