Combined Feed
https://feeds.library.caltech.edu/people/Oymak-Samet/combined.rss
A Caltech Library Repository Feed (http://www.rssboard.org/rss-specification, generated by python-feedgen, language: en)
Tue, 16 Apr 2024 15:46:44 +0000

Recovering Structured Signals in Noise: Least-Squares Meets Compressed Sensing
https://resolver.caltech.edu/CaltechAUTHORS:20160901-121710649
Authors: Christos Thrampoulidis (ORCID: 0000-0001-9053-9365), Samet Oymak, Babak Hassibi
Year: 2015
DOI: 10.1007/978-3-319-16042-9_4
The typical scenario that arises in most "big data" problems is one where the ambient dimension of the signal is very large (e.g., high-resolution images, gene-expression data from a DNA microarray, social-network data, etc.), yet its desired properties lie in some low-dimensional structure (sparsity, low-rankness, clusters, etc.). In the modern viewpoint, the goal is to devise efficient algorithms that reveal these structures and for which, under suitable conditions, one can give theoretical guarantees. We specifically consider the problem of recovering such a structured signal (sparse, low-rank, block-sparse, etc.) from noisy compressed measurements. A general algorithm for such problems, commonly referred to as the generalized LASSO, attempts to solve this problem by minimizing a least-squares cost with an added "structure-inducing" regularization term (ℓ_1 norm, nuclear norm, mixed ℓ_2/ℓ_1 norm, etc.). While the LASSO algorithm has been around for 20 years and has enjoyed great success in practice, there has been relatively little analysis of its performance. In this chapter, we provide a full performance analysis and compute, in closed form, the mean-squared error of the reconstructed signal. We highlight some of the mathematical vignettes necessary for the analysis, make connections to noiseless compressed sensing and proximal denoising, and emphasize the central role of the "statistical dimension" of a structured signal.
https://authors.library.caltech.edu/records/cqrjk-bxj69

Sharp MSE Bounds for Proximal Denoising
https://resolver.caltech.edu/CaltechAUTHORS:20160825-102038882
Authors: Samet Oymak, Babak Hassibi (ORCID: 0000-0002-1375-5838)
Year: 2016
DOI: 10.1007/s10208-015-9278-4
Denoising is the problem of estimating a signal x_0 from its noisy observations y = x_0 + z. In this paper, we focus on the "structured denoising problem," where the signal x_0 possesses a certain structure and z has independent normally distributed entries with mean zero and variance σ^2. We employ a structure-inducing convex function f(⋅) and solve min_x {(1/2)∥y − x∥^2_2 + σλ f(x)} to estimate x_0, for some λ > 0. Common choices for f(⋅) include the ℓ_1 norm for sparse vectors, the ℓ_1 − ℓ_2 norm for block-sparse signals, and the nuclear norm for low-rank matrices. The metric we use to evaluate the performance of an estimate x∗ is the normalized mean-squared error NMSE(σ) = E∥x∗ − x_0∥^2_2 / σ^2. We show that NMSE is maximized as σ → 0 and we find the exact worst-case NMSE, which has a simple geometric interpretation: the mean-squared distance of a standard normal vector to the λ-scaled subdifferential λ∂f(x_0). When λ is optimally tuned to minimize the worst-case NMSE, our results can be related to the constrained denoising problem min_{f(x) ≤ f(x_0)} ∥y − x∥_2. The paper also connects these results to the generalized LASSO problem, in which one solves min_{f(x) ≤ f(x_0)} ∥y − Ax∥_2 to estimate x_0 from noisy linear observations y = Ax_0 + z. We show that certain properties of the LASSO problem are closely related to the denoising problem. In particular, we characterize the normalized LASSO cost and show that it exhibits a "phase transition" as a function of the number of observations. We also provide an order-optimal bound for the LASSO error in terms of the mean-squared distance. Our results are significant in two ways. First, we find a simple formula for the performance of a general convex estimator. Second, we establish a connection between the denoising and linear inverse problems.
https://authors.library.caltech.edu/records/fbe5r-xn288
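Neither abstract includes code, but the structured denoising estimator min_x {(1/2)∥y − x∥^2_2 + σλ f(x)} has a well-known closed form when f(⋅) is the ℓ_1 norm: entrywise soft-thresholding at level σλ. The sketch below — an illustration of that special case, not code from the papers; all variable names and parameter values are assumptions — denoises a synthetic sparse signal and computes the normalized MSE, E∥x∗ − x_0∥^2_2 / σ^2, discussed in the second abstract.

```python
import numpy as np

def soft_threshold(y, t):
    """Entrywise soft-thresholding: the proximal operator of t * ||.||_1.
    Solves min_x 0.5 * ||y - x||_2^2 + t * ||x||_1 in closed form."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(0)
n, k, sigma, lam = 1000, 50, 0.1, 2.0  # dimension, sparsity, noise level, tuning (assumed values)

# Sparse ground truth x0 with k nonzero entries; noisy observation y = x0 + z.
x0 = np.zeros(n)
x0[:k] = rng.normal(size=k)
y = x0 + sigma * rng.normal(size=n)

# Proximal denoising estimate and its normalized MSE, ||x* - x0||^2 / sigma^2.
x_hat = soft_threshold(y, sigma * lam)
nmse = np.sum((x_hat - x0) ** 2) / sigma**2
```

For a well-chosen λ and a k-sparse signal, the NMSE here stays far below the ambient dimension n (the NMSE of the naive estimate x∗ = y), consistent with the worst-case characterization in terms of the distance to λ∂f(x_0).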