CaltechAUTHORS: Combined
https://feeds.library.caltech.edu/people/Wu-Stephen/combined.rss
A Caltech Library Repository Feed
Fri, 02 Aug 2024 19:28:07 -0700

Robust Diagnostics for Bayesian Compressive Sensing with Applications to Structural Health Monitoring
https://resolver.caltech.edu/CaltechAUTHORS:20120831-112317360
Year: 2011
DOI: 10.1117/12.880687
In structural health monitoring (SHM) systems for civil structures, signal compression is often important to reduce the cost of data transfer and storage because of the large volumes of data generated by the monitoring system. Compressive sensing is a novel data compression method whereby one does not measure the entire signal directly but rather a set of related ("projected") measurements. The length of the required compressive-sensing measurements is typically much smaller than that of the original signal, thereby increasing the efficiency of data transfer and storage. Recently, a Bayesian formalism has also been employed for optimal compressive sensing, which adopts ideas from the relevance vector machine (RVM), such as the automatic relevance determination (ARD) prior, as a decompression tool. Recent publications illustrate the benefits of this Bayesian compressive sensing (BCS) method; however, none of them have investigated its robustness. We show that the usual RVM optimization algorithm lacks robustness when the number of measurements is much smaller than the length of the signal because it can produce sub-optimal signal representations; as a result, BCS is not robust when high compression efficiency is required. This induces a trade-off between efficiently compressing data and accurately decompressing it. Based on a study of the robustness of the BCS method, diagnostic tools are proposed to investigate whether the compressed representation of the signal is optimal. With reliable diagnostics, the performance of the BCS method can be monitored effectively. The numerical results show that the proposed diagnostics are a powerful tool for examining the correctness of reconstruction results without knowing the original signal.

Robust Diagnostics for Bayesian Compressive Sensing Technique in Structural Health Monitoring
https://resolver.caltech.edu/CaltechAUTHORS:20120831-110516244
Year: 2011
Signal compression is often important to reduce the cost of data transfer and storage for structural health monitoring (SHM) systems of civil structures. Compressive sensing is a novel data compression method whereby one does not measure the entire signal directly but rather a set of related ("projected") measurements. The length of the required compressive-sensing measurements is typically much smaller than that of the original signal, thereby increasing the efficiency of data transfer and storage. Recently, a Bayesian formalism has also been employed for optimal compressive sensing, which adopts ideas from the relevance vector machine (RVM) as a decompression tool. In this article, we study the robustness of this Bayesian compressive sensing (BCS) method. We show that the usual RVM optimization algorithm lacks robustness when the number of measurements is much smaller than the length of the signal because it can produce sub-optimal signal representations; as a result, BCS is not robust when high compression efficiency is required. This induces a trade-off between efficiently compressing data and accurately decompressing it. Based on a study of the robustness of the BCS method, diagnostic tools are proposed to investigate whether the compressed representation of the signal is optimal. Numerical results are also given to validate the proposed method.

Stochastic Optimization using Automatic Relevance Determination Prior Model for Bayesian Compressive Sensing Technique in Structural Health Monitoring
https://resolver.caltech.edu/CaltechAUTHORS:20120831-104915867
Year: 2012
DOI: 10.1117/12.921257
Compared with the conventional monitoring approach of separately sensing and then compressing the data, compressive sensing (CS) is a novel data acquisition framework whereby the compression is done during the sampling. If the original sensed signal is sufficiently sparse in terms of some orthogonal basis, the decompression can be done essentially perfectly up to some critical compression ratio. In structural health monitoring (SHM) systems for civil structures, novel data compression techniques such as CS are needed to reduce the cost of signal transfer and storage. In this article, Bayesian compressive sensing (BCS) is investigated for SHM signals. By explicitly quantifying the uncertainty in the signal reconstruction, the BCS technique exhibits an obvious benefit over existing regularized norm-minimization CS. However, current BCS algorithms suffer from a robustness problem; sometimes the reconstruction errors are large. The source of the problem is that inversion of the compressed signal is a severely ill-posed problem that often leads to sub-optimal signal representations. To ensure strong robustness of the signal reconstruction, even at a high compression ratio, an improved BCS algorithm is proposed that uses stochastic optimization for the automatic relevance determination approach to reconstructing the underlying signal. In numerical experiments, the improved BCS algorithm demonstrates superior performance compared with state-of-the-art BCS reconstruction algorithms.

ePAD: Earthquake Probability-Based Automated Decision-Making Framework for Earthquake Early Warning
https://resolver.caltech.edu/CaltechAUTHORS:20131105-094211973
Year: 2013
DOI: 10.1111/mice.12048
The benefits and feasibility of earthquake early warning (EEW) are becoming more appreciated throughout the world. An EEW system detects the initiation of an earthquake using a seismic sensor network and broadcasts a warning of the predicted location and magnitude shortly before the earthquake hits a site. The typical lead time is very short, for example, from a few seconds up to a minute in California, which is a huge challenge for applications taking advantage of EEW. As a result, a robust automated decision process about whether to initiate a mitigation action is essential. Recent approaches based on cost–benefit analyses to properly treat the trade-off between false alarms and missed alarms still face challenges in practical use, such as the exclusion of an important factor, lead time, from the real-time decision process. In this study, we lay out an earthquake probability-based automated decision-making (ePAD) framework that gives a general decision criterion based on basic decision theory and an existing cost–benefit analysis procedure. The concepts of decision function, decision contour, and surrogate model are utilized to achieve fast computation and to allow comparison between various decision criteria. A value-of-information model is developed to handle the lead time of EEW and its uncertainty, in order to reduce the "false response rate" in the cost–benefit trade-off. An illustrative example is presented to demonstrate how this framework allows more flexibility for users to adapt ePAD to their desired rational decision behavior.

Earthquake early warning application to buildings
https://resolver.caltech.edu/CaltechAUTHORS:20140501-133917005
Year: 2014
DOI: 10.1016/j.engstruct.2013.12.033
In California, United States, an earthquake early warning system is currently being tested through the California Integrated Seismic Network (CISN). The system aims to provide warnings seconds to tens of seconds prior to the occurrence of ground shaking at a site; since the system broadcasts the location and time of the earthquake, user software can estimate the arrival time and intensity of the expected S-wave. However, the shaking experienced by a user in a tall building will be significantly different from that on the ground. This paper provides a method to develop engineering applications of an earthquake early warning system using the performance-based earthquake engineering framework. An example is included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. Potential engineering applications (e.g., elevator control) for buildings based on the prediction of the building shaking level are also addressed.

Robust Bayesian Compressive Sensing for Signals in Structural Health Monitoring
https://resolver.caltech.edu/CaltechAUTHORS:20140213-094350582
Year: 2014
DOI: 10.1111/mice.12051
In structural health monitoring (SHM) systems for civil structures, massive amounts of data are often generated, requiring data compression techniques to reduce the cost of signal transfer and storage while offering a simple sensing system. Compressive sensing (CS) is a novel data acquisition method whereby the compression is done in a sensor simultaneously with the sampling. If the original sensed signal is sufficiently sparse in terms of some orthogonal basis (e.g., a sufficient number of wavelet coefficients are zero or negligibly small), the decompression can be done essentially perfectly up to some critical compression ratio; otherwise there is a trade-off between the reconstruction error and how much compression occurs. In this article, a Bayesian compressive sensing (BCS) method is investigated that uses sparse Bayesian learning to reconstruct signals from a compressive sensor. By explicitly quantifying the uncertainty in the signal reconstructed from compressed data, the BCS technique exhibits an obvious benefit over existing regularized norm-minimization CS methods that provide a single signal estimate. However, current BCS algorithms suffer from a robustness problem: sometimes the reconstruction errors are very large when the number of measurements K is much less than the number of signal degrees of freedom N needed to capture the signal accurately in a directly sampled form. In this article, we present improvements to the BCS reconstruction method to enhance its robustness so that even higher compression ratios N/K can be used, and we examine the trade-off between efficiently compressing data and accurately decompressing it. Synthetic data and actual acceleration data collected from a bridge SHM system are used as examples.
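The measurement model underlying this line of work can be sketched in a few lines. The snippet below is a minimal illustration under our own assumptions (toy sizes N = 8, K = 5, a random Gaussian projection), not the authors' algorithm: it compresses a 2-sparse signal as y = Phi x and recovers it by least squares, assuming the significant-coefficient support is already known; identifying that support from y alone is the ill-posed step that sparse Bayesian learning actually addresses.

```python
import random

random.seed(0)

# Compressive measurement model: the sensor records K < N weighted sums of
# the signal instead of all N samples, so compression happens at sampling.
N, K = 8, 5                # signal length N; number of compressed measurements K
x = [0.0] * N              # signal coefficients in a sparsifying basis
x[2], x[6] = 1.5, -0.8     # only two significant coefficients (2-sparse)

Phi = [[random.gauss(0.0, 1.0) for _ in range(N)] for _ in range(K)]
y = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(K)]   # y = Phi x

# Reconstruction, assuming the support {2, 6} has been identified.
# Once the support is known, recovery is a small least-squares problem;
# the 2x2 normal equations are solved by hand here.
support = [2, 6]
A = [[Phi[i][j] for j in support] for i in range(K)]
a11 = sum(A[i][0] * A[i][0] for i in range(K))
a12 = sum(A[i][0] * A[i][1] for i in range(K))
a22 = sum(A[i][1] * A[i][1] for i in range(K))
b1 = sum(A[i][0] * y[i] for i in range(K))
b2 = sum(A[i][1] * y[i] for i in range(K))
det = a11 * a22 - a12 * a12
c = [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

x_hat = [0.0] * N
for p, j in enumerate(support):
    x_hat[j] = c[p]
# Noise-free and exactly sparse, so x_hat matches x up to floating-point error.
```

When the signal is only approximately sparse or the support guess is wrong, this least-squares step no longer recovers the signal exactly, which is the robustness problem the article studies.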
Compared with state-of-the-art BCS reconstruction algorithms, the improved BCS algorithm demonstrates superior performance. For the same acceptable error rate, based on a specified threshold of reconstruction error, the proposed BCS algorithm works with relatively large compression ratios, and it can achieve perfect lossless compression performance at quite high compression ratios. Furthermore, the error bars for the signal reconstruction are also quantified effectively.

General network reliability problem and its efficient solution by Subset Simulation
https://resolver.caltech.edu/CaltechAUTHORS:20150604-083829598
Year: 2015
DOI: 10.1016/j.probengmech.2015.02.002
Complex technological networks designed for the distribution of some resource or commodity are a pervasive feature of modern society, and our society's dependence on them constantly grows. As a result, there is an increasing demand for these networks to be highly reliable in delivering their service, and a pressing need for efficient computational methods that can quantitatively assess the reliability of technological networks to enhance their design and operation in the presence of uncertainty in future demand, supply and capacity. In this paper, we propose a stochastic framework for quantitative assessment of the reliability of network service, formulate a general network reliability problem within this framework, and then show how to calculate the service reliability using Subset Simulation, an efficient Markov chain Monte Carlo method that was originally developed for estimating small failure probabilities of complex dynamic systems. The efficiency of the method is demonstrated with an illustrative example in which two small-world network generation models are compared in terms of the maximum-flow reliability of the networks that they produce.

An Engineering Application of Earthquake Early Warning: ePAD-Based Decision Framework for Elevator Control
https://resolver.caltech.edu/CaltechAUTHORS:20160108-125732803
Year: 2016
DOI: 10.1061/(ASCE)ST.1943-541X.0001356
In a medium-to-large earthquake, there are often reports of people being trapped or injured in elevators. This study investigates using an earthquake early warning (EEW) system, which provides seconds to tens of seconds of warning before seismic waves arrive at a site, to help people escape from elevators before strong shaking arrives. However, such an application remains a major engineering challenge because of the uncertainty of the EEW information and the short lead time available. A recent study presented an earthquake probability-based automated decision-making (ePAD) framework to address these issues. This paper focuses on studying the influence of two commonly ignored factors, the uncertainty of the warning and the lead time, on the decision to stop the elevators and open the doors when an EEW message is received. Application of the ePAD framework requires using the performance-based earthquake engineering methodology for elevator damage prediction, making decisions based on a cost–benefit model, and reducing computational time with a surrogate model. The authors' results show that ePAD can provide rational decisions for elevator control based on EEW information under different amounts of lead time and uncertainty levels of the warning.

Virtual Inspector and its application to immediate pre-event and post-event earthquake loss and safety assessment of buildings
https://resolver.caltech.edu/CaltechAUTHORS:20160415-092533347
Year: 2016
DOI: 10.1007/s11069-016-2159-6
We previously introduced the Virtual Inspector, which is a decision-support system that follows current US guidelines for post-earthquake damage and safety evaluation of buildings in order to calculate probabilities that a building will be tagged with red, yellow or green safety placards after earthquake shaking of the building. The procedure is based on an existing probabilistic methodology for performance-based earthquake engineering that involves four analysis stages: hazard, structural, damage and loss analyses. In this paper, we propose to integrate the Virtual Inspector into an automated system for immediate pre- and post-earthquake loss and safety assessment in a building. This system could be combined with an earthquake early warning system to assist in an automated decision analysis for initiating safety and loss mitigation actions just before the arrival of earthquake shaking at the building site. The Virtual Inspector can also be used immediately after strong earthquake shaking to provide an automated probabilistic safety and loss assessment to support risk decision making related to possible building closure and the cost of recovery to bring the building back to an operating condition. The proposed theory for these extensions of the Virtual Inspector is illustrated using an example based on a previously studied benchmark office building.

Bayesian compressive sensing for approximately sparse signals and application to structural health monitoring signals for data loss recovery
https://resolver.caltech.edu/CaltechAUTHORS:20170119-124035615
Year: 2016
DOI: 10.1016/j.probengmech.2016.08.001
The theory and application of compressive sensing (CS) have received a lot of interest in recent years. The basic idea in CS is to use a specially designed sensor to sample signals that are sparse in some basis (e.g., a wavelet basis) directly in a compressed form, and then to reconstruct (decompress) these signals accurately using some inversion algorithm after transmission to a central processing unit. However, many signals in reality are only approximately sparse: only a relatively small number of the signal coefficients in some basis are significant, and the remaining basis coefficients are relatively small but not all zero. In this case, perfect reconstruction from compressed measurements is not expected. In this paper, a Bayesian CS algorithm is proposed for the first time to reconstruct approximately sparse signals. A robust treatment of the uncertain parameters is explored, including integration over the prediction-error precision parameter to remove it as a "nuisance" parameter, and the introduction of a successive relaxation procedure for the required optimization of the basis-coefficient hyper-parameters. The performance of the algorithm is investigated using compressed data from synthetic signals and real signals from structural health monitoring systems installed on a space-frame structure and on a cable-stayed bridge. Compared with other state-of-the-art CS methods, including our previously published Bayesian method, the new CS algorithm shows superior performance in reconstruction robustness and posterior uncertainty quantification for approximately sparse signals. Furthermore, our method can be utilized for the recovery of data lost during wireless transmission, even if the level of sparseness in the signal is low.

Combining Multiple Earthquake Models in Real Time for Earthquake Early Warning
https://resolver.caltech.edu/CaltechAUTHORS:20170613-131115322
Year: 2017
DOI: 10.1785/0120160331
The ultimate goal of earthquake early warning (EEW) is to provide local shaking information to users before the strong shaking from an earthquake reaches their location. This is accomplished by operating one or more real-time analyses that attempt to predict shaking intensity, often by estimating the earthquake's location and magnitude and then predicting the ground motion from that point source. Other EEW algorithms use finite rupture models or may directly estimate ground motion without first solving for an earthquake source. EEW performance could be improved if the information from these diverse and independent prediction models could be combined into one unified ground-motion prediction. In this article, we use the forecast shaking at each location as the common ground for combining all these predictions and introduce a Bayesian approach to creating better ground-motion predictions. We also describe how this methodology could be used to build a new generation of EEW systems that provide optimal decisions customized for each user based on the user's individual false-alarm tolerance and the time necessary for that user to react.
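As a rough illustration of the combination idea (a textbook sketch under our own simplifying assumptions, not the paper's actual algorithm): if each EEW model's forecast of the shaking at a site is treated as an independent Gaussian prediction, a flat-prior Bayesian update combines them by precision weighting, so the more certain models pull the unified forecast harder and the combined prediction is sharper than any single one.

```python
import math

def combine_gaussian_forecasts(forecasts):
    """Combine independent Gaussian predictions (mean, std) of the same
    quantity into one Gaussian by precision weighting (product of Gaussians)."""
    precisions = [1.0 / (s * s) for _, s in forecasts]
    total_precision = sum(precisions)
    mean = sum(p * m for p, (m, _) in zip(precisions, forecasts)) / total_precision
    return mean, math.sqrt(1.0 / total_precision)

# Hypothetical forecasts of log shaking intensity at one site from three EEW
# models: a point-source model, a finite-fault model, and a model that
# estimates ground motion directly (values invented for illustration).
forecasts = [(-1.0, 0.40), (-0.8, 0.20), (-0.9, 0.30)]
mean, std = combine_gaussian_forecasts(forecasts)
# The combined mean lies between the inputs, and std is smaller than any input's.
```

The independence assumption is the weak point of this sketch; correlated model errors would require weighting by the joint covariance instead.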