(PHD, 2021)

Abstract:

This study explores two problems: identifying structural damage in steel frame buildings through the use of dense instrumentation over the height of the building, and characterizing the ground motion response in urban Los Angeles following the 2019 Ridgecrest earthquakes through the use of dense instrumentation from available seismic networks, including the very dense Community Seismic Network.

First, we explore the possibility of tracing nonlinear behavior of a structure by updating an equivalent linear system model in short time segments of the earthquake-induced excitation and response time histories, using a moving time window approach. The stiffness- and damping-related parameters of the equivalent linear model are estimated by minimizing a measure of fit between the measured and model-predicted response time histories for each time window. We explore the effectiveness of the methodology for two example applications, a single-story and a six-story steel moment frame building. For the single-story building, the methodology is shown to be very effective in tracing the nonlinearities, while the six-story building is designed to also reveal the limitations of the methodology, mainly arising from the different types of model errors manifested in the formulation.
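
A minimal sketch of the moving-window idea on a single-degree-of-freedom system (the central-difference solver, the Nelder-Mead optimizer, and all numerical values are illustrative choices, not the thesis implementation):

```python
import numpy as np
from scipy.optimize import minimize

def sdof_response(k, c, m, ag, dt):
    """Displacement of a linear SDOF oscillator under base acceleration
    ag, via explicit central-difference time stepping."""
    u = np.zeros(len(ag))
    a0 = m / dt**2 + c / (2 * dt)
    for i in range(1, len(ag) - 1):
        u[i + 1] = (-m * ag[i] - k * u[i]
                    + m * (2 * u[i] - u[i - 1]) / dt**2
                    + c * u[i - 1] / (2 * dt)) / a0
    return u

def fit_window(ag_win, u_meas, m, dt, x0):
    """Equivalent-linear (k, c) minimizing the squared misfit between
    measured and model-predicted response over one time window."""
    def misfit(x):
        return np.sum((sdof_response(x[0], x[1], m, ag_win, dt) - u_meas) ** 2)
    return minimize(misfit, x0, method="Nelder-Mead").x

# synthetic check: recover a known stiffness and damping from one window
dt, m = 0.01, 1.0
t = np.arange(0.0, 5.0, dt)
ag = np.sin(2 * np.pi * t)                        # base excitation
u_meas = sdof_response(40.0, 0.5, m, ag, dt)      # "measured" response
k_est, c_est = fit_window(ag, u_meas, m, dt, x0=[30.0, 1.0])
```

Re-running the fit over successive windows, a drop in the recovered stiffness would be the signature of nonlinearity that the methodology traces.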

Next, we investigate the problem of structural damage identification through the use of sparse Bayesian learning (SBL) techniques. This is based on the premise that damage in a structure appears only in a limited number of locations. SBL methods previously applied to structural damage identification used measurements related to modal properties and were thus limited to linear models. Here we present a methodology that allows for the application of SBL to nonlinear models, using time history measurements recorded from a dense network of sensors installed along the building height. We develop a two-step optimization algorithm in which the most probable values of the structural model parameters and the hyper-parameters are iteratively obtained. An equivalent single-objective minimization problem that results in the most probable model parameter values is also derived. We consider the example problem of identifying damage in the form of weld fractures in a 15-story moment-resisting steel frame building, using a nonlinear finite element model and simulated acceleration data. Fiber elements and a bilinear material model are used to account for the change of local stiffness when cracks at the welds are subjected to tension; the model parameters characterize the loss of stiffness as the crack opens under tension. The damage identification results demonstrate the effectiveness and robustness of the proposed methodology in identifying the existence, location, and severity of damage for a variety of damage scenarios and degrees of model and measurement error. The results show the great promise of the SBL methodology for damage identification by integrating nonlinear finite element models and response time history measurements.
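
The sparsity mechanism can be illustrated on a linear toy problem with relevance-vector-machine-style hyper-parameter updates (Tipping's fixed point); the thesis applies the idea to nonlinear finite element models, which this sketch does not attempt, and all data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 20
Phi = rng.normal(size=(n, p))              # sensitivity-like design matrix
theta_true = np.zeros(p)
theta_true[[3, 11]] = [1.5, -2.0]          # "damage" at only two locations
sigma2 = 0.01                              # known measurement-noise variance
y = Phi @ theta_true + rng.normal(0.0, np.sqrt(sigma2), n)

alpha = np.ones(p)                         # hyper-parameters (prior precisions)
for _ in range(100):
    Sigma = np.linalg.inv(Phi.T @ Phi / sigma2 + np.diag(alpha))
    mu = Sigma @ Phi.T @ y / sigma2        # most probable parameter values
    gamma = 1.0 - alpha * np.diag(Sigma)   # "well-determinedness" factors
    # large alpha prunes a parameter; clip to keep the system well conditioned
    alpha = np.clip(gamma / (mu**2 + 1e-12), 1e-6, 1e8)
```

Components whose hyper-parameters grow large are driven toward zero, which is the mechanism that concentrates the identified damage in a few parameters.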

The final part of the thesis involves studying the ground motion response in urban Los Angeles during the two largest events (M7.1 and M6.4) of the 2019 Ridgecrest earthquake sequence using recordings from multiple regional seismic networks as well as a subset of 350 stations from the much denser Community Seismic Network. The response spectral (pseudo) accelerations for a selection of periods of engineering significance are calculated. Significant spectral acceleration amplification is present and reproducible between the two events. For the longer periods, coherent spectral acceleration patterns are visible throughout the Los Angeles Basin, while for the shorter periods, the motions are less spatially coherent. The dense Community Seismic Network instrumentation allows us to observe smaller-scale coherence even for these shorter periods. Examining possible correlations of the computed response spectral accelerations with basement depth and Vs30, we find the correlations to be stronger for the longer periods. Furthermore, we study the performance of two state-of-the-art methods for estimating ground motions for the largest event of the Ridgecrest earthquake sequence, namely 3D finite difference simulations and ground motion prediction equations. For the simulations, we are interested in the performance of the two Southern California Earthquake Center 3D Community Velocity Models (CVM-S and CVM-H). For the ground motion prediction equations, we consider four of the 2014 Next Generation Attenuation-West2 Project equations. For some cases, the methods match the observations reasonably well; however, neither approach is able to reproduce the specific locations of the maximum response spectral accelerations, or match the details of the observed amplification patterns.


(PHD, 2021)

Abstract:

Current earthquake early warning (EEW) algorithms are continuously optimized to strive for fast, accurate source parameter estimates (i.e., magnitude and location) for the rupturing earthquake, which are then used to predict the ground motions expected at a site. However, they may still struggle with challenging cases, such as offshore events and complex sequences. An envelope-based two-part search algorithm is developed to handle such cases. This algorithm matches different templates to the incoming observed ground motion envelopes to find the optimal earthquake source parameter estimates.

The algorithm consists of two methods. Method I is the standard grid search, and it uses Cua-Heaton ground motion envelopes as its templates; Method II is the extended catalog search, and its templates are waveform envelopes from past real and synthetic earthquakes. The grid search is intended for robustness and provides approximate average solutions, whereas the extended catalog search matches envelopes considering the station’s specific site and path effects. In parallel execution, Methods I and II work together – either by confirming each other’s solutions or accepting the solution with the stronger fit – to provide the best parameter estimates from the waveform data.
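
The matching step shared by both methods can be sketched with placeholder data (the real templates come from the Cua-Heaton envelope model and from past real and synthetic waveforms; here they are random vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
templates = rng.random((500, 60))   # candidate envelopes, one per source model
observed = templates[123] + rng.normal(0.0, 0.01, 60)  # noisy match to #123

misfit = np.sum((templates - observed) ** 2, axis=1)   # fit score per template
best = int(np.argmin(misfit))       # source parameters of this template win
```

In the actual algorithm the two template banks are scored in parallel, and a solution is accepted when the methods confirm each other or when one fit is clearly stronger.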

The main advantage of the two-part search algorithm is its ability to find parameter estimates with reduced uncertainties using the P-wave data from a single station. Many algorithms wait until multiple stations are triggered to reduce tradeoffs between the magnitude and location. This waiting time, however, is detrimental in EEW, for it cuts into the warning time that can be issued to nearby regions expected to experience strong shaking. The use of a single station would virtually eliminate this waiting time, maximizing the warning time without sacrificing the accuracy of the estimates.

Because EEW is a race against time, further steps are taken toward more rapid estimation of the earthquake source parameters. A Bayesian approach using prior information has the potential to reduce uncertainties that arise at the initial time points due to tradeoffs between the magnitude and location. This essentially increases the confidence of the initial parameter estimates, allowing alerts to be issued faster. A KD-tree nearest-neighbor search is also introduced to reduce the latency in finding the best-fitting solutions. In comparison to an exhaustive, brute-force search, it cuts the search time by examining only a fraction of the total database.

An envelope-based algorithm examines the shape and relative frequency content of the envelopes and makes appropriate judgments, just as a human seismologist would; it also addresses the issue of data transmission latencies. Overall, this algorithm is able to interpret the complexity of earthquakes and assess the features they hold to ultimately communicate information about significant ground shaking to different regions.


(PHD, 2018)

Abstract:

It is important to be able to accurately assess seismic risk so that vulnerabilities can be prioritized for retrofit, emergency response procedures can be properly informed, and insurance rates can be sustainably priced to manage risk. To assess the risk of a building (or class of buildings) collapsing in a seismic event, procedures exist for creating one or more mathematical models of the structure of interest and performing nonlinear time history analysis with a large suite of input ground motions to calculate the building’s seismic fragility and collapse risk. In this dissertation, three aspects of these procedures for assessing seismic collapse risk are investigated for the purpose of improving their accuracy.

It is common to use spectral acceleration with a damping ratio of 5% as a ground motion intensity measure (IM) for assessing collapse fragility. In this dissertation, the use of 70%-damped spectral acceleration as an IM is investigated, with a focus on evaluating its sufficiency and efficiency. Incremental dynamic analysis (IDA) is performed for 22 steel moment frame (SMF) models with 50 biaxial ground motion records to formally evaluate the performance of 70%-damped spectral acceleration as an IM for highly nonlinear response and collapse. It is found that 70%-damped spectral acceleration is much more efficient than 5%-damped spectral acceleration and much more sufficient with respect to epsilon for all considered levels of highly nonlinear response. Its efficiency and sufficiency also compare well with those of more advanced IMs such as average spectral acceleration.
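
For illustration, a pseudo-spectral-acceleration ordinate at an arbitrary damping ratio can be computed by time-stepping a damped oscillator; the solver and the artificial resonant input below are illustrative, not taken from the dissertation:

```python
import numpy as np

def pseudo_sa(ag, dt, period, zeta):
    """Pseudo-spectral acceleration (omega_n^2 times peak displacement)
    of a damped linear SDOF oscillator, via central-difference stepping."""
    wn = 2.0 * np.pi / period
    k, c, m = wn**2, 2.0 * zeta * wn, 1.0
    u = np.zeros(len(ag))
    a0 = m / dt**2 + c / (2 * dt)
    for i in range(1, len(ag) - 1):
        u[i + 1] = (-m * ag[i] - k * u[i]
                    + m * (2 * u[i] - u[i - 1]) / dt**2
                    + c * u[i - 1] / (2 * dt)) / a0
    return wn**2 * np.max(np.abs(u))

dt = 0.005
t = np.arange(0.0, 20.0, dt)
ag = np.sin(2 * np.pi * 1.0 * t)   # resonant input for a 1 s oscillator
sa_05 = pseudo_sa(ag, dt, period=1.0, zeta=0.05)
sa_70 = pseudo_sa(ag, dt, period=1.0, zeta=0.70)
```

The heavier damping suppresses the resonant peak and smooths the spectral ordinate, which is the basic behavior behind using a highly damped IM.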

When selecting input ground motions for nonlinear time history analysis, most engineers select ground motion records from the NGA-West2 database, which are processed with high-pass filters to remove long-period noise. In this dissertation, the extent to which these filters remove actual ground motion that is relevant to nonlinear time history analysis is evaluated. 52 near-source ground motion records from large-magnitude events are considered. Some records are processed by applying high-pass filters and others by record-specific tilt corrections; raw and NGA-West2 records are also considered. IDA is performed for 9-, 20-, and 55-story steel moment frame models with these processed records to assess the effects of ground motion processing on the calculated collapse capacity. It is found that if the cutoff period (Tc) is at least 40 seconds, applying a high-pass filter has no more than a negligible effect on collapse capacity for any of the considered records or building models. For shorter Tc (e.g., 10 or 15 seconds), the filters sometimes have a large effect on the calculated collapse capacity, in some cases changing it by over 50%, even if Tc is much larger than the building’s fundamental period. Of the considered ground motions, simply using the raw, uncorrected records usually yields more accurate results than using ground motions that have been processed with Tc less than or equal to 20 seconds.

For an existing building with unknown design plans, one might perform a collapse risk assessment using an archetype model in which the specific member sizes are assumed based on the relevant design code and building site. In this dissertation, the sensitivity of seismic collapse risk estimates to design criteria and procedures is evaluated for six 9-story and four 20-story post-Northridge SMFs. These SMFs are designed for downtown Los Angeles using different design procedures according to ASCE 7-05 and ASCE 7-10. Seismic risk analysis is performed using the results of IDA with 44 ground motion records, and the results are compared to those of pre-Northridge models. It is found that the collapse risk of 9-story SMFs designed according to performance-based design varies by 3x, owing to differences in the ground motion prediction equations (GMPEs) used to generate site-specific response spectra. There is generally less variation in the collapse risk estimates of 20-story post-Northridge SMFs than of 9-story post-Northridge SMFs because wind drift limits control the design of many members of the 20-story SMFs. Differences in collapse risk between pre- and post-Northridge SMFs are found to be at least 4x and 8x for the 9- and 20-story models, respectively. Furthermore, in response to four strong ground motion records from large-magnitude events, some of the 9-story and all of the 20-story pre-Northridge SMFs experience collapse, and most of the post-Northridge SMFs experience significant damage (MIDR > 0.03).


(PHD, 2018)

Abstract:

Existing Earthquake Early Warning (EEW) algorithms use waveform analysis for earthquake detection, estimation of source parameters (i.e., magnitude and hypocenter location), and prediction of peak ground motions at sites near the source. The latency of warning delivery due to data collection significantly restricts the usefulness of the system, especially for users in the vicinity of the earthquake source, as the warning may not arrive before the strong shaking. This thesis discusses several methods to reduce the warning latency while maintaining reliability and robustness, so that the warning time can be maximized for users to take appropriate actions to reduce casualties and economic losses.

First, we incorporate the seismicity forecast information from the Epidemic-Type Aftershock Sequence (ETAS) model into EEW as prior information, under the Bayesian probabilistic inference framework. Similar to a human’s decision-making process, the Bayesian approach updates the probabilities of the estimates as more information becomes available. This allows us to reduce the time required for reliable earthquake signal detection from at least 3 seconds to 0.5 seconds. Furthermore, the initial error of the hypocenter location estimate is reduced by 58%. The performance of the algorithm is further improved during aftershock sequences and earthquake swarms.
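
A toy sketch of the Bayesian update (all numbers invented): an ETAS-style seismicity forecast supplies a prior over candidate source regions, and the first fraction of a second of single-station data supplies a likelihood.

```python
import numpy as np

prior = np.array([0.70, 0.20, 0.10])        # aftershock zone strongly favored
likelihood = np.array([0.30, 0.40, 0.30])   # early data alone are ambiguous

posterior = prior * likelihood
posterior /= posterior.sum()                # normalized posterior over regions
```

Even an ambiguous likelihood yields a confident posterior when the prior is informative, which is how the prior shortens the required detection window.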

Second, we introduce the use of a multidimensional data structure (a KD tree) to organize the seismic database, so that the querying time for the nearest-neighbor search during earthquake source parameter estimation can be reduced. The processing time of the KD tree is approximately 15% of that of a linear exhaustive search, which allows the potential use of large seismic databases in real time.
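
An illustrative sketch of the lookup (the data are random placeholders, not a real seismic database): organize feature vectors in a KD tree so that each query touches far fewer entries than a linear exhaustive search.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
database = rng.random((100_000, 3))   # e.g. envelope-derived feature vectors
query = rng.random(3)

tree = cKDTree(database)              # built once, reused for every query
dist, idx = tree.query(query)         # nearest neighbor, ~O(log n) expected

# brute-force equivalent, for comparison
idx_brute = int(np.argmin(np.linalg.norm(database - query, axis=1)))
```

The tree is constructed offline; in real time only the queries run, which is where the reported speedup matters.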

EEW is an interdisciplinary subject that involves collaboration among different scientific and engineering communities. Only by maximizing the warning time can such a unified system be successful in enabling protective actions before, during, and after an earthquake disaster.


(PHD, 2016)

Abstract: Current earthquake early warning systems usually make magnitude and location predictions and send out a warning to users based on those predictions. We describe an algorithm that assesses the validity of the predictions in real time. Our algorithm monitors the envelopes of horizontal and vertical acceleration, velocity, and displacement. We compare the observed envelopes with the ones predicted by Cua & Heaton’s envelope ground motion prediction equations (Cua 2005). We define a “test function” as the logarithm of the ratio between observed and predicted envelopes at every second in real time. Once the envelopes deviate beyond an acceptable threshold, we declare a misfit. Kurtosis and skewness of the time-evolving test function are used to rapidly identify a misfit. Real-time kurtosis and skewness calculations are also inputs to both probabilistic (Logistic Regression and Bayesian Logistic Regression) and nonprobabilistic (Least Squares and Linear Discriminant Analysis) models that ultimately decide if there is an unacceptable level of misfit. This algorithm is designed to work across a wide range of amplitude scales. When tested with synthetic and actual seismic signals from past events, it works for both small and large events.
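
A toy version of the test function with made-up envelopes (the threshold and envelope shapes are illustrative, not the calibrated values):

```python
import numpy as np
from scipy.stats import kurtosis, skew

t = np.arange(0.0, 30.0, 1.0)                    # one value per second
predicted = 0.1 * np.exp(-((t - 10.0) / 5.0) ** 2) + 1e-3
observed = predicted.copy()
observed[20:] *= 10.0                             # prediction breaks down late

test_fn = np.log10(observed / predicted)          # ~0 while prediction holds

# summary statistics of the evolving test function flag the deviation
k, s = kurtosis(test_fn), skew(test_fn)
misfit_declared = np.any(np.abs(test_fn) > 0.5)   # placeholder threshold
```

In the real system these statistics feed the probabilistic and nonprobabilistic classifiers that make the final misfit decision.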


(PHD, 2016)

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models using this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. Both the productivity of the researcher and the level of confidence in the model being analyzed are greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was conducted between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in Perform, a program more capable of conducting highly nonlinear analysis. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses could not be conducted for the three-story one-bay chevron-braced frame, the two-bay chevron-braced frame, or the twenty-story moment frame. With the current trend toward ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building’s behavior under these extreme load scenarios.

Following this, a final study was conducted on Hall’s U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation were explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, following the onset of inelastic behavior the analysis tool failed to converge. However, for the small number of time steps over which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in the material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.


(PHD, 2015)

Abstract:

This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations of a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.

As a first step, kinematic source inversions of past earthquakes in the magnitude range of 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake, a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from the Uniform California Earthquake Rupture Forecast (UCERF) to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.

Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2-2.0 s) empirical Green’s function synthetics on long-period (> 2.0 s) synthetics computed from kinematic source models using the spectral element method.

Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses are conducted of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
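
The final aggregation step can be illustrated with invented numbers: given each scenario's 30-year occurrence probability and the conditional probability of exceeding a limit state for that scenario, and assuming scenario occurrences are independent:

```python
import numpy as np

p = np.array([0.02, 0.05, 0.01])   # 30-year scenario occurrence probabilities
q = np.array([0.60, 0.10, 0.90])   # P(exceed limit state | scenario occurs)

# probability at least one scenario occurs AND causes exceedance
p_exceed_30yr = 1.0 - np.prod(1.0 - p * q)
```

The actual framework performs this kind of aggregation per site and per limit state, with the redistributed scenario probabilities.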


(PHD, 2015)

Abstract:

Few credible source models are available from past large-magnitude earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and rise time) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and displacement (PGD) in Los Angeles and surrounding basins due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, provide the PGV and PGD data.


(PHD, 2014)

Abstract:

In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.

For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.

To predict whether a building will collapse in response to a given ground motion, we first extract the long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building’s natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building’s capacity, the building is predicted to collapse. Otherwise, it is expected to survive the ground motion.
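
A minimal sketch of the PFA check (the filter order, cutoff frequency, synthetic record, and capacity value are placeholders, not the suggested values from the thesis):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def peak_filtered_acceleration(ag, dt, cutoff_hz, order=2):
    """Low-pass the ground acceleration and return the peak of the
    filtered time history."""
    b, a = butter(order, cutoff_hz, btype="low", fs=1.0 / dt)
    return np.max(np.abs(filtfilt(b, a, ag)))

dt = 0.01
t = np.arange(0.0, 20.0, dt)
# long-period pulse plus short-period content (amplitudes in g, illustrative)
ag = 0.3 * np.sin(2 * np.pi * 0.25 * t) + 0.2 * np.sin(2 * np.pi * 8.0 * t)

pfa = peak_filtered_acceleration(ag, dt, cutoff_hz=1.0)
capacity = 0.4                      # placeholder lateral capacity (g)
predicted_collapse = pfa > capacity
```

Only the long-period content survives the filter, so the comparison against the lateral capacity ignores short-period shaking that the model treats as non-damaging for collapse.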

The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.

The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.

Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.

We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.


(PHD, 2014)

Abstract:

The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants’ personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.
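
A toy version of the decentralized pick-and-aggregate idea (the thresholds, client count, and event model are invented): each client reports a "pick" only when its own anomaly score exceeds a device-level threshold, and the server declares an event only when enough clients pick at once.

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, n_steps = 50, 100
signals = rng.normal(0.0, 1.0, (n_clients, n_steps))  # per-device noise
signals[:, 60] += 6.0                    # a common event at time step 60

thresholds = 4.0 * np.ones(n_clients)    # client-side anomaly thresholds
picks = np.abs(signals) > thresholds[:, None]         # client-side decisions

# server-side: event when at least 20% of clients pick in the same step
event_steps = np.where(picks.sum(axis=0) >= 0.2 * n_clients)[0]
```

Thresholding on the client keeps the per-device false-alarm rate low and the uplink traffic small, while spatial aggregation on the server makes the network-level detection reliable.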

The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers’ smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques to design, build and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.


(PHD, 2014)

Abstract:

The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.

Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.

The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is successfully applied and is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. 
Acceleration records recorded after the date of damage show a clear increase in high-frequency short-duration pulses compared to those previously recorded. One pulse from the undamaged state and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
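A minimal sketch of the pulse-extraction step described above, assuming a crude first-difference high-pass filter and a robust amplitude threshold (both are stand-ins for the thesis's actual identification and classification scheme):

```python
import numpy as np

def detect_pulses(acc, fs, thresh_sigma=6.0, min_gap=0.05):
    """Flag short-duration high-frequency pulses: samples whose high-passed
    amplitude exceeds a robust noise threshold, grouped into events."""
    hp = np.diff(acc, prepend=acc[0])                       # crude high-pass
    sigma = 1.4826 * np.median(np.abs(hp - np.median(hp)))  # robust noise scale
    hits = np.flatnonzero(np.abs(hp) > thresh_sigma * sigma)
    if hits.size == 0:
        return []
    # Merge hits separated by less than min_gap seconds into single pulses.
    gaps = np.diff(hits) > min_gap * fs
    starts = np.r_[hits[0], hits[1:][gaps]]
    return list(starts / fs)        # pulse onset times in seconds

# Synthetic check: low-level noise plus two injected spikes.
rng = np.random.default_rng(1)
fs = 1000.0
acc = 0.01 * rng.standard_normal(5000)
acc[1200] += 1.0
acc[3700] += 1.0
pulses = detect_pulses(acc, fs)
print(pulses)
```

Comparing the pulse onsets and counts from a baseline recording against a later recording is then the before/after comparison the method relies on.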


(PHD, 2014)

Abstract: This thesis describes engineering applications that come from extending seismic networks into building structures. The proposed applications will benefit from the data produced by newly developed crowd-sourced seismic networks composed of low-cost accelerometers. An overview of the Community Seismic Network and its earthquake detection method is presented. In the structural array components of crowd-sourced seismic networks, there may be instances in which a single seismometer is the only data source available from a building. A simple prismatic Timoshenko beam model with soil-structure interaction (SSI) is developed to approximate mode shapes of buildings using natural frequency ratios. A closed-form solution with complete vibration modes is derived. In addition, a new method to rapidly estimate the total displacement response of a building based on limited observational data, in some cases from a single seismometer, is presented. The total response of a building is modeled as the combination of the initial vibrating motion due to an upward traveling wave and the subsequent motion as the low-frequency resonant mode response. Furthermore, the expected shaking intensities in tall buildings will be significantly different from those on the ground during earthquakes. Examples are included to estimate the characteristics of shaking that can be expected in mid-rise to high-rise buildings. The development of engineering applications (e.g., human comfort prediction and automated elevator control) for earthquake early warning systems using a probabilistic framework and statistical learning techniques is addressed.


(MS, 2014)

Abstract:

Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.

This thesis focuses on how learning at both the sensor and network level can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
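The sensor-selection problem with a monotone submodular objective admits the classic greedy algorithm, which is the centralized baseline behind distributed and online variants; the (1 - 1/e) approximation guarantee is the standard Nemhauser-Wolsey-Fisher result. A toy coverage instance (the sensor names and coverage sets are invented for illustration):

```python
def greedy_select(sensors, coverage, k):
    """Greedy maximization of a monotone submodular coverage objective:
    repeatedly add the sensor with the largest marginal gain in newly
    covered elements."""
    chosen, covered = [], set()
    for _ in range(k):
        gains = {s: len(coverage[s] - covered) for s in sensors if s not in chosen}
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break                      # nothing new to cover
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Which 2 sensors observe the most distinct grid cells?
coverage = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6, 7},
    "s4": {1, 7},
}
chosen, covered = greedy_select(list(coverage), coverage, k=2)
print(chosen, sorted(covered))
```

Greedy first takes s3 (gain 4), then s1 (gain 3), covering all seven cells; no budget-2 choice does better here, and in general greedy is guaranteed to be within a factor (1 - 1/e) of optimal.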


(PHD, 2011)

Abstract:

Seismic inversion and computational models have shown that earthquake ruptures may propagate in one of two basic modes: the crack-like mode and the slip-pulse mode. In this work we use analytical and numerical techniques to study the dynamics and implications of pulse-like ruptures propagating on strong velocity-weakening frictional interfaces, using both discrete and continuum models of fracture.

Results of the study of the discrete spring-block slider model suggest that strong velocity-weakening friction might give rise to the propagation of unsteady slip pulses and chaotic dynamics. The prestress in most of these systems evolves into very heterogeneous spatial distributions characterized, in general, by non-Gaussian statistics and power-law spectral properties. It is also shown that the combined effect of slip pulse propagation and strong velocity-weakening friction could produce size effects in strength, with the strength decreasing as a power law with increasing rupture length.

By examining the energy budget of slip pulses in the discrete model, we show that it is possible to derive a nonlinear differential equation that could predict the final slip distribution in an event, given the prestress existing before that event and some information about friction and pulse dynamics. The equation successfully replicates many of the macroscopic slip features, including the slip distribution and total rupture length, and can also match many long-time statistics regarding the prestress evolution and the event size distribution.

Results from the continuum study suggest that the absence of steady pulses in previous studies could be attributed to the details of the nucleation procedure. We show that steady pulses could exist on strong velocity-weakening friction and uniform prestress if both the prestress and nucleation procedures are correctly tuned. We find that steady pulses are unstable to perturbations in the form of a step in the prestress and could arrest quickly in regions of low prestress. Steady pulses are also found to adapt well to local fluctuations in the prestress, leading to heterogeneous slip distributions. This result might have important implications for the problem of slip complexity in real earthquakes.


(PHD, 2009)

Abstract:

With the exception of the 2003 Tokachi-Oki earthquake, strong ground recordings from large subduction earthquakes (Mw > 8.0) are meager. Furthermore, there are no strong motion recordings of giant earthquakes. However, there is a growing set of high-quality broadband teleseismic recordings of large and giant earthquakes. In this thesis, we use recordings from the 2003 Tokachi-Oki (Mw 8.3) earthquake as empirical Green’s functions to simulate the rock and soil ground motions from the 2004 Sumatra-Andaman earthquake and a scenario Mw 9.2 Cascadia subduction earthquake in the frequency band of interest to flexible and tall buildings (0.075 to 1 Hz). The effect of amplification by the Seattle basin is considered by using a basin response transfer function, which is derived by deconvolving the teleseismic waves recorded at rock sites from those at basin sites in the SHIP02 experiment. These strong ground motion time histories are used to simulate the fully nonlinear response of 20-story and 6-story steel moment-frame buildings designed according to both the U.S. 1994 Uniform Building Code and the 1987 Japanese building code. We consider several realizations of the hypothetical subduction earthquake. The basin amplification and the down-dip limit of rupture are of particular importance to the simulated ground motions in Seattle. At rock sites, if slip is limited to offshore regions, the building model responses are mostly in the linear range. However, if rupture is extended beyond the Olympic Mountains, large deformations occur in the high-rise building models, especially those with brittle welds. At basin sites, our simulations indicate the collapse of all building models for a source model with rupture beyond the Olympic Mountains, whereas buildings with perfect welds avoid collapse for simulations based on a source model with rupture limited to offshore.
The synthetic ground motions all have very long durations (more than 5 minutes at soil sites), and our building simulations should be considered a low estimate, since the degradation model used in our simulation did not consider local flange buckling.


(PHD, 2008)

Abstract: This thesis studies the response of steel moment-resisting frame buildings in simulated strong ground motions. I collect 37 simulations of crustal earthquakes in California. These ground motions are applied to nonlinear finite element models of four types of steel moment frame buildings: six or twenty stories, with either a stiffer, higher-strength design or a more flexible, lower-strength design. I also consider the presence of fracture-prone welds in each design. Since these buildings experience large deformations in strong ground motions, the building states considered in this thesis are collapse, total structural loss (must be demolished), and, if repairable, the peak inter-story drift. This thesis maps these building responses onto the simulation domains, which cover many sites in the San Francisco and Los Angeles regions. The building responses can also be understood as functions of ground motion intensity measures, such as pseudo-spectral acceleration (PSA), peak ground displacement (PGD), and peak ground velocity (PGV). This thesis develops building response prediction equations to describe probabilistically the state of a steel moment frame given a ground motion. The presence of fracture-prone welds increases the probability of collapse by a factor of 2–8. The probability of collapse of the more flexible design is 1–4 times that of the stiffer design. The six-story buildings are slightly less likely to collapse than the twenty-story buildings assuming sound welds, but the twenty-story buildings are 2–4 times more likely to collapse than the six-story buildings if both have fracture-prone welds. A vector intensity measure of PGD and PGV predicts collapse better than PSA. Models based on the vector of PGD and PGV predict total structural loss as well as models using PSA. PSA alone best predicts the peak inter-story drift, assuming that the building is repairable.
As “rules of thumb,” the twenty-story steel moment frames with sound welds collapse in ground motions with long-period PGD greater than 1 m and long-period PGV greater than 2 m/s, and they are a total structural loss for long-period PGD greater than 0.6 m and long-period PGV greater than 1 m/s.


(PHD, 2007)

Abstract:

Earthquake early warning systems have attracted growing attention, and many seismologists and engineers are working toward their practical application. Existing earthquake early warning systems provide estimates of the location and size of earthquakes, and ground motions at a site are then estimated as a function of the epicentral distance and site soil properties. However, for large earthquakes, the energy is radiated from a large area surrounding the entire fault plane, and the epicenter indicates only where the rupture starts.

In this project, we focus on an earthquake early warning system considering fault finiteness. We provide a new methodology to estimate rupture geometry and slip size on a finite fault in real time for the purpose of earthquake early warning.

We propose a new model to simulate high-frequency motions from earthquakes with large fault dimension: the envelope of high-frequency ground motion from a large earthquake can be expressed as a root-mean-squared combination of envelope functions from smaller earthquakes. We parameterize the fault geometry with an epicenter, a fault strike, and two along-strike rupture lengths, and find these parameters by minimizing the residual sum of squares of errors between ground motion models and observed ground motion envelopes.
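The root-mean-squared envelope combination can be written E(t) = sqrt(sum_i e_i(t - t_i)^2), where e_i are sub-event envelopes delayed by rupture times t_i. A sketch with a hypothetical sub-event envelope shape (linear rise, exponential decay; the envelope functions in the thesis are empirical, and these parameter values are invented):

```python
import numpy as np

def subevent_envelope(t, rise=2.0, decay=10.0, peak=1.0):
    """Hypothetical envelope of a small earthquake: linear rise to the peak,
    then exponential decay.  A stand-in for an empirical envelope function."""
    env = np.where(t < rise, peak * t / rise, peak * np.exp(-(t - rise) / decay))
    return np.where(t < 0.0, 0.0, env)

def combined_envelope(t, rupture_times):
    """Large-event envelope as the RMS combination of delayed sub-event
    envelopes, following E(t) = sqrt(sum_i e_i(t - t_i)^2)."""
    return np.sqrt(sum(subevent_envelope(t - t0) ** 2 for t0 in rupture_times))

t = np.linspace(0.0, 60.0, 601)
# Sub-events every 5 s as rupture propagates along the fault.
env = combined_envelope(t, rupture_times=[0.0, 5.0, 10.0, 15.0])
print(round(float(env.max()), 3))
```

Fitting fault-geometry parameters then amounts to minimizing the residual sum of squares between envelopes like `env` and the observed ground motion envelopes.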

To provide the information on the spatial extent of rupture geometry, we present a methodology to estimate a fault dimension of an earthquake in real time by classifying seismic records into near-source or far-source records. We analyze peak ground motions and use Bayesian model class selection to find a function that best classifies near-source and far-source records based on these parameters. This discriminant function is useful to estimate the fault rupture dimension in real time, especially for large earthquakes.

In order to characterize slip on the fault in real time, we construct an analytical function to estimate slip on the fault from near-source ground displacement observations. In real-time analysis, we back project the recorded displacement data onto the fault line to estimate the size of the slip on the fault. The simulation results show that the slip size estimation predicts the observed GPS static displacement on the fault quite well. This current slip size on the fault is used for a probabilistic prediction of additional rupture length in the near future. We characterize the distribution of additional rupture length conditioned on the current slip on the fault for the ongoing rupture from the simulation with a 1-D slip model. The probability density of additional rupture length can be approximated by a lognormal distribution conditioned on the current slip size.
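The final approximation, a lognormal distribution for the additional rupture length conditioned on the current slip, amounts to fitting the mean and standard deviation of the logarithm of the simulated lengths within a slip bin. A sketch on synthetic data (the sample values below are invented stand-ins for the 1-D slip model output):

```python
import numpy as np
from math import erf, sqrt

def fit_lognormal(samples):
    """Fit a lognormal by matching the mean and std of log(samples);
    returns (mu, sigma) of the underlying normal."""
    logs = np.log(samples)
    return float(logs.mean()), float(logs.std(ddof=1))

def prob_exceeds(length, mu, sigma):
    """P(additional rupture length > length) under the fitted lognormal,
    via the complementary normal CDF of log(length)."""
    z = (np.log(length) - mu) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# Synthetic "simulation output": additional rupture lengths (km) for ruptures
# whose current slip falls in one bin.
rng = np.random.default_rng(42)
lengths = rng.lognormal(mean=np.log(20.0), sigma=0.8, size=2000)

mu, sigma = fit_lognormal(lengths)
p50 = prob_exceeds(50.0, mu, sigma)
print(round(float(np.exp(mu)), 1), round(sigma, 2), round(p50, 3))
```

Here exp(mu) is the median additional rupture length for the bin, and `prob_exceeds` gives the probabilistic forecast of further rupture used for warning.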


(PHD, 2007)

Abstract:

The Wigner-Ville Distribution, and related refinements, represent a class of advanced time-frequency analysis tools that are distinguished from Fourier and wavelet methods by an increase in resolution in the time frequency plane. Time-frequency analysis provides a set of exploratory tools for analyzing changing frequency content in a signal, which can then be correlated with damage patterns in a structure.

For systems of interest to engineers, investigating the changing properties of a system is typically performed by analyzing vibration data from the system, rather than by direct inspection of each component. Nonlinear elastic behavior in the force-displacement relationship can decrease the apparent natural frequencies of the system; these changes typically occur over fractions of a second in moderate to strong excitation, and the system gradually recovers to pre-event levels. Structures can also suffer permanent damage (e.g., plastic deformation or fracture), permanently decreasing the observed natural frequencies as the system loses stiffness. Modern building instrumentation allows for an unprecedented investigation into the changing dynamic properties of structures: a framework for using time-frequency analysis methods for instantaneous system identification is discussed.
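The Wigner-Ville distribution itself can be computed by Fourier-transforming the instantaneous autocorrelation x[n+k]·conj(x[n-k]) over the lag k. A minimal numpy sketch, without the smoothing refinements the related distributions add; note the well-known discrete-time property that the ridge appears at twice the instantaneous frequency bin:

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution: FFT over the lag axis of the
    instantaneous autocorrelation r[n, k] = x[n+k] * conj(x[n-k])."""
    x = np.asarray(x, dtype=complex)
    n = len(x)
    wvd = np.zeros((n, n))
    for t in range(n):
        kmax = min(t, n - 1 - t)            # lags that stay inside the record
        r = np.zeros(n, dtype=complex)
        for k in range(-kmax, kmax + 1):
            r[k % n] = x[t + k] * np.conj(x[t - k])
        # r is conjugate-symmetric in k, so its FFT is real.
        wvd[:, t] = np.real(np.fft.fft(r))  # frequency bins down the column
    return wvd

# A chirp's WVD ridge should move to higher frequency bins over time.
fs, T = 256.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
chirp = np.exp(2j * np.pi * (10.0 * t + 20.0 * t ** 2))  # 10 -> 50 Hz analytic chirp
W = wigner_ville(chirp)
early = int(np.argmax(W[:, 30]))    # dominant bin early in the record
late = int(np.argmax(W[:, 220]))    # dominant bin late in the record
print(early, late)
```

For a structure, the same ridge-tracking idea applied to floor accelerations is what lets a drop in apparent natural frequency be localized in time.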


(PHD, 2006)

Abstract:

Current stress studies often utilize stress inversions of earthquake focal mechanisms to estimate four parameters of the spatially uniform stress tensor: three principal stress orientations and a ratio of the principal stresses. An implicit assumption in these studies is that earthquakes are good random samplers of stress; hence, the set of earthquake focal mechanisms within some region can be used to estimate the spatial mean stress state within the region. Numerical simulations indicate some regions, such as Southern California, have sufficient stress heterogeneity to bias the stress inversions toward the stress rate orientation, and that stress studies using stress inversions need to be reinterpreted by taking this bias into account. An outline of how to subtract out this bias to yield the actual spatial mean stress is presented.

Numerical simulations demonstrate that spatially heterogeneous stress in 3D can bias stress inversions of focal mechanisms toward the stress rate tensor instead of the stress. Stochastic models of 3D spatially heterogeneous stress are created, synthetic earthquake focal mechanisms are generated using the Hencky-Mises plastic yield criterion, and results are compared with Hardebeck’s Southern California earthquake catalog [Hardebeck, 2006]. The presence of 3D spatial stress heterogeneity biases which orientations are most likely to fail, pulling them toward the stress rate tensor. When synthetic focal mechanisms are compared to real data, estimates of two stress heterogeneity parameters for Southern California are obtained: 1) A spatial smoothing parameter, α≈0.8, where α describes the spectral falloff of 1D cross sections through a 3D grid for the three principal stresses and three orientation angles. 2) A heterogeneity ratio, HR ≈ 1.25, which describes the relative amplitude of the spatial stress heterogeneity to the spatial mean stress. The estimate for α is tentative; however, varying α for α ≤ 1.0 has little to no effect on the observation that spatially heterogeneous stress biases failures toward the stress rate. The estimate for HR is more robust and produces a bias toward the stress rate of approximately 40%. If the spatial mean stress and the stress rate are not aligned, the average failure mechanism should yield a stress estimate from stress inversions approximately halfway between the two.


(PHD, 2005)

Abstract:

The Virtual Seismologist method for earthquake early warning uses a Bayesian approach to find the most probable magnitude and location estimates given the incoming ground motion envelopes from a rupturing earthquake. Ground motion ratios and ground motion envelope attenuation relationships are used to estimate magnitude and epicentral location as early as 3 seconds after the initial P wave detection. The use of prior information distinguishes this method from other proposed methods for seismic early warning. The state of health of the seismic network, previously observed seismicity, fault locations, and the Gutenberg-Richter relationship are the types of prior information useful in resolving trade-offs in the initial source estimates which are unresolved by the limited data. Short-term earthquake forecasts are ideal priors for seismic early warning.
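The effect of a Gutenberg-Richter prior on an early, data-poor magnitude estimate can be illustrated with a one-dimensional toy Bayesian update. The Gaussian "envelope likelihood" and all numbers below are illustrative assumptions, not the method's actual envelope attenuation relationships:

```python
import numpy as np

# Magnitude grid for the posterior.
M = np.linspace(2.0, 8.0, 601)

# Toy likelihood: an early envelope amplitude that, on its own, is
# consistent with a range of magnitudes around 5.5.
likelihood = np.exp(-0.5 * ((M - 5.5) / 0.5) ** 2)

# Gutenberg-Richter prior: small events are exponentially more common.
b = 1.0
prior = 10.0 ** (-b * M)

posterior = likelihood * prior
posterior /= posterior.sum()        # normalize over the grid

map_no_prior = float(M[np.argmax(likelihood)])
map_with_prior = float(M[np.argmax(posterior)])
print(round(map_no_prior, 2), round(map_with_prior, 2))
```

The prior pulls the maximum a posteriori magnitude below the likelihood peak, which is exactly the trade-off resolution the abstract describes: with little data, the estimate defaults toward the more probable (smaller) events, and the data override the prior as more envelopes arrive.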

Having a high density of stations with real-time telemetry reduces the complexity involved in finding the most probable source estimates and communicating these estimates to early warning subscribers. The benefits of prior information are most evident for regions with low station density. Most early warning studies are focused exclusively on either the source estimation problem, or how subscribers use the warning information. The inclusion of prior information ultimately requires a level of coordination and communication between the network broadcasting the early warning information and the subscribers that is not consistent with this divide. The need for a more integrated approach to seismic early warning which considers the source estimation and user response as interacting and interrelated parts of a single problem is discussed.

A parameterization that decomposes observed ground motion envelopes into P-wave, S-wave, and ambient noise envelopes is developed and applied to a large suite of observed ground motion envelopes recorded within 200 km of 2.0 < M < 7.3 Southern California earthquakes. Separate attenuation relationships are developed to describe the magnitude, distance, and site dependence of various channels of P- and S-wave envelopes. The P-wave relationships allow the early warning source estimates to be obtained from observed P-wave amplitudes. Aside from early warning applications, these envelope attenuation relationships are used to investigate the average properties of ground motions recorded by the Southern California Seismic Network. Station-specific amplification factors for 150 Southern California Seismic Network stations were obtained for horizontal and vertical acceleration, velocity, and displacement amplitudes, and are included (Excel format) as external multimedia objects.


(PHD, 2004)

Abstract:

Damping limits the resonance of vibrating systems and thus higher anelastic damping is generally favored for engineered structures subjected to earthquake motions, because it means that a structure can dissipate a larger percentage of its energy per oscillation cycle. However, there are elastic processes that can mimic the effects of anelastic damping. In particular, buildings lose kinetic energy when their motion generates elastic waves in the Earth, which is referred to as radiation damping. Unlike anelastic damping, strong radiation damping may not always be desirable, as reciprocity can be used to show that buildings may be strongly excited by elastic waves of similar characteristics to those generated by the building’s forced vibrations. As a result, it is important to quantify the radiation damping of structures to be able to improve their design.

Several experiments using Caltech’s nine-story Millikan Library as a controlled source were performed to investigate the radiation damping of the structure. The building was forced to resonate at its North-South and East-West fundamental modes, and seismometers were deployed around the structure in order to measure the waves generated by the library’s excitation. From this “local” data set, we determine the elastic properties of the soils surrounding the structure and estimate what percentage of the total damping of the structure is due to energy radiation. Using Fourier transforms, we were also able to detect these waves at distances up to 400 km from the source using the broadband stations of the Southern California Seismic Network. This “regional” data set is used in an attempt to identify arrival times and to constrain the type of waves being observed at regional distances.


(PHD, 2004)

Abstract:

The recording of ground motions is a fundamental part of both seismology and earthquake engineering. The current state-of-the-art 24-bit continuously recording seismic station is described, with particular attention to the frequency range and dynamic range of the seismic sensors typically installed. An alternative method of recording strong motions would be to deploy a velocity sensor rather than an accelerometer. Such an instrument has the required ability to measure the strongest earth motions, with enhanced long-period sensitivity.

An existing strong-motion velocity sensor from Japan was tested for potential use in US seismic networks. It was found to be incapable of recording the strong motions typically observed near the source of even moderate earthquakes. The instrument was widely deployed near the M8.3 September 2003 Tokachi-Oki earthquake. The dataset corroborated our laboratory observations of low velocity saturations. The dataset also served to show that all inertial sensors are equally sensitive to tilting, which is widespread in large earthquakes. High-rate GPS data were also recorded during the event. Co-locating high-rate GPS with strong-motion sensors is suggested to be currently the optimal method by which the complete and unambiguous deformation field at a station can be recorded.

A new application of the modern seismic station is to locate stations inside structures. A test station on the 9th floor of Millikan Library is analysed. The continuous data stream facilitates analysis of the building response to ambient weather, forced vibration tests, and the small earthquakes that have occurred during its lifetime. The structure’s natural frequencies are shown to be sensitive not only to earthquake excitation, but also to rainfall, temperature, and wind. This has important implications for structural health monitoring, which assumes that the natural frequencies of a structure do not vary significantly unless there is structural damage.

Moderate to small earthquakes are now regularly recorded by dense, high dynamic range networks. This enhanced recording of the earthquake and its aftershock sequences makes possible the development of a Green’s Function deconvolution approach for determining rupture parameters.


(Engineer, 1995)

Abstract:

In order to investigate near-field ground motions, an important Lucerne Valley record from the Landers earthquake is studied. The Lucerne Valley record was recorded on the Kinemetrics SMA-2/EMA instrument located 2 km from the fault. Since the characteristics of the SMA-2/EMA instrument were not completely understood and the conventional data processing procedures have difficulty in recovering long-period information from near-field earthquake accelerograms, an instrument test on the SMA-2/EMA is conducted and a new data processing procedure is developed to perform the instrument and baseline corrections.

For an electro-magnetic transducer, an additional parameter, the corner frequency, must be considered during instrument correction of SMA-2/EMA recorded accelerograms, beyond the natural frequency, electronic damping ratio, and sensitivity. For this purpose, a special instrument correction filter was derived to support the instrument correction, and a laboratory test of the SMA-2/EMA accelerograph was conducted to obtain the characteristic parameters of the instrument. The possible error sources in the data recording and playback procedure were also examined, and an appropriate baseline correction scheme was formulated for effectively removing the nonphysical trend in the earthquake data.

The new data processing procedure was verified by a set of SMA-2/EMA simulated long-period accelerograms and then applied to the Lucerne Valley record. The results of new data processing revealed the important features of near-field ground motion, which were a displacement offset parallel to the fault and a large pulse-like motion perpendicular to the fault. The response spectra and Fourier spectra were also calculated and compared to those of the conventionally processed record. With these investigations, a number of important conclusions are obtained and several suggestions for future studies are given.


(PHD, 1976)

Abstract:

Expressions for displacements on the surface of a layered half space due to an arbitrarily oriented shear dislocation are given in terms of generalized ray expansions. Useful approximations of these expressions for shallow events as recorded at teleseismic distances for realistic earth models are presented. The results of this procedure are used to generate synthetic P, SV, and SH waveforms for various assumptions of stress drop. The Thomson-Haskell layer matrix method for computing far-field body wave displacements from shear dislocations is also formulated to complement the ray theory methods when complicated earth structures are considered. An iterative generalized inverse technique is developed using analytic partial derivatives for estimating source parameters from data sets of P and S seismograms from shallow earthquakes.

With the inverse technique and ray theory methods, long-period P and SH waveforms are analyzed from the Koyna, India, earthquake of 10 December 1967. Using published crustal models of the Koyna region, and primarily by modelling the crustal phases P, pP, and sP, the first 25 seconds of the long-period waveforms are synthesized for 17 stations, and a focal mechanism is obtained for the Koyna earthquake which is significantly different from previous mechanisms. The fault orientation is 67° dip to the east, -29° rake plunging to the northeast, and N16°E strike, all angles ± 6°. This is an eastward-dipping, left-lateral oblique-slip fault which agrees favorably with the trend of fissures in the meizoseismal area. The source time duration is estimated to be 6.5 ± 1.5 sec, from a triangular time pulse which has a rise time of 2.5 sec and a tail-off of 3.9 sec, with a source depth of 4.5 ± 1.5 km and a seismic moment of (3.2 ± 1.4) × 10^25 dyne-cm. Some short-period complexity in the time function is indicated by modelling short-period WWSSN records but is complicated by crustal phases. The long-period P waveforms exhibit complicated behavior due to intense crustal phase interference caused by the shallow source depth and radiation pattern effects. These structure effects can explain much of the apparent multiplicity of the Koyna source. An interpretation of the Koyna dam accelerograms has yielded an S-P time. Taken together with the I. M. D. epicenter and the present depth determination, it places the epicenter directly in the meizoseismal area.

Simultaneous modelling of source parameters and local layered earth structure for the April 29, 1965, Puget Sound earthquake was done using both the ray and layer matrix formulations. The source parameters obtained are: dip 70° to the east, strike 344°, rake -75°, 63 km depth, average moment of (1.4 ± 0.6) × 10^26 dyne-cm, and a triangular time function with a rise time of 0.5 sec and a fall-off of 2.5 sec. An upper mantle and crustal model for southern Puget Sound was determined from inferred reflections from interfaces above the source. The main features of the model include a distinct 15-km-thick low-velocity zone whose lower boundary, marked by a 2.5 km/sec P-wave velocity contrast, is situated at approximately 56 km depth. The crustal model is less than 15 km thick with a substantial sediment section near the surface. A stacking technique using the instantaneous amplitude of the analytic signal is developed for interpreting short-period teleseismic observations. The inferred reflection from the base of the low-velocity zone is recovered from short-period P and S waves. An apparent attenuation is also observed for pP from comparisons between the short- and long-period data sets. This correlates with the local surface structure of Puget Sound and yields an effective Q of approximately 65 for the crust and upper mantle.

To substantiate the unusual structure found from the Puget Sound waveform study, the structure under Corvallis, Oregon, was examined using long-period Ps and Sp conversions and P reverberations from teleseismic events as recorded at the WWSSN station COR. By modelling these phases in the time domain using a data set composed of six deep and intermediate-depth earthquakes, a similar low-velocity-zone structure is again inferred. The lower boundary occurs at 45 km depth and has S and P velocity contrasts of 1.3 and 1.4 km/sec, respectively. The material comprising the low-velocity zone has a Poisson ratio of at least 0.33 and is constrained by the average P and S travel times determined from the converted phases.