191

The Performance of House Foundations in the Canterbury Earthquakes

Henderson, Duncan Robert Keall January 2013 (has links)
The Canterbury Earthquakes of 2010-2011, in particular the 4th September 2010 Darfield earthquake and the 22nd February 2011 Christchurch earthquake, produced severe and widespread liquefaction in Christchurch and surrounding areas. The scale of the liquefaction was unprecedented and caused extensive damage to a variety of man-made structures, including residential houses. Around 20,000 residential houses suffered serious damage as a direct result of the effects of liquefaction, and approximately 7000 houses in the worst-hit areas were consequently abandoned. Despite the good performance of light timber-framed houses under the inertial loads of the earthquake, these structures could not withstand the large loads and deformations associated with liquefaction, resulting in significant damage. The key structural component of houses subjected to liquefaction effects was found to be their foundations, as these are in direct contact with the ground. The performance of house foundations directly influenced the performance of the structure as a whole. Because of this, and due to the lack of research in this area, it was decided to investigate the performance of houses, and in particular their foundations, when subjected to the effects of liquefaction. The data from inspections of approximately 500 houses, conducted by a University of Canterbury summer research team in the worst-hit areas of Christchurch following the 4th September 2010 earthquake, were analysed to determine the general performance of residential houses when subjected to high liquefaction loads. This was followed by the detailed inspection of around 170 houses with four different foundation types common to Christchurch and New Zealand: concrete perimeter with short piers constructed to NZS3604, concrete slab-on-grade also to NZS3604, RibRaft slabs designed by Firth Industries, and driven pile foundations. With a focus on foundations, floor levels and slopes were measured, and the damage to all areas of the house and property was recorded. Seven invasive inspections were also conducted on houses being demolished, to examine in more detail the deformation modes and the causes of damage in severely affected houses. Simplified modelling of concrete perimeter sections subjected to a variety of liquefaction-related scenarios was also performed, to examine the comparative performance of foundations built in different periods, and the loads generated under various bearing loss and lateral spreading cases. It was found that the level of foundation damage is directly related to the level of liquefaction experienced, and that foundation damage and liquefaction severity in turn influence the performance of the superstructure. Concrete perimeter foundations were found to have performed most poorly, suffering high local floor slopes and being likely to require foundation repairs even when liquefaction was low enough that no surface ejecta was seen. This was due to their weak, flexible foundation structure, which cannot withstand liquefaction loads without deforming. The vulnerability of concrete perimeter foundations was confirmed through modelling. Slab-on-grade foundations performed better, and were unlikely to require repairs at low levels of liquefaction. RibRaft and piled foundations performed the best, with repairs unlikely up to moderate levels of liquefaction.
However, all foundation types were susceptible to significant damage at higher levels of liquefaction, with maximum differential settlements of 474 mm, 202 mm, 182 mm and 250 mm found for concrete perimeter, slab-on-grade, RibRaft and piled foundations respectively when subjected to significant lateral spreading, the most severe loading scenario caused by liquefaction. Analysis of the data showed that neither the type of exterior wall cladding, heavy or light, nor the number of storeys affected the performance of foundations. This was also shown through modelling for concrete perimeter foundations, and is due to the increased foundation strengths provided for heavily clad and two-storey houses. Heavy roof claddings were found to increase the demands on foundations, worsening their performance. Pre-1930 concrete perimeter foundations were also found to be very vulnerable to damage under liquefaction loads, due to their weak and brittle construction.
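The simplified modelling of concrete perimeter sections described in this abstract lends itself to a compact illustration. The Python sketch below models a perimeter footing as an Euler-Bernoulli beam on Winkler springs with a central loss-of-support zone standing in for liquefaction-induced bearing loss; all dimensions, stiffnesses and loads are assumed illustration values, not those of the thesis.

```python
# A minimal sketch (assumed properties): perimeter footing as a beam on
# Winkler springs, with springs removed over a central "bearing loss" zone.
import numpy as np

E = 25e9          # concrete Young's modulus, Pa (assumed)
b, h = 0.3, 0.4   # footing width and depth, m (assumed)
I = b * h**3 / 12
L = 10.0          # footing length, m (assumed)
n = 100           # number of beam elements
k_soil = 5e6      # subgrade modulus, N/m per m of beam (assumed)
q = 15e3          # line load from the house, N/m (assumed)

le = L / n
nodes = n + 1
ndof = 2 * nodes  # [deflection w, rotation theta] per node

# 4x4 Euler-Bernoulli beam element stiffness matrix
ke = (E * I / le**3) * np.array([
    [ 12,    6*le,   -12,    6*le   ],
    [ 6*le,  4*le**2, -6*le, 2*le**2],
    [-12,   -6*le,    12,   -6*le   ],
    [ 6*le,  2*le**2, -6*le, 4*le**2]])

K = np.zeros((ndof, ndof))
F = np.zeros(ndof)
for e in range(n):
    dofs = [2*e, 2*e + 1, 2*e + 2, 2*e + 3]
    K[np.ix_(dofs, dofs)] += ke
    # consistent nodal loads for uniform q on the element
    F[dofs] += q * le * np.array([0.5, le/12, 0.5, -le/12])

# Winkler springs lumped at nodes; removed where bearing support is lost
x = np.linspace(0, L, nodes)
loss = (x > 0.4*L) & (x < 0.6*L)       # central 20% loses support
for i in range(nodes):
    trib = le if 0 < i < n else le/2   # tributary length of node i
    if not loss[i]:
        K[2*i, 2*i] += k_soil * trib

w = np.linalg.solve(K, F)[0::2]        # settlements (m), downward positive
slope = np.diff(w) / le                # local floor slope between nodes
print(f"max settlement: {w.max()*1000:.1f} mm")
print(f"max local floor slope: {slope.max()*1000:.1f} mm/m")
```

Widening the loss zone or softening the springs raises both the settlement and the local floor slope, which is the comparative behaviour the thesis examines across foundation eras and loading cases.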
192

The integration of earthquake engineering resources

Lamata Martinez, Ignacio January 2014 (has links)
Earthquake engineering is increasingly focusing on large international collaborations to address complex problems. Recent computing advances have greatly contributed to the way scientific collaborations are conducted, where web-based solutions are an emerging trend to manage and present results to the scientific community and the general public. However, collaborations in earthquake engineering lack a common interoperability framework, resulting in tedious and complex processes to integrate results, which cannot be efficiently used by third-party institutions. The work described in this thesis applies novel computing techniques to enable the interoperability of earthquake engineering resources, by integrating data, distributed simulation services and laboratory facilities. This integration focuses on distributed approaches rather than centralised solutions, and has been materialised in a platform called Celestina, which supports the integration of hazard mitigation resources. The Celestina prototype has been implemented and validated within the context of two of the largest current earthquake engineering networks, the SERIES network in Europe and the NEES network in the USA. It is divided into three sub-systems addressing different problems: (i) Celestina Data, to develop methods to define, store, integrate and share earthquake engineering experimental data. Celestina Data uses a novel approach based on Semantic Web technologies, and it has accomplished the first data integration between earthquake engineering institutions in the United States and Europe by means of a formalised infrastructure. (ii) Celestina Tools, to research applications that can be implemented on top of the data integration, in order to provide a practical benefit for the end user. (iii) Celestina Simulations, to create efficient methods to integrate distributed testing software and to support the planning, definition and execution of the experimental workflow from a high-level perspective. Celestina Simulations has been implemented and validated by conducting distributed simulations between the Universities of Oxford and Kassel. This validation demonstrated the feasibility of conducting both flexible, general-purpose simulations and high-performance simulations under the framework. Celestina has enabled global analysis of data requirements for the whole community, the definition of global policies for data authorship, curation and preservation, more efficient use of effort and funding, more accurate decision support systems, and more efficient sharing and evaluation of data results in scientific articles.
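As a rough illustration of the Semantic Web approach the abstract attributes to Celestina Data, the Python sketch below encodes one experiment as RDF triples and queries it with SPARQL. The vocabulary, class and property names are hypothetical, not Celestina's actual schema.

```python
# A minimal sketch (hypothetical vocabulary, not Celestina's schema):
# one shake-table experiment formalised as RDF triples and queried via SPARQL.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EE = Namespace("http://example.org/ee#")   # hypothetical shared vocabulary
g = Graph()
g.bind("ee", EE)

exp = EE["experiment/shake-table-042"]
g.add((exp, RDF.type, EE.ShakeTableTest))
g.add((exp, EE.specimen, Literal("RC frame, 1:3 scale")))
g.add((exp, EE.peakTableAcceleration, Literal(0.45, datatype=XSD.double)))
g.add((exp, EE.facility, EE["facility/oxford"]))

# A query any partner institution could run against the shared graph
results = g.query("""
    PREFIX ee: <http://example.org/ee#>
    SELECT ?exp ?pga WHERE {
        ?exp a ee:ShakeTableTest ;
             ee:peakTableAcceleration ?pga .
    }""")
for row in results:
    print(row.exp, row.pga)
```

The point of the formalisation is that once two institutions agree on the vocabulary, their datasets merge by simple graph union, with no bespoke converters.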
193

Seismic Response Of Geosynthetic Reinforced Soil Wall Models Using Shaking Table Tests

Adapa, Murali Krishna 02 1900 (has links)
Use of soil retaining walls for roads, embankments and bridges is increasing with time, and reinforced soil retaining walls are found to be very efficient even under critical conditions compared to unreinforced walls. They offer competitive solutions to the earth retention problems of limited space and increased loads posed by tremendous growth in infrastructure, in addition to advantages in ease and cost of construction compared to conventional retaining wall systems. The study of the seismic performance of reinforced soil retaining walls is receiving much attention in the light of lessons learned from past failures of conventional retaining walls. Laboratory model studies on these walls under controlled seismic loading conditions help in better understanding how these walls actually behave during earthquakes. The objective of the present study is to investigate the seismic response of geosynthetic reinforced soil wall models through shaking table tests. To achieve this, wrap-faced and rigid-faced reinforced soil retaining walls of size 750 × 500 mm in plan and 600 mm in height are built in rigid and flexible containers and tested under controlled dynamic conditions using a uniaxial shaking table. The effects of the frequency and acceleration of the base motion, surcharge pressure on the crest, number of reinforcing layers, container boundary, wall structure and reinforcement layout on the seismic performance of the retaining walls are studied through a systematic series of shaking table tests. Results are analysed to understand the effect of each of the considered parameters on the face displacements, acceleration amplifications and soil pressures on the facing at different elevations of the walls. A numerical model is developed to simulate the shaking table tests on wrap-faced reinforced soil walls using the computer program FLAC (Fast Lagrangian Analysis of Continua). The experimental data are used to validate the numerical model, and parametric studies are carried out on a 6 m high full-scale wall using this model. Thus, the study deals with the shaking table tests, the dynamic response of reinforced walls and their numerical simulation. The thesis presents a detailed description of the various features and parts of the shaking table facility, along with the instrumentation and model containers. The methodology adopted for the construction of the reinforced soil model walls and the testing procedures are briefly described. Scaling and stability issues related to the model wall size and reinforcement strength are also discussed. From the study, it is observed that displacements decrease with increasing relative density of the backfill, increasing surcharge pressure and increasing number of reinforcing layers. In general, accelerations are amplified most at the top of the wall, and the behaviour of the model walls is sensitive to the model container boundary. The frequency content is a very important parameter affecting the model response. Further, it is noticed that face displacements are significantly affected by all of the above parameters, while accelerations are less sensitive to the reinforcement parameters. Even very low strength geonets and geotextiles are able to reduce displacements by 75% compared to an unreinforced wall. The strain levels in the reinforcing elements are observed to be very low, of the order of ±150 microstrain. A random dynamic event is also used in one of the model tests, and the resulting accelerations and displacements are presented.
Numerical parametric studies provided important insight into the behaviour of wrap-faced walls under various seismic loading conditions and variations in physical parameters.
194

Engineering seismological studies and seismic design criteria for the Buller Region, South Island, New Zealand

Stafford, Peter James January 2006 (has links)
This thesis addresses two fundamental topics in Engineering Seismology: the application of Probabilistic Seismic Hazard Analysis (PSHA) methodology, and the estimation of measures of strong ground motion. These two topics, while related, are presented as separate sections. In the first section, state-of-the-art PSHA methodologies are applied to various sites in the Buller Region, South Island, New Zealand. These sites are deemed critical to the maintenance of economic stability in the region. A fault-source based seismicity model is developed for the region that is consistent with the governing tectonic loading and seismic moment release of the region. In attempting to ensure this consistency, the apparent anomaly between rates of activity dictated by deformation throughout the Quaternary and rates of activity dictated by observed seismicity is addressed. Individual fault source activity is determined following the application of a Bayesian inference procedure in which observed earthquake events are attributed to causative faults in the study region. The activity of fault sources, in general, is assumed to be governed by bounded power-law behaviour. An exception is made for the Alpine Fault, which is modelled as a purely characteristic source. The calculation of rates of exceedance of various ground motion indices is made using a combination of Poissonian and time-dependent earthquake occurrence models. The ground motion indices for which rates of exceedance are determined include peak ground acceleration, ordinates of 5% damped spectral acceleration, and Arias intensity. The total hazard determined for each of these ground motion measures is decomposed using a four-dimensional disaggregation procedure, from which design earthquake scenarios are specified for the sites considered. The second part of the thesis is concerned with the estimation of ground motion measures that are more informative than the existing scalar measures available for use in New Zealand. Models are developed for the prediction of Fourier Amplitude Spectra (FAS) as well as Arias intensity for use in the New Zealand environment. The FAS model can be used to generate ground motion time histories for use in structural and geotechnical analyses. Arias intensity has been shown to be an important strong motion measure due to its positive correlation with damage in short-period structures as well as its utility in predicting the onset of liquefaction and landslides. The models are based upon the analysis of a dataset of New Zealand strong motion records as well as supplementary near-field records from major overseas events. While the two measures of ground motion intensity are strongly related, different methods have been adopted to develop the models. As part of the methodology used for the FAS model, Monte Carlo simulation coupled with a simple ray tracing procedure is employed to estimate source spectra from various New Zealand earthquakes and, consequently, a magnitude–corner-frequency relationship is obtained. In general, the parameters of the predictive equations are determined using state-of-the-art mixed-effects regression procedures.
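The bounded power-law recurrence and Poissonian occurrence models referred to in this abstract can be made concrete in a few lines. The Python sketch below uses assumed seismicity parameters (not the thesis values) to compute the annual rate at which one fault source produces events above a given magnitude, and the corresponding probability of at least one such event over a design life.

```python
# A minimal sketch (assumed parameters): truncated (bounded) Gutenberg-Richter
# recurrence for one source, plus the Poissonian rate-to-probability conversion.
import numpy as np

a_val, b_val = 3.5, 1.0   # assumed Gutenberg-Richter a and b values
m_min, m_max = 5.0, 7.5   # magnitude bounds of the bounded power law

def annual_rate(m):
    """Annual rate of events with magnitude >= m under truncated G-R."""
    beta = b_val * np.log(10.0)
    nu_min = 10.0 ** (a_val - b_val * m_min)     # rate of M >= m_min
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return nu_min * num / den

lam = annual_rate(6.5)
t = 50.0                                          # design life, years
p_exceed = 1.0 - np.exp(-lam * t)                 # Poissonian occurrence
print(f"annual rate of M>=6.5 events: {lam:.4f}")
print(f"P(at least one in {t:.0f} yr) = {p_exceed:.2%}")
```

The truncation at m_max is what keeps the model's moment release finite and consistent with the tectonic loading the thesis calibrates against.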
195

Comparative performance of ductile and damage protected bridge piers subjected to bi-directional earthquake attack

Mashiko, Naoto January 2006 (has links)
Incremental Dynamic Analysis (IDA) procedures are advanced and then applied to a quantitative risk assessment for bridge structures. This is achieved by combining IDA with site-dependent hazard-recurrence relations and damage outcomes. The IDA procedure is also developed as a way to select a critical earthquake motion record for a one-off destructive experiment. Three prototype bridge substructures are designed according to the loading and detailing requirements of the New Zealand, Japan and Caltrans codes. From these designs, 30 percent reduced-scale specimens are constructed as part of an experimental investigation. Pseudodynamic tests are then conducted on the three specimens using the identified critical earthquake records. The results are presented in a probabilistic risk-based format, and the differences in the seismic performance of the three countries' design codes are examined. Each of these current seismic design codes strives for ductile behaviour of bridge substructures. Seismic response is therefore expected to result in damage to structures, which may threaten post-earthquake serviceability. To overcome this major performance shortcoming, the seismic behaviour under bi-directional lateral loading is investigated for a bridge pier designed and constructed in accordance with Damage Avoidance principles. Due to the presence of a steel-armoured rocking interface at the base, it is demonstrated that damage can be avoided, but due to the lack of hysteresis it is necessary to add some supplemental damping. Experimental results for the armoured rocking pier under bi-directional loading are compared with those of a companion ductile-design specimen.
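To make the IDA procedure concrete: one ground motion record is scaled to increasing intensity levels, a nonlinear analysis is re-run at each level, and the intensity-versus-response curve is traced until a limit state is reached. The Python sketch below applies the loop to a toy elastic-perfectly-plastic single-degree-of-freedom pier under a synthetic record; the record, pier properties and collapse criterion are assumptions for illustration, not the thesis specimens.

```python
# A minimal IDA sketch (toy record and pier, assumed values throughout).
import numpy as np

def peak_drift(ag, dt, T=0.8, zeta=0.05, uy=0.02):
    """Semi-implicit time stepping of an elastic-perfectly-plastic SDOF (m = 1).
    ag: ground acceleration series (m/s^2); returns peak displacement (m)."""
    k = (2*np.pi / T)**2          # stiffness for unit mass
    c = 2*zeta * (2*np.pi / T)    # damping for unit mass
    fy = k * uy                   # yield strength (per unit mass)
    u = v = fs = 0.0
    upk = 0.0
    for a in ag:
        acc = -a - c*v - fs                      # equation of motion, m = 1
        v += acc * dt
        u += v * dt
        fs = np.clip(fs + k*v*dt, -fy, fy)       # EPP hysteresis via force cap
        upk = max(upk, abs(u))
    return upk

# Toy pulse-like record (stand-in for a selected critical record)
dt = 0.01
t = np.arange(0, 20, dt)
ag = np.sin(2*np.pi*1.25*t) * np.exp(-0.2*t) * 9.81 * 0.1   # ~0.1 g pulse

# IDA loop: scale the record upward until an assumed collapse drift is exceeded
for scale in np.arange(0.5, 5.01, 0.5):
    d = peak_drift(scale * ag, dt)
    print(f"IM scale {scale:3.1f}  ->  peak drift {d*100:5.2f} cm")
    if d > 0.10:                 # assumed collapse criterion: 10 cm drift
        print("collapse drift exceeded; IDA curve flattens here")
        break
```

Repeating this loop over a record suite, and convolving the resulting capacities with the site hazard curve, is what turns IDA output into the risk-based format the abstract describes.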
196

Engineering Seismic Source Models And Strong Ground Motion

Raghu Kanth, S T G 04 1900 (has links) (PDF)
No description available.
197

Assessment Of Seismic Hazard With Local Site Effects : Deterministic And Probabilistic Approaches

Vipin, K S 12 1900 (has links)
Many researchers have pointed out that the accumulation of strain energy in the Peninsular Indian Shield region may lead to earthquakes of significant magnitude (Srinivasan and Sreenivas, 1977; Valdiya, 1998; Purnachandra Rao, 1999; Seeber et al., 1999; Ramalingeswara Rao, 2000; Gangrade and Arora, 2000). However, very few studies have been carried out to quantify the seismic hazard of the entire Peninsular Indian region. In the present study, the seismic hazard of the South Indian region (8.0° N - 20° N; 72° E - 88° E) was evaluated using both deterministic and probabilistic approaches. The effects of two of the important geotechnical aspects of seismic hazard, site response and liquefaction, have also been evaluated and the results are presented in this work. The peak ground acceleration (PGA) at ground surface level was evaluated by considering local site effects. The liquefaction potential index (LPI) and the factor of safety against liquefaction were evaluated based on a performance-based liquefaction potential evaluation method. The first step in the seismic hazard analysis is to compile the earthquake catalogue. Since a comprehensive catalogue was not available for the region, one was compiled by collecting data from different national agencies (Gauribidanur Array, India Meteorological Department (IMD), National Geophysical Research Institute (NGRI) Hyderabad, Indira Gandhi Centre for Atomic Research (IGCAR) Kalpakkam, etc.) and international agencies (Incorporated Research Institutions for Seismology (IRIS), International Seismological Centre (ISC), United States Geological Survey (USGS), etc.). The collected data were in different magnitude scales and hence were converted to a single scale. The moment magnitude scale was chosen for this study, since it is the most widely used and most scientifically robust magnitude scale. The earthquake catalogue was declustered to remove dependent events, and the completeness of the catalogue was analysed using the method suggested by Stepp (1972). Based on the complete part of the catalogue, the seismicity parameters were evaluated for the study area. Another important step in seismic hazard analysis is the identification of seismic sources. Three types of seismic sources were considered: (i) linear sources, (ii) point sources and (iii) areal sources. The linear seismic sources were identified from the Seismotectonic Atlas published by the Geological Survey of India (SEISAT, 2000). The required pages of SEISAT (2000) were scanned and georeferenced, the declustered earthquake data were superimposed on them, and the sources associated with earthquakes of magnitude 4 and above were selected for further analysis. The point sources were selected using a method similar to that adopted by Costa et al. (1993) and Panza et al. (1999), and the areal sources were identified based on the method proposed by Frankel et al. (1995). In order to map the attenuation properties of the region more precisely, three attenuation relations, viz. Toro et al. (1997), Atkinson and Boore (2006) and Raghu Kanth and Iyengar (2007), were used in this study. The two types of uncertainty encountered in seismic hazard analysis are aleatory and epistemic. Aleatory variability stems from the randomness in the data and accounts for the scatter associated with the results given by a particular model.
The incomplete knowledge embodied in the predictive models causes the epistemic (modelling) uncertainty. The aleatory variability of the attenuation relations is taken into account in the probabilistic seismic hazard analysis by considering the standard deviation of the model error; the epistemic uncertainty is accounted for by using multiple models for the evaluation of seismic hazard and combining them using a logic tree. Two different methodologies were used in the evaluation of seismic hazard, based on deterministic and probabilistic analysis. For the evaluation of peak horizontal acceleration (PHA) and spectral acceleration (Sa) values, a new set of programs was developed in MATLAB and the entire analysis was done using these programs. In the deterministic seismic hazard analysis (DSHA), two types of seismic sources, viz. linear and point sources, were considered, together with the three attenuation relations. The study area was divided into small grid cells of size 0.1° × 0.1° (about 12,000 grid points), and the mean and 84th percentile PHA and Sa values were evaluated at the centre of each grid cell. A logic tree approach, using the two source types and three attenuation relations, was adopted for the evaluation of PHA and Sa values. A logic tree permits the use of alternative models in the hazard evaluation, with appropriate weightages assigned to each model, and by evaluating the 84th percentile values the uncertainty in spectral acceleration values can also be considered (Krinitzky, 2002). The spatial variations of PHA and Sa values for the whole of South India are presented in this work. The DSHA method does not consider the uncertainties involved in the earthquake recurrence process, hypocentral distance or attenuation properties. Hence the seismic hazard was also evaluated using probabilistic seismic hazard analysis (PSHA), in which the PHA and Sa values were obtained by considering the uncertainties involved in the earthquake occurrence process: the uncertainties in earthquake recurrence rate, hypocentral location and attenuation characteristics were all considered. For evaluating the seismicity parameters and the maximum expected earthquake magnitude (mmax), the study area was divided into different source zones. This zonation was based on the spatial variation of the seismicity parameters ‘a’ and ‘b’; the mmax values were evaluated for each zone and used in the analysis. A logic tree approach was again adopted, permitting the use of multiple models: twelve different models (2 source types × 2 zonations × 3 attenuation relations) were used in the analysis, and based on the weightage for each of them the final PHA and Sa values at bedrock level were evaluated. These values were evaluated for a grid size of 0.1° × 0.1°, and their spatial variation for return periods of 475 and 2500 years (10% and 2% probability of exceedance in 50 years) is presented in this work. Both the deterministic and probabilistic analyses highlighted that the seismic hazard is high in the Koyna region. The PHA values obtained for the Koyna, Bangalore and Ongole regions are higher than the values given by BIS-1893 (2002), while the values obtained for the south-western part of the study area, especially parts of Kerala, are lower than those provided in BIS-1893 (2002). The 84th percentile values given by DSHA can be taken as upper-bound PHA and Sa values for South India.
The main geotechnical aspects of earthquake hazard are site response and seismic soil liquefaction. When seismic waves travel from bedrock through the overlying soil to the ground surface, the PHA and Sa values change; this amplification or de-amplification of the seismic waves depends on the type of the overlying soil. The assessment of site class can be done based on different site classification schemes. In the present work, the surface-level peak ground acceleration (PGA) values were evaluated for the four site classes suggested by NEHRP (BSSC, 2003), using a non-linear site amplification technique. Based on geotechnical site investigation data, the site class can be determined and the appropriate PGA and Sa values taken from the respective PGA maps. Response spectra were developed for the entire study area, and the results obtained for three major cities are discussed. Various codes suggest different methods to smooth the response spectra; smoothed design response spectra were developed for these cities based on the smoothing techniques given by NEHRP (BSSC, 2003), the IS code (BIS-1893, 2002) and Eurocode-8 (2003), and a comparison of the results obtained from these techniques is also presented in this work. If the site class at any location in the study area is known, the peak ground acceleration (PGA) values can be obtained from the respective map; this provides a simplified methodology for evaluating PGA values over a vast area like South India. Since the surface-level PGA values were evaluated for generic site classes, the effects of surface topography and basin geometry were not taken into account. The analysis of the response spectra clearly indicates the variation of peak spectral acceleration values between site classes, and the variation of the period of oscillation corresponding to the maximum Sa values. The comparison of the smoothed design response spectra obtained using the different codal provisions supports the use of the NEHRP (BSSC, 2003) provisions. The conventional liquefaction analysis method takes into account only a single earthquake magnitude and ground acceleration value. In order to overcome this shortfall, a performance-based probabilistic approach (Kramer and Mayfield, 2007) was adopted for the liquefaction potential evaluation in the present work. Based on this method, the factor of safety against liquefaction and the SPT values required to prevent liquefaction for return periods of 475 and 2500 years were evaluated for Bangalore city, using SPT data obtained from 450 boreholes across Bangalore. A new method to evaluate the liquefaction return period based on CPT values is also proposed in this work; to validate it, an analysis was done for Bangalore by converting the SPT values to CPT values, and the results were compared with those obtained using the SPT values. The factors of safety against liquefaction at different depths were integrated using the liquefaction potential index (LPI) method for Bangalore: the factor of safety values at different depths were calculated based on the performance-based method, and the LPI values were then evaluated. The entire liquefaction potential analysis and the evaluation of LPI values were done using a set of newly developed MATLAB programs.
Based on the above approaches, it is possible to evaluate the SPT and CPT values required to prevent liquefaction for any given return period. Such an analysis was done for the whole of South India for return periods of 475 and 2500 years, and the spatial variations of these values are presented in this work. The liquefaction potential analysis of Bangalore clearly indicates that the majority of the area is safe against liquefaction. The liquefaction potential maps developed for South India, based on both SPT and CPT values, will help hazard mitigation authorities to identify liquefaction-vulnerable areas, which in turn will help in reducing the liquefaction hazard.
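The logic tree combination and return-period arithmetic described in this abstract are easy to illustrate. In the Python sketch below, three hypothetical attenuation-relation branches (coefficients and weightages invented for illustration, not the study's) are combined at a single grid point, and the quoted return periods are recovered from the Poisson assumption.

```python
# A minimal sketch (hypothetical GMPE coefficients and weights): logic-tree
# weighting of attenuation relations, plus return-period <-> probability.
import numpy as np

M, R = 6.0, 50.0   # scenario magnitude and hypocentral distance (km), assumed

# Three hypothetical branches of the form ln(PHA in g) = c0 + c1*M - c2*ln(R+10)
branches = [
    (dict(c0=-3.5, c1=0.9, c2=1.2), 0.4),
    (dict(c0=-3.2, c1=0.8, c2=1.1), 0.4),
    (dict(c0=-3.8, c1=1.0, c2=1.3), 0.2),
]

pha = sum(w * np.exp(c["c0"] + c["c1"]*M - c["c2"]*np.log(R + 10))
          for c, w in branches)
print(f"weightage-combined PHA: {pha:.3f} g")

# Probability of exceedance in 50 years -> return period (Poisson assumption)
for p in (0.10, 0.02):
    T = -50.0 / np.log(1.0 - p)   # gives ~475 yr and ~2475 yr (quoted as 2500)
    print(f"{p:.0%} in 50 yr  ->  return period {T:.0f} yr")
```

In a full analysis this weighted combination runs over all twelve branch combinations and every grid cell, which is why the study automated it in MATLAB.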
198

IDENTIFICATION OF FAILURE-CAUSED TRAFFIC CONFLICTS IN TRACKING SYSTEMS: A GENERAL FRAMEWORK

Cristhian Lizarazo Jimenez (9375209) 16 December 2020 (has links)
Proactive evaluation of road safety is one of the most important objectives of transportation engineers. While current practice typically relies on crash-based analysis after the fact to diagnose safety problems and provide corrective countermeasures on roads, surrogate measures of safety are emerging as a complementary evaluation that can allow engineers to proactively respond to safety issues. These surrogate measures attempt to address the primary limitations of crash data, which include underreporting, lack of reliable insight into the events leading to the crash, and long data collection times.

Traffic conflicts are one of the most widely adopted surrogate measures of safety because they meet the following two conditions for crash surrogacy: (1) they are non-crash events that can be physically related in a predictable and reliable way to crashes, and (2) there is a potential for bridging crash frequency and severity with traffic conflicts. However, three primary issues were identified in the literature that need to be resolved for the practical application of conflicts: (1) the lack of consistency in the definition of traffic conflict, (2) the predictive validity of such events, and (3) the adequacy of traffic conflict observations.

Tarko (2018) developed a theoretical framework in response to the first two issues and defined traffic conflicts using counterfactual theory as events where the lack of timely response from drivers or road users can produce crashes if there is no evasive action. The author further introduced a failure-based definition to emphasize conflicts as an undesirable condition that needs to be corrected to avoid a crash. In this case, the probability of a crash given failure depends on the response delay; the distribution of this delay is fitted, and the probability is estimated using the fitted distribution. As this formal theory addresses the first two issues, a complete framework for the proper identification of conflicts needs to be investigated in line with the failure mechanism proposed in the theory.

The objective of this dissertation, in response to the third issue, is to provide a generalized framework for the proper identification of traffic conflicts under the failure-based definition. The framework introduced in this dissertation is built upon an empirical evaluation of the methods applied to identify traffic conflicts from naturalistic driving studies and video-based tracking systems. This dissertation aimed to prove the practicality of the framework for proactive safety evaluation using emerging technologies from in-vehicle and roadside instrumentation.

Two conditions must be met to properly claim observed traffic events as traffic conflicts: (1) analysis of longitudinal and lateral acceleration profiles to identify a response due to failure, and (2) estimation of the time-to-collision as the period between the end of the evasion and the hypothetical collision. Extrapolating user behavior in the counterfactual scenario of no evasion is applied to identify the hypothetical collision point.

The results from the SHRP2 study were particularly encouraging: the appropriate identification of traffic conflicts resulted in an estimated number of crashes similar to the number reported in the study. The results also met the theoretical postulates, including stabilization of the estimated crashes at lower proximity values and Lomax-distributed response delays. In terms of area-wide tracking systems, the framework was successful in identifying and removing failure-free encounters from the In-Depth understanding of accident causation for Vulnerable road users (InDeV) program.

This dissertation also extended the application of the traffic conflicts technique by considering estimation of the severity of the hypothetical crash given that a conflict occurs. This component is important for conflicts to resemble the practical applications of crashes, including the diagnosis of hazardous locations and the evaluation of the effectiveness of countermeasures. Countermeasures should reduce not only the number of conflicts but also the risk of a crash given a conflict. Severity analysis identifies the environmental, road, driver, and pre-crash conditions that increase the likelihood of severe impacts. Using a dynamic characterization of crash events, this dissertation structured a probability model to evaluate crash reporting and its associated severity. Multinomial logistic models were applied in the estimation, and quasi-complete separation in logistic regression was addressed by providing a Bayesian estimation of these models.
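The second of the two conditions above, time-to-collision under the no-evasion counterfactual, reduces to simple kinematics once both users' motion is extrapolated at constant velocity from the end of the evasive manoeuvre. A minimal Python sketch with invented trajectory data:

```python
# A minimal sketch (hypothetical states): time-to-collision (TTC) from
# constant-velocity extrapolation in the counterfactual no-evasion scenario.
import numpy as np

def ttc(p1, v1, p2, v2, radius=2.0):
    """Smallest t >= 0 at which two constant-velocity point users come within
    `radius` metres; None if they never do (a failure-free encounter)."""
    dp, dv = p2 - p1, v2 - v1
    a = dv @ dv
    b = 2 * (dp @ dv)
    c = dp @ dp - radius**2
    if a == 0:                       # identical velocities
        return 0.0 if c <= 0 else None
    disc = b*b - 4*a*c
    if disc < 0:                     # closest approach stays outside radius
        return None
    t = (-b - np.sqrt(disc)) / (2*a) # first contact time
    return t if t >= 0 else None

# States at the end of the observed evasion (positions m, velocities m/s)
p_car, v_car = np.array([0.0, 0.0]), np.array([15.0, 0.0])
p_ped, v_ped = np.array([40.0, -4.0]), np.array([0.0, 1.5])

t = ttc(p_car, v_car, p_ped, v_ped)
print("no hypothetical collision" if t is None else f"TTC = {t:.2f} s")
```

Events with no hypothetical collision point are exactly the failure-free encounters the framework removes from area-wide tracking data.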
199

Study of Fragility Functions for Assessing Damage to Water Pipe Networks Caused by Earthquake Loading

Merlo, Dylan Joseph 01 April 2021 (has links) (PDF)
The performance of water lifelines during seismic events is an area of ongoing research. In this study we evaluate eight (8) different seismic events and the impact that ground shaking and ground deformations had on water pipeline systems. The overall goal of this work is to provide municipalities and utility providers with tools for mitigating the consequences of seismic hazards on water lifeline systems by analyzing the accuracy of damage estimation models. Three (3) different repair rate models are evaluated using data collected from the seismic events and compared to observed repair rate data, and the results are analyzed to examine the utility of the models for forecasting damage. Results indicate that fragility functions utilizing a linear PGV-based form are the most accurate in predicting repair rates to a system, based on residual plots developed for the different models. Differentiating between continuous and segmented water lifeline systems is best done by using coefficients to modify the backbone PGV-based equation. Results also indicate that utilizing an additional PGD-based function could increase the predictive capabilities of water lifeline system fragility functions.
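For context, one widely cited linear PGV-based fragility of the kind evaluated here is the ALA (2001) guideline form, RR = K1 × 0.00187 × PGV (repairs per 1000 ft, PGV in in./s), where the coefficient K1 modifies the backbone equation by pipe material and joint type. The Python sketch below uses representative K1 values, not the coefficients calibrated in this thesis.

```python
# A minimal sketch of a linear PGV-based fragility function (ALA 2001 form);
# K1 values below are representative illustrations, not this study's results.
def repair_rate_per_km(pgv_cm_s, K1=1.0):
    """Expected repairs per km of pipe from wave-propagation damage.
    Backbone: RR = K1 * 0.00187 * PGV  [repairs/1000 ft, PGV in in./s]."""
    pgv_in_s = pgv_cm_s / 2.54            # cm/s -> in./s
    rr_per_1000ft = K1 * 0.00187 * pgv_in_s
    return rr_per_1000ft * (1000.0 / 304.8)   # 1000 ft = 304.8 m

# K1 differentiates segmented vs continuous pipe (representative values):
for material, K1 in [("cast iron, cement joints", 1.0),
                     ("ductile iron, rubber gaskets", 0.5),
                     ("welded steel (continuous)", 0.15)]:
    rr = repair_rate_per_km(30.0, K1)
    print(f"{material:30s} RR = {rr:.4f} repairs/km at PGV = 30 cm/s")
```

The much lower coefficient for continuous welded pipe is the modelling device the abstract refers to for differentiating continuous from segmented lifelines.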
200

Development of a Metamaterial-Based Foundation System for the Seismic Protection of Fuel Storage Tanks

Wenzel, Moritz 14 April 2020 (has links)
Metamaterials are typically described as materials with 'unusual' wave propagation properties. Originally developed for electromagnetic waves, these materials have also spread into the field of acoustic wave guiding and cloaking, with the most relevant of these 'unusual' properties being the so-called band-gap phenomenon. A band gap signifies a frequency region where elastic waves cannot propagate through the material, which, in principle, could be used to protect buildings from earthquakes. Based on this, two relevant concepts have been proposed in the field of seismic engineering, namely metabarriers and metamaterial-based foundations. This thesis deals with the development of the Metafoundation, a metamaterial-based foundation system for the seismic protection of fuel storage tanks against excessive base shear and pipeline rupture. Storage tanks have proven to be highly sensitive to earthquakes and can trigger severe economic and environmental consequences in case of failure, and were therefore chosen as the superstructure for this study. Furthermore, when tanks are protected with traditional base isolation systems, the resulting horizontal displacements during seismic action may become excessively large and subsequently damage connected pipelines. A novel system that protects both tank and pipeline could significantly augment the overall safety of industrial plants. With the tank as the primary structure of interest, the Metafoundation was conceived as a locally resonant metamaterial with a band gap encompassing the tank's critical eigenfrequency. The initial design comprised a continuous concrete matrix with embedded resonators and rubber inclusions, which was later redesigned as a column-based structure with steel springs for resonator suspension. After investigating the band-gap phenomenon, a parametric study of the system specifications showed that the horizontal stiffness of the overall foundation is crucial to its functionality, while the superstructure turned out to be non-negligible when tuning the resonators. Furthermore, storage tanks are commonly connected to pipeline systems, which can be damaged by the interaction between tank and pipeline during seismic events. Due to the complex and nonlinear response of pipeline systems, the coupled tank-pipeline behaviour is difficult to represent through numerical models, which led to the experimental study of a foundation-tank-pipeline setup. With the aid of hybrid simulation, only the pipeline needed to be represented by a physical substructure, while both tank and Metafoundation were modelled as numerical substructures and coupled to the pipeline. The results showed that the foundation can effectively reduce the stresses in the tank and, at the same time, limit the displacements imposed on the pipeline. Building on this, an optimization algorithm was developed in the frequency domain, taking into account the superstructure and the ground motion spectrum. The advantages of optimizing in the frequency domain are, on the one hand, the reduction of computational effort and, on the other, the consideration of the stochastic nature of the earthquake. On this basis, two different performance indices, investigating interstorey drifts and energy dissipation, revealed that neither the superstructure nor the ground motion can be disregarded when designing a metamaterial-based foundation.
Moreover, a 4 m tall optimized foundation, designed to remain elastic when verified with a response spectrum analysis at a return period of 2475 years (according to NTC 2018), reduced the tank's base shear on average by 30%. These results indicated that the foundation was feasible and functional in terms of construction practices and dynamic response, yet impractical from an economic point of view. To address the uneconomical system size, a negative stiffness mechanism was invented and implemented into the foundation as a periodic structure. This mechanism, based on a local instability, amplified the metamaterial-like properties and thereby enhanced the overall system performance. Note that, due to the considered instability, the device exhibited a nonlinear force-displacement relationship, which had the interesting effect of reducing the band gap instead of increasing it. Furthermore, time history analyses demonstrated that with 50% of the maximum admissible negative stiffness, the foundation could be reduced to 1/3 of its original size while maintaining its performance. Finally, a study on wire ropes as resonator suspension was conducted. Their nonlinear behaviour was approximated with the Bouc-Wen model, subsequently linearized by means of stochastic techniques and finally optimized with the algorithm developed earlier. The conclusion was that wire ropes could be used as a more realistic suspension mechanism while maintaining the high damping values required by the optimized foundation layouts. In sum, a metamaterial-based foundation system is developed and studied herein, with the main findings being: (i) a structure of this type is feasible under common construction practices; (ii) the shear stiffness of the system has a fundamental impact on its functionality; (iii) the superstructure cannot be neglected when studying metamaterial-based foundations; (iv) the complete coupled system can be tuned with an optimization algorithm based on calculations in the frequency domain; (v) an experimental study suggests that the system could be advantageous to connected pipelines; (vi) wire ropes may serve as resonator suspension; and (vii) a novel negative stiffness mechanism can effectively improve the system performance.
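The band-gap phenomenon at the heart of this design can be reproduced with the textbook one-dimensional mass-in-mass lattice: a chain of unit cells of mass m1 coupled by springs k1, each carrying an internal resonator m2 on a spring k2. The Python sketch below scans frequency for the region where no real wavenumber exists; all masses and stiffnesses are generic illustration values, not the Metafoundation's.

```python
# A minimal sketch (generic parameters): band gap of a 1D locally resonant
# mass-in-mass chain. Frequencies where |cos(qa)| > 1 admit no real
# wavenumber q, so waves cannot propagate -- the band gap.
import numpy as np

m1, m2 = 1000.0, 500.0   # kg: unit-cell mass and internal resonator mass
k1, k2 = 4.0e7, 1.0e6    # N/m: inter-cell and resonator spring stiffness

f = np.linspace(0.1, 20.0, 2000)            # Hz
w = 2 * np.pi * f
w2sq = k2 / m2                               # resonator frequency squared
m_eff = m1 + m2 * w2sq / (w2sq - w**2)       # frequency-dependent effective mass
cos_qa = 1.0 - m_eff * w**2 / (2.0 * k1)     # lattice dispersion relation
propagating = np.abs(cos_qa) <= 1.0

gap = f[~propagating]
print(f"resonator frequency: {np.sqrt(w2sq)/(2*np.pi):.2f} Hz")
if gap.size:
    print(f"band gap roughly {gap.min():.2f} - {gap.max():.2f} Hz")
```

Tuning k2/m2 shifts the gap onto the protected structure's critical eigenfrequency, which is the design step the thesis performs for the tank, and it is the effective-mass term above that the superstructure and the negative stiffness mechanism both modify.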
