
An Examination of Site Response in Columbia, South Carolina: Sensitivity of Site Response to "Rock" Input Motion and the Utility of Vs(30)

Lester, Alanna Paige 21 July 2005
This study examines the sensitivity of calculated site response in connection with alternative assumptions regarding input motions and procedures prescribed in the IBC 2000 building code, particularly the use of average shear wave velocity in the upper 30 meters as an index for engineering design response spectra. Site specific subsurface models are developed for four sites in and near Columbia, South Carolina using shear wave velocity measurements from cone penetrometer tests. The four sites are underlain by thin coastal plain sedimentary deposits, overlying high velocity Paleozoic crystalline rock. An equivalent-linear algorithm is used to estimate site response for vertically incident shear waves in a horizontally layered Earth model. Non-linear mechanical behavior of the soils is analyzed using previously published strain-dependent shear modulus and damping degradation models. Two models for material beneath the investigated near-surface deposits are used: B-C outcrop conditions and hard rock outcrop conditions. The rock outcrop model is considered a geologically realistic model where a velocity gradient, representing a transition zone of partially weathered rock and fractured rock, overlies a rock half-space. Synthetic earthquake input motions are generated using the deaggregations from the 2002 National Seismic Hazard Maps, representing the characteristic Charleston source. The U. S. Geological Survey (2002) uniform hazard spectra are used to develop 2% in 50 year probability of exceedance input ground motions for both B-C boundary and hard rock outcrop conditions. An initial analysis was made for all sites using an 8 meter thick velocity gradient for the rock input model. Sensitivity of the models to uncertainty of the weathered zone thickness was assessed by randomizing the thickness of the velocity gradient. The effect of the velocity gradient representing the weathered rock zone increases site response at high frequencies. Both models (B-C outcrop conditions and rock outcrop conditions) are compared with the International Building Code (IBC 2000) maximum credible earthquake spectra. The results for both models exceed the IBC 2000 spectra at some frequencies, between 3 and 10 Hz at all four sites. However, site 2, which classifies as a C site and is therefore assumed to be the most competent of the four sites according to IBC 2000 design procedures, has the highest calculated spectral acceleration of the four sites analyzed. Site 2 has the highest response because a low velocity zone exists at the bottom of the geotechnical profile in immediate contact with the higher velocity rock material, producing a very large impedance contrast. An important shortcoming of the IBC 2000 building code results from the fact that it does not account for cases in which there is a strong rock-soil velocity contrast at depth less than 30 meters. It is suggested that other site-specific parameters, specifically, depth to bedrock and near-surface impedance ratio, should be included in the IBC design procedures. / Master of Science
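For reference, Vs(30) as used above is the time-averaged shear wave velocity over the top 30 meters: total depth divided by total vertical travel time through the layers. A minimal sketch, with an illustrative layered profile rather than the study's actual cone penetrometer data:

```python
def vs30(thicknesses_m, velocities_mps):
    """Time-averaged shear wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / v_i), i.e. depth over travel time."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses_m, velocities_mps):
        h_used = min(h, 30.0 - depth)   # clip the layer at 30 m depth
        travel_time += h_used / v
        depth += h_used
        if depth >= 30.0:
            break
    if depth < 30.0:
        raise ValueError("profile shallower than 30 m")
    return 30.0 / travel_time

# Hypothetical thin coastal-plain column over high-velocity rock:
print(vs30([5, 10, 15, 20], [180, 250, 400, 2500]))  # ~285 m/s
```

NEHRP/IBC site classes are then read off Vs30 ranges (class C spans roughly 360-760 m/s), which is exactly where the abstract's warning applies: a single averaged value can mask a strong impedance contrast just below a thin soil column.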

Partitioning Uncertainty for Non-Ergodic Probabilistic Seismic Hazard Analyses

Dawood, Haitham Mohamed Mahmoud Mousad 29 October 2014
Properly accounting for the uncertainties in predicting ground motion parameters is critical for Probabilistic Seismic Hazard Analyses (PSHA). This is particularly important for critical facilities that are designed for long return period motions. Non-ergodic PSHA is a framework that allows for this proper accounting of uncertainties, which in turn allows for more informed decisions by designers, owners and regulating agencies. The ergodic assumption implies that the standard deviation applicable to a specific source-path-site combination is equal to the standard deviation estimated using a database with multiple source-path-site combinations. Removing the ergodic assumption requires dense instrumental networks operating in seismically active zones so that a sufficient number of recordings are made; only recently, with the advent of networks such as the Japanese KiK-net, has this become possible. This study contributes to the state of the art in earthquake engineering and engineering seismology in general and in non-ergodic seismic hazard analysis in particular. The study is divided into four parts. First, an automated protocol was developed and implemented to process a large database of strong ground motions for GMPE development. A comparison of the records common to the database processed in this study and in other studies showed the viability of the automated algorithm; on the other hand, the automated algorithm yielded narrower usable frequency bandwidths because of the strict criteria adopted for processing the data. Second, an approach to include path-specific attenuation rates in GMPEs was proposed and applied to a subset of the KiK-net database. The attenuation rates across regions that contain volcanoes were found to be higher than in other regions, in line with the observations of other researchers. Moreover, accounting for the path-specific attenuation rates reduced the aleatory variability associated with predicting pseudo-spectral accelerations. Third, two GMPEs were developed for active crustal earthquakes in Japan, following the ergodic and site-specific formulations, respectively. Finally, a comprehensive residual analysis was conducted to find potential biases in the residuals and to propose models that predict some components of variability as a function of input parameters. / Ph. D.
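The variance partition underlying the ergodic discussion above is commonly written sigma^2 = tau^2 + phi^2 (between-event plus within-event), with a further site-to-site term split out in non-ergodic work. A minimal sketch of such a decomposition by simple group means — a stand-in for the mixed-effects regression typically used, with illustrative variable names:

```python
import numpy as np

def partition_residuals(resid, event_ids, station_ids):
    """Split total GMPE residuals into between-event terms (dB),
    systematic site terms (dS2S), and the leftover single-station
    within-event residual (dWS)."""
    resid = np.asarray(resid, float)
    ev, st = np.asarray(event_ids), np.asarray(station_ids)
    dB = np.zeros_like(resid)
    for e in np.unique(ev):
        dB[ev == e] = resid[ev == e].mean()   # between-event term
    dW = resid - dB                            # within-event residual
    dS2S = np.zeros_like(resid)
    for s in np.unique(st):
        dS2S[st == s] = dW[st == s].mean()     # repeatable site term
    dWS = dW - dS2S
    # sigma_total^2 ~ tau^2 + phi_S2S^2 + phi_SS^2
    return dB.std(), dS2S.std(), dWS.std()
```

Removing the ergodic assumption amounts to treating dS2S (and analogous path terms) as known corrections for a specific site, leaving only the much smaller single-station variability as aleatory.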

Application of Functional Safety Standards to the Electrification of a Vehicle Powertrain

Neblett, Alexander Mark Hattier 02 August 2018
With the introduction of electronic control units to automotive vehicles, system complexity has increased. With this change in complexity, new standards have been created to ensure safety at the system level for these vehicles. Furthermore, vehicles have become increasingly complex with the push for electrification, which has resulted in the creation of hybrid electric and battery electric vehicles. The goal of this thesis is to provide an example of a hazard and operability analysis as well as a hazard and risk analysis for a hybrid electric vehicle. Additionally, the existing safety standards do not align well with educational prototype vehicles because the standards are designed for corporations. The hybrid vehicle supervisory controller example within this thesis demonstrates how to define a system and then perform system-level analytical techniques to identify potential failures and associated requirements. Ultimately, this analysis yields suggestions on how best to reduce system complexity and improve the system safety of a student-built prototype vehicle. / Master of Science / With the introduction of electronic control units to automotive vehicles, system complexity has increased. With this change in complexity, new standards have been created to ensure safety at the system level for these vehicles. Furthermore, vehicles have become increasingly complex with the push for electrification, which has resulted in the creation of hybrid electric and battery electric vehicles. There are different ways for corporations to demonstrate adherence to these standards; however, it is more difficult for student design projects to follow the same standards. Through the application of hazard and operability analysis and hazard and risk analysis to the hybrid vehicle supervisory controller, an example is provided for future students to follow the guidelines established by the safety standards. The end result is a set of system requirements that improve the safety of the prototype vehicle, with the added benefit of design changes that reduce the complexity of the student project.
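By way of illustration, the hazard and risk analysis step in ISO 26262 — the automotive functional safety standard implied by the thesis title, though not named in the abstract — rates each hazardous event by severity (S), exposure (E) and controllability (C) and maps the combination to an Automotive Safety Integrity Level (ASIL). A sketch using the well-known sum shortcut that reproduces the standard's lookup table:

```python
def asil(severity, exposure, controllability):
    """Map S1-S3, E1-E4, C1-C3 ratings to an ASIL (QM, A, B, C, D).
    Uses the S+E+C sum rule equivalent to the ISO 26262 table:
    sum of 10 -> D, 9 -> C, 8 -> B, 7 -> A, below 7 -> QM."""
    assert 1 <= severity <= 3 and 1 <= exposure <= 4 and 1 <= controllability <= 3
    total = severity + exposure + controllability
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")

# e.g. unintended traction torque at highway speed: S3, E4, C3
print(asil(3, 4, 3))   # -> "ASIL D"
```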

Landslide susceptibility mapping: remote sensing and GIS approach

Tyoda, Zipho 2013
Thesis (MSc)--Stellenbosch University, 2013. / Landslide susceptibility maps are important for development planning and disaster management. The current synthesis of landslide susceptibility maps largely applies GIS and remote sensing techniques. One of the most critical stages in landslide susceptibility mapping is the selection of landslide causative factors and the weighting of the selected factors according to their influence on slope instability. GIS is ideal for deriving static factors, i.e. slope and aspect, and most importantly for the synthesis of landslide susceptibility maps. The integration of landslide causative thematic maps requires the selection of a weighting method in order to weight the causative thematic maps according to their influence on slope instability. Landslide susceptibility mapping is based on the assumption that future landslides will occur under circumstances similar to those of historic landslides. The weight of evidence method is ideal for landslide susceptibility mapping, as it calculates the weights of the causative thematic maps using known landslide points. This method was applied in an area within the Western Cape province of South Africa that is known to be highly susceptible to landslide occurrences, achieving a prediction rate of 80.37%. The map combination approach was also applied and achieved a prediction rate of 50.98%. Satellite remote sensing techniques can be used to derive the thematic information needed to synthesize landslide susceptibility maps and to monitor the variable parameters influencing landslide susceptibility. Satellite remote sensing can contribute to landslide investigation at three distinct phases, namely: (1) detection and classification of landslides; (2) monitoring landslide movement and identification of conditions leading up to an event; (3) analysis and prediction of slope failures. Various sources of remote sensing data can contribute to these phases. Although the detection and classification of landslides through remote sensing techniques is important for defining landslide controlling parameters, the ideal is to use remote sensing data for monitoring areas susceptible to landslide occurrence in an effort to provide early warning. In this regard, optical remote sensing data was used successfully to monitor the variable conditions (vegetation health and productivity) that make an area susceptible to landslide occurrence.
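The weight of evidence calculation referred to above derives, for each causative factor class, positive and negative weights from the overlap of known landslide cells with the factor map. A minimal sketch assuming boolean raster inputs:

```python
import numpy as np

def weights_of_evidence(factor_present, landslide):
    """W+ and W- for one factor class from boolean rasters.
    W+ = ln[P(class | slide) / P(class | no slide)], similarly W-
    for absence; contrast C = W+ - W- summarises the association.
    Assumes non-degenerate counts (no empty cells in the 2x2 table)."""
    f = np.asarray(factor_present, bool).ravel()
    d = np.asarray(landslide, bool).ravel()
    n1 = np.sum(f & d)       # landslide cells inside the class
    n2 = np.sum(~f & d)      # landslide cells outside the class
    n3 = np.sum(f & ~d)      # stable cells inside the class
    n4 = np.sum(~f & ~d)     # stable cells outside the class
    w_plus = np.log((n1 / (n1 + n2)) / (n3 / (n3 + n4)))
    w_minus = np.log((n2 / (n1 + n2)) / (n4 / (n3 + n4)))
    return w_plus, w_minus, w_plus - w_minus
```

Summing the weights of all causative thematic maps cell by cell then ranks the terrain, and the prediction rates quoted above measure how well that ranking captures the known landslides.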

Comprehensive Seismic Hazard Analysis of India

Kolathayar, Sreevalsa January 2012
Planet Earth is restless; its internal activity produces vibrations that can lead to natural hazards, and earthquakes are among the natural hazards that have affected mankind most. Most casualties due to earthquakes occur not because of the earthquakes as such, but because of poorly designed structures that could not withstand the earthquake forces. Improper building construction techniques and high population density are the major causes of the heavy damage due to earthquakes. This damage can be reduced by following proper construction techniques, taking into consideration the appropriate forces on the structure that future earthquakes can cause. Seismic hazard evaluation is essential to estimate an optimal and reliable value of possible earthquake ground motion during a specific time period. These predicted values can be an input for assessing the seismic vulnerability of an area, on which basis new construction and the restoration of existing structures can be carried out. A large number of devastating earthquakes have occurred in India in the past. The northern region of India, along the boundary of the Indian plate with the Eurasian plate, is seismically very active. The northeastern movement of the Indian plate has caused deformation in the Himalayan region, Tibet and Northeast India. Along the Himalayan belt, the Indian and Eurasian plates converge at a rate of about 50 mm/year (Bilham 2004; Jade 2004). The North East Indian (NEI) region is known as one of the most seismically active regions in the world. Peninsular India, far from the plate boundary, is a stable continental region considered to be of moderate seismic activity; even so, one of the world's deadliest earthquakes occurred in this region (Bhuj earthquake, 2001). The rapid drifting of the Indian plate towards the Himalayas in the northeast direction at high velocity, along with its low plate thickness, might be the cause of the high seismicity of the Indian region. The Bureau of Indian Standards published a seismic zonation map in 1962 and revised it in 1966, 1970, 1984 and 2002. The latest version of the seismic zoning map of India assigns four levels of seismicity to the entire country in terms of different zone factors. The main drawback of the Indian seismic code (BIS-1893, 2002) is that it is based on past seismic activity and not on a scientific seismic hazard analysis. Several seismic hazard studies taken up in recent years have shown that the hazard values given by BIS-1893 (2002) need to be revised (Raghu Kanth and Iyengar 2006; Vipin et al. 2009; Mahajan et al. 2009, etc.). These facts necessitate a comprehensive study evaluating the seismic hazard of India and the development of a seismic zonation map of India based on Peak Ground Acceleration (PGA) values. The objective of this thesis is to estimate the seismic hazard of the whole of India using updated seismicity data and the latest methodologies. The major outcomes of the thesis can be summarized as follows. An updated earthquake catalog, uniform in moment magnitude, has been prepared for India and adjoining areas for the period up to 2010. Region-specific magnitude scaling relations have been established for the study region, which facilitated the generation of a homogeneous earthquake catalog.
By carefully converting the original magnitudes to unified MW magnitudes, we have removed a major obstacle to consistent assessment of seismic hazards in India. The earthquake catalog was declustered to remove aftershocks and foreshocks. Of the 203448 events in the raw catalog, 75.3% were found to be dependent events; the remaining 50317 events were identified as main shocks, of which 27146 were of MW ≥ 4. A completeness analysis of the catalog was carried out to estimate the completeness periods of different magnitude ranges. The earthquake catalog containing the details of the earthquake events until 2010 is available on the website http://civil.iisc.ernet.in/~sitharam. A quantitative study of the spatial distribution of the seismicity rate across India and its vicinity has been performed. The lower b values obtained in shield regions imply that the energy released in these regions is mostly from large magnitude events. The b value of northeast India and the Andaman-Nicobar region is around unity, which implies that the energy released is compatible for both smaller and larger events. The effect of aftershocks on the seismicity parameters was also studied. Maximum likelihood estimates of the b value from the raw and declustered earthquake catalogs show significant changes, with foreshocks and aftershocks contributing a larger proportion of low magnitude events. The inclusion of dependent events in the catalog affects the relative abundance of low and high magnitude earthquakes; greater inclusion of dependent events leads to higher b values and a higher activity rate. Hence, the seismicity parameters obtained from the declustered catalog are valid, as they tend to follow a Poisson distribution. Mmax does not change significantly, since it depends on the largest observed magnitude rather than on the inclusion of dependent events (foreshocks and aftershocks). The spatial variation of the seismicity parameters can be used as a basis for identifying regions of similar characteristics and for delineating regional seismic source zones. Further, regions of similar seismicity characteristics were identified based on fault alignment, earthquake event distribution and the spatial variation of seismicity parameters. 104 regional seismic source zones were delineated; these are an essential input to seismic hazard analysis. Separate subsets of the catalog were created for each of these zones and seismicity analysis was done for each zone after estimating the cutoff magnitude. The frequency-magnitude distribution plots of all the source zones can be found at http://civil.iisc.ernet.in/~sitharam. There is considerable variation in the seismicity parameters and the magnitude of completeness across the study area. The b values for various regions vary from a low of 0.5 to a high of 1.5, and the a values for different zones vary from a low of 2 to a high of 10. The analysis of seismicity parameters shows that there is considerable difference in the earthquake recurrence rate and Mmax across India.
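The maximum likelihood b value mentioned above is conventionally the Aki (1965) estimator with Utsu's correction for binned magnitudes. A minimal sketch, assuming 0.1-unit magnitude bins:

```python
import numpy as np

def gutenberg_richter_mle(magnitudes, m_c, dm=0.1):
    """Aki (1965) maximum likelihood b value with Utsu's correction
    for magnitudes binned at width dm:
        b = log10(e) / (mean(M) - (m_c - dm/2)),  for M >= m_c.
    The a value follows from log10 N(M >= m_c) = a - b * m_c."""
    m = np.asarray(magnitudes, float)
    m = m[m >= m_c]                      # use only the complete part
    b = np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))
    a = np.log10(len(m)) + b * m_c
    return a, b
```

Applying this per source zone, after declustering and cutoff-magnitude estimation as described above, yields the zone-wise a and b values that feed the hazard computation.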
The coordinates of these source zones and the estimated seismicity parameters a, b and Mmax can be directly input into probabilistic seismic hazard analysis. The seismic hazard of the Indian landmass has been evaluated with a state-of-the-art Probabilistic Seismic Hazard Analysis (PSHA) using the classical Cornell–McGuire approach with different source models and attenuation relations. The most recent knowledge of seismic activity in the region has been used to evaluate the hazard, incorporating uncertainty associated with different modeling parameters as well as spatial and temporal uncertainties. The PSHA has been performed with currently available data and their best possible scientific interpretation, using an appropriate instrument such as the logic tree to explicitly account for epistemic uncertainty by considering alternative models (source models, maximum magnitude in hazard computations, and ground-motion attenuation relationships). The hazard maps have been produced for horizontal ground motion at bedrock level (shear wave velocity ≥ 3.6 km/s) and compared with earlier studies such as Bhatia et al., 1999 (India and adjoining areas); Seeber et al., 1999 (Maharashtra state); Jaiswal and Sinha, 2007 (Peninsular India); Sitharam and Vipin, 2011 (South India); and Menon et al., 2010 (Tamilnadu). It was observed that the seismic hazard is moderate in the Peninsular shield (except the Kutch region of Gujarat), but very high in North and Northeast India and the Andaman-Nicobar region. The ground motion predicted in the present study will not only give hazard values for the design of structures, but will also help in deciding the locations of important structures such as nuclear power plants. The evaluation of surface level PGA values is of very high importance in engineering design. Surface level PGA values were evaluated for the entire study area for four NEHRP site classes using appropriate amplification factors. If the site class at any location in the study area is known, the ground level PGA value can be read from the respective map; in the absence of VS30 values, the site class can be identified from local geological conditions. This provides a simplified methodology for evaluating surface level PGA values. The PGA values for the different site classes were evaluated based on the values obtained from both the DSHA and the PSHA. This thesis also presents a VS30 characterization of the entire country based on topographic gradient using existing correlations; a surface level PGA contour map was developed based on it. Liquefaction is the conversion of formerly stable cohesionless soils to a fluid mass due to an increase in pore pressure, and is prominent in areas that have groundwater near the surface and sandy soil. Soil liquefaction has been observed during earthquakes because the sudden dynamic earthquake load increases the pore pressure. The evaluation of liquefaction potential involves evaluating the earthquake loading and the soil resistance to liquefaction. In the present work, the spatial variation of the SPT value required to prevent liquefaction has been estimated for the whole of India using a probabilistic methodology. To summarize, the major contributions of this thesis are the development of region-specific magnitude correlations suitable for the Indian subcontinent and an updated homogeneous earthquake catalog for India that is uniform in moment magnitude scale.
The delineation and characterization of regional seismic source zones for a vast country like India is a unique contribution requiring careful observation and engineering judgement. Considering the complex seismotectonic setup of the country, the present work employed multiple methodologies (DSHA and PSHA) in analyzing the seismic hazard, using an instrument such as the logic tree to explicitly account for epistemic uncertainties by considering alternative models (source models, Mmax estimation and ground motion prediction equations) to estimate the PGA value at bedrock level. Further, a VS30 characterization of India was done based on topographic gradient, as a first-level approach, which facilitated the development of a surface level PGA map for the entire country using appropriate amplification factors. These factors make the present work unique and comprehensive, touching various aspects of seismic hazard. It is hoped that the methodology and outcomes presented in this thesis will be beneficial to practicing engineers and researchers working in the areas of seismology and geotechnical engineering in particular, and to society as a whole.
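The classical Cornell–McGuire integration named above can be sketched as a discrete sum over magnitude and distance bins with a lognormal ground-motion model; the recurrence rate, bin probabilities and GMPE coefficients below are illustrative placeholders, not values from the thesis:

```python
import numpy as np
from scipy.stats import norm

def hazard_curve(pga_levels, nu, mags, p_m, dists, p_r, gmpe, sigma_ln):
    """Annual exceedance rate:
    lambda(PGA > x) = nu * sum_m sum_r P(m) P(r) P(PGA > x | m, r)."""
    lam = np.zeros_like(pga_levels, dtype=float)
    for m, pm in zip(mags, p_m):
        for r, pr in zip(dists, p_r):
            ln_med = gmpe(m, r)   # ln of the median PGA (g)
            p_exc = 1.0 - norm.cdf(np.log(pga_levels), ln_med, sigma_ln)
            lam += nu * pm * pr * p_exc
    return lam

gmpe = lambda m, r: -1.0 + 1.2 * m - 1.5 * np.log(r + 10.0)  # placeholder
x = np.logspace(-2, 0, 30)                                   # 0.01 g to 1 g
lam = hazard_curve(x, nu=0.05, mags=[5.5, 6.5, 7.5], p_m=[0.7, 0.25, 0.05],
                   dists=[20, 60, 120], p_r=[0.3, 0.4, 0.3],
                   gmpe=gmpe, sigma_ln=0.6)
# 2%-in-50-yr motion: the x where lam ~= -ln(0.98)/50 ~ 4.0e-4 per year
```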

Assessment Of Seismic Hazard With Local Site Effects: Deterministic And Probabilistic Approaches

Vipin, K S 12 1900 (has links)
Many researchers have pointed out that the accumulation of strain energy in the Peninsular Indian Shield region may lead to earthquakes of significant magnitude (Srinivasan and Sreenivas, 1977; Valdiya, 1998; Purnachandra Rao, 1999; Seeber et al., 1999; Ramalingeswara Rao, 2000; Gangrade and Arora, 2000). However, very few studies have been carried out to quantify the seismic hazard of the entire Peninsular Indian region. In the present study the seismic hazard of the South Indian region (8.0° N - 20° N; 72° E - 88° E) was evaluated using both deterministic and probabilistic approaches. Two important geotechnical effects of seismic hazard, site response and liquefaction, have also been evaluated and the results are presented in this work. The peak ground acceleration (PGA) at ground surface level was evaluated by considering local site effects. The liquefaction potential index (LPI) and the factor of safety against liquefaction were evaluated using a performance-based liquefaction potential evaluation method. The first step in the seismic hazard analysis is to compile the earthquake catalogue. Since a comprehensive catalogue was not available for the region, it was compiled by collecting data from different national agencies (Gauribidanur Array, Indian Meteorological Department (IMD), National Geophysical Research Institute (NGRI) Hyderabad and Indira Gandhi Centre for Atomic Research (IGCAR) Kalpakkam, etc.) and international agencies (Incorporated Research Institutions for Seismology (IRIS), International Seismological Centre (ISC), United States Geological Survey (USGS), etc.). The collected data were in different magnitude scales and hence were converted to a single scale, the moment magnitude scale, since it is the most widely used and most advanced scientific magnitude scale. The earthquake catalogue was declustered to remove related events, and the completeness of the catalogue was analysed using the method suggested by Stepp (1972). Based on the complete part of the catalogue, the seismicity parameters were evaluated for the study area. Another important step in seismic hazard analysis is the identification of vulnerable seismic sources. The types of seismic sources considered are (i) linear sources, (ii) point sources and (iii) areal sources. The linear seismic sources were identified based on the seismotectonic atlas published by the Geological Survey of India (SEISAT, 2000): the required pages of SEISAT (2000) were scanned and georeferenced, the declustered earthquake data were superimposed on them, and the sources associated with earthquakes of magnitude 4 and above were selected for further analysis. The point sources were selected using a method similar to that adopted by Costa et al. (1993) and Panza et al. (1999), and the areal sources were identified based on the method proposed by Frankel et al. (1995). In order to map the attenuation properties of the region more precisely, three attenuation relations, viz. Toro et al. (1997), Atkinson and Boore (2006) and Raghu Kanth and Iyengar (2007), were used in this study. The two types of uncertainties encountered in seismic hazard analysis are aleatory and epistemic. Aleatory variability arises from the randomness in the data and accounts for the randomness associated with the results given by a particular model.
Epistemic (modeling) uncertainty arises from incomplete knowledge in the predictive models. The aleatory variability of the attenuation relations is taken into account in the probabilistic seismic hazard analysis by considering the standard deviation of the model error, while the epistemic uncertainty is treated by using multiple models in the evaluation of seismic hazard and combining them using a logic tree. Two methodologies, deterministic and probabilistic, were used in the evaluation of seismic hazard. For the evaluation of peak horizontal acceleration (PHA) and spectral acceleration (Sa) values, a new set of programs was developed in MATLAB and the entire analysis was done using these programs. In the deterministic seismic hazard analysis (DSHA), two types of seismic sources, viz. linear and point sources, were considered together with the three attenuation relations. The study area was divided into small grid cells of 0.1° x 0.1° (about 12000 grid points) and the mean and 84th percentile PHA and Sa values were evaluated at the centre of each cell. A logic tree approach, using the two source types and three attenuation relations, was adopted for the evaluation of PHA and Sa values. A logic tree permits the use of alternative models in the hazard evaluation, with an appropriate weightage assigned to each model, and by evaluating the 84th percentile values the uncertainty in spectral acceleration can also be considered (Krinitzky, 2002). The spatial variations of PHA and Sa values for the whole of South India are presented in this work. The DSHA method does not consider the uncertainties involved in the earthquake recurrence process, hypocentral distance and attenuation properties. Hence the hazard was also evaluated using probabilistic seismic hazard analysis (PSHA), in which the PHA and Sa values were obtained by considering the uncertainties involved in the earthquake occurrence process: the uncertainties in earthquake recurrence rate, hypocentral location and attenuation characteristics were considered in this study. For evaluating the seismicity parameters and the maximum expected earthquake magnitude (mmax), the study area was divided into different source zones. The division was based on the spatial variation of the seismicity parameters 'a' and 'b', and the mmax values were evaluated for each of these zones and used in the analysis. A logic tree approach was again adopted, permitting the use of multiple models: twelve different models (2 sources x 2 zones x 3 attenuation relations) were used in the analysis, and based on the weightage of each, the final PHA and Sa values at bedrock level were evaluated. These values were evaluated on the 0.1° x 0.1° grid, and their spatial variation for return periods of 475 and 2500 years (10% and 2% probability of exceedance in 50 years) is presented in this work. Both the deterministic and probabilistic analyses highlighted that the seismic hazard is high in the Koyna region. The PHA values obtained for the Koyna, Bangalore and Ongole regions are higher than the values given by BIS-1893 (2002), whereas the southwestern part of the study area, especially parts of Kerala, shows PHA values lower than those provided in BIS-1893 (2002). The 84th percentile values given by the DSHA can be taken as upper bound PHA and Sa values for South India.
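The twelve-branch logic tree described above (2 sources x 2 zones x 3 attenuation relations) reduces to a weighted combination of branch hazard curves; a minimal sketch, with equal node weights assumed purely for illustration:

```python
import numpy as np

def combine_logic_tree(branch_curves, weights):
    """Weighted mean hazard curve over logic tree branches.
    branch_curves: (n_branches, n_levels) exceedance rates;
    weights: one weight per branch, summing to 1."""
    curves = np.asarray(branch_curves, float)
    w = np.asarray(weights, float)
    assert np.isclose(w.sum(), 1.0), "branch weights must sum to 1"
    return w @ curves   # percentile curves can be read off the branch set

# Branch weights as products of per-node weights (2 x 2 x 3 = 12 branches):
w = np.einsum("i,j,k->ijk", [0.5, 0.5], [0.5, 0.5],
              [1/3, 1/3, 1/3]).ravel()
```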
The main geotechnical aspects of earthquake hazard are site response and seismic soil liquefaction. As seismic waves travel from the bedrock through the overlying soil to the ground surface, the PHA and Sa values change; this amplification or de-amplification depends on the type of overlying soil. The site class can be assessed using different site classification schemes. In the present work, surface level peak ground acceleration (PGA) values were evaluated for the four site classes suggested by NEHRP (BSSC, 2003), using a non-linear site amplification technique. Based on geotechnical site investigation data, the site class can be determined and the appropriate PGA and Sa values taken from the respective maps. Response spectra were developed for the entire study area and the results obtained for three major cities are discussed here. Various codes suggest different methods to smooth the response spectra; smoothed design response spectra were developed for these cities based on the smoothing techniques given by NEHRP (BSSC, 2003), the IS code (BIS-1893, 2002) and Eurocode-8 (2003), and a comparison of the results is presented in this work. If the site class at any location in the study area is known, the peak ground acceleration (PGA) value can be obtained from the respective map, providing a simplified methodology for evaluating PGA values over a vast area like South India. Since the surface level PGA values were evaluated for generic site classes, the effects of surface topography and basin effects were not taken into account. The analysis of response spectra clearly indicates the variation of peak spectral acceleration values for different site classes and the variation of the period of oscillation corresponding to the maximum Sa values. The comparison of the smoothed design response spectra obtained using the different codal provisions favours the NEHRP (BSSC, 2003) provisions. Conventional liquefaction analysis takes into account only a single earthquake magnitude and ground acceleration value. To overcome this shortfall, a performance-based probabilistic approach (Kramer and Mayfield, 2007) was adopted for the liquefaction potential evaluation in the present work. Based on this method, the factor of safety against liquefaction and the SPT values required to prevent liquefaction for return periods of 475 and 2500 years were evaluated for Bangalore city, using SPT data from 450 boreholes across the city. A new method to evaluate the liquefaction return period based on CPT values is proposed in this work; to validate it, an analysis was done for Bangalore by converting the SPT values to CPT values and comparing the results with those obtained using SPT values. The factors of safety against liquefaction at different depths were integrated using the liquefaction potential index (LPI) method for Bangalore: the factor of safety values at different depths were calculated using the performance-based method and the LPI values were then evaluated. The entire liquefaction potential analysis and the evaluation of LPI values were done using a set of newly developed MATLAB programs.
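The LPI integration mentioned above is conventionally Iwasaki's depth-weighted integral of the factor of safety deficit; a minimal sketch over a discretised profile (the exact weighting used in the thesis is not stated, so the standard form is assumed):

```python
import numpy as np

def lpi(depths_m, fs):
    """Liquefaction Potential Index (Iwasaki et al.):
    LPI = integral over 0-20 m of F(z) * w(z) dz, with
    F = 1 - FS where FS < 1 (else 0) and w(z) = 10 - 0.5 z."""
    z = np.asarray(depths_m, float)
    F = np.clip(1.0 - np.asarray(fs, float), 0.0, 1.0)
    F[z > 20.0] = 0.0                       # only the top 20 m contributes
    w = np.maximum(10.0 - 0.5 * z, 0.0)
    return np.trapz(F * w, z)

# Hypothetical FS-vs-depth profile from a performance-based analysis:
print(lpi([1, 3, 5, 8, 12, 16, 20], [0.8, 0.9, 1.1, 0.95, 1.3, 1.6, 2.0]))
```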
Based on the above approaches it is possible to evaluate the SPT and CPT values required to prevent liquefaction for any given return period. Such an analysis was done for the whole of South India for return periods of 475 and 2500 years, and the spatial variations of these values are presented in this work. The liquefaction potential analysis of Bangalore clearly indicates that the majority of the area is safe against liquefaction. The liquefaction potential maps developed for South India, based on both SPT and CPT values, will help hazard mitigation authorities to identify liquefaction-vulnerable areas, which in turn will help in reducing the liquefaction hazard.

Faulty Measurements and Shaky Tools: An Exploration into Hazus and the Seismic Vulnerabilities of Portland, OR

Brannon, Brittany Ann 27 August 2013
Events or forces of nature with catastrophic consequences, or "natural disasters," have increased in both frequency and force due to climate change and increased urbanization in climate-sensitive areas. To create capacity to face these dangers, an entity must first quantify the threat and translate scientific knowledge about nature into comprehensible estimates of cost and loss. These estimates equip those at risk with the knowledge to enact policy, formulate mitigation plans, raise awareness, and promote preparedness in light of potential destruction. Hazards-United States, or Hazus, is one such tool created by the federal government to estimate loss from a variety of threats, including earthquakes, hurricanes, and floods. Private and governmental agencies use Hazus to provide information and support to enact mitigation measures, craft plans, and create insurance assessments; hence the results of Hazus can have lasting and irreversible effects once the hazard in question occurs. This thesis addresses this problem and sheds light on the obvious and deterministic failings of Hazus in the context of the probable earthquake in Portland, OR, stripping away the tool's black box and exposing the grim vulnerabilities it fails to account for. The purpose of this thesis is twofold. First, it examines the critical flaws within Hazus and the omitted vulnerabilities particular to the Portland region and likely relevant in other areas of study. Second, and more nationally applicable, it examines the influence Hazus outputs can have on the framing of seismic risk by the non-expert public. Combining the problem of inadequate understanding of risk in Portland with the questionable faith placed in Hazus points to a larger socio-technical situation in need of attention from the academic and hazard mitigation communities. This thesis addresses these issues and adds to the growing body of literature on defining risk, hazard mitigation, and the consequences of natural disasters for urban environments.

An explicit finite difference method for analyzing hazardous rock mass

Basson, Gysbert 2011
Thesis (MSc)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: FLAC3D is a three-dimensional explicit finite difference program for solving a variety of solid mechanics problems, both linear and non-linear. The development of the algorithm and its initial implementation were performed by Itasca Consulting Group Inc. The main idea of the algorithm is to discretise the domain of interest into a Lagrangian grid where each cell represents an element of the material. Each cell can then deform according to a prescribed stress/strain law together with the equations of motion. An in-depth study of the algorithm was performed and implemented in Java. During the implementation, it was observed that the type of boundary condition typically used has a major influence on the accuracy of the results, especially when boundaries are close to regions with large stress variations, such as mining excavations. To improve the accuracy of the algorithm, a new type of boundary condition was developed in which the FLAC3D domain is embedded in a linear elastic material, named the Boundary Node Shell (BNS). Using the BNS shows a significant improvement in results close to excavations. The FLAC algorithm is also quite amenable to parallelization, and a multi-threaded version that makes use of multiple Central Processing Unit (CPU) cores was developed to optimize the speed of the algorithm. The final outcome is new non-commercial Java source code (JFLAC), which includes the Boundary Node Shell (BNS) and shared-memory parallelism over and above the basic FLAC3D algorithm.
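The core of a FLAC-style explicit scheme as described above alternates a constitutive stress update with a momentum (velocity) update on a staggered grid. A one-dimensional elastic sketch for illustration only (Python rather than Java, and omitting the BNS and parallelism):

```python
import numpy as np

def explicit_1d_step(vel, stress, rho, E, dx, dt):
    """One explicit time step on a 1D elastic bar: nodes carry velocity,
    cells carry stress. Constitutive law: d(stress) = E * strain_rate * dt;
    equation of motion: rho * dv/dt = d(stress)/dx."""
    strain_rate = np.diff(vel) / dx        # one value per cell
    stress += E * strain_rate * dt         # elastic stress update
    f = np.zeros_like(vel)
    f[1:-1] = np.diff(stress) / dx         # internal force per unit volume
    vel += f / rho * dt                    # fixed ends: f[0] = f[-1] = 0
    return vel, stress

# Stability requires dt < dx / sqrt(E / rho), the elastic wave speed limit.
```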

Seismic drift assessment of buildings in Hong Kong with particular application to transfer structures

Li, Jianhui, 李建輝 January 2004
Civil Engineering / Doctoral / Doctor of Philosophy

Análise Preliminar de Perigos (APP) em projetos de arquitetura: aplicação e teste de viabilidade da ferramenta de análise de risco / Preliminary Hazard Analysis (PHA) applied to architecture design level: application and test of risk analysis tools

Rubia da Eucaristia Barretto 18 March 2008
This Master's thesis tests the viability of applying the Preliminary Hazard Analysis (PHA) tool at the architecture design level. The conceptual bases were drawn from the risk analysis of industrial processes and from workplace safety. The structure of the analysis was based on the international standard ISO 6241 (International Organization for Standardization - Performance standards in building - Principles for their preparation and factors to be considered), which guides the performance evaluation of a building, its elements and installations under normal conditions of exposure and use. The performance requirements adopted were fire safety and safety in use, the latter with emphasis on accessibility. The criteria applied were guided by São Paulo State Decree no. 46,076 of August 31, 2001, and by the Brazilian standard ABNT NBR 9050:2004. Adapting PHA for use at the architecture design level involved structuring the analysis categories, adapting the composition of the PHA, systematising and codifying checklists, using Excel, and formatting questionnaires whose importance levels were validated by specialists. The tool was tested on two buildings of the USP/Leste campus of the University of São Paulo. The use of Excel macros facilitated the prioritisation of points of interest in the decision-making process, both for the designer (a risk matrix expressed through action levels), indicating the situations relevant to the continuous improvement of the design, and for the project manager (degree of significance), providing representative information for decisions. In the projects analysed, the failures concerned material specifications, rated as substantial for the designer, while spatial distribution versus functionality was of moderate importance in emergency situations. At the current stage of the tool's development, the analyst needs skills such as critical judgement in comparing the prescriptive (laws, standards and regulations) with the actual design proposal, and the ability to recognise the critical paths among the various elements of the subsystems that contribute to generating conflicts, deviations and failures in the design proposal. This study confirms the viability of applying PHA at the architecture design level; however, an application is needed that systemically integrates a prescriptive and graphic database, allows association between the subsystems and their performance interfaces, and is easy to use.
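The risk-matrix-to-action-level mapping described above can be sketched as a severity x likelihood lookup; the bands and labels below are illustrative assumptions, not the thesis's validated questionnaire values:

```python
# Illustrative PHA risk matrix: severity (1-4) x likelihood (1-4) -> action level.
ACTION = {1: "negligible - document only",
          2: "moderate - review in next design revision",
          3: "substantial - revise the design solution",
          4: "intolerable - redesign before approval"}

def action_level(severity, likelihood):
    """Map a hazard's severity and likelihood to an action level band."""
    score = severity * likelihood            # 1..16
    band = 1 + (score > 2) + (score > 6) + (score > 11)
    return ACTION[band]

print(action_level(3, 4))   # e.g. a fire-egress failure -> "intolerable - ..."
```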
