1

Seismic Vulnerability Assessment of a Shallow Two-Story Underground RC Box Structure

Huh, Jungwon, Tran, Quang, Haldar, Achintya, Park, Innjoon, Ahn, Jin-Hee 18 July 2017 (has links)
Tunnels, culverts, and subway stations are main parts of an integrated infrastructure system. Most of them are constructed by the cut-and-cover method at shallow depths (mainly less than 30 m) in soil deposits, where large-scale seismic ground deformation can occur because of the lower stiffness and strength of the soil. Therefore, transverse racking deformation (one of the major forms of seismic ground deformation) due to soil shear deformation should be included in the seismic design of underground structures, using cost- and time-efficient methods that achieve robustness of design and are easily understood by engineers. This paper aims to develop a simplified but comprehensive approach to vulnerability assessment, in the form of fragility curves, for a shallow two-story reinforced concrete underground box structure constructed in highly weathered soil. In addition, results obtained with different numbers of earthquakes per peak ground acceleration (PGA) level are compared to determine an effective and appropriate number for a cost- and time-effective analysis. The ground response acceleration method for buried structures (GRAMBS) is used to analyze the behavior of the structure subjected to transverse seismic loading under quasi-static conditions. Furthermore, the damage states that indicate exceedance of the structural strength capacity are defined from the results of nonlinear static (pushover) analyses. The Latin hypercube sampling technique is employed to consider the uncertainties associated with the material properties and concrete cover owing to variations in construction conditions. Finally, a large number of artificial ground motions satisfying the design spectrum are generated in order to develop seismic fragility curves based on the defined damage states. It is worth noting that 20 or more ground motions per PGA level is a reasonable number for producing satisfactory fragility curves.
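The paper's exact fitting procedure is not reproduced here, but a common way to turn per-PGA damage-exceedance counts like those described above into a fragility curve is a maximum-likelihood fit of a lognormal CDF. The sketch below uses synthetic counts and illustrative PGA levels; only the 20-motions-per-PGA figure comes from the abstract.

```python
# A minimal sketch (not the paper's code) of fitting a lognormal fragility curve
# from binary damage-state exceedance outcomes observed at several PGA levels.
# The PGA levels and exceedance counts below are illustrative assumptions.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

pga = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])     # PGA levels (g), hypothetical
n_motions = np.full_like(pga, 20, dtype=int)        # >= 20 motions per PGA, per the study
n_exceed = np.array([1, 4, 9, 14, 17, 19])          # motions exceeding a damage state (synthetic)

def neg_log_like(params):
    """Negative binomial log-likelihood of a lognormal fragility curve."""
    median, beta = params
    p = norm.cdf(np.log(pga / median) / beta)       # P(damage state exceeded | PGA)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(n_exceed * np.log(p) + (n_motions - n_exceed) * np.log(1 - p))

res = minimize(neg_log_like, x0=[0.3, 0.5], method="Nelder-Mead",
               bounds=[(1e-3, 5.0), (0.05, 2.0)])
median, beta = res.x
print(f"fragility median ~ {median:.3f} g, dispersion beta ~ {beta:.3f}")
```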
2

Evaluating Parameter Uncertainty in Transportation Demand Models

Gray, Natalie Mae 12 June 2023 (has links) (PDF)
The inherent uncertainty in travel forecasting models -- arising from errors in input data, parameter estimation, or model formulation -- is receiving increasing attention from the scholarly and practicing communities. In this research, we investigate the variance in forecasted traffic volumes resulting from varying the mode and destination choice parameters in an advanced trip-based travel demand model. Using Latin hypercube sampling to construct several hundred combinations of parameters across the plausible parameter space, we introduce substantial changes to mode and destination choice logsums and probabilities. However, the aggregate effect of these changes on forecasted traffic volumes is small, with a variance of approximately 1 percent on high-volume facilities. Thus, parameter uncertainty does not appear to be a significant factor in forecasting traffic volumes with transportation demand models.
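As a rough illustration of the sampling step described above, the sketch below draws several hundred Latin hypercube combinations over an assumed parameter box and pushes them through a placeholder model; the parameter names, bounds, and run_model function are hypothetical stand-ins, not the study's trip-based model.

```python
# A minimal sketch, assuming hypothetical mode/destination-choice parameter bounds,
# of drawing several hundred Latin hypercube combinations across a plausible space.
# run_model() is a toy placeholder for the travel demand model, not real code from the study.
import numpy as np
from scipy.stats import qmc

param_names = ["ivt_coef", "cost_coef", "dest_size_coef"]   # illustrative parameters
l_bounds = [-0.06, -0.008, 0.5]                             # assumed plausible lower bounds
u_bounds = [-0.02, -0.002, 1.5]                             # assumed plausible upper bounds

sampler = qmc.LatinHypercube(d=len(param_names), seed=42)
designs = qmc.scale(sampler.random(n=300), l_bounds, u_bounds)

def run_model(params):
    # Placeholder "model": returns a toy forecast link volume for one parameter set.
    ivt, cost, size = params
    return 10_000.0 * np.exp(5.0 * (ivt + 0.04) - 200.0 * (cost + 0.005) + 0.02 * (size - 1.0))

volumes = np.array([run_model(p) for p in designs])
print("coefficient of variation of forecast volume:", volumes.std() / volumes.mean())
```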
3

Machine Learning from Computer Simulations with Applications in Rail Vehicle Dynamics and System Identification

Taheri, Mehdi 01 July 2016 (has links)
The application of stochastic modeling for learning the behavior of multibody dynamics models is investigated. The stochastic modeling technique is also known as Kriging or the random function approach. Post-processing data from a simulation run is used to train the stochastic model, which estimates the relationship between model inputs, such as the suspension relative displacement and velocity, and the output, for example, the sum of suspension forces. The computational efficiency of multibody dynamics (MBD) models can be improved by replacing their computationally intensive subsystems with stochastic predictions. The stochastic modeling technique is able to learn the behavior of a physical system and integrate that behavior into MBD models, resulting in improved real-time simulations and reduced computational effort in models with repeated substructures (for example, modeling a train with a large number of rail vehicles). Since the sampling plan greatly influences the overall accuracy and efficiency of the stochastic predictions, various sampling plans are investigated, and a space-filling Latin hypercube sampling plan based on the traveling salesman problem (TSP) is suggested for efficiently representing the entire parameter space. The simulation results confirm the expected increase in modeling efficiency, although further research is needed to improve the accuracy of the predictions. The prediction accuracy is expected to improve through a sampling strategy that considers the discrete nature of the training data and uses infill criteria that consider the shape of the output function and detect sample regions with high prediction errors. It is recommended that future efforts quantify the computational efficiency of the proposed approach by overcoming the inefficiencies associated with transferring data between multiple software packages, which proved to be a limiting factor in this study. These limitations can be overcome by using the user subroutine functionality of SIMPACK and adding the stochastic modeling technique to its force library. / Ph. D.
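A minimal sketch of the Kriging/random-function idea described above, using a Gaussian-process regressor as the surrogate and a synthetic cubic force law in place of real MBD post-processing data; the force model, parameter bounds, and noise level are assumptions for illustration only.

```python
# Sketch: learn the map from suspension relative displacement and velocity to total
# suspension force with a Gaussian-process (Kriging-type) surrogate. Training data
# come from a made-up force law standing in for MBD simulation output.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)
# 100 space-filling training inputs over assumed displacement (m) and velocity (m/s) ranges
X = qmc.scale(qmc.LatinHypercube(d=2, seed=0).random(100), [-0.05, -1.0], [0.05, 1.0])

def suspension_force(x):
    disp, vel = x[:, 0], x[:, 1]
    return 2.0e5 * disp + 1.5e4 * vel + 5.0e6 * disp**3   # toy stand-in for simulation output

y = suspension_force(X) + rng.normal(0.0, 50.0, len(X))    # noisy training samples

kernel = ConstantKernel() * RBF(length_scale=[0.02, 0.5]) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

x_new = np.array([[0.01, 0.2]])
force_hat, force_std = gp.predict(x_new, return_std=True)
print(f"predicted force ~ {force_hat[0]:.1f} N +/- {force_std[0]:.1f}")
```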
4

Alternative Sampling and Analysis Methods for Digital Soil Mapping in Southwestern Utah

Brungard, Colby W. 01 May 2009 (has links)
Digital soil mapping (DSM) relies on quantitative relationships between easily measured environmental covariates and field and laboratory data. We applied innovative sampling and inference techniques to predict the distribution of soil attributes, taxonomic classes, and dominant vegetation across a 30,000-ha complex Great Basin landscape in southwestern Utah. This arid rangeland was characterized by rugged topography, diverse vegetation, and intricate geology. Environmental covariates calculated from digital elevation models (DEM) and spectral satellite data were used to represent the factors controlling soil development and distribution. We investigated the optimal sample size and sampled the environmental covariates using conditioned Latin Hypercube Sampling (cLHS). We demonstrated that cLHS, a type of stratified random sampling, closely approximates the full range of variability of the environmental covariates in feature and geographic space with small sample sizes. Site and soil data were collected at 300 locations identified by cLHS. The random forests algorithm was used to generate spatial predictions and associated probabilities of site and soil characteristics. Balanced random forests and balanced and weighted random forests were investigated for their use in producing an overall soil map. Overall and class errors (referred to as out-of-bag [OOB] errors) were within acceptable levels. Quantitative covariate importance was useful in determining which factors control soil distribution. The random forest spatial predictions were evaluated based on the conceptual framework developed during field sampling.
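The following sketch illustrates the random-forests step on synthetic data: the covariate names, the toy class rule, and the hyperparameters are assumptions, and class_weight="balanced" only approximates the balanced random forests variant investigated in the thesis.

```python
# A minimal sketch: predict a soil class from DEM/spectral covariates at 300 cLHS sites,
# report out-of-bag (OOB) error, and inspect covariate importance. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
covariates = ["elevation", "slope", "twi", "ndvi"]          # illustrative covariate names
X = rng.normal(size=(300, len(covariates)))                 # 300 field sites (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 300) > 0).astype(int)  # toy soil classes

rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                            class_weight="balanced", random_state=0).fit(X, y)
print("OOB error:", 1.0 - rf.oob_score_)
for name, imp in sorted(zip(covariates, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: importance {imp:.2f}")
```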
5

Statistical Yield Analysis and Design for Nanometer VLSI

Jaffari, Javid January 2010 (has links)
Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly impact the performance and power consumption of the fabricated devices, severely degrading the manufacturing yield. Moreover, the large number of transistors on a single chip adds even more challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process. In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation. At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo (MC) method without compromising accuracy. The proposed sampling-based yield analysis methods retain the most appealing feature of the MC method, namely its capability to handle any complex circuit model, while the use and engineering of advanced variance reduction and sampling methods provide ultra-fast yield estimation for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi Monte Carlo. At the device level, a methodology is proposed that introduces a variation-aware design perspective for MOS devices in aggressively scaled geometries. The method introduces a device-level yield measure that targets the saturation and leakage currents of an MOS transistor, and a statistical method is developed to optimize the advanced doping profiles and geometry features of a device for maximum device-level yield. Finally, a statistical thermal analysis framework is proposed that accounts for process and thermal variations simultaneously at the micro-architectural level. The analyzer is built on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself has a probabilistic nature. Therefore, a coupled process-thermal-leakage analysis yields a more reliable full-chip statistical leakage power yield.
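As a small illustration of the sampling-based yield estimation theme, the sketch below compares plain Monte Carlo with scrambled-Sobol Quasi Monte Carlo on a toy path-delay model under threshold-voltage variation; the delay expression, spec, and sigma are invented for the example and are unrelated to the circuits studied in the thesis.

```python
# A minimal sketch: estimate the probability that a toy path delay meets its spec under
# threshold-voltage variation, with plain Monte Carlo and a scrambled-Sobol (QMC) run.
import numpy as np
from scipy.stats import norm, qmc

def path_delay(dvth):
    # Toy delay model: nominal 100 ps plus first-order sensitivity to two devices' delta-Vth (V).
    return 100e-12 + 4e-10 * dvth[:, 0] + 3e-10 * dvth[:, 1]

spec, sigma_vth, n = 110e-12, 20e-3, 1 << 14

# Plain Monte Carlo
mc = np.random.default_rng(0).normal(0.0, sigma_vth, size=(n, 2))
print("MC yield:", np.mean(path_delay(mc) < spec))

# Quasi Monte Carlo: Sobol points mapped through the same Gaussian variation model
u = qmc.Sobol(d=2, scramble=True, seed=0).random_base2(m=14)
qmc_samples = norm.ppf(u) * sigma_vth
print("QMC yield:", np.mean(path_delay(qmc_samples) < spec))
```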
6

Coupled flow systems, adjoint techniques and uncertainty quantification

Garg, Vikram Vinod, 1985- 25 October 2012 (has links)
Coupled systems are ubiquitous in modern engineering and science. Such systems can encompass fluid dynamics, structural mechanics, chemical species transport, and electrostatic effects, among other components, all of which can be coupled in many different ways. In addition, such models are usually multiscale, making their numerical simulation challenging and necessitating the use of adaptive modeling techniques. The multiscale, multiphysics models of electroosmotic flow (EOF) constitute a particularly challenging coupled flow system. A special feature of such models is that the coupling between the electric physics and the hydrodynamics occurs via the boundary. Numerical simulations of coupled systems are typically targeted towards specific Quantities of Interest (QoIs). Adjoint-based approaches offer the possibility of QoI-targeted adaptive mesh refinement and efficient parameter sensitivity analysis. The formulation of appropriate adjoint problems for EOF models is particularly challenging because the physics are coupled via the boundary rather than the interior of the domain, and the well-posedness of the adjoint problem for such models is also non-trivial. One contribution of this dissertation is the derivation of an appropriate adjoint problem for slip EOF models, and the development of penalty-based, adjoint-consistent variational formulations of these models. We demonstrate the use of these formulations in the simulation of EOF in straight and T-shaped microchannels, in conjunction with goal-oriented mesh refinement and adjoint sensitivity analysis. Complex computational models may exhibit uncertain behavior for various reasons, ranging from uncertainty in experimentally measured model parameters to imperfections in device geometry. The last decade has seen a growing interest in the field of Uncertainty Quantification (UQ), which seeks to determine the effect of input uncertainties on the system QoIs. Monte Carlo methods remain a popular computational approach for UQ due to their ease of use and "embarrassingly parallel" nature; however, a major drawback of such methods is their slow convergence rate. The second contribution of this work is the introduction of a new Monte Carlo method which utilizes local sensitivity information to build accurate surrogate models. This new method, called the Local Sensitivity Derivative Enhanced Monte Carlo (LSDEMC) method, can converge at a faster rate than plain Monte Carlo, especially for problems with a low to moderate number of uncertain parameters. Adjoint-based sensitivity analysis methods enable the computation of sensitivity derivatives at virtually no extra cost after the forward solve. Thus, the LSDEMC method, in conjunction with adjoint sensitivity derivative techniques, can offer a robust and efficient alternative for the UQ of complex systems. The efficiency of Monte Carlo methods can be further enhanced by using stratified sampling schemes such as Latin Hypercube Sampling (LHS). However, the non-incremental nature of LHS has been identified as one of the main obstacles to its application to certain classes of complex physical systems. Current incremental LHS strategies restrict the user to at least doubling the size of an existing LHS set to retain the convergence properties of LHS. The third contribution of this research is the development of a new Hierarchical LHS algorithm that creates designs which can be used to perform LHS studies in a more flexible, incremental setting, taking a step towards adaptive LHS methods.
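To make the incremental-LHS limitation concrete, the sketch below implements the standard doubling strategy that existing approaches rely on (and that the Hierarchical LHS algorithm aims to relax): given an n-point design, it adds n points so that the combined set is again a Latin hypercube. This is a generic construction, not the author's algorithm.

```python
# Doubling an existing Latin hypercube design in [0, 1]^d while preserving the LHS property.
import numpy as np

def double_lhs(design, rng):
    """Return n new points so that (design + new points) is a valid 2n-point LHS."""
    n, d = design.shape
    new = np.empty((n, d))
    for j in range(d):
        occupied = np.floor(design[:, j] * 2 * n).astype(int)   # fine strata already used
        free = np.setdiff1d(np.arange(2 * n), occupied)         # exactly n free fine strata
        rng.shuffle(free)
        new[:, j] = (free + rng.random(n)) / (2 * n)            # one new point per free stratum
    return new

rng = np.random.default_rng(0)
n, d = 8, 2
# Base n-point LHS: one point per coarse stratum in every dimension
base = (np.stack([rng.permutation(n) for _ in range(d)], axis=1) + rng.random((n, d))) / n
combined = np.vstack([base, double_lhs(base, rng)])
# Each of the 2n fine strata per dimension now holds exactly one point:
assert all(len(np.unique(np.floor(combined[:, j] * 2 * n).astype(int))) == 2 * n
           for j in range(d))
```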
7

Pravděpodobnostní řešení porušení ochranné hráze v důsledku přelití / The probabilistic solution of dike breaching due to overtopping

Alhasan, Zakaraya January 2017 (has links)
This doctoral thesis deals with the reliability analysis of flood protection dikes by estimating the probability of dike failure. Building on theoretical knowledge, experimental and statistical research, mathematical models, and a field survey, the study extends present knowledge of the reliability analysis of dikes vulnerable to breaching due to overtopping. It contains the results of a probabilistic solution of the breaching of a left-bank dike of the River Dyje at a location adjacent to the village of Ladná near the town of Břeclav in the Czech Republic. Within this work, a mathematical model describing the overtopping and erosion processes was proposed. The dike overtopping is simulated using simple surface hydraulics equations. For modelling the dike erosion, which commences once the erosion resistance of the dike surface is exceeded, simple transport equations were used, with erosion parameters calibrated against data from past real embankment failures. In the context of the analysis of the model, the uncertainty in the input parameters was determined and a sensitivity analysis was subsequently carried out using the screening method. To obtain the probabilistic solution, selected input parameters were treated as random variables with different probability distributions, and the Latin Hypercube Sampling (LHS) method was used to generate the sets of random values for these variables. Four typical phases of dike breaching due to overtopping were distinguished, and the final results of the study take the form of probabilities for these typical dike breach phases.
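A heavily simplified, hypothetical stand-in for this kind of analysis is sketched below: LHS-sampled erosion parameters drive a broad-crested-weir overtopping discharge and an excess-shear erosion law, and each realization is classified by how far the breach advances. All formulas, thresholds, phase labels, and parameter ranges are illustrative assumptions, not the calibrated model from the thesis.

```python
# Toy probabilistic dike-breach sketch: sample erosion parameters with LHS, run a crude
# overtopping/erosion time-stepping model, and tally the probability of each breach phase.
import numpy as np
from scipy.stats import qmc

rho, g, dt, t_end = 1000.0, 9.81, 10.0, 6 * 3600.0   # water density, gravity, time step, flood duration
crest0, toe = 4.0, 0.0                                # initial crest and toe elevation (m), assumed
overflow_head = 0.3                                   # water level above the original crest (m), assumed
slope = 0.05                                          # landside slope, assumed

# LHS over detachability k_d, critical shear stress tau_c, Manning roughness n_man (assumed ranges)
lo, hi = [1e-6, 5.0, 0.02], [1e-5, 60.0, 0.04]
params = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(500), lo, hi)

phases = []
for k_d, tau_c, n_man in params:
    crest = crest0
    for _ in np.arange(0.0, t_end, dt):
        h = overflow_head + (crest0 - crest)          # head over the eroding crest
        q = 1.7 * h ** 1.5                            # broad-crested weir unit discharge (m^2/s)
        depth = (q * n_man / np.sqrt(slope)) ** 0.6   # Manning normal depth on the landside slope
        tau = rho * g * depth * slope                 # bed shear stress on the slope
        crest -= k_d * max(tau - tau_c, 0.0) * dt     # excess-shear erosion of the crest
        if crest <= toe:
            break
    eroded = crest0 - crest
    phases.append(min(int(eroded / crest0 * 4), 3))   # phase thresholds are assumptions

for k, label in enumerate(["surface erosion", "crest lowering", "breach opening", "full breach"]):
    print(f"P({label}) = {(np.array(phases) == k).mean():.2f}")
```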
8

Využití softwarové podpory pro ekonomické hodnocení investičního projektu / Use of Software Support for the Economic Evaluation of the Investment Project

Hortová, Michaela January 2016 (has links)
This thesis presents an economic evaluation case study of the construction of an Ekofarm using the Crystal Ball and Pertmaster Risk Project applications. It describes the fundamental characteristics of the investment project and the methods of its evaluation, and introduces the basic features of both applications for probabilistic risk analysis performed with the Latin Hypercube Sampling simulation method. The case study is described in detail, including the breeding system and the method of financing, which is linked to the calculation of the economic fundamentals and the creation of the project cash flow. The result is a probabilistic analysis produced by the tested software tools, together with its evaluation.
9

Nelineární analýza zatížitelnosti železobetonového mostu / Nonlinear analysis of load-bearing capacity of reinforced concrete bridge

Šomodíková, Martina January 2012 (has links)
The subject of this master's thesis is the determination of bridge load-bearing capacity and a fully probabilistic approach to reliability assessment. It includes a nonlinear analysis of the load-bearing capacity of a specific bridge in compliance with the currently valid standards, together with its stochastic and sensitivity analysis. In connection with the durability limit states of reinforced concrete structures, the influence of carbonation and reinforcement corrosion on the structure's reliability is also addressed.
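As a small, hedged illustration of how carbonation can enter such a probabilistic assessment, the sketch below samples concrete cover and carbonation rate with LHS and uses the common square-root-of-time carbonation model to estimate the probability of depassivation within an assumed service life; the distributions and the model choice are assumptions, not necessarily those used in the thesis.

```python
# Toy sketch: probability that the carbonation front reaches the reinforcement
# (depassivation) within a 100-year service life, under x_c(t) = K * sqrt(t).
import numpy as np
from scipy.stats import qmc, norm

u = qmc.LatinHypercube(d=2, seed=2).random(10_000)
cover = norm.ppf(u[:, 0], loc=30.0, scale=5.0)      # concrete cover (mm), assumed distribution
k_carb = norm.ppf(u[:, 1], loc=3.0, scale=0.8)      # carbonation rate K (mm/year^0.5), assumed

t_init = (np.clip(cover, 1.0, None) / np.clip(k_carb, 0.1, None)) ** 2   # initiation time (years)
print("P(depassivation within 100 years) ~", np.mean(t_init < 100.0))
```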
