61 |
Analise de sensibilidade para modelagem semi-mecanistica de acidentes severos / Sensitivity analysis for semi-mechanistic severe accident modelling - BRAGA, CLAUDIA C. 09 October 2014 (has links)
Dissertação (Mestrado) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
|
62 |
Guidance for using pilot studies to inform the design of intervention trials with continuous outcomes - Bell, Melanie L; Whitehead, Amy L; Julious, Steven A 01 1900 (has links)
Background: A pilot study can be an important step in the assessment of an intervention by providing information to design the future definitive trial. Pilot studies can be used to estimate the recruitment and retention rates and population variance and to provide preliminary evidence of efficacy potential. However, estimation is poor because pilot studies are small, so sensitivity analyses for the main trial's sample size calculations should be undertaken. Methods: We demonstrate how to carry out easy-to-perform sensitivity analysis for designing trials based on pilot data using an example. Furthermore, we introduce rules of thumb for the size of the pilot study so that the overall sample size, for both pilot and main trials, is minimized. Results: The example illustrates how sample size estimates for the main trial can alter dramatically by plausibly varying assumptions. Required sample size for 90% power varied from 392 to 692 depending on assumptions. Some scenarios were not feasible based on the pilot study recruitment and retention rates. Conclusion: Pilot studies can be used to help design the main trial, but caution should be exercised. We recommend the use of sensitivity analyses to assess the robustness of the design assumptions for a main trial.
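The sensitivity analysis the authors recommend can be sketched with the standard normal-approximation sample-size formula for a two-arm trial with a continuous outcome. The target effect size and the candidate standard deviations below are hypothetical illustrations, not values from the study:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sd, delta, alpha=0.05, power=0.90):
    """Normal-approximation sample size per arm for a two-sample
    comparison of means: n = 2 * ((z_{1-a/2} + z_power) * sd / delta)^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# Vary the pilot-based SD estimate to see how the main-trial size shifts.
for sd in (0.9, 1.0, 1.1, 1.2):
    print(f"sd={sd}: n per group = {n_per_group(sd, delta=0.5)}")
```

Even a 10-20% change in the assumed standard deviation moves the required sample size substantially, which is the kind of swing the abstract reports.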
|
63 |
Výnosové ocenění podniku založené na využití simulací / Business Valuation Using Simulation - Gavrylyuk, Zinayida January 2013 (has links)
This thesis focuses on the use of Monte Carlo simulation in business valuation. It examines the theoretical context of the valuation process and of simulation techniques, and then applies them to the valuation of Plzeňský Prazdroj, a. s. as of 31 March 2008. The aim was to explore the potential of Monte Carlo simulation in valuation and to interpret the information it yields. A valuation model was built and a sensitivity analysis performed, which identified the factors with a significant impact on the value. These factors were then investigated further and characterized probabilistically. After the model was extended to include these uncertainty factors, the business value was simulated as a function of their variability and the result interpreted. The conclusion is that Monte Carlo simulation may be most useful in the search for a subjective investor value, owing to the additional information it provides.
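A minimal sketch of the approach: combine a Gordon-growth perpetuity with uncertain growth and discount rates and sample the resulting value distribution. All figures below (base cash flow, parameter distributions) are invented for illustration and are not taken from the Plzeňský Prazdroj valuation:

```python
import random
import statistics

def simulate_value(n=20_000, seed=1):
    """Monte Carlo valuation: V = FCF * (1 + g) / (r - g) with uncertain g, r."""
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        fcf = 100.0                  # base free cash flow (hypothetical)
        g = rng.gauss(0.02, 0.01)    # long-term growth rate, uncertain
        r = rng.gauss(0.08, 0.01)    # discount rate, uncertain
        if r - g < 0.01:             # discard near-degenerate perpetuities
            continue
        values.append(fcf * (1 + g) / (r - g))
    return statistics.mean(values), statistics.stdev(values)

mean_v, sd_v = simulate_value()
print(f"mean value: {mean_v:.0f}, std dev: {sd_v:.0f}")
```

The spread of simulated values, rather than a single point estimate, is the additional information the thesis refers to.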
|
64 |
Estimating multidimensional density functions using the Malliavin-Thalmaier formula - Kohatsu Higa, Arturo; Yasuda, Kazuhiro 25 September 2017 (has links)
The Malliavin-Thalmaier formula was introduced for the simulation of high-dimensional probability density functions. We show, however, that when this integration-by-parts formula is applied directly in computer simulations, it is unstable. We therefore propose an approximation to the Malliavin-Thalmaier formula and derive the order of the bias and the variance of the approximation error. We also obtain an explicit Malliavin-Thalmaier formula for the calculation of Greeks in finance. The weights obtained are free from the curse of dimensionality.
|
65 |
Reliability Sensitivity Analysis of Dropped Objects Hitting on the Pipeline at Seabed - Yu, Hanqi 20 December 2019 (has links)
Nowadays, as the oil industry gradually moves towards deep-sea fields with water depths of more than 1000 meters, subsea developments are subject to several threats that can cause pipeline failure, of which accidentally dropped objects have become the leading external risk factor. In this thesis, a sample field layout introduced in the Det Norske Veritas (DNV) guide rules is selected as the study case, with 100 m water depth. Six different groups of dropped objects are used. The conditional hit probability for long/flat-shaped objects is calculated with the methods from both the DNV rules and the in-house tool Dropped Objects Simulator (DROBS), and the differences between the results are discussed. A sensitivity analysis is also performed on the mass, collision area, volume, added-mass coefficient, and drag coefficient of the objects.
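A much-simplified sketch of the kind of hit-probability calculation involved: DNV-style models treat the lateral excursion of a sinking object as normally distributed, with a spread that grows with water depth. The spread angle and corridor half-width below are hypothetical placeholders; the actual DNV procedure is considerably more detailed:

```python
from math import erf, sqrt, tan, radians

def hit_probability(depth_m, half_width_m, spread_angle_deg):
    """P(object lands within +/- half_width of the pipeline axis),
    assuming lateral deviation X ~ N(0, sigma^2) with
    sigma = depth * tan(spread angle)."""
    sigma = depth_m * tan(radians(spread_angle_deg))
    return erf(half_width_m / (sigma * sqrt(2)))

# Deeper water spreads the landing points and lowers the hit probability.
for depth in (50, 100, 200):
    print(depth, round(hit_probability(depth, half_width_m=5.0,
                                       spread_angle_deg=5.0), 3))
```

Perturbing inputs such as the spread angle in a model like this is the essence of the sensitivity analysis the thesis performs on mass and drag-related parameters.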
|
66 |
Sensibilité et incertitude de modélisation sur les bassins méditerranéens à forte composante karstique / Sensitivity and uncertainty associated with the numerical modelling of groundwater flow within karst systems - Mazzilli, Naomi 09 November 2011 (has links)
Karst aquifers are associated with key issues for both water resource management and flood risk mitigation. These systems are characterized by a highly heterogeneous structure and non-linear functioning. This thesis addresses the sensitivity and uncertainty associated with the numerical modelling of groundwater flow in karst systems. Sensitivity analysis is used systematically to answer the following questions: (i) can the model be calibrated? (ii) is the calibration robust? (iii) can equifinality be reduced through multi-objective or multi-variable calibration? This contribution highlights the potential of local sensitivity analysis. Despite its inherent limitations (local approximation and one-at-a-time perturbation of factors), local analysis provides valuable insight into the general behaviour of complex, non-linear flow models at little computational cost. The work also underlines the advantage of multi-variable calibration over multi-objective calibration for reducing equifinality.
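The local, one-at-a-time sensitivity analysis described above can be sketched with a toy linear-reservoir discharge model; the model structure, parameter values, and perturbation size are illustrative assumptions, not the karst model used in the thesis:

```python
def discharge(k, s0, steps=20, dt=1.0):
    """Linear reservoir: storage decays as ds/dt = -k*s; discharge q = k*s."""
    s, q = s0, []
    for _ in range(steps):
        q.append(k * s)
        s -= k * s * dt
    return q

def local_sensitivity(k, s0, rel_step=0.01):
    """One-at-a-time central difference of total discharge w.r.t. k,
    i.e. a local sensitivity around the nominal parameter value."""
    dk = rel_step * k
    q_hi = sum(discharge(k + dk, s0))
    q_lo = sum(discharge(k - dk, s0))
    return (q_hi - q_lo) / (2 * dk)

print(local_sensitivity(k=0.1, s0=100.0))
```

Because only two extra model runs per parameter are needed, this is the "little computational cost" advantage of local methods over global variance-based approaches.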
|
67 |
Impact of Uncertainties in Reaction Rates and Thermodynamic Properties on Ignition Delay Time - Hantouche, Mireille 04 1900 (has links)
Ignition delay time, τ_ign, is a key quantity of interest used to assess the predictability of a chemical kinetic model. This dissertation explores the sensitivity of τ_ign to uncertainties in (i) rate-rule kinetic rate parameters and (ii) enthalpies and entropies of the fuel and fuel radicals, using global and local sensitivity approaches.
We begin by considering variability in τ_ign to uncertainty in rate parameters. We consider a 30-dimensional stochastic germ in which each random variable is associated with one reaction class, and build a surrogate model for τ_ign using polynomial chaos expansions. The adaptive pseudo-spectral projection technique is used for this purpose. First-order and total-order sensitivity indices characterizing the dependence of τ_ign on uncertain inputs are estimated. Results indicate that τ_ign is mostly sensitive to variations in four dominant reaction classes.
Next, we develop a thermodynamic class approach to study the variability in τ_ign of n-butanol due to uncertainty in the thermodynamic properties of species of interest, and to define associated uncertainty ranges. A global sensitivity analysis is performed, again using surrogates constructed with an adaptive pseudo-spectral method. Results indicate that the variability of τ_ign is dominated by uncertainties in the classes associated with peroxy and hydroperoxide radicals. We also perform a combined sensitivity analysis of uncertainties in kinetic rates and thermodynamic properties, which reveals that uncertainties in thermodynamic properties can induce variability in ignition delay time as large as that associated with kinetic rate uncertainties.
In the last part, we develop a tangent linear approximation (TLA) to estimate the sensitivity of τ_ign with respect to individual rate parameters and thermodynamic properties in detailed chemical mechanisms. Attention is focused on a gas mixture reacting under adiabatic, constant-volume conditions. The proposed approach integrates the linearized system of equations governing the evolution of the partial derivatives of the state vector with respect to individual random variables, and a linearized approximation relates the ignition delay sensitivity to scaled partial derivatives of temperature. The computations indicate that the TLA yields robust local sensitivity predictions at a computational cost an order of magnitude smaller than that of finite-difference approaches.
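The tangent linear idea can be sketched on a scalar toy problem: integrate the model ODE together with the linearized equation for the parameter derivative, rather than re-running the model with perturbed parameters. The ODE dy/dt = a*y^2 below is a stand-in for the chemistry, not the actual mechanism:

```python
def tangent_linear(a, y0=1.0, dt=1e-4, t_end=0.5):
    """Integrate dy/dt = a*y^2 together with its tangent linear equation
    ds/dt = y^2 + 2*a*y*s, where s = dy/da (explicit Euler)."""
    y, s, t = y0, 0.0, 0.0
    while t < t_end:
        s += dt * (y * y + 2 * a * y * s)  # linearized system, uses current y
        y += dt * (a * y * y)              # model state update
        t += dt
    return y, s

# Analytic solution for comparison: y = y0/(1 - a*y0*t),
# dy/da = y0^2 * t / (1 - a*y0*t)^2; at a=1, t=0.5 both equal 2.
y, s = tangent_linear(a=1.0)
print(y, s)
```

One extra linear equation per parameter replaces a full perturbed re-run, which is where the order-of-magnitude cost saving over finite differences comes from.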
|
68 |
Sensitivity analysis of pluvial flood modelling tools for dense urban areas : A case study in Lundby-Lindholmen, Gothenburg - Eriksson, Johanna January 2020 (has links)
As a result of global climate change, extreme precipitation occurs more frequently, which increases the risk of flooding, especially in urban areas. Urbanisation is widely discussed in relation to urban flooding, since an increase in impervious surfaces limits infiltration and increases surface runoff. Flooding events in urban areas are increasing around the world and can cause major damage to infrastructure and buildings, which makes cities vulnerable. Urban flood models are an important tool for analysing the capacity of drainage systems, predicting the extent of flood events, and finding optimal locations for measures that prevent flood damage. In this project, a sensitivity analysis is presented for MIKE FLOOD, a coupled 1D-2D flood model developed by DHI in which sewer and surface systems are integrated. The aim of the project is to investigate how the results of a coupled flood model vary with changes in input parameters. The sensitivity analysis evaluates how different parameters affect the model output in terms of water depth and variations in the cost of flooded buildings, roads, railways, and tramways. The analysis is applied in a case study in Lundby-Lindholmen, Gothenburg, Sweden. The results show that modelling without infiltration influenced the model output the most, with the largest increase both in cost and in water depth over the investigated area; here the correlation between the initial water saturation and the location of the applied pre-rain was highlighted. The model outputs were less sensitive to changes in surface roughness (expressed as the Manning value) than to the removal of infiltration, but roughness did lead to measurable changes in surface water depth and distribution, while the flood damage cost did not change substantially. Additionally, the coupled flood model was evaluated with respect to its handling of rain events of different magnitudes.
The data indicate that the shorter the return period, the smaller the flood propagation, and that the flood damage cost likewise decreases with shorter return periods. The evaluated data support the use of this coupled model approach for shorter return periods in terms of flood propagation.
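The sensitivity to surface roughness mentioned above can be illustrated with Manning's equation, which relates flow velocity to the Manning coefficient n. The hydraulic radius and slope values below are hypothetical, and this is of course not the MIKE FLOOD formulation itself:

```python
def manning_velocity(n, radius_m, slope):
    """Manning's equation: v = (1/n) * R^(2/3) * S^(1/2)."""
    return (1.0 / n) * radius_m ** (2 / 3) * slope ** 0.5

# Velocity scales inversely with n: halving the roughness coefficient
# doubles the computed velocity, which is why flood extent and depth
# respond measurably to the chosen Manning value.
base = manning_velocity(n=0.05, radius_m=0.3, slope=0.01)
smooth = manning_velocity(n=0.025, radius_m=0.3, slope=0.01)
print(base, smooth)
```

This inverse-linear dependence explains why the roughness parameter shifts surface water depth and distribution even when the damage-cost total barely moves.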
|
69 |
Uncertainty Analysis for Land Surface Model Predictions: Application to the Simple Biosphere 3 and Noah Models at Tropical and Semiarid Locations - Roundy, Joshua K. 01 May 2009 (has links)
Uncertainty in model predictions is associated with data, parameters, and model structure. The estimation of these contributions to uncertainty is a critical issue in hydrology. Using a variety of single and multiple criterion methods for sensitivity analysis and inverse modeling, the behaviors of two state-of-the-art land surface models, the Simple Biosphere Model 3 and Noah model, are analyzed. The different algorithms used for sensitivity and inverse modeling are analyzed and compared along with the performance of the land surface models. Generalized sensitivity and variance methods are used for the sensitivity analysis, including the Multi-Objective Generalized Sensitivity Analysis, the Extended Fourier Amplitude Sensitivity Test, and the method of Sobol. The methods used for the parameter uncertainty estimation are based on Markov Chain Monte Carlo simulations with Metropolis type algorithms and include A Multi-algorithm Genetically Adaptive Multi-objective algorithm, Differential Evolution Adaptive Metropolis, the Shuffled Complex Evolution Metropolis, and the Multi-objective Shuffled Complex Evolution Metropolis algorithms. The analysis focuses on the behavior of land surface model predictions for sensible heat, latent heat, and carbon fluxes at the surface. This is done using data from hydrometeorological towers collected at several locations within the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain (Amazon tropical forest) and at locations in Arizona (semiarid grass and shrub-land). The influence that the specific location exerts upon the model simulation is also analyzed. In addition, the Santarém kilometer 67 site located in the Large-Scale Biosphere Atmosphere Experiment in Amazonia domain is further analyzed by using datasets with different levels of quality control for evaluating the resulting effects on the performance of the individual models. 
The method of Sobol gave the best sensitivity estimates among the variance-based algorithms and tended to be conservative in assigning parameter sensitivity, while the multi-objective generalized sensitivity algorithm identified a more liberal number of sensitive parameters. For the optimization, the Multi-algorithm Genetically Adaptive Multi-objective algorithm consistently produced the smallest overall error, although all other algorithms gave similar results. Furthermore, the Simple Biosphere Model 3 provided better estimates of the latent heat, and the Noah model better estimates of the sensible heat.
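The method of Sobol estimates variance-based first-order indices by Monte Carlo sampling. A minimal pick-freeze sketch for a toy model f(x1, x2) = x1 + 2*x2 with independent standard-normal inputs (analytic indices S1 = 0.2, S2 = 0.8), standing in for the land surface models analyzed here, might look like:

```python
import random

def sobol_first_order(f, d, n=50_000, seed=0):
    """First-order Sobol indices via the pick-freeze estimator:
    S_i = (E[f(A) * f(B with column i taken from A)] - mean^2) / variance."""
    rng = random.Random(seed)
    A = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
    B = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / (n - 1)
    indices = []
    for i in range(d):
        # B with its i-th column replaced by A's i-th column
        fAB = [f(B[j][:i] + [A[j][i]] + B[j][i + 1:]) for j in range(n)]
        cov = sum(fA[j] * fAB[j] for j in range(n)) / n - mean ** 2
        indices.append(cov / var)
    return indices

S = sobol_first_order(lambda x: x[0] + 2 * x[1], d=2)
print(S)
```

The cost grows as n*(d+1) model runs, which is why variance-based methods are far more expensive than the generalized sensitivity approaches also compared in the thesis.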
|
70 |
Uncertainty Quantification and Sensitivity Analysis of Multiphysics Environments for Application in Pressurized Water Reactor Design - Blakely, Cole David 01 August 2018 (has links)
The most common design among U.S. nuclear power plants is the pressurized water reactor (PWR). The three primary design disciplines for these plants are system analysis (which includes thermal hydraulics), neutronics, and fuel performance. The nuclear industry has developed a variety of codes over the course of forty years, each with an emphasis on a specific discipline. Perhaps the greatest difficulty in mathematically modeling a nuclear reactor is choosing which specific phenomena need to be modeled, and to what detail.
A multiphysics computational environment provides a means of advancing simulations of nuclear plants. Put simply, users are able to combine various physical models that have commonly been treated as separate in the past. The focus of this work is a specific multiphysics environment currently under development at Idaho National Laboratory (INL) known as the LOCA Toolkit for US light water reactors (LOTUS).
The ability of LOTUS to use uncertainty quantification (UQ) and sensitivity analysis (SA) tools within a multiphysics environment allows for a number of unique analyses which, to the best of our knowledge, have yet to be performed. These include the first known integration of the neutronics and thermal hydraulic code VERA-CS, currently under development by CASL, with the well-established fuel performance code FRAPCON by PNNL. The integration was used to model a fuel depletion case.
The outputs of interest for this integration were the minimum departure from nucleate boiling ratio (MDNBR), a thermal-hydraulic parameter indicating how close a heat flux is to causing a dangerous form of boiling in which an insulating layer of coolant vapour forms; the maximum fuel centerline temperature (MFCT) of the uranium rod; and the gap conductance at peak power (GCPP), the thermal conductance of the gas-filled gap between fuel and cladding at the axial location with the highest local power generation.
UQ and SA were performed on MDNBR, MFCT, and GCPP at a variety of times throughout the fuel depletion. Results showed the MDNBR to behave linearly and consistently throughout the depletion, with the most impactful input uncertainties being coolant outlet pressure, inlet temperature, and core power. MFCT also behaves linearly, but with a shift in SA measures: initially MFCT is sensitive to fuel thermal conductivity and gap dimensions, but later in the fuel cycle nearly all uncertainty stems from fuel thermal conductivity, with minor contributions from core power and initial fuel density. GCPP uncertainty exhibits nonlinear, time-dependent behaviour that requires higher-order SA measures to analyze properly. GCPP begins with a dependence on gap dimensions but in later states shifts to a dependence on the biases of a variety of specific calculations, such as fuel swelling and cladding creep and oxidation.
LOTUS was also used to perform the first higher order SA of an integration of VERA-CS and the BISON fuel performance code currently under development at INL. The same problem and outputs were studied as the VERA-CS and FRAPCON integration. Results for MDNBR and MFCT were relatively consistent. GCPP results contained notable differences, specifically a large dependence on fuel and clad surface roughness in later states. However, this difference is due to the surface roughness not being perturbed in the first integration. SA of later states also showed an increased sensitivity to fission gas release coefficients.
Lastly, a loss-of-coolant accident (LOCA) was investigated with an integration of FRAPCON with the INL neutronics code PHISICS and the system analysis code RELAP5-3D. The outputs of interest were the ratios of the peak cladding temperature (PCT, the highest temperature reached by the cladding during the LOCA) and the equivalent cladding reacted (ECR, the percentage of cladding oxidized) to their cladding hydrogen content-based limits. This work contains the first known UQ of these ratios within the aforementioned integration. Results showed the PCT ratio to be relatively well behaved. The ECR ratio behaves as a threshold variable, which is to say it abruptly shifts to radically higher values under specific conditions. This threshold behaviour establishes the importance of performing UQ so as to see the full spectrum of possible values for an output of interest.
The SA capabilities of LOTUS provide a path forward for developers to increase code fidelity for specific outputs. Performing UQ within a multiphysics environment may provide improved estimates of safety metrics in nuclear reactors. These improved estimates may allow plants to operate at higher power, thereby increasing profits. Lastly, LOTUS will be of particular use in the development of newly proposed nuclear fuel designs.
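The threshold behaviour described for the ECR ratio is exactly the situation where sampling-based UQ pays off over a single nominal run. A toy illustration (the response function and input distribution below are invented for illustration, not taken from the LOTUS analyses):

```python
import random

def threshold_response(x, limit=2.0):
    """Toy output that jumps when the input crosses a limit,
    mimicking the abrupt shift seen in the ECR ratio."""
    return 1.0 if x > limit else 0.1 * x

rng = random.Random(42)
samples = [threshold_response(rng.gauss(1.5, 0.4)) for _ in range(10_000)]

mean = sum(samples) / len(samples)
tail_fraction = sum(s == 1.0 for s in samples) / len(samples)
# A single run at the mean input (1.5) would report 0.15 and miss
# the fraction of cases that land in the radically higher regime.
print(mean, tail_fraction)
```

Propagating the full input distribution exposes the tail of extreme outputs that a deterministic best-estimate calculation cannot see, which is the point the conclusion makes.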
|