1 |
Sensitivity analysis research of Enterprise accounts receivable
Shih, Tsai-Hsien, 21 August 2001
none
|
2 |
Statistical methods for the analysis of DSMC simulations of hypersonic shocks
Strand, James Stephen, 25 June 2012
In this work, statistical techniques were employed to study the modeling of a hypersonic
shock with the Direct Simulation Monte Carlo (DSMC) method, and to gain insight into how the
model interacts with a set of physical parameters.
DSMC is a particle-based method that is useful for
simulating gas dynamics in rarefied and/or highly non-equilibrium flowfields. A DSMC code
was written and optimized for use in this research. The code was developed with shock tube
simulations in mind, and it includes a number of improvements which allow for the efficient
simulation of 1D, hypersonic shocks. Most importantly, a moving sampling region is used to
obtain an accurate steady shock profile from an unsteady, moving shock wave. The code is
MPI-parallel, and an adaptive load-balancing scheme ensures that the workload is distributed
properly among processors over the course of a simulation.
Global, Monte Carlo-based sensitivity analyses were performed in order to determine
which of the parameters examined in this work most strongly affect the simulation results for
two scenarios: a 0D relaxation from an initial high temperature state and a hypersonic shock.
The 0D relaxation scenario was included in order to examine whether, with appropriate initial
conditions, it can be viewed in some regards as a substitute for the 1D shock in a statistical
sensitivity analysis. In both analyses, sensitivities were calculated based on both the square of the
Pearson correlation coefficient and the mutual information. The quantity of interest (QoI)
chosen for these analyses was the NO density profile. This vector QoI was broken into a set of
scalar QoIs, each representing the density of NO at a specific point in time (for the relaxation) or
a specific streamwise location (for the shock), and sensitivities were calculated for each scalar
QoI based on both measures of sensitivity. The sensitivities were then integrated over the set of
scalar QoIs to determine an overall sensitivity for each parameter. A weighting function was
used in the integration in order to emphasize sensitivities in the region of greatest thermal and
chemical non-equilibrium. The six parameters which most strongly affect the NO density profile
were found to be the same for both scenarios, which provides justification for the claim that a 0D
relaxation can in some situations be used as a substitute model for a hypersonic shock. These six
parameters are the pre-exponential constants in the Arrhenius rate equations for the N2
dissociation reaction N2 + N ⇄ 3N, the O2 dissociation reaction O2 + O ⇄ 3O, the NO
dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions
N2 + O ⇄ NO + N and NO + O ⇄ O2 + N.
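The two sensitivity measures used in these analyses can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the thesis code: a toy quadratic function stands in for the NO density at one sampling location, and the histogram-based mutual-information estimator is one simple choice that may differ from the estimator used in the work.

```python
import numpy as np

def pearson2(x, y):
    """Squared Pearson correlation between a parameter sample and a scalar QoI."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def mutual_information(x, y, bins=16):
    """Simple histogram-based mutual-information estimate (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
n = 5000
params = rng.uniform(0.5, 1.5, size=(n, 3))          # e.g. scaled Arrhenius pre-exponentials
qoi = params[:, 0] ** 2 + 0.1 * rng.normal(size=n)   # toy scalar QoI (NO density at one point)

# Per-parameter sensitivities; for a vector QoI these would be computed for each
# scalar QoI and then integrated with a weighting function over the profile.
for j in range(3):
    print(j, pearson2(params[:, j], qoi), mutual_information(params[:, j], qoi))
```

Only the first parameter drives the toy QoI, so both measures should single it out.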
After identification of the most sensitive parameters, a synthetic data calibration was
performed to demonstrate that the statistical inverse problem could be solved for the 0D
relaxation scenario. The calibration was performed using the QUESO code, developed at the
PECOS center at UT Austin, which employs the Delayed Rejection Adaptive Metropolis
(DRAM) algorithm. The six parameters identified by the sensitivity analysis were calibrated
successfully with respect to a group of synthetic datasets.
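QUESO itself is a C++ library, and DRAM adds delayed rejection and proposal adaptation on top of Metropolis sampling. As a hedged sketch of the same kind of statistical inverse problem, the following uses a plain random-walk Metropolis sampler (no delayed rejection or adaptation) to calibrate a single rate constant of a toy exponential-relaxation model against synthetic data.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, t):
    """Toy forward model: exponential relaxation standing in for the 0D scenario."""
    return np.exp(-k * t)

t = np.linspace(0.0, 2.0, 20)
k_true, sigma = 1.3, 0.02
data = model(k_true, t) + sigma * rng.normal(size=t.size)   # synthetic dataset

def log_post(k):
    if k <= 0:                       # flat prior on k > 0
        return -np.inf
    r = data - model(k, t)
    return -0.5 * np.sum(r * r) / sigma ** 2

# Random-walk Metropolis: accept with probability min(1, posterior ratio)
k, lp = 0.5, log_post(0.5)
chain = []
for _ in range(20000):
    k_prop = k + 0.05 * rng.normal()
    lp_prop = log_post(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = k_prop, lp_prop
    chain.append(k)

post = np.array(chain[5000:])        # discard burn-in
print(post.mean(), post.std())
```

With informative synthetic data, the posterior concentrates near the true rate constant.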
|
3 |
Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem
Hetzler, Adam C., 03 October 2013
This research presents new physics-based methods to estimate predictive uncertainty stemming from uncertainty in the material opacities in radiative transfer computations of key quantities of interest (QOIs). New methods are needed because it is infeasible to apply standard uncertainty-propagation techniques to the O(10^5) uncertain opacities in a realistic simulation. The new approach applies the uncertainty analysis to the physical parameters in the underlying model used to calculate the opacities; this set of uncertain parameters is much smaller (O(10^2)) than the number of opacities. To further reduce the dimension of the parameter set to be rigorously explored, we apply additional screening at two levels of the calculational hierarchy: first, physics-based screening eliminates a priori the physical parameters that the underlying physics models show to be unimportant; then, sensitivity analysis on simplified versions of the complex problem of interest screens out parameters that are not important to the QOIs. We employ a Bayesian Multivariate Adaptive Regression Spline (BMARS) emulator for this sensitivity analysis. The high dimension of the input space and the large number of samples test the efficacy of these methods on larger problems. Ultimately, we want to perform uncertainty quantification on the large, complex problem with the reduced set of parameters. Results of this research demonstrate that the QOIs for the target problems agree for different parameter screening criteria and varying sample sizes; since the QOIs agree, we have gained confidence in our results across the multiple screening criteria and sample sizes.
|
4 |
Sensitivity Enhanced Model Reduction
Munster, Drayton William, 06 June 2013
In this study, we numerically explore methods of coupling sensitivity analysis to the reduced model in order to increase the accuracy of a proper orthogonal decomposition (POD) basis across a wider range of parameters. Various techniques based on polynomial interpolation and basis alteration are compared. These techniques are applied to a 1-dimensional reaction-diffusion equation and the 2-dimensional incompressible Navier-Stokes equations, solved using the finite element method (FEM) as the full-scale model. The expanded model, formed by expanding the POD basis with the orthonormalized basis sensitivity vectors, achieves the best balance of accuracy and computational efficiency among the methods compared. / Master of Science
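The basis-expansion idea can be sketched in a few lines of linear algebra. This is a minimal illustration, assuming random matrices as stand-ins for FEM snapshots and state-sensitivity vectors: compute a POD basis from the SVD of the snapshot matrix, append the sensitivity vectors, and re-orthonormalize with a QR factorization.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 30
snapshots = rng.normal(size=(n, m))   # columns: full-model states at sampled parameters
sens_vecs = rng.normal(size=(n, 2))   # state-sensitivity vectors dU/dmu (toy stand-ins)

# Standard POD basis: leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
pod = U[:, :r]

# Expanded basis: append the sensitivity vectors, then re-orthonormalize (QR)
expanded, _ = np.linalg.qr(np.hstack([pod, sens_vecs]))

# Both bases are orthonormal; the expanded one also spans the sensitivity directions
print(expanded.shape)  # (200, 7)
```

The reduced model is then built by Galerkin projection onto `expanded` instead of `pod`, trading a slightly larger basis for accuracy away from the training parameters.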
|
5 |
Local and Global Sensitivity Analysis of Thin Ply Laminated Carbon Composites
Neigh, Thomas Alexander, 14 May 2024
Recent work in the area of composite laminates has focused on characterizing the strength of laminates constructed from very thin plies. Interlaminar shear and normal stress components have been shown to concentrate at the edges of unidirectional laminates, the so-called edge effect, at the interface between plies of different fiber orientation. Research has shown that decreasing ply thickness can reduce these interlaminar stress edge effects and delay delamination in quasi-isotropic laminate specimens of equal total thickness. First-ply failure stress has also been shown to increase with decreasing ply thickness. For these reasons, there has been a great deal of interest in laminated composites constructed from very thin plies. This work studies the impact of manufacturing tolerances in ply orientation on the mechanical properties of the constructed laminate. Direct Monte Carlo simulation is used to model the variance introduced by the manufacturing process. First-order variance-based sensitivity analysis, using a local analysis-of-variance technique, is used to study the contribution of each individual ply to the variation in as-built mechanical properties. Variations in the mechanical properties of thick-ply and thin-ply laminate designs are compared to determine whether thin-ply designs show more or less variation than their thick-ply counterparts. This work identifies potential impacts of ply-angle variation on the variance of as-built stiffness in laminates of different ply thicknesses. These differences are attributable to the total ply count in a laminate: for a fixed-height laminate, the ply count is inversely proportional to ply thickness, yielding the apparent benefit of thin plies. Using thinner plies in a sublaminate stacking arrangement, repeating a sublaminate instead of repeating individual plies, reduces sensitivity to manufacturing errors and would suppress transverse failure modes.
/ Master of Science / Carbon fiber reinforced polymer composites, materials consisting of carbon fiber filaments bound within a polymer matrix, are commonly used in aerospace applications for their excellent strength-to-weight ratio. This class of materials is highly tailorable, with strength and stiffness controlled by the number of fiber layers, their thickness, and each layer's orientation. Variability in these characteristics arising from manufacturing processes can change the laminate's engineering properties. This work shows that the impact on the engineering properties can be characterized through Monte Carlo simulation of variability in ply orientation. A Monte Carlo simulation is a type of statistical simulation in which a sample population is generated using an assumed mean and standard deviation. Engineering and statistical analyses can then be performed on this sample population to determine the variability of the engineering properties across the population. In addition, the variability can be studied as a function of each individual fiber layer to understand its impact based on orientation and position within the larger composite. The analysis techniques presented in this work allow laminate variability to be studied prior to manufacturing, helping engineers better understand the material during the design of complex aerospace structures.
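The first-order variance-based screening described in this abstract can be sketched as follows. The stiffness function here is a deliberately toy surrogate (not classical laminate theory), and the binned conditional-variance estimator is one simple stand-in for the local analysis-of-variance technique the thesis uses; ply count, layup, and scatter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_plies, n_samples = 8, 50000
nominal = np.array([0, 45, -45, 90, 90, -45, 45, 0], dtype=float)  # quasi-isotropic layup
scatter = 2.0                                                      # ply-angle tolerance, degrees

angles = nominal + scatter * rng.normal(size=(n_samples, n_plies))

def stiffness_proxy(a):
    """Toy surrogate for an as-built laminate stiffness (NOT a laminate-theory model)."""
    th = np.radians(a)
    w = np.linspace(1.0, 0.3, a.shape[1])   # outer plies weighted more heavily
    return (w * np.cos(2 * th) ** 2).sum(axis=1)

y = stiffness_proxy(angles)
var_y = y.var()

def first_order_index(x, y, bins=30):
    """Variance of conditional means over quantile bins of x, as a fraction of Var(y)."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.array([(idx == b).sum() for b in range(bins)])
    return float(((counts * (means - y.mean()) ** 2).sum() / counts.sum()) / var_y)

S = [first_order_index(angles[:, i], y) for i in range(n_plies)]
print(np.round(S, 3))   # per-ply contribution to the variance of the as-built stiffness
```

Because the outer plies carry larger weights in this toy model, their first-order indices dominate.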
|
6 |
Investigations on Stabilized Sensitivity Analysis of Chaotic Systems
Taoudi, Lamiae, 03 May 2019
Many important engineering phenomena, such as turbulent flow, fluid-structure interactions, and climate diagnostics, are chaotic, and sensitivity analysis of such systems is a challenging problem. Computational methods have been proposed to estimate the sensitivities of these systems accurately and efficiently, which is of great scientific and engineering interest. In this thesis, a new approach is applied to compute the direct and adjoint sensitivities of time-averaged quantities defined from the chaotic responses of the Lorenz system and the double pendulum. A stabilized time-integrator with adaptive time-step control is used to maintain stability of the sensitivity calculations. A convergence study of a quantity of interest and its square is presented. Results show that the approach computes accurate sensitivity values at a computational cost that is multiple orders of magnitude lower than that of competing approaches based on least-squares shadowing.
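For context, the kind of quantity being differentiated can be illustrated with the Lorenz system: a finite-difference estimate of the sensitivity of the time-averaged state &lt;z&gt; to the parameter rho. This naive approach is exactly what becomes ill-conditioned for chaotic systems as the averaging window grows, which motivates the stabilized methods studied in the thesis; the sketch below is only a baseline, with parameter values and windows chosen for illustration.

```python
def lorenz_rhs(x, y, z, rho, sigma=10.0, beta=8.0 / 3.0):
    """Right-hand side of the classical Lorenz system."""
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

def time_avg_z(rho, dt=0.01, t_burn=20.0, t_avg=500.0):
    """RK4-integrate the Lorenz system and return the time average of z."""
    s = (1.0, 1.0, 1.0)
    f = lambda p: lorenz_rhs(p[0], p[1], p[2], rho)
    n_burn, n_avg = int(t_burn / dt), int(t_avg / dt)
    acc = 0.0
    for i in range(n_burn + n_avg):
        k1 = f(s)
        k2 = f(tuple(s[j] + 0.5 * dt * k1[j] for j in range(3)))
        k3 = f(tuple(s[j] + 0.5 * dt * k2[j] for j in range(3)))
        k4 = f(tuple(s[j] + dt * k3[j] for j in range(3)))
        s = tuple(s[j] + dt * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j]) / 6.0
                  for j in range(3))
        if i >= n_burn:           # discard the transient, then accumulate z
            acc += s[2]
    return acc / n_avg

z28 = time_avg_z(28.0)
z30 = time_avg_z(30.0)
print(z28, (z30 - z28) / 2.0)     # crude finite-difference estimate of d<z>/d(rho)
```

The finite-difference estimate is noisy because the averages converge slowly; the thesis's stabilized direct and adjoint methods target exactly this difficulty.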
|
7 |
Sensitivity Analysis of the Economic Lot-Sizing Problem
Van Hoesel, Stan; Wagelmans, Albert, 11 1900
In this paper we study sensitivity analysis of the uncapacitated single-level economic lot-sizing problem, which was introduced by Wagner and Whitin about thirty years ago. In particular, we are concerned with computing the maximal ranges over which the numerical problem parameters may vary individually such that a solution already obtained remains optimal. Only recently was it discovered that faster algorithms than the Wagner-Whitin algorithm exist for solving the economic lot-sizing problem. Moreover, these algorithms reveal that the problem has more structure than was previously recognized. In performing the sensitivity analysis, we exploit these newly obtained insights.
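The underlying dynamic program (the classical O(T^2) recursion, not the faster algorithms the paper builds on) can be sketched as follows, assuming time-varying setup and per-period holding costs:

```python
def wagner_whitin(demand, setup, hold):
    """O(T^2) dynamic program for the uncapacitated single-level lot-sizing problem.

    demand[t]: demand in period t; setup[t]: fixed ordering cost if an order is
    placed in t; hold[t]: per-unit cost of carrying inventory from t to t+1.
    Returns (optimal cost, sorted list of order periods).
    """
    T = len(demand)
    H = [0.0]
    for h in hold:
        H.append(H[-1] + h)            # H[k]: cumulative holding rate up to period k
    INF = float("inf")
    best = [0.0] + [INF] * T           # best[t]: optimal cost for periods 0..t-1
    pred = [0] * (T + 1)
    for j in range(T):                 # an order placed in period j ...
        cost, carry = best[j] + setup[j], 0.0
        for t in range(j + 1, T + 1):  # ... covers demand in periods j..t-1
            if t - 1 > j:
                carry += demand[t - 1] * (H[t - 1] - H[j])
            if cost + carry < best[t]:
                best[t], pred[t] = cost + carry, j
    orders, t = [], T                  # recover order periods from predecessors
    while t > 0:
        orders.append(pred[t])
        t = pred[t]
    return best[T], sorted(orders)
```

For example, with demand [10, 10], setup cost 100 per order, and unit holding cost 1, a single order in period 0 (cost 110) beats ordering twice (cost 200). Sensitivity analysis in the paper's sense asks how far, say, a single `setup[t]` can move before the optimal order periods change.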
|
8 |
Ultra-sensitive immunoradiometric assay of human thyrotropin (hTSH) obtained through the identification and minimization of nonspecific binding
PERONI, CIBELE N., 09 October 2014
Dissertation (Master's) / IPEN/D / Instituto de Pesquisas Energeticas e Nucleares - IPEN/CNEN-SP
|
10 |
Sensitivity Analysis and Parameter Estimation for the APEX Model on Runoff, Sediments and Phosphorus
Jiang, Yi, 09 December 2016
Sensitivity analysis is essential for hydrologic models: it provides insight into a model's behavior and helps assess the model's structure and conceptualization. Parameter estimation in distributed hydrologic models is difficult due to their high-dimensional parameter spaces; by identifying the influential and non-influential parameters in the modeling process, sensitivity analysis benefits the calibration process. This study identified, applied, and evaluated two sensitivity analysis methods for the APEX model. Two screening methods, the Morris method and the LH-OAT method, were applied at an experimental site in North Carolina for modeling runoff, sediment loss, TP, and DP losses. First, an evaluation of the required number of model runs was conducted for the Morris method; the results suggested that 2760 runs were sufficient for 45 input parameters to obtain reliable sensitivity results. Sensitivity results for the five management scenarios at the study site indicated that the Morris method and the LH-OAT method ranked the input parameters similarly, except for differences in the importance of PARM2, PARM8, PARM12, PARM15, PARM20, PARM49, PARM76, PARM81, PARM84, and PARM85. Across the five management scenarios, the most influential parameters were consistent in most cases, such as PARM23, PARM34, and PARM84, and the sets of sensitive parameters overlapped well between scenarios. In addition, little variation was observed in the importance of the sensitive parameters across scenarios, such as PARM26. Optimization using the most influential parameters from the sensitivity analysis substantially improved APEX model performance in all scenarios, as measured by the objective functions PI1, NSE, and GLUE.
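A bare-bones version of the Morris screening idea can be sketched as follows, with random one-at-a-time trajectories in the unit hypercube and a toy three-input model standing in for APEX (mu* is the mean absolute elementary effect; larger means more influential). This is an illustration of the method, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_mu_star(model, k, r=50, delta=0.25):
    """Crude Morris screening: r one-at-a-time trajectories in [0, 1]^k.

    Returns mu* (mean absolute elementary effect) for each of the k inputs.
    """
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        y = model(x)
        for j in rng.permutation(k):   # perturb each input exactly once
            x_new = x.copy()
            x_new[j] += delta
            y_new = model(x_new)
            ee[i, j] = (y_new - y) / delta
            x, y = x_new, y_new        # keep the step: a Morris trajectory
    return np.abs(ee).mean(axis=0)

def toy_response(x):
    """Stand-in for an APEX output: inputs 0 and 1 matter, input 2 is inert."""
    return 4.0 * x[0] + 2.0 * x[1] ** 2

mu_star = morris_mu_star(toy_response, k=3)
print(np.round(mu_star, 2))
```

Screening of this kind ranks the 45 APEX parameters so that only the influential ones need to enter the subsequent calibration.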
|