1 
Sensitivity analysis research of Enterprise accounts receivable. Shih, Tsai Hsien, 21 August 2001
(No abstract available.)

2 
Statistical methods for the analysis of DSMC simulations of hypersonic shocks. Strand, James Stephen, 25 June 2012
In this work, statistical techniques were employed to study the modeling of a hypersonic
shock with the Direct Simulation Monte Carlo (DSMC) method, and to gain insight into how the
model interacts with a set of physical parameters.
Direct Simulation Monte Carlo (DSMC) is a particle-based method which is useful for
simulating gas dynamics in rarefied and/or highly nonequilibrium flowfields. A DSMC code
was written and optimized for use in this research. The code was developed with shock tube
simulations in mind, and it includes a number of improvements which allow for the efficient
simulation of 1D, hypersonic shocks. Most importantly, a moving sampling region is used to
obtain an accurate steady shock profile from an unsteady, moving shock wave. The code is MPI
parallel and an adaptive load balancing scheme ensures that the workload is distributed properly
between processors over the course of a simulation.
Global, Monte Carlo based sensitivity analyses were performed in order to determine
which of the parameters examined in this work most strongly affect the simulation results for
two scenarios: a 0D relaxation from an initial high temperature state and a hypersonic shock.
The 0D relaxation scenario was included in order to examine whether, with appropriate initial
conditions, it can be viewed in some regards as a substitute for the 1D shock in a statistical
sensitivity analysis. In both analyses sensitivities were calculated based on both the square of the
Pearson correlation coefficient and the mutual information. The quantity of interest (QoI)
chosen for these analyses was the NO density profile. This vector QoI was broken into a set of
scalar QoIs, each representing the density of NO at a specific point in time (for the relaxation) or
a specific streamwise location (for the shock), and sensitivities were calculated for each scalar
QoI based on both measures of sensitivity. The sensitivities were then integrated over the set of
scalar QoIs to determine an overall sensitivity for each parameter. A weighting function was
used in the integration in order to emphasize sensitivities in the region of greatest thermal and
chemical nonequilibrium. The six parameters which most strongly affect the NO density profile
were found to be the same for both scenarios, which provides justification for the claim that a 0D
relaxation can in some situations be used as a substitute model for a hypersonic shock. These six
parameters are the preexponential constants in the Arrhenius rate equations for the N2
dissociation reaction N2 + N ⇄ 3N, the O2 dissociation reaction O2 + O ⇄ 3O, the NO
dissociation reactions NO + N ⇄ 2N + O and NO + O ⇄ N + 2O, and the exchange reactions
N2 + O ⇄ NO + N and NO + O ⇄ O2 + N.
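The two sensitivity measures described above can be sketched with a toy model in place of the DSMC code. Everything in this sketch is an illustrative assumption: the three-parameter exponential model, the parameter ranges, and the weighting function all stand in for the actual DSMC setup and NO density QoI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the DSMC model: 3 uncertain parameters and a QoI sampled
# at several "time" points, analogous to the scalar QoIs taken from the NO
# density profile. Parameter c is deliberately weak.
def toy_model(theta, t):
    a, b, c = theta
    return a * np.exp(-b * t) + 0.1 * c

t_grid = np.linspace(0.0, 2.0, 5)
thetas = rng.uniform([1.0, 0.5, 0.0], [2.0, 1.5, 1.0], size=(2000, 3))
qois = np.array([toy_model(th, t_grid) for th in thetas])   # (2000, 5)

def pearson_sq(x, y):
    """Squared Pearson correlation coefficient."""
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def mutual_info(x, y, bins=16):
    """Histogram-based mutual information estimate (in nats)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

# Per-scalar-QoI sensitivities, integrated over the QoI set with a weight
# that emphasizes early times (standing in for the nonequilibrium region).
weights = np.exp(-t_grid)
weights /= weights.sum()
overall_r2, overall_mi = {}, {}
for i, name in enumerate(["a", "b", "c"]):
    r2 = [pearson_sq(thetas[:, i], qois[:, j]) for j in range(t_grid.size)]
    mi = [mutual_info(thetas[:, i], qois[:, j]) for j in range(t_grid.size)]
    overall_r2[name] = float(np.dot(weights, r2))
    overall_mi[name] = float(np.dot(weights, mi))

print(overall_r2)
print(overall_mi)
```

Both measures rank the weak parameter `c` last, which is the kind of per-parameter overall sensitivity used to select the six dominant rate constants.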
After identification of the most sensitive parameters, a synthetic data calibration was
performed to demonstrate that the statistical inverse problem could be solved for the 0D
relaxation scenario. The calibration was performed using the QUESO code, developed at the
PECOS center at UT Austin, which employs the Delayed Rejection Adaptive Metropolis
(DRAM) algorithm. The six parameters identified by the sensitivity analysis were calibrated
successfully with respect to a group of synthetic datasets.
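The synthetic-data calibration can be sketched as a statistical inverse problem on a toy relaxation model. This sketch uses a plain random-walk Metropolis sampler rather than the DRAM algorithm in QUESO, and the one-parameter exponential model is an illustrative stand-in for the 0D DSMC relaxation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate synthetic data from a toy 0-D relaxation n(t) = exp(-k t),
# then recover the "rate constant" k by MCMC (plain Metropolis, not DRAM).
k_true = 1.3
t = np.linspace(0.0, 3.0, 30)
sigma = 0.02
data = np.exp(-k_true * t) + rng.normal(0.0, sigma, t.size)

def log_post(k):
    if not (0.0 < k < 10.0):            # uniform prior on (0, 10)
        return -np.inf
    resid = data - np.exp(-k * t)
    return -0.5 * np.sum(resid**2) / sigma**2

chain, k = [], 0.5                      # deliberately poor starting guess
lp = log_post(k)
for _ in range(20000):
    prop = k + rng.normal(0.0, 0.05)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        k, lp = prop, lp_prop
    chain.append(k)

posterior = np.array(chain[5000:])      # discard burn-in
print(posterior.mean(), posterior.std())
```

The posterior mean recovers the value used to generate the synthetic data, which is the success criterion for a synthetic-data calibration.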

3 
Quantification of Uncertainties Due to Opacities in a Laser-Driven Radiative-Shock Problem. Hetzler, Adam C., 03 October 2013
This research presents new physics-based methods to estimate predictive uncertainty stemming from uncertainty in the material opacities in radiative transfer computations of key quantities of interest (QOIs). New methods are needed because it is infeasible to apply standard uncertainty-propagation techniques to the O(10^5) uncertain opacities in a realistic simulation. The new approach to uncertainty quantification applies the uncertainty analysis to the physical parameters in the underlying model used to calculate the opacities. This set of uncertain parameters is much smaller (O(10^2)) than the number of opacities. To further reduce the dimension of the set of parameters to be rigorously explored, we use additional screening applied at two different levels of the calculational hierarchy: first, physics-based screening eliminates a priori the physical parameters that the underlying physics models show to be unimportant; then, sensitivity analysis in simplified versions of the complex problem of interest screens out parameters that are not important to the QOIs. We employ a Bayesian Multivariate Adaptive Regression Spline (BMARS) emulator for this sensitivity analysis. The high dimension of the input space and the large number of samples test the efficacy of these methods on larger problems. Ultimately, we want to perform uncertainty quantification on the large, complex problem with the reduced set of parameters. Results of this research demonstrate that the QOIs for the target problems agree for different parameter-screening criteria and varying sample sizes. Since the QOIs agree, we have gained confidence in our results using the multiple screening criteria and sample sizes.
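The sensitivity-based screening step can be sketched on a toy problem. This is not the BMARS emulator or the radiative-transfer physics; it is a minimal illustration of the idea of ranking many candidate parameters on a cheap simplified model and retaining only those above a sensitivity threshold. The model, coefficient values, and threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# 20 candidate parameters, of which only 3 actually drive the toy QoI.
# This linear model stands in for the simplified versions of the complex
# problem on which screening is performed.
n_params, n_samples = 20, 4000
true_coeffs = np.zeros(n_params)
true_coeffs[[0, 3, 7]] = [3.0, 2.0, 1.5]

X = rng.uniform(-1.0, 1.0, size=(n_samples, n_params))
y = X @ true_coeffs + rng.normal(0.0, 0.1, n_samples)

# Squared-correlation sensitivity of each parameter to the QoI
r2 = np.array([np.corrcoef(X[:, i], y)[0, 1] ** 2 for i in range(n_params)])

threshold = 0.05              # screening criterion (an assumed value)
kept = np.flatnonzero(r2 > threshold)
print("retained parameters:", kept.tolist())
```

Only the three truly influential parameters survive the screen, leaving a much smaller set for the expensive downstream uncertainty quantification.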

4 
Sensitivity Enhanced Model Reduction. Munster, Drayton William, 06 June 2013
In this study, we numerically explore methods of coupling sensitivity analysis to the reduced model in order to increase the accuracy of a proper orthogonal decomposition (POD) basis across a wider range of parameters. Various techniques based on polynomial interpolation and basis alteration are compared. These techniques are applied to a 1-dimensional reaction-diffusion equation and the 2-dimensional incompressible Navier-Stokes equations, solved using the finite element method (FEM) as the full-scale model. The expanded model, formed by expanding the POD basis with the orthonormalized basis sensitivity vectors, achieves the best mixture of accuracy and computational efficiency among the methods compared. Master of Science thesis.
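The basis-expansion idea can be sketched in a few lines of NumPy: compute a POD basis from snapshots, then append sensitivity vectors and orthonormalize. The random snapshot matrix and "sensitivity vectors" below are placeholders for FEM solution snapshots and their parameter derivatives.

```python
import numpy as np

rng = np.random.default_rng(3)

n_dof, n_snap, r = 100, 30, 5
snapshots = rng.standard_normal((n_dof, n_snap))   # placeholder FEM snapshots

# POD basis: leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
pod_basis = U[:, :r]                               # (n_dof, r)

# Placeholder sensitivity vectors d(state)/d(parameter)
sens = rng.standard_normal((n_dof, 2))

# Expand the basis and orthonormalize the appended sensitivity vectors
# against the POD modes (Gram-Schmidt, done here via a reduced QR)
augmented, _ = np.linalg.qr(np.hstack([pod_basis, sens]))

print(augmented.shape)                             # (100, 7)
```

The augmented basis keeps the original POD modes' span while adding directions in which the solution moves as parameters change, which is what improves accuracy away from the snapshot parameters.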

5 
Investigations on Stabilized Sensitivity Analysis of Chaotic Systems. Taoudi, Lamiae, 03 May 2019
Many important engineering phenomena, such as turbulent flow, fluid-structure interactions, and climate diagnostics, are chaotic, and sensitivity analysis of such systems is a challenging problem. Computational methods have been proposed to estimate the sensitivities of these systems accurately and efficiently, which is of great scientific and engineering interest. In this thesis, a new approach is applied to compute the direct and adjoint sensitivities of time-averaged quantities defined from the chaotic response of the Lorenz system and the double pendulum system. A stabilized time integrator with adaptive time-step control is used to maintain stability of the sensitivity calculations. A study of the convergence of a quantity of interest and its square is presented. Results show that the approach computes accurate sensitivity values at a computational cost that is multiple orders of magnitude lower than that of competing approaches based on the least-squares shadowing approach.
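The kind of time-averaged quantity of interest at issue can be sketched for the Lorenz system: the long-time average of the z coordinate. This sketch uses a fixed-step RK4 integrator and does not reproduce the thesis's stabilized adaptive integrator or the sensitivity computation itself; it only shows the QoI whose parameter sensitivity is sought.

```python
import numpy as np

# Classic Lorenz system at the standard chaotic parameter values
def lorenz(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(u, dt):
    k1 = lorenz(u)
    k2 = lorenz(u + 0.5 * dt * k1)
    k3 = lorenz(u + 0.5 * dt * k2)
    k4 = lorenz(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_spinup, n_avg = 0.01, 2000, 50000
u = np.array([1.0, 1.0, 1.0])
for _ in range(n_spinup):          # discard the transient
    u = rk4_step(u, dt)

z_sum = 0.0
for _ in range(n_avg):             # accumulate the time average of z
    u = rk4_step(u, dt)
    z_sum += u[2]

z_avg = z_sum / n_avg              # time-averaged QoI
print(z_avg)
```

Because the trajectory is chaotic, a naive finite-difference derivative of `z_avg` with respect to `rho` diverges with the averaging window, which is exactly why stabilized and shadowing-based methods are needed.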

6 
Sensitivity Analysis of the Economic Lot-Sizing Problem. Van Hoesel, Stan; Wagelmans, Albert, 11 1900
In this paper we study sensitivity analysis of the uncapacitated single-level economic lot-sizing problem, which was introduced by Wagner and Whitin about thirty years ago. In particular, we are concerned with the computation of the maximal ranges in which the numerical problem parameters may vary individually such that a solution already obtained remains optimal. Only recently was it discovered that faster algorithms than the Wagner-Whitin algorithm exist to solve the economic lot-sizing problem. Moreover, these algorithms reveal that the problem has more structure than was previously recognized. When performing the sensitivity analysis we exploit these newly obtained insights.
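The underlying problem can be sketched via the classic Wagner-Whitin dynamic program, the O(T^2) recursion that the faster algorithms improve on. The demand, setup-cost, and holding-cost values below are illustrative, not from the paper.

```python
# Wagner-Whitin dynamic program for the uncapacitated economic lot-sizing
# problem: f[t] = minimum cost to satisfy demand for periods 1..t, where the
# last production run starts in some period j <= t and covers periods j..t.
def wagner_whitin(demand, setup_cost, holding_cost):
    T = len(demand)
    f = [0.0] + [float("inf")] * T
    for t in range(1, T + 1):
        for j in range(1, t + 1):
            # holding cost for carrying demand of periods j..t produced in j
            hold = sum(holding_cost * (k - j) * demand[k - 1]
                       for k in range(j, t + 1))
            f[t] = min(f[t], f[j - 1] + setup_cost + hold)
    return f[T]

cost = wagner_whitin(demand=[10, 20, 30], setup_cost=50, holding_cost=1)
print(cost)  # optimal: produce for periods 1-2 in period 1, then period 3
```

Sensitivity analysis then asks, for example, over what range the setup cost may vary before the optimal production plan found by this recursion changes.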

7 
A Study of Predicted Energy Savings and Sensitivity Analysis. Yang, Ying, 16 December 2013
This research studies the sensitivity of the important inputs to the WinAM 4.3 software and the reliability of its savings-prediction function. WinAM was developed by the Continuous Commissioning (CC) group in the Energy Systems Laboratory at Texas A&M University. For the sensitivity analysis task, fourteen inputs are studied by adjusting one input at a time within ±30% of its baseline. The Single Duct Variable Air Volume (SDVAV) system, with and without the economizer, has been applied to the square zone model. Mean Bias Error (MBE) and Influence Coefficient (IC) have been selected as the statistical methods to analyze the outputs obtained from WinAM 4.3. For the savings-prediction reliability analysis task, eleven Continuous Commissioning projects were selected; after reviewing each project, seven of the eleven were chosen. The measured energy consumption data for the seven projects is compared with the simulated energy consumption data obtained from WinAM 4.3. Normalized Mean Bias Error (NMBE) and Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)) statistics have been used to analyze the results from the real measured data and the simulated data.
Highly sensitive parameters for each energy resource of the system with the economizer and the system without the economizer have been generated in the sensitivity analysis task. The main result of the savings prediction reliability analysis is that calibration improves the model’s quality. It also improves the predicted energy savings results compared with the results generated from the uncalibrated model.
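The two calibration statistics named above can be sketched in their standard percent forms: NMBE = 100 · Σ(m − s) / (n · mean(m)) and CV(RMSE) = 100 · RMSE / mean(m). The measured and simulated series below are made-up values, not data from the study.

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized Mean Bias Error, in percent of the measured mean."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sum(m - s) / (m.size * m.mean())

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE, in percent of the measured mean."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = np.sqrt(np.mean((m - s) ** 2))
    return 100.0 * rmse / m.mean()

measured = [100.0, 110.0, 95.0, 105.0]    # e.g. monthly energy use, measured
simulated = [98.0, 112.0, 97.0, 101.0]    # e.g. WinAM-style model output

print(nmbe(measured, simulated), cv_rmse(measured, simulated))
```

NMBE captures overall bias (signed errors can cancel), while CV(RMSE) captures scatter, which is why calibration guidelines typically require both to fall below thresholds.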

8 
Ultrasensitive immunoradiometric assay of human thyrotropin (hTSH) obtained through the identification and minimization of nonspecific binding. PERONI, CIBELE N., 09 October 2014
Master's dissertation (Dissertação de Mestrado), Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP).

10 
Sensitivity Analysis and Parameter Estimation for the APEX Model on Runoff, Sediments and Phosphorus. Jiang, Yi, 09 December 2016
Sensitivity analysis is essential for hydrologic models: it helps gain insight into a model's behavior and assess the model's structure and conceptualization. Parameter estimation in distributed hydrologic models is difficult due to their high-dimensional parameter spaces. Sensitivity analysis identifies the influential and non-influential parameters in the modeling process, and thus benefits the calibration process. This study identified, applied, and evaluated two sensitivity analysis methods for the APEX model. Two screening methods, the Morris method and the LH-OAT method, were implemented at the experimental site in North Carolina for modeling runoff, sediment loss, TP, and DP losses. At the beginning of the application, a run-number evaluation was conducted for the Morris method; the result suggested that 2,760 runs were sufficient for 45 input parameters to obtain a reliable sensitivity result. Sensitivity results for the five management scenarios at the study site indicated that the Morris method and the LH-OAT method provided similar rankings of the input parameters, except for differences in the importance of PARM2, PARM8, PARM12, PARM15, PARM20, PARM49, PARM76, PARM81, PARM84, and PARM85. The results for the five management scenarios indicated that the most influential parameters were consistent in most cases, such as PARM23, PARM34, and PARM84, and the "sensitive" parameters overlapped well between different scenarios. In addition, little variation was observed in the importance of the sensitive parameters across the different scenarios, such as PARM26. The optimization process with the most influential parameters from the sensitivity analysis showed great improvement in APEX modeling performance in all scenarios as measured by the objective functions PI1, NSE, and GLUE.
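The Morris screening method used above can be sketched as follows: average the absolute "elementary effects" of each factor over random one-at-a-time trajectories, giving the mu* sensitivity measure. The three-parameter linear toy model is an illustrative stand-in for the 45-parameter APEX runs.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: x2 is deliberately non-influential
def model(x):
    return 2.0 * x[0] + 0.5 * x[1] + 0.0 * x[2]

def morris_mu_star(f, k, n_traj=50, delta=0.25):
    """mu* = mean absolute elementary effect per factor, over trajectories."""
    effects = [[] for _ in range(k)]
    for _ in range(n_traj):
        # base point chosen so x + delta stays inside [0, 1]
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        for i in rng.permutation(k):      # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            effects[i].append((f(x_new) - f(x)) / delta)
            x = x_new
    return np.array([np.mean(np.abs(e)) for e in effects])

mu_star = morris_mu_star(model, k=3)
print(mu_star)  # exactly [2.0, 0.5, 0.0] for this linear model
```

For a linear model the elementary effects equal the coefficients, so the ranking is exact; for a model like APEX, mu* instead gives an inexpensive global ranking used to decide which parameters enter calibration.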
