11. Sensitivity Analysis and Parameter Estimation for the APEX Model on Runoff, Sediments and Phosphorus. Jiang, Yi. 09 December 2016.
Sensitivity analysis is essential for hydrologic models: it provides insight into model behavior and helps assess model structure and conceptualization. Parameter estimation in distributed hydrologic models is difficult because of their high-dimensional parameter spaces; by separating influential from non-influential parameters, sensitivity analysis streamlines calibration. This study identified, applied, and evaluated two screening sensitivity analysis methods for the APEX model, the Morris method and the LH-OAT method, implemented at an experimental site in North Carolina for modeling runoff, sediment loss, and total and dissolved phosphorus (TP and DP) losses. A run-number evaluation conducted first for the Morris method suggested that 2760 runs were sufficient to obtain reliable sensitivity results for 45 input parameters. Across the five management scenarios at the study site, the Morris and LH-OAT methods ranked the input parameters similarly, differing only in the importance assigned to PARM2, PARM8, PARM12, PARM15, PARM20, PARM49, PARM76, PARM81, PARM84, and PARM85. The most influential parameters, such as PARM23, PARM34, and PARM84, were consistent across scenarios in most cases; the sets of sensitive parameters overlapped well between scenarios, and little variation was observed in their importance (for example, PARM26). Optimizing the most influential parameters identified by the sensitivity analysis substantially improved APEX performance in all scenarios, as measured by the objective functions PI1 and NSE and by GLUE.
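For illustration, a minimal sketch of Morris elementary-effects screening, with a toy three-parameter function standing in for APEX runs; the function, parameter count, and step size are illustrative assumptions, not values from the study (though the trajectory arithmetic matches: 2760 runs for 45 parameters is 60 trajectories of 45+1 runs each).

```python
# Minimal sketch of Morris elementary-effects screening, assuming a toy
# stand-in model; actual APEX runs would replace `model`.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical stand-in for an APEX run returning, e.g., runoff.
    return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[1] * x[2]

k = 3            # number of parameters (45 in the thesis)
r = 60           # number of trajectories (2760 / (45 + 1) in the thesis)
delta = 0.2      # step size on the unit hypercube

effects = [[] for _ in range(k)]
for _ in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=k)  # trajectory start point
    y = model(x)
    for i in rng.permutation(k):               # one-at-a-time moves
        x_new = x.copy()
        x_new[i] += delta
        y_new = model(x_new)
        effects[i].append((y_new - y) / delta) # elementary effect
        x, y = x_new, y_new

for i in range(k):
    ee = np.array(effects[i])
    # mu* (mean |EE|) ranks influence; sigma flags nonlinearity/interaction.
    print(f"param {i}: mu* = {np.abs(ee).mean():.3f}, sigma = {ee.std():.3f}")
```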
12. FICST: A Tool for Sensitivity Analysis of SCWR Fuel Isotopic Composition to Nuclear Data. Mostofian, Sara. January 2014. Master of Applied Science (MASc) thesis.
With an ever-increasing population both in Canada and globally, an improved quality of life will depend on access to energy. The non-renewable, carbon-based sources that presently supply most of the world's energy are depleting and will therefore become more expensive. Nuclear technology is a relatively young technology that can help meet future energy needs, but it requires highly specialized skills and knowledge to continue to become safer, cleaner, more reliable, and more affordable. The nuclear industry is therefore investing considerable effort in developing the next generation of nuclear power plants. The Supercritical Water Reactor (SCWR) is one of the Generation IV nuclear-reactor systems.
SCWRs are, to a large extent, similar to light water reactors but have a simpler design; their main advantage is higher thermal efficiency. The Canadian SCWR has adopted an innovative fuel concept: a mixture of plutonium and thorium oxides, (Th,Pu)O2.
Nuclear data play a significant role in fuel development and reactor-physics analysis. With the development of nuclear data files over the years, nuclear cross sections and other parameters have become widely available, but their accuracy remains a concern, and the data are more reliable for uranium-based fuels than for thorium-based fuels. It is not known how uncertainties in the nuclear data will affect fuel depletion in an SCWR. A sensitivity analysis tool, FICST, was therefore developed to evaluate the impact of uncertainties in the neutron cross sections of the actinides present in SCWR fuel. This document details the theory and methodology used to develop the tool; the objective of this work is the code itself rather than any specific calculation performed with it.
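As a rough illustration of the kind of study such a tool enables (not FICST's actual implementation), the sketch below depletes a toy two-nuclide chain and estimates how the end-of-cycle composition responds to a perturbed capture cross section; the flux, cross sections, and irradiation time are invented for the example, not SCWR data.

```python
# Deplete a toy two-nuclide chain and measure the sensitivity of the
# end-of-cycle inventory to a +/-5% perturbation of a capture cross
# section. All numbers are illustrative.
import numpy as np
from scipy.linalg import expm

phi = 1e14 * 1e-24     # flux (n/cm^2/s) folded with barn-to-cm^2 conversion
t = 3.0e7              # irradiation time, s (~1 year)

def eoc_inventory(sigma_c):
    # dN/dt = A N, with capture feeding nuclide 2 from nuclide 1.
    sigma_a2 = 50.0                       # absorption of nuclide 2, barns
    A = np.array([[-sigma_c * phi, 0.0],
                  [ sigma_c * phi, -sigma_a2 * phi]])
    n0 = np.array([1.0, 0.0])             # initial composition
    return expm(A * t) @ n0

sigma = 100.0                              # nominal capture xs, barns
rel = 0.05                                 # assumed nuclear-data uncertainty
lo, hi = eoc_inventory(sigma * (1 - rel)), eoc_inventory(sigma * (1 + rel))
base = eoc_inventory(sigma)
sens = (hi - lo) / (2 * rel * base)        # relative sensitivity dlnN/dlnsigma
print("EOC inventory:", base, "relative sensitivity:", sens)
```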
13. Sensitivity calculations on a soot model using a partially stirred reactor. Wu, Nathan Gabriel. 05 November 2010.
Sensitivity analysis was performed on a soot model using a partially stirred reactor (PaSR) to determine the effects of mixing model parameters on soot scalar values. The sensitivities of the mixture fraction zeta and progress variable C to the mixing model constant C_phi were calculated; these values were used to compute the sensitivity of the water mass fraction Y_H2O to C_phi and of several soot quantities to the soot moments. Results were validated by evaluating the mean mixture fraction sensitivity and a long-simulation-time case. In the baseline case, soot moment sensitivities tended to peak on the rich side of the stoichiometric mixture fraction zeta_st. The timestep, number of notional particles, mixing timescale tau_mix, and residence time tau_res were then varied independently. The chosen timestep and notional particle count were shown to be sufficient to capture the relevant scalar profiles and did not greatly affect the sensitivity calculations. Altering tau_mix or tau_res did affect the sensitivity to mixing, and it was concluded that the soot model is influenced more heavily by the chemistry than by the mixing.
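A minimal sketch of the underlying setup, assuming an IEM (interaction by exchange with the mean) mixing model and bimodal 0/1 inflow, and estimating the sensitivity of the mixture-fraction variance to C_phi by a finite difference over two PaSR runs; the thesis's own sensitivity machinery and soot chemistry are not reproduced here.

```python
# Partially stirred reactor with IEM mixing: probe how a scalar statistic
# responds to the mixing constant C_phi (finite-difference stand-in).
import numpy as np

def pasr_variance(c_phi, n_p=2000, n_steps=4000, dt=1e-4,
                  tau_mix=1e-3, tau_res=5e-3, seed=1):
    rng = np.random.default_rng(seed)
    zeta = rng.integers(0, 2, n_p).astype(float)   # assumed bimodal inflow
    for _ in range(n_steps):
        # IEM mixing: relax each particle toward the ensemble mean.
        zeta += -0.5 * c_phi / tau_mix * (zeta - zeta.mean()) * dt
        # Through-flow: replace a fraction dt/tau_res with fresh inflow.
        n_out = rng.binomial(n_p, dt / tau_res)
        idx = rng.choice(n_p, n_out, replace=False)
        zeta[idx] = rng.integers(0, 2, n_out).astype(float)
    return zeta.var()

c_phi, eps = 2.0, 0.05
dvar = pasr_variance(c_phi * (1 + eps)) - pasr_variance(c_phi * (1 - eps))
print("d var(zeta) / d C_phi ~", dvar / (2 * eps * c_phi))
```

Using the same seed for both runs (common random numbers) keeps the statistical noise from swamping the finite difference.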
14. Sensitivity Analysis of Models with Input Codependencies. Dougherty, Sean. 05 December 2013. Master's thesis (Chemical Engineering), Queen's University.
Assuming a set of variates are independent and normally distributed is commonplace in statistics. In this thesis, we consider the consequences of these assumptions as they pertain to global sensitivity analysis. We begin by illustrating how the notion of sensitivity becomes distorted in the presence of codependent model inputs. This observation motivates us to develop a new methodology that accommodates input codependencies. Our methodology can be summarized in three points. First, a new form of sensitivity is presented which performs as well as the classical form but can be obtained at a fraction of the computational cost. Second, we define a measure which quantifies the extent of distortion caused by codependent inputs. Third, we model the codependencies themselves: although the multivariate normal distribution is a natural choice for modelling codependent inputs, our methodology uses a copula-based approach instead. Copulas are a contemporary strategy for constructing multivariate distributions in which the marginal and joint behaviours are treated separately, giving the practitioner more flexibility when modelling inputs.
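A minimal sketch of the copula idea, assuming a Gaussian copula with illustrative gamma and beta marginals; the thesis's actual copula family and marginals may differ.

```python
# Gaussian copula: correlated normals -> uniform margins -> arbitrary
# marginals, giving codependent inputs for a sensitivity study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7                                   # assumed dependence strength
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal([0.0, 0.0], cov, size=10_000)
u = stats.norm.cdf(z)                       # uniform margins, correlated
x1 = stats.gamma.ppf(u[:, 0], a=2.0)        # marginal 1: gamma(2)
x2 = stats.beta.ppf(u[:, 1], a=2.0, b=5.0)  # marginal 2: beta(2, 5)

y = x1 + x1 * x2                            # toy model with codependent inputs
print("sample corr(x1, x2):", np.corrcoef(x1, x2)[0, 1].round(3))
print("var(y):", y.var().round(3))
```

Swapping either `ppf` call changes a marginal without touching the dependence structure, which is exactly the flexibility the copula approach buys.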
15. Handling missing data in RCTs: a review of the top medical journals. Bell, Melanie; Fiero, Mallorie; Horton, Nicholas J; Hsu, Chiu-Hsieh. January 2014.
UA Open Access Publishing Fund
Background
Missing outcome data is a threat to the validity of treatment effect estimates in randomized controlled trials. We aimed to evaluate the extent, handling, and sensitivity analysis of missing data, and the use of intention-to-treat (ITT) analysis, in randomized controlled trials (RCTs) in top-tier medical journals, and to compare our findings with previous reviews of missing data and ITT in RCTs.
Methods
Review of RCTs published between July and December 2013 in the BMJ, JAMA, Lancet, and New England Journal of Medicine, excluding cluster randomized trials and trials whose primary outcome was survival.
Results
Of the 77 eligible articles identified, 73 (95%) reported some missing outcome data. The median percentage of participants with a missing outcome was 9% (range 0–70%). The most common method of handling missing data in the primary analysis was complete case analysis (33 trials, 45%), while 20 (27%) performed simple imputation, 15 (19%) used model-based methods, and 6 (8%) used multiple imputation. Of the trials with missing data, 27 (35%) reported a sensitivity analysis; however, most did not vary the missing data assumptions from those of the primary analysis. ITT or modified ITT analyses were reported in 52 (85%) trials, 21 (40%) of which included all randomized participants. Comparison with a review of trials published in 2001 showed that missing data rates and approaches are similar, but use of the term ITT has increased, as has the reporting of sensitivity analyses.
Conclusions
Missing outcome data continues to be a common problem in RCTs. Definitions of the ITT approach remain inconsistent across trials. A large gap is apparent between statistical methods research on missing data and the use of these methods in applied settings, including RCTs in top medical journals.
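As an illustration of the kind of sensitivity analysis the review looks for, the sketch below applies a simple delta adjustment to imputed outcomes in a simulated trial, probing departures from the missing-at-random assumption; the data, effect size, and shift values are all invented.

```python
# Delta-adjustment (tipping-point style) sensitivity analysis on a
# simulated two-arm trial with ~15% missing outcomes.
import numpy as np

rng = np.random.default_rng(42)
n = 200
treat = rng.integers(0, 2, n)
y = 1.0 * treat + rng.normal(0.0, 2.0, n)       # true effect = 1.0
missing = rng.random(n) < 0.15                  # missingness indicator

for delta in [0.0, -0.5, -1.0, -1.5]:           # grid of assumed shifts
    y_imp = y.copy()
    for arm in (0, 1):
        m = (treat == arm) & missing
        # impute with the observed arm mean, shifted by delta in the
        # treated arm to represent worse-than-observed missing outcomes
        y_imp[m] = y[(treat == arm) & ~missing].mean() + delta * arm
    effect = y_imp[treat == 1].mean() - y_imp[treat == 0].mean()
    print(f"delta = {delta:5.2f}  ->  estimated effect = {effect:.3f}")
```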
16. The Valuation of Participating Life Insurance Contracts under Lévy Processes. Chen, Chih-Hsuan. 26 June 2010.
(No abstract available.)
17. Apply System Dynamics Software for the Study of the Impacts of Oysters to the Nutrient Dynamics in a Tropical Lagoon. Lee, Liang-shan. 12 February 2007.
Tapeng Bay is the second largest lagoon in Taiwan. Its biological and ecological environment is an autotrophic system influenced by seasonal variability, terrestrial pollutant inputs, and the rate of exchange with seawater. Before July 2002 there were intense oyster culture and fish farming activities in the bay, and the oyster was the most important invertebrate in the lagoon. Oysters filter microplankton and detritus, excrete nutrients, and consume dissolved oxygen, so they played a crucial role in Tapeng Bay. This study combines the biological responses of the oysters with the complicated interactions among microplankton, nutrients, detritus, and dissolved oxygen, applying the system dynamics simulation software STELLA to establish the dynamical relationships between these variables.
Model results clearly reveal that oysters are the main filtering species. The factors affecting oyster biomass include microplankton concentration, temperature, and individual mass, and the oysters are closely tied to the other biological variables. The study also shows that removing the oysters may cause significant increases in plankton and detritus under eutrophic conditions. Although the simulated water quality variables are higher than those obtained from sampling experiments in the literature, the trends correspond well with related studies. Although oysters excrete large amounts of nutrients and these nutrients are mostly taken up by microplankton, nutrient limitation never occurred, which is consistent with this result. In the sensitivity analysis, the oyster filtration rate and the autotroph nitrogen-to-carbon ratio are the important parameters influencing oyster biomass and the concentrations of microplankton and ammonium; the oyster excretion rate and the proportion of oyster feces and pseudofeces also significantly influence the ammonium concentration.
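A minimal sketch of the stock-and-flow structure such a STELLA model encodes, reduced to two stocks (microplankton and ammonium) with oyster filtration and excretion as flows, plus a one-at-a-time perturbation of the filtration rate; all rate constants are illustrative, not the calibrated Tapeng Bay values.

```python
# Two-stock system-dynamics sketch: microplankton P and ammonium N,
# with oyster grazing (filtration) and excretion as flows.
import numpy as np

def run(filtration_rate):
    P, N = 5.0, 2.0                 # initial stocks (arbitrary units)
    dt, uptake, excrete, load = 0.01, 0.8, 0.3, 0.5
    for _ in range(5000):           # Euler integration to ~steady state
        grazing = filtration_rate * P          # oysters filter plankton
        growth = uptake * N * P / (1.0 + N)    # nutrient-limited growth
        P += dt * (growth - grazing)
        N += dt * (load + excrete * grazing - uptake * N * P / (1.0 + N))
    return P, N

# One-at-a-time sensitivity: perturb the filtration rate by +/-10%.
base, lo, hi = run(0.5), run(0.45), run(0.55)
print("base (P, N):", base)
print("relative change in P:", (hi[0] - lo[0]) / base[0])
```

Even in this toy version, raising the filtration rate lowers the plankton stock, mirroring the thesis's finding that removing oysters lets plankton and detritus increase.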
The oyster culture racks in the bay have already been torn down, but the bay is still eutrophic, a clear indication of the importance of the oysters in the lagoon. Properly culturing some oysters in areas with high concentrations of microplankton or organic input would exploit their abundant filtration, so that plankton and suspended solids (mostly detritus and organic matter) could probably be controlled and the water quality in the bay thereby improved.
Although STELLA has its limitations for broader applications, the model developed in this study can be combined with features of the social or economic domains, and a decision support system can be developed for the management of ecological environment policies.
18. A Study on Fault Current Limiter Installation in Power System Network. Yang, Chien-Chih. 10 September 2007.
Because of the difficulty of reinforcing the power network and the interconnection of more distributed generators, the fault current level has become a serious problem in transmission and distribution system operation. Fault current limiters (FCLs) provide an effective way to suppress fault currents in a power system. In this thesis, the sensitivity of the impedance matrix to changes in branch parameters is derived and used to choose candidate locations for FCL installation in a complex power system. The proposed method also considers the effect of FCL installation on power system transient stability: the Extended Equal Area Criterion (EEAC) is used to reduce the multi-machine transient stability problem to a simple equivalent model and thereby simplify the transient stability evaluation. A fuzzy logic approach considering the FCL impedance, transient stability, and voltage sag effects is used to choose good FCL installation locations in loop transmission systems.
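A minimal sketch of the impedance-matrix idea on an invented three-bus network: inserting an FCL in one branch is a rank-one change to Ybus, so the updated Zbus, and hence the three-phase fault current I_f ~ V/Z_kk, follows from the Sherman-Morrison formula rather than a full re-inversion. The network and FCL size are illustrative, and this is only one ingredient of the thesis's full screening method.

```python
# Rank-one Zbus update for an FCL inserted in branch 1-2 of a toy
# 3-bus network (per-unit reactances, invented values).
import numpy as np

branches = [(0, 1, 0.10j), (1, 2, 0.15j), (0, 2, 0.20j)]
n = 3

def ybus(zs):
    Y = np.zeros((n, n), dtype=complex)
    for (i, j, z) in zs:
        y = 1.0 / z
        Y[i, i] += y; Y[j, j] += y
        Y[i, j] -= y; Y[j, i] -= y
    return Y

Y = ybus(branches) + np.eye(n) * (1.0 / 1.0j)   # generator source impedances
Z = np.linalg.inv(Y)
print("fault current at bus 2 (pu):", abs(1.0 / Z[2, 2]))

# FCL adds j0.05 pu in series with branch 1-2: a rank-one Ybus change.
i, j, z_old = 1, 2, 0.15j
dy = 1.0 / (z_old + 0.05j) - 1.0 / z_old
e = np.zeros(n, dtype=complex); e[i], e[j] = 1.0, -1.0
Ze = Z @ e
Z_new = Z - dy * np.outer(Ze, Ze) / (1.0 + dy * (e @ Ze))  # Sherman-Morrison
print("after FCL, fault current at bus 2 (pu):", abs(1.0 / Z_new[2, 2]))
```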
19. Adjoint-Based Uncertainty Quantification and Sensitivity Analysis for Reactor Depletion Calculations. Stripling, Hayes Franklin. 16 December 2013.
Depletion calculations for nuclear reactors model the dynamic coupling between material composition and neutron flux, and help predict reactor performance and safety characteristics. For the simulations to be trusted as reliable predictive tools and as inputs to licensing and operational decisions, they must include an accurate and holistic quantification of the errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and the myriad sources of uncertainty and error.
We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations on advanced massively parallel computing architectures. This approach calls for the solution of two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to, but different from, the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We also demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, but are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
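A minimal sketch of the adjoint principle on a toy 2x2 depletion system dn/dt = A(sigma) n, a stand-in for the PDT transport/depletion equations: one forward sweep plus one backward adjoint sweep yields dJ/dsigma, verified here against a finite difference. The matrix, objective, and step sizes are invented.

```python
# Discrete adjoint for implicit-Euler depletion: (I - h A) n_{k+1} = n_k,
# objective J = c^T n_K. One backward sweep gives dJ/dsigma.
import numpy as np

h, K = 0.05, 200                     # step size and step count
c = np.array([0.0, 1.0])             # J = final inventory of nuclide 2

def A(sig):                          # toy burnup matrix
    return np.array([[-sig, 0.0], [sig, -0.5]])

def dA(sig):                         # dA/dsigma
    return np.array([[-1.0, 0.0], [1.0, 0.0]])

sig = 1.0
M = np.eye(2) - h * A(sig)

# Forward sweep, storing the state trajectory.
n = [np.array([1.0, 0.0])]
for _ in range(K):
    n.append(np.linalg.solve(M, n[-1]))

# Backward (adjoint) sweep: lam_K = M^{-T} c, lam_m = M^{-T} lam_{m+1};
# gradient accumulates h * lam_m^T (dA/dsigma) n_m.
lam = np.linalg.solve(M.T, c)
grad = h * lam @ dA(sig) @ n[K]
for m in range(K - 1, 0, -1):
    lam = np.linalg.solve(M.T, lam)
    grad += h * lam @ dA(sig) @ n[m]

def J(s):                            # forward-only objective for checking
    Ms, x = np.eye(2) - h * A(s), np.array([1.0, 0.0])
    for _ in range(K):
        x = np.linalg.solve(Ms, x)
    return c @ x

eps = 1e-6
print("adjoint dJ/dsigma:  ", grad)
print("finite difference:  ", (J(sig + eps) - J(sig - eps)) / (2 * eps))
```

The key property the abstract describes is visible here: the backward sweep is independent of which entries of A are uncertain, so adding more uncertain cross sections only adds cheap accumulation terms, not new solves.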
20. Power Grid Correction Using Sensitivity Analysis. Aydonat, Meric. 14 December 2010.
Power grid voltage integrity verification requires checking that all the voltage drops on the grid are less than a certain threshold that guarantees proper circuit operation. This thesis addresses the problem of correcting the grid when some voltage drops exceed this threshold by making minor modifications to the existing design. The method uses current constraints that capture the uncertainty about the underlying circuit behavior to find the maximum voltage drop on the grid, and then to estimate the voltage drop as a function of the metal widths on the grid. It formulates a nonlinear optimization problem and finds the required change in widths that reduces the maximum voltage drop on the grid below the threshold while keeping the total area cost at a minimum.
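A minimal sketch of the correction step on an invented one-dimensional grid stub, with scipy's SLSQP standing in for the thesis's nonlinear optimizer: widen segments at minimum total metal area until the worst IR drop falls below the threshold. The geometry, loads, and threshold are illustrative.

```python
# Toy power-grid width correction: minimize metal area subject to a
# worst-case IR-drop constraint on a 4-segment chain from the pad.
import numpy as np
from scipy.optimize import minimize

rho, L = 0.02, 50.0                 # sheet resistance (ohm/sq), segment length
i_load = np.array([2e-3, 1e-3, 3e-3, 2e-3])   # tap currents (A), pad at left
v_max = 0.015                       # allowed worst-case drop (V)

def drops(w):
    r = rho * L / w                        # segment resistances
    seg_i = np.cumsum(i_load[::-1])[::-1]  # current carried by each segment
    return np.cumsum(r * seg_i)            # node voltage drops along the chain

def area(w):
    return np.sum(w * L)                   # total metal area to minimize

w0 = np.full(4, 1.0)                       # initial widths (violate v_max)
res = minimize(area, w0, method="SLSQP",
               bounds=[(0.5, 10.0)] * 4,
               constraints=[{"type": "ineq",
                             "fun": lambda w: v_max - drops(w).max()}])
print("widths:", res.x.round(3), " worst drop:", drops(res.x).max().round(4))
```

The optimizer widens the segments nearest the pad most, since they carry the accumulated current, which is the same qualitative behavior a sensitivity-guided correction would exploit on a full grid.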