21 
Phosphorus Management: An Analysis of the Virginia Phosphorus Index. Jesiek, Julie B. 12 March 2003 (has links)
Excess phosphorus (P) that is transported into water bodies can cause water quality problems. A high potential for P delivery occurs when there is a high transport potential from erosion, runoff, and/or leaching coupled with high soil test P and/or a high rate of fertilizer P application. A management tool is needed to identify those fields that have a high transport and source potential to deliver P to surface water. The Virginia P-Index is a mass-based tool that estimates the annual risk of delivery of P from a given field to surface water. Guidelines on P application rates are then given based on the level of risk. This is a new tool, and additional research and testing are needed to determine the dependability and validity of the index.
The overall goal of the research was to contribute to the continued development of the Virginia P-Index as an effective P management tool. A sensitivity analysis was completed to identify the parameters to which the P-Index was most sensitive under a range of conditions. In low erosion and runoff conditions, the P-Index was most sensitive to P management factors, including application rate. As erosion and runoff potential increased, the P-Index became most sensitive to the erosion risk factors, including soil loss. Under conditions with subsurface leaching, the P-Index was most sensitive to the subsurface leaching factors and Mehlich I soil test P. A stochastic analysis was also conducted to determine the effects of parameter variability. Variability of the P-Index output was greater as the risk of P delivery increased, and this could affect management recommendations.
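The stochastic analysis is only summarized here; a minimal Monte Carlo sketch of how output variability of a hypothetical additive risk index grows with the magnitude of the inputs might look as follows. The index function, weights, and parameter names are illustrative only, not the actual Virginia P-Index coefficients:

```python
import random

def risk_index(soil_loss, runoff, soil_test_p, p_rate):
    # Hypothetical additive index; the weights are illustrative,
    # not the Virginia P-Index coefficients.
    return 0.4 * soil_loss + 0.2 * runoff + 0.2 * soil_test_p + 0.2 * p_rate

def monte_carlo_spread(base, rel_sd, n=10000, seed=42):
    # Perturb each parameter with uniform relative noise and collect the
    # resulting index values to gauge output variability.
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        perturbed = [v * (1 + rng.uniform(-rel_sd, rel_sd)) for v in base]
        values.append(risk_index(*perturbed))
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return mean, sd

low_risk = [1.0, 1.0, 1.0, 1.0]       # low-magnitude inputs
high_risk = [10.0, 10.0, 10.0, 10.0]  # high-magnitude inputs

_, sd_low = monte_carlo_spread(low_risk, 0.2)
_, sd_high = monte_carlo_spread(high_risk, 0.2)
# The absolute spread of the index grows with the risk level, mirroring
# the finding that variability increases as P-delivery risk increases.
```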
A survey was completed to determine expert opinion on the appropriateness of the parameter estimation methods used in the Virginia P-Index. Thirty-eight surveys were returned, representing a diverse range of participants within and outside of Virginia. Comments from the respondents were used to evaluate the appropriateness of the parameter methods. All factors were determined to be appropriate given the state of the science. The respondents judged the estimation methods for the following factors to be less appropriate than those for the other subfactors: soil texture/drainage class, subsurface dissolved reactive orthophosphate (DRP), runoff delivery, and sediment delivery. The Virginia P-Index was determined to be a well-thought-out management tool, and its implementation should identify fields with the greatest risk of P delivery to surface water. Recommendations for improvement were identified, including a need for additional analysis and studies. / Master of Science

22 
SENSITIVITY ANALYSIS WITH FINITE-ELEMENT METHOD FOR MICROWAVE DESIGN AND OPTIMIZATION. Li, Dongying 06 1900 (has links)
The thesis proposes a novel method for the computation of the design sensitivity of microwave network parameters. The approach is based on the finite-element method. When combined with the iterative update method (the Broyden method) during the gradient-based optimization process, the approach requires practically no overhead for the computation of the response Jacobian, thus accelerating the optimization.
The efficiency and accuracy of gradient-based optimization and tolerance analysis greatly depend on the computation of the design sensitivity. However, common commercial full-wave electromagnetic solvers do not provide sensitivity information. With them, the design sensitivities are computed from the responses themselves using finite-difference or higher-order approximations at the response level. Consequently, for each design parameter of interest, at least one additional full-wave analysis is performed.
The proposed self-adjoint sensitivity analysis (SASA) is so far the most efficient way to extract the sensitivity information for the network parameters with the finite-element method. As an improvement of the adjoint-variable method (AVM), it eliminates the additional system analyses. With one single full-wave analysis, the sensitivities with respect to all design parameters are computed. This significantly improves the efficiency of the sensitivity computations.
When employed in gradient-based optimization, the computational overhead of the SASA can be further reduced. Instead of the finite-difference approximation, the system matrix derivatives are updated iteratively using the Broyden update. This reduces the computational overhead of the sensitivity analysis to practically zero. Further, several switching criteria between the Broyden update and the finite-difference approximation of the system matrix derivatives are proposed to guarantee the robust convergence of the optimization algorithm. This leads to our Broyden/finite-difference SASA (B/FD-SASA).
The efficiency in terms of CPU time as well as the accuracy of the SASA are verified by several numerical examples, where the reference results are provided through traditional finite-difference approximations. Also, the efficiency of the B/FD-SASA is validated by a filter design example and a microwave imaging example, with implementations exploiting different gradient-based optimization algorithms. / Thesis / Master of Applied Science (MASc)
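The Broyden rank-one update that replaces finite-difference matrix derivatives can be sketched generically. The toy response function and variable names below are illustrative, not the thesis's finite-element system; the sketch only shows the update formula and its secant property:

```python
def broyden_update(B, dx, df):
    # Broyden's "good" rank-one update:
    #   B_new = B + ((df - B dx) dx^T) / (dx^T dx)
    # chosen so that the secant condition B_new dx = df holds exactly.
    n = len(dx)
    Bdx = [sum(B[i][j] * dx[j] for j in range(n)) for i in range(len(B))]
    denom = sum(x * x for x in dx)
    return [[B[i][j] + (df[i] - Bdx[i]) * dx[j] / denom for j in range(n)]
            for i in range(len(B))]

# Toy nonlinear response; its Jacobian is what the update approximates.
def f(x):
    return [x[0] ** 2 + x[1], x[0] * x[1]]

x_old, x_new = [1.0, 2.0], [1.1, 2.1]
dx = [x_new[i] - x_old[i] for i in range(2)]
df = [f(x_new)[i] - f(x_old)[i] for i in range(2)]
B = [[1.0, 0.0], [0.0, 1.0]]   # initial Jacobian guess (identity)
B = broyden_update(B, dx, df)
# B now satisfies B @ dx == df (up to round-off), with no extra
# evaluations of f beyond the two iterates already available.
```

Because the update reuses quantities already produced by the optimizer's iterates, no additional system analyses are needed, which is the source of the "practically zero overhead" claim above.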

23 
Adjoint-based solution and uncertainty quantification techniques for variational inverse problems. Hebbur Venkata Subba Rao, Vishwas 25 September 2015 (has links)
Variational inverse problems integrate computational simulations of physical phenomena with physical measurements in an informational feedback control system. Control parameters of the computational model are optimized such that the simulation results fit the physical measurements. The solution procedure is computationally expensive since it involves running the simulation computer model (the forward model) and the associated adjoint model multiple times. In practice, our knowledge of the underlying physics is incomplete and hence the associated computer model is laden with model errors. Similarly, it is not possible to measure the physical quantities exactly and hence the measurements are associated with data errors. The errors in data and model adversely affect the inference solutions. This work develops methods to address the challenges posed by the computational costs and by the impact of data and model errors in solving variational inverse problems.
Variational inverse problems of interest here are formulated as optimization problems constrained by partial differential equations (PDEs). The solution process requires multiple evaluations of the constraints, and therefore multiple solutions of the associated PDE. To alleviate the computational costs we develop a parallel-in-time discretization algorithm based on a nonlinear optimization approach. As in the parareal approach, the time interval is partitioned into subintervals, and local time integrations are carried out in parallel. Solution continuity equations across interval boundaries are added as constraints. All the computational steps (forward solutions, gradients, and Hessian-vector products) involve only ideally parallel computations and therefore are highly scalable.
This work develops a systematic mathematical framework to compute the impact of data and model errors on the solution to the variational inverse problems. The computational algorithm makes use of first- and second-order adjoints and provides an a posteriori error estimate for a quantity of interest defined on the inverse solution (i.e., an aspect of the inverse solution). We illustrate the estimation algorithm on a shallow water model and on the Weather Research and Forecasting (WRF) model.
The presence of outliers in measurement data is common, and this negatively impacts the solution to variational inverse problems. The traditional approach, where the inverse problem is formulated as a minimization problem in the $L_2$ norm, is especially sensitive to large data errors. To alleviate the impact of data outliers we propose to use robust norms such as the $L_1$ and Huber norms in data assimilation. This work develops a systematic mathematical framework to perform three- and four-dimensional variational data assimilation using the $L_1$ and Huber norms. The power of this approach is demonstrated by solving data assimilation problems where measurements contain outliers. / Ph. D.
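A one-dimensional sketch shows why the Huber norm resists data outliers where the $L_2$ norm does not. The grid search and the data values are illustrative; the thesis's actual 3D/4D-Var formulation is far more involved:

```python
def huber(r, delta=1.0):
    # Huber loss: quadratic near zero, linear in the tails, so gross
    # residuals (outliers) are penalized linearly, not quadratically.
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

def fit_constant(data, loss, grid):
    # 1-D "assimilation": choose the constant state minimizing the
    # summed loss over the observations (brute-force grid search).
    return min(grid, key=lambda m: sum(loss(d - m) for d in data))

data = [1.0, 1.1, 0.9, 1.05, 10.0]      # one gross outlier
grid = [i / 100 for i in range(0, 1100)]
l2_fit = fit_constant(data, lambda r: r * r, grid)
huber_fit = fit_constant(data, huber, grid)
# The L2 estimate is dragged toward the outlier (it equals the mean,
# about 2.81); the Huber estimate stays near the bulk of the data.
```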

24 
Continuum Sensitivity Analysis for Shape Optimization in Incompressible Flow Problems. Turner, Aaron Michael 18 July 2017 (has links)
An important part of an aerodynamic design process is optimizing designs to maximize quantities such as lift and the lift-to-drag ratio, in a process known as shape optimization. It is the goal of this thesis to develop and apply an understanding of the mixed finite element method and sensitivity analysis in a way that sets the foundation for shape optimization. The open-source Incompressible Flow Iterative Solution Software (IFISS) mixed finite element toolbox for MATLAB, developed by Silvester, Elman, and Ramage, is used. Meshes are produced for a backward-facing step problem, using built-in tools from IFISS as well as the mesh generation software Gmsh, and grid convergence studies are performed for both sets of meshes along a sampled data line to ensure that the simulations converge asymptotically with increasing mesh resolution. As a preliminary study of sensitivity analysis, analytic sensitivities of velocity components along the backward-facing step data line to inflow velocity parameters are determined and verified using finite-difference and complex-step sensitivity values. The method is then applied to the pressure drag calculated by integrating the pressure over the surface of a circular cylinder in a freestream flow, and verified and validated using published simulation data and experimental data. The sensitivity analysis study is extended to shape optimization, wherein the shape of a circular cylinder is altered and the sensitivities of the pressure drag coefficient to the changes in the cylinder shape are determined and verified. / Master of Science
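The complex-step method used above for verification takes only a few lines. The toy response function is illustrative, not one of the thesis's flow quantities; the idea is that perturbing the input along the imaginary axis avoids the subtractive cancellation that limits finite differences:

```python
def complex_step_derivative(f, x, h=1e-20):
    # Complex-step approximation: df/dx ~= Im(f(x + i*h)) / h.
    # No subtraction of nearly equal numbers occurs, so h can be made
    # tiny and the derivative is accurate to machine precision.
    return f(complex(x, h)).imag / h

# Toy response: the derivative of x**3 at x = 2 is exactly 12.
d = complex_step_derivative(lambda x: x ** 3, 2.0)

# Central finite difference for comparison; its step size is limited
# from below by round-off and from above by truncation error.
h = 1e-6
fd = ((2.0 + h) ** 3 - (2.0 - h) ** 3) / (2 * h)
```

This contrast is why complex-step values serve as trustworthy references when verifying analytic sensitivities.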

25 
Finite Element Analysis and Sensitivity Analysis for the Potential Equation. Capozzi, Marco G F 08 May 2004 (has links)
A finite element solver has been developed for performing analysis and sensitivity analysis with Poisson's equation. An application of Poisson's equation in fluid dynamics is potential flow, in which case Poisson's equation reduces to Laplace's equation. The stiffness matrix and the sensitivity of the stiffness matrix are evaluated by direct integration, as opposed to numerical integration. This reduces computational effort and minimizes the sources of computational error. The capability of evaluating sensitivity derivatives has been added in order to perform design sensitivity analysis of non-lifting airfoils. The discrete-direct approach to sensitivity analysis is utilized in the current work. The potential flow equations and the sensitivity equations are solved using a preconditioned conjugate gradient method. This method greatly reduces the time required to perform the analysis and the subsequent design optimization. The airfoil shape is updated at each design iteration by using a Bezier-Bernstein surface parameterization. The unstructured grid is adapted by treating the mesh as a system of interconnected springs. Numerical solutions from the flow solver are compared with analytical results obtained for a Joukowsky airfoil. Sensitivity derivatives are validated using carefully determined central finite-difference values. The developed software is then used to perform inverse design of a NACA 0012 and a multi-element airfoil.
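A preconditioned conjugate gradient solver of the kind mentioned above can be sketched with a simple Jacobi (diagonal) preconditioner. The 2x2 system is illustrative only, and this is a generic textbook PCG, not the thesis's implementation:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    # Conjugate gradient for symmetric positive definite A, with a
    # Jacobi (diagonal) preconditioner M = diag(A).
    n = len(b)
    x = [0.0] * n
    r = b[:]
    M_inv = [1.0 / A[i][i] for i in range(n)]
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)   # solves A x = b; exact solution is (1/11, 7/11)
```

For an n x n SPD system, CG converges in at most n iterations in exact arithmetic, and a good preconditioner reduces that count dramatically for large systems, which is the source of the time savings claimed above.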

26 
Nash strategies with adaptation and their application in the deregulated electricity market. Tan, Xiaohuan 28 November 2006 (has links)
No description available.

27 
Pilot project for the construction of a biogas plant. Moravec, František January 2011 (has links)
No description available.

28 
Developing and testing a model of wind damage risk for forest plantations in South-West Europe. Locatelli, Tommaso January 2016 (has links)
Wind is the main abiotic cause of disturbance to forests in large parts of the world, particularly at temperate and boreal latitudes. In the past few decades the consequences of large wind-induced losses have been experienced at various levels, from small forest owners to the large-scale, whole-society level. This is particularly relevant for areas, such as Europe, where forests are intensively managed and the assets exposed to wind hazard are substantial. To better manage forests and commercial tree plantations to reduce the risk of wind damage, process-based, semi-mechanistic mathematical models such as ForestGALES are used. This model has been parameterised and evaluated for numerous conifer species, which constitute the major plantation types in temperate and boreal biomes. However, the geographical extent and economic importance of fast-growing broadleaved species, such as those of the Eucalyptus genus, and the lack of detailed historical data on wind damage to these species, require that tools for estimating the risk of wind damage to these species are developed and evaluated. This is particularly relevant in light of the projected increases in surface temperature, and in the frequency and severity of extreme windstorms, that are expected as a consequence of climate change.
Fieldwork was conducted in a semi-natural Eucalyptus globulus (Labill.) forest in the Asturias region of Northern Spain to acquire data for the parameterisation of ForestGALES for E. globulus, using a tree-pulling experiment. The behaviour of the parameterisation was investigated for different stocking densities to evaluate whether the effects of tree height, stocking density, and the presence of a fresh upwind gap are consistent with the literature. This parameterisation was then used to compare the vulnerability to wind damage of E. globulus and Pinus pinaster (Ait.), the predominant plantation species in the Aquitaine region of SW France, where extensive damage was experienced from storms Martin (1999) and Klaus (2009). The effects of rooting depth (2x), growth rate (2x), the presence/absence of a recently created windward gap, and the predominant wind climate in Aquitaine were investigated in this comparison.
In order to aid forest managers with optimal resource allocation for practical applications of ForestGALES, and to provide forest modellers with invaluable insights for the development of robust wind damage risk models, ForestGALES was subjected to a sensitivity analysis. A generalisation of the variance-based method of Sobol' for the case of correlated variables was used to investigate the sensitivity of the outputs of ForestGALES (the critical wind speeds for stem breakage and uprooting, and the associated probabilities of damage) to variation in its input variables.
Almost all the E. globulus trees pulled in Asturias failed by overturning rather than breakage, which allowed for good confidence in the calculations of the overturning moments required for the empirical component of ForestGALES. Resistance to overturning was not significantly influenced by the presence of a taproot. Modelling the shape of the tree crowns with an ellipsoid provided a good approximation of the geometry of the canopy, but required additional fieldwork, as crown width in the four cardinal directions had to be estimated visually prior to the tests. The scarcity of detailed published data on wind damage to E. globulus made evaluating the parameterisation particularly challenging. This impediment was overcome by investigating the behaviour of the parameterisation with regard to the well-known effects of tree height, stocking density, and the presence of a fresh upwind gap. The simulations showed that the parameterisation behaved as expected, with the vulnerability of E. globulus stands increasing with tree height, stocking density, and the presence of a gap.
High initial planting densities, an early thinning, and a final harvest before the trees have reached a height of 20-25 m are recommended to reduce the risk of wind damage to E. globulus. The comparison with P. pinaster showed that E. globulus trees are particularly susceptible to the presence of a recently created windward gap. Therefore, harvesting at neighbouring sites should be minimised, and preferentially performed when the neighbouring stands are still at a young age, to take advantage of the fast growth rates of E. globulus. These practices would ensure that, in case of wind damage, any losses are recovered in a short time, and they can reduce the cumulative risk through the rotation while maintaining competitive yields. The ForestGALES simulations have also highlighted that the silvicultural practices currently in place in Aquitaine expose P. pinaster trees to high levels of cumulative risk (> 20%).
The sensitivity analysis of ForestGALES has highlighted the strengths of the model and the areas that require substantial improvement. The results of the analysis show that ForestGALES is able to simulate very effectively the dynamics of wind damage to forest stands, as the model architecture reflects the significant influences of tree height, stocking density, dbh, and the size of an upwind gap on the calculations of the critical wind speeds of damage. Similarly, in ForestGALES the wind climate of a site is the main driver of variation of the probabilities of damage, as it is for real forests affected by extreme storms. Conversely, when the windiness of a site is moderate, ForestGALES accounts for the larger role of tree and stand variables.
The sensitivity analysis has shown that ForestGALES is particularly efficient at simulating not only the effect of the size of windward gaps on the vulnerability of a stand, but also at differentiating between recently formed stand edges and edges that have been in place since the establishment of a stand. Therefore, for practical applications of the model, tree height, dbh, stocking density, the size and nature of an upwind gap, and the local wind climate are the variables that need to be known with high accuracy in order to maximally reduce the uncertainty of the model predictions. The section of the model that requires further attention and research is the one dedicated to the calculation of the trees' resistance to overturning. The sensitivity analysis has shown that rooting depth and soil type, the model input variables on which the empirical component of ForestGALES describing the resistance to overturning is based, contribute only marginally to the variation in the outputs. This finding clearly indicates that future research efforts should be aimed at studying the mechanics of root-soil interactions with regard to tree stability. The results of the sensitivity analysis have also shown that the variance-based method used in this research project is as sensitive to the accurate description of the probability distribution functions of the scrutinised variables as it is to their correlation structure.
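The variance-based (Sobol') indices at the heart of this analysis can be illustrated with a brute-force estimator for independent inputs; the correlated-variable generalisation used in the thesis is more involved. The toy "vulnerability" surrogate and the variable names are illustrative, not ForestGALES quantities:

```python
import random

def first_order_index(model, n_vars, i, n_outer=200, n_inner=200, seed=1):
    # Brute-force estimate of the first-order Sobol' index
    #   S_i = Var(E[Y | X_i]) / Var(Y)
    # assuming independent U(0,1) inputs. Outer loop freezes X_i, the
    # inner loop averages Y over the remaining variables.
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                      # frozen value of X_i
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(n_vars)]
            x[i] = xi
            ys.append(model(x))
        cond_means.append(sum(ys) / n_inner)
        all_y.extend(ys)

    def var(v):
        m = sum(v) / len(v)
        return sum((a - m) ** 2 for a in v) / len(v)

    return var(cond_means) / var(all_y)

# Toy surrogate in which "tree height" dominates a weak "soil factor".
model = lambda x: 4.0 * x[0] + 1.0 * x[1]
s_height = first_order_index(model, 2, 0)
s_soil = first_order_index(model, 2, 1)
# s_height is far larger than s_soil, singling out the variable that
# drives output variance, exactly the kind of ranking reported above.
```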

29 
Groundwater Vulnerability Assessment Using a GIS-Based Modified DRASTIC Model in Agricultural Areas. Gheisari, Narges January 2017 (has links)
The DRASTIC model is the most widely used method for aquifer vulnerability mapping and consists of seven hydrogeological parameters. Despite its popularity, the technique disregards the effect of regional characteristics, and there is no specific validation method to demonstrate its accuracy. The main goal of this research was to develop an integrated GIS-based DRASTIC model using Depth to water, net Recharge, Aquifer media, Soil media, Topography, Impact of vadose zone, and hydraulic Conductivity (DRASTIC). In order to obtain a more reliable and accurate assessment, the rates and weights of the original DRASTIC were modified using the Wilcoxon rank-sum non-parametric statistical test and Single Parameter Sensitivity Analysis (SPSA). The methodology was implemented for the Shahrekord plain in the southwestern region of Iran. Two different sets of measured nitrate concentrations from two monitoring events were used, one for modification and the other for validation purposes. Validation nitrate values were compared to the calculated DRASTIC index to assess the efficacy of the DRASTIC model. The validation results, obtained from Pearson's correlation and chi-square values, revealed that the modified DRASTIC is more efficient than the original DRASTIC. The modified rate/weight DRASTIC (spline) model showed the highest correlation coefficient and chi-square value, at 0.88 and 72.93 respectively, compared to 0.3 and 25.2 for the original DRASTIC (spline) model. The integrated vulnerability map showed the high risk imposed on the southeastern part of the Shahrekord aquifer. In addition, sensitivity analysis indicated that the removal of the net recharge parameter from the modified model caused the largest variation in the vulnerability index, showing that this parameter has the greatest impact on the DRASTIC vulnerability of the aquifer. Moreover, Aquifer media (A), Topography (T), and Impact of vadose zone (I) were found to have less effect and importance compared to the other variables, as expected.
Therefore, a reduced modified DRASTIC model was proposed by eliminating the A, T, and I parameters. Pearson's correlation coefficient and chi-square value for the reduced model were calculated as 0.88 and 100.38, respectively, showing it to be as reliable as the full modified DRASTIC model.
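The DRASTIC index itself is simply a weighted sum of the seven parameter ratings. A minimal sketch, using the standard (unmodified) DRASTIC weights and purely illustrative cell ratings; the modified model in the study replaces these weights with values calibrated against nitrate data:

```python
def drastic_index(ratings, weights):
    # DRASTIC vulnerability index: weighted sum of the seven parameter
    # ratings (Depth to water, net Recharge, Aquifer media, Soil media,
    # Topography, Impact of vadose zone, hydraulic Conductivity).
    assert len(ratings) == len(weights) == 7
    return sum(r * w for r, w in zip(ratings, weights))

# Standard DRASTIC weights in (D, R, A, S, T, I, C) order.
weights = [5, 4, 3, 2, 1, 5, 3]
ratings = [9, 8, 6, 5, 10, 6, 4]   # illustrative per-cell ratings (1-10)
index = drastic_index(ratings, weights)   # higher index = more vulnerable
```

In a GIS implementation this sum is evaluated per raster cell, producing the vulnerability map that is then compared against the nitrate observations.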

30 
Development, risk analysis, and compression of a multi-host model for Chagas disease transmission in southern Louisiana. January 2020 (has links)
archives@tulane.edu / 1 / Harley Hanes
