351 |
Development of a Design-Based Computational Model of Bioretention Systems
Liu, Jia, 03 December 2013
Multiple problems caused by urban runoff have emerged as a consequence of the continuing development of urban areas in recent decades. The increase in impervious land area can significantly alter watershed hydrology and water quality. Typical impacts on downstream hydrologic regimes include higher peak flows and runoff volumes, shorter times of concentration, and reduced infiltration. Urban runoff also increases the transport of pollutants and nutrients and thus degrades water bodies adjacent to urban areas. One of the most frequently used practices for restoring the hydrology and water quality of urban watersheds is bioretention (also known as a rain garden). Despite its wide applicability, understanding its multiple physicochemical and biological treatment processes remains an active research area.
To provide a broad basis for evaluating the hydrologic input to bioretention systems, the spatial and temporal distribution of storm events in Virginia was studied. Long-term frequency analysis of 60 years of precipitation data demonstrates that the 90th-percentile (10-year return period) rainfall depth and dry duration in Virginia are 22.9 – 35.6 mm and 15.3 – 25.8 days, respectively. Monte Carlo simulations demonstrated that sampling programs in different regions would likely encounter more than 30% of precipitation events below 2.54 mm and 10% above 25.4 mm.
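The Monte Carlo idea above can be sketched in a few lines of Python. This is an illustrative stand-in, not the authors' procedure: the event-depth distribution is assumed exponential and the mean depth is a hypothetical value, so the fractions it prints only demonstrate the mechanics of estimating how often a sampling program would encounter very small or very large events.

```python
import math
import random

random.seed(42)

MEAN_DEPTH_MM = 8.0   # hypothetical mean event depth; site-specific in practice
N_EVENTS = 100_000

# Draw event depths from an assumed exponential distribution of storm depths
depths = [random.expovariate(1.0 / MEAN_DEPTH_MM) for _ in range(N_EVENTS)]

frac_small = sum(d < 2.54 for d in depths) / N_EVENTS   # events under 2.54 mm (0.1 in)
frac_large = sum(d > 25.4 for d in depths) / N_EVENTS   # events over 25.4 mm (1 in)

print(f"share of events < 2.54 mm: {frac_small:.1%}")
print(f"share of events > 25.4 mm: {frac_large:.1%}")
```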
Further experimental research was conducted to evaluate bioretention recipes for retaining stormwater nitrogen (N) and phosphorus (P). A mesocosm experiment was performed to simulate bioretention facilities with 3 different bioretention blends as media layers and underdrain pipes for leachate collection. A control group with 3 duplicates for each medium was compared with a replicated vegetated group. Field measurements of dissolved oxygen (DO), oxidation-reduction potential (ORP), pH, and total dissolved solids (TDS) were combined with laboratory analyses of total suspended solids (TSS), nitrate (NO3), ammonium (NH4), phosphate (PO4), total Kjeldahl nitrogen (TKN) and total phosphorus (TP) to evaluate the nutrient removal efficacies of these blends. Physicochemical measurements were performed to determine the characteristics of the blends. Isotherm experiments examining P adsorption were also conducted to provide supplementary data for the evaluation of bioretention media blends. The results show that the blend with water treatment residuals (WTR) removed >90% of influent P, and its effluent had the lowest TDS / TSS. Another blend with mulch-free compost retained the most (50 – 75%) total nitrogen (TN) and had the smallest DO / ORP values, which appears to promote denitrification under anaerobic conditions. Increasing the hydraulic retention time (HRT) to 6 h positively influenced DO, ORP, TKN, and TN. Plant health should also be considered as part of a compromise mix that sustains vegetation. Two-way analysis of variance (ANOVA) found that both main and interaction effects of HRT and plants existed and could affect water quality parameters of mesocosm leachate.
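The P-adsorption isotherm experiments mentioned above are commonly interpreted with a Langmuir model, q = qmax·b·C / (1 + b·C). The sketch below, with entirely hypothetical qmax and b values rather than measured media properties, shows how the double-reciprocal linearization (1/q vs 1/C) recovers the isotherm constants from equilibrium data:

```python
QMAX, B = 0.9, 0.4   # hypothetical: mg P per g media, and L/mg affinity constant

def langmuir_q(c):
    """Equilibrium P sorbed per gram of media: q = qmax*b*C / (1 + b*C)."""
    return QMAX * B * c / (1 + B * c)

# Double-reciprocal linearization: 1/q = 1/qmax + (1/(qmax*b)) * (1/C),
# so an ordinary least-squares line through (1/C, 1/q) yields both constants.
cs = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]          # equilibrium concentrations (mg/L)
xs = [1 / c for c in cs]
ys = [1 / langmuir_q(c) for c in cs]
n = len(cs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

qmax_fit = 1 / intercept            # intercept is 1/qmax
b_fit = 1 / (slope * qmax_fit)      # slope is 1/(qmax*b)
print(f"fitted qmax = {qmax_fit:.3f} mg/g, b = {b_fit:.3f} L/mg")
```

With noiseless synthetic data the fit recovers the assumed constants exactly; with real isotherm measurements the same regression gives the best-fit values.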
Based upon this understanding of the physicochemical and hydrologic conditions, a design model of a bioretention system became the next logical step. The computational model was developed within the Matlab® programming environment to describe the hydraulic performance and nutrient removal of a bioretention system. The model comprises a main function and multiple subroutines for hydraulics and treatment computations. Evapotranspiration (ET), inflow, infiltration, and outflow were calculated for hydrologic quantitation. Biomass accumulation and the fates of nitrogen and phosphorus within bioretention systems were also computed on the basis of the hydrologic outputs. The model was calibrated with observed flow and water quality data from a field-scale bioretention cell in Blacksburg, VA. The calibrated model provides quantitative estimates of flow pattern and nutrient removal that agree with the observed data. Sensitivity analyses determined that the major factors affecting discharge were watershed width and roughness for inflow, and pipe head and diameter for outflow. Nutrient concentrations in the inflow strongly influence outflow quality. A long-term simulation demonstrates that the model can be used to estimate bioretention performance and evaluate its impact on the surrounding environment.
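The hydrologic core described above (ET, inflow, infiltration, outflow) is, at heart, a per-timestep water balance. The following is a minimal sketch of that logic, not the authors' Matlab model: the cell geometry, infiltration rate, and ET rate are hypothetical, and overflow stands in for the piped outflow.

```python
def simulate_bioretention(inflow_series_m3, area_m2=40.0, depth_m=0.3,
                          infil_rate_m_per_step=0.005, et_m_per_step=0.0005):
    """Toy water balance per timestep:
    storage_{t+1} = storage_t + inflow - infiltration - ET - overflow.
    All rates and dimensions are illustrative, not calibrated values."""
    capacity = area_m2 * depth_m          # ponding volume the cell can hold (m^3)
    storage, out = 0.0, []
    for q_in in inflow_series_m3:
        storage += q_in
        infil = min(storage, infil_rate_m_per_step * area_m2)   # rate-limited loss
        storage -= infil
        et = min(storage, et_m_per_step * area_m2)              # evapotranspiration
        storage -= et
        overflow = max(0.0, storage - capacity)                 # excess leaves cell
        storage -= overflow
        out.append({"infiltration": infil, "et": et,
                    "overflow": overflow, "storage": storage})
    return out

# Hypothetical storm hydrograph: 1 and 5 m^3 of inflow, then two dry steps
result = simulate_bioretention([1.0, 5.0, 0.0, 0.0])
print(result[-1])
```

A useful property of any such routine is that it closes the mass balance: total inflow equals total infiltration + ET + overflow + final storage.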
This research advances the current understanding of bioretention systems in a systematic way, spanning hydrologic behavior, monitoring, design criteria, physicochemical performance, and computational modeling. The computational model, combined with the results of the precipitation frequency analysis and the evaluation of bioretention blends, can be used to improve the operation, maintenance, and design of bioretention facilities in practical applications. / Ph. D.
|
352 |
Fréchet Sensitivity Analysis and Parameter Estimation in Groundwater Flow Models
Leite Dos Santos Nunes, Vitor Manuel, 09 May 2013
In this work we develop and analyze algorithms motivated by the parameter estimation problem corresponding to a multilayer aquifer/interbed groundwater flow model. The parameter estimation problem is formulated as an optimization problem and then addressed with algorithms based on adjoint equations, quasi-Newton schemes, and multilevel optimization. In addition to the parameter estimation problem, we consider properties of the parameter-to-solution map, including invertibility (known as identifiability) and differentiability. For differentiability, we extend existing results on Fréchet sensitivity analysis to convection-diffusion equations and groundwater flow equations. This is achieved by proving that the Fréchet derivative of the solution operator is Hilbert-Schmidt, under smoothness assumptions on the parameter space. In addition, we approximate this operator by time-dependent matrices whose singular values and singular vectors converge to their infinite-dimensional counterparts. This decomposition proves very useful, as it identifies which perturbations of the distributed parameters lead to the most significant changes in the solutions, with applications to uncertainty quantification. Numerical results complement our theoretical findings. / Ph. D.
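The role of the singular value decomposition described above can be illustrated on a finite-dimensional stand-in for the sensitivity operator. In this sketch the matrix J is synthetic (random, with a prescribed decay of sensitivities), not a discretized groundwater model; the point is that the leading right singular vector is the parameter perturbation with the largest effect on the solution per unit perturbation norm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensitivity matrix J: rows = solution observations,
# columns = distributed parameter values; J[i, j] ~ d(obs_i)/d(param_j).
# The diagonal scaling imposes a decaying sensitivity spectrum.
J = rng.standard_normal((50, 20)) @ np.diag(np.linspace(3.0, 0.01, 20))

U, s, Vt = np.linalg.svd(J, full_matrices=False)

# The leading right singular vector v1 maximizes ||J v|| over unit vectors v,
# i.e. it is the most influential parameter perturbation direction.
v1 = Vt[0]
gain = float(np.linalg.norm(J @ v1))    # equals the largest singular value s[0]
print(f"largest singular value: {s[0]:.3f}, gain along v1: {gain:.3f}")
```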
|
353 |
Quantitative Anisotropy Imaging based on Spectral Interferometry
Li, Chengshuai, 01 February 2019
Spectral interferometry, also known as spectral-domain white light or low coherence interferometry, has seen numerous applications in sensing and metrology of physical parameters. It can provide the phase or optical path information of interest in single-shot measurements with exquisite sensitivity and large dynamic range. As fast spectrometers became more widely available in the 21st century, spectral interferometric techniques began to dominate over time-domain interferometry, thanks to their speed and sensitivity advantages.
In this work, a dual-modality phase/birefringence imaging system is proposed to offer a quantitative approach to characterizing the phase, polarization and spectroscopic properties of a variety of samples. An interferometric spectral multiplexing method is first introduced, generating polarization mixing with a specially aligned polarizer and birefringent crystal. The retardation and orientation of sample birefringence can then be measured simultaneously from a single interference spectrum. Furthermore, with the addition of a Nomarski prism, the same setup can be used for quantitative differential interference contrast (DIC) imaging. The highly integrated system demonstrates its capability for noninvasive, label-free, highly sensitive birefringence, DIC and phase imaging of anisotropic materials and biological specimens, where multiple intrinsic contrasts are desired.
Beyond using different intrinsic contrast regimes to quantitatively measure biological samples, the spectral multiplexing interferometry technique is also well suited to imaging single anisotropic nanoparticles, even when their size is well below the diffraction limit. Quantitative birefringence spectroscopy measurements of gold nanorods on a glass substrate demonstrate that the proposed system can simultaneously determine the polarizability-induced birefringence orientation, as well as the scattering intensity and the phase difference between the major and minor axes of single nanoparticles. With the anisotropic nanoparticles' spectroscopic polarizability defined prior to the measurement by calculation or simulation, the system can further reveal the size, aspect ratio and orientation of the detected anisotropic nanoparticle.
Alongside the development of optical anisotropy imaging systems, the other part of this research investigates the sensitivity limit of spectral-interferometry-based systems in general. A complete, realistic multi-parameter interference model is proposed in which the signal is corrupted by a combination of shot noise, dark noise and readout noise. With these noise sources in the detected spectrum following different statistical behaviors, Cramér-Rao bounds are derived for multiple unknown parameters, including optical pathlength, system-specific initial phase, spectrum intensity and fringe visibility. The significance of this work is to establish criteria for evaluating whether an interferometry-based optical measurement system has been optimized to the full potential of its hardware.
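The flavor of the Cramér-Rao analysis above can be shown with a deliberately simplified single-noise model (the dissertation treats the harder mixed-noise case). Assuming a spectrum S(k) = I0·(1 + V·cos(kL + φ0)) with additive Gaussian noise of standard deviation σ per pixel, and all parameters except L known, the Fisher information for the pathlength L is the sum of (∂S/∂L)²/σ² over pixels, and the CRB is its reciprocal. All numeric values below are hypothetical.

```python
import math

# Assumed single-noise model: S(k) = I0*(1 + V*cos(k*L + phi0)) + N(0, sigma^2)
I0, V, L, phi0, sigma = 1.0, 0.8, 50.0, 0.0, 0.02   # L in um, k in rad/um
ks = [2 * math.pi / (0.8 + 0.0002 * i) for i in range(1024)]  # spectrometer pixels

# Fisher information for L with other parameters known:
# FI = sum_k (dS/dL)^2 / sigma^2, where dS/dL = -I0*V*k*sin(k*L + phi0)
fi = sum((I0 * V * k * math.sin(k * L + phi0)) ** 2 for k in ks) / sigma ** 2

crb_std = 1.0 / math.sqrt(fi)   # lower bound on the std of any unbiased estimator of L
print(f"CRB on pathlength std: {crb_std:.2e} um")
```

Mixed shot/dark/readout noise changes the per-pixel variance (and hence the weights in the sum), which is exactly the complication the dissertation's multi-parameter derivation handles.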
An algorithm based on maximum likelihood estimation is also developed to achieve absolute optical pathlength demodulation with high sensitivity. In particular, it attains the Cramér-Rao bound and offers noise resistance that can suppress the occurrence of demodulation jumps. Through simulations and experimental validation, the proposed algorithm demonstrates its capability of achieving the Cramér-Rao bound over a large dynamic range of optical pathlengths, initial phases and signal-to-noise ratios. / Ph. D. / Optical imaging is unique in its ability to use light to provide both structural and functional information from microscopic to macroscopic scales. In microscopy, how to create contrast for better visualization of the detected objects is one of the most important topics. In this work, we aim to develop a noninvasive, label-free and quantitative imaging technique based on multiple intrinsic contrast regimes, such as intensity, phase and birefringence.
The spectral multiplexing interferometry method is first introduced by generating spectral interference through polarization mixing. Multiple parameters can thus be demodulated from a single-shot interference spectrum. With Jones-matrix analysis, the retardation and orientation of sample birefringence can be measured simultaneously. A dual-modality phase/birefringence imaging system is proposed to offer a quantitative approach to characterizing the phase, polarization and spectroscopic properties of a variety of samples. The highly integrated system can not only deliver label-free, highly sensitive birefringence, DIC and phase imaging of anisotropic materials and biological specimens, but also reveal the size, aspect ratio and orientation of anisotropic nanoparticles whose size is well below the diffraction limit.
Alongside the development of optical imaging systems based on spectral interferometry, the other part of this research investigates the sensitivity limit of spectral-interferometry-based systems in general. The significance of this work is to use Cramér-Rao bounds to establish criteria for evaluating whether an optical measurement system has been optimized to the full potential of its hardware. An algorithm based on maximum likelihood estimation is also developed to achieve absolute optical pathlength demodulation with high sensitivity. In particular, it attains the Cramér-Rao bound and offers noise resistance that can suppress the occurrence of demodulation jumps.
|
354 |
Model Validation for a Steel Deck Truss Bridge over the New River
Hickey, Lucas James, 26 May 2008
This thesis presents the methods used to model a steel deck truss bridge over the New River in Hillsville, Virginia. These methods were evaluated by comparing analytical results with data recorded from 14 members during live load testing. The research presented herein is part of a larger endeavor to understand the structural behavior and collapse mechanism of the former I-35W bridge in Minneapolis, MN. Objectives accomplished toward this end include investigating the effect of lacing on strain detection in built-up members, live load testing of a steel truss bridge, and evaluating modeling techniques against recorded data.
Before any live load testing could be performed, it was necessary to confirm an acceptable strain gage layout for measuring member strains. The effect of riveted lacing in built-up members was investigated by constructing a two-thirds-scale mockup of a typical bridge member. The mockup was instrumented with strain gages and subjected to known strains in order to determine the most effective gage arrangement. The testing showed that for a built-up member consisting of laced channels, one strain gage installed at the middle of the extreme fiber of each channel's flanges was sufficient. Thus, laced members on the bridge were each fitted with four strain gages.
Data from live loads were obtained by loading two trucks to 25 tons each. Trucks were positioned at eight locations on the bridge in four different relative truck positions. Data were recorded continuously and reduced to member forces for model validation comparisons. Deflections at selected truss nodes were also recorded for model validation purposes.
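The reduction from gage readings to member forces described above can be sketched as follows. The modulus and cross-sectional area are illustrative placeholders, not the bridge's actual member properties: with four gages per laced member, the axial force follows from the averaged strain via F = E * A * eps.

```python
E_STEEL = 200e9          # Pa, typical elastic modulus for structural steel
AREA = 0.004             # m^2, hypothetical built-up member cross-section

def member_force_kN(gage_microstrain):
    """Average the four flange gage readings of a built-up laced member,
    then convert the mean strain to axial force via F = E * A * eps."""
    eps = sum(gage_microstrain) / len(gage_microstrain) * 1e-6  # microstrain -> strain
    return E_STEEL * AREA * eps / 1e3                            # N -> kN

# Hypothetical readings from the four gages of one member (microstrain)
force = member_force_kN([102.0, 98.0, 105.0, 95.0])
print(f"estimated axial force: {force:.1f} kN")
```

Averaging the four gages cancels the bending component of the readings, which is why the single-gage-per-flange layout validated on the mockup suffices for axial force recovery.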
The model validation process began by developing four simple truss models, each reflecting different expected restraint conditions, in the hope of bracketing the recorded results. Models were refined to frames, and then to frames including floor beams and stringers, for greater accuracy. The final, most accurate model was selected and used for a failure analysis. This model showed where the minimum load could be applied in order to learn about the bridge's failure behavior, for a test to be conducted at a later time. / Master of Science
|
355 |
Continuum Analytical Shape Sensitivity Analysis of 1-D Elastic Bar
Nayak, Soumya Sambit, 06 January 2021
In this thesis, a continuum sensitivity analysis method is presented for calculating the shape sensitivities of an elastic bar. The governing differential equations and boundary conditions for the elastic bar are differentiated with respect to the shape design parameter to derive the continuum sensitivity equations. The continuum sensitivity equations are linear ordinary differential equations in the local or material shape design derivatives, otherwise known as shape sensitivities. One of the novelties of this work is the derivation of three variational formulations for obtaining shape sensitivities: one in terms of the local sensitivity and two in terms of the material sensitivity. These derivations involve evaluating (a) the variational form of the continuum sensitivity equations, or (b) the sensitivity of the variational form of the analysis equations. We demonstrate their implementation for various combinations of design velocity and global basis functions. These variational formulations are then solved using finite element analysis. The order of convergence of each variational formulation is determined by comparing the sensitivity solutions with the exact solutions for analytical test cases. This research focuses on the 1-D structural equations. In future work, the three variational formulations can be derived for 2-D and 3-D structural and fluid domains. / Master of Science / When solving an optimization problem, the extreme value of the performance metric of interest is calculated by tuning the values of the design variables. Some optimization problems involve shape change as one of the design variables. A change in shape alters the boundary locations, and hence the domain definition and the boundary conditions. We consider a 1-D structural element, an elastic bar, for this study. We then demonstrate a method for calculating the sensitivity of the solution (e.g., displacement at a point) to a change in the shape (the length, in the 1-D case) of the elastic bar. These sensitivities, known as shape sensitivities, are critical for design optimization problems. We make use of continuum analytical shape sensitivity analysis to derive three variational formulations for computing these shape sensitivities. The accuracy and convergence of the solutions are verified using a finite element analysis code. In the future, the approach can be extended to multi-dimensional structural and fluid domain problems.
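A minimal numeric illustration of the kind of shape sensitivity discussed above, with entirely hypothetical load and material values: for a uniform bar under an axial tip load, the tip displacement is u(L) = P·L/(E·A), so the sensitivity of the displacement to the length design variable is du/dL = P/(E·A), which a finite-difference estimate should reproduce.

```python
P, E, A = 1000.0, 70e9, 1e-4   # hypothetical load (N), modulus (Pa), area (m^2)

def tip_displacement(L):
    """End displacement of a uniform bar under axial tip load: u = P*L/(E*A)."""
    return P * L / (E * A)

def analytic_shape_sensitivity():
    """du/dL for this bar, the 1-D 'shape' sensitivity: P/(E*A)."""
    return P / (E * A)

L0, h = 2.0, 1e-6
fd = (tip_displacement(L0 + h) - tip_displacement(L0 - h)) / (2 * h)
exact = analytic_shape_sensitivity()
print(f"finite-difference: {fd:.6e}, analytic: {exact:.6e}")
```

The continuum formulations in the thesis obtain this derivative without differencing perturbed analyses, which is what makes them attractive for discretized multi-dimensional problems where repeated re-meshing would otherwise be required.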
|
356 |
Progressive development of a hydrologic and inorganic nitrogen conceptual model to improve the understanding of small Mediterranean catchments behaviour
Medici, Chiara, 09 July 2010
Knowledge of hydrological processes is essential for water resources management, from both a quantitative standpoint (floods or droughts) and a qualitative one (pollution).
The hydrological functioning of Mediterranean catchments remains rather poorly understood despite the various studies carried out over the last twenty years. Progress in identifying and modelling hydrological processes corresponds almost entirely to research conducted in temperate-humid climates (Bonell and Balek, 1993; Buttle, 1994). According to Bonell (1993), this lack of information forces a "transfer of results", despite the evident need to develop different approaches, mainly in the field of modelling (Pilgrim et al., 1988).
Regarding hydrological modelling, the available studies (Durand et al., 1992; Parkin et al., 1996; Piñol et al., 1997, among others) show serious difficulties in reproducing the first autumn floods following the dry summer period. For these catchments it seems difficult to correctly model one or more complete hydrological years with a single parameter set (Piñol et al., 1997; Bernal et al., 2004).
The Mediterranean climate is characterised by a strongly seasonal dynamic of precipitation and evapotranspiration, which favours the alternation of dry and wet periods during the year. This strongly modifies the hydrological state of the catchment, resulting in complex, non-linear hydrological behaviour (Piñol et al., 1999).
The need to understand the hydrological functioning of a system responds to two important concerns: on the one hand, it is the most suitable procedure for providing useful elements for integrated water resources management; on the other, it is fundamental for modelling the behaviour of nutrients such as nitrate. / Medici, C. (2010). Progressive development of a hydrologic and inorganic nitrogen conceptual model to improve the understanding of small Mediterranean catchments behaviour [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/8428
|
357 |
An evaluation of membrane properties and process characteristics of a scaled-up pressure retarded osmosis (PRO) process
He, W., Wang, Y., Mujtaba, Iqbal, Shaheed, M.H., 24 August 2015
This work presents a systematic evaluation of the membrane and process characteristics of a scaled-up pressure retarded osmosis (PRO) process. To meet a pre-defined threshold of membrane economic viability (power density ≥ 5 W/m2), different operating conditions and design parameters are studied as the process scale increases, including the initial flow rates of the draw and feed solutions, the operating pressure, the membrane permeability-selectivity, the structural parameter, and the efficiencies of the high-pressure pump (HP), energy recovery device (ERD) and hydro-turbine (HT). The numerical results indicate that the performance of the scaled-up PRO process depends significantly on the dimensionless flow rate. Furthermore, as the specific membrane scale increases, the accumulated solute leakage becomes important, and the optimal membrane shifts toward lower permeability in order to mitigate reverse solute permeation. Additionally, the counter-current flow scheme can increase process performance with a more permeable and less selective membrane than the co-current scheme. Finally, the inefficiencies of the process components move the optimal average power density (APD) to a higher dimensionless flow rate, reducing energy losses in pressurization, and to a higher specific membrane scale, increasing energy generation.
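The power-density figure of merit above can be illustrated with the ideal (mass-transfer-limited effects neglected) PRO relation W = Jw·ΔP with water flux Jw = A·(Δπ − ΔP), which peaks at ΔP = Δπ/2. The permeability and osmotic pressure values below are hypothetical round numbers, not the membranes studied in the paper:

```python
A_W = 3.0e-12      # m/(s*Pa), hypothetical water permeability coefficient
D_PI = 26.0e5      # Pa (~26 bar), assumed draw-feed osmotic pressure difference

def power_density(dP):
    """Ideal PRO power density: W = Jw * dP, with Jw = A * (d_pi - dP)."""
    return A_W * (D_PI - dP) * dP     # W/m^2

# Coarse search over hydraulic pressure; the parabola peaks at dP = D_PI / 2
best = max(range(0, int(D_PI), 10_000), key=power_density)
print(f"optimal dP ~ {best / 1e5:.1f} bar, W ~ {power_density(best):.2f} W/m^2")
```

With these assumed values the ideal peak lands just above the 5 W/m2 viability threshold; the paper's scaled-up analysis shows how solute leakage and component inefficiencies pull the real optimum away from this ideal picture.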
|
358 |
Cost evaluation and optimisation of hybrid multi effect distillation and reverse osmosis system for seawater desalination
Al-Obaidi, Mudhar A.A.R., Filippini, G., Manenti, F., Mujtaba, Iqbal, 01 February 2019
In this research, the effect of operating parameters on the fresh-water production cost of a hybrid Multi Effect Distillation (MED) and Reverse Osmosis (RO) system is investigated. To achieve this, a comprehensive model developed earlier by the authors for the MED + RO system is combined with two full-scale cost models of the MED and RO processes taken from the literature. Using the economic model, the variation of the overall fresh-water cost with respect to key operating conditions, namely the steam temperature and steam flow rate for the MED process and the inlet pressure and flow rate for the RO process, is investigated in detail. The hybrid process model is then incorporated into a single-objective non-linear optimisation framework to minimise the fresh-water cost by finding the optimal values of these operating conditions. The optimisation results confirm the economic feasibility of the proposed hybrid seawater desalination plant.
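The structure of such a cost optimisation can be sketched with a deliberately simple stand-in objective: pumping cost that grows with RO inlet pressure against a recovery-related cost that falls with it. The cost function and its coefficients below are entirely hypothetical, standing in for the authors' full MED + RO economic model, and a coarse grid search stands in for their non-linear optimiser.

```python
import math

# Hypothetical single-variable stand-in for a desalination cost model:
# linear pumping-energy term plus an inverse recovery-penalty term.
A_PUMP, B_RECOV = 0.02, 12.8      # illustrative coefficients only

def cost_per_m3(p_bar):
    """Fresh-water cost ($/m^3) as a function of RO inlet pressure (bar)."""
    return A_PUMP * p_bar + B_RECOV / p_bar

# Grid search over the operating window 10-80 bar in 0.1 bar steps.
grid = [p / 10 for p in range(100, 800)]
p_opt = min(grid, key=cost_per_m3)
print(f"optimal pressure ~ {p_opt:.1f} bar, cost ~ {cost_per_m3(p_opt):.3f} $/m^3")
```

For this a·p + b/p form the analytic optimum is p* = sqrt(b/a), a convenient sanity check on the search.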
|
359 |
Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models
Wu, Sichao, 29 August 2017
When capturing a real-world, networked system in a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V and V) of such models is an inherent and fundamental challenge. Central to V and V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, compared with models based on ordinary and partial differential equations (ODE and PDE), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging, since it requires a broad set of skills ranging from domain expertise to in-depth knowledge of modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of these challenges, to the best of our knowledge, none of them accomplishes all of this in a model-independent and scientifically reproducible manner.
In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and have been studied theoretically in this dissertation. This includes a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS.
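One of the simplest SA techniques a plugin library of the kind described above might expose is a one-at-a-time (OAT) elementary-effects screen: perturb each parameter in turn and rank them by the size of the response. The toy analytic model below is a hypothetical stand-in for a network simulation run (parameter names are illustrative, not from GENEUS):

```python
def model(p):
    """Stand-in simulation response; in practice this would be one run of a
    network-based simulation model with parameter vector p."""
    beta, gamma, tau = p
    return beta * tau / gamma + 0.1 * beta * beta

base = [0.3, 0.1, 5.0]       # hypothetical nominal parameter values
delta = 0.05                 # 5% relative perturbation

# One-at-a-time elementary effects: perturb each parameter in turn and
# estimate d(output)/d(parameter) from the resulting response change.
effects = []
for i, name in enumerate(["beta", "gamma", "tau"]):
    hi = list(base)
    hi[i] *= (1 + delta)
    effects.append((name, (model(hi) - model(base)) / (base[i] * delta)))

for name, eff in sorted(effects, key=lambda e: -abs(e[1])):
    print(f"d(output)/d({name}) ~ {eff:+.3f}")
```

OAT screening is cheap but ignores interactions; variance-based methods (also within UQ/SA/DOE scope) trade many more model runs for interaction-aware indices, which is why a framework that manages runs and data becomes valuable.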
GENEUS, rooted in the framework of GDS, provides modelers, experimentalists, and research groups access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring them to have detailed expertise in statistics, data management and computing. Even for research teams having all these skills, GENEUS can significantly increase research productivity. / Ph. D. / Uncertainties are ubiquitous in computer simulation models, especially network-based models whose underlying mechanisms are difficult to characterize explicitly by mathematical formalization. Quantifying uncertainties is challenging because of either a lack of knowledge or their inherently indeterminate properties. Verification and validation of models with uncertainties cannot include every detail of real systems and will therefore remain a fundamental task in modeling. Many tools have been developed to support uncertainty quantification, sensitivity analysis, and experimental design. However, few of them are domain-independent or support the data management and complex simulation workflows of network-based simulation models.
In this dissertation, we present a computational framework called GENEUS, which incorporates a multitude of functions including uncertain parameter specification, experimental design, model execution management, data access and registrations, sensitivity analysis, surrogate modeling, and model calibration. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses for various scenarios. GENEUS provides researchers access to uncertainty quantification, sensitivity analysis and experimental design methods with robust and tested implementations without requiring detailed expertise in modeling, statistics, or computing. Even for groups having all the skills, GENEUS can help save time, guard against mistakes and improve productivity.
|
360 |
Computational Study of Turbulent Combustion Systems and Global Reactor Networks
Chen, Lu, 05 September 2017
A numerical study of turbulent combustion systems was pursued to examine different computational modeling techniques, namely computational fluid dynamics (CFD) and chemical reactor network (CRN) methods. Both methods were studied and analyzed as individual techniques and as a coupled approach, to pursue a better understanding of the mechanisms and interactions among turbulent flow and mixing, ignition behavior and pollutant formation. A thorough analysis and comparison of both turbulence models and chemistry representation methods was executed, and the simulations were compared with and validated against experimental work. An extensive study of turbulence modeling methods and the optimization of modeling techniques, including turbulence intensity and computational domain size, was conducted. The final CFD model demonstrated good predictive performance for different turbulent bluff-body flames. The study of NOx formation and the effects of fuel mixtures indicated that adding hydrogen to the fuel, along with non-flammable diluents such as CO2 and H2O, contributes to the reduction of NOx.
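Why diluents that lower flame temperature curb NOx can be seen from the rate-limiting step of the thermal (Zeldovich) mechanism, d[NO]/dt = 2·k1·[O][N2], whose Arrhenius rate constant is extremely temperature-sensitive. The sketch below uses textbook-order-of-magnitude Arrhenius constants and hypothetical concentrations, not values from the dissertation's GRI-Mech 3.0 calculations:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def zeldovich_no_rate(T, O_conc, N2_conc):
    """Initial thermal-NO formation rate from the rate-limiting Zeldovich step,
    d[NO]/dt = 2*k1*[O][N2]; the k1 Arrhenius constants are assumed,
    textbook-order values (activation energy ~318 kJ/mol)."""
    k1 = 1.8e11 * math.exp(-318_000.0 / (R * T))   # assumed units m^3/(mol*s)
    return 2.0 * k1 * O_conc * N2_conc

# Hypothetical post-flame O-atom and N2 concentrations (mol/m^3)
r_1800 = zeldovich_no_rate(1800.0, 1e-4, 15.0)
r_2000 = zeldovich_no_rate(2000.0, 1e-4, 15.0)
print(f"NO rate grows ~{r_2000 / r_1800:.0f}x from 1800 K to 2000 K")
```

A few hundred kelvin of flame cooling by CO2/H2O dilution thus suppresses thermal NO by nearly an order of magnitude, consistent with the trend reported above.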
The second part of the study focused on developing chemical models and methods that include the detailed gaseous reaction mechanism of GRI-Mech 3.0 but cost less computational time. A new chemical reactor network has been created based on the CFD results of combustion characteristics and flow fields. The proposed CRN has been validated with the temperature and species emission for different bluff-body flames and has shown the capability of being applied to general bluff-body systems. Specifically, the rate of production of NOx and the sensitivity analysis based on the CRN results helped to summarize the reduced reaction mechanism, which not only provided a promising method to generate representative reactions from hundreds of species and reactions in gaseous mechanism but also presented valuable information of the combustion mechanisms and NOx formation. Finally, the proposed reduced reaction mechanism from the sensitivity analysis was applied to the CFD simulations, which created a fully coupled process between CFD and CRN, and the results from the reduced reaction mechanism have shown good predictions compared with the probability density function method. / Ph. D. / Turbulent combustion has been regarded as one of the most typical occurrences with industrial burners, where turbulent flow is produced by large vortex eddies when fuel and oxidizer mixes. Due to increasing demands for energy and concerns for environmental pollution, it is important to have a comprehensive understanding of turbulent combustion processes. To help provide information related to turbulent combustion, computational modeling can be used to give physical insights of the combustion process. A numerical study of turbulent combustion systems was pursued to examine different computational modeling techniques and to understand the mechanisms in terms of fluid dynamics and chemical kinetics. 
Computational fluid dynamics (CFD) was used to predict the flow field, including gas velocities, temperatures and fuel characteristics. Another computational technique known as the chemical reactor network (CRN) was used to provide information related to the chemical reactions and pollutant production. A method was developed as part of the study to couple the computational methods to pursue better understandings of the mechanisms and interactions between turbulent flow and mixing, ignition behavior and pollutant formation. Results have been compared with experimental data to optimize the modeling techniques and validate the developed model. The CRN model with the detailed gaseous reaction mechanism from the Gas Research Institute GRI-Mech 3.0 created a reacting network across the combustor with flame chemistry details. By post-processing the CRN results using a sensitivity analysis, the reduced reaction mechanism was summarized, which provided a promising method to generate representative reactions of the system from hundreds of species and reactions that occur in the combustion process. The proposed reduced reaction mechanism was applied to the CFD simulations, which created a fully coupled process between CFD and CRN. The results from the reduced reaction mechanism have shown good predictions compared with the probability density function method, which is a simplified way to model combustion. Pollutant emission such as NOx has also been studied in both CFD and CRN models, in terms of the effects of fuel mixtures, the formation mechanisms and influential factors as well as reactions to the formation process. The work provides guidance for an integrated framework to model and study turbulence and chemical reactions for turbulent combustion systems.
|