81.
Modeling and Analysis for Optimization of Unsteady Aeroelastic Systems. Ghommem, Mehdi. 06 December 2011.
Simulating the complex physics and dynamics associated with unsteady aeroelastic systems is often attempted with high-fidelity numerical models. While these high-fidelity approaches are powerful in capturing the main physical features, they may not discern the role of underlying phenomena that are interrelated in complex ways. This often makes it difficult to characterize the relevant causal mechanisms of the observed features. Moreover, the extensive computational resources and time associated with the use of these tools can limit the capability to assess different configurations for design purposes. These shortcomings motivate the development of simplified, reduced-order models that embody the relevant physical aspects and elucidate the underlying phenomena that help characterize them. In this work, different fluid and aeroelastic systems are considered and reduced-order models governing their behavior are developed.
In the first part of the dissertation, a methodology based on the method of multiple scales is implemented to show its usefulness and effectiveness in characterizing the physics underlying the system, implementing control strategies, and identifying high-impact system parameters. In the second part, the unsteady aerodynamic aspects of flapping micro air vehicles (MAVs) are modeled. This modeling is required for evaluating the performance requirements associated with flapping flight. The extensive computational resources and time associated with high-fidelity simulations limit the ability to perform optimization and sensitivity analyses in the early stages of MAV design. To overcome this and enable rapid, reasonably accurate exploration of a large design space, a medium-fidelity aerodynamic tool (the unsteady vortex lattice method) is implemented to simulate flapping wing flight. This model is then combined with uncertainty quantification and optimization tools to test and analyze the performance of flapping wing MAVs under varying conditions. This analysis can provide guidance and a baseline for assessing MAV performance in the early stages of decision making on flapping kinematics, flight mechanics, and control strategies. / Ph. D.
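As a hedged illustration of the kind of closed-form insight the method of multiple scales provides (the dissertation's aeroelastic equations are more involved), the textbook calculation for the Duffing oscillator runs as follows; this is the standard derivation, not the thesis's model:

```latex
% Method of multiple scales for \ddot{u} + \omega_0^2 u + \epsilon \alpha u^3 = 0
% (illustrative example, not the thesis's aeroelastic system).
\begin{align*}
& u(t;\epsilon) = u_0(T_0,T_1) + \epsilon\, u_1(T_0,T_1) + \cdots,
\qquad T_0 = t,\; T_1 = \epsilon t \\
& O(1):\quad D_0^2 u_0 + \omega_0^2 u_0 = 0
\;\Rightarrow\; u_0 = A(T_1)\, e^{i\omega_0 T_0} + \text{c.c.} \\
& O(\epsilon):\quad D_0^2 u_1 + \omega_0^2 u_1
  = -2 D_0 D_1 u_0 - \alpha u_0^3 \\
& \text{eliminating secular terms:}\quad
  2 i \omega_0 A' + 3\alpha A^2 \bar{A} = 0 \\
& A = \tfrac{a}{2} e^{i\beta}
\;\Rightarrow\; a' = 0,\qquad
\beta' = \frac{3\alpha a^2}{8\omega_0}
\;\Rightarrow\; \omega \approx \omega_0 + \epsilon\, \frac{3\alpha a^2}{8\omega_0}
\end{align*}
```

The amplitude-dependent frequency correction in the last line is exactly the kind of explicit parameter dependence that supports control design and the identification of high-impact parameters, which high-fidelity simulation alone does not expose.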
82.
High-sensitivity Full-field Quantitative Phase Imaging Based on Wavelength Shifting Interferometry. Chen, Shichao. 06 September 2019.
Quantitative phase imaging (QPI) is a category of imaging techniques that can retrieve the phase information of the sample quantitatively. QPI features label-free contrast and non-contact detection. It has thus gained rapidly growing attention in biomedical imaging. Capable of resolving biological specimens at the tissue or cell level, QPI has become a powerful tool to reveal their structural, mechanical, physiological, and spectroscopic properties. Over the past two decades, QPI has seen a broad spectrum of evolving implementations. However, only a few have seen successful commercialization. The challenges are manifold. A major problem for many QPI techniques is the necessity of a custom-made system which is hard to interface with existing commercial microscopes. For this type of QPI technique, the cost is high and the integration of different imaging modes requires nontrivial hardware modifications. Another limiting factor is insufficient sensitivity. In QPI, sensitivity characterizes the system repeatability and determines the quantification resolution of the system. With more emerging applications in cell imaging, the requirement for sensitivity also becomes more stringent.
In this work, a category of highly sensitive full-field QPI techniques based on wavelength shifting interferometry (WSI) is proposed. On one hand, the full-field implementations, compared to point-scanning, spectral-domain QPI techniques, require no mechanical scanning to form a phase image. On the other, WSI has the advantage of preserving the integrity of the interferometer and compatibility with multi-modal imaging requirements. Therefore, the techniques proposed here have the potential to be readily integrated into ubiquitous lab microscopes and equip them with quantitative imaging functionality. In WSI, the shifts in wavelength can be applied in fine steps, an approach termed swept source digital holographic phase microscopy (SS-DHPM), or in a multi-wavelength-band manner, termed low coherence wavelength shifting interferometry (LC-WSI). SS-DHPM brings an additional capability to perform spectroscopy, whilst LC-WSI achieves a faster imaging rate, which has been demonstrated with live sperm cell imaging. In an attempt to integrate WSI with existing commercial microscopes, we also discuss the possibility of demodulation for low-cost sources and a common path implementation.
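For orientation, a minimal sketch of full-field phase retrieval from four phase-shifted interferograms is given below; this is the generic four-step algorithm on synthetic fringes, not the SS-DHPM or LC-WSI reconstruction itself, and all names and parameter values are illustrative:

```python
import numpy as np

# Generic four-step phase-shifting retrieval on synthetic fringes
# (illustrative sketch; not the SS-DHPM/LC-WSI algorithms of this work).
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
phi_true = 2 * np.pi * np.exp(-((xx - nx/2)**2 + (yy - ny/2)**2) / (2 * 40.0**2))

I0, gamma = 1.0, 0.8                      # background level and fringe visibility
deltas = [0, np.pi/2, np.pi, 3*np.pi/2]   # applied phase shifts
I1, I2, I3, I4 = [I0 * (1 + gamma * np.cos(phi_true + d)) for d in deltas]

# Four-step formula: I4 - I2 ~ sin(phi), I1 - I3 ~ cos(phi)
phi_wrapped = np.arctan2(I4 - I2, I1 - I3)

# The result is wrapped to (-pi, pi]; compare modulo 2*pi against the truth.
err = np.angle(np.exp(1j * (phi_wrapped - phi_true)))
print(f"max retrieval error: {np.abs(err).max():.2e} rad")
```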
Besides experimentally demonstrating the high sensitivity (limited only by shot noise) of the proposed techniques, a novel sensitivity evaluation framework is also introduced for the first time in QPI. This framework examines the Cramér-Rao bound (CRB), algorithmic sensitivity, and experimental sensitivity, and facilitates the diagnosis of algorithm efficiency and system efficiency. The framework can be applied not only to the WSI techniques we propose, but also to a broad range of QPI techniques. Several popular phase shifting interferometry techniques, as well as off-axis interferometry, are studied. The comparisons between them are shown to provide insights into algorithm optimization and the energy efficiency of sensitivity. / Doctor of Philosophy / The most common imaging systems nowadays capture the image of an object with the irradiance perceived by the camera. Based on the intensity contrast, morphological features such as edges, humps, and grooves can be inferred to qualitatively characterize the object. Nevertheless, in scientific measurements and research applications, a quantitative characterization of the object is desired. Quantitative phase imaging (QPI) is such a category of imaging techniques that can retrieve the phase information of the sample by properly designing the irradiance-capturing scheme and post-processing the data, converting them to quantitative metrics such as surface height, material density, and so on. The imaging process of QPI neither harms the sample nor leaves exogenous residuals. As a result, it has gained rapidly growing attention in biomedical imaging. Over the past two decades, QPI has seen a broad spectrum of evolving implementations, but only a few have seen successful commercialization. The challenges are manifold, but one stands out: expensive optical setups that are often incompatible with existing commercial microscope platforms. The setups are also complicated enough that, without professionals with a solid optics background, it is difficult to operate the system for imaging applications. Another limiting factor is the insufficient understanding of sensitivity. In QPI, sensitivity characterizes the system repeatability and determines its quantification resolution. With more emerging applications in cell imaging, the requirement for sensitivity also becomes more stringent.
In this work, a category of highly sensitive full-field QPI techniques based on wavelength shifting interferometry (WSI) is proposed. WSI images the full field of the sample simultaneously, unlike techniques that require scanning a probe point across the sample. It also preserves the integrity of the interferometer, which is the key structure enabling highly sensitive measurement in QPI methods. Therefore, the techniques proposed here have the potential to be readily integrated into ubiquitous lab microscopes and equip them with quantitative imaging functionality. Two WSI implementations are proposed: swept source digital holographic phase microscopy (SS-DHPM) and low coherence wavelength shifting interferometry (LC-WSI). SS-DHPM brings an additional capability to perform spectroscopy, whilst LC-WSI achieves a faster imaging rate, which has been demonstrated with live sperm cell imaging. In an attempt to integrate WSI with existing commercial microscopes, we also discuss the possibility of demodulation for low-cost sources and a common path implementation.
Besides experimentally demonstrating the high sensitivity of the proposed techniques, a novel sensitivity evaluation framework is also introduced for the first time in QPI. This framework not only examines the realistic sensitivity obtained in experiments, but also compares it to the theoretical values. The framework can be applied widely to a broad range of QPI techniques, providing insights into algorithm optimization and the energy efficiency of sensitivity.
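For context on the shot-noise limit invoked above, the standard Cramér-Rao calculation for phase-shifting under Poisson statistics runs as follows; this is the generic textbook bound, not the specific framework developed in the thesis:

```latex
% CRB for the phase under shot noise: counts N_k are Poisson with mean
% I_k(\phi) = I_0 \left[ 1 + \gamma \cos(\phi + \delta_k) \right].
% The Fisher information for Poisson data gives
\begin{equation*}
\mathcal{I}(\phi) = \sum_k \frac{1}{I_k(\phi)}
\left( \frac{\partial I_k}{\partial \phi} \right)^2
= \sum_k \frac{I_0^2 \gamma^2 \sin^2(\phi + \delta_k)}{I_k(\phi)},
\qquad
\operatorname{Var}(\hat{\phi}) \ge \mathcal{I}(\phi)^{-1}.
\end{equation*}
% For unit visibility and total detected photon number N this reduces to the
% familiar shot-noise scaling \sigma_\phi \sim 1/\sqrt{N}, up to an
% algorithm-dependent factor of order one.
```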
83.
Computational Tools for Chemical Data Assimilation with CMAQ. Gou, Tianyi. 15 February 2010.
The Community Multiscale Air Quality (CMAQ) system is the Environmental Protection Agency's main modeling tool for atmospheric pollution studies. CMAQ-ADJ, the adjoint model of CMAQ, offers new analysis capabilities such as receptor-oriented sensitivity analysis and chemical data assimilation.
This thesis presents the construction, validation, and properties of new adjoint modules in CMAQ, and illustrates their use in sensitivity analyses and data assimilation experiments. The new discrete adjoint of advection module is implemented with the aid of the automatic differentiation tool TAMC and is fully validated by comparing the adjoint sensitivities with finite difference values. In addition, adjoint sensitivities with respect to boundary conditions and boundary condition scaling factors are developed and validated in CMAQ.
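The validation strategy described here, comparing adjoint sensitivities against finite differences, can be sketched on a toy problem; the snippet below uses a simple upwind discretization of 1D advection, not CMAQ's piecewise parabolic method, and all symbols are illustrative:

```python
import numpy as np

# Toy discrete-adjoint check: J(u0) = 0.5 * ||u_N - obs||^2 for 1D upwind advection.
n, steps, cfl = 50, 40, 0.5
M = (1 - cfl) * np.eye(n) + cfl * np.eye(n, k=-1)   # upwind step matrix
M[0, -1] = cfl                                       # periodic wrap-around

rng = np.random.default_rng(0)
u0 = rng.standard_normal(n)
obs = rng.standard_normal(n)

def forward(u):
    for _ in range(steps):
        u = M @ u
    return u

residual = forward(u0) - obs
J = 0.5 * residual @ residual

# Discrete adjoint of the linear model: grad J = (M^T)^steps @ residual
lam = residual.copy()
for _ in range(steps):
    lam = M.T @ lam

# Finite-difference check along a few random directions
eps = 1e-6
for _ in range(3):
    d = rng.standard_normal(n)
    r = forward(u0 + eps * d) - obs
    fd = (0.5 * r @ r - J) / eps
    print(f"adjoint: {lam @ d: .8f}   finite diff: {fd: .8f}")
```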
To investigate numerically the impact of the continuous and discrete advection adjoints on data assimilation, various four-dimensional variational (4D-Var) data assimilation experiments are carried out with the 1D advection PDE, and with CMAQ advection using synthetic and real observation data. The results show that the optimization procedure gives better estimates of the reference initial condition and converges faster when using gradients computed by the continuous adjoint approach. This counter-intuitive result is explained by the nonlinearity properties of the piecewise parabolic method (the numerical discretization of advection in CMAQ).
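For reference, the strong-constraint 4D-Var objective such experiments minimize has the standard form below; the notation is the usual one from the data assimilation literature, not quoted from the thesis:

```latex
% Strong-constraint 4D-Var: estimate the initial condition x_0 given a
% background x_b and observations y_k over the assimilation window.
\begin{equation*}
J(x_0) = \tfrac{1}{2} (x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
 + \tfrac{1}{2} \sum_{k=0}^{N}
   \left( H_k \, \mathcal{M}_{0 \to k}(x_0) - y_k \right)^{\mathsf T}
   R_k^{-1}
   \left( H_k \, \mathcal{M}_{0 \to k}(x_0) - y_k \right)
\end{equation*}
% B and R_k are background- and observation-error covariances; the gradient
% of J is accumulated by the (continuous or discrete) adjoint of the model
% \mathcal{M} -- exactly the two variants compared in the text.
```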
Data assimilation experiments are carried out using real observation data. The simulation domain encompasses Texas and the simulation period is August 30 to September 1, 2006. Data assimilation is used to improve both initial and boundary conditions. These experiments further validate the tools developed in this thesis. / Master of Science
84.
Uncertainty quantification in dynamical models. An application to cocaine consumption in Spain. Rubio Monzó, María. 13 October 2015.
The present Ph.D. thesis considers epidemiological mathematical models based on ordinary differential equations and shows their application to understanding the cocaine consumption epidemic in Spain. Three mathematical models are presented to predict the evolution of the epidemic in the near future, in order to select the model that best reflects the data. According to the results obtained for the selected model, if there are no changes in cocaine consumption policies or in the economic environment, cocaine consumption will increase in Spain over the next few years. Furthermore, we use different techniques to estimate 95% confidence intervals and, consequently, quantify the uncertainty in the predictions. In addition, using several techniques, we conducted a model sensitivity analysis to determine which parameters most influence cocaine consumption in Spain. These analyses reveal that prevention actions targeting the cocaine-consuming population may be the most effective strategy to control this trend. / Rubio Monzó, M. (2015). Uncertainty quantification in dynamical models. An application to cocaine consumption in Spain [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/55844
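A hedged sketch of the kind of ODE-based epidemiological model this work builds on is given below; the compartments, rate names, and values are invented for illustration and are not the fitted Spanish model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative "consumption epidemic" compartments: N(on-consumers),
# C(onsumers), R(ecovered). Rates are assumptions for demonstration only.
beta, gamma, rho = 0.30, 0.05, 0.02   # contagion, cessation, relapse (1/yr)

def rhs(t, y):
    n, c, r = y
    pop = n + c + r
    return [-beta * n * c / pop,
            beta * n * c / pop - gamma * c + rho * r,
            gamma * c - rho * r]

sol = solve_ivp(rhs, (0.0, 10.0), [0.95, 0.05, 0.0], dense_output=True)
print("consumer fraction after 10 years:", sol.y[1, -1])
```

In this setting, confidence intervals come from re-integrating the model over samples of (beta, gamma, rho), and the sensitivity analysis asks which of these rates moves the consumer trajectory most.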
85.
Flexible design and operation of multi-stage reverse osmosis desalination process for producing different grades of water with maintenance and cleaning opportunity. Al-Obaidi, Mudhar A.A.R., Rasn, K.H., Aladhwani, S.H., Kadhom, M., Mujtaba, Iqbal. 20 April 2022.
The use of the Reverse Osmosis (RO) process in seawater desalination to provide high-quality drinking water has progressively increased compared to thermal technologies. In this paper, a multistage spiral-wound RO desalination process is considered. Each stage consists of several pressure vessels (PVs) organised in parallel, with the membrane modules in each PV organised in series. This allows a set of PVs and membrane modules to be disconnected as required for cleaning and maintenance. While this flexibility offers the opportunity to generate several RO configurations, we present only four such configurations of the RO system and analyse them via simulation and optimisation. Production of different grades of water, catering to different needs of a city, is also considered for each of these configurations. The optimisation yields the optimal operating conditions, which maximise water productivity and minimise the specific energy consumption of the proposed configurations for a given water grade in terms of salinity. For instance, the results indicate that the proposed RO networks can produce drinking water of 500 ppm salinity with a minimum specific energy consumption of 3.755 kWh/m3. The strategy offers the production of different grades of water without plant shutdown while maintaining the membrane modules throughout the year.
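To make the reported specific energy consumption figure concrete, a back-of-the-envelope SEC estimate for an RO stage is sketched below; the pressure, recovery, and pump efficiency are illustrative assumptions, not the paper's operating point:

```python
# Back-of-the-envelope RO specific energy consumption (illustrative numbers).
dP_bar = 55.0        # pressure rise across the high-pressure pump [bar]
recovery = 0.45      # permeate flow / feed flow
eta_pump = 0.80      # pump efficiency

# 1 bar * 1 m^3 = 1e5 J = 1/36 kWh; divide by recovery to charge the
# whole feed's pumping energy to each m^3 of permeate.
sec = dP_bar * (1e5 / 3.6e6) / (recovery * eta_pump)   # kWh per m^3 permeate
print(f"SEC ~ {sec:.2f} kWh/m3 (no energy recovery device)")
```

This crude estimate lands above 4 kWh/m3; recovering pressure from the brine stream is what brings optimised plants down toward values like the 3.755 kWh/m3 reported above.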
86.
Numerical modelling and sensitivity analysis of natural draft cooling towers. Dhorat, A., Al-Obaidi, Mudhar A.A.R., Mujtaba, Iqbal. 12 April 2018.
Cooling towers are a relatively inexpensive and consistent method of ejecting heat in several industries, such as thermal power plants, refineries, and food processing. In this research, an earlier model from the literature was validated across three different case studies. Unlike previous models, this model treats the height of the fill as the discretised domain, which yields results in distributed form along the height of the tower. Because the software used (gPROMS) cannot solve differential equations with independent variables appearing in both the numerator and denominator, a derivative of the saturation vapour pressure with respect to the air temperature was presented. The results agreed with the literature, and a parametric sensitivity analysis of the cooling tower design and operating parameters was undertaken, covering the height of the fill and the mass flowrates of water and air. The results showed large variations in the outlet temperatures of the water and air when the mass flows of water and air were significantly reduced, whereas high values of either variable yielded only small gains in heat rejection from the water stream. With respect to the height of the fill, larger fill heights significantly reduced the outlet water temperature. From a cost perspective, a change in the water flowrate incurred the largest cost penalty: a 1% increase in water flowrate increased the average operating cost by 1.2%, compared with an average 0.4% increase for a 1% increase in air flowrate.
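The saturation-pressure derivative mentioned above can be written explicitly if, for instance, an Antoine-type correlation is assumed; the paper's exact correlation is not quoted here, and the constants below are the commonly tabulated Antoine values for water between roughly 1 and 100 °C:

```latex
% Antoine correlation and its temperature derivative (illustrative form):
\begin{align*}
\log_{10} p_{\mathrm{sat}} &= A - \frac{B}{C + T}
\quad\Rightarrow\quad
\frac{d p_{\mathrm{sat}}}{d T}
= p_{\mathrm{sat}} \, \ln(10) \, \frac{B}{(C + T)^2} \\
&\text{e.g. for water: } A = 8.07131,\; B = 1730.63,\; C = 233.426
\quad (T \text{ in } {}^\circ\mathrm{C},\; p \text{ in mmHg})
\end{align*}
```

Supplying the derivative in this closed form sidesteps the gPROMS limitation noted in the abstract, since the solver never has to differentiate the correlation itself.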
87.
Performance analysis of hybrid system of multi effect distillation and reverse osmosis for seawater desalination via modeling and simulation. Filippini, G., Al-Obaidi, Mudhar A.A.R., Manenti, F., Mujtaba, Iqbal. 01 October 2018.
The coupling of thermal (Multi Stage Flash, MSF) and membrane (Reverse Osmosis, RO) processes in desalination systems has been widely presented in the literature as a way to improve performance over an individual process. However, very little study has been made of the combination of Multi Effect Distillation (MED) and Reverse Osmosis (RO). Therefore, this research investigates several design options for MED with thermal vapor compression (MED_TVC) coupled with an RO system. To achieve this aim, detailed mathematical models for the two processes are developed and independently validated against the literature. The integrated model is then used to investigate the performance of several configurations of the MED_TVC and RO processes in the hybrid system. The performance indicators include fresh water productivity, energy consumption, fresh water purity, and recovery ratio. A sensitivity analysis for each configuration is conducted with respect to seawater conditions and steam supply variation. Most importantly, placing the RO membrane process upstream in the hybrid system yields the overall best configuration in terms of the quantity and quality of fresh water produced. This is attributed to achieving the best recovery ratio and lower energy consumption over a wide range of seawater salinity.
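A minimal mass-balance sketch of the RO-upstream arrangement favoured here is given below; the flows, salinities, and recoveries are invented for illustration, and the paper's models are far more detailed:

```python
# Toy mass balance for an RO-first hybrid (illustrative values only).
q_feed, s_feed = 1000.0, 40000.0        # feed: m3/h, ppm seawater

# RO stage: assumed recovery and salt rejection.
r_ro, rej_ro = 0.40, 0.99
q_ro = q_feed * r_ro                    # RO permeate flow
s_ro = s_feed * (1 - rej_ro)            # RO permeate salinity
q_brine = q_feed - q_ro
s_brine = (q_feed * s_feed - q_ro * s_ro) / q_brine

# MED_TVC stage on the RO brine: distillate assumed essentially salt-free.
r_med = 0.30
q_med, s_med = q_brine * r_med, 10.0

q_prod = q_ro + q_med
s_prod = (q_ro * s_ro + q_med * s_med) / q_prod
print(f"total product: {q_prod:.0f} m3/h at {s_prod:.0f} ppm, "
      f"overall recovery {q_prod / q_feed:.2f}")
```

Even this crude balance shows why the RO-first layout is attractive: the thermal stage works on a smaller, pre-concentrated brine stream while the blended product salinity stays low.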
88.
Reliability-Based Topology Optimization with Analytic Sensitivities. Clark, Patrick Ryan. 03 August 2017.
It is a common practice when designing a system to apply safety factors to the critical failure load or event. These safety factors provide a buffer against failure due to random or un-modeled behavior, which may lead the system to exceed these limits. However, these safety factors are not directly related to the likelihood of a failure event occurring. If the safety factors are poorly chosen, the system may fail unexpectedly or it may have a design which is too conservative. Reliability-Based Design Optimization (RBDO) is an alternative approach which directly considers the likelihood of failure by incorporating a reliability analysis step such as the First-Order Reliability Method (FORM). The FORM analysis itself requires the solution of an optimization problem, however, so implementing this approach in an RBDO routine creates a double-loop optimization structure. For large problems such as Reliability-Based Topology Optimization (RBTO), numeric sensitivity analysis becomes computationally intractable. In this thesis, a general approach to the sensitivity analysis of nested functions is developed from the Lagrange Multiplier Theorem and then applied to several Reliability-Based Design Optimization problems, including topology optimization. The proposed approach is computationally efficient, requiring only a single solution of the FORM problem each iteration. / Master of Science
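A compact sketch of the inner FORM loop (the classic HL-RF iteration in standard normal space) is shown below; the limit state is a toy linear function, not a topology optimization constraint, so this only illustrates the double-loop structure discussed above:

```python
import numpy as np
from scipy.stats import norm

# HL-RF iteration for FORM in standard normal space (toy limit state).
def g(u):                       # failure when g(u) <= 0
    return 5.0 - u[0] - 2.0 * u[1]

def grad_g(u):
    return np.array([-1.0, -2.0])

u = np.zeros(2)
for _ in range(50):
    gv, gr = g(u), grad_g(u)
    u_new = ((gr @ u - gv) / (gr @ gr)) * gr   # HL-RF update
    if np.linalg.norm(u_new - u) < 1e-10:
        u = u_new
        break
    u = u_new

beta = np.linalg.norm(u)        # reliability index = distance to the limit state
print(f"beta = {beta:.4f}, Pf ~ {norm.cdf(-beta):.3e}")
```

Each RBDO design iteration repeats a search like this inside the outer design loop, which is exactly why analytic (rather than numeric) sensitivities of the nested FORM solution matter at topology-optimization scale.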
89.
Stochastic Computer Model Calibration and Uncertainty Quantification. Fadikar, Arindam. 24 July 2019.
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn based on observations from the corresponding computer simulation model. These computer models are calibrated against limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated execution yields different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup.
A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, it does not suffice to emulate only the mean and/or variance. We present two different approaches that address non-Gaussian emulator behavior: (1) incorporating quantile regression in GP for multivariate output, and (2) approximating the output with a finite mixture of Gaussians. These emulators are also used to calibrate and make forward predictions in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa.
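As a hedged sketch of the baseline being extended here, a standard GP emulator of a stochastic simulator's mean response can be set up as below, with a white-noise kernel absorbing the replication variance; the quantile-regression and Gaussian-mixture extensions of the dissertation are not shown, and the stand-in simulator is invented:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Stand-in stochastic simulator: smooth trend plus replication noise.
rng = np.random.default_rng(1)
def simulator(x):
    return np.sin(3 * x) + 0.3 * rng.standard_normal(x.shape)

X = rng.uniform(0, 2, size=(40, 1))
y = simulator(X[:, 0])

# WhiteKernel soaks up the stochastic replication variance; RBF models the trend.
kernel = 1.0 * RBF(length_scale=0.5) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

x_new = np.linspace(0, 2, 5).reshape(-1, 1)
mean, sd = gp.predict(x_new, return_std=True)
for xv, m, s in zip(x_new[:, 0], mean, sd):
    print(f"x={xv:.2f}  mean={m: .3f}  sd={s:.3f}")
```

The limitation motivating the dissertation is visible in this design: the emulator summarizes each input only by a Gaussian mean and variance, which is exactly what fails when the simulator's output distribution is skewed or multimodal.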
The third approach employs a sequential scheme which periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation. / Doctor of Philosophy / Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific only to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process is known as an inverse problem, i.e. inferring the input from the output. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model which simulates contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the Ebola epidemic in West Africa in 2014 and seasonal flu in New York City in the USA.
90.
Evaluation of a Water Budget Model for Created Wetland Design and Comparative Natural Wetland Hydroperiods. Sneesby, Ethan Paul. 04 April 2019.
Wetland impacts in the Mid-Atlantic USA are frequently mitigated via wetland creation in former uplands. Regulatory approval requires a site-specific water budget that predicts the annual water level regime (hydroperiod). However, many studies of created wetlands indicate that post-construction hydroperiods frequently are not similar to those of the impacted wetland systems. My primary objective was to evaluate a water budget model, Wetbud (Basic model), through comparison of model output to on-site water level data for two created forested wetlands in Northern Virginia. Initial sensitivity analyses indicated that watershed curve number and outlet height had the most leverage on model output. Adding the maximum depth of water level drawdown greatly improved model accuracy. I used Nash-Sutcliffe efficiency (NSE) and root mean squared error (RMSE) to evaluate the goodness of fit of model output against site monitoring data. The Basic model reproduced the overall seasonal hydroperiod well once fully parameterized, despite NSE values ranging from -0.67 to 0.41 in calibration and from -4.82 to -0.26 during validation. RMSE ranged from 5.9 cm to 12.7 cm during calibration and from 8.2 cm to 18.5 cm during validation. My second objective was to select a group of "design target hydroperiods" for common Mid-Atlantic USA wetland types. From > 90 sites evaluated, I chose four mineral flats, three riverine wetlands, and one depressional wetland that met all selection criteria. Taken together, improved wetland water budget modeling procedures (like Wetbud) combined with the use of appropriate target hydroperiod information should improve the success of wetland creation efforts. / Master of Science / Wetlands in the USA are defined by the combined occurrence of wetland hydrology, hydric soils, and hydrophytic vegetation. Wetlands serve to retain floodwater, sediments, and nutrients within their landscape. They may serve as a source of local groundwater recharge and are home to many endangered species of plants and animals. Wetland ecosystems are frequently impacted by human activities, including road-building and development. These impacts can range from the destruction of a wetland to increased nutrient contributions from storm- or wastewater. One commonly utilized option to mitigate wetland impacts is wetland creation in former upland areas. Regulatory approval requires a site-specific water budget that predicts the average monthly water levels (hydroperiod). A hydroperiod is simply a depiction of how the elevation of water changes over time. However, many studies of created wetlands indicate that post-construction hydroperiods frequently are not representative of the impacted wetland systems. Many software packages, called models, seek to predict the hydroperiod for different wetland systems. Improving and vetting these models helps to improve our understanding of how these systems function. My primary objective was to evaluate a water budget model, Wetbud (Basic model), through comparison of model output to on-site water level data for two created forested wetlands in Northern Virginia. Initial analyses indicated that watershed curve number (CN) and outlet height had the most influence on model output. Adding a maximum depth of water level drawdown below the ground surface greatly improved model accuracy. I used statistical analyses to compare model output to site monitoring data.
The Basic model reproduced the overall seasonal hydroperiod well once inputs were set to optimum values (calibration). Statistical results for the calibration varied between excellent and acceptable for our selected measure of accuracy, the root mean squared error. My second objective was to select a grouping of “design target hydroperiods” for common Mid-Atlantic USA wetland types. From > 90 sites evaluated, I chose four mineral flats, three riverine wetlands, and one depressional wetland that met all selection criteria. Taken together, improved wetland water budget modeling procedures (like Wetbud) combined with the use of appropriate target hydroperiod information should improve the success of wetland creation efforts.
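The two goodness-of-fit measures used throughout this evaluation are simple to state; a sketch with synthetic water levels is below (the data are invented, and interpreting NSE above 0 as "better than predicting the mean" follows common hydrology practice rather than being quoted from the thesis):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect; <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean squared error, in the units of the data (here cm)."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Synthetic weekly water levels (cm relative to the ground surface).
obs = np.array([-5., -8., -12., -20., -28., -25., -15., -6.])
sim = np.array([-4., -10., -14., -18., -30., -22., -13., -8.])
print(f"NSE = {nse(obs, sim):.2f}, RMSE = {rmse(obs, sim):.1f} cm")
```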