101

Permanent Coexistence for Omnivory Models

Vance, James Aaron 06 September 2006 (has links)
One of the basic questions of concern in mathematical biology is the long-term survival of each species in a set of populations. This question is particularly puzzling for natural systems with omnivory because simple mathematical models of omnivory are prone to species extinction. Omnivory is defined as the consumption of resources from more than one trophic level. In this work, we investigate three omnivory models of increasing complexity. We use the notion of permanent coexistence, or permanence, to study the long-term survival of three interacting species governed by a mixture of competition and predation. We show the permanence of our models under certain parameter restrictions and include the biological interpretations of these restrictions. Sensitivity analysis is used to obtain important information about meaningful parameter data collection. Examples are also given that demonstrate the ubiquity of omnivory in natural systems. / Ph. D.
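As a rough illustration of the kind of system studied in this dissertation (not its actual equations), a Lotka-Volterra-style three-species omnivory model can be integrated numerically and inspected for persistence. All parameter values below are hypothetical:

```python
def omnivory_rhs(state, r=1.0, K=1.0, a_rc=0.5, a_ro=0.3, a_co=0.4,
                 e=0.5, d_c=0.2, d_p=0.1):
    """Lotka-Volterra-style omnivory: the omnivore P consumes both the
    resource R and the intermediate consumer C (two trophic levels)."""
    R, C, P = state
    dR = r * R * (1 - R / K) - a_rc * R * C - a_ro * R * P
    dC = e * a_rc * R * C - a_co * C * P - d_c * C
    dP = e * (a_ro * R + a_co * C) * P - d_p * P
    return (dR, dC, dP)

def integrate(state, dt=0.01, steps=1000):
    """Forward Euler; adequate for a short illustrative run."""
    traj = [state]
    for _ in range(steps):
        d = omnivory_rhs(traj[-1])
        traj.append(tuple(x + dt * dx for x, dx in zip(traj[-1], d)))
    return traj

traj = integrate((0.5, 0.3, 0.2))
# Crude persistence check: no species hits zero over the run
print(all(min(s) > 0 for s in traj))
```

A permanence proof asserts much more than this numerical check: trajectories starting in the positive orthant are eventually bounded away from extinction uniformly, not just for one initial condition.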
102

Sensitivity Analysis of Partial Differential Equations With Applications to Fluid Flow

Singler, John 07 July 2005 (has links)
For over 100 years, researchers have attempted to predict transition to turbulence in fluid flows by analyzing the spectrum of the linearized Navier-Stokes equations. However, for many simple flows, this approach has failed to match experimental results. Recently, new scenarios for transition have been proposed that are based on the non-normality of the linearized operator. These new "mostly linear" theories have increased our understanding of the transition process, but the role of nonlinearity has not been explored. The main goal of this work is to begin to study the role of nonlinearity in transition. We use model problems to illustrate that small unmodeled disturbances can cause transition through movement or bifurcation of equilibria. We also demonstrate that small wall roughness can lead to transition by causing the linearized system to become unstable. Sensitivity methods are used to obtain important information about the disturbed problem and to illustrate that it is possible to have a precursor to predict transition. Finally, we apply linear feedback control to the model problems to illustrate the power of feedback to delay transition and even relaminarize fully developed chaotic flows. / Ph. D.
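The role of non-normality in these "mostly linear" transition scenarios can be illustrated with a toy example (not taken from this work): a two-dimensional linear system with stable eigenvalues whose non-normal coupling M nonetheless produces large transient energy growth before the eventual decay:

```python
import math

# x' = A x with A = [[-1, M], [0, -2]]: eigenvalues -1 and -2 (linearly
# stable), but the off-diagonal coupling M makes A strongly non-normal.
# Closed-form solution for the initial condition x(0) = (0, 1):
M = 100.0

def x1(t):
    return M * (math.exp(-t) - math.exp(-2 * t))

def x2(t):
    return math.exp(-2 * t)

def norm(t):
    return math.hypot(x1(t), x2(t))

peak = max(norm(0.01 * k) for k in range(1000))
print(norm(0.0), peak, norm(10.0))  # unit start, large transient, decay
```

The transient peak (about M/4 here) is exactly the kind of linear amplification that can lift a small disturbance into the regime where nonlinearity takes over, which is the question this dissertation pursues.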
103

Modeling and Analysis for Optimization of Unsteady Aeroelastic Systems

Ghommem, Mehdi 06 December 2011 (has links)
Simulating the complex physics and dynamics associated with unsteady aeroelastic systems is often attempted with high-fidelity numerical models. While these high-fidelity approaches are powerful in terms of capturing the main physical features, they may not discern the role of underlying phenomena that are interrelated in a complex manner. This often makes it difficult to characterize the relevant causal mechanisms of the observed features. Moreover, the extensive computational resources and time associated with the use of these tools can limit the capability of assessing different configurations for design purposes. These shortcomings motivate the development of simplified and reduced-order models that embody the relevant physical aspects and elucidate the underlying phenomena. In this work, different fluid and aeroelastic systems are considered, and reduced-order models governing their behavior are developed. In the first part of the dissertation, a methodology based on the method of multiple scales is implemented to show its usefulness and effectiveness in characterizing the physics underlying the system, implementing control strategies, and identifying high-impact system parameters. In the second part, the unsteady aerodynamic aspects of flapping micro air vehicles (MAVs) are modeled. This modeling is required for evaluating the performance requirements associated with flapping flight. The extensive computational resources and time associated with high-fidelity simulations limit the ability to perform optimization and sensitivity analyses in the early stages of MAV design. To overcome this and enable rapid and reasonably accurate exploration of a large design space, a medium-fidelity aerodynamic tool (the unsteady vortex lattice method) is implemented to simulate flapping-wing flight. 
This model is then combined with uncertainty quantification and optimization tools to test and analyze the performance of flapping-wing MAVs under varying conditions. This analysis can be used to provide guidance and a baseline for assessing MAV performance in the early stages of decision making on flapping kinematics, flight mechanics, and control strategies. / Ph. D.
104

Multi-physics and Multilevel Fidelity Modeling and Analysis of Olympic Rowing Boat Dynamics

Mola, Andrea 27 July 2010 (has links)
A multidisciplinary approach for modeling and analyzing the performance of Olympic rowing boats is presented. The goal is to establish methodologies and tools to determine the effects of variations in applied forces and in the rowers' motions and weights on the mean surge speed and oscillatory motions of the boat. The coupling of the rowers' motions with the hull and water forces is modeled with a system of equations. The water forces are computed using several fluid dynamic models that have different levels of accuracy and computational cost. These models include a solution of the Reynolds-Averaged Navier-Stokes (RANS) equations complemented by a Volume of Fluid method, a linearized 3D potential flow simulation, and a 2D potential flow simulation based on the strip theory approximation. The results show that, due to the elongated shape of the boat, the Sommerfeld truncation boundary condition does not yield the correct frequency dependence of the radiative coefficients. Thus, the radiative forces were not computed in the time-domain problem by means of a convolution integral accounting for flow memory effects, but were instead computed assuming constant damping and added-mass matrices. The results also show that accounting for memory effects significantly improves the agreement between the strip theory and the RANS predictions. Further improvements could be obtained by introducing corrections to account for longitudinal radiative forces, which are completely neglected in the strip theory. The coupled dynamical system and the multi-fidelity fluid models of the water forces were then used to perform a sensitivity analysis of boat motions to variations in the rowers' weights, exerted forces, and cadence of motion. The sensitivity analysis is based on the polynomial chaos expansion (PCE). The coefficients of each random basis in the polynomial chaos expansion are computed using a non-intrusive strategy. 
Sampling, quadrature, and linear regression methods have been used to obtain these coefficients from the outputs generated by the system at each sampling point. The results show that the linear regression method provides a very good approximation of the PCE coefficients. In addition, the number of samples needed for the expansion does not grow exponentially with the number of varying input parameters. For this reason, this method was selected for performing the sensitivity analysis. The sensitivity of the output parameters to variations in selected input parameters of the system is obtained by taking the derivatives of the expansion with respect to each input parameter. Three test cases are considered: a lightweight female single scull, a male quad scull, and a male coxless four. For all of these cases, results are presented that relate variations in rowers' weights, amplitudes of exerted forces, and cadence of rowing to mean boat speed and energy ratio, defined as the ratio of the kinetic energy of the forward motion to that of the oscillatory motions. These results should be useful in the design of rowing boats as well as in the training of rowers. / Ph. D.
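The non-intrusive regression approach described above can be sketched on a toy response surface: sample the input, evaluate the model, fit an orthogonal-polynomial expansion by least squares, and differentiate the expansion for sensitivities. The model function and sample count below are illustrative stand-ins, not the dissertation's rowing simulator:

```python
import numpy as np
from numpy.polynomial import legendre as L

def model(xi):
    """Hypothetical smooth response (e.g. boat speed) of a normalized
    input xi in [-1, 1]; illustrative only."""
    return 4.5 + 0.3 * xi + 0.1 * xi ** 2

rng = np.random.default_rng(0)
xi = rng.uniform(-1, 1, 50)          # non-intrusive sampling of the input
y = model(xi)                        # model outputs at the sample points

coef = L.legfit(xi, y, deg=3)        # PCE coefficients via linear regression
dcoef = L.legder(coef)               # differentiate the expansion once
slope = L.legval(0.0, dcoef)         # sensitivity at the nominal input
print(slope)
```

Because the expansion is a polynomial surrogate, its derivative at the nominal point recovers the local sensitivity (here 0.3) without any extra model runs, which is exactly the appeal of the regression-based PCE route.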
105

Computational Tools for Chemical Data Assimilation with CMAQ

Gou, Tianyi 15 February 2010 (has links)
The Community Multiscale Air Quality (CMAQ) system is the Environmental Protection Agency's main modeling tool for atmospheric pollution studies. CMAQ-ADJ, the adjoint model of CMAQ, offers new analysis capabilities such as receptor-oriented sensitivity analysis and chemical data assimilation. This thesis presents the construction, validation, and properties of new adjoint modules in CMAQ, and illustrates their use in sensitivity analyses and data assimilation experiments. The new discrete adjoint module for advection is implemented with the aid of the automatic differentiation tool TAMC and is fully validated by comparing the adjoint sensitivities with finite difference values. In addition, adjoint sensitivities with respect to boundary conditions and boundary condition scaling factors are developed and validated in CMAQ. To investigate numerically the impact of the continuous and discrete advection adjoints on data assimilation, various four-dimensional variational (4D-Var) data assimilation experiments are carried out with the 1D advection PDE, and with CMAQ advection using synthetic and real observation data. The results show that the optimization procedure gives better estimates of the reference initial condition and converges faster when using gradients computed by the continuous adjoint approach. This counter-intuitive result is explained using the nonlinearity properties of the piecewise parabolic method (the numerical discretization of advection in CMAQ). Data assimilation experiments are carried out using real observation data. The simulation domain encompasses Texas and the simulation period is August 30 to September 1, 2006. Data assimilation is used to improve both initial and boundary conditions. These experiments further validate the tools developed in this thesis. / Master of Science
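The finite-difference validation of a discrete advection adjoint can be sketched on a 1D periodic upwind scheme (a deliberately simple stand-in for CMAQ's piecewise parabolic discretization). For a linear one-step update u <- A u, the discrete adjoint is just repeated application of A^T to the cost residual:

```python
import numpy as np

# Upwind discretization of u_t + c u_x = 0 on a periodic grid.
n, c, dx, dt, steps = 40, 1.0, 1.0 / 40, 0.5 / 40, 30
lam = c * dt / dx                                  # CFL number = 0.5
A = (1 - lam) * np.eye(n) + lam * np.roll(np.eye(n), 1, axis=0)

x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.exp(-100 * (x - 0.3) ** 2)                 # initial condition
obs = np.zeros(n)                                  # synthetic "observations"

def forward(u):
    for _ in range(steps):
        u = A @ u
    return u

def cost(u_init):
    d = forward(u_init) - obs
    return 0.5 * d @ d

def adjoint_grad(u_init):
    """Discrete adjoint: run the transposed scheme backwards from the
    cost residual; yields dJ/du_init for the linear forward model."""
    w = forward(u_init) - obs
    for _ in range(steps):
        w = A.T @ w
    return w

g = adjoint_grad(u0)
# Validate one gradient component against central finite differences
e = np.zeros(n); e[5] = 1e-6
fd = (cost(u0 + e) - cost(u0 - e)) / 2e-6
print(abs(fd - g[5]))
```

This mirrors the validation strategy in the thesis: every adjoint sensitivity should reproduce its finite-difference counterpart to within discretization and rounding error before being trusted in 4D-Var.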
106

Sensitivity Analysis and Forecasting in Network Epidemiology Models

Nsoesie, Elaine O. 05 June 2012 (has links)
In recent years, several methods have been proposed for real-time modeling and forecasting of the epidemic curve. These methods range from simple compartmental models to complex agent-based models. In this dissertation, we present a model-based reasoning approach to forecasting the epidemic curve and estimating underlying disease model parameters. The method is based on building an epidemic library consisting of past and simulated influenza outbreaks. During an influenza epidemic, we use a combination of statistical, optimization and modeling techniques to either assign the epidemic to one of the cases in the library or propose parameters for modeling the epidemic. The method is presented in four steps. First, we discuss a sensitivity analysis study evaluating how minute changes in the disease model parameters influence the dynamics of simulated epidemics. Next, we present a supervised classification method for predicting the epidemic curve. The epidemic curve is forecasted by matching the partial surveillance curve to at least one of the epidemics in the library. We expand on the classification approach by presenting a method which identifies epidemics similar or different from those in the library. Lastly, we discuss a simulation optimization method for estimating model parameters to forecast the epidemic curve of an epidemic distinct from those in the library. / Ph. D.
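The library-matching idea can be sketched with a toy compartmental library: simulate candidate epidemic curves, score each against the partial surveillance curve, and use the remainder of the best match as the forecast. The SIR-style curves and parameter values below are hypothetical stand-ins for the simulated influenza outbreaks described above:

```python
import math

def sir_curve(beta, gamma=0.5, n=20, i0=0.01):
    """Discrete-time SIR-style prevalence curve (toy simulator)."""
    s, i, out = 1 - i0, i0, []
    for _ in range(n):
        new = beta * s * i
        s, i = s - new, i + new - gamma * i
        out.append(i)
    return out

# Hypothetical epidemic library keyed by transmission parameter
library = {beta: sir_curve(beta) for beta in (0.8, 1.0, 1.2, 1.4)}

def match(partial):
    """Assign a partial surveillance curve to the closest library epidemic
    by least-squares distance over the observed prefix."""
    k = len(partial)
    dist = {b: math.fsum((c[j] - partial[j]) ** 2 for j in range(k))
            for b, c in library.items()}
    return min(dist, key=dist.get)

observed = sir_curve(1.2)[:6]      # first 6 weeks of surveillance data
best = match(observed)
forecast = library[best][6:]       # remainder of matched curve = forecast
print(best)
```

The dissertation's contribution goes beyond this nearest-curve idea: it also detects when the ongoing epidemic resembles nothing in the library and then estimates fresh model parameters by simulation optimization.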
107

Efficient Time Stepping Methods and Sensitivity Analysis for Large Scale Systems of Differential Equations

Zhang, Hong 09 September 2014 (has links)
Many fields in science and engineering require large-scale numerical simulations of complex systems described by differential equations. These systems are typically multi-physics (they are driven by multiple interacting physical processes) and multiscale (the dynamics takes place on vastly different spatial and temporal scales). Numerical solution of such systems is highly challenging due to the dimension of the resulting discrete problem, and to the complexity that comes from incorporating multiple interacting components with different characteristics. The main contributions of this dissertation are the creation of new families of time integration methods for multiscale and multiphysics simulations, and the development of industrial-strength tools for sensitivity analysis. This work develops novel implicit-explicit (IMEX) general linear time integration methods for multiphysics and multiscale simulations typically involving both stiff and non-stiff components. In an IMEX approach, one uses an implicit scheme for the stiff components and an explicit scheme for the non-stiff components, such that the combined method has the desired stability and accuracy properties. Practical schemes with favorable properties, such as maximized stability, high efficiency, and no order reduction, are constructed and applied in extensive numerical experiments to validate the theoretical findings and to demonstrate their advantages. The approximate matrix factorization (AMF) technique exploits the structure of the Jacobian of the implicit parts, which may lead to further efficiency improvements of IMEX schemes. We have explored the application of AMF within some high-order IMEX Runge-Kutta schemes in order to achieve high efficiency. Sensitivity analysis gives quantitative information about the changes in a dynamical model's outputs caused by small changes in the model inputs. 
This information is crucial for data assimilation, model-constrained optimization, inverse problems, and uncertainty quantification. We develop a high-performance software package for sensitivity analysis in the context of stiff and non-stiff ordinary differential equations. Efficiency is demonstrated by direct comparisons against existing state-of-the-art software on a variety of test problems. / Ph. D.
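A minimal IMEX sketch (implicit-explicit Euler, far simpler than the general linear methods developed in this work) shows why treating the stiff term implicitly pays off. The test equation is a standard stiff model problem, not one of the dissertation's applications:

```python
import math

# IMEX Euler for  y' = lam*(y - cos t) - sin t,  exact solution y(t) = cos t.
# The stiff linear term lam*(y - cos t) is treated implicitly; the slow
# forcing -sin t explicitly. Explicit Euler alone would need
# dt < 2/|lam| = 0.002 for stability; here dt is five times larger.
lam, dt, T = -1000.0, 0.01, 2.0
y, t = 1.0, 0.0
while t < T - 1e-12:
    # Solve  y_new = y + dt*(lam*(y_new - cos(t+dt)) - sin(t))  for y_new
    y = (y - dt * lam * math.cos(t + dt) - dt * math.sin(t)) / (1 - dt * lam)
    t += dt

err = abs(y - math.cos(T))
print(err)
```

The implicit treatment of only the stiff part keeps the cost of each step at a scalar (or, in systems, a linear) solve involving just the stiff Jacobian, which is the structure AMF then exploits further.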
108

Development, Calibration, and Validation of a Finite Element Model of the THOR Crash Test Dummy for Aerospace and Spaceflight Crash Safety Analysis

Putnam, Jacob Breece 17 September 2014 (has links)
Anthropometric test devices (ATDs), commonly referred to as crash test dummies, are tools used to conduct aerospace and spaceflight safety evaluations. Finite element (FE) analysis provides an effective complement to these evaluations. In this work, an FE model of the Test Device for Human Occupant Restraint (THOR) dummy was developed, calibrated, and validated for use in aerospace and spaceflight impact analysis. A previously developed THOR FE model was first evaluated under spinal loading. The FE model was then updated to reflect recent changes made to the THOR dummy. A novel calibration methodology was developed to improve both the kinematic and kinetic responses of the updated model in various THOR dummy certification tests. The updated THOR FE model was then calibrated and validated under spaceflight loading conditions and used to assess THOR dummy biofidelity. Results demonstrate that the FE model performs well under spinal loading and predicts injury criteria values close to those recorded in testing. Material parameter optimization of the updated model was shown to greatly improve its response. The validated THOR FE model indicated good dummy biofidelity relative to human volunteer data under spinal loading, but limited biofidelity under frontal loading. The calibration methodology developed in this work proved an effective tool for improving dummy model response. The performance of the dummy model developed in this study supports its use in future aerospace and spaceflight impact simulations. In addition, the biofidelity analysis suggests future improvements to the THOR dummy for spaceflight and aerospace analysis. / Master of Science
109

Stochastic Computer Model Calibration and Uncertainty Quantification

Fadikar, Arindam 24 July 2019 (has links)
This dissertation presents novel methodologies in the field of stochastic computer model calibration and uncertainty quantification. Simulation models are widely used in studying physical systems, which are often represented by a set of mathematical equations. Inference on the true physical system (unobserved or partially observed) is drawn from observations of the corresponding computer simulation model. These computer models are calibrated against limited ground-truth observations in order to produce realistic predictions and associated uncertainties. A stochastic computer model differs from a traditional computer model in that repeated execution of a stochastic simulation yields different outcomes. This additional uncertainty in the simulation model must be handled accordingly in any calibration setup. A Gaussian process (GP) emulator replaces the actual computer simulation when it is expensive to run and the budget is limited. However, a traditional GP interpolator models the mean and/or variance of the simulation output as a function of the input. For a simulation where the marginal Gaussianity assumption is not appropriate, emulating only the mean and/or variance does not suffice. We present two different approaches that address the non-Gaussian behavior of an emulator: (1) incorporating quantile regression in the GP for multivariate output, and (2) approximating the output distribution with a finite mixture of Gaussians. These emulators are also used for calibration and forward prediction in the context of an agent-based disease model of the 2014 Ebola epidemic outbreak in West Africa. The third approach employs a sequential scheme that periodically updates the uncertainty in the computer model input as data become available in an online fashion. Unlike the other two methods, which use an emulator in place of the actual simulation, the sequential approach relies on repeated runs of the actual, potentially expensive simulation. 
/ Doctor of Philosophy / Mathematical models are versatile and often provide accurate descriptions of physical events. Scientific models are used to study such events in order to gain understanding of the true underlying system. These models are often complex in nature and require advanced algorithms to solve their governing equations. Outputs from these models depend on external information (also called model input) supplied by the user. Model inputs may or may not have a physical meaning, and can sometimes be specific to the scientific model. More often than not, optimal values of these inputs are unknown and need to be estimated from a few actual observations. This process, inferring the input from the output, is known as an inverse problem. The inverse problem becomes challenging when the mathematical model is stochastic in nature, i.e., multiple executions of the model result in different outcomes. In this dissertation, three methodologies are proposed that address the calibration and prediction of a stochastic disease simulation model, which simulates contagion of an infectious disease through human-to-human contact. The motivating examples are taken from the 2014 Ebola epidemic in West Africa and seasonal flu in New York City, USA.
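A bare-bones GP emulator of a stochastic simulator's mean can be sketched in a few lines. The toy simulator, squared-exponential kernel, and nugget value below are illustrative assumptions, far simpler than the quantile-regression and mixture emulators developed in this dissertation:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulator(x):
    """Hypothetical stochastic simulator: noisy draws around a smooth mean."""
    return np.sin(2 * x) + 0.05 * rng.standard_normal(np.shape(x))

def kern(a, b, ell=0.5):
    """Squared-exponential covariance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

X = np.linspace(0.0, 3.0, 12)                           # design points
y = np.mean([simulator(X) for _ in range(20)], axis=0)  # replicated runs

K = kern(X, X) + 1e-4 * np.eye(len(X))   # small nugget absorbs noise in y
alpha = np.linalg.solve(K, y)

x_new = np.array([1.5])
pred = kern(x_new, X) @ alpha            # emulator prediction at a new input
print(pred[0], np.sin(3.0))
```

Averaging replicated runs and adding a nugget handles the simulation's stochasticity only at the level of the mean; the dissertation's point is precisely that this is insufficient when the output distribution is far from Gaussian.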
110

Reliability-Based Topology Optimization with Analytic Sensitivities

Clark, Patrick Ryan 03 August 2017 (has links)
It is a common practice when designing a system to apply safety factors to the critical failure load or event. These safety factors provide a buffer against failure due to random or un-modeled behavior, which may lead the system to exceed these limits. However, these safety factors are not directly related to the likelihood of a failure event occurring. If the safety factors are poorly chosen, the system may fail unexpectedly, or it may have a design that is too conservative. Reliability-Based Design Optimization (RBDO) is an alternative approach that directly considers the likelihood of failure by incorporating a reliability analysis step such as the First-Order Reliability Method (FORM). The FORM analysis itself requires the solution of an optimization problem, however, so implementing this approach in an RBDO routine creates a double-loop optimization structure. For large problems such as Reliability-Based Topology Optimization (RBTO), numeric sensitivity analysis becomes computationally intractable. In this thesis, a general approach to the sensitivity analysis of nested functions is developed from the Lagrange Multiplier Theorem and then applied to several Reliability-Based Design Optimization problems, including topology optimization. The proposed approach is computationally efficient, requiring only a single solution of the FORM problem per iteration. / Master of Science
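The inner FORM loop can be sketched with the standard Hasofer-Lind-Rackwitz-Fiessler (HL-RF) iteration on a hypothetical limit-state function (not one from this thesis): find the most probable failure point u* that minimizes ||u|| subject to g(u) = 0 in standard normal space, so that the reliability index is beta = ||u*||:

```python
import math

def g(u1, u2):
    """Hypothetical limit-state function in standard normal space;
    g <= 0 denotes failure. Illustrative only."""
    return 4.0 - u1 - u2 - 0.1 * u1 * u2

def grad_g(u1, u2):
    return (-1.0 - 0.1 * u2, -1.0 - 0.1 * u1)

# HL-RF iteration: linearize g at the current point and project onto
# the linearized constraint, keeping the point aligned with grad g.
u = (0.0, 0.0)
for _ in range(100):
    gu, (g1, g2) = g(*u), grad_g(*u)
    s = (g1 * u[0] + g2 * u[1] - gu) / (g1 ** 2 + g2 ** 2)
    u = (s * g1, s * g2)

beta = math.hypot(*u)                      # reliability index
pf = 0.5 * math.erfc(beta / math.sqrt(2))  # Phi(-beta): failure probability
print(beta, pf)
```

Because this inner minimization runs once per outer design iteration, an RBDO loop wraps an optimizer around an optimizer; the thesis's analytic sensitivities are what make that double loop affordable at topology-optimization scale.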
