91

New Methods of Variable Selection and Inference on High Dimensional Data

Ren, Sheng January 2017 (has links)
No description available.
92

Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design

Riley, Matthew E. 07 September 2011 (has links)
No description available.
93

Probabilistic Flood Forecast Using Bayesian Methods

Han, Shasha January 2019 (has links)
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, assessing and reducing the uncertainties associated with forecasts remains a challenging task. This thesis therefore investigates Bayesian methods for producing probabilistic flood forecasts that accurately quantify predictive uncertainty and enhance forecast performance and reliability. In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictive performance of HUP with different hydrologic models under different flow conditions was investigated. HUP was then extended into an ensemble prediction framework, constituting the Bayesian Ensemble Uncertainty Processor (BEUP), and BEUP with bias-corrected ensemble weather inputs was tested to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models. Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP improves the deterministic forecast from the hydrologic model and produces more accurate probabilistic forecasts. Under high-flow conditions, a better-performing hydrologic model yields better probabilistic forecasts after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less pronounced as lead time increases. The best results for short-range forecasts are obtained by applying both bias correction and BEUP. Results also show that bias correcting each ensemble member of the weather inputs generates better flood forecasts than bias correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type, and BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the top weather-related hazards and causes serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event could be accurately predicted in advance, there would be time to prepare well, reducing the flood's negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with a bias correction technique, to further improve forecast performance. It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
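The core of a hydrologic uncertainty processor is a Bayesian update of a prior (climatological) flow distribution with a statistical model of how the deterministic forecast relates to the actual flow. The sketch below shows only that general Normal-linear structure in a transformed space, with all numerical values as hypothetical placeholders; it is not the HUP/BEUP implementation from the thesis.

```python
# Minimal sketch of a Bayesian hydrologic uncertainty processor in the spirit of HUP:
# a Normal prior on the (transformed) actual flow is updated with a linear-Gaussian
# model of the deterministic forecast. All numbers are hypothetical placeholders.
import numpy as np

# Prior (climatology) for transformed flow h:  h ~ N(m, s^2)
m, s = 2.0, 0.8

# Likelihood: deterministic forecast f given h:  f | h ~ N(a*h + b, sigma^2)
# (a, b, sigma would be estimated from historical forecast-observation pairs)
a, b, sigma = 0.9, 0.1, 0.5

def hup_posterior(f):
    """Posterior N(mu, tau^2) of the transformed actual flow given forecast f."""
    prec = 1.0 / s**2 + a**2 / sigma**2                 # posterior precision
    mu = (m / s**2 + a * (f - b) / sigma**2) / prec
    return mu, np.sqrt(1.0 / prec)

mu, tau = hup_posterior(f=2.6)
lo, hi = mu - 1.96 * tau, mu + 1.96 * tau               # 95% predictive interval
print(f"posterior mean {mu:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```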
94

Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations

Cheng, Haiyan 03 August 2009 (has links)
Modeling and simulation of large-scale systems are used extensively not only to better understand natural phenomena, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions and generates one result without considering uncertainties. It is of great interest to include uncertainty information in the simulation. By "Uncertainty Quantification," we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations. This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost of the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III), which predicts pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact for environmental protection policy making. "Uncertainty Reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality. / Ph. D.
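Collocation least-squares polynomial chaos estimates the expansion coefficients by evaluating the model at sampled collocation points and solving an overdetermined linear system, rather than forming the Galerkin projection. The sketch below illustrates that idea for a single standard-normal uncertain parameter; the model function, order, and sample count are hypothetical and unrelated to the STEM-III application.

```python
# Minimal sketch of collocation least-squares polynomial chaos (PC): coefficients of a
# Hermite PC expansion in one standard-normal parameter xi are fit by least squares at
# sampled collocation points. The model g() is a stand-in, not a system from the thesis.
import numpy as np
from math import factorial

rng = np.random.default_rng(0)

def g(xi):
    # hypothetical scalar model response as a function of the uncertain input
    return np.exp(0.3 * xi) + 0.1 * xi**2

order = 4                                                # PC expansion order
xi = rng.standard_normal(200)                            # oversampled collocation points
Psi = np.polynomial.hermite_e.hermevander(xi, order)     # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(Psi, g(xi), rcond=None)

# Mean and variance follow directly from the coefficients, since E[He_k(xi)] = 0 for
# k >= 1 and E[He_k(xi)^2] = k! for the probabilists' Hermite polynomials.
mean = coeffs[0]
var = sum(coeffs[k]**2 * factorial(k) for k in range(1, order + 1))
print(f"PC mean ~ {mean:.3f}, PC variance ~ {var:.3f}")
```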
95

Exploring the Stochastic Performance of Metallic Microstructures With Multi-Scale Models

Senthilnathan, Arulmurugan 01 June 2023 (has links)
Titanium-7wt%-Aluminum (Ti-7Al) has been of interest to the aerospace industry owing to its good structural and thermal properties. However, extensive research is still needed to study the structural behavior and determine the material properties of Ti-7Al. The homogenized macro-scale material properties are directly related to the crystallographic structure at the micro-scale. Furthermore, microstructural uncertainties arising from experiments and computational methods propagate to the material properties used for designing aircraft components. Therefore, multi-scale modeling is employed to characterize the microstructural features of Ti-7Al and computationally predict the macro-scale material properties, such as Young's modulus and yield strength, using machine learning techniques. Investigating microstructural features across large domains through experiments requires rigorous and tedious sample preparation procedures that often lead to material waste. Therefore, computational microstructure reconstruction methods that predict the large-scale evolution of microstructural topology given small-scale experimental information are developed to minimize experimental cost and time. However, it is important to verify the synthetic microstructures against the experimental data by characterizing microstructural features such as grain size and grain shape. While the relationship between homogenized material properties and grain size is well studied through the Hall-Petch effect, the influence of grain shape, especially in complex additively manufactured microstructure topologies, is yet to be explored. Therefore, this work addresses the gap in the mathematical quantification of microstructural topology by developing measures for the computational characterization of microstructures. Moreover, the synthesized microstructures are modeled through crystal plasticity simulations to determine the material properties. However, such crystal plasticity simulations require significant computing time. In addition, the inherent uncertainty of the experimental data propagates to the material properties through the synthetic microstructure representations. Therefore, these problems are addressed in this work by explicitly quantifying the microstructural topology and predicting the material properties and their variations through the development of surrogate models. Next, this work extends the proposed multi-scale models of microstructure-property relationships to magnetic materials to investigate the ferromagnetic-paramagnetic phase transition. Here, the same Ising model-based multi-scale approach used for microstructure reconstruction is implemented to investigate the ferromagnetic-paramagnetic phase transition of magnetic materials. Previous research on the magnetic phase transition problem neglects the effects of long-range interactions between magnetic spins and external magnetic fields. Therefore, this study aims to build a multi-scale modeling environment that can quantify the large-scale interactions between magnetic spins and external fields. / Doctor of Philosophy / Titanium-Aluminum (Ti-Al) alloys are lightweight and temperature-resistant materials with a wide range of applications in aerospace systems. However, there is still a lack of thorough understanding of the microstructural behavior and mechanical performance of Titanium-7wt%-Aluminum (Ti-7Al), a candidate material for jet engine components.
This work investigates the multi-scale mechanical behavior of Ti-7Al by computationally characterizing the micro-scale material features, such as crystallographic texture and grain topology. The small-scale experimental data of Ti-7Al are used to predict the large-scale spatial evolution of the microstructures, while the texture and grain topology are modeled using shape moment invariants. Moreover, the effects of uncertainties, which may arise from measurement errors and algorithmic randomness, on the microstructural features are quantified through statistical parameters developed based on the shape moment invariants. A data-driven surrogate model is built to predict the homogenized mechanical properties and the associated uncertainty as a function of the microstructural texture and topology. Furthermore, the presented multi-scale modeling technique is applied to explore the ferromagnetic-paramagnetic phase transition of magnetic materials, which causes permanent failure of magneto-mechanical components used in aerospace systems. Accordingly, a computational solution is developed based on an Ising model that considers the long-range spin interactions in the presence of external magnetic fields.
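Shape moment invariants reduce a grain's binary mask to a few numbers that do not change under rotation or scaling, which makes grain shape comparable across synthetic and experimental microstructures. The sketch below computes only a second-order invariant (a principal-axis aspect ratio) for a synthetic elliptical grain; it illustrates the general idea, not the specific measures developed in the dissertation.

```python
# Minimal sketch of characterizing grain shape with second-order moment invariants:
# central moments of a binary grain mask give a 2x2 covariance whose eigenvalues yield
# a rotation- and scale-invariant aspect ratio. The elliptical grain is synthetic.
import numpy as np

def aspect_ratio_invariant(mask):
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()                   # grain centroid
    mu20 = ((xs - x0) ** 2).mean()                  # second-order central moments
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    cov = np.array([[mu20, mu11], [mu11, mu02]])
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]    # principal second moments
    return np.sqrt(lam[0] / lam[1])                 # invariant to rotation and scale

# Synthetic elongated "grain": an ellipse with semi-axes of 30 and 10 pixels
yy, xx = np.mgrid[-50:50, -50:50]
grain = (xx / 30.0) ** 2 + (yy / 10.0) ** 2 <= 1.0
print(f"aspect ratio ~ {aspect_ratio_invariant(grain):.2f}")   # ~3 for this ellipse
```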
96

Contributions to Efficient Statistical Modeling of Complex Data with Temporal Structures

Hu, Zhihao 03 March 2022 (has links)
This dissertation focuses on three research projects: neighborhood vector autoregression for multivariate time series, uncertainty quantification for agent-based models of networked group anagram games, and a scalable algorithm for multi-class classification. The first project studies the modeling of multivariate time series, with applications in the environmental sciences and other areas. In this work, a so-called neighborhood vector autoregression (NVAR) model is proposed to efficiently analyze large-dimensional multivariate time series. The time series are assumed to have underlying distances among them based on the inherent setting of the problem. When this distance matrix is available or can be obtained, the proposed NVAR method is demonstrated to provide a computationally efficient and theoretically sound estimation of model parameters. The performance of the proposed method is compared with existing approaches in both simulation studies and a real application to a stream nitrogen study. The second project focuses on the study of group anagram games. In a group anagram game, players are provided letters to form as many words as possible. In this work, enhanced agent behavior models for networked group anagram games are built, exercised, and evaluated under an uncertainty quantification framework. Specifically, the game data for players are clustered based on their skill levels (forming words, requesting letters, and replying to requests), multinomial logistic regressions for transition probabilities are performed, and the uncertainty is quantified within each cluster. The result of this process is a model where players are assigned different numbers of neighbors and different skill levels in the game. Simulations of ego agents with neighbors are conducted to demonstrate the efficacy of the proposed methods. The third project aims to develop efficient and scalable algorithms for multi-class classification that achieve a balance between prediction accuracy and computing efficiency, especially in high-dimensional settings. Traditional multinomial logistic regression becomes slow in high-dimensional settings where the number of classes (M) and the number of features (p) are large. Our algorithms are computationally efficient and scalable to data with even higher dimensions. The simulation and case study results demonstrate that our algorithms have a substantial advantage over traditional multinomial logistic regression while maintaining comparable prediction performance. / Doctor of Philosophy / In many data-centric applications, data often have complex structures involving temporal dependence and high dimensionality. Modeling of complex data with temporal structures has attracted great attention in many applications such as environmental sciences, network science, data mining, neuroscience, and economics. However, modeling such data is quite challenging due to their large uncertainty and dimensionality. This dissertation focuses on modeling and prediction of complex data with temporal structures. Three different types of complex data are modeled: the nitrogen levels of multiple streams are modeled jointly, human actions in networked group anagram games are modeled and their uncertainty is quantified, and data with multiple labels are classified. Different models are proposed and demonstrated to be efficient through simulation and case studies.
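The neighborhood restriction in NVAR can be pictured as a vector autoregression in which each series is regressed only on the lags of series that are close to it according to the given distance matrix, which keeps estimation tractable as the dimension grows. The sketch below shows that restricted VAR(1) fit with ordinary least squares on simulated data; the distance matrix, threshold, and data are hypothetical, and the dissertation's estimator may differ in its details.

```python
# Minimal sketch of a neighborhood-restricted VAR(1): each series is regressed only on
# the lags of series within a distance threshold, instead of on all series as in a full
# VAR. The simulated data, distance matrix, and threshold are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(1)
T, k = 300, 6                                   # time points, number of series
Y = rng.standard_normal((T, k)).cumsum(axis=0)  # placeholder multivariate series
D = np.abs(np.subtract.outer(np.arange(k), np.arange(k)))   # toy distance matrix
threshold = 1                                   # "neighbors" = distance <= threshold

A = np.zeros((k, k))                            # sparse transition matrix estimate
Ylag, Ynow = Y[:-1], Y[1:]
for i in range(k):
    nbrs = np.where(D[i] <= threshold)[0]       # allowed predictors for series i
    coef, *_ = np.linalg.lstsq(Ylag[:, nbrs], Ynow[:, i], rcond=None)
    A[i, nbrs] = coef

print("nonzero coefficients per row:", (A != 0).sum(axis=1))
```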
97

Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations

Wang, Jianxun 05 April 2017 (has links)
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although the increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulation and direct numerical simulation) of turbulent flows, models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both on-line and off-line data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows. First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse on-line measurement data and empirical prior knowledge for a full-field inversion. The merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random matrix theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. The maximum entropy principle is employed to identify the probability distribution that satisfies the given constraints without introducing artificial information. Objective prior perturbations of RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between the physics-based and random matrix theoretic approaches. Finally, a physics-informed machine learning framework towards predictive RANS turbulence modeling is proposed. The functional forms of the model discrepancies with respect to mean flow features are extracted from an off-line database of closely related flows using machine learning algorithms. The RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, which is an important step towards predictive turbulence modeling. / Ph. D.
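An iterative ensemble Kalman method for field inversion repeatedly nudges an ensemble of candidate discrepancy fields toward sparse observations through the forward (observation) operator. The sketch below shows a generic stochastic ensemble Kalman update on a toy linear problem; the operator, dimensions, noise levels, and iteration count are placeholders, not the dissertation's RANS setup.

```python
# Minimal sketch of an iterative stochastic ensemble Kalman update for field inversion:
# an ensemble of parameters is corrected toward sparse observations through a forward
# operator H. H, the ensemble size, and the noise levels are toy placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_state, n_obs, n_ens = 20, 5, 50

H = rng.standard_normal((n_obs, n_state))       # linearized observation operator (toy)
R = 0.1 * np.eye(n_obs)                         # observation error covariance
x_true = rng.standard_normal(n_state)
y_obs = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R)

X = rng.standard_normal((n_state, n_ens))       # prior ensemble (columns = members)
for _ in range(5):                              # a few iterations of the update
    Y = H @ X
    Xa, Ya = X - X.mean(1, keepdims=True), Y - Y.mean(1, keepdims=True)
    Pxy = Xa @ Ya.T / (n_ens - 1)               # state-observation cross-covariance
    Pyy = Ya @ Ya.T / (n_ens - 1) + R           # observation-space covariance
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    perturbed = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    X = X + K @ (perturbed - Y)                 # update every ensemble member

print("RMSE of ensemble mean:", np.linalg.norm(X.mean(1) - x_true) / np.sqrt(n_state))
```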
98

Optimization Under Uncertainty and Total Predictive Uncertainty for a Tractor-Trailer Base-Drag Reduction Device

Freeman, Jacob Andrew 07 September 2012 (has links)
One key outcome of this research is the design for a 3-D tractor-trailer base-drag reduction device that predicts a 41% reduction in wind-averaged drag coefficient at 57 mph (92 km/h) and that is relatively insensitive to uncertain wind speed and direction and to uncertain deflection angles due to mounting accuracy and static aeroelastic loading; the best commercial device of non-optimized design achieves a 12% reduction at 65 mph. Another important outcome is the process by which the optimized design is obtained. That process includes verification and validation of the flow solver, a less complex but much broader 2-D pathfinder study, and the culminating 3-D aerodynamic shape optimization under uncertainty (OUU) study. To gain confidence in the accuracy and precision of a computational fluid dynamics (CFD) flow solver and its Reynolds-averaged Navier-Stokes (RANS) turbulence models, it is necessary to conduct code verification, solution verification, and model validation. These activities are accomplished using two commercial CFD solvers, Cobalt and RavenCFD, with four turbulence models: Spalart-Allmaras (S-A), S-A with rotation and curvature, Menter shear-stress transport (SST), and Wilcox 1998 k-ω. Model performance is evaluated for three low subsonic 2-D applications: turbulent flat plate, planar jet, and NACA 0012 airfoil at α = 0°. The S-A turbulence model is selected for the 2-D OUU study. In the 2-D study, a tractor-trailer base flap model is developed that includes six design variables with generous constraints; 400 design candidates are evaluated. The design optimization loop includes the effect of uncertain wind speed and direction, and post-processing addresses several other uncertain effects on drag prediction. The study compares the efficiency and accuracy of two optimization algorithms, an evolutionary algorithm (EA) and dividing rectangles (DIRECT), twelve surrogate models, six sampling methods, and surrogate-based global optimization (SBGO) methods. The DAKOTA optimization and uncertainty quantification framework is used to interface the RANS flow solver, grid generator, and optimization algorithm. The EA is determined to be more efficient in obtaining a design with significantly reduced drag (as opposed to more efficient in finding the true drag minimum), and total predictive uncertainty is estimated as ±11%. While the SBGO methods are more efficient than a traditional optimization algorithm, they are computationally inefficient due to their serial nature, as implemented in DAKOTA. Because the S-A model does well in 2-D but not in 3-D under these conditions, the SST turbulence model is selected for the 3-D OUU study, which includes five design variables and evaluates a total of 130 design candidates. Again using the EA, the study propagates aleatory (wind speed and direction) and epistemic (perturbations in flap deflection angle) uncertainty within the optimization loop and post-processes several other uncertain effects. For the best 3-D design, total predictive uncertainty is +15/-42%, due largely to the use of a relatively coarse (six million cell) grid; that is, the best design's drag coefficient estimate lies within +15% and -42% of the true value. However, its improvement relative to the no-flaps baseline is accurate to within 3-9% uncertainty. / Ph. D.
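Inside an optimization-under-uncertainty loop, each design candidate is scored not by a single drag value but by a drag statistic taken over the uncertain wind conditions. The sketch below shows that pattern with a plain Monte Carlo average over sampled wind speed and direction; the drag model, wind distributions, and flap parameterization are hypothetical stand-ins, not the CFD-based objective used in the study.

```python
# Minimal sketch of a wind-averaged drag objective under uncertain wind: wind speed and
# direction are sampled, converted to a yaw angle seen by the vehicle, and the resulting
# drag coefficients are averaged. cd_model() and the distributions are stand-ins.
import numpy as np

rng = np.random.default_rng(3)

def cd_model(yaw_deg, flap_angle_deg):
    # stand-in aerodynamic model: drag grows with yaw, flaps reduce the baseline drag
    return 0.60 + 0.0004 * yaw_deg**2 - 0.002 * flap_angle_deg

def wind_averaged_cd(flap_angle_deg, v_truck=25.5, n_samples=2000):
    v_wind = 3.0 * rng.weibull(2.0, n_samples)          # uncertain wind speed (m/s)
    phi = rng.uniform(0.0, 2 * np.pi, n_samples)        # uncertain wind direction
    yaw = np.degrees(np.arctan2(v_wind * np.sin(phi), v_truck + v_wind * np.cos(phi)))
    return cd_model(np.abs(yaw), flap_angle_deg).mean()

# an optimizer (e.g. an evolutionary algorithm) would minimize this over the flap angles
for flap in (0.0, 10.0, 20.0):
    print(f"flap {flap:>4.1f} deg -> wind-averaged Cd ~ {wind_averaged_cd(flap):.3f}")
```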
99

Advanced Sampling Methods for Solving Large-Scale Inverse Problems

Attia, Ahmed Mohamed Mohamed 19 September 2016 (has links)
Ensemble and variational techniques have gained wide popularity as the two main approaches for solving data assimilation and inverse problems. The majority of methods in these two approaches are derived (at least implicitly) under the assumption that the underlying probability distributions are Gaussian. It is well accepted, however, that the Gaussianity assumption is too restrictive when applied to large nonlinear models, nonlinear observation operators, and large levels of uncertainty. This work develops a family of fully non-Gaussian data assimilation algorithms that work by directly sampling the posterior distribution. The sampling strategy is based on a Hybrid/Hamiltonian Monte Carlo (HMC) approach that can handle non-normal probability distributions. The first algorithm proposed in this work is the "HMC sampling filter", an ensemble-based data assimilation algorithm for solving the sequential filtering problem. Unlike traditional ensemble-based filters, such as the ensemble Kalman filter and the maximum likelihood ensemble filter, the proposed sampling filter naturally accommodates non-Gaussian errors and nonlinear model dynamics, as well as nonlinear observations. To test the capabilities of the HMC sampling filter, numerical experiments are carried out using the Lorenz-96 model and observation operators with different levels of nonlinearity and differentiability. The filter is also tested with a shallow water model on the sphere with a linear observation operator. Numerical results show that the sampling filter performs well even in highly nonlinear situations where the traditional filters diverge. Next, the HMC sampling approach is extended to the four-dimensional case, where several observations are assimilated simultaneously, resulting in the second member of the proposed family of algorithms. The new algorithm, named the "HMC sampling smoother", is an ensemble-based smoother for four-dimensional data assimilation that works by sampling from the posterior probability density of the solution at the initial time. The sampling smoother naturally accommodates non-Gaussian errors and nonlinear model dynamics and observation operators, and provides a full description of the posterior distribution. Numerical experiments for this algorithm are carried out using a shallow water model on the sphere with observation operators of different levels of nonlinearity. The numerical results demonstrate the advantages of the proposed method compared to traditional variational and ensemble-based smoothing methods. The HMC sampling smoother, in its original formulation, is computationally expensive due to the innate requirement of running the forward and adjoint models repeatedly. The proposed family of algorithms therefore continues with computationally efficient versions of the HMC sampling smoother based on reduced-order approximations of the underlying model dynamics. The reduced-order HMC sampling smoothers, developed as extensions of the original HMC smoother, are tested numerically using the shallow-water equations model in Cartesian coordinates. The results reveal that the reduced-order versions of the smoother are capable of accurately capturing the posterior probability density, while being significantly faster than the original full-order formulation. In the presence of nonlinear model dynamics, a nonlinear observation operator, or non-Gaussian errors, the prior distribution in the sequential data assimilation framework is not analytically tractable.
In the original formulation of the HMC sampling filter, the prior distribution is approximated by a Gaussian distribution whose parameters are inferred from the ensemble of forecasts. Here, the Gaussian prior assumption of the original HMC filter is relaxed. Specifically, a clustering step is introduced after the forecast phase of the filter, and the prior density function is estimated by fitting a Gaussian Mixture Model (GMM) to the prior ensemble. The base filter developed following this strategy is named the cluster HMC sampling filter (ClHMC). A multi-chain version of the ClHMC filter, namely MC-ClHMC, is also proposed to guarantee that samples are taken from the vicinities of all probability modes of the formulated posterior. These methodologies are tested using a quasi-geostrophic (QG) model with double-gyre wind forcing and bi-harmonic friction. Numerical results demonstrate the usefulness of using GMMs to relax the Gaussian prior assumption in the HMC filtering paradigm. To provide a unified platform for data assimilation research, a flexible and highly extensible testing suite, named DATeS, is developed and described in this work. The core of DATeS is implemented in Python to enable object-oriented capabilities. The main components, such as the models, the data assimilation algorithms, the linear algebra solvers, and the time discretization routines, are independent of each other, so as to offer maximum flexibility in configuring data assimilation studies. / Ph. D.
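The ClHMC idea can be pictured in two steps: fit a Gaussian mixture to the forecast ensemble to obtain a non-Gaussian prior, then run HMC on the resulting posterior (mixture prior times observation likelihood). The sketch below does exactly that on a toy two-dimensional bimodal ensemble, using finite-difference gradients instead of an adjoint model; the ensemble, observation, step sizes, and mixture size are hypothetical, and the actual ClHMC/MC-ClHMC algorithms are more involved.

```python
# Minimal sketch of the ClHMC idea: a Gaussian Mixture Model fit to the forecast
# ensemble relaxes the Gaussian-prior assumption, and HMC samples the resulting
# posterior. Toy data; gradients use finite differences for brevity.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
ens = np.vstack([rng.normal(-2, 0.5, (30, 2)),          # bimodal forecast ensemble
                 rng.normal(2, 0.7, (30, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(ens)

H = np.eye(2)                                            # observe the full state (toy)
y, r2 = np.array([1.8, 2.1]), 0.25                       # observation, error variance

def neg_log_post(x):
    return -gmm.score_samples(x[None, :])[0] + 0.5 * np.sum((y - H @ x) ** 2) / r2

def grad(x, eps=1e-5):
    e = np.eye(x.size) * eps
    return np.array([(neg_log_post(x + e[i]) - neg_log_post(x - e[i])) / (2 * eps)
                     for i in range(x.size)])

def hmc_step(x, step=0.05, n_leap=15):
    p0 = rng.standard_normal(x.size)
    x_new, p = x.copy(), p0 - 0.5 * step * grad(x)       # leapfrog integration
    for i in range(n_leap):
        x_new = x_new + step * p
        g = grad(x_new)
        p = p - step * g if i < n_leap - 1 else p - 0.5 * step * g
    h_old = neg_log_post(x) + 0.5 * p0 @ p0              # Metropolis accept/reject
    h_new = neg_log_post(x_new) + 0.5 * p @ p
    return x_new if rng.random() < np.exp(h_old - h_new) else x

x, samples = ens.mean(axis=0), []
for _ in range(300):
    x = hmc_step(x)
    samples.append(x)
print("posterior mean estimate:", np.mean(samples, axis=0))
```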
100

Multiscale Modeling and Uncertainty Quantification of Multiphase Flow and Mass Transfer Processes

Donato, Adam Armido 10 January 2015 (has links)
Most engineering systems have some degree of uncertainty in their input and operating parameters. The interaction of these parameters leads to the uncertain nature of the system performance and outputs. In order to quantify this uncertainty in a computational model, it is necessary to include the full range of uncertainty in the model. Currently, there are two major technical barriers to achieving this: (1) in many situations, particularly those involving multiscale phenomena, the stochastic nature of input parameters is not well defined and is usually approximated by limited experimental data or heuristics; (2) incorporating the full range of uncertainty across all uncertain input and operating parameters via conventional techniques often results in an inordinate number of computational scenarios to be performed, thereby limiting uncertainty analysis to simple or approximate computational models. The first barrier is addressed by combining molecular and macroscale modeling, where the molecular modeling is used to quantify the stochastic distribution of parameters that are typically approximated. Specifically, an adsorption separation process is used to demonstrate this computational technique. In this demonstration, stochastic molecular modeling results are validated against a diverse range of experimental data sets. The stochastic molecular-level results are then shown to play a significant role in the macro-scale performance of adsorption systems. The second portion of this research is focused on reducing the computational burden of performing an uncertainty analysis on practical engineering systems. The state of the art for uncertainty analysis relies on the construction of a meta-model (also known as a surrogate model or reduced-order model), which can then be sampled stochastically at relatively minimal computational cost. Unfortunately, these meta-models can be very computationally expensive to construct, and the complexity of construction can scale exponentially with the number of relevant uncertain input parameters. To dramatically reduce this cost, a novel methodology, QUICKER (Quantifying Uncertainty In Computational Knowledge Engineering Rapidly), has been developed. Instead of building a meta-model, QUICKER focuses exclusively on the output distributions, which are always one-dimensional. By focusing on one-dimensional distributions instead of the multiple dimensions analyzed via meta-models, QUICKER is able to handle systems with far more uncertain inputs. / Ph. D.
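For contrast with the meta-model route, the brute-force way to obtain the one-dimensional output distribution is plain Monte Carlo: sample the uncertain inputs, push each sample through the model, and summarize the scalar output. The sketch below shows that baseline only, with a hypothetical adsorption-style performance metric and placeholder input distributions; it is not the QUICKER algorithm, whose purpose is precisely to avoid this sampling cost.

```python
# Baseline Monte Carlo propagation to a one-dimensional output distribution: uncertain
# inputs are sampled, pushed through a model, and the scalar output is summarized by
# its empirical quantiles. The model and input distributions are placeholders.
import numpy as np

rng = np.random.default_rng(5)

def adsorption_model(capacity, rate, feed):
    # hypothetical scalar performance metric of an adsorption separation step
    return capacity * (1.0 - np.exp(-rate)) * feed

n = 10_000
capacity = rng.lognormal(mean=0.0, sigma=0.2, size=n)    # uncertain inputs
rate = rng.gamma(shape=4.0, scale=0.5, size=n)           # (all placeholders)
feed = rng.normal(loc=1.0, scale=0.05, size=n)

output = adsorption_model(capacity, rate, feed)          # 1-D output distribution
q05, q50, q95 = np.quantile(output, [0.05, 0.5, 0.95])
print(f"output median {q50:.3f}, 90% interval [{q05:.3f}, {q95:.3f}]")
```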
