91

New Methods of Variable Selection and Inference on High Dimensional Data

Ren, Sheng January 2017 (has links)
No description available.
92

Quantification of Model-Form, Predictive, and Parametric Uncertainties in Simulation-Based Design

Riley, Matthew E. 07 September 2011 (has links)
No description available.
93

Probabilistic Flood Forecast Using Bayesian Methods

Han, Shasha January 2019 (has links)
The number of flood events and the estimated costs of floods have increased dramatically over the past few decades. To reduce the negative impacts of flooding, reliable flood forecasting is essential for early warning and decision making. Although various flood forecasting models and techniques have been developed, the assessment and reduction of uncertainties associated with the forecasts remain a challenging task. Therefore, this thesis focuses on the investigation of Bayesian methods for producing probabilistic flood forecasts that accurately quantify predictive uncertainty and enhance forecast performance and reliability. In the thesis, hydrologic uncertainty was quantified by a Bayesian post-processor, the Hydrologic Uncertainty Processor (HUP), and the predictive performance of HUP with different hydrologic models under different flow conditions was investigated. HUP was then extended into an ensemble prediction framework, constituting the Bayesian Ensemble Uncertainty Processor (BEUP), and BEUP with bias-corrected ensemble weather inputs was tested to improve predictive performance. In addition, the effects of input and model type on BEUP were investigated through different combinations of BEUP with deterministic/ensemble weather predictions and lumped/semi-distributed hydrologic models. Results indicate that the Bayesian method is robust for probabilistic flood forecasting with uncertainty assessment. HUP is able to improve the deterministic forecast from the hydrologic model and produces a more accurate probabilistic forecast. Under high-flow conditions, a better-performing hydrologic model yields a better probabilistic forecast after applying HUP. BEUP can significantly improve the accuracy and reliability of short-range flood forecasts, but the improvement becomes less obvious as lead time increases. The best results for short-range forecasts are obtained by applying both bias correction and BEUP. Results also show that bias correcting each ensemble member of the weather inputs generates better flood forecasts than bias correcting only the ensemble mean. The improvement in BEUP brought by the hydrologic model type is more significant than that brought by the input data type, and BEUP with a semi-distributed model is recommended for short-range flood forecasts. / Dissertation / Doctor of Philosophy (PhD) / Flooding is one of the top weather-related hazards and causes serious property damage and loss of life every year worldwide. If the timing and magnitude of a flood event can be accurately predicted in advance, there is time to prepare, which reduces its negative impacts. This research focuses on improving flood forecasts through advanced Bayesian techniques. The main objectives are: (1) enhancing the reliability and accuracy of the flood forecasting system; and (2) improving the assessment of predictive uncertainty associated with the flood forecasts. The key contributions include: (1) application of Bayesian forecasting methods in a semi-urban watershed to advance predictive uncertainty quantification; and (2) investigation of the Bayesian forecasting methods with different inputs and models, combined with a bias correction technique, to further improve forecast performance. It is expected that the findings from this research will benefit flood impact mitigation, watershed management and water resources planning.
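A minimal, hypothetical sketch of the Bayesian post-processing idea behind an HUP-style processor (the linear-Gaussian form, variable names, and all coefficients below are illustrative assumptions, not taken from the thesis): a prior on the actual flow is combined with a likelihood relating the deterministic model forecast to it, yielding a Gaussian posterior that serves as the probabilistic forecast.

```python
# Hypothetical sketch of an HUP-style linear-Gaussian Bayesian post-processor.
# Coefficients would normally be estimated from historical forecast/observation
# pairs in a normal-quantile-transformed space; the values here are placeholders.
import numpy as np

def hup_posterior(h0, s, a=0.9, b=0.0, sigma_l=0.3, c=0.95, sigma_p=0.5):
    """Posterior of the actual (transformed) flow h given:
       h0 -- previous observed flow, driving the prior mean c*h0
       s  -- deterministic model forecast, with likelihood s | h ~ N(a*h + b, sigma_l^2)
       prior: h ~ N(c*h0, sigma_p^2)."""
    prior_mean, prior_var = c * h0, sigma_p**2
    like_var = sigma_l**2
    # Conjugate Gaussian combination of prior and likelihood.
    post_var = 1.0 / (1.0 / prior_var + a**2 / like_var)
    post_mean = post_var * (prior_mean / prior_var + a * (s - b) / like_var)
    return post_mean, np.sqrt(post_var)

mean, sd = hup_posterior(h0=1.2, s=1.5)
print(f"probabilistic forecast: flow ~ N({mean:.3f}, {sd:.3f}^2)")
```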
94

Physics-informed Machine Learning with Uncertainty Quantification

Daw, Arka 12 February 2024 (has links)
Physics Informed Machine Learning (PIML) has emerged at the forefront of research in scientific machine learning, with the key motivation of systematically coupling machine learning (ML) methods with prior domain knowledge often available in the form of physics supervision. Uncertainty quantification (UQ) is an important goal in many scientific use cases, where obtaining reliable ML model predictions and assessing the potential risks associated with them are crucial. In this thesis, we propose novel methodologies in three key areas for improving uncertainty quantification for PIML. First, we propose to explicitly infuse the physics prior in the form of monotonicity constraints through architectural modifications in neural networks for quantifying uncertainty. Second, we demonstrate a more general framework for quantifying uncertainty with PIML that is compatible with generic forms of physics supervision such as PDEs and closed-form equations. Lastly, we study the limitations of physics-based loss in the context of Physics-informed Neural Networks (PINNs), and develop an efficient sampling strategy to mitigate the failure modes. / Doctor of Philosophy / Owing to the success of deep learning in computer vision and natural language processing, there is a growing interest in using deep learning in scientific applications. In scientific applications, knowledge is available in the form of closed-form equations, partial differential equations, etc., along with labeled data. My work focuses on developing deep learning methods that integrate these forms of supervision, and in particular on building methods that can quantify uncertainty in deep learning models, which is an important goal for high-stakes applications.
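As a hedged illustration of the physics-based loss the last contribution refers to, the sketch below trains a tiny PINN on a toy ODE by adding a residual penalty at collocation points; the network size, the ODE, and the training settings are arbitrary assumptions, not the thesis's setup.

```python
# Illustrative PINN sketch (assumed details): fit u(x) to the ODE du/dx = -u
# with u(0) = 1 by combining a data term with a physics-residual term.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pinn_loss(x_data, u_data, x_col):
    # Data-fit term on the labeled points.
    data_loss = torch.mean((net(x_data) - u_data) ** 2)
    # Physics term: residual of du/dx + u = 0 at collocation points.
    x_col = x_col.requires_grad_(True)
    u = net(x_col)
    du_dx = torch.autograd.grad(u, x_col, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    phys_loss = torch.mean((du_dx + u) ** 2)
    return data_loss + phys_loss

x_data = torch.tensor([[0.0]]); u_data = torch.tensor([[1.0]])   # u(0) = 1
x_col = torch.linspace(0.0, 2.0, 50).reshape(-1, 1)              # collocation points
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = pinn_loss(x_data, u_data, x_col)
    loss.backward()
    opt.step()
```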
95

Exploring the Stochastic Performance of Metallic Microstructures With Multi-Scale Models

Senthilnathan, Arulmurugan 01 June 2023 (has links)
Titanium-7wt%-Aluminum (Ti-7Al) has been of interest to the aerospace industry owing to its good structural and thermal properties. However, extensive research is still needed to study the structural behavior and determine the material properties of Ti-7Al. The homogenized macro-scale material properties are directly related to the crystallographic structure at the micro-scale. Furthermore, microstructural uncertainties arising from experiments and computational methods propagate to the material properties used for designing aircraft components. Therefore, multi-scale modeling is employed to characterize the microstructural features of Ti-7Al and computationally predict the macro-scale material properties, such as Young's modulus and yield strength, using machine learning techniques. Investigation of microstructural features across large domains through experiments requires rigorous and tedious sample preparation procedures that often lead to material waste. Therefore, computational microstructure reconstruction methods that predict the large-scale evolution of microstructural topology given the small-scale experimental information are developed to minimize experimental cost and time. However, it is important to verify the synthetic microstructures with respect to the experimental data by characterizing microstructural features such as grain size and grain shape. While the relationship between homogenized material properties and grain sizes of microstructures is well studied through the Hall-Petch effect, the influences of grain shapes, especially in complex additively manufactured microstructure topologies, are yet to be explored. Therefore, this work addresses the gap in the mathematical quantification of microstructural topology by developing measures for the computational characterization of microstructures. Moreover, the synthesized microstructures are modeled through crystal plasticity simulations to determine the material properties. However, such crystal plasticity simulations require significant computing time. In addition, the inherent uncertainty of experimental data is propagated to the material properties through the synthetic microstructure representations. Therefore, the aforementioned problems are addressed in this work by explicitly quantifying the microstructural topology and predicting the material properties and their variations through the development of surrogate models. Next, this work extends the proposed multi-scale models of microstructure-property relationships to magnetic materials to investigate the ferromagnetic-paramagnetic phase transition. Here, the same Ising model-based multi-scale approach used for microstructure reconstruction is implemented for investigating the ferromagnetic-paramagnetic phase transition of magnetic materials. Previous research on the magnetic phase transition problem neglects the effects of the long-range interactions between magnetic spins and external magnetic fields. Therefore, this study aims to build a multi-scale modeling environment that can quantify the large-scale interactions between magnetic spins and external fields. / Doctor of Philosophy / Titanium-Aluminum (Ti-Al) alloys are lightweight and temperature-resistant materials with a wide range of applications in aerospace systems. However, there is still a lack of thorough understanding of the microstructural behavior and mechanical performance of Titanium-7wt%-Aluminum (Ti-7Al), a candidate material for jet engine components.
This work investigates the multi-scale mechanical behavior of Ti-7Al by computationally characterizing the micro-scale material features, such as crystallographic texture and grain topology. The small-scale experimental data of Ti-7Al are used to predict the large-scale spatial evolution of the microstructures, while the texture and grain topology are modeled using shape moment invariants. Moreover, the effects of the uncertainties, which may arise from measurement errors and algorithmic randomness, on the microstructural features are quantified through statistical parameters developed based on the shape moment invariants. A data-driven surrogate model is built to predict the homogenized mechanical properties and the associated uncertainty as a function of the microstructural texture and topology. Furthermore, the presented multi-scale modeling technique is applied to explore the ferromagnetic-paramagnetic phase transition of magnetic materials, which causes permanent failure of magneto-mechanical components used in aerospace systems. Accordingly, a computational solution is developed based on an Ising model that considers the long-range spin interactions in the presence of external magnetic fields.
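For readers unfamiliar with the Ising-model machinery mentioned above, the following minimal sketch samples a nearest-neighbour 2D Ising model with an external field using the Metropolis algorithm; the long-range interactions the thesis emphasizes are deliberately omitted, and all parameters are placeholders.

```python
# Minimal 2D Ising model with external field h, Metropolis sampling.
# Illustrative only; not the thesis implementation.
import numpy as np

rng = np.random.default_rng(0)
N, J, h, T = 32, 1.0, 0.1, 2.5          # lattice size, coupling, field, temperature
spins = rng.choice([-1, 1], size=(N, N))

def metropolis_sweep(spins):
    for _ in range(N * N):
        i, j = rng.integers(N, size=2)
        nb = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j] +
              spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
        # Energy change for flipping spin (i, j), including the field term.
        dE = 2.0 * spins[i, j] * (J * nb + h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

for sweep in range(200):
    spins = metropolis_sweep(spins)
print("magnetisation per spin:", spins.mean())
```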
96

Contributions to Efficient Statistical Modeling of Complex Data with Temporal Structures

Hu, Zhihao 03 March 2022 (has links)
This dissertation focuses on three research projects: neighborhood vector autoregression in multivariate time series, uncertainty quantification for agent-based models of networked anagram games, and a scalable algorithm for multi-class classification. The first project studies the modeling of multivariate time series, with applications in the environmental sciences and other areas. In this work, a so-called neighborhood vector autoregression (NVAR) model is proposed to efficiently analyze large-dimensional multivariate time series. The time series are assumed to have underlying distances among them based on the inherent setting of the problem. When this distance matrix is available or can be obtained, the proposed NVAR method is demonstrated to provide a computationally efficient and theoretically sound estimation of the model parameters. The performance of the proposed method is compared with other existing approaches in both simulation studies and a real application to a stream nitrogen study. The second project focuses on the study of group anagram games. In a group anagram game, players are provided letters to form as many words as possible. In this work, enhanced agent behavior models for networked group anagram games are built, exercised, and evaluated under an uncertainty quantification framework. Specifically, the game data for players are clustered based on their skill levels (forming words, requesting letters, and replying to requests), multinomial logistic regressions for transition probabilities are performed, and the uncertainty is quantified within each cluster. The result of this process is a model where players are assigned different numbers of neighbors and different skill levels in the game. Simulations of ego agents with neighbors are conducted to demonstrate the efficacy of the proposed methods. The third project aims to develop efficient and scalable algorithms for multi-class classification that achieve a balance between prediction accuracy and computing efficiency, especially in high-dimensional settings. Traditional multinomial logistic regression becomes slow in high-dimensional settings where the number of classes (M) and the number of features (p) are large. Our algorithms are computationally efficient and scalable to data of even higher dimensions. The simulation and case study results demonstrate that our algorithms have a substantial advantage over traditional multinomial logistic regression and maintain comparable prediction performance. / Doctor of Philosophy / In many data-centric applications, data often have complex structures involving temporal dependence and high dimensionality. Modeling of complex data with temporal structures has attracted great attention in many applications such as environmental sciences, network sciences, data mining, neuroscience, and economics. However, modeling such complex data is quite challenging due to the large uncertainty and dimensionality involved. This dissertation focuses on modeling and prediction of complex data with temporal structures. Three different types of complex data are modeled: the nitrogen of multiple streams is modeled in a joint manner, human actions in networked group anagram games are modeled and the associated uncertainty is quantified, and data with multiple labels are classified. Different models are proposed, and they are demonstrated to be efficient through simulations and case studies.
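A hypothetical sketch of the neighborhood-restriction idea behind an NVAR-type model (the exact estimator in the dissertation may differ): each series is regressed only on the lagged values of series within a distance threshold, so the transition matrix is sparse and each equation can be fit cheaply by least squares.

```python
# Illustrative neighborhood VAR(1) sketch on synthetic data; not the thesis code.
import numpy as np

rng = np.random.default_rng(1)
T, d = 300, 6
Y = rng.standard_normal((T, d)).cumsum(axis=0) * 0.1 + rng.standard_normal((T, d))
D = np.abs(np.subtract.outer(np.arange(d), np.arange(d)))   # toy distance matrix
threshold = 1                                               # neighbourhood radius

A = np.zeros((d, d))                     # sparse transition matrix
X, Z = Y[:-1], Y[1:]
for i in range(d):
    nbrs = np.where(D[i] <= threshold)[0]        # series allowed to affect series i
    coef, *_ = np.linalg.lstsq(X[:, nbrs], Z[:, i], rcond=None)
    A[i, nbrs] = coef

forecast = Y[-1] @ A.T                   # one-step-ahead forecast
print("one-step forecast:", np.round(forecast, 2))
```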
97

Uncertainty Quantification and Uncertainty Reduction Techniques for Large-scale Simulations

Cheng, Haiyan 03 August 2009 (has links)
Modeling and simulations of large-scale systems are used extensively not only to better understand natural phenomena, but also to predict future events. Accurate model results are critical for design optimization and policy making. They can be used effectively to reduce the impact of a natural disaster or even prevent it from happening. In reality, model predictions are often affected by uncertainties in input data and model parameters, and by incomplete knowledge of the underlying physics. A deterministic simulation assumes one set of input conditions and generates one result without considering uncertainties. It is therefore of great interest to include uncertainty information in the simulation. By "Uncertainty Quantification," we denote the ensemble of techniques used to model probabilistically the uncertainty in model inputs, to propagate it through the system, and to represent the resulting uncertainty in the model result. This added information provides a confidence level about the model forecast. For example, in environmental modeling, the model forecast, together with the quantified uncertainty information, can assist policy makers in interpreting the simulation results and in making decisions accordingly. Another important goal in modeling and simulation is to improve the model accuracy and to increase the model prediction power. By merging real observation data into the dynamic system through the data assimilation (DA) technique, the overall uncertainty in the model is reduced. With the expansion of human knowledge and the development of modeling tools, simulation size and complexity are growing rapidly. This poses great challenges to uncertainty analysis techniques. Many conventional uncertainty quantification algorithms, such as the straightforward Monte Carlo method, become impractical for large-scale simulations. New algorithms need to be developed in order to quantify and reduce uncertainties in large-scale simulations. This research explores novel uncertainty quantification and reduction techniques that are suitable for large-scale simulations. In the uncertainty quantification part, the non-sampling polynomial chaos (PC) method is investigated. An efficient implementation is proposed to reduce the high computational cost of the linear algebra involved in the PC Galerkin approach applied to stiff systems. A collocation least-squares method is proposed to compute the PC coefficients more efficiently. A novel uncertainty apportionment strategy is proposed to attribute the uncertainty in model results to different uncertainty sources. The apportionment results provide guidance for uncertainty reduction efforts. The uncertainty quantification and source apportionment techniques are implemented in the 3-D Sulfur Transport Eulerian Model (STEM-III), which predicts pollutant concentrations in the northeast region of the United States. Numerical results confirm the efficacy of the proposed techniques for large-scale systems and the potential impact for environmental protection policy making. "Uncertainty Reduction" describes the range of systematic techniques used to fuse information from multiple sources in order to increase the confidence one has in model results. Two DA techniques are widely used in current practice: the ensemble Kalman filter (EnKF) and the four-dimensional variational (4D-Var) approach. Each method has its advantages and disadvantages.
By exploring the error reduction directions generated in the 4D-Var optimization process, we propose a hybrid approach to construct the error covariance matrix and to improve the static background error covariance matrix used in current 4D-Var practice. The updated covariance matrix between assimilation windows effectively reduces the root mean square error (RMSE) in the solution. The success of the hybrid covariance updates motivates the hybridization of EnKF and 4D-Var to further reduce uncertainties in the simulation results. Numerical tests show that the hybrid method improves the model accuracy and increases the model prediction quality. / Ph. D.
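As an illustration of the collocation least-squares idea for polynomial chaos (with an assumed toy model, not the STEM-III system), the coefficients of a one-dimensional Hermite expansion can be fit by ordinary least squares at random collocation points, and the output mean and variance read off from the coefficients.

```python
# Collocation least-squares sketch for a 1-D polynomial chaos expansion.
# The "model" is a hypothetical stand-in with one standard-normal input xi.
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermevander

def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2          # hypothetical model response

rng = np.random.default_rng(2)
order = 4
xi = rng.standard_normal(200)                       # collocation samples of the input
Psi = hermevander(xi, order)                        # probabilists' Hermite basis He_0..He_4
y = model(xi)

coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)      # least-squares PC coefficients

# With probabilists' Hermite polynomials, E[He_k^2] = k!, so the PC mean is
# coef[0] and the variance is sum_{k>=1} coef[k]^2 * k!.
facts = np.array([factorial(k) for k in range(order + 1)], dtype=float)
mean = coef[0]
var = np.sum(coef[1:] ** 2 * facts[1:])
print(f"PC mean {mean:.4f}, PC variance {var:.4f}")
```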
98

Multiscale Modeling and Uncertainty Quantification of Multiphase Flow and Mass Transfer Processes

Donato, Adam Armido 10 January 2015 (has links)
Most engineering systems have some degree of uncertainty in their input and operating parameters. The interaction of these parameters leads to the uncertain nature of the system performance and outputs. In order to quantify this uncertainty in a computational model, it is necessary to include the full range of uncertainty in the model. Currently, there are two major technical barriers to achieving this: (1) in many situations, particularly those involving multiscale phenomena, the stochastic nature of input parameters is not well defined, and is usually approximated by limited experimental data or heuristics; (2) incorporating the full range of uncertainty across all uncertain input and operating parameters via conventional techniques often results in an inordinate number of computational scenarios to be performed, thereby limiting uncertainty analysis to simple or approximate computational models. The first objective is addressed by combining molecular and macroscale modeling, where the molecular modeling is used to quantify the stochastic distribution of parameters that are typically approximated. Specifically, an adsorption separation process is used to demonstrate this computational technique. In this demonstration, stochastic molecular modeling results are validated against a diverse range of experimental data sets. The stochastic molecular-level results are then shown to have a significant effect on the macro-scale performance of adsorption systems. The second portion of this research is focused on reducing the computational burden of performing an uncertainty analysis on practical engineering systems. The state of the art for uncertainty analysis relies on the construction of a meta-model (also known as a surrogate model or reduced order model), which can then be sampled stochastically at a relatively minimal computational burden. Unfortunately, these meta-models can be very computationally expensive to construct, and the complexity of construction can scale exponentially with the number of relevant uncertain input parameters. To dramatically reduce this cost, a novel methodology, "QUICKER (Quantifying Uncertainty In Computational Knowledge Engineering Rapidly)," has been developed. Instead of building a meta-model, QUICKER focuses exclusively on the output distributions, which are always one-dimensional. By focusing on one-dimensional distributions instead of the multiple dimensions analyzed via meta-models, QUICKER is able to handle systems with far more uncertain inputs. / Ph. D.
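For context, here is a minimal sketch of the meta-model workflow the abstract describes as the state of the art (QUICKER itself is not reproduced here); the "expensive" simulator and the quadratic surrogate are hypothetical stand-ins.

```python
# Surrogate-model (meta-model) UQ workflow sketch: fit a cheap model to a few
# expensive runs, then sample it heavily to estimate the output distribution.
import numpy as np

def expensive_simulation(x1, x2):
    return np.sin(x1) + 0.5 * x2**2 + 0.2 * x1 * x2    # stand-in for a costly model

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(30, 2))                    # small design of expensive runs
y = expensive_simulation(X[:, 0], X[:, 1])

def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

beta, *_ = np.linalg.lstsq(features(X), y, rcond=None)  # quadratic meta-model

# Cheap stochastic sampling of the meta-model to estimate the 1-D output distribution.
Xs = rng.uniform(-1, 1, size=(100_000, 2))
y_surrogate = features(Xs) @ beta
print(f"output mean {y_surrogate.mean():.3f}, std {y_surrogate.std():.3f}")
```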
99

Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations

Wang, Jianxun 05 April 2017 (has links)
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although the increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulation and direct numerical simulation) of turbulent flows, models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both on-line and off-line data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows. First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse on-line measurement data and empirical prior knowledge for a full-field inversion. The merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random matrix theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. The maximum entropy principle is employed to identify the probability distribution that satisfies the given constraints without introducing artificial information. Objective prior perturbations of RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between the physics-based and random matrix theoretic approaches. Finally, a physics-informed machine learning framework towards predictive RANS turbulence modeling is proposed. The functional forms of model discrepancies with respect to mean flow features are extracted from an off-line database of closely related flows using machine learning algorithms. The RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, which is an important step towards predictive turbulence modeling. / Ph. D.
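A simplified sketch of a single ensemble Kalman analysis step of the kind used iteratively for the full-field inversion described above; the state vector, observation operator, and noise levels are placeholders, not the thesis's configuration.

```python
# Perturbed-observation ensemble Kalman analysis step (simplified, illustrative).
import numpy as np

rng = np.random.default_rng(4)
n_state, n_obs, n_ens = 50, 5, 40

ens = rng.standard_normal((n_state, n_ens))            # prior ensemble of states
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0   # sparse observation operator
R = 0.05 * np.eye(n_obs)                                # observation-noise covariance
y_obs = rng.standard_normal(n_obs)                      # placeholder measurements

def enkf_analysis(ens, y_obs):
    X = ens - ens.mean(axis=1, keepdims=True)
    Y = H @ ens
    Yp = Y - Y.mean(axis=1, keepdims=True)
    # Cross- and innovation covariances estimated from the ensemble.
    Pxy = X @ Yp.T / (n_ens - 1)
    Pyy = Yp @ Yp.T / (n_ens - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)                        # Kalman gain
    # Perturbed-observation update of every ensemble member.
    y_pert = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return ens + K @ (y_pert - Y)

ens_post = enkf_analysis(ens, y_obs)
print("prior spread:", ens.std(), "posterior spread:", ens_post.std())
```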
100

Computational Framework for Uncertainty Quantification, Sensitivity Analysis and Experimental Design of Network-based Computer Simulation Models

Wu, Sichao 29 August 2017 (has links)
When capturing a real-world, networked system using a simulation model, features are usually omitted or represented by probability distributions. Verification and validation (V and V) of such models is an inherent and fundamental challenge. Central to V and V, but also to model analysis and prediction, are uncertainty quantification (UQ), sensitivity analysis (SA) and design of experiments (DOE). In addition, network-based computer simulation models, as compared with models based on ordinary and partial differential equations (ODE and PDE), typically involve a significantly larger volume of more complex data. Efficient use of such models is challenging since it requires a broad set of skills ranging from domain expertise to in-depth knowledge including modeling, programming, algorithmics, high-performance computing, statistical analysis, and optimization. On top of this, the need to support reproducible experiments necessitates complete data tracking and management. Finally, the lack of standardization of simulation model configuration formats presents an extra challenge when developing technology intended to work across models. While there are tools and frameworks that address parts of the challenges above, to the best of our knowledge, none of them accomplishes all this in a model-independent and scientifically reproducible manner. In this dissertation, we present a computational framework called GENEUS that addresses these challenges. Specifically, it incorporates (i) a standardized model configuration format, (ii) a data flow management system with digital library functions helping to ensure scientific reproducibility, and (iii) a model-independent, expandable plugin-type library for efficiently conducting UQ/SA/DOE for network-based simulation models. This framework has been applied to systems ranging from fundamental graph dynamical systems (GDSs) to large-scale socio-technical simulation models with a broad range of analyses such as UQ and parameter studies for various scenarios. Graph dynamical systems provide a theoretical framework for network-based simulation models and have been studied theoretically in this dissertation. This includes a broad range of stability and sensitivity analyses offering insights into how GDSs respond to perturbations of their key components. This stability-focused, structure-to-function theory was a motivator for the design and implementation of GENEUS. GENEUS, rooted in the framework of GDS, provides modelers, experimentalists, and research groups with access to a variety of UQ/SA/DOE methods with robust and tested implementations, without requiring them to have detailed expertise in statistics, data management and computing. Even for research teams that have all the skills, GENEUS can significantly increase research productivity. / Ph. D.
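As a rough illustration of the kind of design-of-experiments run such a framework automates (GENEUS itself is not shown; the simulation and its parameters are hypothetical), a Latin-hypercube parameter study over a black-box, network-based simulation might look like the sketch below.

```python
# Latin-hypercube DOE over a stand-in networked simulation; illustrative only.
import numpy as np
from scipy.stats import qmc

def run_simulation(transmission_prob, recovery_days):
    # Stand-in for an expensive network-based simulation returning one output.
    rng = np.random.default_rng(int(1000 * transmission_prob + recovery_days))
    return transmission_prob * recovery_days * 100 + rng.normal(0, 2)

sampler = qmc.LatinHypercube(d=2, seed=5)
unit = sampler.random(n=20)                               # 20 design points in [0, 1]^2
lo, hi = [0.01, 1.0], [0.2, 14.0]                         # hypothetical parameter ranges
design = qmc.scale(unit, lo, hi)

outputs = np.array([run_simulation(p, r) for p, r in design])
print("design point -> output")
for (p, r), y in zip(design, outputs):
    print(f"  beta={p:.3f}, days={r:4.1f} -> {y:7.2f}")
```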
