201

A Framework for Uncertainty Quantification in Microstructural Characterization with Application to Additive Manufacturing of Ti-6Al-4V

Loughnane, Gregory Thomas 10 September 2015 (has links)
No description available.
202

Deep Learning Framework for Trajectory Prediction and In-time Prognostics in the Terminal Airspace

Varun S Sudarsanan (13889826) 06 October 2022 (has links)
Terminal airspace around an airport is the biggest bottleneck for commercial operations in the National Airspace System (NAS). In order to prognosticate the safety status of the terminal airspace, effective prediction of the airspace evolution is necessary. While there are fixed procedural structures for managing operations at an airport, the confluence of a large number of aircraft and the complex interactions between the pilots and air traffic controllers make it challenging to predict its evolution. Modeling the high-dimensional spatio-temporal interactions in the airspace given different environmental and infrastructural constraints is necessary for effective predictions of future aircraft trajectories that characterize the airspace state at any given moment. A novel deep learning architecture using Graph Neural Networks is proposed to predict trajectories of aircraft 10 minutes into the future and estimate prognostic metrics for the airspace. The uncertainty in the future is quantified by predicting distributions of future trajectories instead of point estimates. The framework's viability for trajectory prediction and prognosis is demonstrated with terminal airspace data from Dallas Fort Worth International Airport (DFW).
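To make the distributional-output idea concrete, here is a minimal sketch, not the dissertation's architecture: a small PyTorch head (the `DistributionalTrajectoryHead` class and all dimensions are hypothetical) stands in for the Graph Neural Network encoder and parameterizes each future position as a diagonal Gaussian, trained with a negative log-likelihood loss instead of a point-estimate loss.

```python
# Minimal sketch (assumptions: a PyTorch MLP stands in for the paper's Graph
# Neural Network encoder; a diagonal Gaussian parameterizes each future position).
import torch
import torch.nn as nn

class DistributionalTrajectoryHead(nn.Module):
    """Maps an encoded airspace/aircraft state to a distribution over future positions."""
    def __init__(self, state_dim=32, horizon=10, coords=2):
        super().__init__()
        self.horizon, self.coords = horizon, coords
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * horizon * coords))

    def forward(self, h):
        out = self.net(h).view(-1, self.horizon, self.coords, 2)
        mean = out[..., 0]
        var = torch.nn.functional.softplus(out[..., 1]) + 1e-6  # keep variance positive
        return mean, var

# Training criterion: negative log-likelihood of the observed future track under the
# predicted Gaussian, rather than a mean-squared point-estimate loss.
head = DistributionalTrajectoryHead()
h = torch.randn(8, 32)            # hypothetical encoded states
future = torch.randn(8, 10, 2)    # observed future positions over 10 steps
mean, var = head(h)
loss = nn.GaussianNLLLoss()(mean, future, var)
loss.backward()
```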
203

Information Field Theory Approach to Uncertainty Quantification for Differential Equations: Theory, Algorithms and Applications

Kairui Hao (8780762) 24 April 2024 (has links)
Uncertainty quantification is a science and engineering subject that aims to quantify and analyze the uncertainty arising from mathematical models, simulations, and measurement data. An uncertainty quantification analysis usually consists of conducting experiments to collect data, creating and calibrating mathematical models, predicting through numerical simulation, making decisions using predictive results, and comparing the model prediction with new experimental data.
The overarching goal of uncertainty quantification is to determine how likely some quantities in this analysis are if some other information is not exactly known and ultimately facilitate decision-making. This dissertation delivers a complete package, including theory, algorithms, and applications of information field theory, a Bayesian uncertainty quantification tool that leverages the state-of-the-art machine learning framework to accelerate solving the classical uncertainty quantification problems specified by differential equations.
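As a minimal illustration of the calibration step described above, and not of information field theory itself, the sketch below assumes a single uncertain decay rate in a scalar ODE, Gaussian measurement noise, and a simple grid approximation of the Bayesian posterior; all numbers are synthetic.

```python
# Minimal sketch (assumptions: a scalar decay ODE dy/dt = -k*y with unknown rate k,
# Gaussian measurement noise, and a grid approximation of the posterior; the thesis's
# information field theory machinery is not reproduced here).
import numpy as np

t = np.linspace(0.0, 2.0, 15)
k_true, sigma = 1.3, 0.05
rng = np.random.default_rng(0)
data = np.exp(-k_true * t) + sigma * rng.standard_normal(t.size)  # synthetic measurements

k_grid = np.linspace(0.1, 3.0, 400)                  # candidate rates
prior = np.exp(-0.5 * ((k_grid - 1.0) / 0.5) ** 2)   # Gaussian prior on k
pred = np.exp(-np.outer(k_grid, t))                  # model prediction for each candidate
loglik = -0.5 * np.sum((pred - data) ** 2, axis=1) / sigma ** 2
post = prior * np.exp(loglik - loglik.max())

dk = k_grid[1] - k_grid[0]
post /= post.sum() * dk                              # normalize the posterior density
k_mean = np.sum(k_grid * post) * dk
k_std = np.sqrt(np.sum((k_grid - k_mean) ** 2 * post) * dk)
print(f"posterior mean k = {k_mean:.3f} +/- {k_std:.3f}")
```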
204

Computational Advancements for Solving Large-scale Inverse Problems

Cho, Taewon 10 June 2021 (has links)
For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global and medical imaging problems, such as greenhouse gas tracking and computed tomography reconstruction. This dissertation describes advancements in computational tools for solving large-scale inverse problems and for uncertainty quantification. Oftentimes, inverse problems are ill-posed and large-scale. Iterative projection methods have dramatically reduced the computational costs of solving large-scale inverse problems, and regularization methods have been critical in obtaining stable estimates by applying prior information about the unknowns via Bayesian inference. By combining iterative projection methods and variational regularization methods, hybrid projection approaches, in particular generalized hybrid methods, create a powerful framework that can maximize the benefits of each method. In this dissertation, we describe various advancements and extensions of hybrid projection methods that we developed to address three recent open problems. First, we develop hybrid projection methods that incorporate mixed Gaussian priors, where we seek more sophisticated estimates in which the unknowns are treated as random variables drawn from a mixture of distributions. Second, we describe hybrid projection methods for mean estimation in a hierarchical Bayesian approach. By including more than one prior covariance matrix (e.g., mixed Gaussian priors) or estimating unknowns and hyper-parameters simultaneously (e.g., hierarchical Gaussian priors), we show that better estimates can be obtained. Third, we develop computational tools for a respirometry system that incorporate various regularization methods for both linear and nonlinear respirometry inversions. For the nonlinear systems, blind deconvolution methods are developed, and prior knowledge of the nonlinear parameters is used to reduce the dimension of the nonlinear systems. Simulated and real-data experiments on the respirometry problems are provided. This dissertation provides advanced tools for computational inversion and uncertainty quantification. / Doctor of Philosophy / For many scientific applications, inverse problems have played a key role in solving important problems by enabling researchers to estimate desired parameters of a system from observed measurements. For example, large-scale inverse problems arise in many global problems such as greenhouse gas tracking, where the problem of estimating the amount of greenhouse gas added to or removed from the atmosphere becomes increasingly difficult. The number of observations has grown with improvements in measurement technologies (e.g., satellites). Therefore, the inverse problems become large-scale and computationally hard to solve. Another example of an inverse problem arises in tomography, where the goal is to examine materials deep underground (e.g., to look for gas or oil) or reconstruct an image of the interior of the human body from exterior measurements (e.g., to look for tumors). For tomography applications, there are typically fewer measurements than unknowns, which results in non-unique solutions. In this dissertation, we treat the unknowns as random variables with prior probability distributions in order to compensate for the deficiency in measurements. We consider various additional assumptions on the prior distribution and develop efficient and robust numerical methods for solving inverse problems and for performing uncertainty quantification. We apply the developed methods to many numerical applications such as greenhouse gas tracking, seismic tomography, spherical tomography problems, and the estimation of CO2 production in living organisms.
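As a small illustration of the Bayesian viewpoint in the lay abstract, where unknowns are random variables with prior distributions that compensate for having fewer measurements than unknowns, the sketch below solves a toy underdetermined linear problem with a Gaussian prior via a direct MAP/Tikhonov solve; the dissertation's hybrid projection methods for the truly large-scale setting are not reproduced.

```python
# Minimal sketch (assumptions: a small underdetermined linear problem d = A x + noise,
# a zero-mean Gaussian prior on x, and a direct MAP/Tikhonov solve; the dissertation's
# iterative hybrid projection methods are not reproduced here).
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 100                          # fewer measurements than unknowns
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[::10] = 1.0
sigma, gamma = 0.05, 1.0                # noise std and prior std
d = A @ x_true + sigma * rng.standard_normal(m)

# MAP estimate under x ~ N(0, gamma^2 I): minimize ||A x - d||^2/sigma^2 + ||x||^2/gamma^2
lam = (sigma / gamma) ** 2
x_map = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

# The Gaussian posterior covariance gives a simple uncertainty estimate for each unknown.
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.eye(n) / gamma**2)
print("relative error:", np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
print("mean posterior std:", np.sqrt(np.diag(post_cov)).mean())
```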
205

Deep Time: Deep Learning Extensions to Time Series Factor Analysis with Applications to Uncertainty Quantification in Economic and Financial Modeling

Miller, Dawson Jon 12 September 2022 (has links)
This thesis establishes methods to quantify and explain uncertainty through high-order moments in time series data, along with first-principles-based improvements on the standard autoencoder and variational autoencoder. While the first-principles improvements on the standard variational autoencoder provide additional means of explainability, we ultimately look to non-variational methods for quantifying uncertainty under the autoencoder framework. We utilize Shannon's differential entropy to accomplish the task of uncertainty quantification in a general nonlinear and non-Gaussian setting. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to this more general framework, where nonlinear and non-Gaussian characteristics in the data are permitted. Furthermore, we are able to establish explicit connections between high-order moments in the data and those in the latent space, which induce a natural latent space decomposition and, by extension, an explanation of the estimated uncertainty. The proposed methods are intended to be utilized in economic and financial factor models in state space form, building on recent developments in the application of neural networks to factor models for financial and economic time series analysis. Finally, we demonstrate the efficacy of the proposed methods on high-frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets. / Master of Science / This thesis establishes methods to quantify and explain uncertainty in time series data, along with improvements on some latent variable neural networks called autoencoders and variational autoencoders. Autoencoders and variational autoencoders are called latent variable neural networks since they can estimate a representation of the data that has lower dimension than the original data. These neural network architectures have a fundamental connection to a classical latent variable method called principal component analysis, which performs a similar task of dimension reduction but under more restrictive assumptions than autoencoders and variational autoencoders. In contrast to principal component analysis, a common ailment of neural networks is the lack of explainability, which accounts for the colloquial term black-box models. While the improvements on the standard autoencoders and variational autoencoders help with the problem of explainability, we ultimately look to alternative probabilistic methods for quantifying uncertainty. To accomplish this task, we focus on Shannon's differential entropy, which is entropy applied to continuous domains such as time series data. Entropy is intricately connected to the notion of uncertainty, since it depends on the amount of randomness in the data. Together with previously established connections between autoencoders and principal component analysis, we motivate the focus on differential entropy as a proper abstraction of principal component analysis to a general framework that does not require the restrictive assumptions of principal component analysis. Furthermore, we are able to establish explicit connections between high-order moments in the data and the estimated latent variables (i.e., the reduced-dimension representation of the data). Estimating high-order moments allows for a more accurate estimation of the true distribution of the data. By connecting the estimated high-order moments in the data to the latent variables, we obtain a natural decomposition of the uncertainty surrounding the latent variables, which allows for increased explainability of the proposed autoencoder. The methods introduced in this thesis are intended to be utilized in a class of economic and financial models called factor models, which are frequently used in policy and investment analysis. A factor model is another type of latent variable model, which, in addition to estimating a reduced-dimension representation of the data, provides a means to forecast future observations. Finally, we demonstrate the efficacy of the proposed methods on high-frequency hourly foreign exchange rates, macroeconomic signals, and synthetically generated autoregressive data sets. The results support the superiority of the entropy-based autoencoder over the standard variational autoencoder in both capability and computational expense.
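As a minimal sketch of using Shannon's differential entropy as an uncertainty summary of a latent space, the code below fits a Gaussian to latent codes and evaluates the closed-form entropy; this is only a Gaussian-approximation stand-in, not the thesis's nonlinear, non-Gaussian estimator or its autoencoder training.

```python
# Minimal sketch (assumptions: latent codes are summarized by their sample covariance and
# a Gaussian approximation of Shannon differential entropy; the thesis's high-order-moment
# decomposition and autoencoder training loop are not reproduced here).
import numpy as np

def gaussian_differential_entropy(z):
    """Entropy (nats) of a Gaussian fitted to latent codes z of shape (samples, dims)."""
    k = z.shape[1]
    cov = np.atleast_2d(np.cov(z, rowvar=False))
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2.0 * np.pi * np.e) + logdet)

rng = np.random.default_rng(0)
z_tight = rng.standard_normal((5000, 3)) * 0.1    # low-uncertainty latent space
z_spread = rng.standard_normal((5000, 3)) * 2.0   # high-uncertainty latent space
print(gaussian_differential_entropy(z_tight), gaussian_differential_entropy(z_spread))
```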
206

DATA-DRIVEN APPROACHES FOR UNCERTAINTY QUANTIFICATION WITH PHYSICS MODELS

Huiru Li (18423333) 25 April 2024 (has links)
<p dir="ltr">This research aims to address these critical challenges in uncertainty quantification. The objective is to employ data-driven approaches for UQ with physics models.</p>
207

Probabilistic and Statistical Learning Models for Error Modeling and Uncertainty Quantification

Zavar Moosavi, Azam Sadat 13 March 2018 (has links)
Simulations and modeling of large-scale systems are vital to understanding real-world phenomena. However, even advanced numerical models can only approximate the true physics. The discrepancy between model results and nature can be attributed to different sources of uncertainty, including the parameters of the model, the input data, or some missing physics that is not included in the model due to a lack of knowledge or high computational costs. Uncertainty reduction approaches seek to improve model accuracy by decreasing the overall uncertainties in models. Aiming to contribute to this area, this study explores uncertainty quantification and reduction approaches for complex physical problems. This study proposes several novel probabilistic and statistical approaches for identifying the sources of uncertainty, modeling the errors, and reducing uncertainty to improve the model predictions for large-scale simulations. We explore different computational models. The first class of models studied herein is inherently stochastic, and its numerical approximations suffer from stability and accuracy issues. The second class consists of partial differential equations, which capture the laws of mathematical physics; however, they only approximate a more complex reality and have uncertainties due to missing dynamics that are not captured by the models. The third class consists of low-fidelity models, which are fast approximations of very expensive high-fidelity models. These reduced-order models have uncertainty due to the loss of information in the dimension reduction process. We also consider uncertainty analysis in the data assimilation framework, specifically for ensemble-based methods where the effect of sampling errors is alleviated by localization. Finally, we study the uncertainty in numerical weather prediction models coming from approximate descriptions of physical processes. / Ph. D.
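For the ensemble-based data assimilation point above, where sampling errors are alleviated by localization, the following sketch tapers a noisy small-ensemble sample covariance with a simple Gaussian distance function on a 1-D state; the taper and dimensions are illustrative, not the dissertation's specific scheme.

```python
# Minimal sketch (assumptions: a 1-D state, a small ensemble whose sample covariance is
# noisy, and a Gaussian distance taper standing in for the localization schemes studied
# in the dissertation).
import numpy as np

rng = np.random.default_rng(2)
n_state, n_ens, length_scale = 60, 20, 5.0
idx = np.arange(n_state)
true_cov = np.exp(-0.5 * (np.subtract.outer(idx, idx) / 8.0) ** 2) + 1e-6 * np.eye(n_state)
ensemble = rng.multivariate_normal(np.zeros(n_state), true_cov, size=n_ens)

sample_cov = np.cov(ensemble, rowvar=False)          # noisy due to the small ensemble

# Localization: damp spurious long-range correlations caused by sampling error.
dist = np.abs(np.subtract.outer(idx, idx))
taper = np.exp(-0.5 * (dist / length_scale) ** 2)
localized_cov = taper * sample_cov                   # Schur (element-wise) product

err = lambda c: np.linalg.norm(c - true_cov) / np.linalg.norm(true_cov)
print("raw error:", round(err(sample_cov), 3), "localized error:", round(err(localized_cov), 3))
```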
208

Uncertainty Quantification, State and Parameter Estimation in Power Systems Using Polynomial Chaos Based Methods

Xu, Yijun 31 January 2019 (has links)
It is a well-known fact that a power system contains many sources of uncertainty. These uncertainties, which come from the loads, the renewables, the model, and the measurements, among others, influence the steady-state and dynamic response of the power system. Facing this problem, traditional methods, such as the Monte Carlo method and the perturbation method, are either too time-consuming or suffer from the strong nonlinearity in the system. To address these issues, this dissertation focuses on developing polynomial-chaos-based methods to replace the traditional ones. Using these methods, the uncertainties from the model and the measurements are propagated through the polynomial chaos bases at a set of collocation points. The approximated polynomial chaos coefficients contain the statistical information. The method greatly accelerates the calculation without losing accuracy, even when the system is highly stressed. In this dissertation, both the forward problem and the inverse problem of uncertainty quantification are discussed. The forward problems include the probabilistic power flow problem and statistical power system dynamic simulations. The generalized polynomial chaos method, the adaptive polynomial chaos-ANOVA method, and the multi-element polynomial chaos method are introduced and compared. The case studies show that the proposed methods perform well in the statistical analysis of large-scale power systems. The inverse problems include the state and parameter estimation problems. A novel polynomial-chaos-based Kalman filter is proposed. Comparison studies with traditional Kalman filters demonstrate the good performance of the proposed filter. We further explore the area dynamic parameter estimation problem under the Bayesian inference framework. The polynomial chaos expansions are treated as the response surface of the full dynamic solver. Combined with a hybrid Markov chain Monte Carlo method, the proposed method yields very high estimation accuracy while greatly reducing the computing time. For both the forward and the inverse problems, the polynomial-chaos-based methods have shown great advantages over the traditional methods. These computational techniques can improve the efficiency and accuracy of power system planning, guarantee the rationality and reliability of power system operations, and, finally, speed up power system dynamic security assessment. / PHD / It is a well-known fact that a power system state is inherently stochastic. Sources of stochasticity include load random variations, renewable energy intermittencies, and random outages of generating units, lines, and transformers, to cite a few. These stochasticities translate into uncertainties in the models that are assumed to describe the steady-state and dynamic behavior of a power system. Now, these models are themselves approximate since they are based on some assumptions that are typically violated in practice. Therefore, it does not come as a surprise that recent research activities in power systems are focusing on how to cope with uncertainties when dealing with power system planning, monitoring, and control. This dissertation develops polynomial-chaos-based methods for quantifying and managing these uncertainties. Three major topics, namely uncertainty quantification, state estimation, and parameter estimation, are discussed. The developed methods can improve the efficiency and accuracy of power system planning, guarantee the rationality and reliability of power system operations in dealing with uncertainties, and, finally, enhance the resilience of power systems.
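A minimal sketch of the polynomial chaos idea, assuming a single standard-normal uncertain input and a hypothetical scalar response standing in for a power system quantity: chaos coefficients are fit by least squares at collocation points, and the mean and variance are read directly off the coefficients.

```python
# Minimal sketch (assumptions: one standard-normal uncertain input, a hypothetical scalar
# nonlinear response, and a least-squares fit of probabilists' Hermite polynomial-chaos
# coefficients at random collocation points).
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def response(xi):
    """Hypothetical nonlinear system response to the uncertain input."""
    return np.sin(0.8 * xi) + 0.3 * xi**2

deg = 6
rng = np.random.default_rng(3)
xi_col = rng.standard_normal(200)                 # collocation points
V = He.hermevander(xi_col, deg)                   # He_0 .. He_6 evaluated at the points
coeff, *_ = np.linalg.lstsq(V, response(xi_col), rcond=None)

# Statistics come directly from the coefficients, using E[He_m He_n] = n! * delta_mn.
mean_pce = coeff[0]
var_pce = sum(coeff[n] ** 2 * factorial(n) for n in range(1, deg + 1))

xi_mc = rng.standard_normal(200_000)              # Monte Carlo reference
print("PCE mean/var:", mean_pce, var_pce)
print("MC  mean/var:", response(xi_mc).mean(), response(xi_mc).var())
```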
209

Power Electronics Design Methodologies with Parametric and Model-Form Uncertainty Quantification

Rashidi Mehrabadi, Niloofar 27 April 2018 (has links)
Modeling and simulation have become fully ingrained into the set of design and development tools that are broadly used in the field of power electronics. Stated simply, they represent the fastest and safest way to study a circuit or system, thus aiding in the research, design, diagnosis, and debugging phases of power converter development. Advances in computing technologies have also enabled the ability to conduct reliability and production yield analyses to ensure that the system performance can meet given requirements despite the presence of inevitable manufacturing variability and variations in the operating conditions. However, the trustworthiness of all the model-based design techniques depends entirely on the accuracy of the simulation models used, which, thus far, has not been fully considered. Prior to this research, heuristic safety factors were used to compensate for deviation of real system performance from the predictions made using modeling and simulation. This approach invariably resulted in a more conservative design process. In this research, a modeling and design approach with parametric and model-form uncertainty quantification is formulated to bridge the modeling and simulation accuracy and reliance gaps that have hindered the full exploitation of model-based design techniques. Prior to this research, a few design approaches were developed to account for variability in the design process; these approaches have not been shown to be applicable to complex systems. This research, however, demonstrates that the implementation of the proposed modeling approach is able to handle complex power converters and systems. A systematic study for developing a simplified test bed for uncertainty quantification analysis is introduced accordingly. For illustrative purposes, the proposed modeling approach is applied to the switching model of a modular multilevel converter to improve the existing modeling practice and validate the model used in the design of this large-scale power converter. The proposed modeling and design methodology is also extended to design optimization, where a robust multi-objective design and optimization approach with parametric and model-form uncertainty quantification is proposed. A sensitivity index is defined accordingly as a quantitative measure of system design robustness with regard to manufacturing variability and modeling inaccuracies in the design of systems with multiple performance functions. The optimum design solution is realized by exploring the Pareto front of the enhanced performance space, where the model-form error associated with each design is used to modify the estimated performance measures. The parametric sensitivity of each design point is also considered to discern between cases and help identify the most parametrically robust of the Pareto-optimal design solutions. To demonstrate the benefits of incorporating uncertainty quantification analysis into the design optimization from a more practical standpoint, a Vienna-type rectifier is used as a case study to compare the theoretical analysis with a comprehensive experimental validation. This research shows that the model-form error and sensitivity of each design point can potentially change the performance space and the resultant Pareto front. As a result, ignoring these main sources of uncertainty in the design leads to incorrect decision-making and the choice of a design that is not an optimum design solution in practice. / Ph. D.
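To illustrate the flavor of robustness-aware design selection described above, and not the dissertation's converter models or model-form error terms, the sketch below propagates hypothetical component tolerances through two made-up performance functions (loss and volume), uses the resulting spread as a crude parametric sensitivity measure, and picks the most robust point on the Pareto front of the mean objectives.

```python
# Minimal sketch (assumptions: two hypothetical performance functions of a single design
# variable, Gaussian component-value scatter, and the sample spread as a crude
# robustness/sensitivity measure; no model-form error terms are included).
import numpy as np

rng = np.random.default_rng(4)

def performances(x, tol):
    """Return (loss, volume) for designs x under component-value scatter of std `tol`."""
    p = x * (1.0 + tol * rng.standard_normal(x.shape))  # perturbed design parameter
    loss = 1.0 / p + 0.05 * p                            # hypothetical loss trade-off
    volume = 0.2 * p                                     # hypothetical volume metric
    return loss, volume

designs = np.linspace(0.5, 6.0, 30)
samples = np.stack([np.column_stack(performances(designs, tol=0.05)) for _ in range(500)])
mean_perf = samples.mean(axis=0)                         # (30, 2) expected objectives
sensitivity = samples.std(axis=0).sum(axis=1)            # spread of both objectives per design

# Pareto filter on the mean objectives (both minimized).
def is_dominated(i):
    return np.any(np.all(mean_perf <= mean_perf[i], axis=1) &
                  np.any(mean_perf < mean_perf[i], axis=1))

pareto = [i for i in range(len(designs)) if not is_dominated(i)]
best = min(pareto, key=lambda i: sensitivity[i])         # most parametrically robust Pareto point
print("Pareto designs:", np.round(designs[pareto], 2), "chosen:", round(designs[best], 2))
```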
210

Modeling and Analysis of a Cantilever Beam Tip Mass System

Meesala, Vamsi Chandra 22 May 2018 (has links)
We model the nonlinear dynamics of a cantilever beam with a tip mass subjected to different excitations, exploit the nonlinear behavior to perform sensitivity analysis, and propose a parameter identification scheme for nonlinear piezoelectric coefficients. First, the distributed-parameter governing equations of a cantilever beam with a tip mass subjected to principal parametric excitation, taking into consideration the nonlinear boundary conditions, are developed using the generalized Hamilton's principle. Using a Galerkin discretization scheme, the discretized equation for the first mode is developed for a simpler representation, assuming either linear or nonlinear boundary conditions. We solve the distributed-parameter and discretized equations separately using the method of multiple scales. We determine that the cantilever beam tip mass system subjected to parametric excitation is highly sensitive to the detuning. Finally, we show that assuming linearized boundary conditions yields the wrong type of bifurcation. Noting the high sensitivity to detuning of a cantilever beam with a tip mass subjected to parametric excitation, we analyze the sensitivity of the response to small variations in the elasticity (stiffness) and the tip mass. The governing equation of the first mode is derived, and the method of multiple scales is used to determine the approximate solution based on the order of the expected variations. We demonstrate that the system can be designed so that small variations in either the stiffness or the tip mass can alter the type of bifurcation. Notably, we show that the response of a system designed for a supercritical bifurcation can change to yield a subcritical bifurcation with small variations in the parameters. Although such a trend is usually undesired, we argue that it can be used to detect small variations induced by fatigue or small mass depositions in sensing applications. Finally, we consider a cantilever beam with a tip mass and piezoelectric layer and propose a parameter identification scheme that exploits the vibration response to estimate the nonlinear piezoelectric coefficients. We develop the governing equations of a cantilever beam with a tip mass and piezoelectric layer by considering an enthalpy that accounts for quadratic and cubic material nonlinearities. We then use the method of multiple scales to determine the approximate solution of the response to direct excitation. We show that the approximate solution, along with the amplitude and phase modulation equations obtained from the method of multiple scales, can be matched with numerical simulations of the response to estimate the nonlinear piezoelectric coefficients. / Master of Science
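As a numerical companion to the sensitivity-to-detuning observation, the sketch below integrates a damped Mathieu-Duffing oscillator standing in for the discretized first-mode equation; the coefficients are illustrative, not the thesis's, but they show how a small change in detuning moves the response in and out of the principal parametric resonance.

```python
# Minimal sketch (assumptions: a damped Mathieu-Duffing oscillator
# u'' + 2*zeta*u' + (1 + eps*cos(Omega*t))*u + alpha*u**3 = 0 stands in for the
# discretized first-mode equation; all coefficients are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, zeta, eps, Omega, alpha):
    u, v = y
    return [v, -2*zeta*v - (1.0 + eps*np.cos(Omega*t))*u - alpha*u**3]

def steady_amplitude(detuning, zeta=0.01, eps=0.10, alpha=1.0):
    """Integrate past the transient and report the late-time amplitude for a given detuning."""
    Omega = 2.0 + detuning    # principal parametric resonance sits near twice the natural frequency
    sol = solve_ivp(rhs, (0.0, 600.0), [0.01, 0.0],
                    args=(zeta, eps, Omega, alpha), max_step=0.05)
    tail = sol.y[0][sol.t > 500.0]
    return np.abs(tail).max()

for d in (-0.05, -0.02, 0.0, 0.02, 0.05):   # small detuning changes the response sharply
    print(f"detuning {d:+.2f} -> amplitude {steady_amplitude(d):.4f}")
```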
