101 |
Risk-Aware Human-In-The-Loop Multi-Robot Path Planning for Lost Person Search and Rescue. Cangan, Barnabas Gavin, 12 July 2019
We introduce a framework that enables autonomous aerial vehicles to assist human searchers in search and rescue scenarios arising from missing person incidents. We formulate a lost person behavior model and a human searcher model, both informed by data collected from past search missions. These models are used to generate a probabilistic heatmap of the lost person's position and anticipated searcher trajectories. We use Gaussian processes with a Gibbs kernel for data fusion, which accurately models a limited field-of-view sensor. Our algorithm thereby computes a set of trajectories for a team of aerial vehicles to navigate autonomously, so as to assist and complement human searchers' efforts. / Master of Science / Our goal is to assist human searchers with autonomous aerial vehicles in search and rescue scenarios arising from missing person incidents. We formulate a lost person behavior model and a human searcher model, both informed by data collected from past search missions. These models are used to generate a probabilistic heatmap of the lost person's position and anticipated searcher trajectories. We use Gaussian processes with a Gibbs kernel for data fusion, which accurately models a limited field-of-view sensor. Our algorithm thereby computes a set of trajectories for a team of aerial vehicles to navigate autonomously, so as to assist and complement human searchers' efforts.
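The Gibbs kernel referenced above is a non-stationary generalization of the squared-exponential kernel whose length scale varies with the input, which is what lets a GP encode a sensor whose fidelity changes across its field of view. A minimal 1D sketch, assuming a hypothetical length-scale profile `l_fn` and stand-in detection data (the thesis's actual sensor model and data are not shown):

```python
import numpy as np

def gibbs_kernel(x1, x2, length_scale):
    """Non-stationary Gibbs kernel in 1D:
    k(x, x') = sqrt(2 l(x) l(x') / (l(x)^2 + l(x')^2))
               * exp(-(x - x')^2 / (l(x)^2 + l(x')^2))."""
    l1 = length_scale(x1)[:, None]                  # shape (n1, 1)
    l2 = length_scale(x2)[None, :]                  # shape (1, n2)
    sq = l1**2 + l2**2
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return np.sqrt(2.0 * l1 * l2 / sq) * np.exp(-d2 / sq)

# Hypothetical profile: correlation length grows away from the sensor
# boresight at x = 0, mimicking a limited field of view.
l_fn = lambda x: 0.5 + 0.2 * np.abs(x)

x_obs = np.linspace(-3, 3, 25)                      # fused observation sites
y_obs = np.exp(-x_obs**2)                           # stand-in detection values
x_grid = np.linspace(-3, 3, 200)                    # heatmap grid

K = gibbs_kernel(x_obs, x_obs, l_fn) + 1e-4 * np.eye(len(x_obs))
K_star = gibbs_kernel(x_grid, x_obs, l_fn)
heatmap_mean = K_star @ np.linalg.solve(K, y_obs)   # GP posterior mean
```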
|
102 |
Physics-informed Machine Learning for Digital Twins of Metal Additive Manufacturing. Gnanasambandam, Raghav, 07 May 2024
Metal additive manufacturing (AM) is an emerging technology for producing parts with virtually no constraint on the geometry. AM builds a part by depositing materials in a layer-by-layer fashion. Despite the benefits in several critical applications, quality issues are one of the primary concerns for the widespread adoption of metal AM. Addressing these issues starts with a better understanding of the underlying physics and includes monitoring and controlling the process in a real-world manufacturing environment. Digital Twins (DTs) are virtual representations of physical systems that enable fast and accurate decision-making. DTs rely on Artificial Intelligence (AI) to process complex information from multiple sources in a manufacturing system at multiple levels. This information typically comes from partially known process physics, in-situ sensor data, and ex-situ quality measurements for a metal AM process. Most current AI models cannot handle such ill-structured information from metal AM. Thus, this work proposes three novel machine-learning methods for improving the quality of metal AM processes. These methods enable DTs to control quality in several processes, including laser powder bed fusion (LPBF) and additive friction stir deposition (AFSD). The three proposed methods are as follows: 1. Process improvement requires mapping the process parameters to ex-situ quality measurements. These mappings often tend to be non-stationary, with limited experimental data. This work proposes a novel Deep Gaussian Process-based Bayesian optimization (DGP-SI-BO) method for sequential process design. A DGP can model non-stationarity better than a traditional Gaussian Process (GP), but it is challenging to use within BO. The proposed DGP-SI-BO provides a bagging procedure for the acquisition function with a DGP surrogate model inferred via Stochastic Imputation (SI). For a fixed time budget, the proposed method gives 10% better quality for the LPBF process than the widely used BO method while being three times faster than the state-of-the-art method.
2. For metal AM, the process physics information is usually in the form of Partial Differential Equations (PDEs). Though the PDEs, along with in-situ data, can be handled through Physics-informed Neural Networks (PINNs), the activation functions in NNs are traditionally not designed to handle multi-scale PDEs. This work proposes a novel activation function, the Self-scalable tanh (Stan), for PINNs. The proposed activation function modifies the traditional tanh function. The Stan function is smooth, non-saturating, and has a trainable parameter. It allows an easy flow of gradients and enables systematic scaling of the input-output mapping during training. Apart from solving the heat transfer equations for LPBF and AFSD, this work provides applications in areas including quantum physics and solid and fluid mechanics. The Stan function also accelerates the notoriously hard and ill-posed inverse discovery of process physics.
3. PDE-based simulations are typically far too slow for in-situ process control. This work proposes to use a Fourier Neural Operator (FNO) for near-instantaneous predictions (a 1000-fold speedup) of quality in metal AM. FNO is a data-driven method that maps the process parameters to a high-dimensional quality tensor (such as the thermal distribution in LPBF). Training the FNO with simulated data from the PINN ensures a quick response to alter the course of the manufacturing process. Once trained, a DT can readily deploy the model for real-time process monitoring.
The proposed methods combine complex information to provide reliable machine-learning models and improve understanding of metal AM processes. Though these models can stand alone, they complement each other to build DTs and achieve quality assurance in metal AM. / Doctor of Philosophy / Metal 3D printing, technically known as metal additive manufacturing (AM), is an emerging technology for making virtually any physical part with the click of a button. For instance, one of the most common AM processes, Laser Powder Bed Fusion (LPBF), melts metal powder using a laser to build up any desired shape. Despite its attractiveness, the quality of the built part is often not satisfactory for its intended use. For example, a metal plate built for a fractured bone may not adhere to the required dimensions. Improving the quality of metal AM parts starts with a better understanding of the underlying mechanisms at a fine length scale (the size of the powder or even smaller). Collecting data during the process and leveraging the known physics can help adjust the AM process to improve quality. Digital Twins (DTs) are exactly suited for this task, as they combine the process physics and the data obtained from sensors on metal AM machines to inform an AM machine of process settings and adjustments. This work develops three specific methods that utilize the known information from metal AM to improve the quality of the parts built by metal AM machines. These methods combine different types of known information to alter the process settings so that metal AM machines produce high-quality parts.
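The Stan activation described in method 2 admits a compact implementation. A minimal PyTorch sketch, assuming the published form Stan(x) = tanh(x) + beta * x * tanh(x) with a trainable per-neuron beta (the initialization and network sizes here are illustrative, not the thesis's):

```python
import torch
import torch.nn as nn

class Stan(nn.Module):
    """Self-scalable tanh: Stan(x) = tanh(x) + beta * x * tanh(x).
    The trainable, per-neuron beta lets the network rescale its
    input-output mapping during training, easing gradient flow for
    multi-scale PDE residuals. Initializing beta to ones is an assumption."""
    def __init__(self, num_features: int):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(num_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.tanh(x)
        return t + self.beta * x * t

# A small PINN-style fully connected network using Stan between layers.
pinn = nn.Sequential(
    nn.Linear(2, 64), Stan(64),
    nn.Linear(64, 64), Stan(64),
    nn.Linear(64, 1),
)
u = pinn(torch.rand(128, 2))  # e.g. (x, t) inputs for a heat-transfer PDE
```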
|
103 |
Statistical Methods for Non-Linear Profile Monitoring. Quevedo Candela, Ana Valeria, 02 January 2020
There has been increased interest and extensive research in the monitoring over time of a process whose characteristics are represented mathematically in functional forms, such as profiles. Most current techniques require all of the data for each profile in order to determine the state of the process. Thus, quality engineers in industries such as agriculture, aquaculture, and chemical production cannot make the corrections to the current profile that are essential for correcting their processes at an early stage. In addition, most current techniques focus on the statistical significance of the parameters or features of the model rather than on practical significance, which often relates to the actual quality characteristic. The goal of this research is to provide alternatives that address these two main concerns. First, we study the use of a Shewhart-type control chart to monitor within profiles, where the central line is the predictive mean profile and the control limits are formed from the prediction band. Second, we study a statistic based on a non-linear mixed model, recognizing that the model leads to correlations among the estimated parameters. / Doctor of Philosophy / Checking the stability over time of the quality of a process that is best expressed by a relationship between a quality characteristic and other variables involved in the process has received increasing attention. The goal of this research is to provide alternative methods for determining the state of such a process. Both methods presented here are compared to current methodologies. The first method allows us to monitor a process while the data are still being collected. The second is based on the quality characteristic of the process and takes full advantage of the model structure. Both methods appear to be more robust than the current most well-known method.
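The within-profile chart described above lends itself to a compact illustration. A minimal sketch, assuming synthetic in-control profiles and a simple pointwise mean and band in place of the dissertation's predictive mean profile and prediction band:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)                        # common observation grid
target = 2.0 / (1.0 + np.exp(-8.0 * (t - 0.5)))  # in-control mean profile
history = target + rng.normal(0.0, 0.05, size=(30, 50))

# Central line and band estimated pointwise from historical profiles.
center = history.mean(axis=0)
spread = history.std(axis=0, ddof=1)
upper, lower = center + 3 * spread, center - 3 * spread

def monitor(partial_profile):
    """Signal while the profile is still being collected: return the
    indices of observed points that fall outside the band."""
    m = len(partial_profile)
    out = (partial_profile > upper[:m]) | (partial_profile < lower[:m])
    return np.flatnonzero(out)

# A profile drifting upward is flagged before the run completes.
drifting = target[:20] + 0.5 * t[:20]
print(monitor(drifting))
```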
|
104 |
Computer Experimental Design for Gaussian Process Surrogates. Zhang, Boya, 01 September 2020
With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. A surrogate model, or emulator, is often employed as a fast substitute for the simulator. Meanwhile, a common challenge in computer experiments and related fields is to explore the input space efficiently using a small number of samples, i.e., the experimental design problem. This dissertation focuses on the design problem under Gaussian process surrogates. The first work demonstrates empirically that space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. A purely random design is shown to be superior to higher-powered alternatives in many cases. Thereafter, a new family of distance-based designs is proposed, and its superior performance is illustrated in both static (one-shot design) and sequential settings. The second contribution is motivated by an agent-based model (ABM) of delta smelt conservation. The ABM was developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. A batch sequential design scheme is proposed, generalizing one-at-a-time variance-based active learning, as a means of keeping multi-core cluster nodes fully engaged with expensive runs. The acquisition strategy is carefully engineered to favor the selection of replicates, which boost statistical and computational efficiencies. Design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream high-fidelity input sensitivity analysis. / Doctor of Philosophy / With the rapid development of computing power, computer experiments have gained popularity in various scientific fields, such as cosmology, ecology, and engineering. However, some computer experiments for complex processes are still computationally demanding. Thus, a statistical model built upon input-output observations, a so-called surrogate model or emulator, is needed as a fast substitute for the simulator. Design of experiments, i.e., how to select samples from the input space under budget constraints, is also worth studying. This dissertation focuses on the design problem under Gaussian process (GP) surrogates. The first work demonstrates empirically that commonly used space-filling designs disappoint when the model hyperparameterization is unknown and must be estimated from data observed at the chosen design sites. Thereafter, a new family of distance-based designs is proposed, and its superior performance is illustrated in both static settings (design points allocated in one shot) and sequential settings (data sampled sequentially). The second contribution is motivated by a stochastic computer simulator of delta smelt conservation. This simulator was developed to assist in a study of delta smelt life cycles and to understand sensitivities to myriad natural variables and human interventions. However, the input space is high-dimensional, running the simulator is time-consuming, and its outputs change nonlinearly in both mean and variance. An innovative batch sequential design method is proposed, generalizing one-at-a-time sequential design to a one-batch-at-a-time scheme with the goal of parallel computing. The criterion for subsequent data acquisition is carefully engineered to favor the selection of replicates, which boost statistical and computational efficiencies. The design performance is illustrated on a range of toy examples before embarking on a smelt simulation campaign and a downstream input sensitivity analysis.
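The batch sequential idea can be sketched compactly. A minimal sketch with scikit-learn, assuming a toy heteroscedastic simulator and plain highest-variance batch selection (the dissertation's actual acquisition criterion is more carefully engineered, notably to favor replicates):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy simulator standing in for an expensive stochastic computer model.
def simulator(x, rng):
    return np.sin(6 * x) + rng.normal(0.0, 0.1 + 0.3 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 8)[:, None]                 # initial one-shot design
y = simulator(X.ravel(), rng)
candidates = np.linspace(0, 1, 201)[:, None]

for _ in range(5):                                # five acquisition batches
    gp = GaussianProcessRegressor(
        kernel=RBF(0.2) + WhiteKernel(0.05), normalize_y=True
    ).fit(X, y)
    _, sd = gp.predict(candidates, return_std=True)
    batch = candidates[np.argsort(sd)[-4:]]       # 4 highest-variance sites
    X = np.vstack([X, batch])                     # one batch keeps 4 cores busy
    y = np.concatenate([y, simulator(batch.ravel(), rng)])
```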
|
105 |
Robust and Data-Efficient Metamodel-Based Approaches for Online Analysis of Time-Dependent Systems. Xie, Guangrui, 04 June 2020
Metamodeling is regarded as a powerful analysis tool for learning the input-output relationship of a system from a limited amount of data, collected when experiments with real systems are costly or impractical. As a popular metamodeling method, Gaussian process regression (GPR) has been successfully applied to analyses of various engineering systems. However, GPR-based metamodeling for time-dependent systems (TDSs) is especially challenging for three reasons. First, TDSs require an appropriate account of temporal effects; however, standard GPR cannot address temporal effects easily and satisfactorily. Second, TDSs typically require analytics tools with sufficiently high computational efficiency to support online decision making, but standard GPR may not be adequate for real-time implementation. Lastly, reliable uncertainty quantification is a key to success for operational planning of TDSs in the real world, yet research on how to construct adequate error bounds for GPR-based metamodeling is sparse. Inspired by the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs), this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques that enhance the computational and statistical efficiencies of GPR-based metamodeling to meet the requirements of practical implementations. Furthermore, an in-depth investigation into building uniform error bounds for stochastic kriging is conducted, which lays a foundation for developing robust GPR-based metamodeling techniques for analyses of TDSs under the impact of strong heteroscedasticity. / Ph.D. / Metamodeling has been regarded as a powerful analysis tool for learning the input-output relationship of an engineering system from a limited amount of experimental data. As a popular metamodeling method, Gaussian process regression (GPR) has been widely applied to analyses of various engineering systems whose input-output relationships do not depend on time. However, GPR-based metamodeling for time-dependent systems (TDSs), whose input-output relationships depend on time, is especially challenging for three reasons. First, standard GPR cannot properly address temporal effects for TDSs. Second, standard GPR is typically not computationally efficient enough for real-time implementations in TDSs. Lastly, research on how to adequately quantify the uncertainty associated with the performance of GPR-based metamodeling is sparse. To fill this knowledge gap, this dissertation aims to develop novel modeling, sampling, and statistical analysis techniques that enhance standard GPR to meet the requirements of practical implementations for TDSs. Effective solutions are provided to address the challenges encountered in GPR-based analyses of two representative stochastic TDSs, i.e., load forecasting in a power system and trajectory prediction for unmanned aerial vehicles (UAVs). Furthermore, an in-depth investigation into quantifying the uncertainty associated with the performance of stochastic kriging (a variant of standard GPR) is conducted, which lays a foundation for developing robust GPR-based metamodeling techniques for analyses of more complex TDSs.
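Stochastic kriging, mentioned above, handles heteroscedastic simulation noise by estimating a separate noise variance at each design point from replicated runs. A minimal NumPy sketch on synthetic data (the kernel and constants are illustrative, and the pointwise band shown is not the uniform error bound studied in the dissertation):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 12)                    # design points
reps = 20                                    # replications per point
Y = np.sin(2 * np.pi * x)[None, :] + rng.normal(
    0.0, 0.1 + 0.5 * x, size=(reps, 12))     # noise grows with x

ybar = Y.mean(axis=0)                        # replicate averages
var_hat = Y.var(axis=0, ddof=1) / reps       # variance of each sample mean

def rbf(a, b, ls=0.25):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

# Per-point noise variances replace the usual constant nugget.
K = rbf(x, x) + np.diag(var_hat)
xs = np.linspace(0, 1, 200)
Ks = rbf(xs, x)
mean = Ks @ np.linalg.solve(K, ybar)                     # SK predictor
cov = rbf(xs, xs) - Ks @ np.linalg.solve(K, Ks.T)
band = 2 * np.sqrt(np.clip(np.diag(cov), 0, None))       # pointwise band
```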
|
106 |
Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes. Macatula, Romcholo Yulo, 21 July 2020
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression for the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions for the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm but fall short of a typical Bayesian inference method in some respects. / Master of Science / Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty surrounding those estimates. Examples of model parameters include physical properties such as densities, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods in the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
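For linear-Gaussian models, the posterior referenced above is available in closed form. A generic sketch on a toy 1D deconvolution problem, assuming a Gaussian prior x ~ N(0, tau^2 I) and noise level sigma (the thesis's specific surrogate-based formulation is not reproduced here):

```python
import numpy as np
from scipy.linalg import toeplitz

# Closed-form Gaussian posterior for y = A x + noise, illustrated on
# 1D deconvolution (A is a Toeplitz blurring matrix).
rng = np.random.default_rng(3)
n = 80
psf = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)     # blur point-spread shape
A = toeplitz(psf / psf.sum())
x_true = (np.abs(np.arange(n) - n // 2) < 10).astype(float)  # boxcar signal
sigma = 0.02
y = A @ x_true + rng.normal(0, sigma, n)

tau = 1.0                                          # illustrative prior scale
prec_post = A.T @ A / sigma**2 + np.eye(n) / tau**2
cov_post = np.linalg.inv(prec_post)
mean_post = cov_post @ (A.T @ y / sigma**2)        # posterior mean estimate
sd_post = np.sqrt(np.diag(cov_post))               # pointwise uncertainty
```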
|
107 |
Scalable Estimation on Linear and Nonlinear Regression Models via Decentralized Processing: Adaptive LMS Filter and Gaussian Process Regression. Nakai, Ayano, 24 November 2021
Kyoto University / New degree system, course-based doctorate / Doctor of Informatics / Degree No. Ko 23588 / Joho-haku No. 782 / Shinsei||Jo||133 (University Library) / Department of Systems Science, Graduate School of Informatics, Kyoto University / (Chief examiner) Professor Toshiyuki Tanaka; Professor Hidetoshi Shimodaira; Associate Professor Kazunori Sakurama / Qualified under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Informatics / Kyoto University / DFAM
|
108 |
Hierarchical Gaussian Processes for Spatially Dependent Model Selection. Fry, James Thomas, 18 July 2018
In this dissertation, we develop a model selection and estimation methodology for nonstationary spatial fields. Large, spatially correlated datasets often cover a vast geographical area. However, local spatial regions may have different mean and covariance structures. Our methodology accomplishes three goals: (1) cluster locations into small regions with distinct, stationary models, (2) perform Bayesian model selection within each cluster, and (3) correlate the model selection and estimation in nearby clusters. We utilize the Conditional Autoregressive (CAR) model and the Ising distribution to provide inter-cluster correlation on the linear effects and model inclusion indicators, while modeling intra-cluster correlation with separate Gaussian processes. We apply our model selection methodology to a dataset involving the prediction of Brook trout presence in subwatersheds across Pennsylvania. We find that our methodology outperforms the stationary spatial model and that different regions in Pennsylvania are governed by separate Gaussian process regression models. / Ph. D. / In this dissertation, we develop a statistical methodology for analyzing data where observations are related to each other due to spatial proximity. Our overall goal is to determine which attributes are important when predicting the response of interest. However, the effect and importance of an attribute may differ depending on the spatial location of the observation. Our methodology accomplishes three goals: (1) partition the observations into small spatial regions, (2) determine which attributes are important within each region, and (3) enforce that the importance of variables should be similar in regions that are near each other. We apply our technique to a dataset involving the prediction of Brook trout presence in subwatersheds across Pennsylvania.
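The CAR component can be made concrete with a small sketch. A minimal draw from a proper CAR prior over a hypothetical four-cluster chain (the adjacency matrix, rho, and tau are illustrative; the Ising component plays the analogous role for the binary model inclusion indicators):

```python
import numpy as np

# Proper CAR prior: cluster-level effects phi over an adjacency graph W,
# phi ~ N(0, tau^2 * (D - rho * W)^{-1}), so effects in adjacent clusters
# are positively correlated.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # chain of 4 clusters
D = np.diag(W.sum(axis=1))
rho, tau = 0.9, 1.0
Q = (D - rho * W) / tau**2                  # CAR precision matrix
cov = np.linalg.inv(Q)

rng = np.random.default_rng(4)
phi = rng.multivariate_normal(np.zeros(4), cov)   # one draw of the effects

# Implied correlation matrix: neighbors correlate more than distant pairs.
print(np.round(cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov))), 2))
```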
|
109 |
A Machine Learning Model Predicting Errors in Simplified Continental Ice Sheet Simulations. Heumann, Joakim, January 2024
Continental ice sheet simulations are commonly based on either the Full Stokes (FS) model or its simplification, the Shallow Ice Approximation (SIA) model. This thesis examines a machine learning error estimation approach for assessing the accuracy of solutions to the SIA model, where the reference (exact) solution is that of the FS model. We use Gaussian Process (GP) regression through existing GP libraries in Python to build and train the model. For computational efficiency we use Variational Nearest Neighbor Gaussian Processes (VNNGP), where the inputs are the SIA solution and the ice sheet geometry characteristics, and the output is the error between the SIA solution and the FS solution. We find that models trained on various ice sheet geometries are able to make rough predictions for simple geometries not seen during training; however, we observe a poor fit for the much more complex Greenland geometry, which suggests further work using more diverse geometries for training.
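The input-output mapping described above can be sketched with a standard exact GP in scikit-learn; the thesis uses VNNGP for scalability, so this stand-in only shows the structure, and the feature names and error model below are synthetic placeholders, not the thesis's data:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical features per grid point: [SIA surface velocity,
# ice thickness, bed slope]; target is the SIA-minus-FS error.
rng = np.random.default_rng(5)
X = rng.uniform(size=(300, 3))
error = 0.3 * X[:, 0] * X[:, 2] + rng.normal(0, 0.01, 300)  # stand-in error

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[0.3, 0.3, 0.3]) + WhiteKernel(1e-4),
    normalize_y=True,
).fit(X[:250], error[:250])

# Predicted SIA error plus its uncertainty on held-out geometry points.
pred, sd = gp.predict(X[250:], return_std=True)
```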
|
110 |
Optimal Q-Space Sampling Scheme: Using Gaussian Process Regression and Mutual Information. Hassler, Ture; Berntsson, Jonathan, January 2022
Diffusion spectrum imaging is a type of diffusion magnetic resonance imaging capable of capturing very complex tissue structures, but it requires a very large number of samples in q-space and therefore long scan times. The purpose of this project was to create and evaluate a new q-space sampling scheme for diffusion MRI, aiming to recreate the ensemble averaged propagator (EAP) from fewer samples without significant loss of quality. The sampling scheme was created by greedily selecting the measurements contributing the most mutual information. The EAP was then recreated using the sampling scheme and interpolation. The mutual information was approximated using the kernel from a Gaussian process machine learning model. The project showed limited but promising results on synthetic data, but it was highly restricted by the amount of available computational power. Having to resort to a lower-resolution mesh when calculating the optimal sampling scheme significantly reduced the overall performance.
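A sketch of greedy mutual-information selection in the style of Krause, Singh, and Guestrin: each step picks the candidate that is most uncertain given the points already chosen yet best predicted by the points not yet chosen. The RBF kernel and coarse grid below are stand-ins for the fitted GP kernel and the q-space mesh (coarse on purpose, echoing the compute limits noted above):

```python
import numpy as np

def rbf(a, b, ls=0.6):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def cond_var(i, idx, K, jitter=1e-8):
    """GP posterior variance of candidate i given the points in idx."""
    if not idx:
        return K[i, i]
    Kss = K[np.ix_(idx, idx)] + jitter * np.eye(len(idx))
    k = K[idx, i]
    return K[i, i] - k @ np.linalg.solve(Kss, k)

g = np.linspace(-1, 1, 5)                        # coarse 3D q-space grid
Q = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T
K = rbf(Q, Q)                                    # stand-in for the fitted kernel

selected, rest = [], list(range(len(Q)))
for _ in range(20):                              # 20-measurement scheme
    # Mutual-information gain: variance given picks over variance given rest.
    gains = [cond_var(i, selected, K) /
             cond_var(i, [j for j in rest if j != i], K)
             for i in rest]
    best = rest.pop(int(np.argmax(gains)))
    selected.append(best)

print(Q[selected][:5])                           # first few chosen q-points
```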
|