101

Linear Parameter Uncertainty Quantification using Surrogate Gaussian Processes

Macatula, Romcholo Yulo 21 July 2020 (has links)
We consider uncertainty quantification using surrogate Gaussian processes. We take a previous sampling algorithm and provide a closed-form expression of the resulting posterior distribution. We extend the method to weighted least squares and to a Bayesian approach, both with closed-form expressions of the resulting posterior distributions. We test the methods on 1D deconvolution and 2D tomography. Our new methods improve on the previous algorithm, but fall short of a typical Bayesian inference method in some aspects. / Master of Science / Parameter uncertainty quantification seeks to determine both estimates of model parameters and the uncertainty regarding those estimates. Examples of model parameters include physical properties such as density, growth rates, or even deblurred images. Previous work has shown that replacing data with a surrogate model can provide promising estimates with low uncertainty. We extend the previous methods in the specific setting of linear models. Theoretical results are tested on simulated computed tomography problems.
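A minimal sketch of the kind of closed-form Gaussian posterior the abstract refers to, for a generic linear model y = Ax + noise with a Gaussian prior; the forward operator, noise level, and prior scale below are illustrative stand-ins, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative linear forward model y = A x + noise (e.g., a blurring operator in 1D deconvolution).
n, p = 50, 20
A = np.exp(-0.5 * ((np.arange(n)[:, None] / n - np.arange(p)[None, :] / p) / 0.05) ** 2)
x_true = np.sin(2 * np.pi * np.linspace(0, 1, p))
noise_sd = 0.1
y = A @ x_true + noise_sd * rng.normal(size=n)

# Gaussian prior x ~ N(0, tau^2 I) and Gaussian likelihood give a closed-form Gaussian posterior:
#   cov  = (A^T A / sigma^2 + I / tau^2)^{-1}
#   mean = cov @ A^T y / sigma^2
tau = 1.0
post_prec = A.T @ A / noise_sd**2 + np.eye(p) / tau**2
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (A.T @ y) / noise_sd**2
post_sd = np.sqrt(np.diag(post_cov))

print("posterior mean (first 5):", np.round(post_mean[:5], 3))
print("posterior sd   (first 5):", np.round(post_sd[:5], 3))
```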
102

Neural Network Gaussian Process considering Input Uncertainty and Application to Composite Structures Assembly

Lee, Cheol Hei 18 May 2020 (has links)
Developing machine learning enabled smart manufacturing is promising for the composite structures assembly process. It requires accurate predictive analysis of the deformation of composite structures to improve the production quality and efficiency of composite structures assembly. The novel composite structures assembly involves two challenges: (i) the highly nonlinear and anisotropic properties of composite materials; and (ii) inevitable uncertainty in the assembly process. To overcome these problems, we propose a neural network Gaussian process model that accounts for input uncertainty in composite structures assembly. The deep architecture of our model allows us to approximate a complex system better, and consideration of input uncertainty enables robust modeling that fully incorporates the process uncertainty. Our case study shows that the proposed method performs better than benchmark methods for highly nonlinear systems. / Master of Science / Composite materials are becoming more popular in many areas due to their desirable properties, yet computational modeling of them is not an easy task because of their complex structures. Moreover, real-world problems are generally subject to uncertainty that cannot be observed, which makes them more difficult to solve. Therefore, successful predictive modeling of a composite material product requires consideration of the various uncertainties in the problem. The neural network Gaussian process (NNGP) is a recently developed statistical technique that can be applied to machine learning. Its most interesting property is that it is derived from the equivalence between deep neural networks and Gaussian processes, which has drawn much attention in machine learning. However, related work has so far ignored uncertainty in the input data, which may be an inappropriate assumption in real problems. In this thesis, we derive the NNGP considering input uncertainty (NNGPIU) based on the unique characteristics of composite materials. Although our motivation comes from the manipulation of composite materials, the NNGPIU can be applied to any problem where the input data are corrupted by unknown noise. Our work shows how the NNGPIU can be derived theoretically and demonstrates that the proposed method performs better than benchmark methods for highly nonlinear systems.
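For context, a small sketch of the standard NNGP kernel recursion for a fully connected ReLU network, used here for plain GP prediction; it does not include the input-uncertainty extension (NNGPIU) derived in the thesis, and the depth, weight/bias variances, and toy data are illustrative.

```python
import numpy as np

def nngp_kernel(X1, X2, depth=3, sigma_w2=2.0, sigma_b2=0.1):
    """Standard NNGP kernel for a fully connected ReLU network.

    Recursion:
      K^0(x, x')     = sigma_b2 + sigma_w2 * x.x' / d
      K^{l+1}(x, x') = sigma_b2 + sigma_w2 * F_relu(K^l(x,x), K^l(x',x'), K^l(x,x'))
    where F_relu is the arc-cosine (ReLU) expectation.
    """
    d = X1.shape[1]
    K12 = sigma_b2 + sigma_w2 * X1 @ X2.T / d
    K11 = sigma_b2 + sigma_w2 * np.sum(X1 * X1, axis=1) / d
    K22 = sigma_b2 + sigma_w2 * np.sum(X2 * X2, axis=1) / d
    for _ in range(depth):
        norms = np.sqrt(np.outer(K11, K22))
        cos_t = np.clip(K12 / norms, -1.0, 1.0)
        theta = np.arccos(cos_t)
        F12 = norms * (np.sin(theta) + (np.pi - theta) * cos_t) / (2 * np.pi)
        K12 = sigma_b2 + sigma_w2 * F12
        # diagonal terms correspond to theta = 0, where F = K / 2
        K11 = sigma_b2 + sigma_w2 * K11 / 2
        K22 = sigma_b2 + sigma_w2 * K22 / 2
    return K12

# GP regression with the NNGP kernel on toy data.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(30, 2))
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=30)
Xs = rng.uniform(-1, 1, size=(5, 2))

K = nngp_kernel(X, X) + 1e-4 * np.eye(30)      # observation noise / jitter
Ks = nngp_kernel(Xs, X)
pred_mean = Ks @ np.linalg.solve(K, y)
print(np.round(pred_mean, 3))
```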
103

Precision Aggregated Local Models

Edwards, Adam Michael 28 January 2021 (has links)
Large-scale Gaussian process (GP) regression is infeasible for larger data sets due to the cubic scaling of flops and quadratic storage involved in working with covariance matrices. Remedies in recent literature focus on divide-and-conquer, e.g., partitioning into sub-problems and inducing functional (and thus computational) independence. Such approximations can be speedy, accurate, and sometimes even more flexible than an ordinary GP. However, a big downside is loss of continuity at partition boundaries. Modern methods like local approximate GPs (LAGPs) imply effectively infinite partitioning and are thus pathologically good and bad in this regard. Model averaging, an alternative to divide-and-conquer, can maintain absolute continuity but often over-smooths, diminishing accuracy. Here I propose putting LAGP-like methods into a local-experts-like framework, blending partition-based speed with model-averaging continuity, as a flagship example of what I call precision aggregated local models (PALM). Using N_C LAGPs, each selecting n from N data pairs, I illustrate a scheme that is at most cubic in n, quadratic in N_C, and linear in N, drastically reducing computational and storage demands. Extensive empirical illustration shows that PALM is at least as accurate as LAGP, can be much faster, and furnishes continuous predictive surfaces. Finally, I propose a sequential updating scheme which greedily refines a PALM predictor up to a computational budget, and several variations on the basic PALM that may provide predictive improvements. / Doctor of Philosophy / Occasionally, when describing the relationship between two variables, it may be helpful to use a so-called "non-parametric" regression that is agnostic to the function that connects them. Gaussian Processes (GPs) are a popular method of non-parametric regression used for their relative flexibility and interpretability, but they have the unfortunate drawback of being computationally infeasible for large data sets. Past work on solving the scaling issues for GPs has focused on "divide and conquer" style schemes that spread the data out across multiple smaller GP models. While these models make GP methods much more accessible for large data sets, they do so either at the expense of local predictive accuracy or of global surface continuity. Precision Aggregated Local Models (PALM) is a novel divide-and-conquer method for GP models that is scalable to large data while maintaining local accuracy and a smooth global model. I demonstrate that PALM can be built quickly and performs well predictively compared to other state-of-the-art methods. This document also provides a sequential algorithm for selecting the location of each local model, and variations on the basic PALM methodology.
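A toy sketch of the precision-weighted (inverse-variance) aggregation idea behind local-expert blends of this kind; the local models below are plain GPs on nearest-neighbor subsets rather than the thesis's LAGP construction, and all sizes are illustrative.

```python
import numpy as np

def gp_predict(Xl, yl, Xs, ell=0.2, sn2=1e-4):
    """Exact GP prediction (RBF kernel) from one local subset of the data."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d2 / ell**2)
    K = k(Xl, Xl) + sn2 * np.eye(len(Xl))
    Ks = k(Xs, Xl)
    mean = Ks @ np.linalg.solve(K, yl)
    var = 1.0 + sn2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

rng = np.random.default_rng(2)
N, n_local = 2000, 100
X = rng.uniform(0, 1, size=(N, 1))
y = np.sin(8 * X[:, 0]) + 0.05 * rng.normal(size=N)
Xs = np.linspace(0, 1, 9)[:, None]

# Build several local models on centers spread over the input space, each using
# only the n_local nearest data points (much cheaper than one N x N GP).
centers = np.linspace(0, 1, 10)
means, variances = [], []
for c in centers:
    idx = np.argsort(np.abs(X[:, 0] - c))[:n_local]
    m, v = gp_predict(X[idx], y[idx], Xs)
    means.append(m)
    variances.append(v)
means, variances = np.array(means), np.array(variances)

# Precision-weighted blend: weight each local prediction by its inverse predictive variance.
prec = 1.0 / variances
agg_mean = np.sum(prec * means, axis=0) / np.sum(prec, axis=0)
print(np.round(agg_mean, 3))
print(np.round(np.sin(8 * Xs[:, 0]), 3))  # noise-free truth, for comparison
```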
104

Statistical Methods for Variability Management in High-Performance Computing

Xu, Li 15 July 2021 (has links)
High-performance computing (HPC) variability management is an important topic in computer science. Research topics include experimental designs for efficient data collection, surrogate models for predicting the performance variability, and system configuration optimization. Due to the complex architecture of HPC systems, a comprehensive study of HPC variability needs large-scale datasets, and experimental design techniques are useful for improved data collection. Surrogate models are essential to understand the variability as a function of system parameters, which can be obtained by mathematical and statistical models. After predicting the variability, optimization tools are needed for future system designs. This dissertation focuses on HPC input/output (I/O) variability through three main chapters. After the general introduction in Chapter 1, Chapter 2 focuses on the prediction models for the scalar description of I/O variability. A comprehensive comparison study is conducted, and major surrogate models for computer experiments are investigated. In addition, a tool is developed for system configuration optimization based on the chosen surrogate model. Chapter 3 conducts a detailed study for the multimodal phenomena in I/O throughput distribution and proposes an uncertainty estimation method for the optimal number of runs for future experiments. Mixture models are used to identify the number of modes for throughput distributions at different configurations. This chapter also addresses the uncertainty in parameter estimation and derives a formula for sample size calculation. The developed method is then applied to HPC variability data. Chapter 4 focuses on the prediction of functional outcomes with both qualitative and quantitative factors. Instead of a scalar description of I/O variability, the distribution of I/O throughput provides a comprehensive description of I/O variability. We develop a modified Gaussian process for functional prediction and apply the developed method to the large-scale HPC I/O variability data. Chapter 5 contains some general conclusions and areas for future work. / Doctor of Philosophy / This dissertation focuses on three projects that are all related to statistical methods in performance variability management in high-performance computing (HPC). HPC systems are computer systems that create high performance by aggregating a large number of computing units. The performance of HPC is measured by the throughput of a benchmark called the IOZone Filesystem Benchmark. The performance variability is the variation among throughputs when the system configuration is fixed. Variability management involves studying the relationship between performance variability and the system configuration. In Chapter 2, we use several existing prediction models to predict the standard deviation of throughputs given different system configurations and compare the accuracy of predictions. We also conduct HPC system optimization using the chosen prediction model as the objective function. In Chapter 3, we use the mixture model to determine the number of modes in the distribution of throughput under different system configurations. In addition, we develop a model to determine the number of additional runs for future benchmark experiments. In Chapter 4, we develop a statistical model that can predict the throughput distributions given the system configurations. We also compare the prediction of summary statistics of the throughput distributions with existing prediction models.
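A minimal sketch of the mixture-model step described for Chapter 3: fit Gaussian mixtures with different numbers of components to repeated throughput measurements and select the number by BIC, with components serving as a proxy for modes; the data are synthetic and the thesis's actual selection procedure may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic stand-in for repeated I/O throughput runs at one fixed configuration:
# a bimodal distribution representing two performance regimes.
throughput = np.concatenate([
    rng.normal(900, 25, size=120),
    rng.normal(1100, 40, size=80),
]).reshape(-1, 1)

# Fit mixtures with 1..4 components and select the number of components by BIC.
bics = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(throughput)
    bics[k] = gm.bic(throughput)

best_k = min(bics, key=bics.get)
print("BIC by number of components:", {k: round(v, 1) for k, v in bics.items()})
print("selected number of modes:", best_k)
```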
105

Modeling of the fundamental mechanical interactions of unit load components during warehouse racking storage

Molina Montoya, Eduardo 04 February 2021 (has links)
The global supply chain has been built on the material handling capabilities provided by the use of pallets and corrugated boxes. Current pallet design methodologies frequently underestimate the load carrying capacity of pallets by assuming they will only carry uniformly distributed, flexible payloads. But, by considering the effect of various payload characteristics and their interactions during the pallet design process, the structure of pallets can be optimized. This, in turn, will reduce the material consumption required to support the pallet industry. In order to understand the mechanical interactions between stacked boxes and pallet decks, and how these interactions affect the bending moment of pallets, a finite element model was developed and validated. The model developed was two-dimensional, nonlinear and implicitly dynamic. It allowed for evaluations of the effects of different payload configurations on the pallet bending response. The model accurately predicted the deflection of the pallet segment and the movement of the packages for each scenario simulated. The second phase of the study characterized the effects, significant factors, and interactions influencing load bridging on unit loads. It provided a clear understanding of the load bridging effect and how it can be successfully included during the unit load design process. It was concluded that pallet yield strength could be increased by over 60% when accounting for the load bridging effect. To provide a more efficient and cost-effective solution, a surrogate model was developed using a Gaussian Process regression. A detailed analysis of the payloads' effects on pallet deflection was conducted. Four factors were identified as generating significant influence: the number of columns in the unit load, the height of the payload, the friction coefficient of the payload's contact with the pallet deck, and the contact friction between the packages. Additionally, it was identified that complex interactions exist between these significant factors, so they must always be considered. / Doctor of Philosophy / Pallets are a key element of an efficient global supply chain. Most products that are transported are commonly packaged in corrugated boxes and handled by stacking these boxes on pallets. Currently, pallet design methods do not take into consideration the product that is being carried, instead using generic flexible loads for the determination of the pallet's load carrying capacity. In practice, most pallets carry discrete loads, such as corrugated boxes. It has been proven that a pallet, when carrying certain types of packages, can have increased performance compared to the design's estimated load carrying capacity. This is caused by the load redistribution across the pallet deck through an effect known as load bridging. Being able to incorporate the load bridging effect on pallet performance during the design process can allow for the optimization of pallets for specific uses and the reduction in costs and in material consumption. Historically, this effect has been evaluated through physical testing, but that is a slow and cumbersome process that does not allow control of all of the variables for the development of a general model. This research study developed a computer simulation model of a simplified unit load to demonstrate and replicate the load bridging effect. Additionally, a surrogate model was developed in order to conduct a detailed analysis of the main factors and their interactions. 
These models provide pallet designers with an efficient method for identifying opportunities to modify the unit load's characteristics and improve pallet performance for specific conditions of use.
106

Likelihood-based testing and model selection for hazard functions with unknown change-points

Williams, Matthew Richard 03 May 2011 (has links)
The focus of this work is the development of testing procedures for the existence of change-points in parametric hazard models of various types. Hazard functions and the related survival functions are common units of analysis for survival and reliability modeling. We develop a methodology to test for the alternative of a two-piece hazard against a simpler one-piece hazard. The location of the change is unknown and the tests are irregular due to the presence of the change-point only under the alternative hypothesis. Our approach is to consider the profile log-likelihood ratio test statistic as a process with respect to the unknown change-point. We then derive its limiting process and find the supremum distribution of the limiting process to obtain critical values for the test statistic. We first reexamine existing work based on Taylor series expansions for abrupt changes in exponential data. We generalize these results to include Weibull data with known shape parameter. We then develop new tests for two-piece continuous hazard functions using local asymptotic normality (LAN). Finally we generalize our earlier results for abrupt changes to include covariate information using the LAN techniques. While we focus on the cases of no censoring, simple right censoring, and censoring generated by staggered entry, our derivations reveal that our framework should apply to much broader censoring scenarios. / Ph. D.
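A toy sketch of the profile log-likelihood-ratio idea for an abrupt change in an exponential hazard with complete (uncensored) data; it only computes the supremum of the statistic over candidate change-points, whereas the thesis derives the limiting process needed to obtain critical values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate survival times from a two-piece exponential hazard:
# rate 1.0 before the change-point tau0 = 1.5, rate 3.0 afterwards.
def sample_two_piece(n, lam1, lam2, tau0):
    t = rng.exponential(1 / lam1, size=n)
    late = t > tau0
    t[late] = tau0 + rng.exponential(1 / lam2, size=late.sum())
    return t

t = sample_two_piece(300, 1.0, 3.0, 1.5)

def profile_loglik(t, tau):
    """Exponential log-likelihood profiled over the two piecewise rates, given tau."""
    d1 = np.sum(t <= tau)
    d2 = len(t) - d1
    ttr1 = np.sum(np.minimum(t, tau))         # total time at risk before tau
    ttr2 = np.sum(np.maximum(t - tau, 0.0))   # total time at risk after tau
    if d1 == 0 or d2 == 0:
        return -np.inf
    lam1, lam2 = d1 / ttr1, d2 / ttr2
    return d1 * np.log(lam1) - lam1 * ttr1 + d2 * np.log(lam2) - lam2 * ttr2

# Null model: a single constant hazard.
lam0 = len(t) / t.sum()
loglik0 = len(t) * np.log(lam0) - lam0 * t.sum()

# Profile LRT statistic viewed as a process over candidate change-points.
taus = np.quantile(t, np.linspace(0.1, 0.9, 81))
lrt = np.array([2 * (profile_loglik(t, tau) - loglik0) for tau in taus])
print("sup LRT statistic:", round(lrt.max(), 2), "at tau =", round(taus[lrt.argmax()], 2))
```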
107

Optimal Q-Space Sampling Scheme: Using Gaussian Process Regression and Mutual Information

Hassler, Ture, Berntsson, Jonathan January 2022 (has links)
Diffusion spectrum imaging is a type of diffusion magnetic resonance imaging capable of capturing very complex tissue structures, but it requires a very large number of samples in q-space and therefore a long acquisition time. The purpose of this project was to create and evaluate a new q-space sampling scheme for diffusion MRI, aiming to recreate the ensemble averaged propagator (EAP) from fewer samples without significant loss of quality. The sampling scheme was created by greedily selecting the measurements contributing the most mutual information. The EAP was then recreated using the sampling scheme and interpolation. The mutual information was approximated using the kernel from a Gaussian process machine learning model. The project showed limited but promising results on synthetic data, but it was highly restricted by the amount of available computational power. Having to resort to a lower-resolution mesh when calculating the optimal sampling scheme significantly reduced the overall performance.
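A simplified sketch of greedy mutual-information selection with a GP kernel, in the spirit of the scheme described above, on a coarse 2D candidate grid; the RBF kernel, grid, and sample budget are stand-ins for the fitted kernel and full 3D q-space used in the project.

```python
import numpy as np

def rbf(A, B, ell=0.3):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ell**2)

def cond_var(K, i, S, jitter=1e-8):
    """Variance of point i conditioned on the index set S, from the full kernel matrix K."""
    if len(S) == 0:
        return K[i, i]
    KSS = K[np.ix_(S, S)] + jitter * np.eye(len(S))
    kiS = K[i, S]
    return max(K[i, i] - kiS @ np.linalg.solve(KSS, kiS), jitter)

# Candidate measurement locations: a coarse 2D grid standing in for q-space.
g = np.linspace(-1, 1, 12)
V = np.array([(x, y) for x in g for y in g])
K = rbf(V, V)

# Greedy mutual-information criterion: pick the point whose variance is high given
# the already-selected set A but low given the remaining unselected points.
n_select = 20
A = []
for _ in range(n_select):
    rest = [i for i in range(len(V)) if i not in A]
    scores = []
    for i in rest:
        others = [j for j in rest if j != i]
        scores.append(cond_var(K, i, A) / cond_var(K, i, others))
    A.append(rest[int(np.argmax(scores))])

print("selected indices:", A[:10], "...")
```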
108

Multi-Scale Topology Optimization of Lattice Structures Using Machine Learning / Flerskalig topologioptimering av gitterstrukturer med användning av maskininlärning

Ibstedt, Julia January 2023 (has links)
This thesis explores using multi-scale topology optimization (TO) by utilizing inverse homogenization to automate the adjustment of each unit-cell's geometry and placement in a lattice structure within a pressure vessel (the design domain) to achieve desired structural properties. The aim is to find the optimal material distribution within the design domain as well as desired material properties at each discretized element and use machine learning (ML) to map microstructures with corresponding prescribed effective properties. Effective properties are obtained through homogenization, where microscopic properties are upscaled to macroscopic ones. The symmetry group of a unit-cell's elasticity tensor can be utilized for stiffness directional tunability, i.e., to tune the cell's performance in different load directions.  A few geometrical variations of a chosen unit-cell were homogenized to build an effective anisotropic elastic material model by obtaining their effective elasticity. The symmetry group and the stiffness directionality of the cells’ effective elasticity tensors were identified. This was done using both the pattern of the matrix representation of the effective elasticity tensor and the roots of the monoclinic distance function. A cell library of symmetry-preserving variations with a corresponding material property space was created, displaying the achievable properties within the library. Two ML models were implemented to map material properties to appropriate cells. A TO algorithm was also implemented to produce an optimal material distribution within a design domain of a pressure vessel in 2D to maximize stiffness. However, the TO algorithm to obtain desired material properties for each element in the domain was not realized within the time frame of this thesis.  The cells were successfully homogenized. The effective elasticity tensor of the chosen cell was found to belong to the cubic symmetry group in its natural coordinate system. The results suggest that the symmetry group of an elasticity tensor retrieved through numerical experiments can be identified using the monoclinic distance function. If near-zero minima are present, they can be utilized to find the natural coordinate system. The cubic symmetry allowed the cell library's material property space to be spanned by only three elastic constants, derived from the elasticity matrix. The orthotropic symmetry group can enable a greater directional tunability and design flexibility than the cubic one. However, materials exhibiting cubic symmetry can be described by fewer material properties, limiting the property space, which could make the multi-scale TO less complex. The ML models successfully predicted the cell parameters for given elastic constants with satisfactory results. The TO algorithm was successfully implemented. Two different boundary condition cases were used – fixing the domain’s corner nodes and fixing the middle element’s nodes. The latter was found to produce more sensible results. The formation of a cylindrical outer shape could be distinguished in the produced material design, which was deemed reasonable since cylindrical pressure vessels are consistent with engineering practice due to their inherent ability to evenly distribute load. The TO algorithm must be extended to include the elastic constants as design variables to enable the multi-scale TO.
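A small sketch of what checking for cubic symmetry of an effective elasticity matrix amounts to in Voigt notation, once the tensor is expressed in its natural coordinate system; the thesis additionally uses the monoclinic distance function to identify that coordinate system, which is not reproduced here, and the example matrix is illustrative.

```python
import numpy as np

def cubic_constants(C, tol=1e-3):
    """Check whether a 6x6 Voigt elasticity matrix (in its natural coordinate
    system) has cubic symmetry, and return its three independent constants."""
    C = np.asarray(C, dtype=float)
    c11 = C[[0, 1, 2], [0, 1, 2]]          # C11, C22, C33
    c12 = C[[0, 0, 1], [1, 2, 2]]          # C12, C13, C23
    c44 = C[[3, 4, 5], [3, 4, 5]]          # C44, C55, C66
    # Entries that must vanish for cubic symmetry: coupling between normal and
    # shear components and between different shear components.
    mask = np.ones((6, 6), dtype=bool)
    mask[:3, :3] = False
    np.fill_diagonal(mask, False)
    scale = np.abs(C).max()
    is_cubic = (
        np.ptp(c11) < tol * scale
        and np.ptp(c12) < tol * scale
        and np.ptp(c44) < tol * scale
        and np.all(np.abs(C[mask]) < tol * scale)
    )
    return is_cubic, c11.mean(), c12.mean(), c44.mean()

# Illustrative effective stiffness of a lattice unit cell (GPa), cubic by construction.
C = np.array([
    [90, 35, 35,  0,  0,  0],
    [35, 90, 35,  0,  0,  0],
    [35, 35, 90,  0,  0,  0],
    [ 0,  0,  0, 28,  0,  0],
    [ 0,  0,  0,  0, 28,  0],
    [ 0,  0,  0,  0,  0, 28],
], dtype=float)

is_cubic, C11, C12, C44 = cubic_constants(C)
print("cubic:", is_cubic, "| C11 =", C11, "C12 =", C12, "C44 =", C44)
# Zener anisotropy ratio: 1 for an isotropic material, != 1 for a cubic lattice cell.
print("Zener ratio 2*C44/(C11-C12) =", round(2 * C44 / (C11 - C12), 3))
```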
109

Machine learning in predictive maintenance of industrial robots

Morettini, Simone January 2021 (has links)
Industrial robots are a key component in several industrial applications. Like all mechanical tools, they do not last forever. The solution to extend the life of the machine is to perform maintenance on the degraded components. The optimal approach is called predictive maintenance, which aims to forecast the best moment for performing maintenance on the robot. This minimizes maintenance costs and prevents mechanical failures that can lead to unplanned production stops. Methods already exist to perform predictive maintenance on industrial robots, but they require additional sensors. This research aims to predict the anomalies by using only data from the sensors that are already used to control the robot. A machine learning approach is proposed for implementing predictive maintenance of industrial robots, using the torque profiles as input data. The selected algorithms are tested on simulated data created using wear and temperature models. The torque profiles from the simulator are used to extract a health index for each joint, which in turn is used to detect anomalous states of the robot. The health index has a fast exponential growth trend which is difficult to predict in advance. A Gaussian process regressor, an Exponentron, and hybrid algorithms are applied to predict the time series of the health state and thereby implement the predictive maintenance. The predictions are evaluated with respect to the accuracy of the time series prediction and the precision of anomaly forecasting. The investigated methods are shown to be able to predict the development of the wear and to detect the anomalies in advance. The results reveal that the hybrid approach obtained by combining predictions from different algorithms outperforms the other solutions. Finally, the analysis of the results shows that the algorithms are sensitive to the quality of the data and do not perform well when the data have a low sampling rate or missing samples.
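A rough sketch of one ingredient described above: extrapolating a fast-growing health index with a GP fitted in log space and reading off when the prediction crosses an anomaly threshold; the simulated index, threshold, and kernel choice are illustrative and not taken from the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

rng = np.random.default_rng(5)

# Simulated health index for one joint: slow exponential growth of wear plus noise.
t_obs = np.arange(0, 60.0)                      # days observed so far
health = 0.05 * np.exp(0.04 * t_obs) * (1 + 0.05 * rng.normal(size=t_obs.size))

# Exponential growth is linear in log space, so fit the GP there with a linear
# (DotProduct) kernel; it then extrapolates a trend instead of reverting to the mean.
kernel = DotProduct() + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs.reshape(-1, 1), np.log(health))

t_future = np.arange(60.0, 150.0)
log_mean = gp.predict(t_future.reshape(-1, 1))

# Forecast the anomaly: first future day where the predicted index crosses a threshold.
threshold = 1.0
crossing = t_future[np.exp(log_mean) > threshold]
print("predicted anomaly day:", crossing[0] if crossing.size else "beyond horizon")
print("true crossing (noise-free):", round(np.log(threshold / 0.05) / 0.04, 1))
```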
110

Design & Analysis of a Computer Experiment for an Aerospace Conformance Simulation Study

Gryder, Ryan W 01 January 2016 (has links)
Within NASA's Air Traffic Management Technology Demonstration # 1 (ATD-1), Interval Management (IM) is a flight deck tool that enables pilots to achieve or maintain a precise in-trail spacing behind a target aircraft. Previous research has shown that violations of aircraft spacing requirements can occur between an IM aircraft and its surrounding non-IM aircraft when it is following a target on a separate route. This research focused on the experimental design and analysis of a deterministic computer simulation which models our airspace configuration of interest. Using an original space-filling design and Gaussian process modeling, we found that aircraft delay assignments and wind profiles significantly impact the likelihood of spacing violations and the interruption of IM operations. However, we also found that implementing two theoretical advancements in IM technologies can potentially lead to promising results.
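A schematic sketch of the design-and-model workflow the abstract describes: a space-filling (Latin hypercube) design over the simulation inputs and a GP fit to deterministic outputs; the input names and the cheap stand-in "simulator" below are made up for illustration.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Space-filling (Latin hypercube) design over two illustrative inputs:
# assigned delay (s) and a wind-profile scaling factor.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=40)
X = qmc.scale(unit, l_bounds=[0.0, 0.5], u_bounds=[120.0, 1.5])

def toy_simulator(x):
    """Cheap stand-in for the deterministic airspace simulation output
    (e.g., a minimum spacing margin in nautical miles)."""
    delay, wind = x
    return 3.0 - 0.01 * delay + 0.8 * np.cos(2 * np.pi * wind)

y = np.array([toy_simulator(x) for x in X])

# Deterministic computer experiment: essentially noise-free, so use a tiny nugget (alpha).
kernel = ConstantKernel(1.0) * RBF(length_scale=[30.0, 0.3])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-8, normalize_y=True)
gp.fit(X, y)

X_new = np.array([[60.0, 1.0], [100.0, 0.7]])
mean, sd = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new, mean, sd):
    print(f"delay={x[0]:5.1f}s wind={x[1]:.2f} -> predicted margin {m:.2f} +/- {2*s:.2f}")
```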
