111 |
Multi-Scale Topology Optimization of Lattice Structures Using Machine Learning. Ibstedt, Julia. January 2023.
This thesis explores using multi-scale topology optimization (TO) by utilizing inverse homogenization to automate the adjustment of each unit-cell's geometry and placement in a lattice structure within a pressure vessel (the design domain) to achieve desired structural properties. The aim is to find the optimal material distribution within the design domain as well as the desired material properties at each discretized element, and to use machine learning (ML) to map prescribed effective properties to corresponding microstructures. Effective properties are obtained through homogenization, where microscopic properties are upscaled to macroscopic ones. The symmetry group of a unit-cell's elasticity tensor can be utilized for stiffness directional tunability, i.e., to tune the cell's performance in different load directions.

A few geometrical variations of a chosen unit-cell were homogenized to build an effective anisotropic elastic material model by obtaining their effective elasticity. The symmetry group and the stiffness directionality of the cells' effective elasticity tensors were identified, using both the pattern of the matrix representation of the effective elasticity tensor and the roots of the monoclinic distance function. A cell library of symmetry-preserving variations with a corresponding material property space was created, displaying the achievable properties within the library. Two ML models were implemented to map material properties to appropriate cells. A TO algorithm was also implemented to produce an optimal material distribution within a design domain of a pressure vessel in 2D to maximize stiffness. However, the TO algorithm to obtain desired material properties for each element in the domain was not realized within the time frame of this thesis.

The cells were successfully homogenized. The effective elasticity tensor of the chosen cell was found to belong to the cubic symmetry group in its natural coordinate system. The results suggest that the symmetry group of an elasticity tensor retrieved through numerical experiments can be identified using the monoclinic distance function. If near-zero minima are present, they can be utilized to find the natural coordinate system. The cubic symmetry allowed the cell library's material property space to be spanned by only three elastic constants, derived from the elasticity matrix. The orthotropic symmetry group can enable greater directional tunability and design flexibility than the cubic one. However, materials exhibiting cubic symmetry can be described by fewer material properties, limiting the property space, which could make the multi-scale TO less complex. The ML models successfully predicted the cell parameters for given elastic constants with satisfactory results. The TO algorithm was successfully implemented. Two different boundary condition cases were used: fixing the domain's corner nodes and fixing the middle element's nodes. The latter was found to produce more sensible results. The formation of a cylindrical outer shape could be distinguished in the produced material design, which was deemed reasonable since cylindrical pressure vessels are consistent with engineering practice due to their inherent ability to evenly distribute load. The TO algorithm must be extended to include the elastic constants as design variables to enable the multi-scale TO.
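As background for the cubic-symmetry result (a general fact about elasticity, not code from the thesis): in Voigt notation, a cubic stiffness matrix is fully determined by three constants, here labeled C11, C12, and C44, and the Zener ratio summarizes its stiffness directionality. The sketch below uses purely illustrative numerical values.

```python
import numpy as np

def cubic_stiffness(c11, c12, c44):
    """Assemble the 6x6 Voigt stiffness matrix of a cubic material.

    Cubic symmetry leaves only three independent elastic constants, which is
    why the cell library's property space can be spanned by three values.
    """
    C = np.zeros((6, 6))
    C[:3, :3] = c12                      # off-diagonal coupling in the normal block
    np.fill_diagonal(C[:3, :3], c11)     # normal stiffnesses
    C[3, 3] = C[4, 4] = C[5, 5] = c44    # shear stiffnesses
    return C

def zener_ratio(c11, c12, c44):
    """Anisotropy measure: 1.0 for isotropy, away from 1.0 for directional stiffness."""
    return 2.0 * c44 / (c11 - c12)

C_eff = cubic_stiffness(c11=120e9, c12=60e9, c44=40e9)   # illustrative values in Pa
print(zener_ratio(120e9, 60e9, 40e9))                     # ~1.33, mildly anisotropic
```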
112 |
Machine learning in predictive maintenance of industrial robots. Morettini, Simone. January 2021.
Industrial robots are a key component in several industrial applications. Like all mechanical tools, they do not last forever. The solution to extend the life of the machine is to perform maintenance on the degraded components. The optimal approach is called predictive maintenance, which aims to forecast the best moment for performing maintenance on the robot. This minimizes maintenance costs and prevents mechanical failures that can lead to unplanned production stops. There already exist methods to perform predictive maintenance on industrial robots, but these methods require additional sensors. This research aims to predict the anomalies by only using data from the sensors that are already used to control the robot. A machine learning approach is proposed for implementing predictive maintenance of industrial robots, using the torque profiles as input data. The selected algorithms are tested on simulated data created using wear and temperature models. The torque profiles from the simulator are used to extract a health index for each joint, which in turn is used to detect anomalous states of the robot. The health index has a fast exponential growth trend which is difficult to predict in advance. A Gaussian process regressor, an Exponentron, and hybrid algorithms are applied to predict the time series of the health state and thereby implement the predictive maintenance. The predictions are evaluated on the accuracy of the time series prediction and the precision of the anomaly forecasting. The investigated methods are shown to be able to predict the development of the wear and to detect anomalies in advance. The results reveal that the hybrid approach, obtained by combining predictions from different algorithms, outperforms the other solutions. Finally, the analysis of the results shows that the algorithms are sensitive to the quality of the data and do not perform well when the data have a low sampling rate or missing samples.
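As a rough illustration of the forecasting step (not the thesis's Exponentron or hybrid models; the data, threshold, and kernel choices below are made up), one can fit a Gaussian process to the logarithm of a growing health index and flag the first forecast point that crosses an assumed anomaly level:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel

rng = np.random.default_rng(0)
t = np.arange(50, dtype=float).reshape(-1, 1)                          # time steps
h = 0.01 * np.exp(0.08 * t.ravel() + 0.02 * rng.standard_normal(50))   # synthetic health index
threshold = 1.0                                                        # assumed anomaly level

# A linear (dot-product) kernel on log(h) lets the exponential trend extrapolate.
kernel = DotProduct(sigma_0=1.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, np.log(h))

t_future = np.arange(50, 100, dtype=float).reshape(-1, 1)
log_mean, log_std = gpr.predict(t_future, return_std=True)
crossings = t_future[np.exp(log_mean) > threshold]
print("predicted anomaly at t =", crossings[0, 0] if crossings.size else "beyond horizon")
```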
113 |
Design & Analysis of a Computer Experiment for an Aerospace Conformance Simulation Study. Gryder, Ryan W. 01 January 2016.
Within NASA's Air Traffic Management Technology Demonstration #1 (ATD-1), Interval Management (IM) is a flight deck tool that enables pilots to achieve or maintain a precise in-trail spacing behind a target aircraft. Previous research has shown that violations of aircraft spacing requirements can occur between an IM aircraft and its surrounding non-IM aircraft when the IM aircraft is following a target on a separate route. This research focused on the experimental design and analysis of a deterministic computer simulation that models our airspace configuration of interest. Using an original space-filling design and Gaussian process modeling, we found that aircraft delay assignments and wind profiles significantly impact the likelihood of spacing violations and the interruption of IM operations. However, we also found that implementing two theoretical advancements in IM technologies can potentially lead to promising results.
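The study used an original space-filling design; purely as a generic sketch of the emulation workflow (the toy simulator, its inputs, and the design size below are assumptions), one could pair an ordinary Latin hypercube with a Gaussian process surrogate of a deterministic code:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def toy_simulator(x):
    """Hypothetical stand-in for the deterministic airspace simulation,
    with inputs imagined as (delay assignment, wind-profile factor) in [0, 1]."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

design = qmc.LatinHypercube(d=2, seed=0).random(n=40)   # space-filling design
y = toy_simulator(design)

# Near-interpolating GP emulator for a deterministic code (tiny nugget for stability).
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-10, normalize_y=True)
gp.fit(design, y)
mean, std = gp.predict(np.array([[0.3, 0.7]]), return_std=True)
```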
114 |
New Bayesian optimization algorithm using a sequential Monte-Carlo approach. Benassi, Romain. 19 June 2013.
This thesis deals with the problem of global optimization of expensive-to-evaluate functions in a Bayesian framework. We say that a function is expensive-to-evaluate when its evaluation requires a significant amount of resources (e.g., very long numerical simulations). In this context, it is important to use optimization algorithms that can deal with a limited number of function evaluations. We consider here a Bayesian approach which consists in assigning a prior to the function, in the form of a Gaussian random process. The idea is then to choose the next evaluation points using a probabilistic criterion that indicates, conditional on the previous evaluations, the most interesting regions of the search domain for the optimizer. Two difficulties in this approach can be identified: the choice of the Gaussian process prior and the maximization of the criterion. The first problem is usually solved by plugging in the maximum likelihood estimate of the parameters, which turns out to be a poorly robust method and to which we prefer a fully Bayesian approach. The contribution of this work is the introduction of a new Bayesian optimization algorithm, which maximizes the Expected Improvement (EI) criterion at each step and addresses both problems jointly by means of a Sequential Monte Carlo approach. Numerical results on benchmark tests and industrial applications show that our algorithm performs well compared to several other methods from the literature.
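For orientation, the expected improvement criterion has the standard closed form below (written for minimization); the thesis additionally integrates the GP hyperparameters out with a sequential Monte Carlo sampler, which can be approximated by a weighted average of EI over hyperparameter particles. This is a generic sketch, not the thesis code.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization at candidate points with GP posterior mean/std."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive std
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def fully_bayesian_ei(mu_per_particle, sigma_per_particle, weights, f_best):
    """Average EI over SMC particles of the GP hyperparameters (sketch of the idea)."""
    ei = np.array([expected_improvement(m, s, f_best)
                   for m, s in zip(mu_per_particle, sigma_per_particle)])
    return np.average(ei, axis=0, weights=weights)
```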
115 |
Multi-layer designs and composite Gaussian process models with engineering applications. Ba, Shan. 21 May 2012.
This thesis consists of three chapters, covering topics in both the design and modeling aspects of computer experiments as well as their engineering applications. The first chapter systematically develops a new class of space-filling designs for computer experiments by splitting two-level factorial designs into multiple layers. The new design is easy to generate, and our numerical study shows that it can have better space-filling properties than the optimal Latin hypercube design. The second chapter proposes a novel modeling approach for approximating computationally expensive functions that are not second-order stationary. The new model is a composite of two Gaussian processes, where the first one captures the smooth global trend and the second one models local details. The new predictor also incorporates a flexible variance model, which makes it more capable of approximating surfaces with varying volatility. The third chapter is devoted to a two-stage sequential strategy which integrates analytical models with finite element simulations for a micromachining process.
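A crude way to convey the composite idea (not the thesis's actual model, which also attaches a flexible variance component to the local process) is a kernel that sums a long-range term for the global trend and a short-range term for local detail; the length scales and amplitudes here are arbitrary placeholders.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

# Global trend: smooth, long length scale. Local detail: small amplitude, short length scale.
kernel = (C(1.0) * RBF(length_scale=5.0, length_scale_bounds=(1.0, 100.0))
          + C(0.1) * RBF(length_scale=0.2, length_scale_bounds=(0.01, 1.0)))
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
# usage: surrogate.fit(X_train, y_train); surrogate.predict(X_new, return_std=True)
```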
116 |
Exponential Smoothing for Forecasting and Bayesian Validation of Computer Models. Wang, Shuchun. 22 August 2006.
Despite their success and widespread use in industry and business, exponential smoothing (ES) methods have received little attention from the statistical community. We investigate three types of statistical models that have been found to underpin ES methods: ARIMA models, state space models with multiple sources of error (MSOE), and state space models with a single source of error (SSOE). We establish the relationships among the three classes of models and conclude that the class of SSOE state space models is broader than the other two and provides a formal statistical foundation for ES methods. To better understand ES methods, we investigate their behavior for time series generated from different processes, focusing mainly on time series of ARIMA type.
ES methods forecast a time series using only the series' own history. To include covariates in ES methods for better forecasting of a time series, we propose a new forecasting method, Exponential Smoothing with Covariates (ESCov). ESCov uses an ES method to model what is left unexplained in a time series by the covariates. We establish the optimality of ESCov, identify the SSOE state space models underlying ESCov, and analytically derive the variances of its forecasts. Empirical studies show that ESCov outperforms ES methods and regression with ARIMA errors. We suggest a model selection procedure for choosing appropriate covariates and ES methods in practice.
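A minimal sketch of the ESCov idea (the exact formulation, optimality results, and variance expressions in the thesis differ): regress the series on the covariates, then exponentially smooth what the covariates leave unexplained.

```python
import numpy as np

def simple_es_forecast(y, alpha=0.3):
    """Simple exponential smoothing: level_t = alpha * y_t + (1 - alpha) * level_{t-1}."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1.0 - alpha) * level
    return level                     # one-step-ahead forecast of the smoothed series

def escov_forecast(y, X, x_next, alpha=0.3):
    """Covariate regression plus exponential smoothing of the residuals (illustration only)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return x_next @ beta + simple_es_forecast(residuals, alpha)
```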
Computer models are commonly used to investigate complex systems for which physical experiments are highly expensive or very time-consuming. Before using a computer model, we need to address an important question: "How well does the computer model represent the real system?" The process of addressing this question is called computer model validation, which generally involves the comparison of computer outputs and physical observations. In this thesis, we propose a Bayesian approach to computer model validation. This approach integrates computer outputs and physical observations to give a better prediction of the real system output. This prediction is then used to validate the computer model. We investigate the impact of several factors on the performance of the proposed approach and propose a generalization of it.
117 |
A Pareto frontier intersection-based approach for efficient multiobjective optimization of competing concept alternatives. Rousis, Damon. 01 July 2011.
The expected growth of civil aviation over the next twenty years places significant emphasis on revolutionary technology development aimed at mitigating the environmental impact of commercial aircraft. As the number of technology alternatives grows along with model complexity, current methods for Pareto finding and multiobjective optimization quickly become computationally infeasible. Combined with the large uncertainty in the early stages of design, this means optimal designs must be sought while avoiding the computational burden of excessive function calls, since a single design change or technology assumption could alter the results. This motivates the need for a robust and efficient evaluation methodology for quantitative assessment of competing concepts.
This research presents a novel approach that combines Bayesian adaptive sampling with surrogate-based optimization to efficiently place designs near the Pareto frontier intersections of competing concepts. Efficiency is increased over sequential multiobjective optimization by focusing computational resources specifically on the region of the design space where optimality shifts between concepts. At the intersection of Pareto frontiers, the selection decisions are most sensitive to the preferences placed on the objectives, and small perturbations can lead to vastly different final designs. These concepts are incorporated into an evaluation methodology that ultimately reduces the number of failed cases, infeasible designs, and Pareto-dominated solutions across all concepts.
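To make the geometric picture concrete, a basic non-dominated filter (all objectives minimized) recovers each concept's Pareto frontier; overlaying the frontiers of two concepts then shows where optimality shifts from one to the other. The sampled objective values below are invented for illustration.

```python
import numpy as np

def pareto_mask(F):
    """Boolean mask of non-dominated rows of an (n_points, n_objectives) array (minimization)."""
    n = F.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(F, i, axis=0)
        # dominated if some other point is no worse everywhere and strictly better somewhere
        mask[i] = not np.any(np.all(others <= F[i], axis=1) & np.any(others < F[i], axis=1))
    return mask

rng = np.random.default_rng(0)
concept_a = rng.random((60, 2))                  # e.g., (fuel burn, noise) for concept A
concept_b = rng.random((60, 2)) + [0.1, -0.1]    # concept B trades one objective for the other
front_a = concept_a[pareto_mask(concept_a)]
front_b = concept_b[pareto_mask(concept_b)]
```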
A set of algebraic problems along with a truss design problem is presented as canonical examples for the proposed approach. The methodology is then applied to the design of ultra-high-bypass-ratio turbofans to guide NASA's technology development efforts for future aircraft. Geared-drive and variable-geometry bypass nozzle concepts are explored as enablers of increased bypass ratio and as potential alternatives to traditional configurations. The method is shown to improve sampling efficiency and provide clusters of feasible designs that motivate a shift towards revolutionary technologies that reduce fuel burn, emissions, and noise on future aircraft.
118 |
Efficient Bayesian Nonparametric Methods for Model-Free Reinforcement Learning in Centralized and Decentralized Sequential Environments. Liu, Miao. January 2014.
As a growing number of agents are deployed in complex environments for scientific research and human well-being, there are increasing demands for designing efficient learning algorithms for these agents to improve their control policies. Such policies must account for uncertainties, including those caused by environmental stochasticity, sensor noise, and communication restrictions. These challenges exist in missions such as planetary navigation, forest firefighting, and underwater exploration. Ideally, good control policies should allow the agents to deal with all the situations in an environment and enable them to accomplish their mission within the budgeted time and resources. However, a correct model of the environment is not typically available in advance, requiring the policy to be learned from data. Model-free reinforcement learning (RL) is a promising candidate for agents to learn control policies while engaged in complex tasks, because it allows the control policies to be learned directly from a subset of experiences and with time efficiency. Moreover, to ensure persistent performance improvement for RL, it is important that the control policies be concisely represented based on existing knowledge and have the flexibility to accommodate new experience. Bayesian nonparametric methods (BNPMs) both allow the complexity of models to be adaptive to data and provide a principled way for discovering and representing new knowledge.

In this thesis, we investigate approaches for RL in centralized and decentralized sequential decision-making problems using BNPMs. We show how control policies can be learned efficiently under model-free RL schemes with BNPMs. Specifically, for centralized sequential decision-making, we study Q-learning with Gaussian processes to solve Markov decision processes, and we also employ hierarchical Dirichlet processes as the prior for the control policy parameters to solve partially observable Markov decision processes. For decentralized partially observable Markov decision processes, we use stick-breaking processes as the prior for the controller of each agent. We develop efficient inference algorithms for learning the corresponding control policies. We demonstrate that by combining model-free RL and BNPMs with efficient algorithm design, we are able to scale up RL methods to complex problems that cannot otherwise be solved due to the lack of model knowledge. We adaptively learn control policies with concise structure and high value from a relatively small amount of data.
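As a small, generic illustration of the nonparametric machinery mentioned above (not the thesis implementation), the stick-breaking construction generates mixture weights whose effective number of components can grow with the data:

```python
import numpy as np

def stick_breaking_weights(alpha, k, rng=None):
    """First k weights of a Dirichlet process via stick breaking:
    v_j ~ Beta(1, alpha), w_j = v_j * prod_{i<j}(1 - v_i)."""
    if rng is None:
        rng = np.random.default_rng()
    v = rng.beta(1.0, alpha, size=k)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

print(stick_breaking_weights(alpha=2.0, k=10))   # weights sum to just under 1
```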
119 |
Multi-fidelity Gaussian process regression for computer experiments. Le Gratiet, Loic. 04 October 2013.
This work addresses Gaussian-process-based approximation of a code that can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging-based method is proposed. In particular, this formulation allows for fast implementation and for closed-form expressions of the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which makes the practical application of such a method feasible in real cases. Furthermore, fast cross-validation, sequential experimental design, and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e., the decay rate of the mean square error) on the smoothness of the underlying function. A proof in a fairly general setting (which includes the classical models of Gaussian-process-based metamodels with stationary covariance functions) has been obtained, whereas previous proofs held only for degenerate kernels (i.e., when the process is in fact finite-dimensional). This result makes it possible to rigorously address practical questions such as the optimal allocation of the budget between different levels of code in the multi-fidelity framework.
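A stripped-down sketch of the two-level idea, under strong simplifying assumptions (a constant scaling factor, generic kernels, and no closed-form universal co-kriging as treated in the thesis): model the cheap code with one GP, and the discrepancy between the expensive code and a scaled cheap prediction with another.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_two_level(X_lo, y_lo, X_hi, y_hi):
    """Recursive two-level surrogate: y_hi(x) ~ rho * y_lo(x) + delta(x)."""
    gp_lo = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_lo, y_lo)
    lo_at_hi = gp_lo.predict(X_hi)
    rho = float(lo_at_hi @ y_hi) / float(lo_at_hi @ lo_at_hi)   # least-squares scale factor
    gp_delta = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    gp_delta.fit(X_hi, y_hi - rho * lo_at_hi)                    # GP on the discrepancy
    return gp_lo, gp_delta, rho

def predict_high(X_new, gp_lo, gp_delta, rho):
    return rho * gp_lo.predict(X_new) + gp_delta.predict(X_new)
```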
120 |
A computational model of engineering decision making. Heller, Collin M. 13 January 2014.
The research objective of this thesis is to formulate and demonstrate a computational framework for modeling the design decisions of engineers. This framework is intended to be descriptive in nature as opposed to prescriptive or normative; the output of the model represents a plausible result of a designer's decision making process. The framework decomposes the decision into three elements: the problem statement, the designer's beliefs about the alternatives, and the designer's preferences. Multi-attribute utility theory is used to capture designer preferences for multiple objectives under uncertainty. Machine-learning techniques are used to store the designer's knowledge and to make Bayesian inferences regarding the attributes of alternatives. These models are integrated into the framework of a Markov decision process to simulate multiple sequential decisions. The overall framework enables the designer's decision problem to be transformed into an optimization problem statement; the simulated designer selects the alternative with the maximum expected utility. Although utility theory is typically viewed as a normative decision framework, the perspective in this research is that the approach can be used in a descriptive context for modeling rational and non-time critical decisions by engineering designers. This approach is intended to enable the formalisms of utility theory to be used to design human subjects experiments involving engineers in design organizations based on pairwise lotteries and other methods for preference elicitation. The results of these experiments would substantiate the selection of parameters in the model to enable it to be used to diagnose potential problems in engineering design projects.
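The decision step itself can be illustrated in a few lines (the utility form, attribute names, and numbers below are invented for illustration; the thesis builds utilities from elicited designer preferences and beliefs from Gaussian process models): the simulated designer scores each alternative by expected multi-attribute utility under its belief distribution and selects the maximizer.

```python
import numpy as np

def utility(attrs, weights=np.array([0.6, 0.4])):
    """Additive multi-attribute utility with exponential single-attribute utilities."""
    return float(weights @ (1.0 - np.exp(-attrs)))

def expected_utility(mean, std, n_samples=5000, seed=0):
    """Monte Carlo estimate of E[u] under independent Gaussian beliefs about the attributes."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(mean, std, size=(n_samples, len(mean)))
    return np.mean([utility(s) for s in samples])

# Hypothetical beliefs (mean, std) about two attributes of two design alternatives.
beliefs = {"design_A": (np.array([1.0, 2.0]), np.array([0.3, 0.5])),
           "design_B": (np.array([1.5, 1.2]), np.array([0.6, 0.2]))}
best = max(beliefs, key=lambda name: expected_utility(*beliefs[name]))
print("selected:", best)
```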
The purpose of the decision-making framework is to enable the development of a design process simulation of an organization involved in the development of a large-scale complex engineered system such as an aircraft or spacecraft. The decision model will allow researchers to determine the broader effects of individual engineering decisions on the aggregate dynamics of the design process and the resulting performance of the designed artifact itself. To illustrate the model's applicability in this context, the framework is demonstrated on three example problems: a one-dimensional decision problem, a multidimensional turbojet design problem, and a variable fidelity analysis problem. Individual utility functions are developed for designers in a requirements-driven design problem and then combined into a multi-attribute utility function. Gaussian process models are used to represent the designer's beliefs about the alternatives, and a custom covariance function is formulated to more accurately represent a designer's uncertainty in beliefs about the design attributes.
|