11

Reliability-based design optimization using surrogate model with assessment of confidence level

Zhao, Liang 01 July 2011 (has links)
The objective of this study is to develop an accurate surrogate modeling method for constructing surrogate models that represent the performance measures of compute-intensive simulation models in reliability-based design optimization (RBDO). In addition, an assessment method for the confidence level of the surrogate model, and a conservative surrogate model that accounts for the uncertainty of prediction in the untested design domain when the number of samples is limited, are developed and integrated into the RBDO process to ensure confidence in satisfying the probabilistic constraints at the optimal design. The effort involves: (1) developing a new surrogate modeling method that outperforms existing surrogate modeling methods in terms of accuracy for reliability analysis in RBDO; (2) developing a sampling method that efficiently and effectively inserts samples into the design domain for accurate surrogate modeling; (3) generating a surrogate model to approximate the probabilistic constraint and its sensitivity with respect to the design variables in most-probable-point-based RBDO; (4) using the sampling method with the surrogate model to approximate the performance function in sampling-based RBDO; (5) generating a conservative surrogate model that conservatively approximates the performance function in sampling-based RBDO and assures that the obtained optimum satisfies the probabilistic constraints.

In applying RBDO to large-scale, complex engineering applications, a surrogate model is commonly used to represent the compute-intensive simulation model of the performance function. However, the accuracy of the surrogate model remains challenging for highly nonlinear, high-dimensional applications. In this work, a new method, the Dynamic Kriging (DKG) method, is proposed to construct the surrogate model accurately. In the DKG method, a generalized pattern search algorithm finds the accurate optimum of the correlation parameter, and the optimal mean structure is set using basis functions that a genetic algorithm selects from candidate basis functions based on a new accuracy criterion. In addition, a sequential sampling strategy based on the confidence interval of the DKG surrogate model is proposed. By combining this sampling method with the DKG method, high accuracy can be achieved efficiently.

Using the accurate surrogate model, both most-probable-point (MPP)-based RBDO and sampling-based RBDO can be carried out. In applying the surrogate models to MPP-based RBDO and sampling-based RBDO, several efficiency strategies are proposed: (1) using a local window for surrogate modeling; (2) adapting the window size to different design candidates; (3) reusing samples in the local window; (4) using violated constraints for the surrogate model accuracy check; and (5) adapting the initial point for correlation parameter estimation. To assure the accuracy of the surrogate model when the number of samples is limited, and to assure that the obtained optimum design satisfies the probabilistic constraints, a conservative surrogate model using the weighted Kriging variance is developed and implemented for sampling-based RBDO.
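As a rough illustration of the confidence-interval-driven sequential sampling described above, the sketch below uses an ordinary Gaussian-process (kriging) regressor from scikit-learn in place of the DKG method, with a toy two-variable performance function; the design domain, kernel, and sample counts are all assumptions made for this example.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def performance(x):
    # Stand-in for the compute-intensive performance function in RBDO.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(-1, 1, (10, 2))   # initial samples in the design domain
y = performance(X)

for it in range(15):
    gp = GaussianProcessRegressor(
        kernel=ConstantKernel() * RBF(length_scale=[0.5, 0.5]),
        normalize_y=True).fit(X, y)
    # Insert the next sample where the prediction interval is widest.
    cand = rng.uniform(-1, 1, (2000, 2))
    _, std = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(std)]
    X = np.vstack([X, x_new])
    y = np.append(y, performance(x_new[None]))
```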
12

Développement d’une nouvelle méthode de réduction de modèle basée sur les hypersurfaces NURBS (Non-Uniform Rational B-Splines) / Development of a new metamodelling method based on NURBS (Non-Uniform Rational B-Splines) hypersurfaces

Audoux, Yohann 14 June 2019 (has links)
Despite undeniable progress in computer science over recent decades, some problems remain difficult to treat, either because of their numerical complexity (optimisation problems, ...) or because they are subject to specific constraints such as real-time processing (virtual and augmented reality, ...). In this context, model reduction methods can reduce the computational cost of complex multi-field and/or multi-scale simulations. The metamodelling process consists of setting up a metamodel that requires fewer resources to evaluate than the complex model from which it is derived, while guaranteeing a minimal accuracy. Current methods generally require either user expertise or a large number of arbitrary choices, and they are often tailored to a specific application and difficult to transpose to other fields. Our approach therefore aims at obtaining a good metamodel, if not the best one, whatever the problem at hand. The developed strategy relies on NURBS hypersurfaces and stands out from existing approaches by making no simplifying assumptions about their parameters. To do so, a metaheuristic (a genetic algorithm) able to handle optimisation problems whose number of variables is not constant sets all the hypersurface parameters automatically, so that the complexity of these choices is not transferred to the user.
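The sketch below illustrates the building block behind this approach: a NURBS curve (the one-dimensional analogue of the hypersurfaces used in the thesis) evaluated as a ratio of two B-splines with scipy. The degree, knot vector, weights, and control points here are illustrative assumptions; in the thesis they would all be set automatically by the genetic algorithm.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Clamped knot vector for 6 control points (assumption for this sketch).
knots = np.array([0, 0, 0, 0, 0.4, 0.7, 1, 1, 1, 1])
ctrl = np.array([0.0, 1.5, 0.2, 2.0, 1.0, 0.5])   # control points P_i
w = np.array([1.0, 0.8, 1.2, 1.0, 0.9, 1.0])      # weights w_i

# A NURBS curve is a ratio of two B-splines:
#   C(u) = sum_i N_ip(u) w_i P_i / sum_i N_ip(u) w_i
num = BSpline(knots, w * ctrl, degree)
den = BSpline(knots, w, degree)

u = np.linspace(0, 1, 200)
curve = num(u) / den(u)   # metamodel response along the parameter u
```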
13

Machine Learning for Inverse Design

Thomas, Evan 08 February 2023 (has links)
"Inverse design" formulates the design process as an inverse problem; optimal values of a parameterized design space are sought so to best reproduce quantitative outcomes from the forwards dynamics of the design's intended environment. Arguably, two subtasks are necessary to iteratively solve such a design problem; the generation and evaluation of designs. This thesis work documents two experiments leveraging machine learning (ML) to facilitate each subtask. Included first is a review of relevant physics and machine learning theory. Then, analysis on the theoretical foundations of ensemble methods realizes a novel equation describing the effect of Bagging and Random Forests on the expected mean squared error of a base model. Complex models of design evaluation may capture environmental dynamics beyond those that are useful for a design optimization. These constitute unnecessary time and computational costs. The first experiment attempts to avoid these by replacing EGSnrc, a Monte Carlo simulation of coupled electron-photon transport, with an efficient ML "surrogate model". To investigate the benefits of surrogate models, a simulated annealing design optimization is twice conducted to reproduce an arbitrary target design, once using EGSnrc and once using a random forest regressor as a surrogate model. It is found that using the surrogate model produced approximately an 100x speed-up, and converged upon an effective design in fewer iterations. In conclusion, using a surrogate model is faster and (in this case) also more effective per-iteration. The second experiment of this thesis work leveraged machine learning for design generation. As a proof-of-concept design objective, the work seeks to efficiently sample 2D Ising spin model configurations from an optimized design space with a uniform distribution of internal energies. Randomly sampling configurations yields a narrow Gaussian distribution of internal energies. Convolutional neural networks (CNN) trained with NeuroEvolution, a mutation-only genetic algorithm, were used to statistically shape the design space. Networks contribute to sampling by processing random inputs, their outputs are then regularized into acceptable configurations. Samples produced with CNNs had more uniform distribution of internal energies, and ranged across the entire space of possible values. In combination with conventional sampling methods, these CNNs can facilitate the sampling of configurations with uniformly distributed internal energies.
14

Comparative Analysis of Surrogate Models for the Dissolution of Spent Nuclear Fuel

Awe, Dayo 01 May 2024 (has links) (PDF)
This thesis presents a comparative analysis of surrogate models for the dissolution of spent nuclear fuel, with a focus on the use of deep learning techniques. The study explores the accuracy and efficiency of different machine learning methods in predicting the dissolution behavior of nuclear waste, and compares them to traditional modeling approaches. The results show that deep learning models can achieve high accuracy in predicting the dissolution rate, while also being computationally efficient. The study also discusses the potential applications of surrogate modeling in the field of nuclear waste management, including the optimization of waste disposal strategies and the design of more effective containment systems. Overall, this research highlights the importance of surrogate modeling in improving our understanding of nuclear waste behavior and developing more sustainable waste management practices.
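The abstract does not specify which models were compared, but a comparison of this kind could be set up as below, with synthetic data standing in for the dissolution dataset and three generic regressors as stand-ins for the surrogates studied.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in: features such as temperature, pH, burnup, surface area.
X = rng.uniform(0, 1, size=(400, 4))
y = np.exp(-2 * X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=400)

models = {
    "linear": LinearRegression(),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural net": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                               random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:>13}: mean R^2 = {r2.mean():.3f}")
```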
15

Development of Surrogate Model for FEM Error Prediction using Deep Learning

Jain, Siddharth 07 July 2022 (has links)
This research is a proof-of-concept study to develop a surrogate model, using deep learning (DL), to predict the solution error for a given model with a given mesh. We take von Mises stress contours and predict two different types of error indicator contours: (i) the von Mises error indicator (MISESERI) and (ii) the energy density error indicator (ENDENERI). Error indicators are designed to identify the areas of the solution domain where the gradient has not been properly captured; they use the spatial gradient distribution of the existing solution on a given mesh to estimate the error. Because of discretization error from meshing and the nature of the finite element method, these error indicators are leveraged to study and reduce errors in the finite element solution through an adaptive remeshing scheme. Adaptive remeshing is an iterative and computationally expensive process that reduces the error computed during the post-processing step. To overcome this limitation, we propose replacing it with data-driven techniques. We introduce an image-processing-based surrogate model that solves an image-to-image regression problem with convolutional neural networks (CNNs): it takes a 256 × 256 color image of a von Mises stress contour and outputs the required error indicator. To train this model with good generalization performance, we developed four different geometries for each of three case studies: (i) a quarter plate with a hole, (ii) a simply supported plate with multiple holes, and (iii) a simply supported stiffened plate. The research is implemented in three phases: phase I involves designing and developing a CNN trained on stress contour images with their corresponding von Mises stress values volume-averaged over the entire domain; phase II involves developing a surrogate model to perform image-to-image regression; and phase III extends the capabilities of phase II to make the surrogate model more generalized and robust. The final surrogate model, trained on the global dataset of 12,000 images, consists of three autoencoders, one encoder-decoder assembly, and two multi-output regression neural networks. With a training error of less than 1%, the neural network shows good memorization and generalization performance. The final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on the testing datasets. This study can therefore be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks. / Master of Science / This research is a proof-of-concept study to develop an image-processing-based neural network (NN) model that solves an image-to-image regression problem. In finite element analysis (FEA), error indicators are used to study and reduce the errors that arise from meshing and the nature of the finite element method. For this research, we predict two different types of error indicator contours using stress images as inputs to the NN model. In popular FEA packages, an adaptive remeshing scheme optimizes mesh quality by iteratively computing error indicators, which makes the process computationally expensive. To overcome this limitation, we propose replacing it with convolutional neural networks (CNNs), which are particularly suited to image-based data. To train our CNN model with good generalization performance, we developed four different geometries with varying load cases.
The research is implemented in three phases: phase I involves designing and developing a CNN model to perform initial training on small images; phase II involves developing an assembled neural network to perform image-to-image regression; and phase III extends the capabilities of phase II for more generalized and robust results. With a training error of less than 1%, the neural network shows good memorization and generalization performance. The final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on the testing datasets. This study can therefore be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks.
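A minimal sketch of the image-to-image regression setup described above, in PyTorch; the layer sizes and depths are illustrative assumptions, not the thesis architecture.

```python
import torch
import torch.nn as nn

# Minimal encoder-decoder for image-to-image regression: a stress-contour
# image in, an error-indicator field out.
class ErrorIndicatorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 64 -> 128
            nn.ConvTranspose2d(16, 1, 2, stride=2),                # 128 -> 256
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ErrorIndicatorNet()
stress = torch.rand(8, 3, 256, 256)      # batch of stress-contour images
indicator = model(stress)                # predicted error-indicator maps
loss = nn.functional.mse_loss(indicator, torch.rand(8, 1, 256, 256))
loss.backward()
```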
16

Transfer Learning for Multi-surrogate-model Optimization

Gvozdetska, Nataliia 14 January 2021 (has links)
Surrogate-model-based optimization is widely used to solve black-box optimization problems when evaluating the target system is expensive. However, when the optimization budget is limited to a single evaluation or just a few, surrogate-model-based optimization may not perform well because knowledge about the search space is lacking. In this case, transfer learning helps to obtain a good optimization result by reusing experience from previous optimization runs; and even when the budget is not strictly limited, transfer learning can improve the final results of black-box optimization. Recent work in surrogate-model-based optimization showed that using multiple surrogates (i.e., applying multi-surrogate-model optimization) can be extremely efficient in complex search spaces. The main hypothesis of this thesis is that transfer learning can further improve the quality of multi-surrogate-model optimization. However, to the best of our knowledge, no approaches to transfer learning in the multi-surrogate-model context exist yet. In this thesis, we propose an approach to transfer learning for multi-surrogate-model optimization. It encompasses an improved method of determining the expediency of knowledge transfer, adapted multi-surrogate-model recommendation, multi-task-learning parameter tuning, and few-shot learning techniques. We evaluated the proposed approach on a set of algorithm selection and parameter setting problems, comprising mathematical function optimization and the traveling salesman problem, as well as random forest hyperparameter tuning over OpenML datasets. The evaluation shows that the proposed approach improves the quality delivered by multi-surrogate-model optimization and ensures good optimization results even under a strictly limited budget.

Contents:
1 Introduction
 1.1 Motivation
 1.2 Research objective
 1.3 Solution overview
 1.4 Thesis structure
2 Background
 2.1 Optimization problems
 2.2 From single- to multi-surrogate-model optimization
  2.2.1 Classical surrogate-model-based optimization
  2.2.2 The purpose of multi-surrogate-model optimization
  2.2.3 BRISE 2.5.0: Multi-surrogate-model-based software product line for parameter tuning
 2.3 Transfer learning
  2.3.1 Definition and purpose of transfer learning
 2.4 Summary of the Background
3 Related work
 3.1 Questions to transfer learning
 3.2 When to transfer: Existing approaches to determining the expediency of knowledge transfer
  3.2.1 Meta-features-based approaches
  3.2.2 Surrogate-model-based similarity
  3.2.3 Relative landmarks-based approaches
  3.2.4 Sampling landmarks-based approaches
  3.2.5 Similarity threshold problem
 3.3 What to transfer: Existing approaches to knowledge transfer
  3.3.1 Ensemble learning
  3.3.2 Search space pruning
  3.3.3 Multi-task learning
  3.3.4 Surrogate model recommendation
  3.3.5 Few-shot learning
  3.3.6 Other approaches to transferring knowledge
 3.4 How to transfer (discussion): Peculiarities and required design decisions for the TL implementation in a multi-surrogate-model setup
  3.4.1 Peculiarities of model recommendation in a multi-surrogate-model setup
  3.4.2 Required design decisions in multi-task learning
  3.4.3 Few-shot learning problem
 3.5 Summary of the related work analysis
4 Transfer learning for multi-surrogate-model optimization
 4.1 Expediency of knowledge transfer
  4.1.1 Experiments' similarity definition as a variability point
  4.1.2 Clustering to filter the most suitable experiments
 4.2 Dynamic model recommendation in a multi-surrogate-model setup
  4.2.1 Variable recommendation granularity
  4.2.2 Model recommendation by time and performance criteria
 4.3 Multi-task learning
 4.4 Implementation of the proposed concept
 4.5 Conclusion of the proposed concept
5 Evaluation
 5.1 Benchmark suite
  5.1.1 APSP for the meta-heuristics
  5.1.2 Hyperparameter optimization of the Random Forest algorithm
 5.2 Environment setup
 5.3 Evaluation plan
 5.4 Baseline evaluation
 5.5 Meta-tuning for a multi-task learning approach
  5.5.1 Revealing the dependencies between the parameters of multi-task learning and its performance
  5.5.2 Multi-task learning performance with the best found parameters
 5.6 Expediency determination approach
  5.6.1 Expediency determination as a variability point
  5.6.2 Flexible number of the most similar experiments with the help of clustering
  5.6.3 Influence of the number of initial samples on the quality of expediency determination
 5.7 Multi-surrogate-model recommendation
 5.8 Few-shot learning
  5.8.1 Transfer of the built surrogate models' combination
  5.8.2 Transfer of the best configuration
  5.8.3 Transfer from different experiment instances
 5.9 Summary of the evaluation results
6 Conclusion and Future work
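As a rough sketch of the surrogate-model recommendation idea, simplified to a single cross-validation criterion (the thesis also weighs time and performance criteria and transfer-learning signals):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import cross_val_score

def recommend_surrogate(X, y):
    """Pick the surrogate that generalizes best on the evaluated points.

    A simplified stand-in for the recommendation step in a
    multi-surrogate-model optimizer.
    """
    candidates = {
        "gp": GaussianProcessRegressor(normalize_y=True),
        "rf": RandomForestRegressor(n_estimators=100, random_state=0),
        "ridge": BayesianRidge(),
    }
    scores = {name: cross_val_score(m, X, y, cv=3,
                                    scoring="neg_mean_squared_error").mean()
              for name, m in candidates.items()}
    best = max(scores, key=scores.get)
    return candidates[best].fit(X, y), best
```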
17

Modelling the performance of an integrated urban wastewater system under future conditions

Astaraie Imani, Maryam January 2012 (has links)
The performance of Integrated Urban Wastewater Systems (IUWS), comprising the sewer system, the wastewater treatment plant (WWTP) and the receiving river, in both operational control and design under unavoidable future climate change and urbanisation, is a concern for water engineers and still needs improvement. Moreover, with growing worldwide attention to the environment, water quality, as its main component, has received significant attention because of its impact on human health and aquatic life. Hence systems must perform better under future changes in order to maintain water quality. The research presented in this thesis describes the development of risk-based and non-risk-based models to improve the operational control and design of the IUWS under future climate change and urbanisation, with the aim of maintaining water quality in the recipients. The impacts of climate change and urbanisation on IUWS performance, in terms of receiving-water quality, were investigated, and different indicators of climate change and urbanisation were selected for evaluation. The performance of the IUWS under future climate change and urbanisation was improved by developing novel non-risk-based operational control and design models that aim to meet the water quality standards in the river; this starts from a scenario-based approach describing the possible features of future climate change and/or urbanisation. The performance was further improved by developing novel risk-based operational control and design models that reduce the risk of water quality failures and so maintain the health of aquatic life; these account for the uncertainties in the urbanisation parameters considered, and the risk concept is applied to estimate the risk of water quality breaches for aquatic life. Because the IUWS simulation models called during optimisation are complex and time-demanding, excessive running times are a concern in this study. The novel "MOGA-ANNβ" algorithm was therefore developed to speed up the optimisation process throughout the thesis while preserving accuracy; the meta-model developed was tested and its performance evaluated. The results of the impact analysis of future climate change and urbanisation on IUWS performance showed that future conditions have the potential to influence IUWS performance in terms of both water quality and quantity, so selecting proper future-condition parameters is important for system impact analysis. The observations also demonstrated that system improvement is required under future conditions: both risk-based and non-risk-based operational control optimisation of the IUWS in isolation is not enough to cope with future conditions, so IUWS design optimisation was carried out to improve system performance. The risk-based design improvement of the IUWS showed better potential than the non-risk-based design improvement to meet all the water quality criteria considered in this study.
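A single-objective, much-simplified sketch of the MOGA-ANN idea (the thesis's algorithm is multi-objective), with a toy function standing in for the IUWS simulator and a scikit-learn MLP as the ANN meta-model; population sizes and mutation scale are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def simulate_iuws(x):
    # Stand-in for the time-demanding IUWS water-quality simulation.
    return np.sum((x - 0.6) ** 2) + 0.1 * np.sin(10 * x).sum()

# Archive of expensive evaluations used to train the ANN meta-model.
X = rng.uniform(0, 1, size=(200, 5))
y = np.array([simulate_iuws(x) for x in X])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, y)

pop = rng.uniform(0, 1, size=(20, 5))
pop_scores = np.array([simulate_iuws(x) for x in pop])
for gen in range(30):
    # Generate more offspring than the simulation budget allows.
    parents = pop[rng.integers(0, len(pop), 60)]
    offspring = np.clip(parents + rng.normal(0, 0.1, (60, 5)), 0, 1)
    # The ANN meta-model prescreens them; only the best 20 are simulated.
    shortlist = offspring[np.argsort(ann.predict(offspring))[:20]]
    scores = np.array([simulate_iuws(x) for x in shortlist])
    merged = np.vstack([pop, shortlist])
    merged_scores = np.concatenate([pop_scores, scores])
    keep = np.argsort(merged_scores)[:20]
    pop, pop_scores = merged[keep], merged_scores[keep]

print("best design:", pop[0], "score:", pop_scores[0])
```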
18

Construction automatique de modèles multi-corps de substitution aux simulations de crashtests / Automatized multi-body surrogate models creation to replace crashtests simulations

Loreau, Tanguy 18 December 2019 (has links)
At Renault, to carry out upstream studies, the teams in charge of crashworthiness use very simple models to pre-size the vehicle. Today, these models are built from the physical behaviour of only one or a few reference vehicles. They work and allow the project to be sized, but the company now wishes to build its upstream models using all of its vehicles; in other words, it wants an automatic method for analysing crashtest simulations so that their results can be capitalized in a database of simplified models. To meet this goal, we use multi-body model theory and develop a method, CrashScan, that analyses crashtest simulations to extract the data required to build a surrogate multi-body model. The analysis process implemented in CrashScan can be split into three major steps. The first identifies the weakly deformed zones in a crashtest simulation, from which the topological graph of the future surrogate model is built. The second analyses the relative kinematics between the weakly deformed zones: the major directions and the deformation modes (e.g. crushing or bending) are identified by analysing the relative movements. The last step analyses the forces and moments between the weakly deformed zones, expressed in the frames associated with the major deformation directions, as functions of the deformations. This allows us to identify equivalent Bouc-Wen hysteretic models, which have three parameters useful in our case: a stiffness, a threshold force before plastification, and a hardening slope. These parameters can be used directly by upstream-studies experts. Finally, we build surrogate multi-body models for three different use cases and compare them with their references on the results they produce for the criteria used upstream: the models generated by CrashScan appear to provide the precision and fidelity required for use in the upstream phases of automotive development. To continue this research work towards an industrial solution, some obstacles remain; the main ones are the decomposition of an arbitrary movement into six elementary ones and multi-body synthesis on elements other than beams.
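A minimal sketch of the Bouc-Wen restoring-force model mentioned above, integrated with an explicit Euler step; the parameter values and the particular normalized form used here are assumptions for illustration, and the three parameters named in the abstract appear as k (stiffness), fy (threshold force), and alpha*k (hardening slope).

```python
import numpy as np

def bouc_wen_force(x, k=2.0e4, fy=4.0e3, alpha=0.1,
                   beta=0.5, gamma=0.5, n=1.0):
    """Restoring force of a Bouc-Wen element along a displacement history x.

    k     : initial stiffness
    fy    : threshold (yield) force before plastification
    alpha : post-yield to initial stiffness ratio (hardening slope = alpha*k)
    """
    xy = fy / k                 # yield displacement
    z = 0.0                     # hysteretic internal variable
    forces = np.empty_like(x)
    xprev = x[0]
    for i, xi in enumerate(x):
        dx = xi - xprev
        # Standard normalized Bouc-Wen evolution law (explicit Euler step).
        dz = (1.0 - (gamma + beta * np.sign(dx * z)) * abs(z) ** n) * dx / xy
        z += dz
        forces[i] = alpha * k * xi + (1 - alpha) * fy * z
        xprev = xi
    return forces

# Example: cyclic displacement producing a hysteresis loop.
t = np.linspace(0, 2, 2000)
x = 0.5 * np.sin(2 * np.pi * t)
F = bouc_wen_force(x)
```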
19

Metody evoluční optimalizace založené na modelech / Model-based evolutionary optimization methods

Bajer, Lukáš January 2018 (has links)
Model-based black-box optimization is a topic that has been intensively studied in both academia and industry. Real-world optimization tasks in particular are often characterized by expensive or time-demanding objective functions, for which statistical models can save resources or speed up the optimization. Each of the three parts of the thesis concerns one such model: first, copulas are used instead of a graphical model in estimation-of-distribution algorithms; second, RBF networks serve as surrogate models in mixed-variable genetic algorithms; and third, Gaussian processes are employed in Bayesian optimization algorithms as a sampling model and in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as a surrogate model. The last combination, described in the core part of the thesis, resulted in the Doubly Trained Surrogate CMA-ES (DTS-CMA-ES). This algorithm uses the uncertainty prediction of a Gaussian process to select only a part of the CMA-ES population for evaluation with the expensive objective function, while the mean prediction is used for the rest. The DTS-CMA-ES improves upon state-of-the-art surrogate continuous optimizers in several benchmark tests.
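A simplified sketch of the DTS-CMA-ES selection step, using a scikit-learn Gaussian process and the predictive standard deviation as the uncertainty criterion; the full algorithm refits the GP on the newly evaluated points ("doubly trained") and runs this inside a complete CMA-ES generation loop.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def expensive_f(x):
    return np.sum(x ** 2, axis=-1)   # stand-in objective

# Archive of points already evaluated with the true objective.
X_arch = rng.normal(0, 1, (30, 5))
y_arch = expensive_f(X_arch)
gp = GaussianProcessRegressor(normalize_y=True).fit(X_arch, y_arch)

# One generation's population (in CMA-ES, sampled from the adapted Gaussian).
pop = rng.normal(0, 1, (12, 5))
mean, std = gp.predict(pop, return_std=True)

# Evaluate only the most uncertain individuals with the expensive objective;
# the GP mean prediction stands in for the rest.
k = 3
uncertain = np.argsort(std)[-k:]
fitness = mean.copy()
fitness[uncertain] = expensive_f(pop[uncertain])
```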
20

Bayesian Optimization for Design Parameters of Autoinjectors

Heliben Naimeshkum Parikh (15340111) 24 April 2023 (has links)
The document describes the computational framework to optimize spring-driven autoinjectors. It involves Bayesian optimization for efficient and cost-effective design of autoinjectors.
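The document gives no implementation details here, but a generic Bayesian optimization loop of the kind described could look like the following, with a Gaussian-process model and an expected-improvement acquisition; the objective and the two design parameters are hypothetical stand-ins for the autoinjector simulation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def objective(x):
    # Stand-in for the autoinjector simulation (e.g., deviation of injection
    # time from target vs. normalized spring stiffness and orifice diameter
    # -- hypothetical parameters for this sketch).
    return (x[0] - 0.7) ** 2 + (x[1] - 0.2) ** 2

X = rng.uniform(0, 1, (5, 2))             # initial design of experiments
y = np.array([objective(x) for x in X])

for it in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, (1000, 2))   # random acquisition candidates
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement over the current best observation (minimization).
    best = y.min()
    gamma = (best - mu) / np.maximum(sigma, 1e-12)
    ei = sigma * (gamma * norm.cdf(gamma) + norm.pdf(gamma))
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))

print("best design:", X[np.argmin(y)], "objective:", y.min())
```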
