11

Development of a new metamodelling method based on NURBS (Non-Uniform Rational B-Splines) hypersurfaces

Audoux, Yohann 14 June 2019 (has links)
Despite decades of undeniable progress in computer science, some problems remain intractable, either because of their numerical complexity (optimisation problems, …) or because they are subject to specific constraints such as real-time processing (virtual and augmented reality, …). In this context, metamodelling techniques can reduce the computational effort of complex multi-field and/or multi-scale simulations. The metamodelling process consists of setting up a metamodel that needs fewer resources to evaluate than the complex model from which it is derived, while guaranteeing a minimal accuracy. Current methods generally require either user expertise or a large number of arbitrary choices, and they are often tailored to a specific application and can hardly be transposed to other fields. Our approach therefore aims at obtaining a good metamodel, if not the best one, whatever the problem at hand. The developed strategy relies on NURBS hypersurfaces and stands out from existing approaches by making no simplifying assumptions about their parameters. To do so, a metaheuristic (a genetic algorithm) able to handle optimisation problems with a variable number of variables sets all the hypersurface parameters automatically, so that the complexity of these choices is not transferred to the user.
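As an illustration of the idea, here is a minimal sketch (not the thesis's implementation; the toy 1D target and all names are assumptions) of a genetic algorithm whose chromosomes have variable length: each individual carries its own number of B-spline control points, so the metaheuristic chooses the model size and the parameter values at once.

```python
# Variable-length GA fitting a 1D B-spline metamodel to a toy target response.
# A sketch only: the thesis's NURBS hypersurfaces generalize this to many
# dimensions with weights and knots also left free.
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(8 * x) * np.exp(-2 * x)          # stand-in for the expensive model

DEGREE = 3

def make_spline(ctrl):
    n = len(ctrl)
    # clamped uniform knot vector: length n + DEGREE + 1
    t = np.concatenate([np.zeros(DEGREE),
                        np.linspace(0, 1, n - DEGREE + 1),
                        np.ones(DEGREE)])
    return BSpline(t, ctrl, DEGREE)

def fitness(ctrl):
    err = np.mean((make_spline(ctrl)(x) - y) ** 2)
    return err + 1e-4 * len(ctrl)            # parsimony pressure on model size

def random_individual():
    n = int(rng.integers(DEGREE + 1, 20))    # variable chromosome length
    return rng.uniform(-1, 1, size=n)

def mutate(ind):
    ind = ind + rng.normal(0, 0.05, size=len(ind))
    if rng.random() < 0.2 and len(ind) < 25:            # grow: insert a control point
        i = int(rng.integers(len(ind)))
        ind = np.insert(ind, i, ind[i])
    elif rng.random() < 0.2 and len(ind) > DEGREE + 2:  # shrink: remove one
        ind = np.delete(ind, int(rng.integers(len(ind))))
    return ind

pop = [random_individual() for _ in range(40)]
for gen in range(200):                       # elitist mu+lambda scheme
    pop.sort(key=fitness)
    pop = pop[:20] + [mutate(p.copy()) for p in pop[:20]]
best = min(pop, key=fitness)
print(f"best fitness {fitness(best):.2e} with {len(best)} control points")
```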
12

Machine Learning for Inverse Design

Thomas, Evan 08 February 2023 (has links)
"Inverse design" formulates the design process as an inverse problem; optimal values of a parameterized design space are sought so to best reproduce quantitative outcomes from the forwards dynamics of the design's intended environment. Arguably, two subtasks are necessary to iteratively solve such a design problem; the generation and evaluation of designs. This thesis work documents two experiments leveraging machine learning (ML) to facilitate each subtask. Included first is a review of relevant physics and machine learning theory. Then, analysis on the theoretical foundations of ensemble methods realizes a novel equation describing the effect of Bagging and Random Forests on the expected mean squared error of a base model. Complex models of design evaluation may capture environmental dynamics beyond those that are useful for a design optimization. These constitute unnecessary time and computational costs. The first experiment attempts to avoid these by replacing EGSnrc, a Monte Carlo simulation of coupled electron-photon transport, with an efficient ML "surrogate model". To investigate the benefits of surrogate models, a simulated annealing design optimization is twice conducted to reproduce an arbitrary target design, once using EGSnrc and once using a random forest regressor as a surrogate model. It is found that using the surrogate model produced approximately an 100x speed-up, and converged upon an effective design in fewer iterations. In conclusion, using a surrogate model is faster and (in this case) also more effective per-iteration. The second experiment of this thesis work leveraged machine learning for design generation. As a proof-of-concept design objective, the work seeks to efficiently sample 2D Ising spin model configurations from an optimized design space with a uniform distribution of internal energies. Randomly sampling configurations yields a narrow Gaussian distribution of internal energies. Convolutional neural networks (CNN) trained with NeuroEvolution, a mutation-only genetic algorithm, were used to statistically shape the design space. Networks contribute to sampling by processing random inputs, their outputs are then regularized into acceptable configurations. Samples produced with CNNs had more uniform distribution of internal energies, and ranged across the entire space of possible values. In combination with conventional sampling methods, these CNNs can facilitate the sampling of configurations with uniformly distributed internal energies.
13

Comparative Analysis of Surrogate Models for the Dissolution of Spent Nuclear Fuel

Awe, Dayo 01 May 2024 (has links) (PDF)
This thesis presents a comparative analysis of surrogate models for the dissolution of spent nuclear fuel, with a focus on the use of deep learning techniques. The study explores the accuracy and efficiency of different machine learning methods in predicting the dissolution behavior of nuclear waste, and compares them to traditional modeling approaches. The results show that deep learning models can achieve high accuracy in predicting the dissolution rate, while also being computationally efficient. The study also discusses the potential applications of surrogate modeling in the field of nuclear waste management, including the optimization of waste disposal strategies and the design of more effective containment systems. Overall, this research highlights the importance of surrogate modeling in improving our understanding of nuclear waste behavior and developing more sustainable waste management practices.
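The abstract gives no implementation details; as a hedged illustration of the kind of deep-learning surrogate being compared, the following sketch trains a small neural network on an invented dissolution-rate dataset (the features and the rate law are placeholder assumptions, not the thesis's data):

```python
# Neural-network surrogate for a dissolution rate, scored on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
# Hypothetical inputs: temperature (C), pH, carbonate concentration, burnup.
X = rng.uniform([25, 6, 0.0, 10], [90, 9, 2.0, 60], size=(2000, 4))
rate = (1e-3 * np.exp(-4000 / (X[:, 0] + 273))         # synthetic stand-in for
        * (10 ** (8 - X[:, 1])) ** 0.3                 # a mechanistic model
        * (1 + X[:, 2]) * X[:, 3] ** 0.5)
X_tr, X_te, y_tr, y_te = train_test_split(X, np.log(rate), random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("held-out R^2 on log-rate:", r2_score(y_te, model.predict(X_te)))
```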
14

Development of Surrogate Model for FEM Error Prediction using Deep Learning

Jain, Siddharth 07 July 2022 (has links)
This research is a proof-of-concept study to develop a surrogate model, using deep learning (DL), to predict the solution error for a given model with a given mesh. We take von Mises stress contours and predict two different types of error indicator contours: (i) the von Mises error indicator (MISESERI) and (ii) the energy density error indicator (ENDENERI). Error indicators are designed to identify the areas of the solution domain where the gradient has not been properly captured; they use the spatial gradient distribution of the existing solution on a given mesh to estimate the error. Because of poor meshing and the nature of the finite element method, these error indicators are leveraged to study and reduce errors in the finite element solution through an adaptive remeshing scheme. Adaptive remeshing is an iterative and computationally expensive process for reducing the error computed during the post-processing step. To overcome this limitation we propose an approach that replaces it with data-driven techniques. We introduce an image-processing-based surrogate model that solves an image-to-image regression problem using convolutional neural networks (CNN): it takes a 256 × 256 colored image of a von Mises stress contour and outputs the required error indicator. To train this model with good generalization performance we developed four different geometries for each of three case studies: (i) a quarter plate with a hole, (ii) a simply supported plate with multiple holes, and (iii) a simply supported stiffened plate. The research is implemented in three phases: phase I involves the design and development of a CNN trained on stress contour images with their corresponding von Mises stress values volume-averaged over the entire domain; phase II develops a surrogate model that performs image-to-image regression; and phase III extends the capabilities of phase II to make the surrogate model more generalized and robust. The final surrogate model, trained on a global dataset of 12,000 images, consists of three autoencoders, one encoder-decoder assembly, and two multi-output regression neural networks. A training error of less than 1% indicates good memorization and generalization performance. The final surrogate model takes 15.5 hours to train and less than a minute to predict the error indicators on the testing datasets. This study can therefore be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks. / Master of Science / This research is a proof-of-concept study to develop an image-processing-based neural network (NN) model that solves an image-to-image regression problem. In finite element analysis (FEA), because of poor meshing and the nature of the finite element method, error indicators are used to study and reduce errors. In this research, we predict two different types of error indicator contours by using stress images as inputs to the NN model. In popular FEA packages, an adaptive remeshing scheme optimizes mesh quality by iteratively computing error indicators, which makes the process computationally expensive. To overcome this limitation we propose replacing it with convolutional neural networks (CNN), which are particularly suited to image-based data. To train our CNN model with good generalization performance we developed four different geometries with varying load cases.
The research is implemented in three phases: phase I designs and develops a CNN model for initial training on small images; phase II develops an assembled neural network for image-to-image regression; and phase III extends the capabilities of phase II for more generalized and robust results. A training error of less than 1% indicates good memorization and generalization performance, and the final surrogate model takes 15.5 hours to train and less than a minute to predict error indicators on the testing datasets. This study can therefore be considered a good first step toward developing an adaptive remeshing scheme using deep neural networks.
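A minimal sketch of the image-to-image regression setup (layer sizes are illustrative assumptions; the thesis's final assembly of three autoencoders and regression heads is more elaborate):

```python
# Encoder-decoder CNN: a 3-channel 256x256 stress-contour image in,
# a 1-channel 256x256 error-indicator contour out.
import torch
import torch.nn as nn

class ErrorIndicatorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # 3x256x256 -> 64x64x64
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(            # 64x64x64 -> 1x256x256
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = ErrorIndicatorNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

stress = torch.rand(8, 3, 256, 256)              # placeholder contour images
target = torch.rand(8, 1, 256, 256)              # placeholder error indicators
for _ in range(5):                               # toy training loop
    opt.zero_grad()
    loss = loss_fn(net(stress), target)
    loss.backward()
    opt.step()
```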
15

Transfer Learning for Multi-surrogate-model Optimization

Gvozdetska, Nataliia 14 January 2021 (has links)
Surrogate-model-based optimization is widely used to solve black-box optimization problems when the evaluation of the target system is expensive. However, when the optimization budget is limited to one or a few evaluations, surrogate-model-based optimization may not perform well due to the lack of knowledge about the search space. In this case, transfer learning helps to obtain a good optimization result by reusing experience from previous optimization runs; and even when the budget is not strictly limited, transfer learning is capable of improving the final results of black-box optimization. Recent work in surrogate-model-based optimization has shown that using multiple surrogates (i.e., applying multi-surrogate-model optimization) can be extremely efficient in complex search spaces. The main assumption of this thesis is that transfer learning can further improve the quality of multi-surrogate-model optimization; however, to the best of our knowledge, no approaches to transfer learning in the multi-surrogate-model context exist yet. In this thesis, we propose an approach to transfer learning for multi-surrogate-model optimization. It encompasses an improved method of determining the expediency of knowledge transfer, adapted multi-surrogate-model recommendation, multi-task learning parameter tuning, and few-shot learning techniques. We evaluated the proposed approach on a set of algorithm selection and parameter setting problems, comprising mathematical function optimization and the traveling salesman problem, as well as random forest hyperparameter tuning over OpenML datasets. The evaluation shows that the proposed approach improves the quality delivered by multi-surrogate-model optimization and ensures good optimization results even under a strictly limited budget.
Table of contents:
1 Introduction: 1.1 Motivation; 1.2 Research objective; 1.3 Solution overview; 1.4 Thesis structure
2 Background: 2.1 Optimization problems; 2.2 From single- to multi-surrogate-model optimization (2.2.1 Classical surrogate-model-based optimization; 2.2.2 The purpose of multi-surrogate-model optimization; 2.2.3 BRISE 2.5.0: Multi-surrogate-model-based software product line for parameter tuning); 2.3 Transfer learning (2.3.1 Definition and purpose of transfer learning); 2.4 Summary of the Background
3 Related work: 3.1 Questions to transfer learning; 3.2 When to transfer: existing approaches to determining the expediency of knowledge transfer (3.2.1 Meta-features-based approaches; 3.2.2 Surrogate-model-based similarity; 3.2.3 Relative landmarks-based approaches; 3.2.4 Sampling landmarks-based approaches; 3.2.5 Similarity threshold problem); 3.3 What to transfer: existing approaches to knowledge transfer (3.3.1 Ensemble learning; 3.3.2 Search space pruning; 3.3.3 Multi-task learning; 3.3.4 Surrogate model recommendation; 3.3.5 Few-shot learning; 3.3.6 Other approaches to transferring knowledge); 3.4 How to transfer (discussion): peculiarities and required design decisions for the TL implementation in a multi-surrogate-model setup (3.4.1 Peculiarities of model recommendation in a multi-surrogate-model setup; 3.4.2 Required design decisions in multi-task learning; 3.4.3 Few-shot learning problem); 3.5 Summary of the related work analysis
4 Transfer learning for multi-surrogate-model optimization: 4.1 Expediency of knowledge transfer (4.1.1 Experiments' similarity definition as a variability point; 4.1.2 Clustering to filter the most suitable experiments); 4.2 Dynamic model recommendation in a multi-surrogate-model setup (4.2.1 Variable recommendation granularity; 4.2.2 Model recommendation by time and performance criteria); 4.3 Multi-task learning; 4.4 Implementation of the proposed concept; 4.5 Conclusion of the proposed concept
5 Evaluation: 5.1 Benchmark suite (5.1.1 APSP for the meta-heuristics; 5.1.2 Hyperparameter optimization of the Random Forest algorithm); 5.2 Environment setup; 5.3 Evaluation plan; 5.4 Baseline evaluation; 5.5 Meta-tuning for a multi-task learning approach (5.5.1 Revealing the dependencies between the parameters of multi-task learning and its performance; 5.5.2 Multi-task learning performance with the best found parameters); 5.6 Expediency determination approach (5.6.1 Expediency determination as a variability point; 5.6.2 Flexible number of the most similar experiments with the help of clustering; 5.6.3 Influence of the number of initial samples on the quality of expediency determination); 5.7 Multi-surrogate-model recommendation; 5.8 Few-shot learning (5.8.1 Transfer of the built surrogate models' combination; 5.8.2 Transfer of the best configuration; 5.8.3 Transfer from different experiment instances); 5.9 Summary of the evaluation results
6 Conclusion and Future work
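As one hedged illustration of the "when to transfer" question treated in chapter 3, the following sketch scores archived experiments against a new one using sampling landmarks and rank correlation (the similarity measure and all data are assumptions, not the thesis's BRISE implementation):

```python
# Expediency of knowledge transfer: probe a few shared configurations
# ("sampling landmarks") and transfer only from experiments whose landscapes
# rank those probes the same way as the new experiment.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(3)
landmarks = rng.uniform(0, 1, size=(8, 3))        # few shared probe configurations

def objective(kind, x):                           # stand-ins for target systems
    base = np.sum((x - 0.5) ** 2)
    return base if kind == "smooth" else base + 0.3 * np.sin(20 * x[0])

archive = {"exp_A": [objective("smooth", x) for x in landmarks],
           "exp_B": [objective("rough", x) for x in landmarks]}
new_scores = [objective("smooth", x) for x in landmarks]

for name, old_scores in archive.items():
    tau, _ = kendalltau(new_scores, old_scores)   # rank correlation as similarity
    print(f"{name}: tau={tau:.2f} ->", "transfer" if tau > 0.6 else "skip")
```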
16

Modelling the performance of an integrated urban wastewater system under future conditions

Astaraie Imani, Maryam January 2012 (has links)
The performance of the Integrated Urban Wastewater System (IUWS), comprising the sewer system, the wastewater treatment plant (WWTP) and the river, in both operational control and design under unavoidable future climate change and urbanisation, is a concern for water engineers and still needs to be improved. Moreover, given the recent worldwide attention to the environment, water quality, as one of its main components, has received significant attention because of its impacts on human health, aquatic life, and so on. Hence there is a need to improve system performance under future changes so as to maintain water quality. The research presented in this thesis describes the development of risk-based and non-risk-based models to improve the operational control and design of the IUWS under future climate change and urbanisation, aiming to maintain the quality of water in the receiving waters. The thesis first investigates the impacts of climate change and urbanisation on IUWS performance in terms of receiving water quality; in line with this, different indicators of climate change and urbanisation were selected for evaluation. The performance of the IUWS under future climate change and urbanisation was then improved by developing novel non-risk-based operational control and design models aiming to meet the water quality standards in the recipient; this begins by applying a scenario-based approach to describe the possible features of future climate change and/or urbanisation. The performance of the IUWS was further improved by developing novel risk-based operational control and design models that reduce the risk of water quality failures and so maintain the health of aquatic life; this begins by considering the uncertainties in the urbanisation parameters, and the risk concept is applied to estimate the risk of water quality breaches for aquatic life. Because the IUWS simulation models called during the optimisation process are complex and time-demanding, excessive running times were a concern in this study. The novel "MOGA-ANNβ" algorithm was therefore developed to speed up the optimisation process throughout the thesis while preserving accuracy; the resulting meta-model was tested and its performance evaluated. The results of the impact analysis showed that future conditions have the potential to influence IUWS performance in terms of both water quality and quantity; accordingly, selecting proper parameters for the future conditions is important for the system impact analysis. The observations also demonstrated that system improvement is required under future conditions: both risk-based and non-risk-based operational control optimisation of the IUWS in isolation proved insufficient to cope with future conditions, so IUWS design optimisation was carried out to improve system performance. The risk-based design improvement of the IUWS showed better potential than the non-risk-based design improvement to meet all the water quality criteria considered in this study.
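A minimal sketch of the surrogate-assisted idea behind a MOGA-ANN-style scheme, with a toy single objective standing in for the IUWS water-quality simulator (all names and values are assumptions):

```python
# ANN-screened genetic algorithm: the neural network metamodel ranks candidate
# designs cheaply, and only the most promising ones reach the slow simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def slow_simulator(x):                     # stand-in for the IUWS model
    return np.sum((x - 0.4) ** 2)

X = rng.uniform(0, 1, size=(300, 5))       # offline training sample
y = np.array([slow_simulator(x) for x in X])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                   random_state=0).fit(X, y)

pop = rng.uniform(0, 1, size=(100, 5))
for gen in range(30):
    children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)
    cand = np.vstack([pop, children])
    screened = cand[np.argsort(ann.predict(cand))[:30]]   # ANN pre-screening
    true_scores = np.array([slow_simulator(x) for x in screened])  # few real calls
    pop = np.vstack([screened[np.argsort(true_scores)[:20]],
                     rng.uniform(0, 1, size=(80, 5))])    # elites + fresh diversity
best = min(pop, key=slow_simulator)
print("best design:", best.round(2))
```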
17

Automated creation of multi-body surrogate models to replace crash test simulations

Loreau, Tanguy 18 December 2019 (has links)
At Renault, the teams in charge of crashworthiness use very simple models to pre-size the vehicle in upstream studies. Today, these models are built from the behavior of one or a few reference vehicles. They are functional and allow the project to be sized, but the company now wishes to build its upstream models from its entire vehicle range. In other words, it wants an automatic method for analyzing crash test simulations so as to capitalize their results in a database of simplified models. To meet this goal, we develop CrashScan, a method that extracts from crash test simulations the data required to build a surrogate multi-body model. The analysis process implemented in CrashScan comprises three major steps. The first identifies the weakly deformed zones in a crash test simulation, which allows the topological graph of the future surrogate model to be drawn. The second step analyzes the relative kinematics between the weakly deformed zones: the principal directions and the deformation modes (e.g. crushing or bending) are identified by analyzing the relative motions. The last step analyzes the forces and moments acting between the weakly deformed zones, expressed in the frames associated with the principal deformation directions, as functions of the deformations. This allows equivalent Bouc-Wen hysteretic models to be identified. These models have three parameters useful in our case: a stiffness, a threshold force before plastification, and a hardening slope. These parameters can be used directly by the experts in upstream studies. Finally, we build surrogate multi-body models for three different use cases and compare them to their references on the criteria used upstream: the models generated by CrashScan appear to provide the precision and fidelity required for use in the upstream phases of automotive development. To carry this research through to an industrial solution, some obstacles remain to be overcome, the main ones being the decomposition of an arbitrary motion into six elementary motions and multi-body synthesis on elements other than beams.
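A minimal sketch of the Bouc-Wen hysteresis the method identifies between low-deformation zones; all parameter values are illustrative assumptions, not calibrated to any crash test:

```python
# Bouc-Wen model under imposed displacement: elastic stiffness k, hardening
# ratio a (post-yield slope a*k), and beta/gamma/n shaping the threshold at
# which the response plastifies. Internal variable z follows
#   dz = dx * (1 - (beta*sign(z*dx) + gamma) * |z|**n)
import numpy as np

k, a, beta, gamma, n = 2.0e5, 0.1, 8.0, 2.0, 1.0   # illustrative values
dt = 1e-4
t = np.arange(0, 0.2, dt)
x = 0.05 * np.sin(2 * np.pi * 10 * t)              # imposed crush displacement

z = 0.0
force = np.empty_like(t)
for i in range(len(t)):
    dx = 0.0 if i == 0 else x[i] - x[i - 1]
    z += dx * (1.0 - (beta * np.sign(z * dx) + gamma) * abs(z) ** n)
    force[i] = a * k * x[i] + (1 - a) * k * z       # elastic + hysteretic parts

print(f"peak transmitted force: {force.max():.0f} N")
```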
18

Bayesian Optimization for Design Parameters of Autoinjectors

Heliben Naimeshkum Parikh (15340111) 24 April 2023 (has links)
This thesis describes a computational framework for optimizing spring-driven autoinjectors. It uses Bayesian optimization for efficient and cost-effective autoinjector design.
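The abstract gives no implementation details; as a hedged illustration of the kind of loop such a framework builds on, here is a minimal sketch with an invented two-parameter toy objective standing in for the autoinjector simulation:

```python
# Bayesian optimization: a Gaussian-process surrogate plus an
# expected-improvement acquisition, minimizing an assumed cost.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(5)

def injection_cost(x):                       # stand-in for the expensive simulation
    return (x[0] - 0.6) ** 2 + (x[1] - 0.3) ** 2

X = rng.uniform(0, 1, size=(5, 2)).tolist()  # initial design of experiments
y = [injection_cost(x) for x in X]

for it in range(25):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(0, 1, size=(2000, 2))
    mu, sd = gp.predict(cand, return_std=True)
    imp = min(y) - mu                        # improvement over the best so far
    z = imp / (sd + 1e-12)
    ei = imp * norm.cdf(z) + sd * norm.pdf(z)
    x_next = cand[np.argmax(ei)]             # maximize expected improvement
    X.append(x_next.tolist())
    y.append(injection_cost(x_next))

print("best design:", X[int(np.argmin(y))], "cost:", min(y))
```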
19

Vibration and Buckling Analysis of Unitized Structure Using Meshfree Method and Kriging Model

Yeilaghi Tamijani, Ali 07 June 2011 (has links)
The Element Free Galerkin (EFG) method, which is based on the Moving Least Squares (MLS) approximation, is developed here for the vibration, buckling and static analysis of homogeneous and FGM plates with curvilinear stiffeners. Numerical results for different stiffener configurations and boundary conditions are presented. All results are verified against the commercial finite element software ANSYS® and other results available in the literature. In addition, the vibration analysis of plates with curvilinear stiffeners is carried out using the Ritz method. A 24 by 28 in. curvilinearly stiffened panel was machined from 2219-T851 aluminum for experimental validation of the Ritz and meshfree vibration mode shape predictions. Results were obtained for this panel mounted vertically to a steel clamping bracket using acoustic excitation and a laser vibrometer; the experimental results correlate well with the meshfree and Ritz method results. In reality, many engineering structures are subjected to random pressure loads that cannot be assumed deterministic. Typical examples, including buildings and towers, offshore structures, vehicles and ships, are subjected to random pressure; vibrations induced by gust loads, engine noise, and auxiliary electrical systems can also produce noise inside aircraft, so all flight vehicles operate in a random vibration environment. These random loads can be modeled through their statistical properties. Since the dynamic responses of structures subjected to random excitations are very complicated, the meshfree method is developed here for the random vibration analysis of curvilinearly stiffened plates. Extensive efforts have been devoted to the buckling and vibration analysis of stiffened panels in order to maximize their natural frequencies and critical buckling loads; when such structures are subjected to in-plane loading during vibration analysis, natural frequencies calculated by neglecting the in-plane compression are usually over-predicted. For more accurate results it may be necessary to take the effects of in-plane load into account, since it can change the natural frequency of the plate considerably. To provide a better view of the free vibration behavior of plates with curvilinear stiffeners subjected to axial/biaxial or shear stresses, several numerical examples are studied. The FEM analysis of curvilinearly stiffened plates is computationally expensive, and the meshfree method is a suitable substitution to reduce the CPU time; however, it still requires many simulations. Because of the number of simulations that may be required to solve an engineering optimization problem, many researchers have sought approaches and techniques that reduce the number of function evaluations. In such problems, surrogate models for analysis and optimization can be very efficient: the basic idea of a surrogate model is to reduce computational cost while giving a better understanding of the influence of the design variables on the different objectives and constraints. To exploit the advantages of both the meshfree method and surrogate modelling in reducing CPU time, the meshfree method is used to generate the sample points, and a combination of Kriging (a surrogate model) and genetic algorithms is used for the design of curvilinearly stiffened plates.
The meshfree and Kriging results and CPU times were compared with those obtained using EBF3PanelOpt. / Ph. D.
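A minimal sketch of the Kriging-plus-genetic-algorithm combination, with a toy response in place of the meshfree panel analysis (names and values are assumptions):

```python
# Fit a Kriging (Gaussian process) surrogate on sampled analyses, then let a
# simple GA search the surrogate for the design maximizing the response.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(6)

def meshfree_frequency(x):                   # stand-in: first natural frequency
    return 100 + 40 * np.sin(3 * x[0]) * np.cos(2 * x[1]) - 10 * (x[0] - 0.5) ** 2

X = rng.uniform(0, 1, size=(60, 2))          # stiffener-path design variables
y = np.array([meshfree_frequency(x) for x in X])
kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                                   normalize_y=True).fit(X, y)

pop = rng.uniform(0, 1, size=(50, 2))
for gen in range(100):                       # GA maximizing the Kriging prediction
    fit = kriging.predict(pop)
    parents = pop[np.argsort(fit)[-20:]]
    children = np.clip(parents[rng.integers(20, size=30)]
                       + rng.normal(0, 0.03, size=(30, 2)), 0, 1)
    pop = np.vstack([parents, children])
best = pop[np.argmax(kriging.predict(pop))]
print("predicted optimum:", best, "surrogate freq:", kriging.predict([best])[0])
```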
20

Residual reliability of alkali-aggregate reaction affected concrete structures: application to the Song Loulou hydroelectric dam

Ftatsi Mbetmi, Guy-De-Patience 31 August 2018 (has links)
This thesis proposes a multi-scale methodology based on surrogate models that are functions of random variables, to evaluate the residual reliability of concrete structures suffering from alkali-aggregate reaction (AAR), with a view to better maintenance. The surrogate models, based on polynomial chaos expansions of the parameters of a shape function (a sigmoid in the cases treated), were built at several scales, in particular to reduce the computation time of the underlying physical models. At the microscopic scale, the AAR model employed is the one developed by Multon, Sellier and Cyr in 2009, initially comprising about twenty potential random variables. Following a Morris sensitivity analysis, the surrogate model reproduces the expansion curve over time of the representative elementary volume as a function of nine random variables. Using the constructed surrogate model to predict the mechanical effects of AAR expansion on a concrete specimen required taking the anisotropy of these effects into account by improving the weight functions proposed by Saouma and Perotti in 2006. The specimen scale having been validated by comparing the predictions with the experimental data of Multon's thesis work, an application at the scale of the Song Loulou dam was undertaken.
The thermo-chemo-mechanical behavior of a spillway pier was computed, and the resulting displacements were compared with the monitoring data provided by the company AES-SONEL (now ENEO). Surrogate models were then built at the scale of the structure to obtain the displacements at the points of interest, related to the operating limit states of the spillways, and thus to estimate the residual reliability of the dam. The sensitivity analysis computations and the construction of the surrogate models were implemented in Fortran, Java and OpenTURNS; the computations on specimens and on the dam spillway were performed with Cast3M.
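A minimal sketch of the surrogate construction described here: fit a sigmoid shape function to each swelling curve, then expand its parameters in polynomial chaos by non-intrusive least-squares regression. The micro-model below is a made-up stand-in for the Multon-Sellier-Cyr model, and cross terms of the chaos basis are omitted for brevity:

```python
# Polynomial chaos expansion (Hermite basis, standard-normal inputs) of the
# parameters of a sigmoid fitted to swelling curves.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
t = np.linspace(0, 600, 80)                               # days

def sigmoid(t, amp, rate, lag):                           # shape function
    return amp / (1 + np.exp(-rate * (t - lag)))

def micro_model(xi):                                      # stand-in AAR swelling curve
    amp, rate, lag = 0.2 + 0.05 * xi[0], 0.02 + 0.004 * xi[1], 200 + 30 * xi[2]
    return sigmoid(t, amp, rate, lag)

# Step 1: sample random inputs, run the model, recover sigmoid parameters.
Xi = rng.standard_normal((200, 3))
params = np.array([curve_fit(sigmoid, t, micro_model(xi),
                             p0=[0.2, 0.02, 200])[0] for xi in Xi])

# Step 2: univariate Hermite chaos basis (constant + degrees 1-2 per input).
def herme(col, deg):
    return hermeval(col, np.eye(deg + 1)[deg])            # He_deg evaluated at col

basis = np.column_stack([np.ones(len(Xi))] +
                        [herme(Xi[:, i], d) for i in range(3) for d in (1, 2)])

# Step 3: least-squares PCE coefficients for each sigmoid parameter.
coeffs, *_ = np.linalg.lstsq(basis, params, rcond=None)
print("PCE coefficient matrix shape:", coeffs.shape)      # (7 basis terms, 3 params)
```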
