31

Metamodel based multi-objective optimization

Amouzgar, Kaveh January 2015 (has links)
As a result of the increased accessibility of computational resources and the growth in computing power over the last two decades, designers are able to create computer models that simulate the behavior of complex products. To remain globally competitive, companies are forced to optimize their designs and products. Optimizing a design requires many runs of computationally expensive simulation models. Therefore, using metamodels as efficient and sufficiently accurate approximations of the simulation models is necessary. Radial basis functions (RBF) are one of several metamodeling methods found in the literature. The established approach is to add a bias to the RBF in order to obtain robust performance. This a posteriori bias is treated as unknown at the outset and is determined by imposing extra orthogonality constraints. In this thesis, a new approach is proposed in which the RBF bias is set a priori by using the normal equation. The performance of the suggested approach is compared to the classic RBF with a posteriori bias. A further comprehensive comparison study covering several modeling criteria, such as problem dimension, sampling technique, and sample size, is also conducted. The studies demonstrate that the suggested approach with a priori bias generally performs as well as the RBF with a posteriori bias. With the a priori RBF, it is clear that the global response is modeled by the bias and that the details are captured by the radial basis functions. Multi-objective optimization and the approaches used to solve such problems are briefly described in this thesis. One method that has proved efficient in solving multi-objective optimization problems (MOOP) is the strength Pareto evolutionary algorithm (SPEA2). Multi-objective optimization of the disc brake system of a heavy truck is performed using SPEA2 and RBF with a priori bias. The results show that the weight of the system can be reduced without extensive compromise in the other objectives. Multi-objective optimization of the material model parameters of an adhesive layer is also carried out, with the aim of improving on a previous study. The result of the original study is improved, and clear insight into the nature of the problem is gained.
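A minimal sketch of the a priori bias idea in Python, assuming Gaussian basis functions, a first-order polynomial bias, and a hand-picked kernel width; the function names and data handling are illustrative, not taken from the thesis:

```python
import numpy as np

def fit_rbf_a_priori(X, y, width=1.0):
    """Fit an RBF metamodel whose linear bias is set a priori.

    The bias is computed first from the least-squares normal
    equations; the Gaussian RBF weights then interpolate the
    residual, so the bias carries the global response and the
    basis functions capture the details.
    """
    n = X.shape[0]
    P = np.hstack([np.ones((n, 1)), X])          # [1, x1, ..., xd]
    b = np.linalg.solve(P.T @ P, P.T @ y)        # normal equations
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-r2 / (2.0 * width ** 2))       # Gaussian kernel matrix
    w = np.linalg.solve(Phi, y - P @ b)          # fit the residual
    return b, w

def predict_rbf(X_new, X, b, w, width=1.0):
    P = np.hstack([np.ones((X_new.shape[0], 1)), X_new])
    r2 = ((X_new[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return P @ b + np.exp(-r2 / (2.0 * width ** 2)) @ w
```

In the a posteriori variant, by contrast, the bias coefficients are additional unknowns in a single augmented linear system, constrained to be orthogonal to the RBF weights.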
32

Contributions to Multidisciplinary Design Optimization under uncertainty, application to launch vehicle design

Brevault, Loïc 06 October 2015 (has links)
Launch vehicle design is a multidisciplinary design optimization problem whose objective is to find the launch vehicle architecture that provides optimal performance while ensuring the required reliability. In order to obtain an optimal solution, the early design phases are essential to the design process and are characterized by the presence of uncertainty, due both to the physical phenomena involved and to the lack of knowledge about the models used. This thesis focuses on methodologies for multidisciplinary analysis and optimization under uncertainty applied to launch vehicle design. Three complementary topics are tackled. First, two new problem formulations are developed to ensure adequate handling of the interdisciplinary couplings. Then, two new reliability analysis techniques are proposed to take into account uncertainties of various natures, involving surrogate models and efficient importance sampling methods. Finally, a new constraint-handling approach for the "Covariance Matrix Adaptation - Evolutionary Strategy" (CMA-ES) optimization algorithm is developed to ensure the feasibility of the optimal solution. The proposed methods are compared to existing techniques from the literature on launch vehicle analysis and design test cases. The results show that the proposed approaches improve the efficiency of the optimization process and the reliability of the obtained solution.
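As a rough illustration of the reliability-analysis ingredient, here is a plain importance-sampling estimate of a failure probability in standard normal space; the limit-state function and the shift of the sampling density are invented for the example, and the thesis's methods additionally involve surrogate models and uncertainties of mixed natures:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Hypothetical limit-state function: failure when g(x) <= 0
    return 5.0 - x[..., 0] - x[..., 1]

# Standard-normal inputs; the importance density is shifted towards
# the failure region (the shift is chosen by hand for illustration)
shift = np.array([2.5, 2.5])
n = 100_000
z = rng.standard_normal((n, 2)) + shift

# Likelihood ratio phi(z) / phi(z - shift) for the Gaussian case
log_w = -0.5 * (z ** 2).sum(1) + 0.5 * ((z - shift) ** 2).sum(1)
p_fail = np.mean((g(z) <= 0) * np.exp(log_w))
print(f"estimated failure probability: {p_fail:.2e}")
```

Crude Monte Carlo would need millions of samples to see such a rare failure; shifting the sampling density and reweighting recovers an unbiased estimate from far fewer runs.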
33

Statistical inverse problem in nonlinear high-speed train dynamics

Lebel, David 30 November 2018 (has links)
This thesis deals with the development of a health-state monitoring method for high-speed train suspensions, using in-service measurements of the train's dynamic response from embedded accelerometers. A rolling train is a dynamical system excited by track-geometry irregularities. The suspension elements play a key role in ride safety and comfort. Because the train's dynamic response depends on the mechanical characteristics of the suspensions, information about the state of these elements can be inferred from onboard acceleration measurements. Knowledge of the actual state of the suspensions would allow for more efficient train maintenance. Mathematically, the proposed monitoring solution consists of solving a statistical inverse problem. It is based on a computational model of the train dynamics and accounts for model uncertainty and measurement errors. 
A Bayesian calibration approach is adopted to identify the probability distribution of the mechanical parameters of the suspension elements from joint measurements of the system input (the track-geometry irregularities) and output (the train's dynamic response). Classical Bayesian calibration involves computing the likelihood function from the stochastic model of the system output and the experimental data. Because each run of the computational model is numerically expensive, and because of the functional nature of the system input and output, a novel Bayesian calibration method using a Gaussian-process surrogate model of the likelihood function is proposed. This thesis shows how such a random surrogate model can be used to estimate the probability distribution of the model parameters. The proposed method takes into account the additional type of uncertainty induced by the use of a surrogate model, which is necessary to correctly assess the calibration accuracy. The new Bayesian calibration method was tested on the railway application and achieved conclusive results; validation was performed through numerical experiments. In addition, the long-term evolution of the suspension mechanical parameters was studied using actual measurements of the train's dynamic response.
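A toy sketch of the key idea: a Gaussian-process surrogate standing in for an expensive log-likelihood inside a Metropolis sampler (with an implicit uniform prior). The one-parameter "model" is fabricated, and the surrogate's own predictive uncertainty, which the thesis explicitly propagates into the posterior, is ignored here for brevity:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(1)

def expensive_log_lik(theta):
    # Stand-in for comparing a costly train-dynamics run with data
    return -0.5 * ((theta - 0.7) / 0.1) ** 2

# 1) Fit the surrogate on a small design of expensive evaluations
theta_train = np.linspace(0.0, 1.5, 15)[:, None]
ll_train = np.array([expensive_log_lik(t) for t in theta_train.ravel()])
gp = GaussianProcessRegressor(ConstantKernel() * RBF(0.2),
                              normalize_y=True).fit(theta_train, ll_train)

# 2) Random-walk Metropolis using only the surrogate's mean
theta, cur = 0.5, gp.predict([[0.5]])[0]
chain = []
for _ in range(5000):
    prop = theta + 0.05 * rng.standard_normal()
    new = gp.predict([[prop]])[0]
    if np.log(rng.random()) < new - cur:      # implicit uniform prior
        theta, cur = prop, new
    chain.append(theta)
print(f"posterior mean ~ {np.mean(chain[1000:]):.3f}")  # near 0.7
```

The payoff is that the sampler only ever queries the cheap surrogate, while the expensive simulator is run just the handful of times needed to train it.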
34

Dynamic instabilities of friction systems in the presence of parametric variability - Application to the squeal phenomenon

Cazier, Olivier 18 December 2012 (has links)
When designing a brake, consumer comfort and well-being are among the main criteria. Squeal instabilities, which produce some of the most significant acoustic pollution, represent a current challenge for the scientific community and for industry. In this thesis, we first highlight the variable nature of squeal, observed for two brake systems of the same vehicle, through experimental and numerical designs of experiments. To be representative of a family of structures, it is now clear that the variability observed in the many parameters of the studied system must be taken into account from the design phase onwards. Enriching existing deterministic simulations requires fast non-deterministic tools that respect the conservatism of the studied solutions. To this end, we have contributed to the development of numerical methods for propagating fuzzy data in coalescence diagrams and for determining the equilibrium positions of bodies in frictional contact using a fuzzy-logic-based regulation method. This solution makes it possible to apply a projection technique that reduces the computational cost, with component modal bases reanalyzed by homotopy perturbation.
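The coalescence diagrams mentioned above come from complex-eigenvalue analysis of the frictional system: squeal onset corresponds to two modes merging as the friction coefficient grows, at which point an eigenvalue acquires a positive real part. Below is a minimal two-degree-of-freedom illustration of that mechanism; it is a deliberately generic toy model, not the thesis's brake model or its fuzzy extension:

```python
import numpy as np

M = np.eye(2)                                  # unit modal masses
K0 = np.array([[4.0, 0.0], [0.0, 9.0]])        # uncoupled stiffness

for mu in np.linspace(0.0, 3.0, 6):
    # Friction adds a circulatory (asymmetric) coupling to K
    K = K0 + mu * np.array([[0.0, 1.0], [-1.0, 0.0]])
    # Eigenvalues s of the state matrix of M q'' + K q = 0
    A = np.block([[np.zeros((2, 2)), np.eye(2)],
                  [-np.linalg.solve(M, K), np.zeros((2, 2))]])
    s = np.linalg.eigvals(A)
    flag = "UNSTABLE (squeal)" if (s.real > 1e-6).any() else "stable"
    print(f"mu = {mu:.1f}   max Re(s) = {s.real.max():+.3f}   {flag}")
```

In the thesis, parameters such as the friction coefficient are fuzzy quantities, so the coalescence diagram itself becomes fuzzy; that is what the propagation methods are designed to handle.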
35

Surrogate models coupled with machine learning to approximate complex physical phenomena involving aerodynamic and aerothermal simulations

Dupuis, Romain 04 February 2019 (has links)
Numerical simulations are a key element of the aircraft design process, complementing physical tests and flight tests. They can take advantage of innovative methods such as artificial intelligence, which is spreading widely in aviation. Simulating the full flight mission for various disciplines poses significant problems due to the computational cost combined with varying operating conditions. Moreover, complex physical phenomena can occur. For instance, the aerodynamic field on the wing takes different shapes and can encounter shocks, while aerothermal simulations around the nacelle and pylon are sensitive to the interaction between engine flows and external flows. Surrogate models can be used to substitute expensive high-fidelity simulations with mathematical and statistical approximations, reducing the overall computational cost and providing a data-driven approach. In this thesis, we propose two developments: (i) machine-learning-based surrogate models capable of approximating aerodynamic computations, and (ii) the integration of more classical surrogate models into an industrial aerothermal process. The first approach mitigates aerodynamic issues by separating solutions with very different shapes into several subsets using machine learning algorithms. Moreover, a resampling technique takes advantage of the subdomain decomposition by adding extra information in relevant regions. The second development focuses on pylon sizing by building surrogate models that substitute aerothermal simulations. The two approaches are applied to aircraft configurations in order to bridge the gap between academic methods and real-world applications. Significant improvements in terms of accuracy and computational cost are demonstrated.
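A schematic sketch of the first development's pipeline: split the training set into subsets with machine learning, fit one surrogate per subset, and route new points with a classifier. Everything here (the toy data, KMeans on scalar outputs, the kNN router) stands in for the thesis's treatment of full aerodynamic solutions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)

# Toy data with two regimes (think: shock / no-shock flow solutions)
X = rng.uniform(0, 1, (200, 2))
y = np.where(X[:, 0] > 0.5, np.sin(8 * X[:, 1]), X[:, 1] ** 2)

# 1) Separate solutions with very different shapes into subsets
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    np.column_stack([X, y]))

# 2) One local surrogate per subset
models = {k: GaussianProcessRegressor().fit(X[labels == k], y[labels == k])
          for k in np.unique(labels)}

# 3) A classifier routes new points to the right local model
router = KNeighborsClassifier(5).fit(X, labels)
X_new = rng.uniform(0, 1, (5, 2))
y_hat = np.array([models[k].predict(x[None])[0]
                  for k, x in zip(router.predict(X_new), X_new)])
print(y_hat.round(3))
```

The resampling step would then add new training points preferentially inside whichever subset shows the largest surrogate error.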
36

Joint Calibration of a Cladding Oxidation and a Hydrogen Pick-up Model for Westinghouse Electric Sweden AB

Nyman, Joakim January 2020 (has links)
Knowledge of a nuclear power plant's potential and limitations is of utmost importance when working in the nuclear field. One way to extend this knowledge is to use fuel performance codes that mimic real-world phenomena to the best of their ability. Fuel performance codes comprise a system of interlinked and complex models that predict the thermo-mechanical behaviour of the fuel rods. These models use several model parameters that can be imprecise, so the parameters need to be fitted/calibrated against measurement data. This thesis presents two methods to calibrate model parameters in the presence of unknown sources of uncertainty. The methods are tested on the oxidation and hydrogen pick-up of the zirconium cladding around the fuel rods. Initially, training and testing data were sampled using the Dakota software in combination with the nuclear simulation program TRANSURANUS, so that a Gaussian process surrogate model could be built. The model parameters were then calibrated in a Bayesian way with an MCMC algorithm. Additionally, two models are presented to handle unknown sources of uncertainty that may arise from model inadequacies, nuisance parameters or hidden measurement errors: the marginal likelihood optimization method and the margin method. Data from two sources were used to calibrate the model parameters: one source containing only oxide-thickness data, but in large quantity, and another containing both oxide and hydrogen-concentration data, but with fewer measurements. The model parameters were calibrated using the presented methods, but an unforeseen non-linearity in the joint oxidation and hydrogen pick-up case, when predicting the correlation of the model parameters, made this result unreliable.
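A common thread of both methods is to let an extra, initially unknown variance term absorb the discrepancy that the measurement error alone cannot explain. A crude stand-alone illustration of this variance-inflation idea follows; the residuals and the fixed measurement error are made up, and the real methods operate inside the full surrogate-based Bayesian calibration rather than on fixed predictions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical oxide-thickness residuals: model minus measurement
r = np.array([0.12, -0.31, 0.25, -0.18, 0.40])
sigma_meas = 0.10                      # known measurement std

def neg_log_marginal_lik(log_s):
    # Total variance = measurement variance + unknown extra margin
    s2 = sigma_meas ** 2 + np.exp(2 * log_s)
    return 0.5 * np.sum(r ** 2 / s2 + np.log(2 * np.pi * s2))

res = minimize_scalar(neg_log_marginal_lik, bounds=(-10, 2),
                      method="bounded")
print(f"fitted extra std: {np.exp(res.x):.3f}")
```

If the residual scatter exceeds what sigma_meas predicts, the optimizer assigns the surplus to the extra term instead of letting it distort the parameter posterior.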
37

Uncertainty Quantification Using Simulation-based and Simulation-free methods with Active Learning Approaches

Zhang, Chi January 2022 (has links)
No description available.
38

Scalable Estimation and Testing for Complex, High-Dimensional Data

Lu, Ruijin 22 August 2019 (has links)
With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, etc. These data provide a rich source of information on disease development, cell evolution, engineering systems, and many other scientific phenomena. To achieve a clearer understanding of the underlying mechanisms, one needs a fast and reliable analytical approach for extracting useful information from this wealth of data. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex data, powerful testing of functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between the data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a wavelet-based approximate Bayesian computation approach that is likelihood-free and computationally scalable. This approach is applied to two problems: estimating the mutation rates of a generalized birth-death process from fluctuation experiment data, and estimating target parameters from foliage echoes. The second part focuses on functional testing. We consider multiple testing in basis space via p-value-guided compression. Our theoretical results demonstrate that, under regularity conditions, the Westfall-Young randomization test in basis space achieves strong control of the family-wise error rate and asymptotic optimality. Furthermore, appropriate compression in basis space leads to improved power compared to point-wise testing in the data domain or basis-space testing without compression. The effectiveness of the proposed procedure is demonstrated through two applications: detecting regions of spectral curves associated with pre-cancer using one-dimensional fluorescence spectroscopy data, and detecting disease-related regions using three-dimensional Alzheimer's disease neuroimaging data. The third part focuses on analyzing data measured on the cortical surfaces of monkeys' brains during early development, where subjects are measured at misaligned time markers. In this analysis, we examine the asymmetric patterns and the increasing/decreasing trends in the monkeys' brains across time. / Doctor of Philosophy / With modern high-throughput technologies, scientists can now collect high-dimensional data of various forms, including brain images, medical spectrum curves, engineering signals, and biological measurements. These data provide a rich source of information on disease development, engineering systems, and many other scientific phenomena. The goal of this dissertation is to develop novel methods that enable scalable estimation, testing, and analysis of complex, high-dimensional data. It contains three parts: parameter estimation based on complex biological and engineering data, powerful testing of high-dimensional functional data, and the analysis of functional data supported on manifolds. The first part focuses on a family of parameter estimation problems in which the relationship between the data and the underlying parameters cannot be explicitly specified using a likelihood function. We introduce a computation-based statistical approach that achieves efficient parameter estimation scalable to high-dimensional functional data. 
The second part focuses on developing a powerful testing method for functional data that can detect important regions, and we establish desirable theoretical properties of the approach. Its effectiveness is demonstrated through two applications: detecting pre-cancer-related regions of the spectrum using fluorescence spectroscopy data, and detecting disease-related regions using brain image data. The third part focuses on analyzing brain cortical thickness data, measured on the cortical surfaces of monkeys' brains during early development, with subjects measured at misaligned time markers. Using functional data estimation and testing approaches, we (1) identify regions that are asymmetric between the right and left brains across time, and (2) identify spatial regions on the cortical surface that show increasing or decreasing cortical measurements over time.
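For the likelihood-free estimation part, here is a toy ABC rejection sketch with wavelet-compressed summary statistics, using the PyWavelets package; the simulator, prior, tolerance, and compression level are all invented for illustration and are far simpler than the dissertation's birth-death and foliage-echo applications:

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 128)

def simulate(theta):
    # Hypothetical simulator: noisy damped oscillation
    return (np.exp(-theta * t) * np.sin(12 * t)
            + 0.05 * rng.standard_normal(t.size))

def compress(x):
    # Wavelet decomposition flattened to one coefficient vector
    return np.concatenate(pywt.wavedec(x, "db4", level=3))

y_obs = simulate(2.0)
c_obs = compress(y_obs)
idx = np.argsort(np.abs(c_obs))[-16:]     # keep the largest coefficients

# ABC rejection: keep prior draws whose compressed summaries match
accepted = []
for _ in range(20_000):
    th = rng.uniform(0, 5)
    if np.linalg.norm(compress(simulate(th))[idx] - c_obs[idx]) < 0.5:
        accepted.append(th)
print(f"posterior mean ~ {np.mean(accepted):.2f} "
      f"from {len(accepted)} accepted draws")
```

Comparing a handful of large wavelet coefficients instead of the full curves is what keeps the rejection step cheap and scalable as the data dimension grows.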
39

Calibrating Constitutive Models Using Data-Driven Method : Material Parameter Identification for an Automotive Sheet Metal

Haller, Anton, Fridström, Nicke January 2024 (has links)
The automotive industry relies on accurate finite element simulations for developing new parts and machines, and accurate material models are essential to achieve this. Material cards contain the input for the material model and are important, but time-consuming to calibrate with traditional methods. Therefore, a newer method involving machine learning (ML) and feed-forward neural networks (FFNN) is studied in this thesis. Direct calibration with an FFNN has never before been applied to the Swift hardening law and the Barlat yield 2000 criterion, which is done in this thesis. All calibration steps are performed to obtain a high-fidelity database capable of training the FFNN. The thesis comprises four phases: experiments, simulations, building the high-fidelity database, and building and optimizing the FFNN. The experimental phase involves tensile testing of three different specimen types in three material directions, with digital image correlation (DIC) to capture local strains. The simulation phase replicates all the experiments as finite element simulations in LS-DYNA. 
The finite element models are simulated 100 and 1000 times, respectively, with different material parameters within a specific range whose lower and upper bounds cover the experimental results. The database phase involves extracting the data from the large number of simulations and then extracting the key characteristics of the force-displacement curves. The last phase is building the FFNN and optimizing the network to find the best parameters. The network is first optimized based on root mean square error (RMSE), and then points from the Swift hardening curve and the Barlat yield 2000 criterion are compared with experimental points. The results show that the FFNN with the high-fidelity database can predict material parameters with an accuracy of over 99 % for the hardening law at the points chosen for optimization, while the anisotropy parameters are optimized to 97 % accuracy for the yield points and Lankford coefficients. The thesis concludes that the FFNN can accurately predict the material parameters from real experimental data. The method is significantly faster than traditional approaches, because only one type of test is needed and the parameters are estimated in seconds once the data have been extracted.
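A condensed sketch of the database-plus-FFNN pipeline using scikit-learn, with the LS-DYNA tensile simulations replaced by the Swift law itself, sigma = K(eps0 + eps_p)^n, evaluated directly. It only illustrates the database -> network -> inverse-identification flow; the parameter bounds, network size, and normalization are illustrative choices, and the anisotropy (Barlat) part is omitted:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
eps = np.linspace(0.0, 0.2, 25)

def swift_curve(K, eps0, n):
    # Swift hardening law: sigma = K * (eps0 + eps_p) ** n
    return K * (eps0 + eps) ** n

# 1) Database: sample parameters in bounds covering the experiments
#    and store curve -> parameters (the thesis runs LS-DYNA here)
lo, hi = np.array([400, 1e-3, 0.10]), np.array([800, 1e-2, 0.30])
params = lo + (hi - lo) * rng.uniform(size=(1000, 3))   # K, eps0, n
curves = np.array([swift_curve(*p) for p in params])

# 2) Train the inverse map: normalized curve -> normalized parameters
mu, sd = curves.mean(0), curves.std(0)
net = MLPRegressor((64, 64), max_iter=5000, random_state=0)
net.fit((curves - mu) / sd, (params - lo) / (hi - lo))

# 3) Identify parameters from a "measured" curve
truth = np.array([600.0, 5e-3, 0.22])
pred = net.predict([(swift_curve(*truth) - mu) / sd])[0] * (hi - lo) + lo
print("identified:", pred.round(4), "true:", truth)
```

Once trained, the inverse map amortizes the calibration cost: each new experimental curve is identified with a single forward pass instead of a fresh optimization loop over simulations.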
40

Input Calibration, Code Validation and Surrogate Model Development for Analysis of Two-phase Circulation Instability and Core Relocation Phenomena

Phung, Viet-Anh January 2017 (has links)
Code validation and uncertainty quantification are important tasks in nuclear reactor safety analysis. Code users have to deal with a large number of uncertain parameters and complex multi-physics, multi-dimensional and multi-scale phenomena. In order to make the results of analysis more robust, it is important to develop and employ procedures for guiding user choices in the quantification of the uncertainties.   This work aims to further develop approaches and procedures for system analysis code validation and their application to practical problems of safety analysis. The work is divided into two parts.   The first part presents the validation of two reactor system thermal-hydraulic (STH) codes, RELAP5 and TRACE, for the prediction of two-phase circulation flow instability.   The goals of the first part are to: (a) develop and apply efficient methods for input calibration and STH code validation against unsteady flow experiments with two-phase circulation flow instability, and (b) examine the codes' capability to predict instantaneous thermal-hydraulic parameters and flow regimes during the transients.   Two approaches have been developed: a non-automated procedure based on separate treatment of uncertain input parameters (UIPs), and an automated method using a genetic algorithm. Multiple measured parameters and system response quantities (SRQs) are employed both in the calibration of uncertain parameters in the code input deck and in the validation of the RELAP5 and TRACE codes. The effect of improved RELAP5 flow regime identification on code prediction of thermal-hydraulic parameters has been studied.   The results of the code validation demonstrate that RELAP5 and TRACE can reproduce the qualitative behaviour of two-phase flow instability. However, both codes misidentified instantaneous flow regimes, and it was not possible to simultaneously predict the experimental values of the oscillation period and the maximum inlet flow rate. This outcome suggests the importance of simultaneously considering multiple SRQs and different test regimes for quantitative code validation.   The second part of this work addresses core degradation and relocation to the lower head of a boiling water reactor (BWR). The properties of the debris in the lower head provide the initial conditions for vessel failure, melt release and ex-vessel accident progression.   The goals of the second part are to: (a) obtain a representative database of MELCOR solutions for the characteristics of debris in the reactor lower plenum for different accident scenarios, and (b) develop a computationally efficient surrogate model (SM) that can be used in extensive uncertainty analysis for prediction of the debris bed characteristics.   The MELCOR code, coupled with a genetic algorithm and random and grid sampling methods, was used to generate a database of full-model solutions and to investigate in-vessel corium debris relocation in a Nordic BWR. Artificial neural networks (ANNs) with classification (grouping) of scenarios have been used for the development of the SM, in order to address the chaotic response of the full model, especially in the transition region.   The core relocation analysis shows that there are two main groups of scenarios: those with relatively small (<20 tons) and those with large (>100 tons) amounts of total relocated debris in the reactor lower plenum. The domains are separated by transition regions, in which a small variation of the input can result in large changes in the final mass of debris.  
SMs using multiple ANNs, with or without weighting between the different groups, effectively filter out the noise and provide a better prediction of the output cumulative distribution function, but they increase the mean squared error compared to a single ANN.
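A compact sketch of the classify-then-regress surrogate structure on fabricated data mimicking the two scenario groups; the real surrogate is trained on MELCOR solutions, and the weighting between groups studied in the thesis is omitted here:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(5)

# Toy stand-in for the MELCOR database: scenario inputs -> total
# relocated debris mass, with two regimes and a sharp transition
X = rng.uniform(0, 1, (2000, 3))
mass = np.where(X[:, 0] + 0.3 * X[:, 1] > 0.8, 120.0, 15.0)
mass += 5 * rng.standard_normal(len(X))         # "chaotic" scatter

# 1) Classify scenarios into small / large relocation groups
group = (mass > 60).astype(int)
clf = MLPClassifier((32,), max_iter=2000, random_state=0).fit(X, group)

# 2) One ANN regressor per group, trained only on its own scenarios
regs = {g: MLPRegressor((32,), max_iter=2000, random_state=0).fit(
            X[group == g], mass[group == g]) for g in (0, 1)}

# 3) Surrogate prediction: route by the classifier, then regress
X_new = rng.uniform(0, 1, (4, 3))
pred = np.array([regs[g].predict(x[None])[0]
                 for g, x in zip(clf.predict(X_new), X_new)])
print(pred.round(1))
```

Splitting the regression by group keeps each ANN from averaging across the transition region, which is where a single network tends to produce the noisiest predictions.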
