261

Construction automatique de modèles multi-corps de substitution aux simulations de crashtests / Automatized multi-body surrogate models creation to replace crashtests simulations

Loreau, Tanguy 18 December 2019
At Renault, the teams in charge of crashworthiness use very simple models to pre-size the vehicle during upstream studies. Today these models are built from the behavior of only one or a few reference vehicles. They work and allow the project to be sized, but the company now wishes to build its upstream models from its entire range of vehicles. In other words, it wants an automatic method for analyzing crashtest simulations so that their results can be capitalized in a database of simplified models. To meet this goal, we rely on multi-body modeling and develop CrashScan, a method that analyzes crashtest simulations and extracts the data required to build a surrogate multi-body model. The analysis process implemented in CrashScan consists of three major steps. The first identifies the weakly deformed zones in a crashtest simulation, from which the topological graph of the future surrogate model is built. The second step analyzes the relative kinematics between the weakly deformed zones: the principal directions and the deformation modes (e.g. crushing or bending) are identified from the relative motions. The last step analyzes the forces and moments acting between the weakly deformed zones, expressed in the frames attached to the principal deformation directions, as functions of the deformations. This allows equivalent Bouc-Wen hysteretic models to be identified. These models have three parameters of interest here: a stiffness, a yield force (threshold before plastification) and a hardening slope. These parameters can be used directly by the upstream-study experts. Finally, we build surrogate multi-body models for three different case studies and compare them with their references on the criteria used upstream: the models generated by CrashScan appear to provide the precision and fidelity required for use in the upstream phases of automotive development. To turn this research into an industrial solution, several obstacles remain; the main ones are the decomposition of an arbitrary motion into six elementary motions and multi-body synthesis on elements other than beams.
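To make the Bouc-Wen element mentioned above more concrete, here is a minimal one-dimensional sketch in Python with the three parameters the abstract names (a stiffness, a yield force before plastification, and a hardening slope). It is an illustrative assumption of how such a reduced hysteretic element could be evaluated, not the CrashScan implementation itself; the function name, numeric values and explicit Euler discretization are all hypothetical.

```python
import numpy as np

def bouc_wen_force(x, k=2.0e5, f_y=1.0e4, k_h=1.0e4,
                   beta=0.5, gamma=0.5, n=1.0):
    """Hysteretic force of a 1-D Bouc-Wen element along a displacement history x.

    Hypothetical parameter values for illustration:
      k    -- initial (elastic) stiffness          [N/m]
      f_y  -- yield force before plastification    [N]
      k_h  -- post-yield hardening slope           [N/m]
    beta, gamma, n shape the smoothness of the hysteresis loop.
    """
    alpha = k_h / k          # ratio of post-yield to elastic stiffness
    x_y = f_y / k            # yield displacement
    z = 0.0                  # internal hysteretic variable
    forces = np.zeros_like(x)
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        # explicit Euler update of the Bouc-Wen evolution equation
        dz = (dx / x_y) * (1.0 - abs(z) ** n *
                           (gamma + beta * np.sign(dx * z)))
        z += dz
        forces[i] = alpha * k * x[i] + (1.0 - alpha) * f_y * z
    return forces

# Example: a loading-unloading cycle tracing out the hysteresis loop.
t = np.linspace(0.0, 1.0, 2000)
displacement = 0.2 * np.sin(2 * np.pi * t)          # metres
force = bouc_wen_force(displacement)
print(f"peak force: {force.max():.0f} N")
```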
262

Surrogate Modeling for Uncertainty Quantification in Systems Characterized by Expensive and High-Dimensional Numerical Simulators

Rohit Tripathy (8734437) 24 April 2020
Physical phenomena in nature are typically represented by complex systems of ordinary differential equations (ODEs) or partial differential equations (PDEs), modeling a wide range of spatio-temporal scales and multi-physics. The field of computational science has achieved indisputable success in advancing our understanding of the natural world, made possible through a combination of increasingly sophisticated mathematical models, numerical techniques and hardware resources. Furthermore, there has been a recent revolution in the data-driven sciences, spurred on by advances in the deep learning and stochastic optimization communities and the democratization of machine learning (ML) software.

With the ubiquitous use of computational models for analysis and prediction of physical systems, there has arisen a need to rigorously characterize the effects of unknown variables in a system. Unfortunately, uncertainty quantification (UQ) tasks such as model calibration, uncertainty propagation, and optimization under uncertainty typically require several thousand evaluations of the underlying physical models. In order to deal with the high cost of the forward model, one typically resorts to the surrogate idea: replacing the true response surface with an approximation that is both accurate and computationally cheap. However, state-of-the-art numerical simulators are often characterized by a very large number of stochastic parameters, of the order of hundreds or thousands. The high cost of individual evaluations of the forward model, coupled with the limited computational budget one is constrained to work with, means that one is faced with the task of constructing a surrogate model for a system with high input dimensionality from small dataset sizes. In other words, one faces the curse of dimensionality.

In this dissertation, we propose multiple ways of overcoming the curse of dimensionality when constructing surrogate models for high-dimensional numerical simulators. The core idea binding all of our proposed approaches is simple: we try to discover special structure in the stochastic parameters which captures most of the variance of the output quantity of interest. Our strategies first identify such a low-rank structure, project the high-dimensional input onto it, and then link the projection to the output. If the dimensionality of the low-dimensional structure is small enough, learning the map from this reduced input space to the output is a much easier task than the original surrogate modeling task.
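The strategy sketched in the last paragraph (discover a low-dimensional structure in the stochastic inputs, project onto it, then link the projection to the output) can be illustrated with a short Python example. Partial least squares stands in for the structure-discovery step and a Gaussian process for the link to the output; the dissertation's own methods differ, so the toy simulator, dimensions and model choices below are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy "expensive simulator": 50 stochastic inputs, but the response really
# depends on a single hidden direction w -- an idealized low-rank structure.
d, n_train, n_test = 50, 300, 50
w = rng.normal(size=d)
w /= np.linalg.norm(w)

def simulator(X):
    return np.tanh(2.0 * X @ w) + 0.05 * rng.normal(size=len(X))

X_train = rng.normal(size=(n_train, d))
y_train = simulator(X_train)

# Steps 1-2: identify a low-dimensional structure and project onto it.
# (Partial least squares is only a stand-in for the structure-discovery
# methods developed in the dissertation.)
pls = PLSRegression(n_components=2).fit(X_train, y_train)
Z_train = pls.transform(X_train)

# Step 3: link the low-dimensional projection to the output with a GP surrogate.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(Z_train, y_train)

X_test = rng.normal(size=(n_test, d))
y_pred = gp.predict(pls.transform(X_test))
rmse = np.sqrt(np.mean((y_pred - simulator(X_test)) ** 2))
print(f"surrogate RMSE on held-out inputs: {rmse:.3f}")
```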
263

Analýza surogát pro určení významnosti interakce mezi kardiovaskulárními signály / Surrogate data analysis for assessing the significance of interaction between cardiovascular signals

Javorčeková, Lenka January 2019
The aim of this diploma thesis was to become familiar with methods for generating surrogate data and with their application to cardiovascular signals. The first part describes the basic theory of baroreflex function and the methods used to generate surrogate data. Surrogate data were then generated with three different methods from recordings acquired from a database. In the next part, the significance of the coherence between blood pressure and heart intervals was assessed using these surrogates. Finally, two hypotheses were formulated and tested to determine whether an orthostatic change of the measurement position affects the causal coherence and the baroreflex function.
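As a rough illustration of the surrogate-data approach described above, the sketch below builds phase-randomized surrogates of one signal and uses them to derive a significance threshold for the coherence between a blood pressure series and a heart-interval series. The signals are synthetic, and phase randomization is only one possible surrogate method (the thesis compares three); every numeric choice here is an assumption for illustration.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 4.0, 1024                        # assumed 4 Hz resampled beat-to-beat series

# Synthetic stand-ins for systolic blood pressure and RR-interval series,
# weakly coupled around 0.1 Hz (the typical baroreflex band).
t = np.arange(n) / fs
common = np.sin(2 * np.pi * 0.1 * t)
sbp = common + rng.normal(scale=1.0, size=n)
rri = 0.5 * common + rng.normal(scale=1.0, size=n)

def phase_randomized(x, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=len(spec))
    phases[0] = 0.0                       # keep the mean component untouched
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=len(x))

f, c_obs = coherence(sbp, rri, fs=fs, nperseg=256)

# Coherence between the real RRI and many surrogate SBP series approximates the
# distribution expected under "no coupling"; its 95th percentile is the threshold.
c_surr = np.array([coherence(phase_randomized(sbp, rng), rri, fs=fs, nperseg=256)[1]
                   for _ in range(200)])
threshold = np.percentile(c_surr, 95, axis=0)

significant = f[c_obs > threshold]
print("frequencies with significant coherence [Hz]:", np.round(significant, 3))
```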
264

Black-box optimization of simulated light extraction efficiency from quantum dots in pyramidal gallium nitride structures

Olofsson, Karl-Johan January 2019
Microsized hexagonal gallium nitride pyramids show promise for next-generation light-emitting diodes (LEDs) due to certain quantum properties within the pyramids. One metric for evaluating the efficiency of an LED device is its light extraction efficiency (LEE). The LEE of different pyramid designs can be calculated with simulations using the finite-difference time-domain (FDTD) method. Maximizing the LEE is treated as a black-box optimization problem, using an interpolation method that utilizes radial basis functions. A simple heuristic is implemented and tested for various pyramid parameters. The LEE is shown to depend strongly on the pyramid size, the source position and the polarization. Under certain circumstances, an LEE of over 17% is found above the pyramid. In some situations, however, the results are very sensitive to the simulation parameters and do not converge properly, so establishing convergence for all simulation evaluations requires further care. The results imply that a high LEE is achievable for the pyramids, which motivates further research.
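To make the phrase "an interpolation method that utilizes radial basis functions" concrete, here is a minimal sketch of RBF-surrogate-assisted black-box optimization: fit an RBF interpolant to the designs evaluated so far, then choose the next design by maximizing the cheap interpolant. The analytic objective, parameter ranges and random candidate search are placeholders, not the pyramid/FDTD setup or the heuristic used in the thesis.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

def expensive_objective(x):
    """Placeholder for an FDTD-computed LEE; cheap analytic stand-in here."""
    return -np.sum((x - 0.3) ** 2, axis=-1) + 0.17    # maximum 0.17 at x = 0.3

dim, bounds = 3, (0.0, 1.0)                 # e.g. scaled pyramid size, source position, ...
X = rng.uniform(*bounds, size=(8, dim))     # initial random design
y = expensive_objective(X)

for it in range(20):
    surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")
    # Cheap inner search: score many random candidates with the surrogate
    # and query the true (expensive) objective only at the most promising one.
    candidates = rng.uniform(*bounds, size=(5000, dim))
    x_next = candidates[np.argmax(surrogate(candidates))]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_objective(x_next))

best = X[np.argmax(y)]
print(f"best simulated LEE {y.max():.3f} at parameters {np.round(best, 3)}")
```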
265

Metody evoluční optimalizace založené na modelech / Model-based evolutionary optimization methods

Bajer, Lukáš January 2018
Model-based black-box optimization is a topic that has been intensively studied both in academia and industry. Real-world optimization tasks in particular are often characterized by expensive or time-demanding objective functions, for which statistical models can save resources or speed up the optimization. Each of the three parts of the thesis concerns one such model: first, copulas are used instead of a graphical model in estimation of distribution algorithms; second, RBF networks serve as surrogate models in mixed-variable genetic algorithms; and third, Gaussian processes are employed in Bayesian optimization algorithms as a sampling model and in the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) as a surrogate model. The last combination, described in the core part of the thesis, resulted in the Doubly Trained Surrogate CMA-ES (DTS-CMA-ES). This algorithm uses the uncertainty prediction of a Gaussian process to select only a part of the CMA-ES population for evaluation with the expensive objective function, while the mean prediction is used for the rest. The DTS-CMA-ES improves upon state-of-the-art surrogate continuous optimizers in several benchmark tests.
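The core DTS-CMA-ES idea described above, using the Gaussian process's predictive uncertainty to decide which offspring receive the expensive evaluation while the GP mean is trusted for the rest, can be sketched as a single generation step. The selection criterion (plain predictive standard deviation), the evaluation ratio and the model settings below are simplifying assumptions; the actual algorithm uses more refined criteria and retrains the surrogate twice per generation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def evaluate_population(pop, expensive_f, archive_X, archive_y, eval_ratio=0.25):
    """Return fitness values for one CMA-ES population, evaluating only the
    most uncertain fraction with the expensive objective (DTS-like sketch)."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(archive_X, archive_y)
    mean, std = gp.predict(pop, return_std=True)

    n_true = max(1, int(eval_ratio * len(pop)))
    uncertain = np.argsort(std)[-n_true:]        # offspring the model knows least about
    fitness = mean.copy()                        # GP mean prediction for the rest
    fitness[uncertain] = [expensive_f(x) for x in pop[uncertain]]

    # Newly evaluated points are added to the archive for the next generation.
    archive_X = np.vstack([archive_X, pop[uncertain]])
    archive_y = np.append(archive_y, fitness[uncertain])
    return fitness, archive_X, archive_y

# Tiny usage example with a quadratic sphere function as the "expensive" objective.
rng = np.random.default_rng(3)
sphere = lambda x: float(np.sum(x ** 2))
X0 = rng.normal(size=(10, 5))
y0 = np.array([sphere(x) for x in X0])
population = rng.normal(size=(12, 5))
fit, X0, y0 = evaluate_population(population, sphere, X0, y0)
print("fitness (mixed GP/true):", np.round(fit, 2))
```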
266

Take-over performance in evasive manoeuvres

Happee, Riender, Gold, Christian, Radlmayr, Jonas, Hergeth, Sebastian, Bengler, Klaus 30 September 2020
We investigated the after-effects of automation in take-over scenarios in a high-end moving-base driving simulator. Drivers performed evasive manoeuvres encountering a blocked lane in highway driving. We compared the performance of drivers 1) during manual driving, 2) after automated driving with eyes on the road while performing the cognitively demanding n-back task, and 3) after automated driving with eyes off the road performing the visually demanding SuRT task. Both minimum time to collision (TTC) and minimum clearance towards the obstacle disclosed a substantial number of near-miss events and are regarded as valuable surrogate safety metrics in evasive manoeuvres. TTC proved highly sensitive to the applied definition of colliding paths, and we prefer robust solutions using lane position while disregarding heading. The extended time to collision (ETTC), which takes acceleration into account, was close to the more robust conventional TTC. In line with other publications, the initial steering or braking intervention was delayed after using automation compared to manual driving. This resulted in lower TTC values and stronger steering and braking actions. After automation, the effects of cognitive distraction on intervention time were similar to those of visual distraction, while the effects on the surrogate safety metric TTC were larger with visual distraction. However, the precision of the evasive manoeuvres was hardly affected, with similar clearance towards the obstacle, similar overshoots and similar excursions onto the hard shoulder. Further research is needed to validate and complement the current simulator-based results with human behaviour in real-world driving conditions. Experiments with real vehicles can disclose possible systematic differences in behaviour, and naturalistic data can serve to validate surrogate safety measures like TTC and obstacle clearance in evasive manoeuvres.
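For reference, the two surrogate safety metrics discussed above can be written out explicitly. A minimal sketch, assuming a one-dimensional closing scenario with clearance d, closing speed Δv > 0 and constant relative acceleration Δa: the conventional TTC ignores acceleration, while the extended TTC (ETTC) accounts for it.

```latex
% Conventional and extended time to collision (one-dimensional sketch;
% d = clearance, \Delta v = closing speed, \Delta a = relative acceleration).
\[
  \mathrm{TTC} = \frac{d}{\Delta v}, \qquad
  \mathrm{ETTC} = \frac{-\Delta v + \sqrt{\Delta v^{2} + 2\,\Delta a\, d}}{\Delta a}
  \quad (\Delta a \neq 0),
\]
% ETTC reduces to the conventional TTC as \Delta a \to 0.
```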
267

Surogátní mateřství - srovnání právní úpravy České republiky a Spolkové republiky Německo / Surrogate Motherhood - Comparison of the Legislation in the Czech Republic and the Federal Republic of Germany

Kratochvílová, Johana January 2019
The main subject of this master's thesis is the issue of surrogacy in the Czech Republic and the Federal Republic of Germany. It first defines the term surrogacy and related terminology, then specifies its categories and describes its major historical milestones. It then deals with the rather brief Czech legislation concerning this institute and the consequences it has inevitably led to, and concentrates on the methods of assisted reproduction that make surrogate motherhood possible in practice. The thesis also covers the ethical problems of surrogacy and some of its psychological and sociological aspects, and summarizes the sanctions that may arise as a result of surrogacy. After that, it explains the German legislation on this issue, its history and its legal limits, and deals with the German sanctions to which both the realization of surrogacy and its mere arrangement may be subject. It examines the standpoint of the German legislator regarding the right of a child to know his or her origin and its consequences, such as non-anonymous sperm donation and the obligation of the legal parents to undergo a DNA test. It describes the most recent demands of society regarding the new...
268

Gaussian process regression of two nested computer codes / Métamodélisation par processus gaussien de deux codes couplés

Marque-Pucheu, Sophie 10 October 2018
This thesis deals with the Gaussian process surrogate modeling (or emulation) of two coupled computer codes. Here, "two coupled codes" means a system of two chained codes: the output of the first code is one of the inputs of the second code. Both codes are computationally expensive. In order to perform a sensitivity analysis of the output of the nested code, we seek to build a surrogate model of this output from a small number of observations. Three types of observations of the system exist: those of the chained code, those of the first code only, and those of the second code only. The surrogate model has to be accurate in the most likely regions of the input domain of the nested code. In this work, the surrogate models are constructed in the Universal Kriging framework, with a Bayesian approach. First, the case where there is no information about the intermediary variable (the output of the first code) is addressed. An innovative parametrization of the mean function of the Gaussian process modeling the nested code is proposed; it is based on the coupling of two polynomials. Then, the case with intermediary observations is addressed. A stochastic predictor based on the coupling of the predictors associated with the two codes is proposed, together with methods for quickly computing the mean and the variance of this predictor. Finally, the methods obtained for codes with scalar outputs are extended to codes with high-dimensional vectorial outputs. We propose an efficient dimension reduction method for the high-dimensional vectorial input of the second code in order to facilitate the Gaussian process regression of this code. All the proposed methods are applied to numerical examples.
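A minimal sketch of the "predictor based on the coupling of the predictors associated with the two codes": fit independent Gaussian processes to each code, then propagate uncertainty through the chain by sampling the first GP's predictive distribution and feeding the samples to the second GP. This Monte Carlo composition is only a naive baseline built on assumed toy codes and data; the thesis derives faster analytical approximations of the resulting mean and variance.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)
code1 = lambda x: np.sin(3 * x)            # cheap stand-ins for two expensive codes
code2 = lambda x, z: z ** 2 + 0.1 * x      # second code takes x and the output of code 1

# Small, separate training sets for each code (intermediary observations available).
x1 = rng.uniform(0, 1, size=(15, 1)); z1 = code1(x1).ravel()
x2 = rng.uniform(0, 1, size=(15, 2)); y2 = code2(x2[:, 0], x2[:, 1])

kernel = ConstantKernel() * RBF()
gp1 = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x1, z1)
gp2 = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x2, y2)

def nested_predict(x_new, n_samples=500):
    """Monte Carlo mean/variance of the chained prediction gp2(x, gp1(x))."""
    m1, s1 = gp1.predict(np.atleast_2d(x_new), return_std=True)
    z_samples = rng.normal(m1[0], s1[0], size=n_samples)       # sample the intermediary
    inputs = np.column_stack([np.full(n_samples, x_new), z_samples])
    m2, s2 = gp2.predict(inputs, return_std=True)
    # Law of total variance: spread of the means plus average second-code uncertainty.
    return m2.mean(), m2.var() + np.mean(s2 ** 2)

mean, var = nested_predict(0.4)
print(f"nested prediction at x=0.4: mean={mean:.3f}, var={var:.4f}")
```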
269

Minimisation du risque empirique avec des fonctions de perte nonmodulaires / Empirical risk minimization with non-modular loss functions

Yu, Jiaqian 22 March 2017
This thesis addresses the problem of learning with non-modular losses. In a prediction problem where multiple outputs are predicted simultaneously, viewing the outcome as a joint set prediction is essential in order to better incorporate real-world circumstances. In empirical risk minimization, we aim at minimizing an empirical sum of the losses incurred on the finite sample, with some loss function that penalizes the prediction given the ground truth. In this thesis, we propose tractable and efficient methods for dealing with non-modular loss functions, with correctness and scalability validated by empirical results. First, we show the hardness of incorporating supermodular loss functions into the inference term when they have different graphical structures, and introduce a decomposition method for loss-augmented inference based on the alternating direction method of multipliers (ADMM), which depends only on two individual solvers, one for the loss function term and one for the inference term, treated as independent subproblems. Second, we propose a novel surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a subgradient or cutting plane. Finally, we introduce a novel convex surrogate operator for general non-modular loss functions, which provides for the first time a tractable solution for loss functions that are neither supermodular nor submodular. This surrogate is based on a canonical submodular-supermodular decomposition.
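To give an idea of where the O(p log p) complexity with O(p) oracle accesses comes from, the sketch below computes one variant of the Lovász hinge for a submodular set loss: sort the p margin violations, take the discrete derivatives of the set loss along the sorted order (the gradient of its Lovász extension), and keep only the positive violations. The coverage-style loss and the thresholding convention used here are illustrative assumptions; see the thesis for the precise definitions.

```python
import numpy as np

def lovasz_hinge(scores, labels, set_loss):
    """One variant of the Lovász hinge surrogate for a submodular set loss.

    scores   : real-valued predictions, shape (p,)
    labels   : ground truth in {-1, +1}, shape (p,)
    set_loss : function mapping a boolean "mispredicted" mask to a loss value
               (assumed submodular, increasing, with set_loss(empty) = 0)
    """
    margins = 1.0 - scores * labels                # per-output margin violations
    order = np.argsort(-margins)                   # O(p log p) sort, decreasing
    sorted_margins = margins[order]

    # Discrete derivatives of the set loss along the sorted order
    # = gradient of its Lovász extension at this point.
    p = len(scores)
    mask = np.zeros(p, dtype=bool)
    grad = np.empty(p)
    prev = 0.0
    for j, idx in enumerate(order):                # O(p) oracle calls to the loss
        mask[idx] = True
        current = set_loss(mask)
        grad[j] = current - prev
        prev = current

    # Hinge: only positive margin violations contribute.
    return float(np.maximum(sorted_margins, 0.0) @ grad)

# Example with a submodular coverage-style loss: each additional mispredicted
# output costs a little less than the previous one.
coverage_loss = lambda mask: float(np.sqrt(mask.sum()))
scores = np.array([2.0, -0.5, 0.3, -1.2])
labels = np.array([1, 1, -1, -1])
print("Lovász hinge value:", round(lovasz_hinge(scores, labels, coverage_loss), 3))
```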
270

Příprava keramických materiálů se zvýšenou tepelnou vodivostí pro jaderné aplikace / Design of nuclear ceramic materials with enhanced thermal conductivity

Roleček, Jakub January 2014
Uranium dioxide (UO2) is currently the most commonly used fuel material in commercial nuclear reactors. Its greatest disadvantage is its very low thermal conductivity: because fission of UO2 in a reactor releases a large amount of heat, a steep temperature gradient develops across the UO2 pellet. This gradient produces high thermal stresses inside the pellet, which in turn lead to cracking. The cracks promote the release of fission gases at high fuel burn-up, and crack formation together with the increased fission gas release ultimately reduces the durability of the nuclear fuel considerably. This thesis deals with increasing the thermal conductivity of nuclear fuel using a surrogate (model) material, CeO2. The similarities between the behaviour of CeO2 and UO2 during conventional sintering and during spark plasma sintering are studied. The approach used here to increase the thermal conductivity is the incorporation of a highly thermally conductive material, silicon carbide (SiC), into the structure of the CeO2 pellets. The silicon carbide is expected to increase the heat flux out of the pellet core and thus raise the thermal conductivity of CeO2. The thesis also compares the behaviour of SiC in a CeO2 matrix with the behaviour of SiC in UO2 reported in the literature.
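As a rough quantitative companion to the idea of raising pellet conductivity by dispersing SiC in the oxide matrix, the classical Maxwell-Eucken relation estimates the effective conductivity of a matrix containing a small volume fraction of dispersed inclusions. The numbers in the comment are assumed, illustrative values, not measurements from the thesis.

```latex
% Maxwell-Eucken estimate for a matrix of conductivity k_m containing a
% volume fraction phi of dispersed spherical inclusions of conductivity k_d:
\[
  k_{\mathrm{eff}} = k_m\,
  \frac{k_d + 2k_m + 2\phi\,(k_d - k_m)}{k_d + 2k_m - \phi\,(k_d - k_m)}
\]
% Illustrative, assumed numbers: k_m ~ 10 W/(m.K) for the oxide matrix,
% k_d ~ 120 W/(m.K) for SiC and phi = 0.10 give k_eff ~ 12.6 W/(m.K),
% i.e. roughly a 25 % increase from a 10 vol.% SiC addition.
```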
