21

Ensemble for Deterministic Sampling with positive weights : Uncertainty quantification with deterministically chosen samples

Sahlberg, Arne January 2016 (has links)
Knowing the uncertainty of a calculated result is always important, but especially so when performing calculations for safety analysis. A traditional way of propagating the uncertainty of input parameters is Monte Carlo (MC) methods. A quicker alternative to MC, especially useful when computations are heavy, is Deterministic Sampling (DS). DS works by hand-picking a small set of samples, rather than randomizing a large set as in MC methods. The samples and their corresponding weights are chosen to represent the uncertainty one wants to propagate by encoding the first few statistical moments of the parameters' distributions. Finding a suitable ensemble for DS is not easy, however. Given a large enough set of samples, one can always calculate weights to encode the first couple of moments, but there is good reason to want an ensemble with only positive weights. How to choose the ensemble for DS so that all weights are positive is the problem investigated in this project. Several methods for generating such ensembles have been derived, and an algorithm for calculating weights while forcing them to be positive has been found. The methods and generated ensembles have been tested for use in uncertainty propagation in many different cases, and the ensemble sizes have been compared. In general, encoding two or four moments in an ensemble seems to be enough to get a good result for the propagated mean value and standard deviation. Regarding size, the most favorable case is when the parameters are independent and have symmetrical distributions. In short, DS can work as a quicker alternative to MC methods in uncertainty propagation as well as in other applications.
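A minimal sketch of the idea, assuming NumPy and an unscented-transform-style construction (illustrative only; the thesis derives its own ensembles): for d independent parameters, 2d+1 samples with strictly positive weights reproduce the mean and variance exactly.

    import numpy as np

    def ds_ensemble(mean, std, kappa=2.0):
        # Build 2d+1 deterministic samples whose positive weights encode the
        # mean and variance of d independent parameters exactly.
        mean = np.asarray(mean, float)
        std = np.asarray(std, float)
        d = mean.size
        w0 = kappa / (d + kappa)            # positive whenever kappa > 0
        wi = 0.5 / (d + kappa)
        scale = np.sqrt(d + kappa)
        samples, weights = [mean], [w0]
        for i in range(d):
            e = np.zeros(d)
            e[i] = scale * std[i]
            samples += [mean + e, mean - e]
            weights += [wi, wi]
        return np.array(samples), np.array(weights)

    # Propagation through a model f then reduces to weighted sums over the
    # ensemble: m = w @ f(S); v = w @ (f(S) - m) ** 2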
22

Mathematical modelling and numerical simulation in materials science

Boyaval, Sébastien 16 December 2009 (has links) (PDF)
In a first part, we study numerical schemes using the finite-element method to discretize the Oldroyd-B system of equations, modelling a viscoelastic fluid under no-flow boundary conditions in a 2- or 3-dimensional bounded domain. The goal is to obtain schemes that are stable in the sense that they dissipate a free energy, thereby mimicking the thermodynamic dissipation properties identified for smooth solutions of the continuous model. This study adds to numerous previous ones on the instabilities observed in numerical simulations of viscoelastic fluids (in particular those known as High Weissenberg Number Problems). To our knowledge, this is the first study that rigorously considers numerical stability in the sense of energy dissipation for Galerkin discretizations. In a second part, we adapt and use ideas of a numerical method initially developed in the works of Y. Maday, A. T. Patera et al., the reduced-basis method, in order to efficiently simulate some multiscale models. The principle is to numerically approximate each element of a parametrized family of complicated objects in a Hilbert space by the closest linear combination within the best linear subspace spanned by a few well-chosen elements of the same parametrized family. We apply this principle to numerical problems linked, first, to the numerical homogenization of second-order elliptic equations with two-scale oscillating diffusion coefficients; then, to the propagation of uncertainty (computation of the mean and the variance) in an elliptic problem with stochastic coefficients (a bounded stochastic field in a boundary condition of third type); and last, to the Monte Carlo computation of the expectations of numerous parametrized random variables, in particular functionals of parametrized Itô stochastic processes close to what is encountered in micro-macro models of polymeric fluids, with a control variate to reduce their variance. In each application, the goal of the reduced-basis approach is to speed up the computations without any loss of precision.
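The last application above pairs a surrogate with a control variate to cut Monte Carlo variance. A toy sketch of the control-variate mechanism itself (generic; the abstract suggests the reduced-basis approximation plays the surrogate role in the thesis):

    import numpy as np

    rng = np.random.default_rng(0)

    # Estimate E[f(X)] using a cheap correlated surrogate g(X) whose
    # expectation is known exactly as a variance-reducing control variate.
    x = rng.standard_normal(100_000)
    f = np.exp(0.1 * x)                    # quantity of interest
    g = 1.0 + 0.1 * x                      # surrogate with E[g] = 1 exactly
    beta = np.cov(f, g)[0, 1] / g.var(ddof=1)
    estimate = f.mean() - beta * (g.mean() - 1.0)   # lower-variance estimator

The closer the surrogate tracks f, the larger the variance reduction.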
23

Instrumentation optimale pour le suivi des performances énergétiques d’un procédé industriel / Optimal sensor network design to monitor the energy performances of a process plant

Rameh, Hala 07 November 2018 (has links)
Energy efficiency is becoming an essential research area for the scientific community, given its importance in the fight against current and future energy crises. Analyzing the energy performance of an industrial process requires knowing the physical quantities involved in the mass and energy balances. Hence the problem: how should measurement points be placed on an industrial site so that the values of all the energy indicators can be determined, without redundant measurements (thus respecting economic constraints) and while maintaining an accepted level of accuracy in the results? The first part presents the formulation of the instrumentation problem, which aims to guarantee a minimal observability of the system in favor of the key variables. This problem is combinatorial. A method for validating the different sensor combinations is introduced, based on the structural interpretation of the matrix representing the process. The bottleneck of long computation times when addressing medium- and large-scale processes is lifted: sequential methods were developed that find a set of sensor networks satisfying the observability requirements in less than 1% of the computation time initially required. The second part deals with the choice of the optimal instrumentation scheme. The difficulty of propagating uncertainties in a problem of variable size is addressed by modelling the process with binary parameters, which automates the evaluation of the uncertainty of every sensor network found. Finally, the complete methodology is applied to an industrial case and the results are presented.
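A hedged sketch of the structural validation step described above, assuming SciPy (the matrix interpretation is real, but the encoding and names here are illustrative): a candidate sensor set keeps the system observable only if the unmeasured columns of the binary balance matrix can all be matched to distinct equations.

    import numpy as np
    from scipy.sparse import csr_matrix
    from scipy.sparse.csgraph import maximum_bipartite_matching

    def structurally_observable(A, measured):
        # A[i, j] = 1 if balance equation i involves variable j; the unmeasured
        # variables are solvable when their columns admit a matching that
        # covers them all (structural full column rank).
        unmeasured = [j for j in range(A.shape[1]) if j not in measured]
        sub = csr_matrix(A[:, unmeasured])
        match = maximum_bipartite_matching(sub, perm_type='row')
        return bool((match >= 0).all())

    # Two balance equations over three variables: measuring variable 2
    # leaves variables 0 and 1 solvable from the two equations.
    A = np.array([[1, 1, 0],
                  [0, 1, 1]])
    print(structurally_observable(A, measured={2}))   # True

Enumerating candidate sensor sets against such a test is what makes the problem combinatorial, and what the sequential methods above accelerate.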
24

Développement d’une méthode stochastique de propagation des incertitudes neutroniques associées aux grands coeurs de centrales nucléaires : application aux réacteurs de génération III / Development of a neutronics uncertainty propagation stochastic method associated to large cores : application to GEN-III nuclear power plants

Volat, Ludovic 10 October 2018 (has links)
Generation III light water reactors follow design guidelines comparable to those of current pressurized water reactors while offering enhanced features in terms of safety, energy efficiency, radiation protection and environmental impact. Among these improvements, the large core size and the use of a heavy reflector translate into better neutronic efficiency and better protection of the reactor vessel. Because of the large dimensions, however, the risk of a tilt in the neutron power map is exacerbated, making the power tilt a parameter of interest in reactor safety studies. Moreover, the uncertainty associated with the radial power map is hardly reachable with the deterministic methods currently implemented in neutronics codes. This PhD work therefore develops an innovative stochastic method for propagating neutronics uncertainties. While resting on probabilistic results, it takes advantage of growing computational power to explore all the physical states of the reactor that are statistically foreseen. After validation, the method was applied to a large-core OECD/NEA benchmark with covariance values drawn from a critical analysis. For this system, the uncertainty associated with the effective neutron multiplication factor keff is 638 pcm (1σ); the total power tilt is 8.8% (1σ), and the maximum uncertainty associated with the insertion of a group of control rods used for reactor control is 11% (1σ).
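A minimal sketch of the sampling engine such a stochastic propagation rests on (all numbers illustrative; the actual study perturbs nuclear data and reruns a neutron transport solver for each sample):

    import numpy as np

    rng = np.random.default_rng(0)

    # Draw N parameter sets from N(mu, C) via a Cholesky factor of the
    # covariance, evaluate the model on each, and report the sample standard
    # deviation of the output as the propagated 1-sigma uncertainty.
    mu = np.array([1.0, 2.0, 0.5])               # nominal data (illustrative)
    C = np.array([[0.020, 0.010, 0.000],
                  [0.010, 0.030, 0.000],
                  [0.000, 0.000, 0.010]])        # covariance (illustrative)
    L = np.linalg.cholesky(C)
    samples = mu + rng.standard_normal((1000, 3)) @ L.T

    keff = samples @ np.array([0.3, 0.2, 0.1])   # stand-in for the transport solve
    print(keff.std(ddof=1))                      # 1-sigma uncertainty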
25

Traitement de l'incertitude pour la reconnaissance de la parole robuste au bruit / Uncertainty learning for noise robust ASR

Tran, Dung Tien 20 November 2015 (has links)
This thesis focuses on noise-robust automatic speech recognition (ASR). It comprises two parts. First, we focus on better handling of uncertainty to improve ASR performance in noisy environments. Second, we present a method to accelerate the training of a neural network using an auxiliary function technique. In the first part, multichannel speech enhancement is applied to the noisy input speech. The posterior distribution of the underlying clean speech is then estimated, as represented by its mean and its covariance matrix, or uncertainty. We show how to propagate the diagonal uncertainty covariance matrix in the spectral domain through the feature computation stage to obtain the full uncertainty covariance matrix in the feature domain. Uncertainty decoding exploits this posterior distribution to dynamically modify the acoustic model parameters in the decoding rule: the rule simply consists of adding the uncertainty covariance matrix of the enhanced features to the variance of each Gaussian component. We then propose two uncertainty estimators, based on fusion and on nonparametric estimation, respectively. To build a new estimator, we consider a linear combination of existing uncertainty estimators or kernel functions. The combination weights are estimated generatively by minimizing a divergence with respect to the oracle uncertainty; the divergence measures used are weighted versions of the Kullback-Leibler (KL), Itakura-Saito (IS), and Euclidean (EU) divergences. Due to the inherent nonnegativity of uncertainty, this estimation problem can be seen as an instance of weighted nonnegative matrix factorization (NMF). In addition, we propose two discriminative uncertainty estimators based on a linear or nonlinear mapping of the generatively estimated uncertainty. This mapping is trained so as to maximize the boosted maximum mutual information (bMMI) criterion; we compute the derivative of this criterion using the chain rule and optimize it by stochastic gradient descent. In the second part, we introduce a new learning rule for neural networks based on an auxiliary function technique that requires no parameter tuning. Instead of minimizing the objective function directly, this technique minimizes a quadratic auxiliary function, introduced recursively layer by layer, which has a closed-form optimum. The properties of this auxiliary function guarantee the monotonic decrease of the objective.
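A compact sketch of the decoding rule just described, assuming a diagonal-covariance GMM acoustic model (the function and variable names are illustrative):

    import numpy as np

    def uncertainty_decoding_loglik(x_hat, sigma2_x, mu, sigma2_g, log_w):
        # Score the enhanced feature x_hat against a K-component diagonal GMM
        # after adding the feature uncertainty sigma2_x to every component
        # variance, exactly as in the decoding rule above.
        var = sigma2_g + sigma2_x                     # (K, D) + (D,) broadcast
        ll = -0.5 * (np.log(2 * np.pi * var)
                     + (x_hat - mu) ** 2 / var).sum(axis=1)
        return np.logaddexp.reduce(log_w + ll)        # log p(x_hat)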
26

Schemes and Strategies to Propagate and Analyze Uncertainties in Computational Fluid Dynamics Applications / Schémas et stratégies pour la propagation et l’analyse des incertitudes dans la simulation d’écoulements

Geraci, Gianluca 05 December 2013 (has links)
In this manuscript, three main contributions are presented concerning the propagation and analysis of uncertainty for computational fluid dynamics (CFD) applications. First, two novel numerical schemes are proposed: one based on a collocation approach, the other on a finite-volume-like representation in the stochastic space. In both approaches, the key element is the introduction of a non-linear multiresolution representation in the stochastic space. The aim is twofold: reducing the dimensionality of the discrete solution, and applying a time-dependent refinement/coarsening procedure in the combined physical/stochastic space. Finally, an innovative strategy based on variance-based analysis is proposed for handling problems with a moderately large number of uncertainties in the context of robust design optimization. To make this optimization strategy more robust, the common ANOVA-like approach is also extended to high-order central moments (up to fourth order). The new approach is more robust than the original variance-based one, since the analysis relies on new sensitivity indices associated with a more complete statistical description.
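As a rough illustration of the variance-based analysis this strategy builds on, here is a standard pick-freeze estimator of a first-order Sobol index (the manuscript's extension to higher-order central moments is not reproduced here):

    import numpy as np

    rng = np.random.default_rng(0)

    def first_order_index(f, d, i, n=100_000):
        # Pick-freeze estimate of the first-order Sobol index S_i:
        # correlate f over two sample sets sharing only input i.
        a = rng.random((n, d))
        b = rng.random((n, d))
        ab = b.copy()
        ab[:, i] = a[:, i]                     # freeze input i
        ya, yab = f(a), f(ab)
        return np.cov(ya, yab)[0, 1] / ya.var(ddof=1)

    f = lambda x: x[:, 0] + 2 * x[:, 1]        # toy model
    print(first_order_index(f, d=2, i=1))      # ~0.8: input 1 carries 4/5 of the variance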
27

Modelling and Analyzing the Uncertainty Propagation in Vector-Based Network Structures in GIS

Yarkinoglu Gucuk, Oya 01 September 2007 (has links) (PDF)
Uncertainty is a quantitative attribute that represents the difference between reality and a representation of reality. Uncertainty analysis and error propagation modelling reveal how input error propagates to the output. The main objective of this thesis is to model uncertainty and its propagation for dependent line segments, taking positional correlation into account. The model is implemented as a plug-in, called the Propagated Band Model (PBM) Plug-in, to a commercial desktop application, GeoKIT Explorer. Implementation of the model is divided into two parts. In the first, the model is applied to each line segment of the selected network separately. In the second, the error in each segment is transmitted through the line segments from the start node to the end node of the network. The outcomes are then compared with the results of the G-Band model, the latest uncertainty model for vector features. To comment on the similarities and differences of the outcomes, the implementation is handled for two different cases. In the first case, users digitize the selected road network. In the second case, recently developed software called Interactive Drawer (ID) is used to let the user define a new network and simulate it with the Monte Carlo simulation method. The PBM Plug-in is designed to accept the outputs of these implementation cases as input, as well as to generate and visualize the uncertainty bands of the given line network. The developed implementations and functionality express the importance and effectiveness of uncertainty handling in vector-based geometric features, especially line segments, which construct a network.
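A toy Monte Carlo check of the kind of positional-error propagation simulated above (values illustrative; this is neither the PBM nor the G-Band implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Perturb both endpoints of a 100 m segment with independent positional
    # error and observe the propagated error at an interior point.
    n = 10_000
    p1 = np.array([0.0, 0.0]) + rng.normal(0, 0.5, (n, 2))     # sigma = 0.5 m
    p2 = np.array([100.0, 0.0]) + rng.normal(0, 0.5, (n, 2))
    t = 0.25                                                    # 25% along the segment
    p = (1 - t) * p1 + t * p2
    print(p.std(axis=0))   # ~0.40 m; for independent endpoint errors the
                           # band is narrowest at mid-segment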
28

A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

Moffitt, Blake Almy 05 April 2010 (has links)
Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is possible with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low-temperature operation, modularity, and rapid refuelability, making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. The last decade, however, has seen rapid development, with fuel cell gravimetric and volumetric power density nearly doubling every 2-3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion-based propulsion in how they generate power and interact with other aircraft subsystems; as a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for the balance of plant, hydrogen storage, and all-electric aircraft subsystems. In addition, fuel cell design and performance data is closely protected, which makes validation difficult and uncertainty significant. Finally, low specific power and high volume compared to traditional combustion-based propulsion result in more highly constrained design spaces that are problematic for design space exploration. To begin addressing the current gaps in fuel cell aircraft development, a methodology has been developed to explore and characterize the near-term performance of fuel cell powered UAVs. The first step of the methodology is the development of a valid MDA. This is accomplished by using propagated uncertainty estimates to guide the decomposition of the MDA into key contributing analyses (CAs) that can be individually refined and validated to increase the overall accuracy of the MDA. To assist in MDA development, a flexible framework for simultaneously solving the CAs is specified, enabling the MDA to be easily adapted to changes in technology and in data throughout a design process. Various CAs that model a polymer electrolyte membrane fuel cell (PEMFC) UAV are developed, validated, and shown to be in agreement with hardware-in-the-loop simulations of a fully developed fuel cell propulsion system. After creating a valid MDA, the final step of the methodology is the synthesis of the MDA with an uncertainty propagation analysis, an optimization routine, and a chance-constrained problem formulation. This synthesis allows an efficient calculation of the probabilistic constraint boundaries and Pareto frontiers that govern the design space and influence design decisions relating to optimization and uncertainty mitigation. A key element of the methodology is uncertainty propagation: the methodology uses Systems Sensitivity Analysis (SSA) to estimate the uncertainty of key performance metrics due to uncertainties in the design variables and in the accuracy of the CAs. A summary of SSA is provided, along with key rules for properly decomposing an MDA for use with SSA. Verification of SSA uncertainty estimates via Monte Carlo simulations is provided both for an example problem and for a detailed MDA of a fuel cell UAV. The methodology was implemented on a small fuel cell UAV designed to carry a 2.2 kg payload with 24 hours of endurance. Uncertainty distributions for both the design variables and the CAs were estimated from experimental results and were found to dominate the design space. To reduce uncertainty and test the flexibility of the MDA framework, CAs were replaced with either empirical or semi-empirical relationships during the optimization process. The final design was validated via a hardware-in-the-loop simulation. Finally, the fuel cell UAV's probabilistic design space was studied: a graphical representation of the design space was generated, and the optima due to deterministic and probabilistic constraints were identified. The methodology was used to identify Pareto frontiers of the design space, shown on contour plots, and unanticipated discontinuities of the Pareto fronts were observed as different constraints became active, providing useful information on which to base design and development decisions.
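A minimal sketch of the verification pattern described above on a toy model (first-order, sensitivity-based propagation checked against Monte Carlo; not the dissertation's SSA code):

    import numpy as np

    rng = np.random.default_rng(0)

    f = lambda x: x[..., 0] ** 2 + 3 * x[..., 1]    # toy performance metric
    x0 = np.array([2.0, 1.0])                       # nominal design variables
    sx = np.array([0.1, 0.2])                       # input standard deviations

    grad = np.array([2 * x0[0], 3.0])               # analytic sensitivities at x0
    sigma_lin = np.sqrt(((grad * sx) ** 2).sum())   # first-order estimate

    xs = x0 + rng.standard_normal((100_000, 2)) * sx
    sigma_mc = f(xs).std(ddof=1)                    # Monte Carlo verification
    print(sigma_lin, sigma_mc)                      # both ~0.72 for this model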
30

Methodology for the conceptual design of a robust and opportunistic system-of-systems

Talley, Diana Noonan 18 November 2008 (has links)
Systems are becoming more complicated, complex, and interrelated. Designers have recognized the need to develop systems from a holistic perspective and to design them as Systems-of-Systems (SoS). The design of an SoS, especially in the conceptual design phase, is generally characterized by significant uncertainty. As a result, all three types of uncertainty (aleatory, epistemic, and error) and the associated factors of uncertainty (randomness, sampling, confusion, conflict, inaccuracy, ambiguity, vagueness, coarseness, and simplification) can affect the design process. While a number of SoS design methods exist, several gaps have been identified: the ability to model all of the factors of uncertainty at varying levels of knowledge; the ability to consider both the pernicious and propitious aspects of uncertainty; and the ability to determine the value of reducing the uncertainty in the design process. While there are numerous uncertainty modelling theories, no single theory can effectively model every kind of uncertainty. This research presents a Hybrid Uncertainty Modeling Method (HUMM) that integrates techniques from Probability Theory, Evidence Theory, Fuzzy Set Theory, and Info-Gap Theory. The HUMM is capable of modelling all of the different factors of uncertainty and can model the uncertainty at multiple levels of knowledge. In the design process, the uncertainty has both pernicious and propitious characteristics. Existing design methods typically focus on developing robust designs that are insensitive to the associated uncertainty; they do not capitalize on the possibility of maximizing the potential benefit associated with the uncertainty. This research demonstrates how these deficiencies can be overcome by identifying the design that is both the most robust and the most opportunistic. In a design process it is possible that this design will not be selected from the set of potential alternatives because of the related uncertainty. This research therefore presents a process called the Value of Reducing Uncertainty Method (VRUM) that determines the value associated with reducing the uncertainty in the design problem before a final decision is made, by utilizing two concepts: the Expected Value of Reducing Uncertainty (EVRU) and the Expected Cost to Reduce Uncertainty (ECRU).
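To make the EVRU idea concrete, here is a hedged toy calculation in the spirit of the classical expected value of perfect information (the thesis's own formulation may differ): the value of resolving an uncertain parameter before committing to a robust or an opportunistic design.

    import numpy as np

    rng = np.random.default_rng(0)

    theta = rng.normal(0.0, 1.0, 100_000)        # uncertain parameter
    payoff = np.stack([0.2 + 0 * theta,          # design A: robust, insensitive
                       0.5 * theta])             # design B: opportunistic
    best_now = payoff.mean(axis=1).max()         # commit before learning theta
    best_informed = payoff.max(axis=0).mean()    # commit after learning theta
    print(best_informed - best_now)              # > 0: reducing uncertainty has value

Comparing this value against the cost of the extra analysis or testing is the EVRU/ECRU trade the method formalizes.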
