  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
81

Conditionnement de la modélisation stochastique 3D des réseaux de failles / Conditioning of the 3D stochastic modeling of fault networks

Julio, Charline 23 June 2015 (has links)
Faults are discontinuities in rock volumes that affect the mechanical properties and flow paths of hydrocarbon reservoirs. However, subsurface modeling remains limited by the incompleteness and resolution of available data, so uncertainties remain on the geometry and connectivity of fault networks. To assess fault network uncertainties, several stochastic approaches have been introduced in the literature. These methods generate a set of possible fault models conditioned by reservoir data. In this thesis, we investigate two main conditioning strategies for stochastic fault modeling methods. The first takes into account observations of fault absence, for instance as indicated by seismic reflector continuity. To do this, the reservoir volume is divided into two sub-volumes delimited by a 3D envelope surface: (1) a volume where no faults occur, and (2) a potentially-faulted volume. Faults are then simulated and optimized so as to be entirely confined to the potentially-faulted volume. The second strategy deals with the uncertainties related to the seismic interpretation of fault segmentation. It generates a set of fine-scale en-échelon fault segments from a larger-scale, continuous interpretation of the fault. The method uses the orientation variations of the continuous fault to subdivide it into several possible fault segments. The effects of the different segmentation configurations on flow simulations are studied.
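As a rough illustration of the first conditioning strategy, the sketch below rejects simulated fault seeds that fall outside the potentially-faulted volume. This is a toy simplification, not the thesis's method: real fault objects are surfaces that are simulated and then optimized toward the admissible volume, and the envelope predicate `is_potentially_faulted` here is a hypothetical stand-in for the 3D envelope surface.

```python
import random

def simulate_conditioned_faults(n_faults, is_potentially_faulted, max_tries=10000):
    """Draw candidate fault seed points uniformly in a unit reservoir box
    and keep only those lying inside the potentially-faulted volume."""
    accepted = []
    for _ in range(max_tries):
        if len(accepted) == n_faults:
            break
        p = (random.random(), random.random(), random.random())
        if is_potentially_faulted(*p):
            accepted.append(p)
    return accepted

random.seed(0)
# Hypothetical 3D envelope: faulting allowed only below the surface z = 0.5.
faults = simulate_conditioned_faults(20, lambda x, y, z: z < 0.5)
assert len(faults) == 20 and all(z < 0.5 for _, _, z in faults)
```

Pure rejection scales poorly when the potentially-faulted volume is small, which is one reason the thesis optimizes the simulated surfaces rather than merely resampling them.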
82

Metodika pro kalibraci objemu nádob a nádrží / Methodology for calibration of containers and tanks

Vrátil, Šimon January 2013 (has links)
This diploma thesis deals with the calibration of vessel and tank volumes using the volumetric method. The analysis considers current regulations together with the influence of measurement uncertainties. The principal part of the thesis is an analysis of the metrological uncertainties resulting from the application of various volumetric methods during the calibration process. A practical outcome of this research is the development of a metrological procedure for the calibration of vessels that can be used by accredited laboratories.
83

Kalibrace kontaktních a bezkontaktních teploměrů / Calibration of contact and non-contact thermometers

Skalický, David January 2016 (has links)
This work deals with the calibration of contact and contactless thermometers. The theoretical part describes the physical laws that are important for contact and contactless temperature measurement, requirements for thermometers, the suitability of different types of thermometers, and the materials used for the construction of temperature sensors. The following part describes metrology, with a focus on the metrology system of the Czech Republic. The thesis also covers measurement errors and uncertainties, in particular their classification, the sources of uncertainty, and the methodology for determining type A and type B uncertainties. The practical part is focused on the calibration of thermometers at the Czech Metrological Institute and the Faculty of Electrical Engineering and Communication in Brno.
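The type A / type B terminology follows the GUM. A minimal sketch of the standard recipe, assuming a rectangular distribution for the type B component and independence of the components; the numerical readings are illustrative, not taken from the thesis:

```python
import math
import statistics

def type_a_uncertainty(readings):
    """Type A: standard uncertainty of the mean from repeated readings."""
    return statistics.stdev(readings) / math.sqrt(len(readings))

def type_b_uncertainty(half_width):
    """Type B: rectangular distribution of given half-width
    (e.g. half the last digit of the display resolution)."""
    return half_width / math.sqrt(3)

def combined_uncertainty(components):
    """Combine independent components in quadrature (GUM law of propagation)."""
    return math.sqrt(sum(u * u for u in components))

readings = [20.01, 20.03, 19.99, 20.02, 20.00]   # illustrative readings in °C
u_a = type_a_uncertainty(readings)               # ≈ 0.0071 °C
u_b = type_b_uncertainty(0.005)                  # ≈ 0.0029 °C
u_c = combined_uncertainty([u_a, u_b])
```

The combined value `u_c` would then be multiplied by a coverage factor (commonly k = 2) to report an expanded uncertainty.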
84

Cartographie topographique et radiologique 3D en temps réel : acquisition, traitement, fusion des données et gestion des incertitudes / Real-time 3D topographical and radiological mapping : acquisition, fusion, data processing and uncertainties management.

Hautot, Félix 16 June 2017 (has links)
In the field of nuclear activities such as maintenance, decontamination and dismantling, status reports of potentially contaminated or activated elements are required before any intervention. For economic reasons, this status report must be performed quickly. It is generally carried out by an operator, whose exposure time to radiation must be minimized. Indoor environments make such investigations difficult: plans may be out of date or missing, GPS signal is lost, and external, pre-calibrated localization systems are hard to deploy. The premises status report is obtained by coupling nuclear measurements with topographical mapping. In this context, a portable instrument is needed that delivers a radiological and topographical map of the premises as exhaustive as possible, in order to support decisions on the best intervention scenario. Furthermore, to reduce the operator's exposure time, the acquired data must be usable in real time. Such an instrument must support complex interventions and provide the best dosimetric previsions in order to optimize intervention time during dismantling as well as the management of any waste. To this end, Areva STMI developed an autonomous system for positioning nuclear measurement probes and estimating their motion in real time, based on SLAM (Simultaneous Localization And Mapping) techniques; these developments led to the filing of a patent.
This thesis work pursued that study: decomposing the underlying sub-systems, continuing the developments on the fusion of topographical and radiological data, proposing optimizations, and laying the foundations of a real-time analysis of the associated uncertainties. The SLAM methods rely on visual odometry, which can be performed with RGB-D sensors (Microsoft Kinect®-like cameras). The acquisition process delivers a 3D map containing the radiological sensors' poses (3D positions and orientations) and the measurements themselves (dose rate and CZT gamma spectrometry), without requiring any pre-existing infrastructure. Moreover, radioactive source localization methods based on spatial interpolation and back projection of the measured signal have been developed and run in near real time.
It is then possible to evaluate the position of radioactive sources in the acquired scene and to compute radiological maps of the premises shortly after acquisition. The last part of this work laid the foundations of an original method for the near-real-time estimation of the accuracy of the results produced by the acquisition and processing chain. This first approach allowed us to formulate the evaluation and propagation of uncertainties along the acquisition chain in real time, in order to assess the precision and reliability of each acquisition. Finally, a benchmarking phase compares the results against reference methods.
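A much-simplified sketch of back projection for source localization: given dose-rate measurements at known poses, each candidate grid position is scored by how well a single inverse-square source explains the data. The real system works in 3D with SLAM-derived poses and spectrometric data; the 2D geometry and intensities below are hypothetical.

```python
def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

def backproject(measurements, candidates):
    """Score each candidate source position by how well an inverse-square
    model fits the dose-rate measurements (least squares on the intensity)."""
    best, best_err = None, float("inf")
    for c in candidates:
        # For a fixed candidate, the best-fit source intensity is closed-form.
        g = [1.0 / max(dist2(p, c), 1e-9) for p, _ in measurements]
        d = [m for _, m in measurements]
        a = sum(gi * di for gi, di in zip(g, d)) / sum(gi * gi for gi in g)
        err = sum((a * gi - di) ** 2 for gi, di in zip(g, d))
        if err < best_err:
            best, best_err = c, err
    return best

# Hypothetical source at (2, 3), measured from four known poses.
src = (2.0, 3.0)
poses = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
meas = [(p, 100.0 / dist2(p, src)) for p in poses]
grid = [(x * 0.5, y * 0.5) for x in range(11) for y in range(11)]
assert backproject(meas, grid) == (2.0, 3.0)
```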
85

Efficient Uncertainty Characterization Framework in Neutronics Core Simulation with Application to Thermal-Spectrum Reactor Systems

Dongli Huang (7473860) 16 April 2020 (has links)
<div>This dissertation is devoted to developing a first-of-a-kind uncertainty characterization framework (UCF) providing comprehensive, efficient and scientifically defensible methodologies for uncertainty characterization (UC) in best-estimate (BE) reactor physics simulations. The UCF is designed with primary application to CANDU neutronics calculations, but it can also be applied to other thermal-spectrum reactor systems. The overarching goal of the UCF is to propagate and prioritize all sources of uncertainty, including those originating from nuclear data uncertainties, modeling assumptions, and other approximations, in order to reliably use the results of BE simulations in the various aspects of reactor design, operation, and safety. The scope of the UCF is to propagate nuclear data uncertainties from the multi-group format, representing the input to lattice physics calculations, to the few-group format, representing the input to nodal diffusion-based core simulators, and to quantify the uncertainties in reactor core attributes.</div><div>The main contribution of this dissertation addresses two major challenges in current uncertainty analysis approaches. The first is the feasibility of the UCF, given the complex nature of nuclear reactor simulation and the computational burden of conventional uncertainty quantification (UQ) methods.
The second is the impact of other sources of uncertainty that are typically ignored when propagating nuclear data uncertainties, such as various modeling assumptions and approximations.</div>To deal with the first challenge, this thesis proposes an integrated UC process employing a number of approaches and algorithms, including the physics-guided coverage mapping (PCM) method in support of model validation, and reduced order modeling (ROM) techniques as well as sensitivity analysis (SA) of uncertainty sources, to reduce the dimensionality of the uncertainty space at each interface of the neutronics calculations. In addition to these techniques for reducing computational cost, the UCF aims to accomplish four primary functions in the uncertainty analysis of neutronics simulations. The first function is to identify all sources of uncertainty: nuclear data uncertainties, modeling assumptions, numerical approximations and technological parameter uncertainties. Second, the proposed UC process propagates the identified uncertainties to the responses of interest in core simulation and provides UQ analysis for these core attributes. Third, the propagated uncertainties are mapped to a wide range of reactor core operating conditions. Fourth, the identified uncertainty sources are prioritized, i.e., a phenomena identification and ranking table (PIRT) is generated which sorts the major sources of uncertainty according to their impact on the core attributes' uncertainties. In the proposed implementation, the nuclear data uncertainties are first propagated from the multi-group level through lattice physics calculations to generate few-group parameter uncertainties, described by a vector of mean values and a covariance matrix.
Employing an ROM-based compression of the covariance matrix, the few-group uncertainties are then propagated through the downstream core simulation in a computationally efficient manner.<div>To explore the impact of uncertainty sources other than nuclear data on the UC process, a number of approximations and assumptions are investigated in this thesis: modeling assumptions such as resonance treatment and energy group structure, and assumptions associated with the uncertainty analysis itself, e.g., the linearity assumption and the level of ROM reduction with its associated number of degrees of freedom. These approximations and assumptions have been employed in the neutronics uncertainty analysis literature without formal verification. The major argument here is that they may introduce another source of uncertainty whose magnitude needs to be quantified in tandem with the nuclear data uncertainties. In order to assess whether modeling uncertainties affect parameter uncertainties, this dissertation proposes a process to evaluate the influence of various modeling assumptions and approximations and to investigate the interactions between the two major uncertainty sources. To this end, the impact of a number of modeling assumptions on core attribute uncertainties is quantified.</div><div>The proposed UC process was first applied to a BWR case in order to test the uncertainty propagation and prioritization process with the ROM implementation over a wide range of core conditions. Finally, a comprehensive uncertainty library for CANDU uncertainty analysis with NESTLE-C as the core simulator is generated using compressed uncertainty sources from the proposed UCF. The modeling uncertainties, as well as their impact on the parameter uncertainty propagation process, are investigated for the CANDU application using this uncertainty library.</div>
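The ROM-based compression of a few-group covariance matrix can be sketched as a truncated eigendecomposition followed by sampling in the reduced space. This is a generic low-rank sketch under assumed dimensions, not the dissertation's actual implementation:

```python
import numpy as np

def compress_covariance(cov, rank):
    """Keep the leading eigenpairs of a covariance matrix (ROM-style
    truncation) and return a factor L with L @ L.T ≈ cov."""
    w, v = np.linalg.eigh(cov)                 # ascending eigenvalues
    idx = np.argsort(w)[::-1][:rank]           # take the top `rank` modes
    return v[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

def sample_uncertainties(mean, factor, n_samples, rng):
    """Propagate few-group uncertainties by sampling in the reduced space."""
    xi = rng.standard_normal((factor.shape[1], n_samples))
    return mean[:, None] + factor @ xi

rng = np.random.default_rng(0)
# Illustrative rank-2 covariance over 5 few-group parameters.
a = rng.standard_normal((5, 2))
cov = a @ a.T
L = compress_covariance(cov, 2)
assert np.allclose(L @ L.T, cov)
samples = sample_uncertainties(np.zeros(5), L, 1000, rng)
assert samples.shape == (5, 1000)
```

Sampling in the reduced space means each downstream core simulation only needs `rank` random inputs instead of one per parameter, which is the source of the computational saving.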
86

Posouzení schopnosti regionálních klimatických modelů simulovat klima na území ČR / Assessment of regional climate models performance in simulating present-day climate over the area of the Czech Republic

Crhová, Lenka January 2011 (has links)
Title: Assessment of regional climate models performance in simulating present-day climate over the area of the Czech Republic Author: Lenka Crhová Department: Department of Meteorology and Environment Protection Supervisor: doc. RNDr. Jaroslava Kalvová, CSc. Supervisor's e-mail address: Jaroslava.Kalvova@mff.cuni.cz Abstract: Today great attention is paid to climate change and its impacts. Since the eighties, regional climate models (RCMs) have been developed for the assessment of future climate at regional scales, but their outputs suffer from many uncertainties. It is therefore necessary to assess the models' ability to simulate observed climate characteristics, and the uncertainties in their outputs, before they are applied in subsequent studies. The first chapters of this thesis review the sources of uncertainty in climate model outputs and selected methods of evaluating climate model performance. Several methods of model performance assessment are then applied to simulations of the Czech regional climate model ALADIN-Climate/CZ and selected RCMs from the ENSEMBLES project for the reference period 1961-1990 over the area of the Czech Republic. Attention is paid especially to the comparison of simulated and observed spatial and temporal variability of several climatic elements. Within this thesis the...
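Model performance assessment of the kind described often starts from simple comparison statistics such as bias and RMSE between simulated and observed values. A minimal sketch with invented monthly temperatures, not data from the thesis:

```python
import math

def bias(sim, obs):
    """Mean signed difference: negative means the model runs cold."""
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    """Root-mean-square error between simulation and observation."""
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

# Illustrative monthly mean temperatures (°C): station data vs RCM output.
obs = [-2.0, 0.5, 4.8, 9.6, 14.5, 17.8]
sim = [-3.1, -0.2, 4.1, 9.0, 13.8, 16.8]
assert abs(bias(sim, obs) + 0.8) < 1e-9   # cold bias of 0.8 °C here
```

RMSE is always at least as large as the absolute bias, so comparing the two hints at whether errors are systematic or scattered.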
87

Can we reliably assess climate mitigation options for air traffic scenarios despite large uncertainties in atmospheric processes?

Dahlmann, Katrin, Grewe, Volker, Frömming, Christine, Burkhardt, Ulrike 23 September 2020 (has links)
Air traffic has an increasing influence on climate; identifying mitigation options to reduce the climate impact of aviation therefore becomes more and more important. Aviation influences climate through several climate agents, which differ in their dependence on the magnitude and location of emissions and in their spatial and temporal impacts; even counteracting effects can occur. It is therefore important to analyse all effects with high accuracy in order to identify mitigation potentials. However, the uncertainties in calculating the climate impact of aviation are in part large (up to a factor of about 2). In this study, we present a methodology, based on a Monte Carlo simulation of the updated non-linear climate-chemistry response model AirClim, to integrate the above-mentioned uncertainties into the climate assessment of mitigation options. Since mitigation options often represent small changes in emissions, we concentrate on a more generalised approach and use, as examples, different normalised global air traffic inventories to test the methodology. These inventories are identical in total emissions but differ in the spatial emission distribution. We show that using the Monte Carlo simulation and analysing relative differences between scenarios leads to a reliable assessment of mitigation potentials. In a use case we show that the presented methodology can be used to analyse even small differences between scenarios with mean flight-altitude variations.
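The key point, that relative differences between scenarios are far more robust than absolute impacts, can be illustrated with a toy Monte Carlo: when an uncertain factor applies equally to both scenarios, it cancels in the relative difference. The lognormal spread and the impact values below are assumptions for illustration, not AirClim parameters:

```python
import random

def relative_difference(impact_a, impact_b, n_draws, rng):
    """Toy Monte Carlo: draw one uncertain sensitivity per iteration, apply
    it to BOTH scenarios, and collect the relative impact difference.
    Fully shared uncertainties cancel in the ratio."""
    diffs = []
    for _ in range(n_draws):
        factor = rng.lognormvariate(0.0, 0.35)   # roughly factor-2 spread
        a = factor * impact_a
        b = factor * impact_b
        diffs.append((b - a) / a)
    return diffs

rng = random.Random(42)
diffs = relative_difference(100.0, 90.0, 5000, rng)
# Absolute impacts vary widely across draws, but under this fully
# correlated uncertainty the relative difference stays at -10 %.
assert all(abs(d + 0.10) < 1e-12 for d in diffs)
```

In practice the uncertainties are only partially shared between scenarios, so the relative difference acquires a spread too, but a much narrower one than the absolute impacts.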
88

High-Dimensional Analysis of Regularized Convex Optimization Problems with Application to Massive MIMO Wireless Communication Systems

Alrashdi, Ayed 03 1900 (has links)
In the past couple of decades, the amount of available data has dramatically increased. Thus, in modern large-scale inference problems, the dimension of the signal to be estimated is comparable to, or even larger than, the number of available observations. Yet the desired properties of the signal typically lie in some low-dimensional structure, such as sparsity, low-rankness, finite alphabet, etc. Recently, non-smooth regularized convex optimization has risen as a powerful tool for the recovery of such structured signals from noisy linear measurements in an assortment of applications in signal processing, wireless communications, machine learning, computer vision, etc. With the advent of Compressed Sensing (CS), a huge number of theoretical results have considered the estimation performance of non-smooth convex optimization in this high-dimensional setting. In this thesis, we focus on precisely analyzing the high-dimensional error performance of such regularized convex optimization problems in the presence of impairments (such as uncertainties) in the measurement matrix, which has independent Gaussian entries. The precise nature of our analysis allows performance comparison between different types of these estimators and enables us to optimally tune the involved hyper-parameters. In particular, we study the performance of some of the most popular estimators in linear inverse problems, such as the LASSO, Elastic Net, Least Squares (LS), Regularized Least Squares (RLS) and their box-constrained variants. In each context, we define appropriate performance measures and sharply analyze them in the high-dimensional statistical regime. We use our results for the concrete application of designing efficient decoders for modern massive multi-input multi-output (MIMO) wireless communication systems and of optimally allocating their power.
The framework used for the analysis is based on Gaussian process methods, in particular on a recently developed strong and tight version of the classical Gordon Comparison Inequality called the Convex Gaussian Min-max Theorem (CGMT). We also use some results from Random Matrix Theory (RMT) in our analysis.
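For concreteness, a small sketch of one of the estimators studied: the LASSO, solved here by plain proximal-gradient iteration (ISTA) on a Gaussian measurement matrix. The dimensions, regularization weight and noise level are arbitrary illustrative choices; the thesis's contribution is the precise CGMT-based error analysis, not this solver.

```python
import numpy as np

def lasso_ista(A, y, lam, n_iter=2000):
    """Solve min_x 0.5*||y - A x||^2 + lam*||x||_1 by ISTA:
    a gradient step followed by soft-thresholding (the l1 prox)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)       # gradient step on the LS term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return x

rng = np.random.default_rng(1)
n, m, k = 80, 200, 5                  # n measurements, m-dim k-sparse signal
A = rng.standard_normal((n, m)) / np.sqrt(n)   # iid Gaussian sensing matrix
x0 = np.zeros(m)
x0[:k] = 1.0
y = A @ x0 + 0.01 * rng.standard_normal(n)     # noisy linear measurements
x_hat = lasso_ista(A, y, lam=0.02)
assert np.linalg.norm(x_hat - x0) / np.linalg.norm(x0) < 0.2
```

The relative error measured in the last line is exactly the kind of quantity the CGMT characterizes sharply as n and m grow proportionally.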
89

Intégration de modèles approchés pour mieux transmettre l’impact des incertitudes statiques sur les courbes de réponse des simulateurs d’écoulements / Integration of approximated models in order to better assess impact of static uncertainties on flow simulator's response curves

Bardy, Gaétan 27 October 2015 (has links)
Although many different numerical models are commonly used for the static description of underground reservoirs and their associated uncertainties, fluid-flow uncertainties through these reservoirs can, for performance reasons, only be assessed with a few dynamic simulations.
The objective of this thesis is to better transmit the impact of static uncertainties onto the flow simulator's responses without increasing computation time, using approximated models (proxies). Research was undertaken in two directions: (1) the implementation of new proxies based on Fast Marching, in order to approximate the propagation of a fluid in a reservoir with only a few parameters, which yields response curves close to those provided by the flow simulator at a very small computational cost; (2) the set-up of a mathematical minimization procedure to predict the flow simulator's response curves from an analytical model and from distances between models computed with the proxy responses. The methods developed during this PhD were applied to two real case studies in order to validate them against data available in industry. The results show that the implemented proxies provide better information about fluid behavior than the previously available proxies, although ours can still be improved. We also highlight that our minimization procedure better assesses dynamic uncertainties, provided the proxy used is sufficiently reliable.
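A fluid-front arrival-time proxy in the spirit of Fast Marching can be sketched with a Dijkstra-style grid propagation. True Fast Marching solves the eikonal equation with upwind finite differences; this 4-neighbour shortest-path version is a cruder stand-in, and the homogeneous speed field below is hypothetical:

```python
import heapq

def arrival_times(speed, source):
    """Dijkstra on a grid as a crude stand-in for Fast Marching: the travel
    time between 4-neighbours is the average of their slownesses (1/speed)."""
    rows, cols = len(speed), len(speed[0])
    t = [[float("inf")] * cols for _ in range(rows)]
    t[source[0]][source[1]] = 0.0
    heap = [(0.0, source)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i][j]:                      # stale heap entry, skip
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                cost = 0.5 * (1.0 / speed[i][j] + 1.0 / speed[ni][nj])
                if ti + cost < t[ni][nj]:
                    t[ni][nj] = ti + cost
                    heapq.heappush(heap, (ti + cost, (ni, nj)))
    return t

# Homogeneous speed field: arrival times grow with Manhattan distance.
t = arrival_times([[1.0] * 5 for _ in range(5)], (0, 0))
assert t[0][4] == 4.0 and t[4][4] == 8.0
```

A proxy like this produces an "arrival curve" per geological model in milliseconds, which is what makes ranking thousands of static models feasible before committing to full flow simulations.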
90

Establishing a cost model when estimating product cost in early design phases

Jeppsson, Johanna, Sjöberg, Jessica January 2017 (has links)
About 75% of the total product cost is determined in the early design phase, which means that the possibilities to affect costs are relatively small once the design phase is completed. For companies it is therefore vital to conduct reliable cost estimates in the early design phase, when selecting between different design choices. Cost estimation involves many uncertainties. The aim of this study is therefore to explore how uncertainties regarding product cost can be considered when estimating product cost, and how experts' knowledge can be integrated into cost estimation. A case study has been conducted within the aerospace industry at the company GKN Aerospace Sweden (GAS) in Trollhättan, from which a model to estimate product cost has been developed. The model is developed for space turbines but can, with modifications, be used for other products. Space turbines are highly advanced products, produced in small batches with complex manufacturing processes and high costs. Because of the heavy capital investment, long lead times and high risks, cost estimates become very important, which made GAS suitable for the case study. The new cost estimation model (NCEM) developed is a combination of intuitive, analogical and analytical cost estimation techniques. Product cost at GAS is built up from the following cost elements, which are studied in depth: raw material, purchased parts, material surcharge, manufacturing cost, manufacturing surcharge, outsourced operations, method support, delivery cost, warranty and scrap. The material cost is estimated on the basis of historical data, and a list of previously purchased alloys is created. The manufacturing cost is determined in more detail: the cost of each operation is estimated based on operation time, amount of removed material or welding speed.
The method support cost is estimated on the basis of an internal prognosis in which the amount of time each discipline needs to support the product is determined. The NCEM also includes a risk assessment. The main insight from this study is that transparency is vital when estimating product cost: it is important to state which assumptions have been made. Breaking down the product cost into smaller units and creating awareness of the cost drivers will identify risks and reduce uncertainty. Experts possess a great deal of knowledge about cost drivers and should be involved when estimating product cost.
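The roll-up of cost elements with surcharges can be sketched as follows. The element names echo the list above, but the structure and the figures are hypothetical, not GAS's actual NCEM:

```python
def product_cost(elements, surcharges):
    """Sum the cost elements, then add percentage surcharges, each applied
    to the element it loads (hypothetical roll-up structure)."""
    total = sum(elements.values())
    for name, rate in surcharges.items():
        total += elements[name] * rate
    return total

elements = {                     # illustrative figures, arbitrary currency
    "raw_material": 500.0,
    "purchased_parts": 200.0,
    "manufacturing": 800.0,
    "method_support": 150.0,
}
surcharges = {"raw_material": 0.10, "manufacturing": 0.05}
cost = product_cost(elements, surcharges)   # 1650 + 50 + 40 ≈ 1740
```

Keeping the elements and surcharges as explicit named entries is one simple way to achieve the transparency the study argues for: every assumption behind the total is visible in the input.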
