121

Estudo comparativo de métodos geoestatísticos de estimativas e simulações estocásticas condicionais / Comparative study of geostatistical estimation methods and conditional stochastic simulations

Rafael de Aguiar Furuie 05 October 2009 (has links)
Diferentes métodos geoestatísticos são apresentados como a melhor solução para diferentes contextos de acordo com a natureza dos dados a serem analisados. Alguns dos métodos de estimativa mais populares incluem a krigagem ordinária e a krigagem ordinária lognormal, esta última requerendo a transformação dos dados originais para uma distribuição gaussiana. No entanto, esses métodos apresentam limitações, sendo uma das mais discutidas o efeito de suavização apresentado pelas estimativas obtidas. Alguns algoritmos recentes foram propostos como meios de se corrigir este efeito, e são avaliados neste trabalho quanto à sua eficiência, assim como alguns algoritmos para a transformada reversa dos valores convertidos na krigagem ordinária lognormal. Outra abordagem para o problema é por meio do grupo de métodos denominado de simulação estocástica, alguns dos mais populares sendo a simulação gaussiana seqüencial e a simulação por bandas rotativas, que, apesar de não apresentarem o efeito de suavização da krigagem, não possuem a precisão local característica dos métodos de estimativa. Este trabalho busca avaliar a eficiência dos diferentes métodos de estimativa (krigagem ordinária, krigagem ordinária lognormal, assim como suas estimativas corrigidas) e simulação (simulação seqüencial gaussiana e simulação por bandas rotativas) para diferentes cenários de dados. Vinte e sete conjuntos de dados exaustivos (em grid 50x50) foram amostrados em 90 pontos por meio da amostragem aleatória simples. Estes conjuntos de dados partiam de uma distribuição gaussiana (Log1) e tinham seus coeficientes de variação progressivamente aumentados até se chegar a uma distribuição altamente assimétrica (Log27). Semivariogramas amostrais foram computados e modelados para os processos geoestatísticos de estimativa e simulação. As estimativas ou realizações resultantes foram então comparadas com os dados exaustivos originais de maneira a se avaliar quão bem esses dados originais eram reproduzidos. Isto foi feito pela comparação de parâmetros estatísticos dos dados originais com os dos dados reconstruídos, assim como por meio de análise gráfica. Resultados demonstraram que o método que apresentou melhores resultados foi a krigagem ordinária lognormal, estes ainda melhores quando aplicada a transformação reversa de Yamamoto, com grande melhora principalmente nos resultados para os dados altamente assimétricos. A krigagem ordinária apresentou sérias limitações na reprodução da cauda inferior dos conjuntos de dados mais assimétricos, apresentando para estes resultados piores que as estimativas não corrigidas. Ambos os métodos de simulação utilizados apresentaram uma baixa correlação com os dados exaustivos, seus resultados também cada vez menos representativos de acordo com o aumento do coeficiente de variação, apesar de apresentarem a vantagem de fornecer diferentes cenários para tomada de decisões.

/ Different geostatistical methods present themselves as the optimal solution in different contexts, according to the characteristics of the data under analysis. Some of the most popular estimation methods include ordinary kriging and lognormal ordinary kriging, the latter requiring the transformation of the data from their original space to a Gaussian distribution. However, these methods present some limitations, one of the most discussed being the smoothing effect observed in the resulting estimates. Some recent algorithms have been proposed as ways to correct this effect, and they are tested in this work for their effectiveness, as are some methods for the backtransformation of the values converted in lognormal ordinary kriging. Another approach to the problem is the group of methods known as stochastic simulation, some of the most popular being sequential Gaussian simulation and turning bands simulation; although these do not present the smoothing effect of kriging, they lack the local accuracy characteristic of the estimation methods. This work seeks to assess the effectiveness of the different estimation methods (ordinary kriging and lognormal ordinary kriging, as well as their corrected estimates) and simulation methods (sequential Gaussian simulation and turning bands simulation) for different data scenarios. Twenty-seven exhaustive data sets (on a 50x50 grid) were sampled at 90 points by simple random sampling. These data sets started from a Gaussian distribution (Log1) and had their coefficients of variation progressively increased, up to a highly asymmetrical distribution (Log27). Experimental semivariograms were computed and modeled for the geostatistical estimation and simulation processes. The resulting estimates or realizations were then compared with the original exhaustive data in order to assess how well they reproduced the original data. This was done by comparing statistical parameters of the original data with those of the reconstructed data, as well as graphically. Results showed that lognormal ordinary kriging presented the best correlation with the exhaustive data, with further improvement when Yamamoto's backtransformation was applied, especially for the highly asymmetrical data sets. Ordinary kriging and its correction showed severe limitations in reproducing the lower tail of the more asymmetrical data sets, the corrected estimates performing worse there than the uncorrected ones. Both simulation methods presented a very low correlation with the exhaustive data, their results growing progressively less representative as the coefficient of variation increased, although they have the advantage of providing several scenarios for decision making.
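
As a rough, self-contained illustration of the estimation side of this comparison, the Python sketch below solves an ordinary kriging system with an exponential semivariogram and applies the classical exp(y* + σ²/2) lognormal backtransform. The variogram model and parameters, the synthetic lognormal samples, and the naive backtransform are all assumptions for demonstration; Yamamoto's corrected backtransformation evaluated in the thesis is not reproduced here.

```python
import numpy as np

def gamma_exp(h, nugget=0.0, sill=1.0, a=10.0):
    """Exponential semivariogram model (parameters are illustrative)."""
    return nugget + sill * (1.0 - np.exp(-h / a))

def ordinary_kriging(xs, z, x0, vario=gamma_exp):
    """Ordinary kriging estimate and variance at location x0.

    xs : (n, 2) sample coordinates, z : (n,) sample values.
    """
    n = len(z)
    d = np.linalg.norm(xs[:, None, :] - xs[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = vario(d)                 # semivariances between samples
    A[n, :] = A[:, n] = 1.0              # unbiasedness constraint
    A[n, n] = 0.0
    b = np.empty(n + 1)
    b[:n] = vario(np.linalg.norm(xs - x0, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)            # weights + Lagrange multiplier
    return w[:n] @ z, w[:n] @ b[:n] + w[n]

# Toy demonstration on synthetic skewed data: krige the log-values,
# then apply the classical (bias-prone) backtransform.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 50, size=(90, 2))        # 90 random samples, as in the study
z = np.exp(rng.normal(0.0, 1.0, size=90))    # lognormal (asymmetric) variable

y_est, y_var = ordinary_kriging(xs, np.log(z), np.array([25.0, 25.0]))
print("naive lognormal backtransform:", np.exp(y_est + 0.5 * y_var))
```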
122

Évaluation de politiques de séquençage d'arrivées d'avions par Simulation Monte Carlo / Evaluation of aircraft arrival sequencing policies by Monte Carlo simulation

Sboui, Wael 09 1900 (has links)
No description available.
123

Coherent gas flow patterns in heterogeneous permeability fields

Samani, Shirin 16 February 2012 (has links) (PDF)
Gas injection into saturated porous media has high practical relevance. It is applied in groundwater remediation (air sparging), in CO2 sequestration in saline aquifers, and in enhanced oil recovery of petroleum reservoirs. This wide range of applications necessitates a comprehensive understanding of the gas flow patterns that may develop within the porous medium and requires modeling of multi-phase flow. There is an ongoing controversy in the literature as to whether continuum models are able to describe the complex flow patterns observed in heterogeneous porous media, especially the channelized stochastic flow pattern. According to Selker's stochastic hypothesis, a gas channel is caused by a Brownian-motion process during gas injection; the pore-scale heterogeneity therefore determines the shape of the single stochastic gas channels. On the other hand, there are many studies on air sparging that are based on continuum modeling. To date it is not clear under which conditions a continuum model can describe the essential features of the complex gas flow pattern. The aim of this study is to investigate gas flow patterns at bench scale and field scale using the continuum model TOUGH2. Based on a comprehensive data set of bench-scale and field-scale experiments, we conduct for the first time a systematic study and evaluate the prediction ability of the continuum model. A second focus of this study is the development of a "real-world" continuum model, since on all scales (pore scale, bench scale, field scale) heterogeneity is a key driver of the stochastic gas flow pattern. We therefore use different geostatistical programs to include stochastic conditioned and unconditioned parameter fields. Our main conclusion from the bench-scale experiments is that a continuum model calibrated by different independent measurements has excellent prediction ability for the average flow behavior (e.g. the gas volume-injection rate relation). Moreover, we investigate the impact of both weakly and strongly heterogeneous parameter fields (permeability and capillary pressure) on the gas flow pattern. The results show that a continuum model with weak stochastic heterogeneity cannot represent the essential features of the experimental gas flow pattern (e.g., the single stochastic gas channels). In contrast, with strong heterogeneity the continuum model can represent the channelized flow. This observation supports Stauffer's statement that a so-called subscale continuum model with strong heterogeneity is able to describe the channelized flow behavior. On the other hand, comparing the theoretical integral gas volumes with our experiments, we found that strong heterogeneity always yields too large gas volumes. At field scale, the 3D continuum model is used to design and optimize the direct gas injection technology. The field-scale study is based on the working hypothesis that the key parameters are the same as at bench scale. We therefore assume that grain size and injection rate determine whether coherent channelized flow or incoherent bubbly flow develops at field scale. The results of four different injection regimes were compared with the data of the corresponding field experiments. The main conclusion is that, because of the buoyancy-driven gas flow, the vertical permeability has a crucial impact. Hence, the vertical and horizontal permeability should be implemented independently in numerical modeling via conditioned parameter fields.
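
The abstract's distinction between weak and strong stochastic heterogeneity can be pictured with a toy generator of unconditioned lognormal permeability fields. The Python/SciPy sketch below smooths white noise to impose a correlation length and exponentiates; all parameter values are assumptions for illustration, and the sketch stands in for the dedicated geostatistical programs and the TOUGH2 flow model used in the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lognormal_permeability_field(shape=(128, 128), corr_len=8.0,
                                 mean_log10_k=-11.0, sigma_log10_k=0.5, seed=0):
    """Unconditioned lognormal permeability field [m^2] on a regular grid.

    White noise is smoothed with a Gaussian kernel to impose a spatial
    correlation length (in grid cells), rescaled to unit variance, then
    exponentiated. All parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    field = gaussian_filter(noise, sigma=corr_len)
    field = (field - field.mean()) / field.std()    # standardize
    log10_k = mean_log10_k + sigma_log10_k * field  # heterogeneity strength
    return 10.0 ** log10_k

# "Weak" vs "strong" heterogeneity differ only in the log-variance used.
k_weak = lognormal_permeability_field(sigma_log10_k=0.25)
k_strong = lognormal_permeability_field(sigma_log10_k=1.5)
print("mean K (weak):", k_weak.mean())
print("max/min contrast (strong):", k_strong.max() / k_strong.min())
```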
124

Cartographie des événements hydrologiques extrêmes et estimation SCHADEX en sites non jaugés / Cartography of the extreme rain falls and use of the SCHADEX method for ungauged sites

Penot, David 17 October 2014 (has links)
Depuis 2006, à EDF, les études de crues extrêmes sont réalisées avec la méthode SCHADEX (Simulation Climato-Hydrologique pour l'Appréciation des Débits EXtrêmes). Elle s'appuie sur un modèle probabiliste MEWP (distribution saisonnière utilisant une classification par type de temps) pour décrire l'aléa pluie et sur une simulation stochastique croisant l'aléa pluie et l'aléa de saturation du bassin. Les approches par simulation, type SCHADEX, ont montré de bonnes performances pour estimer les distributions de crues extrêmes (projet ANR ExtraFlo , 2013). Cependant, l'utilisation de SCHADEX en l'absence de données (pluie, température, débit) sur le bassin à étudier reste problématique. Cette thèse propose une adaptation de la méthode en site non jaugé en essayant de conserver ses points forts, à savoir: - une structuration spatiale et probabiliste des précipitations conditionnée par les types de temps. - un croisement des aléas pluie et saturation du bassin par simulation stochastique. Ce travail s'est limité au pas de temps journalier afin d'aborder la problématique de régionalisation avec un maximum de données. La démarche s'est alors articulée autour de quatre grands axes: - proposer une méthode de régionalisation des précipitations journalières extrêmes ponctuelles et construire des cartes de pluies aux temps de retour remarquables. Évaluer l'intérêt d'une classification par type de temps pour la régionalisation des distributions de pluies extrêmes et qualifier l'interpolateur de pluie SPAZM pour l'estimation des pluies extrêmes. - s'intéresser à la construction de pluies de bassin (ou pluies spatiales) et en particulier à l'impact des choix de construction de cette pluie sur l'estimation des précipitations extrêmes concernant le bassin. - développer une méthode de simulation stochastique régionale permettant de proposer une distribution de débits journaliers issue d'un croisement des aléas pluies et saturation du bassin. - étudier le passage de la distribution des débits journaliers à la distribution des débits de pointe. Les principaux apports de cette thèse sont les suivants: - la prise en compte des types de temps permet d'améliorer la description des structures spatiales des précipitations extrêmes. - l'information apportée par les pluies SPAZM se révèle être précieuse pour l'estimation des pluies extrêmes en site non jaugé. - une étude de sensibilité du calcul de la pluie spatiale en fonction du nombre de postes utilisés (comparaison des pluies SPAZM et Thiessen) donne une indication sur le biais d'estimation. - le générateur de champs de pluie par bandes tournantes SAMPO permet d'étudier l'abattement sur les précipitations extrêmes et de mettre en place un modèle de correction pour les quantiles élevés des pluies spatiales SPAZM. - une nouvelle méthode de simulation stochastique peu paramétrée mais analogue à la méthode SCHADEX (croisement d'un aléa pluie et d'un aléa de saturation du bassin pour produire une distribution des débits journaliers) est proposée pour l'estimation en site non jaugé. - enfin, un travail préliminaire donne des premiers éléments sur le passage à la distribution des débits de pointe par un générateur d'hydrogrammes s'adaptant à la séquence des débits journaliers simulés. Tous ces développements et conclusions sont détaillés et justifiés dans le mémoire de thèse. / Since 2006, at EDF, extreme flood estimations are computed with the SCHADEX method (Climatic-hydrological simulation of extreme floods). 
This method relies on the MEWP probabilistic model (a seasonal rainfall distribution built on a weather-pattern classification) and on a stochastic simulation crossing the rainy-event hazard with the catchment saturation hazard. Simulation approaches such as SCHADEX have shown good performance in estimating extreme flood distributions. However, the use of the SCHADEX method without data (rain, temperature, runoff) for the catchment under study remains a major issue. This thesis proposes an adaptation of the method to ungauged sites, trying to keep the key strengths of the SCHADEX method: - a spatial and probabilistic structure of rainfall conditioned by weather patterns. - a crossing of rainfall and catchment saturation hazards by stochastic simulation. This work is limited to the daily time step in order to address the regionalization issue with a maximum of data. The approach is structured around four main points: - regionalizing point daily extreme precipitation and constructing maps of rainfall for notable return periods; evaluating the contribution of a weather-pattern classification to the regionalization of extreme rainfall distributions and qualifying the SPAZM interpolator for the estimation of extreme rainfall. - examining the construction of areal (catchment) rainfall, and in particular the impact of the construction choices on the estimation of extreme precipitation over the catchment. - developing a regional stochastic simulation method that produces a distribution of daily runoff by crossing the rainfall and catchment saturation hazards. - studying the transition from the distribution of daily flows to the distribution of peak flows. The main contributions of this thesis are the following: - taking weather patterns into account improves the description of the spatial structure of extreme precipitation. - the information provided by the SPAZM rainfall interpolator proves valuable for estimating extreme rainfall at ungauged sites. - a sensitivity analysis of the areal rainfall computation as a function of the number of stations used (comparing SPAZM and Thiessen areal rainfalls) gives an indication of the estimation bias. - the SAMPO turning-bands rainfall field generator is used to study the areal reduction factor of extreme precipitation and to build a correction model for the high quantiles of SPAZM areal rainfall. - a new, sparsely parameterized stochastic simulation method analogous to SCHADEX (crossing a rainfall hazard with a catchment saturation hazard to produce a distribution of daily flows) is proposed for estimation at ungauged sites. - finally, preliminary work gives first elements on the transition to the peak flow distribution using a hydrograph generator adapted to the sequence of simulated daily flows. All these developments and conclusions are detailed and justified in the thesis.
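
The core SCHADEX idea summarized above (crossing a rainfall hazard with a catchment saturation hazard by stochastic simulation to obtain a daily-flow distribution) can be caricatured in a few lines of Python. In the sketch below, the exponential rainfall law, the Beta saturation law, the quadratic runoff coefficient and the catchment area are stand-in assumptions, not the MEWP model or EDF's calibrated hydrological model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000  # number of simulated event/saturation crossings

# Rainfall hazard: exponential-tailed daily event rainfall (mm).
# SCHADEX uses seasonal MEWP distributions by weather pattern; a single
# exponential law stands in for them here, purely for illustration.
rain = rng.exponential(scale=20.0, size=n)

# Catchment saturation hazard: antecedent saturation in [0, 1], drawn
# here from a Beta law as a stand-in for simulated model states.
sat = rng.beta(a=2.0, b=2.0, size=n)

# Toy runoff response: the runoff coefficient grows with saturation.
runoff_mm = rain * sat**2                 # event runoff depth (mm)
area_km2 = 100.0
q = runoff_mm * 1e-3 * area_km2 * 1e6 / 86400.0  # mean daily discharge (m^3/s)

# Empirical high quantiles of the simulated daily-flow distribution.
for T in (10, 100, 1000):
    p = 1.0 - 1.0 / (T * 365.25)          # daily non-exceedance probability
    print(f"T = {T:>4} years: Q ~ {np.quantile(q, p):.1f} m^3/s")
```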
125

"Testes de hipótese e critério bayesiano de seleção de modelos para séries temporais com raiz unitária" / "Hypothesis testing and bayesian model selection for time series with a unit root"

Ricardo Gonçalves da Silva 23 June 2004 (has links)
A literatura referente a testes de hipótese em modelos auto-regressivos que apresentam uma possível raiz unitária é bastante vasta e engloba pesquisas oriundas de diversas áreas. Nesta dissertação, inicialmente, buscou-se realizar uma revisão dos principais resultados existentes, oriundos tanto da visão clássica quanto da bayesiana de inferência. No que concerne ao ferramental clássico, o papel do movimento browniano foi apresentado de forma detalhada, buscando-se enfatizar a sua aplicabilidade na dedução de estatísticas assintóticas para a realização dos testes de hipótese relativos à presença de uma raiz unitária. Com relação à inferência bayesiana, foi inicialmente conduzido um exame detalhado do status corrente da literatura. A seguir, foi realizado um estudo comparativo em que se testa a hipótese de raiz unitária com base na probabilidade da densidade a posteriori do parâmetro do modelo, considerando as seguintes densidades a priori: Flat, Jeffreys, Normal e Beta. A inferência foi realizada com base no algoritmo Metropolis-Hastings, usando a técnica de simulação de Monte Carlo por Cadeias de Markov (MCMC). Poder, tamanho e confiança dos testes apresentados foram computados com o uso de séries simuladas. Finalmente, foi proposto um critério bayesiano de seleção de modelos, utilizando as mesmas distribuições a priori do teste de hipótese. Ambos os procedimentos foram ilustrados com aplicações empíricas a séries temporais macroeconômicas.

/ Testing the unit root hypothesis in non-stationary autoregressive models has been a research topic disseminated across many academic areas. As a first step in approaching this issue, this dissertation includes an extensive review highlighting the main results provided by classical and Bayesian inference methods. Concerning the classical approach, the role of Brownian motion is discussed in detail, emphasizing its application in deriving asymptotic statistics for testing the existence of a unit root in a time series. For the Bayesian approach, a detailed review of the current literature is likewise provided. On the empirical side of the dissertation, we implement a comparative study testing the unit root hypothesis based on the posterior density of the model's parameter, taking into account the following prior densities: Flat, Jeffreys, Normal and Beta. The inference is based on the Metropolis-Hastings algorithm and the Markov Chain Monte Carlo (MCMC) technique. Simulated time series are used to compute the size, power and confidence of the proposed unit root tests. Finally, we propose a Bayesian model selection criterion based on the same prior distributions used in the hypothesis tests. Both procedures are empirically illustrated through applications to macroeconomic time series.
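
A minimal version of the Bayesian testing machinery described here is a random-walk Metropolis-Hastings sampler over the AR(1) coefficient, from which the posterior mass on the unit root region can be read off. The sketch below assumes a flat prior, a conditional Gaussian likelihood and a known error variance; the dissertation's comparison across Flat, Jeffreys, Normal and Beta priors is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated AR(1) series y_t = rho*y_{t-1} + e_t, near the unit root.
T, rho_true = 200, 0.95
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

def log_post(rho, y, sigma=1.0):
    """Log-posterior of the AR(1) coefficient under a flat prior
    (conditional likelihood, known error variance: simplifying assumptions)."""
    e = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(e**2) / sigma**2

# Random-walk Metropolis-Hastings over rho.
n_iter, step = 20_000, 0.02
rho, chain = 0.5, []
lp = log_post(rho, y)
for _ in range(n_iter):
    prop = rho + step * rng.standard_normal()
    lp_prop = log_post(prop, y)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
        rho, lp = prop, lp_prop
    chain.append(rho)

chain = np.array(chain[5000:])                 # drop burn-in
print("posterior mean of rho:", chain.mean())
print("posterior P(rho >= 1):", (chain >= 1.0).mean())
```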
126

Coherent gas flow patterns in heterogeneous permeability fields: from bench-scale to field-scale

Samani, Shirin 02 August 2012 (has links)
Gas injection into saturated porous media has high practical relevance. It is applied in groundwater remediation (air sparging), in CO2 sequestration in saline aquifers, and in enhanced oil recovery of petroleum reservoirs. This wide range of applications necessitates a comprehensive understanding of the gas flow patterns that may develop within the porous medium and requires modeling of multi-phase flow. There is an ongoing controversy in the literature as to whether continuum models are able to describe the complex flow patterns observed in heterogeneous porous media, especially the channelized stochastic flow pattern. According to Selker's stochastic hypothesis, a gas channel is caused by a Brownian-motion process during gas injection; the pore-scale heterogeneity therefore determines the shape of the single stochastic gas channels. On the other hand, there are many studies on air sparging that are based on continuum modeling. To date it is not clear under which conditions a continuum model can describe the essential features of the complex gas flow pattern. The aim of this study is to investigate gas flow patterns at bench scale and field scale using the continuum model TOUGH2. Based on a comprehensive data set of bench-scale and field-scale experiments, we conduct for the first time a systematic study and evaluate the prediction ability of the continuum model. A second focus of this study is the development of a "real-world" continuum model, since on all scales (pore scale, bench scale, field scale) heterogeneity is a key driver of the stochastic gas flow pattern. We therefore use different geostatistical programs to include stochastic conditioned and unconditioned parameter fields. Our main conclusion from the bench-scale experiments is that a continuum model calibrated by different independent measurements has excellent prediction ability for the average flow behavior (e.g. the gas volume-injection rate relation). Moreover, we investigate the impact of both weakly and strongly heterogeneous parameter fields (permeability and capillary pressure) on the gas flow pattern. The results show that a continuum model with weak stochastic heterogeneity cannot represent the essential features of the experimental gas flow pattern (e.g., the single stochastic gas channels). In contrast, with strong heterogeneity the continuum model can represent the channelized flow. This observation supports Stauffer's statement that a so-called subscale continuum model with strong heterogeneity is able to describe the channelized flow behavior. On the other hand, comparing the theoretical integral gas volumes with our experiments, we found that strong heterogeneity always yields too large gas volumes. At field scale, the 3D continuum model is used to design and optimize the direct gas injection technology. The field-scale study is based on the working hypothesis that the key parameters are the same as at bench scale. We therefore assume that grain size and injection rate determine whether coherent channelized flow or incoherent bubbly flow develops at field scale. The results of four different injection regimes were compared with the data of the corresponding field experiments. The main conclusion is that, because of the buoyancy-driven gas flow, the vertical permeability has a crucial impact. Hence, the vertical and horizontal permeability should be implemented independently in numerical modeling via conditioned parameter fields.
127

Gaussian Reaction Diffusion Master Equation: A Reaction Diffusion Master Equation With an Efficient Diffusion Model for Fast Exact Stochastic Simulations

Subic, Tina 13 September 2023 (has links)
Complex spatial structures in biology arise from random interactions of molecules. These molecular interactions can be studied using spatial stochastic models such as the Reaction Diffusion Master Equation (RDME), a mesoscopic model that subdivides the spatial domain into smaller, well-mixed grid cells in which the macroscopic diffusion-controlled reactions take place. While RDME has been widely used to study how fluctuations in molecule numbers affect spatial patterns, simulations are computationally expensive, and the model requires a lower bound on the grid cell size to avoid an apparent unphysical loss of bimolecular reactions. In this thesis, we propose the Gaussian Reaction Diffusion Master Equation (GRDME), a novel model in the RDME framework based on the discretization of the Laplace operator with the Particle Strength Exchange (PSE) method using a Gaussian kernel. We show that GRDME is computationally efficient compared to RDME. We further resolve the controversy regarding the loss of bimolecular reactions and argue that GRDME can flexibly bridge the diffusion-controlled and ballistic regimes in mesoscopic simulations involving multiple species. To efficiently simulate GRDME, we develop the Gaussian Next Subvolume Method (GNSM). GRDME simulated with GNSM has up to six times lower computational cost for a three-dimensional simulation, providing a significant computational advantage for modeling three-dimensional systems. The computational cost can be lowered further by increasing the so-called smoothing length of the Gaussian jumps. We develop a guideline to estimate the grid resolution below which RDME and GRDME exhibit a loss of bimolecular reactions. This loss of reactions has been considered unphysical by others. Here we show that it is consistent with the well-established theory on diffusion-controlled reaction rates by Collins and Kimball, provided that the bimolecular propensity rate is interpreted as the rate of the ballistic step rather than the macroscopic reaction rate. We show that the reaction radius is set by the grid resolution. Unlike RDME, GRDME enables us to explicitly model various sizes of the molecules. Using this insight, we explore the diffusion-limited regime of reaction dynamics and discover that diffusion-controlled systems resemble small, discrete systems. Others have shown that a reaction system can exhibit discreteness-induced state inversion, a phenomenon where the order of the concentrations differs when the system size is small. We show that the same reaction system also exhibits diffusion-controlled state inversion, where the order of the concentrations changes when diffusion is slow. In summary, we show that GRDME is a computationally efficient model that enables us to include information about molecular sizes in the model. A schematic sketch of the Gaussian-jump idea is given after the table of contents below.

Table of contents:
1 Modeling Mesoscopic Biology
1.1 RDME Models Mesoscopic Stochastic Spatial Phenomena
1.2 A New Diffusion Model Presents an Opportunity For A More Efficient RDME
1.3 Can A New Diffusion Model Provide Insights Into The Loss Of Reactions?
1.4 Overview
2 Preliminaries
2.1 Reaction Diffusion Master Equation
2.1.1 Chemical Master Equation
2.1.2 Diffusion-controlled Bimolecular Reaction Rate
2.1.3 RDME is an Extension of CME to Spatial Problems
2.2 Next Subvolume Method
2.2.1 First Reaction Method
2.2.2 NSM is an Efficient Spatial Stochastic Algorithm for RDME
2.3 Discretization of the Laplace Operator Using Particle Strength Exchange
2.4 Summary
3 Gaussian Reaction Diffusion Master Equation
3.1 Design Constraints for the Diffusion Model in the RDME Framework
3.2 Gaussian-jump-based Model for RDME
3.3 Summary
4 Gaussian Next Subvolume Method
4.1 Constructing the Neighborhood N
4.2 Finding the Diffusion Event
4.3 Comparing GNSM to NSM
4.4 Summary
5 Limits of Validity for (G)RDME with Macroscopic Bimolecular Propensity Rate
5.1 Previous Works
5.2 hmin Based on the Kuramoto Length of a Grid Cell
5.3 hmin of the Two Limiting Regimes
5.4 hmin of Bimolecular Reactions for the Three Cases of Dimensionality
5.5 hmin of GRDME in Comparison to hmin of RDME
5.6 Summary
6 Numerical Experiments To Verify Accuracy, Efficiency and Validity of GRDME
6.1 Accuracy of the Diffusion Model
6.2 Computational Cost
6.3 hmin and Reaction Loss for (G)RDME With Macroscopic Bimolecular Propensity Rate kCK
6.3.1 Homobimolecular Reaction With kCK at the Ballistic Limit
6.3.2 Homobimolecular Reaction With kCK at the Diffusional Limit
6.3.3 Heterobimolecular Reaction With kCK at the Ballistic Limit
6.4 Summary
7 (G)RDME as a Spatial Model of Collins-Kimball Diffusion-controlled Reaction Dynamics
7.1 Loss of Reactions in Diffusion-controlled Reaction Systems
7.2 The Loss of Reactions in (G)RDME Can Be Explained by Collins-Kimball Theory
7.3 Cell Width h Sets the Reaction Radius σ∗
7.4 Smoothing Length ε′ Sets the Size of the Molecules in the System
7.5 Heterobimolecular Reactions Can Only Be Modeled With GRDME
7.6 Zeroth Order Reactions Impose a Lower Limit on Diffusivity Dmin
7.6.1 Consistency of (G)RDME Could Be Improved by Redesigning Zeroth Order Reactions
7.7 Summary
8 Diffusion-Controlled State Inversion
8.1 Diffusion-controlled Systems Resemble Small Systems
8.2 Slow Diffusion Leads to an Inversion of Steady States
8.3 Summary
9 Conclusion and Outlook
9.1 Two Physical Interpretations of (G)RDME
9.2 Advantages of GRDME
9.3 Towards Numerically Consistent (G)RDME
9.4 Exploring Mesoscopic Biology With GRDME
Bibliography
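
The Gaussian-jump idea behind GRDME can be illustrated with a small stochastic simulation of pure diffusion on a 1-D grid, where a molecule may jump several cells at once with Gaussian-kernel-weighted propensities. The kernel, its normalization to match the diffusive second moment, and the direct-method loop below are assumptions of this sketch, not the thesis's PSE discretization or the GNSM data structures.

```python
import numpy as np

rng = np.random.default_rng(7)

def jump_weights(h, D, eps, n_neigh):
    """Per-molecule jump propensities to neighbors at distances k*h.

    Gaussian-kernel weights over a wider neighborhood, normalized so that
    sum(rate * distance^2) = 2*D, the 1-D diffusive second moment. The
    normalization rule is an assumption of this sketch.
    """
    k = np.arange(1, n_neigh + 1)
    w = np.exp(-(k * h) ** 2 / eps**2)
    w = np.concatenate([w[::-1], w])             # symmetric left/right jumps
    dist = np.concatenate([-k[::-1], k]) * h
    return w * (2.0 * D / np.sum(w * dist**2))

# Direct-method (SSA) simulation of pure diffusion on a periodic 1-D grid.
L, h, D = 64, 1.0, 1.0
x = np.zeros(L, dtype=int)
x[L // 2] = 1000                                  # all molecules in one cell
w = jump_weights(h, D, eps=2.0 * h, n_neigh=3)
offsets = np.concatenate([-np.arange(3, 0, -1), np.arange(1, 4)])

t, t_end = 0.0, 5.0
while t < t_end:
    prop = x[:, None] * w[None, :]                # propensity per (cell, jump)
    a0 = prop.sum()
    t += rng.exponential(1.0 / a0)                # time to next jump event
    flat = rng.choice(prop.size, p=(prop / a0).ravel())
    i, j = divmod(flat, len(offsets))
    x[i] -= 1
    x[(i + offsets[j]) % L] += 1                  # periodic boundary

var = x @ (np.arange(L) - L // 2) ** 2 / x.sum()
print("positional variance:", var, "expected ~ 2*D*t =", 2 * D * t_end)
```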
128

Optimization and uncertainty handling in air traffic management / Optimisation et gestion de l'incertitude du trafic aérien

Marceau Caron, Gaetan 22 September 2014 (has links)
Cette thèse traite de la gestion du trafic aérien et plus précisément de l'optimisation globale des plans de vol déposés par les compagnies aériennes sous contrainte du respect de la capacité de l'espace aérien. Une composante importante de ce travail concerne la gestion de l'incertitude entourant les trajectoires des aéronefs. Dans la première partie du travail, nous identifions les principales causes d'incertitude au niveau de la prédiction de trajectoires. Celle-ci est la composante essentielle à l'automatisation des systèmes de gestion du trafic aérien. Nous étudions donc le problème du réglage automatique et en ligne des paramètres de la prédiction de trajectoires au cours de la phase de montée avec l'algorithme d'optimisation CMA-ES. La principale conclusion, corroborée par d'autres travaux de la littérature, implique que la prédiction de trajectoires des centres de contrôle n'est pas suffisamment précise aujourd'hui pour supporter l'automatisation complète des tâches critiques. Ainsi, un système d'optimisation centralisé de la gestion du trafic aérien doit prendre en compte le facteur humain et l'incertitude de façon générale. Par conséquent, la seconde partie traite du développement des modèles et des algorithmes dans une perspective globale. De plus, nous décrivons un modèle stochastique qui capture les incertitudes sur les temps de passage sur des balises de survol pour chaque trajectoire. Ceci nous permet d'inférer l'incertitude engendrée sur l'occupation des secteurs de contrôle par les aéronefs à tout moment. Dans la troisième partie, nous formulons une variante du problème classique du Air Traffic Flow and Capacity Management au cours de la phase tactique. L'intérêt est de renforcer les échanges d'information entre le gestionnaire du réseau et les contrôleurs aériens. Nous définissons donc un problème d'optimisation dont l'objectif est de minimiser conjointement les coûts de retard et de congestion tout en respectant les contraintes de séquencement au cours des phases de décollage et d'atterrissage. Pour combattre le nombre de dimensions élevé de ce problème, nous choisissons un algorithme évolutionnaire multiobjectif avec une représentation indirecte du problème en se basant sur des ordonnanceurs gloutons. Enfin, nous étudions les performances et la robustesse de cette approche en utilisant le modèle stochastique défini précédemment. Ce travail est validé à l'aide de problèmes réels obtenus du Central Flow Management Unit en Europe, que l'on a aussi densifiés artificiellement.

/ In this thesis, we investigate the issue of reconciling aircraft operators' demand with airspace capacity while taking uncertainty in air traffic management into account. In the first part of the work, we identify the main causes of uncertainty in trajectory prediction (TP), the core component underlying automation in ATM systems. We study the problem of online parameter-tuning of the TP during the climb phase with the optimization algorithm CMA-ES. The main conclusion, corroborated by other works in the literature, is that ground TP is not sufficiently accurate nowadays to support fully automated safety-critical applications. Hence, with the current data-sharing limitations, any centralized optimization system in air traffic control should consider the human-in-the-loop factor, as well as other uncertainties. Consequently, in the second part of the thesis, we develop models and algorithms from a global network perspective and describe a generic uncertainty model that captures flight trajectory uncertainties and infers their impact on the occupancy count of air traffic control sectors. This standard indicator coarsely quantifies the complexity managed by air traffic controllers in terms of the number of flights. In the third part of the thesis, we formulate a variant of the Air Traffic Flow and Capacity Management problem in the tactical phase in order to bridge the gap between the network manager and air traffic controllers. The optimization problem consists in jointly minimizing the cost of delays and the cost of congestion while meeting sequencing constraints. In order to cope with the high dimensionality of the problem, evolutionary multi-objective optimization algorithms are used with an indirect representation and greedy schedulers to optimize flight plans. An additional uncertainty model is added on top of the network model, allowing us to study the performance and robustness of the proposed optimization algorithm when facing a noisy context. We validate our approach on real-world instances, also artificially densified, obtained from the Central Flow Management Unit in Europe.
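
The indirect representation mentioned above relies on greedy schedulers to turn a candidate solution into a feasible plan. The toy Python sketch below shows one such first-come-first-served scheduler trading sector overload for ground delay; the instance, the capacity and the cost definitions are illustrative assumptions, not CFMU data or the thesis's encoding.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy instance: 200 flights each request an entry slot into one congested
# sector; capacity is the maximum number of entries per slot.
n_flights, n_slots, capacity = 200, 60, 5
req = np.sort(rng.integers(0, n_slots, size=n_flights))

def greedy_schedule(req, capacity):
    """First-come-first-served greedy scheduler: shift each flight to the
    earliest slot at or after its request whose load is below capacity.
    A stand-in for the greedy schedulers used inside the indirect encoding."""
    load = np.zeros(req.max() + len(req) + 1, dtype=int)
    delay = np.empty_like(req)
    for f, t in enumerate(req):
        while load[t] >= capacity:
            t += 1                                  # push to the next slot
        load[t] += 1
        delay[f] = t - req[f]
    return delay, load

# Congestion before scheduling vs. delay after scheduling: the two
# objectives a multi-objective optimizer would trade off.
before = np.bincount(req, minlength=n_slots)
overload = np.clip(before - capacity, 0, None).sum()
delay, _ = greedy_schedule(req, capacity)
print("overloaded entries without regulation:", overload)
print("total delay (slots) after greedy regulation:", delay.sum(),
      "| max delay:", delay.max())
```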
129

Analyse der EU-Milchmarktpolitik bei Unsicherheit / Analysis of EU milk market policy under uncertainty

Grams, Michael 10 March 2004 (has links)
Agrarmärkte sind oft durch Unsicherheit gekennzeichnet - hervorgerufen vor allem durch Zufallsschwankungen in Angebot und Nachfrage. Das Ziel der vorliegenden Studie besteht darin, die Konsequenzen solcher Unsicherheiten für die Bewertung und Gestaltung der vor einer grundlegenden Neuausrichtung stehenden EU-Milchmarktpolitik zu untersuchen. Zunächst legen empirische Betrachtungen anhand der Zeitreihen verschiedener Marktgrößen nahe, dass Unsicherheit für die Akteure auf dem EU-Milchmarkt tatsächlich ein relevantes Phänomen ist. So sind etwa Preisschwankungen trotz der auf eine Marktstabilisierung ausgerichteten staatlichen Eingriffe zu beobachten. Anhaltspunkte konnten auch zu den Ursachen der Marktunsicherheiten gewonnen werden. Während Angebot und Nachfrage in der EU eine eher stabile Entwicklung aufweisen, neigen die internationalen Milchproduktmärkte zu Fluktuationen. Zur Analyse der Auswirkungen staatlicher Eingriffe auf dem Milchmarkt bei Unsicherheit dient ein stochastisches partielles Marktgleichgewichtsmodell. Das Modell bildet die spezifischen Strukturen des Milchmarkts mit dem Rohmilchangebot, der Milchverarbeitung und der Nachfrage nach den verschiedenen Milchprodukten ab. Zur Integration von Unsicherheit wird die Modellstruktur um stochastische Variablen in den Angebots- und Nachfragefunktionen erweitert. Mit Quotenregelung, Zöllen und Exporterstattungen lassen sich wesentliche Politikinstrumente untersuchen. Gegenstand der Betrachtungen sind mögliche Auswirkungen einer neuen multilateralen Handelsvereinbarung im Rahmen der Welthandelsorganisation (WTO) sowie die Effekte dreier für den Milchmarkt formulierter Politikszenarien. Diese Politikoptionen sind die im Juni 2003 in Luxemburg beschlossene Agrarreform, eine in der Fachöffentlichkeit oft diskutierte Quotenkürzung und eine vollständige Liberalisierung des Milchmarkts samt Quotenabschaffung. Die Ergebnisse zeigen, dass veränderte Preis- und Mengeneingriffe nicht nur zu Verschiebungen im Niveau von Zielgrößen, wie beispielsweise von Erzeugerpreisen und Erlösen in der EU und auf Drittlandsmärkten führen, sondern ebenso zu veränderten Streuungen. Zusätzliche Einsichten vermitteln die Ergebnisse darüber hinaus bezüglich der Unsicherheit in der Planung der öffentlichen Ausgaben am Milchmarkt und in der Vorhersage der Wohlfahrtseffekte von Politikänderungen. Gegenüber einer deterministischen Betrachtung wird eine Politikanalyse am Milchmarkt unter expliziter Berücksichtigung von Unsicherheit damit komplexer und die Beurteilung von Politikoptionen differenzierter.

/ Agricultural markets are often characterized by uncertainty, caused above all by random fluctuations in supply and demand. The aim of this study is to examine the consequences of such uncertainties for the evaluation and design of EU milk market policy, which is facing a fundamental reorientation. Empirical examination of time series for various market variables first suggests that uncertainty is indeed a relevant phenomenon for agents on the EU milk market: price fluctuations are observed despite state interventions aimed at market stabilization. Evidence was also obtained on the causes of market uncertainty: while supply and demand in the EU develop rather stably, international dairy product markets tend to fluctuate. A stochastic partial market equilibrium model serves to analyze the effects of state intervention in the milk market under uncertainty. The model represents the specific structures of the milk market, comprising raw milk supply, milk processing, and demand for the various dairy products. To integrate uncertainty, the model structure is extended by stochastic variables in the supply and demand functions. Quota regulation, tariffs, and export refunds allow the main policy instruments to be examined. The analysis considers the possible effects of a new multilateral trade agreement within the World Trade Organization (WTO) as well as the effects of three policy scenarios formulated for the milk market: the agricultural reform agreed in Luxembourg in June 2003, a quota cut often discussed among experts, and a complete liberalization of the milk market including abolition of the quota. The results show that changed price and quantity interventions lead not only to shifts in the level of target variables, such as producer prices and revenues in the EU and in third-country markets, but also to changed dispersions. The results also provide additional insights into the uncertainty in planning public expenditure on the milk market and in predicting the welfare effects of policy changes. Compared with a deterministic view, policy analysis of the milk market that explicitly accounts for uncertainty thus becomes more complex, and the assessment of policy options more differentiated.
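
The abstract's central point, that interventions shift not only the level but also the dispersion of market outcomes, can be reproduced with a toy stochastic partial equilibrium model: a linear market with random supply and demand shocks, solved draw by draw with and without a stylized intervention price. All coefficients in the Python sketch below are illustrative assumptions, not the calibrated EU milk market model.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000  # Monte Carlo draws of the market state

# Linear toy market: demand q_d = a - b*p + u, supply q_s = c + d*p + v,
# with additive stochastic shocks u, v.
a, b, c, d = 120.0, 2.0, 20.0, 3.0
u = rng.normal(0.0, 8.0, n)                 # demand shocks
v = rng.normal(0.0, 8.0, n)                 # supply shocks

p_free = (a - c + u - v) / (b + d)          # market-clearing price per draw

# Stylized intervention: the state buys up the surplus whenever the
# market price would fall below a floor p_min.
p_min = 18.0
p_supported = np.maximum(p_free, p_min)
surplus = (b + d) * np.clip(p_min - p_free, 0.0, None)  # publicly bought quantity
spending = p_min * surplus                               # budget outlay per draw

print(f"free market: mean price {p_free.mean():.2f}, std {p_free.std():.2f}")
print(f"price floor: mean price {p_supported.mean():.2f}, std {p_supported.std():.2f}")
print(f"public spending: mean {spending.mean():.1f}, std {spending.std():.1f}")
```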
130

Simulações Financeiras em GPU / Finance and Stochastic Simulation on GPU

Souza, Thársis Tuani Pinto 26 April 2013 (has links)
É muito comum modelar problemas em finanças com processos estocásticos, dada a incerteza de suas variáveis de análise. Além disso, problemas reais nesse domínio são, em geral, de grande custo computacional, o que sugere a utilização de plataformas de alto desempenho (HPC) em sua implementação. As novas gerações de arquitetura de hardware gráfico (GPU) possibilitam a programação de propósito geral enquanto mantêm alta banda de memória e grande poder computacional. Assim, esse tipo de arquitetura vem se mostrando como uma excelente alternativa em HPC. Com isso, a proposta principal desse trabalho é estudar o ferramental matemático e computacional necessário para modelagem estocástica em finanças com a utilização de GPUs como plataforma de aceleração. Para isso, apresentamos a GPU como uma plataforma de computação de propósito geral. Em seguida, analisamos uma variedade de geradores de números aleatórios, tanto em arquitetura sequencial quanto paralela. Além disso, apresentamos os conceitos fundamentais de Cálculo Estocástico e do método de Monte Carlo para simulação estocástica em finanças. Ao final, apresentamos dois estudos de caso de problemas em finanças: "Stops Ótimos" e "Cálculo de Risco de Mercado". No primeiro caso, resolvemos o problema de otimização de obtenção do ganho ótimo em uma estratégia de negociação de ações de "Stop Gain". A solução proposta é escalável e de paralelização inerente em GPU. Para o segundo caso, propomos um algoritmo paralelo para cálculo de risco de mercado, bem como técnicas para melhorar a solução obtida. Nos nossos experimentos, houve uma melhora de 4 vezes na qualidade da simulação estocástica e uma aceleração de mais de 50 vezes.

/ Given the uncertainty of their variables, it is common to model financial problems with stochastic processes. Furthermore, real problems in this area have a high computational cost. This suggests the use of High Performance Computing (HPC) to handle them. New generations of graphics hardware (GPU) enable general-purpose computing while maintaining high memory bandwidth and large computing power. Therefore, this type of architecture is an excellent alternative in HPC and computational finance. The main purpose of this work is to study the computational and mathematical tools needed for stochastic modeling in finance using GPUs. We present GPUs as a platform for general-purpose computing. We then analyze a variety of random number generators, both in sequential and parallel architectures, and introduce the fundamental mathematical tools of stochastic calculus and Monte Carlo simulation. With this background, we present two case studies in finance: "Optimal Trading Stops" and "Market Risk Management". In the first case, we solve the problem of obtaining the optimal gain in a "Stop Gain" stock trading strategy. The proposed solution is scalable and has inherent parallelism on GPU. For the second case, we propose a parallel algorithm to compute market risk, as well as techniques for improving the quality of the solutions. In our experiments, there was a 4-fold improvement in the quality of the stochastic simulation and an acceleration of over 50 times.
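
As a flavor of the first case study, the sketch below runs a vectorized Monte Carlo of a stop-gain rule on geometric Brownian motion paths. It uses plain NumPy; running the identical array code on a GPU (for example through a drop-in replacement such as CuPy) is an assumption of this sketch and not the thesis's CUDA implementation, and all market parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Monte Carlo of a "Stop Gain" rule on geometric Brownian motion paths.
s0, mu, sigma = 100.0, 0.08, 0.3
n_paths, n_steps, dt = 100_000, 252, 1.0 / 252

z = rng.standard_normal((n_paths, n_steps))
increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
paths = s0 * np.exp(np.cumsum(increments, axis=1))   # daily GBM prices

def stop_gain_profit(paths, barrier):
    """Sell at the first crossing of the barrier; otherwise hold to the end."""
    hit = (paths >= barrier).any(axis=1)
    final = np.where(hit, barrier, paths[:, -1])     # executed at the barrier
    return final - s0

for barrier in (105.0, 110.0, 120.0):
    profit = stop_gain_profit(paths, barrier)
    print(f"barrier {barrier:5.1f}: mean profit {profit.mean():6.2f}")
```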
