91 |
Ecoulement dans une pompe à vide turbomoléculaire : modélisation et analyse par voie numérique / Flow in a turbomolecular vacuum pump: numerical modelling and analysis. Wang, Ye, 22 November 2013
The thesis is devoted to the modelling and numerical analysis of the flow in a hybrid turbomolecular vacuum pump, combining a succession of rotor and stator stages with a Holweck stage. A 3D Test Particle Monte Carlo (TPMC) approach has been developed for simulating industrial pump configurations (complex blade geometries, management of rotor and stator stages), with attention paid to optimizing the computational cost. The numerical tool developed in the thesis has been validated on academic and industrial test cases, relying in particular on reference experimental results obtained on the test rig of the aVP company. The prediction improvement brought by the 3D TPMC approach over the design tools available at the start of the thesis has been clearly demonstrated for the free molecular flow regime. Some design recommendations have also been formulated using the developed solver. The potential of a Direct Simulation Monte Carlo (DSMC) approach, which takes the interactions between gas molecules into account, has also been established in 2D for the transition regime.
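The test-particle idea named in this abstract — molecules traced one at a time with no intermolecular collisions, re-emitted diffusely at each wall hit — can be sketched on a geometry far simpler than a pump stage. The 2D channel below, and every name and parameter in it, is an illustrative assumption, not the thesis's solver:

```python
import math
import random

def tpmc_transmission(length, height, n_particles=20000, seed=0):
    """Estimate the transmission probability of a 2D channel in free
    molecular flow with a Test Particle Monte Carlo walk: particles enter
    at x = 0 with a cosine-law direction, fly in straight lines (no
    intermolecular collisions), and are re-emitted diffusely at every
    wall hit until they leave through either end."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        x, y = 0.0, rng.uniform(0.0, height)
        # Cosine (Lambert) law about the inward normal (+x direction).
        theta = math.asin(2.0 * rng.random() - 1.0)
        dx, dy = math.cos(theta), math.sin(theta)
        while True:
            # Flight distance to the top or bottom wall.
            if dy > 0:
                t_wall = (height - y) / dy
            elif dy < 0:
                t_wall = -y / dy
            else:
                t_wall = math.inf
            x_hit = x + dx * t_wall
            if x_hit >= length:      # escapes through the outlet
                transmitted += 1
                break
            if x_hit <= 0.0:         # returns through the inlet
                break
            # Diffuse re-emission from the wall (cosine law about its normal).
            x, y = x_hit, (height if dy > 0 else 0.0)
            theta = math.asin(2.0 * rng.random() - 1.0)
            normal = -1.0 if y > 0 else 1.0
            dx, dy = math.sin(theta), normal * math.cos(theta)
    return transmitted / n_particles
```

For a short channel the transmission probability approaches 1 and it falls as the channel lengthens — the kind of per-stage conductance information a TPMC design code provides.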
|
92 |
Estudos teóricos de propriedades estruturais e eletrônicas da molécula emodina em solução / Theoretical studies of structural and electronic properties of the emodin molecule in solution. Antonio Rodrigues da Cunha, 14 October 2009
We study the structural and electronic properties of the emodin molecule (EM) in different conditions, from both an experimental and a theoretical point of view. First, we measured the UV-Vis absorption spectrum of EM in solution (water, chloroform and methanol), finding that the solvent causes only small shifts of the absorption bands. Second, we performed quantum-chemical calculations of EM, isolated and in the three solvents, using the B3LYP density functional, the 6-31G* Pople basis set and the polarizable continuum model (PCM). The main result is that EM is rigid except for the relative orientation of its three hydroxyl groups. Reorienting these hydroxyls can form up to two intramolecular hydrogen bonds (which stabilize the geometry) and consequently lowers the dipole moment from 5.5 to 1.7 D (which destabilizes the interaction with water). Third, we performed Monte Carlo and Molecular Dynamics simulations in solution, finding that the intramolecular hydrogen bonds are rarely broken by interactions with the solvent, which gives EM a hydrophobic character. Additionally, using Thermodynamic Perturbation Theory in the simulations, we computed the solvation free energy variation of EM for the water/chloroform and water/methanol partitions, obtaining -2.6 and -4.9 kcal/mol, respectively. This is in good agreement with the experimental result[3] of -5.6 kcal/mol for the water/octanol partition. Finally, we computed the UV-Vis absorption spectrum of EM, isolated and in the three solvents, using the INDO/CIS method with both a continuum (SCRF) and an explicit solvent model; the solvent effect is well described theoretically.
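The Thermodynamic Perturbation Theory step mentioned above reduces, in its simplest (Zwanzig) form, to an exponential average of energy differences sampled in the reference ensemble. A minimal sketch, assuming the per-configuration energy differences are already available from the simulation:

```python
import math

def fep_delta_f(delta_u_samples, kT=0.596):
    """Zwanzig free-energy perturbation: dF = -kT * ln< exp(-dU/kT) >_0,
    where the dU are energy differences between the two states, sampled
    in the reference ensemble (kT given in kcal/mol, roughly 300 K)."""
    n = len(delta_u_samples)
    avg = sum(math.exp(-du / kT) for du in delta_u_samples) / n
    return -kT * math.log(avg)
```

By Jensen's inequality the estimate never exceeds the plain average of the energy differences; in practice the identity is applied over many small perturbation windows rather than one large one.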
|
93 |
A two-level Probabilistic Risk Assessment of cascading failures leading to blackout in transmission power systems. Henneaux, Pierre, 19 September 2013
In our society, private and industrial activities increasingly rest on the implicit assumption that electricity is available at any time and at an affordable price. Even if operational data and feedback from the electrical sector are very positive, a residual risk of blackout or undesired load shedding in critical zones remains. The occurrence of such a situation is likely to entail major direct and indirect economic consequences, as observed in recent blackouts. Assessing this residual risk and identifying scenarios likely to lead to these feared situations is crucial to control and optimally reduce the risk of blackout or major system disturbance. The objective of this PhD thesis is to develop a methodology able to reveal scenarios leading to a blackout or a major system disturbance and to estimate their frequencies and consequences with satisfactory accuracy.

A blackout is a collapse of the electrical grid over a large area, leading to a power cutoff, and is due to a cascading failure. Such a cascade is composed of two phases: a slow cascade, starting with the occurrence of an initiating event and displaying characteristic times between successive events from minutes to hours, and a fast cascade, displaying characteristic times between successive events from milliseconds to tens of seconds. In cascading failures, there is a strong coupling between events: the loss of an element increases the stress on other elements and, hence, the probability of another failure. Probabilistic methods proposed previously do not correctly account for these dependencies between failures, mainly because the two very different phases are analyzed with the same model. Thus, there is a need for a conceptually satisfying probabilistic approach, able to take all kinds of dependencies into account by using different models for the slow and the fast cascade. This is the aim of this PhD thesis.

This work first focuses on level I, the analysis of the slow cascade progression up to the transition to the fast cascade. We propose to adapt dynamic reliability, an integrated Probabilistic Risk Analysis (PRA) approach initially developed for the nuclear sector, to transmission power systems. This methodology accounts for the double interaction between the power system dynamics and the state transitions of the grid elements. The thesis then introduces level II, which analyzes the fast cascade up to the transition towards an operational state with load shedding or a blackout. The proposed method is applied to two test systems. Results show that thermal effects can play an important role in cascading failures during the first phase. They also show that a level II analysis following level I is necessary to estimate the loss of supplied power a scenario can lead to: two types of level I scenarios with similar frequencies can induce very different risks (in terms of loss of supplied power) and blackout frequencies. A level III analysis, i.e. of the restoration process, is however needed to estimate the risk in terms of loss of supplied energy. The thesis also presents several perspectives for improving the approach in order to scale up applications to real grids. Doctorat en Sciences de l'ingénieur.
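The coupling the abstract emphasizes — each loss raising the stress, and hence the failure probability, of the surviving elements — can be illustrated with a deliberately crude load-redistribution toy, not the dynamic reliability method of the thesis; all parameters below are invented:

```python
import random

def cascade_size(capacities, base_load, rng):
    """One cascade: a random initial line outage sheds its load, which is
    redistributed uniformly over surviving lines; any line pushed above
    its capacity trips in turn, and the process repeats."""
    n = len(capacities)
    alive = set(range(n))
    loads = [base_load] * n
    first = rng.randrange(n)
    alive.discard(first)
    shed = loads[first]
    n_failed = 1
    while shed > 0 and alive:
        extra = shed / len(alive)
        shed = 0.0
        for i in list(alive):
            loads[i] += extra
            if loads[i] > capacities[i]:
                alive.discard(i)
                shed += loads[i]
                n_failed += 1
    return n_failed

def blackout_probability(capacities, base_load, trials=1000, seed=1):
    """Monte Carlo frequency of a total collapse under random initiating
    events, i.e. cascades that take down every line."""
    rng = random.Random(seed)
    n = len(capacities)
    hits = sum(cascade_size(capacities, base_load, rng) == n
               for _ in range(trials))
    return hits / trials
```

With generous capacity margins the cascade stops after the initiating event; with tight margins a single outage propagates to the whole system — the dependency structure that motivates separate slow- and fast-cascade models.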
|
94 |
Parallelism in Event-Based Computations with Applications in Biology. Bauer, Pavol, January 2017
Event-based models find frequent usage in fields such as computational physics and biology, as they may contain both continuous and discrete state variables and may incorporate both deterministic and stochastic state transitions. If the state transitions are stochastic, computer-generated random numbers are used to obtain the model solution. This type of event-based computation is also known as Monte Carlo simulation. In this thesis, I study different approaches to executing event-based computations on parallel computers, which ultimately allows users to retrieve their simulation results in a fraction of the original computation time. As system sizes grow continuously, or models have to be simulated at longer time scales, this is a necessary approach for current computational tasks. More specifically, I propose several ways to asynchronously simulate such models on parallel shared-memory computers, for example using parallel discrete-event simulation or task-based computing. The particular event-based models studied herein find applications in systems biology, computational epidemiology and computational neuroscience. In the presented studies, the proposed methods allow for high efficiency of the parallel simulation, typically scaling well with the number of computer cores used. As the scaling typically depends on individual model properties, the studies also investigate which quantities have the greatest impact on simulation performance. Finally, the presented studies include other insights into event-based computations, such as how to estimate parameter sensitivity in stochastic models and how to simulate models that include both deterministic and stochastic state transitions. / UPMARC
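As a concrete instance of an event-based stochastic computation from one of the named application areas, here is a minimal serial Gillespie (direct-method) simulation of an SIR epidemic; the parallelization strategies that are the thesis's actual subject are beyond this sketch:

```python
import random

def ssa_sir(beta, gamma, s0, i0, t_end, seed=0):
    """Gillespie's direct method for a stochastic SIR model: at each step
    draw an exponential waiting time from the total event rate, then pick
    infection or recovery with probability proportional to its rate."""
    rng = random.Random(seed)
    s, i, r, t = s0, i0, 0, 0.0
    n = s0 + i0
    while i > 0 and t < t_end:
        rate_inf = beta * s * i / n      # S + I -> 2I
        rate_rec = gamma * i             # I -> R
        total = rate_inf + rate_rec
        t += rng.expovariate(total)      # time to the next event
        if rng.random() * total < rate_inf:
            s, i = s - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return s, i, r
```

Each iteration is one discrete event; the difficulty the thesis addresses is executing huge numbers of such coupled events concurrently without breaking the statistics.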
|
95 |
Bayesian and Quasi-Monte Carlo spherical integration for global illumination / Intégration sphérique Bayésien et Quasi-Monte Carlo pour l'illumination globale. Marques, Ricardo, 22 October 2013
The spherical sampling of the incident radiance function entails a high computational cost, so the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited sample set. In this thesis, we show that existing Monte Carlo-based approaches can be improved by fully exploiting the available information (for example, statistical properties of the function to be integrated), which is then used for careful sample placement and weighting. The first contribution of this thesis is a strategy for producing high-quality Quasi-Monte Carlo (QMC) sampling patterns for spherical integration by resorting to spherical Fibonacci point sets. We show that these patterns, when applied to the rendering integral, are very simple to generate and consistently outperform existing approaches. Furthermore, we introduce theoretical tools for QMC spherical integration that, to our knowledge, have never been used in the graphics community, such as the spherical cap discrepancy and the spherical energy of a point set; these metrics allow assessing the quality of a spherical point set for a QMC estimate of a spherical integral. In the next part of the thesis, we propose a new theoretical framework for computing the Bayesian Monte Carlo (BMC) quadrature rule. Our contribution includes a novel method of quadrature computation based on spherical Gaussian functions that can be generalized to a broad class of BRDFs (any BRDF that can be approximated as a sum of one or more spherical Gaussians) and potentially to other rendering applications. We account for BRDF sharpness by using a new computation method for the prior mean function, and we propose a fast hyperparameter evaluation method that avoids the learning step. Our last contribution is an adaptive application of BMC to evaluating the illumination integral: a first BMC estimate is computed from an initial sample set and, if the quality criterion is not met, the result is directly injected as prior knowledge into a new estimate computed from another sample set. The new estimate refines the previous one, and the process is repeated until a satisfying result is achieved.
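The spherical Fibonacci point sets mentioned above are indeed simple to generate; one common construction (conventions for the latitude offset and longitude rotation vary between authors) is:

```python
import math

def spherical_fibonacci(n):
    """Spherical Fibonacci point set: n near-uniform directions on the
    unit sphere, built from a golden-ratio lattice. Latitudes are equal-
    area slices; longitudes advance by the golden angle."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0
    points = []
    for j in range(n):
        z = 1.0 - (2.0 * j + 1.0) / n        # uniform in (-1, 1)
        theta = 2.0 * math.pi * j / phi       # golden-angle longitude
        rho = math.sqrt(max(0.0, 1.0 - z * z))
        points.append((rho * math.cos(theta), rho * math.sin(theta), z))
    return points
```

Every point lies on the unit sphere and the set is balanced (its centroid sits at the origin in z by construction), which is what makes it attractive for QMC estimates of spherical integrals.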
|
96 |
Numerical Methods for Darcy Flow Problems with Rough and Uncertain Data. Hellman, Fredrik, January 2017
We address two computational challenges for numerical simulations of Darcy flow problems: rough and uncertain data. The rapidly varying and possibly high contrast permeability coefficient for the pressure equation in Darcy flow problems generally leads to irregular solutions, which in turn make standard solution techniques perform poorly. We study methods for numerical homogenization based on localized computations. Regarding the challenge of uncertain data, we consider the problem of forward propagation of uncertainty through a numerical model. More specifically, we consider methods for estimating the failure probability, or a point estimate of the cumulative distribution function (cdf) of a scalar output from the model. The issue of rough coefficients is discussed in Papers I–III by analyzing three aspects of the localized orthogonal decomposition (LOD) method. In Paper I, we define an interpolation operator that makes the localization error independent of the contrast of the coefficient. The conditions for its applicability are studied. In Paper II, we consider time-dependent coefficients and derive computable error indicators that are used to adaptively update the multiscale space. In Paper III, we derive a priori error bounds for the LOD method based on the Raviart–Thomas finite element. The topic of uncertain data is discussed in Papers IV–VI. The main contribution is the selective refinement algorithm, proposed in Paper IV for estimating quantiles, and further developed in Paper V for point evaluation of the cdf. Selective refinement makes use of a hierarchy of numerical approximations of the model and exploits computable error bounds for the random model output to reduce the cost complexity. It is applied in combination with Monte Carlo and multilevel Monte Carlo methods to reduce the overall cost. In Paper VI we quantify the gains from applying selective refinement to a two-phase Darcy flow problem.
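The multilevel Monte Carlo machinery referred to above combines estimates of level-to-level corrections; a generic skeleton (not the selective refinement algorithm itself, whose hierarchy and error bounds are problem-specific) might look like:

```python
import random

def mlmc(sampler, n_levels, n_samples, seed=0):
    """Plain multilevel Monte Carlo: the estimate is the sum over levels
    of the sample mean of correction terms Y_l = P_l - P_{l-1}, where
    sampler(level, rng) returns one draw of that correction (with the
    convention P_{-1} = 0). Coarse levels are cheap and absorb most of
    the variance; fine levels need only a few samples."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(n_levels):
        n = n_samples[level]
        est += sum(sampler(level, rng) for _ in range(n)) / n
    return est
```

Selective refinement slots into such a scheme by refusing to refine a sample's model hierarchy beyond what the failure-probability question actually requires, which is where the cost savings quantified in Paper VI come from.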
|
97 |
Ocenění společnosti König-Porzellan Sokolov, spol. s r.o. / Evaluation of König-Porzellan Sokolov, spol. s r.o. Hájek, Jan, January 2015
This diploma thesis, entitled Evaluation of König-Porzellan Sokolov, spol. s r.o., aims to determine the objectified value of the König-Porzellan limited company as of December 31, 2015. The valuation is carried out primarily for the needs of the management and owners of the company. The work is divided into four main parts. The first part consists of the introduction, setting out the objectives and main assumptions of the work, together with a theoretical and methodological part presenting the methodology of the individual instruments used. This is followed by a practical part, which presents the valued company and defines its position in the market through strategic analysis; the financial side of the company is covered by financial analysis. After compilation of the financial plan for the period 2015-2020, the actual valuation of the company is performed using the income-based discounted cash flow method in its FCFF variant. The resulting value of the company as of December 31, 2015 is 105 177 CZK. This value is further analyzed by Monte Carlo simulation.
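The valuation pipeline described — an FCFF discounted cash flow estimate followed by a Monte Carlo analysis around it — can be sketched generically; all figures and parameter names below are placeholders, not the thesis's data:

```python
import random

def dcf_value(fcff, wacc, growth):
    """Discounted cash flow (FCFF variant): the explicit-period free cash
    flows to the firm plus a Gordon-growth terminal value, all discounted
    at the WACC."""
    pv = sum(cf / (1 + wacc) ** (t + 1) for t, cf in enumerate(fcff))
    terminal = fcff[-1] * (1 + growth) / (wacc - growth)
    return pv + terminal / (1 + wacc) ** len(fcff)

def mc_dcf(base_fcff, wacc, growth, vol=0.1, trials=5000, seed=0):
    """Monte Carlo around the DCF: perturb each year's FCFF with a
    multiplicative Gaussian shock and collect the resulting value
    distribution, from which percentiles of the valuation can be read."""
    rng = random.Random(seed)
    values = []
    for _ in range(trials):
        shocked = [cf * (1.0 + rng.gauss(0.0, vol)) for cf in base_fcff]
        values.append(dcf_value(shocked, wacc, growth))
    return values
```

A constant cash flow with zero terminal growth reduces to the perpetuity formula (100 at a 10% WACC is worth 1000), which makes a convenient sanity check on the implementation.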
|
98 |
Quelques problèmes liés à l'erreur statistique en homogénéisation stochastique / Some problems related to statistical error in stochastic homogenization. Minvielle, William, 25 September 2015
In this thesis, we design numerical techniques to address the homogenization of equations whose coefficients exhibit small-scale random heterogeneities. Solving such elliptic partial differential equations directly is prohibitively expensive. Stochastic homogenization theory reduces the complexity of this task: the random, finely oscillating coefficients of the equation are replaced by constant homogenized coefficients. A difficulty remains, however: these homogenized coefficients are defined through an ergodic average inaccessible to practical computation. Only random approximations of these deterministic quantities are available, and the error committed in this approximation is significant. These issues are detailed in the introductory Chapter 1. In Chapter 2, we show how to reduce this error in a nonlinear case by using an antithetic variable estimator, which has a smaller variance than the standard Monte Carlo estimator. In Chapter 3, in a linear case, we show how to obtain an even better variance reduction with the control variate method; this approach is based on a surrogate model available in the case studied, and is more invasive and less generic. In Chapter 4, again in a linear case, we use a selection method to reduce the global error. Chapter 5 is devoted to the analysis of an inverse problem, wherein we seek parameters at the fine scale while only being provided with a handful of macroscopic quantities, such as the homogenized coefficients of the model.
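The antithetic variable technique of Chapter 2 can be shown on a one-dimensional toy integral; the principle — pairing each draw u with 1-u so that errors partially cancel for a monotone integrand — is the same, although the thesis applies it to corrector problems in stochastic homogenization rather than to this toy:

```python
import math
import random

def mc_plain(f, n, rng):
    """Standard Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def mc_antithetic(f, n, rng):
    """Antithetic variables on [0, 1]: each draw u is paired with 1 - u.
    For a monotone f the two evaluations are negatively correlated, so
    the averaged estimator has lower variance at the same sample cost."""
    pairs = n // 2
    total = 0.0
    for _ in range(pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / pairs
```

For f(u) = exp(u), whose exact integral is e - 1, the variance reduction is dramatic (roughly a factor of thirty), while the estimator remains unbiased.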
|
99 |
Contributions aux méthodes de Monte Carlo et leur application au filtrage statistique / Contributions to Monte Carlo methods and their application to statistical filtering. Lamberti, Roland, 22 November 2018
This thesis deals with integration calculus in the context of Bayesian inference and Bayesian statistical filtering; more precisely, we focus on Monte Carlo integration methods. We first revisit the importance sampling with resampling mechanism, then its extension to the dynamic setting known as particle filtering, and finally conclude our work with a multi-target tracking application. First, we consider the problem of estimating some moment of a probability density, known up to a constant, via Monte Carlo methodology. We begin by proposing a new estimator related to the normalized importance sampling estimator but using two proposal densities rather than a single one. We then revisit the importance sampling with resampling mechanism as a whole in order to produce Monte Carlo samples that are independent, contrary to the classical mechanism, which enables us to develop two new estimators. Second, we consider the dynamic aspect in the framework of sequential Bayesian inference and adapt our new independent resampling technique, previously developed in a static setting, to this framework. This yields particle filtering with independent resampling, which we reinterpret as a special case of auxiliary particle filtering. Because of the additional sampling cost required by this technique, we then propose a semi-independent resampling procedure that enables this cost to be controlled. Finally, we consider a multi-target tracking application within a sensor network using a new Bayesian model, and empirically analyze the results given by our new particle filtering algorithm as well as by a sequential Markov Chain Monte Carlo algorithm.
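The classical importance sampling with resampling step that the thesis takes as its starting point can be sketched as one bootstrap particle filter update (multinomial resampling, propagation, reweighting); the model functions below are placeholders for whatever state-space model is being filtered:

```python
import math
import random

def sir_step(particles, weights, transition, loglik, rng):
    """One bootstrap particle filtering step: multinomial resampling of
    the weighted particles (the classical, non-independent mechanism the
    thesis revisits), propagation through the transition kernel, then
    reweighting by the log-likelihood of the new observation."""
    n = len(particles)
    # Multinomial resampling from the cumulative weights.
    cum, c = [], 0.0
    for w in weights:
        c += w
        cum.append(c)
    resampled = []
    for _ in range(n):
        u = rng.random() * c
        k = 0
        while cum[k] < u:
            k += 1
        resampled.append(particles[k])
    # Propagate and reweight (log-space for numerical safety).
    new_particles = [transition(x, rng) for x in resampled]
    logw = [loglik(x) for x in new_particles]
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    s = sum(w)
    return new_particles, [wi / s for wi in w]
```

Because the resampled particles share ancestors, they are dependent; producing independent (or semi-independent) samples at this step is precisely the modification the thesis develops.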
|
100 |
Computing strategies for complex Bayesian models / Stratégies computationnelles pour des modèles Bayésiens complexes. Banterle, Marco, 21 July 2016
This thesis presents contributions to the Monte Carlo literature aimed toward the analysis of complex models in Bayesian statistics; the focus is both on model complexity and on computational difficulty. We first expand Delayed Acceptance, a computationally efficient variant of Metropolis-Hastings, to a multi-step procedure and enlarge its theoretical background, providing proper justification for the method, asymptotic variance bounds relative to its parent Metropolis-Hastings kernel, and optimal tuning for the scale of its proposal distribution. We then develop a flexible Bayesian method to analyse nonstationary environmental processes, called Dimension Expansion, which essentially considers the observed process as a projection from a higher dimension, where the assumption of stationarity can hold. The last chapter is dedicated to the investigation of conditional (in)dependence structures via a fully Bayesian formulation of the Gaussian Copula graphical model.
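The delayed acceptance mechanism extended in the first chapter screens each proposal against a cheap surrogate density before the expensive target is evaluated, while a second correction stage keeps the exact target invariant. A minimal single-surrogate sketch for a symmetric random-walk proposal (the thesis's multi-step generalization goes beyond this):

```python
import math
import random

def delayed_acceptance_mh(logpi_cheap, logpi_full, x0, n_iter, step, seed=0):
    """Two-stage (delayed acceptance) random-walk Metropolis. Stage 1
    accepts/rejects the proposal using only the cheap surrogate
    log-density; only survivors pay for the full log-density in stage 2,
    whose ratio corrects the surrogate so the exact target is preserved."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n_iter):
        y = x + step * rng.gauss(0.0, 1.0)
        # Stage 1: screen with the cheap approximation.
        log_a1 = logpi_cheap(y) - logpi_cheap(x)
        if math.log(rng.random()) < log_a1:
            # Stage 2: correct with the ratio of full to cheap densities.
            log_a2 = (logpi_full(y) - logpi_full(x)) - log_a1
            if math.log(rng.random()) < log_a2:
                x = y
        chain.append(x)
    return chain
```

When the surrogate is close to the target, most rejections happen cheaply at stage 1, which is where the computational saving comes from; the chain still targets the full density exactly.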
|