91. Mathematical and algorithmic analysis of modified Langevin dynamics / L'analyse mathématique et algorithmique de la dynamique de Langevin modifiée
Trstanova, Zofia (25 November 2016)
In statistical physics, the macroscopic information of interest for the systems under consideration can be inferred from averages over microscopic configurations distributed according to probability measures µ characterizing the thermodynamic state of the system. Due to the high dimensionality of the system (which is proportional to the number of particles), these configurations are most often sampled using trajectories of stochastic differential equations or Markov chains ergodic for the probability measure µ, which describes a system at constant temperature. One popular stochastic process allowing to sample this measure is the Langevin dynamics. In practice, the Langevin dynamics cannot be integrated analytically, so its solution is approximated with a numerical scheme. The numerical analysis of such discretization schemes is by now well understood when the kinetic energy is the standard quadratic one. One important limitation of the estimators of ergodic averages is their possibly large statistical error. Under certain assumptions on the potential and kinetic energies, a central limit theorem can be shown to hold. The asymptotic variance may nonetheless be large due to the metastability of the Langevin process, which occurs as soon as the probability measure µ is multimodal.

In this thesis, we consider the discretization of modified Langevin dynamics which improve the sampling of the Boltzmann–Gibbs distribution by introducing a more general kinetic energy function U instead of the standard quadratic one. We have two situations in mind: (a) Adaptively Restrained (AR) Langevin dynamics, where the kinetic energy vanishes for small momenta while agreeing with the standard kinetic energy for large momenta. The interest of this dynamics is that particles with low energy are restrained; the computational gain follows from the fact that interactions between restrained particles need not be updated. Due to the separability of the position and momentum marginals of the distribution, averages of observables which depend on the position variable are equal to those computed with the standard Langevin dynamics. The efficiency of this method lies in the trade-off between the computational gain and the asymptotic variance of ergodic averages, which may increase compared to the standard dynamics since the restrained particles induce a priori more correlations in time. Moreover, since the kinetic energy vanishes on some open set, the associated Langevin dynamics fails to be hypoelliptic. A first task of this thesis is therefore to prove that Langevin dynamics with such a modified kinetic energy is ergodic. The next step is a mathematical analysis of the asymptotic variance of the AR-Langevin dynamics. To complement the analysis of this method, we estimate the algorithmic speed-up of the cost of a single iteration as a function of the parameters of the dynamics. (b) We also consider Langevin dynamics with kinetic energies growing more than quadratically at infinity, in an attempt to reduce metastability. The extra freedom provided by the choice of the kinetic energy should be used to reduce the metastability of the dynamics; we explore this choice and demonstrate, on a simple low-dimensional example, an improved convergence of ergodic averages.

An issue with the situations we consider is the stability of the discretized schemes. In order to obtain a weakly consistent method of order 2 (which is no longer trivial for a general kinetic energy), we rely on recently developed Metropolis schemes.
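As a concrete illustration of the dynamics discussed above, the following sketch discretizes one-dimensional Langevin dynamics with a pluggable kinetic-energy gradient, including a toy, non-smooth "adaptively restrained" variant. This is a minimal Euler-Maruyama sketch under assumed illustrative choices (double-well potential, unit friction and temperature, the p_min threshold); the thesis relies on order-2 Metropolized schemes, not on this plain discretization.

```python
import numpy as np

def grad_V(q):
    # Gradient of an illustrative double-well potential V(q) = (q^2 - 1)^2.
    return 4.0 * q * (q**2 - 1.0)

def grad_U_ar(p, p_min=0.5):
    # Toy "adaptively restrained" kinetic-energy gradient: zero for small momenta,
    # standard (quadratic kinetic energy) otherwise. The actual AR kinetic energy
    # interpolates smoothly between the two regimes.
    return 0.0 if abs(p) < p_min else p

def langevin_step(q, p, grad_U, dt, gamma, beta, rng):
    # One Euler-Maruyama step of dq = grad_U(p) dt,
    # dp = (-grad_V(q) - gamma * grad_U(p)) dt + sqrt(2 gamma / beta) dW.
    q = q + grad_U(p) * dt
    p = (p - (grad_V(q) + gamma * grad_U(p)) * dt
           + np.sqrt(2.0 * gamma * dt / beta) * rng.standard_normal())
    return q, p

rng = np.random.default_rng(0)
q, p, acc = 0.0, 0.0, 0.0
n_steps = 200_000
for _ in range(n_steps):
    q, p = langevin_step(q, p, grad_U_ar, dt=1e-3, gamma=1.0, beta=1.0, rng=rng)
    acc += q**2
print("ergodic average of q^2:", acc / n_steps)
```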
92. Méthodes d'inférence statistique pour champs de Gibbs / Statistical inference methods for Gibbs random fields
Stoehr, Julien (29 October 2015)
Due to the Markovian dependence structure, the normalizing constant of Markov random fields is a highly multidimensional integral that cannot be computed with standard analytical or numerical methods. This is a central issue for parameter inference and model selection, as the computation of the likelihood is an integral part of the procedure. When the Markov random field is directly observed, we propose to estimate the posterior distribution of model parameters by replacing the likelihood with a composite likelihood, that is, a product of marginal or conditional distributions of the model that are cheap to compute. Our first contribution is to correct the posterior distribution resulting from this misspecified likelihood by modifying the curvature at the mode, in order to avoid underestimating the posterior variance.

In a second part, we suggest performing model selection between hidden Markov random fields with approximate Bayesian computation (ABC) algorithms, which compare the observed data with many Monte Carlo simulations through summary statistics. To make up for the absence of sufficient statistics with regard to this model choice, we introduce summary statistics based on the connected components of the dependency graph of each model in competition. We assess their efficiency using a novel conditional misclassification rate that evaluates their local power to discriminate between models. We show that this procedure substantially reduces the number of required simulations while improving the quality of the decision, and we use this local error rate to build an ABC procedure that adapts the summary statistics to the observed data.

In a last part, in order to circumvent the computation of the intractable likelihood in the Bayesian Information Criterion (BIC), we extend mean-field approaches by replacing the likelihood with a product of distributions of random vectors, namely blocks of the lattice. On that basis, we derive BLIC (Block Likelihood Information Criterion), which answers model choice questions of a wider scope than ABC, such as the joint selection of the dependency structure and the number of latent states. We study the performance of BLIC in terms of image segmentation.
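To make the ABC model-choice step concrete, here is a minimal rejection-ABC sketch. The simulators, summary statistics, and tolerance rule are illustrative placeholders (toy Gaussian models summarized by their sample mean, epsilon chosen as a distance quantile); the thesis's summary statistics, built from connected components of dependency graphs of hidden Markov random fields, are not reproduced here.

```python
import numpy as np

def abc_model_choice(s_obs, simulators, n_sim=20_000, quantile=0.01, seed=0):
    """Rejection-ABC model choice: draw a model index uniformly, simulate a
    summary-statistics vector from it, keep the draws closest to the observed
    statistics, and estimate posterior model probabilities by accepted frequencies."""
    rng = np.random.default_rng(seed)
    models = rng.integers(len(simulators), size=n_sim)
    stats = np.array([simulators[m](rng) for m in models])
    dist = np.linalg.norm(stats - s_obs, axis=1)
    keep = dist <= np.quantile(dist, quantile)   # tolerance epsilon as a quantile
    return np.bincount(models[keep], minlength=len(simulators)) / keep.sum()

# Toy competing models: two Gaussians summarized by their sample mean.
sim0 = lambda rng: np.array([rng.normal(0.0, 1.0, 50).mean()])
sim1 = lambda rng: np.array([rng.normal(0.5, 1.0, 50).mean()])
print(abc_model_choice(np.array([0.45]), [sim0, sim1]))
```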
93. Méthodes de Monte Carlo stratifiées pour la simulation des chaînes de Markov / Stratified Monte Carlo methods for the simulation of Markov chains
El maalouf, Joseph (16 December 2016)
Monte Carlo methods are probabilistic schemes that use computers to solve various scientific problems with random numbers. Their main disadvantage is slow convergence, and devising techniques that accelerate Monte Carlo simulations is a very active research area. This is the aim of the deterministic methods called quasi-Monte Carlo, where random points are replaced with special point sets with enhanced uniform distribution; these methods, however, do not provide confidence intervals permitting to estimate the error made. In the present work, we are interested in random methods that reduce the variance of a Monte Carlo estimator: stratification techniques do so by splitting the sampling domain into strata in which random samples are drawn. We examine the interest of these methods for approximating Markov chains, simulating physical diffusion, and numerically solving fragmentation equations.

In the first chapter, we present Monte Carlo methods in the framework of numerical quadrature and introduce the general setting of stratification strategies. We focus on two techniques: simple stratification (MCS) and Sudoku stratification (SS), where the points are placed on grids analogous to Sudoku grids. We also present quasi-Monte Carlo methods, whose quasi-random points share equidistribution features with stratified points.

The second chapter describes the use of stratified algorithms for the simulation of Markov chains. We consider time-homogeneous Markov chains with one-dimensional discrete or continuous state space. In the discrete case, we establish theoretical bounds for the variance of some estimators that indicate a variance reduction with respect to usual Monte Carlo: the variance of the MCS and SS schemes is of order 3/2, instead of 1 for the usual MC scheme. The results of numerical experiments, for one-dimensional or multi-dimensional, discrete or continuous state spaces, show a variance reduction linked to stratification, whose order we estimate using linear regression.

In the third chapter, we investigate the interest of the Sudoku stratification method for simulating diffusion in various non-stationary physical processes. This is done by discretizing time and performing a random walk at every time step. We propose algorithms for pure diffusion, for convection-diffusion, and for reaction-diffusion (Kolmogorov and Nagumo equations); we finally solve the Burgers equation numerically. In each case, numerical tests show an improvement of the variance due to the use of stratified Sudoku sampling.

The fourth chapter describes a stratified Monte Carlo scheme for simulating fragmentation phenomena. Through several numerical comparisons, we observe that stratified Sudoku sampling reduces the variance of Monte Carlo estimates. We finally test a method for solving an inverse problem: knowing the evolution of the mass distribution, it aims to recover the fragmentation kernel. In this case, quasi-random points are used for solving the direct problem.
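The variance reduction brought by stratification can be seen on a one-dimensional quadrature toy problem. The sketch below implements only simple stratification (one uniform point per stratum); the Sudoku stratification studied in the thesis, which constrains multi-dimensional point sets the way Sudoku digits are constrained on rows and columns, is not reproduced. The integrand and sample sizes are illustrative assumptions.

```python
import numpy as np

def crude_mc(f, n, rng):
    # Plain Monte Carlo estimate of the integral of f over [0, 1].
    return f(rng.random(n)).mean()

def stratified_mc(f, n, rng):
    # Simple stratification on [0, 1]: one uniform point in each of n strata,
    # u_i = (i + xi_i) / n with xi_i ~ U(0, 1).
    u = (np.arange(n) + rng.random(n)) / n
    return f(u).mean()

f = lambda x: np.exp(x)                      # exact integral over [0, 1] is e - 1
rng = np.random.default_rng(0)
crude = [crude_mc(f, 256, rng) for _ in range(2000)]
strat = [stratified_mc(f, 256, rng) for _ in range(2000)]
print("crude MC variance:     ", np.var(crude))
print("stratified MC variance:", np.var(strat))   # markedly smaller
```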
94. Reconstitution par filtrage non-linéaire de milieux turbulents et rétrodiffusants à l'aide de LIDARs Doppler et aérosols / Retrieval of the properties of turbulent and backscattering media using non-linear filtering techniques applied to the observation data from a combination of a Doppler and an aerosol LIDAR
Campi, Antoine (9 December 2015)
The aim of this thesis was to set up an algorithm for processing LIDAR (LIght Detection And Ranging) data, mainly from Doppler and aerosol LIDARs. The measurements of these devices are obtained in such a way that only an observation grid is in fact available, whereas one wishes to obtain information about the atmosphere, which is a continuous medium. We used multi-resolution analysis methods to place ourselves in a mathematical framework corresponding to the physical problem. We thus obtained a division of the state space of the process into two orthogonal subspaces, imposed by the structure of the observations. We were then able to extend the theory of nonlinear filtering to this framework, using filter kernels of Feynman-Kac type. We had to redo the calculations and formulate hypotheses consistent with the physical problem in order to obtain convergence results similar to those of the classical theory. We then returned to the frame suitable for particle filters and developed different algorithms based on the theoretical results obtained. Different applications of our method allowed us to highlight the fact that we could, to some extent, recover parameters at scales smaller than the resolution given by the grid. Finally, we set up a theoretical framework as well as an algorithm for jointly processing Doppler and aerosol LIDAR data, and we verified that our estimates were refined by the addition of passive tracers.
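The particle-filter algorithms mentioned above build on the generic bootstrap filter, sketched below on a toy scalar state-space model. All model choices here (random-walk dynamics, Gaussian observation noise, multinomial resampling) are illustrative assumptions; the thesis's Feynman-Kac filters on multi-resolution decompositions of the LIDAR observation grid involve considerably more structure.

```python
import numpy as np

def bootstrap_pf(y, n_particles, propagate, likelihood, init, seed=0):
    """Minimal bootstrap particle filter: propagate particles through the prior
    transition kernel, weight them by the observation likelihood, resample."""
    rng = np.random.default_rng(seed)
    x = init(n_particles, rng)
    means = []
    for y_t in y:
        x = propagate(x, rng)
        w = likelihood(y_t, x)
        w /= w.sum()
        means.append(np.dot(w, x))                        # filtering-mean estimate
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
    return np.array(means)

# Toy model: random-walk state observed in Gaussian noise.
rng = np.random.default_rng(1)
x_true = np.cumsum(rng.normal(0.0, 0.1, 100))
y = x_true + rng.normal(0.0, 0.5, 100)
means = bootstrap_pf(
    y, 500,
    propagate=lambda x, r: x + r.normal(0.0, 0.1, x.shape),
    likelihood=lambda y_t, x: np.exp(-0.5 * ((y_t - x) / 0.5) ** 2),
    init=lambda n, r: r.normal(0.0, 1.0, n),
)
print("RMSE:", np.sqrt(np.mean((means - x_true) ** 2)))
```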
95. Target Tracking in Environments of Rapidly Changing Clutter (January 2015)
Tracking targets in the presence of clutter is inevitable and presents many challenges. Additionally, rapid, drastic changes in clutter density between different environments or scenarios can make it even more difficult for tracking algorithms to adapt. A novel approach to target tracking in such dynamic clutter environments is proposed, using a particle filter (PF) integrated with Interacting Multiple Models (IMMs) to compensate and adapt to the transitions between different clutter densities. This model was implemented for the case of a monostatic sensor tracking a single target moving with constant velocity along a two-dimensional trajectory that crossed between regions of drastically different clutter densities. Multiple combinations of clutter-density transitions were considered, using up to three different clutter densities. It was shown that the integrated IMM-PF algorithm outperforms traditional approaches such as the standalone PF in terms of tracking results and performance. The minimal additional computational expense of including the IMM more than warrants the benefits of having it supplement and amplify the advantages of the PF.

Masters Thesis, Electrical Engineering, 2015
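The IMM machinery that lets the filter switch between clutter regimes reduces to two steps: mixing the model probabilities through a Markov transition matrix, then reweighting them by each model's measurement likelihood. The sketch below shows only this bookkeeping, with made-up numbers (three hypothetical clutter regimes, a sticky transition matrix); the per-model particle filters that would supply the likelihoods are omitted.

```python
import numpy as np

def imm_mix(mu, trans):
    """IMM interaction step. mu[i] is the current probability of model i and
    trans[i, j] = P(model j next | model i now). Returns the predicted model
    probabilities c and the mixing weights mix[i, j] = P(i now | j next)."""
    c = mu @ trans
    mix = (trans * mu[:, None]) / c
    return c, mix

def imm_update(c, model_likelihoods):
    """Reweight predicted model probabilities by per-model measurement likelihoods."""
    mu = c * model_likelihoods
    return mu / mu.sum()

# Three hypothetical clutter regimes (low/medium/high density), sticky transitions.
trans = np.array([[0.95, 0.04, 0.01],
                  [0.04, 0.92, 0.04],
                  [0.01, 0.04, 0.95]])
mu = np.full(3, 1.0 / 3.0)
c, mix = imm_mix(mu, trans)
mu = imm_update(c, np.array([0.2, 1.5, 0.1]))  # likelihoods from per-model filters
print("model probabilities:", mu)
```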
96. Ecoulement dans une pompe à vide turbomoléculaire : modélisation et analyse par voie numérique / Flow in a turbomolecular vacuum pump: numerical modelling and analysis
Wang, Ye (22 November 2013)
The thesis is devoted to the modeling and numerical analysis of the flow in a turbomolecular vacuum pump of hybrid type, combining a succession of rotor and stator stages with a Holweck stage. A 3D Test Particle Monte Carlo (TPMC) approach has been developed for simulating industrial pump configurations (complex blade geometries, management of rotor and stator stages), with attention paid to the optimization of the computational cost. The numerical tool developed in the thesis has been validated on academic and industrial test cases, relying in particular on reference experimental results obtained on the test rig of the aVP company. The prediction improvement brought by the 3D TPMC approach, with respect to the design tools available at the start of the thesis, has been clearly demonstrated for the free molecular flow regime. Some design recommendations have also been formulated using the developed solver. The potential of a Direct Simulation Monte Carlo (DSMC) approach, taking into account the interactions between gas molecules, has also been established in 2D for the transition regime.
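The Test Particle Monte Carlo idea (in the free molecular regime molecules do not interact, so independent particles can be traced one by one through the geometry, with diffuse cosine-law reflections at the walls) can be illustrated on the classical Clausing problem of transmission through a cylindrical tube. The geometry and parameters below are illustrative assumptions; the thesis treats far more complex rotor/stator blade geometries with moving walls.

```python
import numpy as np

def cosine_direction(normal, rng):
    """Sample a direction from the cosine (Lambert) law about a unit inward normal."""
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(normal, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(normal, t1)
    u, phi = rng.random(), 2.0 * np.pi * rng.random()
    sin_t, cos_t = np.sqrt(u), np.sqrt(1.0 - u)      # cos^2(theta) uniform
    return cos_t * normal + sin_t * (np.cos(phi) * t1 + np.sin(phi) * t2)

def transmission_probability(length, n_particles=20_000, seed=0):
    """TPMC estimate of the transmission probability of a unit-radius tube."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_particles):
        r, phi = np.sqrt(rng.random()), 2.0 * np.pi * rng.random()
        pos = np.array([r * np.cos(phi), r * np.sin(phi), 0.0])  # entrance disk
        d = cosine_direction(np.array([0.0, 0.0, 1.0]), rng)
        while True:
            # Distance s to the cylinder wall x^2 + y^2 = 1 along direction d.
            a = d[0]**2 + d[1]**2
            b = pos[0] * d[0] + pos[1] * d[1]
            c = pos[0]**2 + pos[1]**2 - 1.0
            s = (-b + np.sqrt(max(b * b - a * c, 0.0))) / a if a > 1e-12 else np.inf
            z_hit = pos[2] + s * d[2]
            if d[2] > 0 and z_hit >= length:     # escapes through the exit plane
                transmitted += 1
                break
            if d[2] < 0 and z_hit <= 0:          # returns through the entrance
                break
            pos = pos + s * d                    # diffuse reflection at the wall
            d = cosine_direction(np.array([-pos[0], -pos[1], 0.0]), rng)
    return transmitted / n_particles

print(transmission_probability(length=1.0))     # Clausing's value is about 0.67
```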
97. Estudos teóricos de propriedades estruturais e eletrônicas da molécula emodina em solução / Theoretical studies of structural and electronic properties of emodin molecule in solution
Antonio Rodrigues da Cunha (14 October 2009)
We study the structural and electronic properties of emodin (EM) in different solvents, from both the experimental and the theoretical point of view. We started by measuring the UV-Vis absorption spectrum of EM in solution (water, chloroform and methanol); the main result is that the solvent causes only small shifts of the bands. In the second part of this work, we performed quantum calculations of EM, isolated and in the three solvents, using the B3LYP density functional, the 6-31G* Pople basis set and the polarizable continuum model (PCM). These show that EM has a rigid conformation apart from the relative orientation of its 3 hydroxyls. A change in the hydroxyl orientations can form up to 2 intramolecular H-bonds (which stabilizes the geometry) and causes a decrease of the dipole moment from 5.5 to 1.7 D (which destabilizes the interaction with water). In the third part of this work, we performed Monte Carlo and Molecular Dynamics simulations in solution. The main result is that the intramolecular H-bonds are rarely broken by interactions with the solvent, even in aqueous solution, which gives EM a hydrophobic character. Additionally, using Thermodynamic Perturbation Theory in the simulations, we calculated the variation of the free energy of solvation of EM for the water/chloroform and water/methanol partitions and obtained -2.6 and -4.9 kcal/mol, respectively. This last result is in good agreement with the experimental result [3] of -5.6 kcal/mol for the water/octanol partition. Finally, we computed the UV-Vis absorption spectrum of EM, isolated and in the three solvents, treating the molecules with the continuum solvent model (SCRF) and with an explicit solvent model, using the INDO/CIS method. The solvent effect is well described by the theory.
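The free-energy calculation mentioned above rests on the Zwanzig (thermodynamic perturbation) identity, ΔF = -(1/β) ln⟨exp(-β ΔU)⟩₀, where the average is over configurations sampled from the reference state. Below is a minimal sketch of that estimator, checked on a toy pair of harmonic potentials with a known answer; the actual water/chloroform and water/methanol partition calculations require fully solvated simulations that are not reproduced here.

```python
import numpy as np

def fep_delta_F(dU, beta):
    """Zwanzig free-energy perturbation estimate:
    Delta F = -(1/beta) * ln < exp(-beta * (U_1 - U_0)) >_0,
    averaged over configurations sampled from state 0."""
    return -np.log(np.mean(np.exp(-beta * dU))) / beta

# Toy check on two harmonic wells, where Delta F is known analytically.
rng = np.random.default_rng(0)
beta, k0, k1 = 1.0, 1.0, 2.0
x = rng.normal(0.0, 1.0 / np.sqrt(beta * k0), 200_000)   # samples from state 0
dU = 0.5 * (k1 - k0) * x**2                              # U_1(x) - U_0(x)
print(fep_delta_F(dU, beta))       # exact: 0.5 * ln(k1/k0) / beta ~ 0.3466
```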
98. A two-level Probabilistic Risk Assessment of cascading failures leading to blackout in transmission power systems
Henneaux, Pierre (19 September 2013)
In our society, private and industrial activities increasingly rest on the implicit assumption that electricity is available at any time and at an affordable price. Even if operational data and feedback from the electrical sector are very positive, a residual risk of blackout or undesired load shedding in critical zones remains. The occurrence of such a situation is likely to entail major direct and indirect economic consequences, as observed in recent blackouts. Assessing this residual risk and identifying the scenarios likely to lead to these feared situations is crucial to control and optimally reduce the risk of blackout or major system disturbance. The objective of this PhD thesis is to develop a methodology able to reveal scenarios leading to a blackout or a major system disturbance and to estimate their frequencies and their consequences with satisfactory accuracy.

A blackout is a collapse of the electrical grid over a large area, leading to a power cutoff, and is due to a cascading failure. Such a cascade is composed of two phases: a slow cascade, starting with the occurrence of an initiating event and displaying characteristic times between successive events from minutes to hours, and a fast cascade, displaying characteristic times between successive events from milliseconds to tens of seconds. In cascading failures there is a strong coupling between events: the loss of an element increases the stress on other elements and, hence, the probability of another failure. Probabilistic methods proposed previously do not consider these dependencies between failures correctly, mainly because the two very different phases are analyzed with the same model. There is thus a need for a conceptually satisfying probabilistic approach, able to take all kinds of dependencies into account by using different models for the slow and the fast cascades. This is the aim of this PhD thesis.

This work first focuses on the level-I analysis of the slow cascade progression up to the transition to the fast cascade. We propose to adapt dynamic reliability, an integrated approach of Probabilistic Risk Analysis (PRA) developed initially for the nuclear sector, to the case of transmission power systems. This methodology accounts for the double interaction between the power system dynamics and the state transitions of the grid elements. This PhD thesis also introduces the development of the level-II analysis of the fast cascade, up to the transition towards an operational state with load shedding or a blackout. The proposed method is applied to two test systems. Results show that thermal effects can play an important role in cascading failures during the first phase. They also show that the level-II analysis after the level-I is necessary to estimate the loss of supplied power that a scenario can lead to: two types of level-I scenarios with a similar frequency can induce very different risks (in terms of loss of supplied power) and blackout frequencies. The level-III analysis, i.e. the analysis of the restoration process, is however needed to estimate the risk in terms of loss of supplied energy. This PhD thesis also presents several perspectives to improve the approach in order to scale up applications to real grids.

Doctorat en Sciences de l'ingénieur
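The key coupling described above, namely that each failure raises the stress on the surviving elements, is what makes independent-failure models inadequate. A deliberately crude Monte Carlo toy of such a load-redistribution cascade is sketched below; every modeling choice (uniform loads, uniform redistribution, the blackout threshold) is an assumption for illustration only and bears no relation to the thesis's dynamic-reliability models.

```python
import numpy as np

def cascade_blackout_frequency(n_lines=100, capacity=1.5, n_runs=10_000, seed=0):
    """Crude Monte Carlo of a load-redistribution cascade: after each line trip,
    the lost load is shared by the surviving lines, so failures raise the stress
    on the rest. A run counts as a blackout when fewer than half the lines survive."""
    rng = np.random.default_rng(seed)
    blackouts = 0
    for _ in range(n_runs):
        load = rng.uniform(0.5, 1.0, n_lines)       # initial per-line loads
        alive = np.ones(n_lines, dtype=bool)
        alive[rng.integers(n_lines)] = False        # random initiating event
        while True:
            shed = load[~alive].sum() / max(alive.sum(), 1)
            newly_failed = alive & (load + shed > capacity)
            if not newly_failed.any():
                break
            alive &= ~newly_failed
        if alive.sum() < n_lines // 2:
            blackouts += 1
    return blackouts / n_runs

print("estimated blackout frequency:", cascade_blackout_frequency())
```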
99. Parallelism in Event-Based Computations with Applications in Biology
Bauer, Pavol (January 2017)
Event-based models find frequent use in fields such as computational physics and biology, as they may contain both continuous and discrete state variables and may incorporate both deterministic and stochastic state transitions. If the state transitions are stochastic, computer-generated random numbers are used to obtain the model solution; this type of event-based computation is also known as Monte Carlo simulation. In this thesis, I study different approaches to executing event-based computations on parallel computers, ultimately allowing users to retrieve their simulation results in a fraction of the original computation time. As system sizes grow continuously, or as models have to be simulated at longer time scales, this is a necessary approach for current computational tasks. More specifically, I propose several ways to asynchronously simulate such models on parallel shared-memory computers, for example using parallel discrete-event simulation or task-based computing. The particular event-based models studied herein find applications in systems biology, computational epidemiology and computational neuroscience. In the presented studies, the proposed methods allow for high efficiency of the parallel simulation, typically scaling well with the number of computer cores used. As the scaling typically depends on individual model properties, the studies also investigate which quantities have the greatest impact on simulation performance. Finally, the presented studies include other insights into event-based computations, such as methods to estimate parameter sensitivity in stochastic models and to simulate models that include both deterministic and stochastic state transitions.

UPMARC
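The canonical event-based Monte Carlo computation in systems biology is Gillespie's stochastic simulation algorithm, where each waiting time is drawn from an exponential with the total event propensity and the next event is picked proportionally to its propensity. The sequential sketch below, for an assumed birth-death process with illustrative rates, shows the kind of kernel such parallelization work targets; the parallelization itself is not attempted here.

```python
import numpy as np

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    """Gillespie's direct method for a birth-death process: exponential waiting
    times from the total propensity, event chosen proportionally to propensity."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])   # event propensities
        a0 = a.sum()
        if a0 == 0.0:
            break
        t += rng.exponential(1.0 / a0)
        x += 1 if rng.random() < a[0] / a0 else -1
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0)
print("final state:", states[-1], "(stationary mean is k_birth/k_death = 100)")
```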
100. Bayesian and Quasi-Monte Carlo spherical integration for global illumination / Intégration sphérique Bayésien et Quasi-Monte Carlo pour l'illumination globale
Marques, Ricardo (22 October 2013)
The spherical sampling of the incident radiance function entails a high computational cost, so the illumination integral must be evaluated using a limited set of samples. Such a restriction raises the question of how to obtain the most accurate approximation possible with such a limited sample set. In this thesis, we show that existing Monte Carlo-based approaches can be improved by fully exploiting the information available (for instance, statistical properties of the function to be integrated), which is then used for careful sample placement and weighting.

The first contribution of this thesis is a strategy for producing high-quality Quasi-Monte Carlo (QMC) sampling patterns for spherical integration by resorting to spherical Fibonacci point sets. We show that these patterns, when applied to the rendering integral, are very simple to generate and consistently outperform existing approaches. Furthermore, we introduce theoretical aspects of QMC spherical integration that, to our knowledge, have never been used in the graphics community, such as the spherical cap discrepancy and the spherical energy of a point set. These metrics allow assessing the quality of a spherical point set for a QMC estimate of a spherical integral.

In the next part of the thesis, we propose a new theoretical framework for computing the Bayesian Monte Carlo (BMC) quadrature rule. Our contribution includes a novel method of quadrature computation based on spherical Gaussian functions that can be generalized to a broad class of BRDFs (any BRDF which can be approximated by a sum of one or more spherical Gaussian functions) and potentially to other rendering applications. We account for the BRDF sharpness by using a new computation method for the prior mean function. Lastly, we propose a fast hyperparameter evaluation method that avoids the learning step.

Our last contribution is the application of BMC with an adaptive approach for evaluating the illumination integral. The idea is to compute a first BMC estimate (using a first sample set) and, if the quality criterion is not met, to directly inject the result as prior knowledge into a new estimate (using another sample set). The new estimate refines the previous one using a new set of samples, and the process is repeated until a satisfying result is achieved.
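A spherical Fibonacci point set can be generated in a few lines. The construction below (points equally spaced in z, rotated by the golden angle in azimuth) is one common variant and may differ in detail from the point sets analyzed in the thesis; the clamped-cosine integrand, whose exact spherical integral is pi, serves only as a sanity check of the QMC estimate.

```python
import numpy as np

def spherical_fibonacci(n):
    """n points on the unit sphere: uniform in z (hence uniform in area),
    azimuth advanced by the golden angle between consecutive points."""
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    i = np.arange(n)
    z = 1.0 - (2.0 * i + 1.0) / n
    phi = 2.0 * np.pi * i / golden
    r = np.sqrt(1.0 - z**2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# QMC estimate of a spherical integral: average f over the point set, times 4*pi.
pts = spherical_fibonacci(1024)
f = lambda p: np.maximum(p[:, 2], 0.0)        # clamped cosine, exact integral = pi
print(4.0 * np.pi * np.mean(f(pts)))          # approximately 3.1416
```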