51

COMBINING TO SUCCEED: A NOVEL STRATEGY TO IMPROVE FORECASTS FROM EXPONENTIAL SMOOTHING MODELS

TIAGO MENDES DANTAS, 04 February 2019
This thesis is set in the context of time series forecasting. Although many approaches have been developed, simple methods such as exponential smoothing usually produce extremely competitive results, often surpassing approaches with a higher level of complexity. Seminal papers in time series forecasting showed that the combination of forecasts has the potential to dramatically reduce the forecast error. Specifically, the combination of forecasts generated by exponential smoothing has been explored in recent papers. Although this combination can be done in many ways, a recently proposed method called Bagged.BLD.MBB.ETS uses Bootstrap Aggregating (Bagging) together with exponential smoothing methods to generate forecasts, and was shown to produce more accurate monthly forecasts than all the analyzed benchmarks. The approach was considered the state of the art in the combined use of Bagging and exponential smoothing until the results obtained in this thesis. The thesis first validates Bagged.BLD.MBB.ETS on a data set relevant from the point of view of a real application, thus expanding the fields of application of the methodology. Subsequently, the main drivers of the error reduction are identified, and a new methodology combining Bagging, exponential smoothing and clustering is proposed to treat the covariance effect, which had not previously been identified in the literature on the method. The proposed approach was tested on different types of time series from the M3, CIF 2016 and M4 competitions, as well as on simulated data. The empirical results point to a substantial reduction in variance and forecast error.
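A minimal sketch of the kind of pipeline described above, assuming a monthly series: residuals of an initial exponential smoothing fit are resampled with a moving block bootstrap, an ETS model is refit to each bootstrapped series, and the forecasts are combined by the median. This is a simplified stand-in rather than the exact Bagged.BLD.MBB.ETS procedure (which also applies a Box-Cox transform and an STL decomposition before the block bootstrap); function names, block size and model settings are illustrative.

```python
# Simplified sketch of bagging exponential smoothing forecasts (not the exact
# Bagged.BLD.MBB.ETS pipeline, which also uses Box-Cox + STL before the bootstrap).
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def moving_block_bootstrap(residuals, block_size, rng):
    """Resample residuals by concatenating randomly chosen contiguous blocks."""
    n = len(residuals)
    n_blocks = int(np.ceil(n / block_size)) + 1
    starts = rng.integers(0, n - block_size + 1, size=n_blocks)
    blocks = [residuals[s:s + block_size] for s in starts]
    return np.concatenate(blocks)[:n]

def bagged_ets_forecast(y, horizon=12, n_boot=30, block_size=24, seed=0):
    """Fit ETS to bootstrapped versions of y and combine the forecasts by the median."""
    rng = np.random.default_rng(seed)
    base = ExponentialSmoothing(y, trend="add", seasonal="add",
                                seasonal_periods=12).fit()
    fitted = np.asarray(base.fittedvalues)
    resid = np.asarray(y) - fitted
    forecasts = []
    for _ in range(n_boot):
        y_star = fitted + moving_block_bootstrap(resid, block_size, rng)
        fit = ExponentialSmoothing(y_star, trend="add", seasonal="add",
                                   seasonal_periods=12).fit()
        forecasts.append(np.asarray(fit.forecast(horizon)))
    return np.median(np.vstack(forecasts), axis=0)  # the bagged (combined) forecast
```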
52

Study of stochastic optimization algorithms applied to machine learning problems

Jessica Katherine de Sousa Fernandes, 23 August 2017
In many machine learning applications we are interested in minimizing the expected value of a loss function. Stochastic optimization and sample size selection play an important role in solving this problem. This work presents the theoretical analysis of several algorithms from these two areas, including variants that incorporate variance reduction. In the practical examples, Stochastic Gradient Descent has an advantage in processing time and memory; however, when the accuracy of the obtained solution is considered together with the cost of minimization, the variance reduction methods find the best solutions. The Dynamic Sample Size Gradient and Line Search with variable sample size selection algorithms, despite obtaining better solutions than Stochastic Gradient Descent, have the disadvantage of a high computational cost.
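To make the comparison concrete, the sketch below contrasts plain stochastic gradient descent with SVRG, one standard variance-reduction scheme, on a least-squares problem. The data, step sizes and iteration counts are arbitrary choices for illustration, and SVRG here stands in for the broader family of variance-reduction methods analyzed in the work.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

def grad_i(x, i):              # gradient of the i-th squared-error term
    return (A[i] @ x - b[i]) * A[i]

def sgd(steps=20_000, lr=1e-3):
    x = np.zeros(d)
    for _ in range(steps):
        i = rng.integers(n)
        x -= lr * grad_i(x, i)
    return x

def svrg(epochs=20, inner=1000, lr=1e-2):
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n     # full gradient at the snapshot
        for _ in range(inner):
            i = rng.integers(n)
            # variance-reduced gradient estimate
            g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
            x -= lr * g
    return x

print("SGD error :", np.linalg.norm(sgd() - x_true))
print("SVRG error:", np.linalg.norm(svrg() - x_true))
```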
53

Risk Measurement, Management And Option Pricing Via A New Log-normal Sum Approximation Method

Zeytun, Serkan, 01 October 2012
In this thesis we mainly focused on the use of the Conditional Value-at-Risk (CVaR) in risk management and on the pricing of arithmetic average basket and Asian options in the Black-Scholes framework via a new log-normal sum approximation method. Firstly, we worked on the linearization procedure of the CVaR proposed by Rockafellar and Uryasev. We constructed an optimization problem with the objective of maximizing the expected return under a CVaR constraint. Due to the possible intermediate payments we assumed, we had to deal with a re-investment problem which turned the originally one-period problem into a multi-period one. For solving this multi-period problem, we used the linearization procedure of the CVaR and developed an iterative scheme based on linear optimization. Our numerical results obtained from the solution of this problem uncovered some surprising weaknesses of the use of Value-at-Risk (VaR) and CVaR as risk measures. In the next step, we extended the problem by including liabilities and quantile hedging to obtain a reasonable problem formulation for managing liquidity risk. In this formulation the objective of the investor was assumed to be the maximization of the probability that liquid assets minus liabilities exceed a threshold level, which is a type of quantile hedging. Since quantile hedging is not a perfect hedge, a non-zero probability of having a liability value higher than the asset value exists. To control the size of the probable shortfall we used a CVaR constraint. In the Black-Scholes framework, the solution of this problem requires dealing with the sum of log-normal distributions. It is known that the sum of log-normal distributions has no closed-form representation. We introduced a new, simple and highly efficient method to approximate the sum of log-normal distributions using shifted log-normal distributions. The method is based on a limiting approximation of the arithmetic mean by the geometric mean. Using our new approximation method we reduced the quantile hedging problem to a simpler optimization problem. Our new log-normal sum approximation method can also be used to price some options in the Black-Scholes model. With the help of our approximation method we derived closed-form approximation formulas for the prices of basket and Asian options based on arithmetic averages. Using our approximation methodology combined with the new analytical pricing formulas for arithmetic average options, we obtained a very efficient performance for Monte Carlo pricing in a control variate setting. Our numerical results show that our control variate method outperforms well-known methods from the literature in some cases.
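The CVaR linearization mentioned above rests on the Rockafellar-Uryasev representation CVaR_a(L) = min_t { t + E[(L - t)+] / (1 - a) }, which becomes a linear program once the expectation is replaced by a sample average. A small numerical check of the representation, on an arbitrary simulated loss distribution, might look as follows.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # simulated portfolio losses
alpha = 0.95

# Direct estimate: average of the losses beyond the empirical VaR
var_alpha = np.quantile(losses, alpha)
cvar_direct = losses[losses >= var_alpha].mean()

# Rockafellar-Uryasev: CVaR_alpha = min over t of  t + E[(L - t)_+] / (1 - alpha)
objective = lambda t: t + np.maximum(losses - t, 0.0).mean() / (1 - alpha)
res = minimize_scalar(objective, bounds=(0.0, losses.max()), method="bounded")

print("VaR                        :", var_alpha)
print("CVaR (tail average)        :", cvar_direct)
print("CVaR (Rockafellar-Uryasev) :", res.fun)  # the two estimates should agree closely
```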
54

Statistical Yield Analysis and Design for Nanometer VLSI

Jaffari, Javid, January 2010
Process variability is the pivotal factor impacting the design of high-yield integrated circuits and systems in deep sub-micron CMOS technologies. The electrical and physical properties of transistors and interconnects, the building blocks of integrated circuits, are prone to significant variations that directly impact the performance and power consumption of the fabricated devices and severely degrade the manufacturing yield. Moreover, the large number of transistors on a single chip adds even more challenges to the analysis of variation effects, a critical task in diagnosing the cause of failure and designing for yield. Reliable and efficient statistical analysis methodologies in the various design phases are key to predicting the yield before entering such an expensive fabrication process. In this thesis, the impacts of process variations are examined at three different levels: device, circuit, and micro-architecture. Variation models are provided for each level of abstraction, and new methodologies are proposed for efficient statistical analysis and design under variation. At the circuit level, the variability analysis of three crucial sub-blocks of today's systems-on-chip, namely digital circuits, memory cells, and analog blocks, is targeted. Accurate and efficient yield analysis of circuits is recognized as an extremely challenging task within the electronic design automation community. The large scale of digital circuits, the extremely high yield requirement for memory cells, and the time-consuming analog circuit simulation are major concerns in the development of any statistical analysis technique. In this thesis, several sampling-based methods are proposed for these three types of circuits to significantly improve the run-time of the traditional Monte Carlo method without compromising accuracy. The proposed sampling-based yield analysis methods benefit from a very appealing feature of the MC method, namely the capability to consider any complex circuit model. However, through the use and engineering of advanced variance reduction and sampling methods, ultra-fast yield estimation solutions are provided for different types of VLSI circuits. Such methods include control variates, importance sampling, correlation-controlled Latin Hypercube Sampling, and Quasi-Monte Carlo. At the device level, a methodology is proposed which introduces a variation-aware design perspective for designing MOS devices in aggressively scaled geometries. The method introduces a yield measure at the device level which targets the saturation and leakage currents of an MOS transistor. A statistical method is developed to optimize the advanced doping profiles and geometry features of a device for achieving a maximum device-level yield. Finally, a statistical thermal analysis framework is proposed. It accounts for the process and thermal variations simultaneously, at the micro-architectural level. The analyzer is developed based on the fact that process variations lead to uncertain leakage power sources, so that the thermal profile itself has a probabilistic nature. Therefore, by a co-process-thermal-leakage analysis, a more reliable full-chip statistical leakage power yield is calculated.
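As a toy illustration of the sampling-based yield estimation described above (not the thesis's correlation-controlled variant), the sketch below estimates the yield of a made-up delay model under Gaussian parameter variation, comparing plain Monte Carlo with Latin Hypercube Sampling from scipy; the delay model and the spec limit are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm, qmc

def delay(dvt, dleff):
    """Toy gate-delay model as a function of threshold-voltage and channel-length variation."""
    return 1.0 + 0.8 * dvt + 0.5 * dleff + 0.3 * dvt * dleff

SPEC = 1.8          # the sample passes if delay <= SPEC
N = 10_000

rng = np.random.default_rng(7)

# Plain Monte Carlo: independent standard-normal process parameters
mc = rng.standard_normal((N, 2))
yield_mc = np.mean(delay(mc[:, 0], mc[:, 1]) <= SPEC)

# Latin Hypercube Sampling: stratified uniforms mapped through the normal inverse CDF
lhs = qmc.LatinHypercube(d=2, seed=7).random(N)
lhs_samples = norm.ppf(lhs)
yield_lhs = np.mean(delay(lhs_samples[:, 0], lhs_samples[:, 1]) <= SPEC)

print(f"Yield (plain MC): {yield_mc:.4f}")
print(f"Yield (LHS)     : {yield_lhs:.4f}")   # typically a lower-variance estimate
```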
55

Contributions to Infinite Divisibility for Financial Modeling

Kawai, Reiichiro, 10 December 2004
Infinitely divisible distributions and processes have been the object of extensive research not only from the theoretical point of view but also for practical use, for example, in queueing theory or mathematical finance. In this thesis, we study some of their subclasses with a view towards financial modeling. As generalizations of stable distributions, we study the tempered stable distributions and introduce the new classes of layered stable distributions as well as the mixed stable distributions, along with the corresponding Lévy processes. As a further generalization of infinitely divisible processes, fractional tempered stable motions are defined. These theoretical studies are complemented by some more practical ones, such as the simulation of sample paths, parameter estimation, financial portfolio hedging, and solving stochastic differential equations.
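For a concrete feel of the path-simulation side mentioned above, the following sketch simulates one familiar infinitely divisible process, a variance gamma process built as Brownian motion time-changed by a gamma subordinator. It is only an illustrative example, not the tempered, layered or mixed stable constructions introduced in the thesis; the parameter values are arbitrary.

```python
import numpy as np

def variance_gamma_path(T=1.0, n_steps=1000, theta=-0.1, sigma=0.2, nu=0.3, seed=42):
    """Simulate a variance gamma path X(t) = theta*G(t) + sigma*W(G(t)),
    where G is a gamma subordinator with unit mean rate and variance rate nu."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # gamma time increments: mean dt, variance nu*dt
    dG = rng.gamma(shape=dt / nu, scale=nu, size=n_steps)
    dX = theta * dG + sigma * np.sqrt(dG) * rng.standard_normal(n_steps)
    t = np.linspace(0.0, T, n_steps + 1)
    return t, np.concatenate(([0.0], np.cumsum(dX)))

t, x = variance_gamma_path()
print(x[-1])   # terminal value of one simulated path
```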
57

Monte Carlo dose calculations in advanced radiotherapy

Bush, Karl Kenneth, 15 September 2009
The remarkable accuracy of Monte Carlo (MC) dose calculation algorithms has led to the widely accepted view that these methods should and will play a central role in the radiotherapy treatment verification and planning of the future. The advantages of using MC clinically are particularly evident for radiation fields passing through inhomogeneities, such as lung and air cavities, and for small fields, including those used in today's advanced intensity modulated radiotherapy techniques. Many investigators have reported significant dosimetric differences between MC and conventional dose calculations in such complex situations, and have demonstrated experimentally the unmatched ability of MC calculations in modeling charged particle disequilibrium. The advantages of using MC dose calculations do come at a cost. The nature of MC dose calculations requires a highly detailed, in-depth representation of the physical system (accelerator head geometry/composition, anatomical patient geometry/composition and particle interaction physics) to allow accurate modeling of external beam radiation therapy treatments. Performing such simulations is computationally demanding and has only recently become feasible within mainstream radiotherapy practices. In addition, the output of the accelerator head simulation can be highly sensitive to inaccuracies within a model that may not be known with sufficient detail. The goal of this dissertation is to both improve and advance the implementation of MC dose calculations in modern external beam radiotherapy. To begin, a novel method is proposed to fine-tune the output of an accelerator model to better represent the measured output. In this method an intensity distribution of the electron beam incident on the model is inferred by employing a simulated annealing algorithm. The method allows an investigation of arbitrary electron beam intensity distributions and is not restricted to the commonly assumed Gaussian intensity. In a second component of this dissertation, the design, implementation and evaluation of a technique for reducing the latent variance inherent in the recycling of phase space particle tracks in a simulation is presented. In this technique a random azimuthal rotation about the beam's central axis is applied to each recycled particle, achieving a significant reduction of the latent variance. In a third component, the dissertation presents the first MC modeling of Varian's new RapidArc delivery system and a comparison of dose calculations with the Eclipse treatment planning system. A total of four arc plans are compared, including an oropharynx patient phantom containing tissue inhomogeneities. Finally, in a step toward introducing MC dose calculation into the planning of treatments such as RapidArc, a technique is presented to feasibly generate and store a large set of MC calculated dose distributions. A novel 3-D dyadic multi-resolution (MR) decomposition algorithm is presented and the compressibility of the dose data using this algorithm is investigated. The presented MC beamlet generation method, in conjunction with the 3-D MR decomposition, represents a viable means to introduce MC dose calculation into the planning and optimization stages of advanced radiotherapy.
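The recycling technique described above can be sketched in a few lines: each reused phase-space particle is rotated by an independent random azimuthal angle about the beam's central (z) axis, which decorrelates recycled tracks and reduces the latent variance. The array layout below (transverse position x, y and transverse direction cosines u, v) is an assumption for illustration, not the format of any particular phase-space file.

```python
import numpy as np

def random_azimuthal_rotation(particles, rng):
    """Rotate recycled phase-space particles about the beam (z) axis.

    `particles` is an (N, 4) array with columns [x, y, u, v]: transverse
    position and transverse direction cosines.  Energy and the z-components
    are invariant under the rotation, so they are not stored here.
    """
    phi = rng.uniform(0.0, 2.0 * np.pi, size=len(particles))
    c, s = np.cos(phi), np.sin(phi)
    x, y, u, v = particles.T
    rotated = np.empty_like(particles)
    rotated[:, 0] = c * x - s * y      # rotated position
    rotated[:, 1] = s * x + c * y
    rotated[:, 2] = c * u - s * v      # rotated direction
    rotated[:, 3] = s * u + c * v
    return rotated

rng = np.random.default_rng(0)
recycled = rng.standard_normal((5, 4))   # stand-in for reused phase-space tracks
print(random_azimuthal_rotation(recycled, rng))
```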
58

Mathematical and algorithmic analysis of modified Langevin dynamics

Trstanova, Zofia, 25 November 2016
In statistical physics, the macroscopic information of interest for the systems under consideration can be inferred from averages over microscopic configurations distributed according to probability measures µ characterizing the thermodynamic state of the system. Due to the high dimensionality of the system (which is proportional to the number of particles), these configurations are most often sampled using trajectories of stochastic differential equations or Markov chains that are ergodic for the probability measure µ, which describes a system at constant temperature. One popular stochastic process allowing one to sample this measure is the Langevin dynamics. In practice, the Langevin dynamics cannot be integrated analytically, and its solution is therefore approximated with a numerical scheme. The numerical analysis of such discretization schemes is by now well understood when the kinetic energy is the standard quadratic one. One important limitation of the estimators of ergodic averages is their possibly large statistical error. Under certain assumptions on the potential and kinetic energies, it can be shown that a central limit theorem holds true. The asymptotic variance may be large due to the metastability of the Langevin process, which occurs as soon as the probability measure µ is multimodal. In this thesis, we consider the discretization of modified Langevin dynamics which improve the sampling of the Boltzmann–Gibbs distribution by introducing a more general kinetic energy function U instead of the standard quadratic one. We have in fact two situations in mind: (a) Adaptively Restrained (AR) Langevin dynamics, where the kinetic energy vanishes for small momenta, while it agrees with the standard kinetic energy for large momenta. The interest of this dynamics is that particles with low energy are restrained. The computational gain follows from the fact that the interactions between restrained particles need not be updated. Due to the separability of the position and momenta marginals of the distribution, the averages of observables which depend on the position variable are equal to the ones computed with the standard Langevin dynamics. The efficiency of this method lies in the trade-off between the computational gain and the asymptotic variance of ergodic averages, which may increase compared to the standard dynamics since there are a priori more correlations in time due to restrained particles. Moreover, since the kinetic energy vanishes on some open set, the associated Langevin dynamics fails to be hypoelliptic. In fact, a first task of this thesis is to prove that the Langevin dynamics with such a modified kinetic energy is ergodic. The next step is to present a mathematical analysis of the asymptotic variance of the AR-Langevin dynamics. In order to complement the analysis of this method, we estimate the algorithmic speed-up of the cost of a single iteration as a function of the parameters of the dynamics. (b) We also consider Langevin dynamics with kinetic energies growing more than quadratically at infinity, in an attempt to reduce metastability. The extra freedom provided by the choice of the kinetic energy should be used in order to reduce the metastability of the dynamics. In this thesis, we explore the choice of the kinetic energy and we demonstrate, on a simple low-dimensional example, an improved convergence of ergodic averages. An issue with the situations we consider is the stability of the discretized schemes. In order to obtain a weakly consistent method of order 2 (which is no longer trivial for a general kinetic energy), we rely on recently developed Metropolis schemes.
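To fix ideas, here is a minimal Euler-Maruyama discretization of Langevin dynamics with a general kinetic energy U(p) on a one-dimensional double well. The choice U(p) = p^4/4 + p^2/2 (growing faster than quadratically), the potential and the step size are illustrative, and the thesis itself relies on higher-order, Metropolis-corrected schemes rather than this naive integrator.

```python
import numpy as np

def modified_langevin(n_steps=200_000, dt=1e-3, gamma=1.0, beta=1.0, seed=0):
    """Euler-Maruyama integration of dq = U'(p) dt,
    dp = (-V'(q) - gamma * U'(p)) dt + sqrt(2 * gamma / beta) dW."""
    rng = np.random.default_rng(seed)
    V_prime = lambda q: 4.0 * q * (q**2 - 1.0)   # double-well potential V(q) = (q^2 - 1)^2
    U_prime = lambda p: p**3 + p                 # kinetic energy U(p) = p^4/4 + p^2/2
    q, p = 1.0, 0.0
    noise_scale = np.sqrt(2.0 * gamma * dt / beta)
    traj = np.empty(n_steps)
    for k in range(n_steps):
        q += U_prime(p) * dt
        p += (-V_prime(q) - gamma * U_prime(p)) * dt + noise_scale * rng.standard_normal()
        traj[k] = q
    return traj

traj = modified_langevin()
print("fraction of time in the right well:", np.mean(traj > 0))
```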
59

Pricing methods for Asian options

Mudzimbabwe, Walter, January 2010
We present various methods of pricing Asian options. The methods include Monte Carlo simulations designed using control and antithetic variates, numerical solution of a partial differential equation, and the use of lower bounds. The price of the Asian option is known to be a certain risk-neutral expectation. Using the Feynman-Kac theorem, we deduce that the problem of determining the expectation implies solving a linear parabolic partial differential equation. This partial differential equation does not admit explicit solutions due to the fact that the distribution of a sum of lognormal variables is not explicit. We then solve the partial differential equation numerically using finite difference and Monte Carlo methods. Our Monte Carlo approach is based on pseudo-random numbers and not on the deterministic sequences of numbers on which Quasi-Monte Carlo methods are designed. To make the Monte Carlo method more effective, two variance reduction techniques are discussed. Under the finite difference method, we consider the explicit and the Crank-Nicholson schemes. We demonstrate that the explicit method gives rise to extraneous solutions because the stability conditions are difficult to satisfy. On the other hand, the Crank-Nicholson method is unconditionally stable and provides correct solutions. Finally, we apply the pricing methods to a similar problem of determining the price of a European-style arithmetic basket option under the Black-Scholes framework. We find the optimal lower bound, calculate it numerically and compare this with those obtained by the Monte Carlo and Moment Matching methods. Our presentation here includes some of the most recent advances on Asian options, and we contribute in particular by adding detail to the proofs and explanations. We also contribute some novel numerical methods. Most significantly, we include an original contribution on the use of very sharp lower bounds towards pricing European basket options.
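As a concrete instance of the variance-reduction techniques listed above, the sketch below prices a discretely monitored arithmetic-average Asian call by Monte Carlo and uses the geometric-average Asian call, whose Black-Scholes price is known in closed form, as a control variate. Parameter values are arbitrary, and the thesis's finite-difference and lower-bound methods are not reproduced here.

```python
import numpy as np
from scipy.stats import norm

def geometric_asian_call(S0, K, r, sigma, T, n):
    """Closed-form Black-Scholes price of a discretely monitored geometric-average
    Asian call (used here as the control variate)."""
    m = np.log(S0) + (r - 0.5 * sigma**2) * T * (n + 1) / (2 * n)
    s2 = sigma**2 * T * (n + 1) * (2 * n + 1) / (6 * n**2)
    s = np.sqrt(s2)
    d1 = (m - np.log(K) + s2) / s
    d2 = d1 - s
    return np.exp(-r * T) * (np.exp(m + 0.5 * s2) * norm.cdf(d1) - K * norm.cdf(d2))

def arithmetic_asian_call_cv(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, n=50,
                             n_paths=50_000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    Z = rng.standard_normal((n_paths, n))
    log_paths = np.log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                       + sigma * np.sqrt(dt) * Z, axis=1)
    S = np.exp(log_paths)
    disc = np.exp(-r * T)
    payoff_arith = disc * np.maximum(S.mean(axis=1) - K, 0.0)
    payoff_geo = disc * np.maximum(np.exp(log_paths.mean(axis=1)) - K, 0.0)

    # Control variate: correct the arithmetic estimate by the simulation error
    # of the geometric option, whose exact price is known.
    C = np.cov(payoff_arith, payoff_geo)
    beta = C[0, 1] / C[1, 1]
    cv_estimate = payoff_arith.mean() - beta * (payoff_geo.mean()
                                                - geometric_asian_call(S0, K, r, sigma, T, n))
    return payoff_arith.mean(), cv_estimate

plain, controlled = arithmetic_asian_call_cv()
print("plain MC estimate       :", plain)
print("control-variate estimate:", controlled)  # far lower variance for the same paths
```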
60

Mathematical analysis of stochastic numerical methods in molecular dynamics

Alrachid, Houssam, 05 November 2015
In computational statistical physics, good sampling techniques are required to obtain macroscopic properties through averages over microscopic states. The main difficulty is that these microscopic states are typically clustered around typical configurations, and a complete sampling of the configurational space is thus in general very complex to achieve. Techniques have been proposed to efficiently sample the microscopic states in the canonical ensemble. An important example of a quantity of interest in such a case is the free energy. Free energy computation techniques are very important in molecular dynamics computations, in order to obtain a coarse-grained description of a high-dimensional complex physical system. The first part of this thesis explores an extension of the classical adaptive biasing force (ABF) technique, which is used to compute the free energy associated with the Boltzmann-Gibbs measure and a reaction coordinate function. The problem with this method is that the approximated gradient of the free energy, called the biasing force, is not a gradient in general. The contribution to this field, presented in Chapter 2, is to project the estimated biasing force onto a gradient using the Helmholtz decomposition. In practice, the new gradient force is obtained by solving a Poisson problem. Using entropy techniques, we study the long-time behavior of the nonlinear Fokker-Planck equation associated with the ABF process. We prove exponential convergence to equilibrium of the estimated free energy, with a precise rate of convergence in terms of the logarithmic Sobolev inequality constants of the canonical measure conditioned to fixed values of the reaction coordinate. The interest of this projected ABF method compared to the original ABF approach is that the variance of the new biasing force is smaller, which yields quicker convergence to equilibrium. Moreover, the method gives access to an estimate of the free energy at all times. The second part, presented in Chapter 3, studies the local and global existence, uniqueness and regularity of the mild, Lp and classical solutions of a nonlinear Fokker-Planck equation arising in an adaptive biasing force method for molecular dynamics calculations. The partial differential equation is a semilinear parabolic initial boundary value problem with a nonlocal nonlinearity and periodic boundary conditions on the torus of dimension n. The Fokker-Planck equation rules the evolution of the density of a stochastic process associated with the adaptive biasing force method. The nonlinear term is nonlocal and is used during the simulation in order to remove the metastable features of the dynamics; it is related to a conditional expectation defining the biasing force. The proof is based on semigroup techniques for local-in-time existence, together with an a priori estimate using a supersolution to show global existence.
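A deliberately minimal, one-dimensional caricature of the adaptive biasing force idea, assuming the reaction coordinate is the position itself: the local mean force is accumulated in bins along the coordinate and added back to the drift, progressively flattening the free-energy landscape. No Helmholtz projection appears here (it matters only for vector-valued estimates in higher dimension), and the potential, bin count and step size are arbitrary.

```python
import numpy as np

def abf_1d(n_steps=500_000, dt=1e-3, beta=3.0, n_bins=60, seed=0):
    """Toy adaptive biasing force run on a 1D double well, reaction coordinate xi(x) = x."""
    rng = np.random.default_rng(seed)
    V_prime = lambda x: 4.0 * x * (x**2 - 1.0)          # V(x) = (x^2 - 1)^2
    edges = np.linspace(-1.8, 1.8, n_bins + 1)
    force_sum = np.zeros(n_bins)                        # accumulated local mean force
    counts = np.zeros(n_bins)
    x = -1.0
    noise = np.sqrt(2.0 * dt / beta)
    for _ in range(n_steps):
        b = int(np.clip(np.searchsorted(edges, x) - 1, 0, n_bins - 1))
        force_sum[b] += V_prime(x)
        counts[b] += 1
        bias = force_sum[b] / counts[b]                 # current estimate of the mean force
        # overdamped Langevin step with the adaptive bias added to the drift
        x += (-V_prime(x) + bias) * dt + noise * rng.standard_normal()
    # free-energy estimate: integrate the learned mean force over the bins
    centers = 0.5 * (edges[:-1] + edges[1:])
    mean_force = np.where(counts > 0, force_sum / np.maximum(counts, 1), 0.0)
    free_energy = np.cumsum(mean_force) * (centers[1] - centers[0])
    return centers, free_energy - free_energy.min()

centers, F = abf_1d()
print(F)   # should roughly recover the double-well shape (x^2 - 1)^2 up to a constant
```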
