  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
451

Essays on forecasting and Bayesian model averaging

Eklund, Jana January 2006 (has links)
This thesis, which consists of four chapters, focuses on forecasting in a data-rich environment and on related computational issues. Chapter 1, “An embarrassment of riches: Forecasting using large panels”, explores the idea of combining forecasts from various indicator models using Bayesian model averaging (BMA) and compares the predictive performance of BMA with that of factor models. A combination of the two methods is also implemented, together with a benchmark, a simple autoregressive model. The forecast comparison is conducted in a pseudo out-of-sample framework for three distinct datasets measured at different frequencies: monthly and quarterly US datasets consisting of more than 140 predictors each, and a quarterly Swedish dataset with 77 possible predictors. The results show that none of the considered methods is uniformly superior and that no method consistently outperforms or underperforms a simple autoregressive process. Chapter 2, “Forecast combination using predictive measures”, proposes using the out-of-sample predictive likelihood as the basis for BMA and forecast combination. In addition to its intuitive appeal, the use of the predictive likelihood relaxes the need to specify proper priors for the parameters of each model. We show that forecast weights based on the predictive likelihood have desirable asymptotic properties, and that they have better small-sample properties than weights based on the traditional in-sample marginal likelihood when uninformative priors are used. Calculating the weights for the combined forecast requires setting aside a number of observations, a hold-out sample, and there is a trade-off involved in its size: a larger hold-out sample reduces the number of observations available for estimation, which might be detrimental, but it also makes the predictive measure more stable, which should improve performance.
When there is a true model in the model set, the predictive likelihood selects the true model asymptotically, but its convergence to the true model is slower than that of the marginal likelihood. It is this slower convergence, coupled with protection against overfitting, that explains why the predictive likelihood performs better when the true model is not in the model set. In Chapter 3, “Forecasting GDP with factor models and Bayesian forecast combination”, the predictive likelihood approach developed in the previous chapter is applied to forecasting GDP growth. The analysis is performed on quarterly economic datasets from six countries: Canada, Germany, Great Britain, Italy, Japan and the United States. The forecast combination technique, based on both in-sample and out-of-sample weights, is compared with forecasts based on factor models. The traditional point forecast analysis is extended by considering confidence intervals. The results indicate that forecast combinations based on predictive likelihood weights have better forecasting performance than both the factor models and forecast combinations based on the traditional in-sample weights. In contrast to common findings, the predictive likelihood also improves upon an autoregressive process at longer horizons. The largest improvement over the in-sample weights occurs for small hold-out sample sizes, which provides protection against structural breaks at the end of the sample period. The potential benefits of model averaging as a tool for extracting the relevant information from a large set of predictor variables come at the cost of considerable computational complexity. To avoid evaluating all the models, several approaches have been developed to simulate from the posterior distributions; Markov chain Monte Carlo methods can be used to draw directly from the model posterior distribution.
It is desirable that the chain moves well through the model space and draws from regions of high probability. Several computationally efficient sampling schemes, updating either one variable at a time or in blocks, have been proposed to speed up convergence. There is a trade-off between local moves, which use the current parameter values to propose plausible values for the model parameters, and more global transitions, which potentially allow faster exploration of the distribution of interest but may be much harder to implement efficiently. Local model moves enable fast updating schemes, in which the new, slightly modified model need not be completely re-estimated to obtain an updated solution. The fourth and final chapter, “Computational efficiency in Bayesian model and variable selection”, investigates the possibility of increasing computational efficiency by using alternative algorithms to obtain estimates of model parameters, while keeping track of their numerical accuracy. Various samplers that explore the model space are also presented and compared based on the output of the Markov chain. / Diss. Stockholm : Handelshögskolan, 2006
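The predictive-likelihood weighting scheme described above can be sketched in a few lines. The models, data and hold-out window below are illustrative assumptions, not the thesis' setup: two hypothetical forecasters are weighted in proportion to their Gaussian predictive likelihood over a hold-out sample, normalized on the log scale for numerical stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sketch (hypothetical models, not the thesis' datasets):
# combine two forecasts with weights proportional to each model's
# out-of-sample predictive likelihood over a hold-out window.
def pred_loglik(errors, sigma=1.0):
    """Gaussian log predictive likelihood of one-step forecast errors."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - errors**2 / (2 * sigma**2))

y = rng.normal(size=50)            # simulated series
hold_out = y[40:]                  # hold-out sample used only for the weights
forecast_a = hold_out.copy()       # model A: perfect forecasts (for contrast)
forecast_b = hold_out + 2.0        # model B: badly biased forecasts

ll_a = pred_loglik(hold_out - forecast_a)
ll_b = pred_loglik(hold_out - forecast_b)

# Normalize on the log scale before exponentiating, to avoid underflow.
m = max(ll_a, ll_b)
w_a = np.exp(ll_a - m) / (np.exp(ll_a - m) + np.exp(ll_b - m))
w_b = 1.0 - w_a

combined = w_a * forecast_a + w_b * forecast_b   # weighted combined forecast
print(round(float(w_a), 4))
```

With the biased model's errors costing 2 log-likelihood units per observation, essentially all weight lands on model A; in a real application the weights would be computed for every model in the (possibly huge) model set.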
452

Evolution and learning in games

Josephson, Jens January 2001 (has links)
This thesis contains four essays that analyze the behaviors that evolve when populations of boundedly rational individuals interact strategically over a long period of time. Individuals are boundedly rational in the sense that their strategy choices are determined by simple rules of adaptation -- learning rules. Convergence results for general finite games are first obtained in a homogeneous setting, where every population consists either of stochastic imitators, who almost always imitate the most successful strategy in a sample from their own population's past strategy choices, or of stochastic better repliers, who almost always play a strategy whose expected payoff is at least as high as that against a sample distribution of all populations' past play. Similar results are then obtained in a heterogeneous setting, where both of these learning rules are represented in each population. It is found that only strategies in certain sets are played in the limit, as time goes to infinity and the mutation rate tends to zero. Sufficient conditions for the selection of a Pareto-efficient such set are also provided. Finally, the analysis is extended to natural selection among learning rules. The question is whether there exists a learning rule that is evolutionarily stable, in the sense that a population employing it cannot be invaded by individuals using a different rule. Monte Carlo simulations for a large class of learning rules and four different games indicate that, in almost all cases, only a learning rule that takes full account of hypothetical payoffs to strategies that are not played is evolutionarily stable. / Diss. Stockholm : Handelshögsk., 2001
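A toy simulation conveys the flavor of the imitation dynamics described above. The game, population size and pairing scheme are illustrative assumptions, not the thesis' exact model: players are randomly paired in a 2x2 coordination game and then imitate the strategy with the highest realized average payoff, with an optional mutation (tremble) rate.

```python
import random

random.seed(1)

# Hedged sketch of "imitate the most successful" dynamics in a 2x2
# coordination game: payoff 2 if both play A, 1 if both play B, 0 otherwise.
PAYOFF = {("A", "A"): 2, ("B", "B"): 1, ("A", "B"): 0, ("B", "A"): 0}

def step(pop, mutation=0.0):
    """One period: random pairing, then every player imitates the strategy
    with the highest realized average payoff (trembling with prob. mutation)."""
    random.shuffle(pop)
    payoffs = {}
    for i in range(0, len(pop), 2):
        a, b = pop[i], pop[i + 1]
        payoffs.setdefault(a, []).append(PAYOFF[(a, b)])
        payoffs.setdefault(b, []).append(PAYOFF[(b, a)])
    avg = {s: sum(v) / len(v) for s, v in payoffs.items()}
    best = max(avg, key=avg.get)
    return [random.choice("AB") if random.random() < mutation else best
            for _ in pop]

pop = ["A"] * 10 + ["B"] * 10
for _ in range(20):
    pop = step(pop)   # mutation rate 0: the unperturbed dynamics
print(set(pop))       # the population absorbs into a single convention
```

With the mutation rate at zero the population locks into one convention; the limit results cited in the abstract concern which conventions survive as a small mutation rate tends to zero.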
453

Advanced Monte Carlo Methods with Applications in Finance

Joshua Chi Chun Chan Unknown Date (has links)
The main objective of this thesis is to develop novel Monte Carlo techniques with emphasis on various applications in finance and economics, particularly in the fields of risk management and asset returns modeling. New stochastic algorithms are developed for rare-event probability estimation, combinatorial optimization, parameter estimation and model selection. The contributions of this thesis are fourfold. Firstly, we study an NP-hard combinatorial optimization problem, the Winner Determination Problem (WDP) in combinatorial auctions, where buyers can bid on bundles of items rather than bidding on them sequentially. We present two randomized algorithms, namely, the cross-entropy (CE) method and the ADAptive Multilevel splitting (ADAM) algorithm, to solve two versions of the WDP. Although an efficient deterministic algorithm has been developed for one version of the WDP, it is not applicable to the other version considered. In addition, the proposed algorithms are straightforward and easy to program, and do not require specialized software. Secondly, two major applications of conditional Monte Carlo for estimating rare-event probabilities are presented: a complex bridge network reliability model and several generalizations of the widely popular normal copula model used in managing portfolio credit risk. We show how certain efficient conditional Monte Carlo estimators developed for simple settings can be extended to handle complex models involving hundreds or thousands of random variables. In particular, by utilizing an asymptotic description of how the rare event occurs, we derive algorithms that are not only easy to implement, but also compare favorably to existing estimators. Thirdly, we make a contribution on the methodological front by proposing an improvement of the standard CE method for estimation.
The improved method is relevant, as recent research has shown that in some high-dimensional settings the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. In contrast, the performance of the improved variant does not deteriorate as the dimension of the problem increases. Its utility is demonstrated via a high-dimensional estimation problem in risk management, namely, a recently proposed t-copula model for credit risk. We show that even in this high-dimensional model that involves hundreds of random variables, the proposed method performs remarkably well, and compares favorably to existing importance sampling estimators. Furthermore, the improved CE algorithm is then applied to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. We present two empirical examples to demonstrate the proposed approach. The first example involves women's labor market participation, and we compare three different binary response models in order to find the one that best fits the data. The second example utilizes two vector autoregressive (VAR) models to analyze the interdependence and structural stability of four U.S. macroeconomic time series: GDP growth, unemployment rate, interest rate, and inflation. Lastly, we contribute to the growing literature on asset returns modeling by proposing several novel models that explicitly take into account various recent findings in the empirical finance literature. Specifically, two classes of stylized facts are particularly important. The first set is concerned with the marginal distributions of asset returns. One prominent feature of asset returns is that the tails of their distributions are heavier than those of the normal---large returns (in absolute value) occur much more frequently than one might expect from a normally distributed random variable.
Another robust empirical feature of asset returns is skewness, where the tails of the distributions are not symmetric---large losses are observed more frequently than large gains. The second set of stylized facts is concerned with the dependence structure among asset returns. Recent empirical studies have cast doubt on the adequacy of the linear dependence structure implied by the multivariate normal specification. For example, data from various asset markets, including equities, currencies and commodities markets, indicate the presence of extreme co-movement in asset returns, and this observation is again incompatible with the usual assumption that asset returns are jointly normally distributed. In light of the aforementioned empirical findings, we consider various novel models that generalize the usual normal specification. We develop efficient Markov chain Monte Carlo (MCMC) algorithms to estimate the proposed models. Moreover, since the number of plausible models is large, we perform a formal Bayesian model comparison to determine the model that best fits the data. In this way, we can directly compare the two approaches to modeling asset returns: copula models and the joint modeling of returns.
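The cross-entropy idea for rare-event estimation that runs through this abstract can be illustrated on a toy problem. The target event, sample sizes and elite fraction below are illustrative assumptions: to estimate p = P(X ≥ γ) for X ~ N(0,1), the CE updates move the importance-sampling mean toward the rare-event region through a sequence of intermediate levels, after which a standard likelihood-ratio estimate is computed.

```python
import math
import random

random.seed(42)

# Hedged sketch of the multilevel cross-entropy (CE) method for rare
# events: estimate p = P(X >= gamma), X ~ N(0,1), by importance sampling
# from N(mu, 1) with mu chosen by CE updates (parameters are illustrative).
gamma, n = 4.0, 20000

mu, rho = 0.0, 0.1
while True:
    xs = sorted(random.gauss(mu, 1.0) for _ in range(2000))
    level = xs[int((1 - rho) * 2000)]      # (1 - rho)-quantile of the sample
    if level >= gamma:
        break                              # the rare level has been reached
    elite = [x for x in xs if x >= level]
    mu = sum(elite) / len(elite)           # CE update of the normal mean

# Final importance-sampling estimate with likelihood ratio N(0,1)/N(mu,1).
est = 0.0
for _ in range(n):
    x = random.gauss(mu, 1.0)
    if x >= gamma:
        est += math.exp(-0.5 * x * x + 0.5 * (x - mu) ** 2)
est /= n
print(est)   # the true value is about 3.17e-5
```

Crude Monte Carlo would need on the order of 10^7 samples to see this event at all; the CE-tilted sampler estimates it accurately with a few thousand. The likelihood-ratio degeneracy the abstract mentions is what breaks this scheme in high dimensions, motivating the improved variant.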
454

A Switching Black-Scholes Model and Option Pricing

Webb, Melanie Ann January 2003 (has links)
Derivative pricing, and in particular the pricing of options, is an important area of current research in financial mathematics. Experts debate on the best method of pricing and the most appropriate model of a price process to use. In this thesis, a "Switching Black-Scholes" model of a price process is proposed. This model is based on the standard geometric Brownian motion (or Black-Scholes) model of a price process. However, the drift and volatility parameters are permitted to vary between a finite number of possible values at known times, according to the state of a hidden Markov chain. This type of model has been found to replicate the Black-Scholes implied volatility smiles observed in the market, and produce option prices which are closer to market values than those obtained from the traditional Black-Scholes formula. As the Markov chain incorporates a second source of uncertainty into the Black-Scholes model, the Switching Black-Scholes market is incomplete, and no unique option pricing methodology exists. In this thesis, we apply the methods of mean-variance hedging, Esscher transforms and minimum entropy in order to price options on assets which evolve according to the Switching Black-Scholes model. C programs to compute these prices are given, and some particular numerical examples are examined. Finally, filtering techniques and reference probability methods are applied to find estimates of the model parameters and state of the hidden Markov chain. / Thesis (Ph.D.)--Applied Mathematics, 2003.
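The price process itself is easy to sketch. The parameter values below are illustrative assumptions, and plain Monte Carlo averaging under the risk-neutral drift is used here only for illustration (the thesis prices under mean-variance hedging, Esscher transform and minimum-entropy measures, which this sketch does not implement): volatility switches between two known values according to a two-state Markov chain observed at fixed times.

```python
import math
import random

random.seed(7)

# Hedged sketch of a switching geometric Brownian motion (parameters are
# illustrative): a European call is priced by Monte Carlo with volatility
# driven by a two-state Markov chain that switches at known times.
S0, K, r, T, steps, n_paths = 100.0, 100.0, 0.05, 1.0, 12, 20000
sigma = {0: 0.15, 1: 0.35}            # regime volatilities (assumed)
P = [[0.9, 0.1], [0.2, 0.8]]          # regime transition matrix (assumed)
dt = T / steps

payoffs = 0.0
for _ in range(n_paths):
    s, regime = S0, 0
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        v = sigma[regime]
        # Risk-neutral GBM step within the current regime.
        s *= math.exp((r - 0.5 * v * v) * dt + v * math.sqrt(dt) * z)
        # Markov switch at the known times between steps.
        regime = 0 if random.random() < P[regime][0] else 1
    payoffs += max(s - K, 0.0)

price = math.exp(-r * T) * payoffs / n_paths
print(round(price, 2))
```

Conditioning on the volatility path shows the price must lie between the Black-Scholes values at the two extreme volatilities (roughly 8.5 and 16.2 for these parameters), which is a convenient sanity check for the simulation.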
455

The effect of time headway on a car-following system under traffic shockwave conditions

Γιαννακοπούλου, Ιωσηφίνα 11 August 2011 (has links)
The influence of time headway on a car-following system can determine the severity of a shockwave. Based on a near-miss rear-end collision on a 3-lane highway, this study examines the importance of time headway in combination with the driver's reaction time upon perception of an upcoming hazard. The car-following model developed by Brill, relating the driver's reaction time, temporal headway and deceleration response to accident frequency, is used as the main tool for assessing the sensitivity of the collision probability. Through a microscopic analysis of the video record of the incident and the processing of its data, and inspired by earlier critical questions posed and answered by G. Davis and his associates, the information needed for a numerical description of the accident is extracted. Using the OpenBUGS software, which is based on the Markov chain Monte Carlo method, the collision prototype is simulated and the values of the parameters that shape the shockwave are computed. The results reveal, and allow an evaluation of, the degree to which the combined factor of headway and reaction time influences the shockwave. Finally, specific interventions are proposed to improve the whole car-following system.
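The qualitative relation studied here can be illustrated with a deliberately crude surrogate (this is not Brill's actual model, and the reaction-time distribution is an assumption): the follower avoids a collision only when the reaction time is below the available time headway, and the collision probability is estimated by simple Monte Carlo.

```python
import random

random.seed(3)

# Crude illustrative surrogate (not Brill's model): a collision occurs when
# the driver's reaction time exceeds the time headway to the leader.
def collision_prob(headway, n=100000):
    hits = 0
    for _ in range(n):
        # Lognormal reaction time with a median of about 1 second (assumed).
        reaction = random.lognormvariate(0.0, 0.3)
        if reaction > headway:
            hits += 1
    return hits / n

p_short = collision_prob(1.0)   # 1-second headway
p_long = collision_prob(2.0)    # 2-second headway
print(p_short, p_long)
```

Even this toy version reproduces the qualitative point of the study: doubling the headway collapses the probability of an unrecoverable reaction, which is why the combined headway/reaction-time factor dominates the shockwave severity.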
456

The martingale approach in the study of word occurrence in independent experiments

Masitéli, Vanessa 07 April 2017 (has links)
Let {Xn} be a sequence of i.i.d. random variables taking values in a countable alphabet. Given a finite collection of words, we observe this sequence until the moment T at which one of these words first appears as a run. In this work we apply the martingale approach introduced by Li (1980) and Gerber and Li (1981) to study the waiting time until one of the words occurs for the first time, the mean of T, and the probability that a given word is the first to appear.
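The quantity studied, the waiting time T, is easy to probe by simulation. A classical consequence of the martingale argument (as in Li, 1980) is that for fair coin flips E[T] = 6 for the word "HH" but E[T] = 4 for "HT", because an occurrence of "HH" can overlap with itself; the sketch below checks this numerically.

```python
import random

random.seed(5)

# Monte Carlo check of expected waiting times until a word first appears
# in i.i.d. fair coin flips (the martingale approach gives the exact values:
# E[T] = 6 for "HH", E[T] = 4 for "HT").
def mean_waiting_time(word, n_runs=20000):
    total = 0
    for _ in range(n_runs):
        window, t = "", 0
        while not window.endswith(word):
            window += random.choice("HT")
            t += 1
        total += t
    return total / n_runs

m_hh = mean_waiting_time("HH")
m_ht = mean_waiting_time("HT")
print(round(m_hh, 1), round(m_ht, 1))
```

The gap between the two estimates, despite both words having probability 1/4 per position, is precisely the self-overlap effect that the martingale (fair-game) accounting captures exactly.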
457

Inference on mixture models via a modified stochastic EM algorithm

Assis, Raul Caram de 02 June 2017 (has links)
We present the theory of mixture models in the contexts of maximum likelihood and Bayesian inference, reviewing theoretical aspects and interpretations of such mixtures. We discuss existing clustering methods in both contexts, with emphasis on two of them: the stochastic EM algorithm in the maximum likelihood context and the Dirichlet process mixture model in the Bayesian context. We propose a new method, a modified stochastic EM algorithm, which can be used to estimate both the parameters of a mixture model and the number of components.
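A minimal sketch of the *standard* stochastic EM iteration (not the modified algorithm proposed in the thesis) shows the key idea: the E-step samples the component labels from their posterior responsibilities instead of averaging over them. The two-component Gaussian mixture with known unit variances below is an illustrative assumption.

```python
import math
import random

random.seed(11)

# Sketch of stochastic EM for a two-component Gaussian mixture with known
# unit variances (illustrative; not the thesis' modified algorithm).
def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# Simulated data: equal-weight mixture with means -2 and +2.
data = [random.gauss(-2, 1) for _ in range(300)] + \
       [random.gauss(2, 1) for _ in range(300)]

mu = [-0.5, 0.5]                  # crude initial values
pi = [0.5, 0.5]
for _ in range(50):
    # Stochastic E-step: draw each label from its responsibility.
    labels = []
    for x in data:
        p0 = pi[0] * norm_pdf(x, mu[0])
        p1 = pi[1] * norm_pdf(x, mu[1])
        labels.append(0 if random.random() < p0 / (p0 + p1) else 1)
    # M-step: re-estimate parameters from the completed (labeled) data.
    for k in (0, 1):
        xs = [x for x, l in zip(data, labels) if l == k]
        if xs:
            mu[k] = sum(xs) / len(xs)
            pi[k] = len(xs) / len(data)

print(sorted(round(m, 1) for m in mu))
```

Unlike deterministic EM, the parameter sequence here is a Markov chain that fluctuates around the maximum-likelihood solution rather than converging to it; the random label allocations are also what makes it natural to let the number of occupied components vary, which is the direction the proposed modification takes.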
458

Markov chain Monte Carlo for natural inflow energy scenarios simulation

HUGO RIBEIRO BALDIOTI 11 January 2019 (has links)
With a predominantly hydroelectric energy matrix and a territory of continental proportions, Brazil presents unique characteristics and is able to exploit the abundant water resources in its national territory. Approximately 65 percent of its electricity generation capacity comes from hydroelectric resources, while 28 percent comes from thermoelectric plants. Hydrological regimes of natural streamflows are stochastic in nature and must be treated as such when planning the operation of the system; the hydrothermal dispatch is therefore of central importance and is characterized by its stochastic dependence. From the natural streamflows it is possible to calculate the Natural Inflow Energy (NIE), which is used directly in the simulation of synthetic series; these, in turn, feed the optimization process responsible for computing the optimal policy that minimizes the system's operating costs. Studies on the simulation of synthetic NIE scenarios have developed new methodological proposals over the years. Such developments often presuppose Gaussianity of the data, so that a parametric distribution can be fitted to them. In the majority of real cases in the context of the Brazilian electric sector, however, the data cannot be treated this way, since their densities exhibit relevant tail behavior and pronounced skewness. For the operational planning of the National Interconnected System (SIN), this intrinsic skewness must be amenable to reproduction. This work therefore proposes two non-parametric approaches to scenario simulation. The first applies the Markov chain Monte Carlo (MCMC) technique together with kernel density estimation to sample the residuals of the NIE series. The second applies the MCMC periodically and directly to the NIE series to simulate synthetic scenarios, using an innovative approach for the transitions between matrices and periods. The results of implementing the methodologies, assessed graphically and through statistical tests of adherence to the historical data, indicate that the proposals reproduce the asymmetric characteristics with greater accuracy without losing the ability to reproduce basic statistics. The proposed models are thus good alternatives to the current model used by the Brazilian electric sector.
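The pairing of MCMC with kernel density estimation can be sketched as follows. The data, bandwidth and proposal scale below are illustrative assumptions, not the thesis' implementation: a Gaussian KDE is fitted to skewed "historical" values, and new scenarios are drawn from it with a random-walk Metropolis chain, which needs only density evaluations and therefore never requires a parametric (Gaussian) fit.

```python
import math
import random

random.seed(13)

# Hedged sketch: sample scenarios from a Gaussian KDE of skewed data with
# a random-walk Metropolis chain (bandwidth and step size are assumed).
hist = [random.expovariate(1.0) for _ in range(200)]   # skewed "history"
h = 0.3                                                # kernel bandwidth

def kde(x):
    """Gaussian kernel density estimate at point x."""
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in hist)
    return s / (len(hist) * h * math.sqrt(2 * math.pi))

x = 1.0
fx = kde(x)
draws = []
for i in range(5000):
    prop = x + random.gauss(0, 0.5)       # random-walk proposal
    fp = kde(prop)
    if random.random() * fx < fp:         # Metropolis acceptance rule
        x, fx = prop, fp
    if i >= 1000:                         # discard burn-in
        draws.append(x)

scenario_mean = sum(draws) / len(draws)
print(round(scenario_mean, 2))            # near the mean of the KDE
```

Because the chain targets the KDE rather than a fitted Gaussian, the skewness and tail behavior of the historical residuals carry over to the simulated scenarios, which is the property the proposed methodology is after.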
459

On the use of transport and optimal control methods for Monte Carlo simulation

Heng, Jeremy January 2016 (has links)
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods for efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to the posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues, as they explain the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well defined, we give a constructive solution. However, this result is only formal, as the representation is given in terms of integrals which are intractable. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods.
The key idea is to control the proposal distribution induced by a time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
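The tempering curve from prior to posterior that underlies both projects can be illustrated with a minimal SMC sampler. The one-dimensional model, temperature ladder and move kernel below are illustrative assumptions: particles from a N(0,1) prior are reweighted by the likelihood raised to increasing powers, resampled, and refreshed with a Metropolis move that leaves each tempered distribution invariant.

```python
import math
import random

random.seed(17)

# Hedged sketch of an SMC sampler on a tempering curve (toy 1-D model):
# prior N(0,1), one Gaussian observation y = 3 with unit variance, so the
# posterior is N(1.5, 0.5) and its mean is exactly 1.5.
def log_lik(x):
    return -0.5 * (3.0 - x) ** 2

def log_target(x, t):                 # prior times tempered likelihood
    return -0.5 * x * x + t * log_lik(x)

n = 5000
particles = [random.gauss(0, 1) for _ in range(n)]
temps = [0.0, 0.25, 0.5, 0.75, 1.0]

for t0, t1 in zip(temps, temps[1:]):
    # Reweight by the incremental tempered likelihood.
    w = [math.exp((t1 - t0) * log_lik(x)) for x in particles]
    # Multinomial resampling.
    particles = random.choices(particles, weights=w, k=n)
    # One random-walk Metropolis move per particle, targeting the new
    # tempered distribution (so that distribution stays invariant).
    moved = []
    for x in particles:
        prop = x + random.gauss(0, 0.5)
        if math.log(random.random()) < log_target(prop, t1) - log_target(x, t1):
            x = prop
        moved.append(x)
    particles = moved

post_mean = sum(particles) / n
print(round(post_mean, 2))   # exact posterior mean is 1.5
```

The thesis' two contributions slot into exactly this template: the transport-map ODE replaces the generic random-walk move with a flow adapted to the curve of distributions, and the optimal-control project tunes the proposal (a discretized Langevin dynamics) to minimize the KL divergence from the extended target.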
460

Markov chain Monte Carlo techniques applied to parton distribution function determination: proof of concept

Gbedo, Yémalin Gabin 22 September 2017 (has links)
We have developed a new approach to determine parton distribution functions and quantify their experimental uncertainties, based on Markov chain Monte Carlo methods. The main interest of such a study is that we can replace the standard χ2 minimization with MINUIT by procedures grounded in statistical methods, and in Bayesian inference in particular, thus offering additional insight into the rich field of PDF determination. After reviewing these Markov chain Monte Carlo techniques, we introduce the algorithm we have chosen to implement, namely Hybrid (or Hamiltonian) Monte Carlo.
This algorithm, initially developed for lattice quantum chromodynamics, turns out to be very interesting when applied to parton distribution function determination by global analyses; we have shown that it allows one to circumvent the technical difficulties due to the high dimensionality of the problem, in particular concerning the acceptance rate. The feasibility study performed and presented in this thesis indicates that the Markov chain Monte Carlo method can be applied successfully to the extraction of PDFs and of their experimental uncertainties.
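The Hybrid/Hamiltonian Monte Carlo algorithm named above can be sketched on a toy one-dimensional N(0,1) target (the thesis applies it to the high-dimensional PDF fit; the step size and trajectory length here are illustrative assumptions): momenta are refreshed, Hamiltonian dynamics are integrated with the leapfrog scheme, and a Metropolis test on the energy change corrects the discretization error, which is what keeps the acceptance rate high even in many dimensions.

```python
import math
import random

random.seed(19)

# Hedged sketch of Hamiltonian Monte Carlo on a 1-D standard normal target.
def grad_neg_logp(x):
    """Gradient of -log N(0,1) density (up to a constant)."""
    return x

def leapfrog(x, p, eps, steps):
    """Leapfrog integration of Hamiltonian dynamics."""
    p -= 0.5 * eps * grad_neg_logp(x)
    for _ in range(steps - 1):
        x += eps * p
        p -= eps * grad_neg_logp(x)
    x += eps * p
    p -= 0.5 * eps * grad_neg_logp(x)
    return x, p

def hamiltonian(x, p):
    return 0.5 * x * x + 0.5 * p * p   # potential + kinetic energy

x, samples = 0.0, []
for _ in range(5000):
    p = random.gauss(0, 1)                         # refresh the momentum
    x_new, p_new = leapfrog(x, p, eps=0.3, steps=10)
    # Metropolis accept/reject on the change in total energy.
    if math.log(random.random()) < hamiltonian(x, p) - hamiltonian(x_new, p_new):
        x = x_new
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

Because the leapfrog integrator is volume-preserving and nearly energy-conserving, proposals travel far through the state space yet are accepted with high probability; this is the property that makes the method attractive for the high-dimensional parton-distribution fits discussed above.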
