71

Solving Partial Differential Equations With Neural Networks

Karlsson Faronius, Håkan January 2023 (has links)
In this thesis, three different approaches for solving partial differential equations with neural networks will be explored: Physics-Informed Neural Networks, Fourier Neural Operators and the Deep Ritz method. Physics-Informed Neural Networks and the Deep Ritz method are unsupervised machine learning methods, while the Fourier Neural Operator is a supervised method. The Physics-Informed Neural Network is implemented on Burgers' equation, while the Fourier Neural Operator is implemented on Poisson's equation and Darcy's law, and the Deep Ritz method is applied to several variational problems. The Physics-Informed Neural Network is also used for the inverse problem: given some data on a solution, the neural network is trained to determine the underlying partial differential equation whose solution is given by the data. Apart from this, importance sampling is also implemented to accelerate the training of physics-informed neural networks. The contributions of this thesis are to implement a slightly different form of importance sampling on the physics-informed neural network, to show that the Deep Ritz method can be used for a larger class of variational problems than the original publication suggests, and to apply the Fourier Neural Operator to an application in geophysics involving Darcy's law, where the coefficient is given by exponentiated two-dimensional pink noise.
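As a rough illustration of the importance-sampling idea mentioned above, the sketch below resamples collocation points for a PINN on Burgers' equation with probability proportional to the current PDE residual and reweights the loss accordingly. It is a minimal PyTorch sketch under assumed hyperparameters; the initial/boundary-condition losses are omitted, and this is not the thesis' exact scheme.

```python
import torch

torch.manual_seed(0)
nu = 0.01 / torch.pi  # viscosity of the usual Burgers' benchmark (an assumption here)

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def residual(xt):
    """PDE residual u_t + u*u_x - nu*u_xx at points xt = (x, t)."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = g[:, :1], g[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

# pool of candidate collocation points on (x, t) in [-1, 1] x [0, 1]
pool = torch.rand(5000, 2) * torch.tensor([2.0, 1.0]) + torch.tensor([-1.0, 0.0])

for step in range(1000):
    w = residual(pool).abs().squeeze().detach()          # residual magnitude as sampling weight
    p = w / w.sum()
    idx = torch.multinomial(p, 256, replacement=True)    # importance-sample collocation points
    r = residual(pool[idx])
    # reweight by the sampling probability so the mean-squared-residual estimate stays unbiased
    loss = (r.squeeze() ** 2 / (p[idx] * pool.shape[0])).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    # NOTE: initial/boundary-condition losses are omitted in this sketch
```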
72

[pt] AVALIAÇÃO DA CONFIABILIDADE DE SISTEMAS DE GERAÇÃO E TRANSMISSÃO BASEADA EM TÉCNICAS DE ENUMERAÇÃO DE ESTADOS E AMOSTRAGEM POR IMPORTÂNCIA / [en] RELIABILITY ASSESSMENT OF GENERATING AND TRANSMISSION SYSTEMS BASED ON STATE ENUMERATION AND IMPORTANCE SAMPLING TECHNIQUES

BRUNO ALVES DE SA MANSO 30 September 2021 (has links)
[pt] A avaliação probabilística da confiabilidade de um sistema elétrico de potência visa quantificar, em índices, as estatísticas do risco do mesmo não atender seus clientes em plenitude. Na prática, os critérios determinísticos (e.g., N-1) são ainda os mais empregados. Na literatura, porém, a análise probabilística é uma área extensa de pesquisa, podendo ser dividida em duas vertentes: as baseadas em simulação Monte Carlo (SMC) e aquelas fundamentadas na enumeração de estados (EE). Apesar de ser reconhecidamente inferior, a técnica EE é a que se assemelha mais aos critérios determinísticos, e, muito provavelmente por esta razão, possui extensa gama de trabalhos relacionados. Contudo, tais trabalhos apresentam limitações, pois, ou se restringem a sistemas de pequeno porte, ou desconsideram contingências de maior ordem quando abordam sistemas reais (médio-grande porte). De qualquer maneira, existe um grande apego do setor elétrico por técnicas de confiabilidade que se assemelhem às práticas dos operadores e planejadores. Isso motivou o desenvolvimento de um método baseado em EE, o qual seja capaz de avaliar a confiabilidade de sistemas de geração e transmissão com desempenho comparável ao da SMC. De forma heterodoxa, os conceitos de amostragem por importância (IS - Importance Sampling), uma técnica de redução de variância (VRT - Variance Reduction Techniques) tipicamente empregada na SMC, serviram de inspiração para aprimorar a EE. Assim, o método proposto nesta dissertação é o resultado da combinação de uma ferramenta do tipo IS-VRT com técnicas de EE. Para análise e validação do método proposto, são utilizados dois sistemas teste comumente empregados neste tópico de pesquisa, sendo um deles de médio porte e capaz de reproduzir características típicas de sistemas reais. / [en] The probabilistic reliability assessment of an electric power system aims to quantify, in terms of risk indices, its inability to fully serve its customers. In practice, deterministic criteria (e.g., N-1) are still the most widely used. In the literature, however, probabilistic analysis is an extensive area of research, which can be divided into two evaluation categories: those based on Monte Carlo simulation (MCS) and those based on state enumeration (SE). Despite being admittedly inferior, the SE technique is the one that most closely resembles the deterministic criteria and, most likely for this reason, has a wide range of related publications. However, such works have limitations, because they are either restricted to small systems or disregard higher contingency orders when addressing real (medium-to-large) systems. In any case, the electric sector has a strong attachment to reliability techniques that resemble the practices of operators and planners. This motivated the development of an SE-based method capable of assessing the reliability of generation and transmission systems with performance comparable to that of MCS. In a heterodox way, the concepts of importance sampling (IS), a variance reduction technique (VRT) typically employed in MCS, served as inspiration to improve SE. Thus, the method proposed in this dissertation is the result of combining an IS-VRT tool with SE techniques. For the analysis and validation of the proposed method, two test systems commonly used in this research topic are employed, one of which is medium-sized and capable of reproducing typical characteristics of real systems.
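The basic importance-sampling mechanics behind such reliability evaluations can be sketched as follows: two-state units are sampled with inflated outage rates and each sampled state is reweighted by the likelihood ratio of the true and biased distributions. The unit data and load level below are illustrative assumptions, not the dissertation's test systems.

```python
import numpy as np

rng = np.random.default_rng(1)
cap = np.array([100, 100, 80, 80, 60, 60, 40, 40])   # unit capacities (MW), toy system
q = np.full(cap.size, 0.02)                           # true forced-outage rates
q_b = np.full(cap.size, 0.20)                         # biased (distorted) outage rates
load = 450.0                                          # system load (MW); installed capacity is 560 MW

N = 20000
states = rng.random((N, cap.size)) < q_b              # True = unit on outage, drawn under the bias
# likelihood ratio true/biased, factorised over independent units
lr = np.prod(np.where(states, q / q_b, (1 - q) / (1 - q_b)), axis=1)
deficit = (cap * ~states).sum(axis=1) < load          # loss-of-load indicator
lolp = np.mean(deficit * lr)                          # unbiased IS estimate of the LOLP
print(f"IS estimate of LOLP: {lolp:.3e}")
```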
73

Position measurement of the SuperCDMS HVeV detector and implementation of an importance sampling algorithm in the SuperCDMS simulation software

Pedreros, David S. 03 1900 (has links)
La matière sombre est considérée comme l'un des plus grands mystères dans la cosmologie moderne. En effet, on peut dire que l'on en connaît plus sur ce que la matière sombre n'est pas que sur sa vraie nature. La collaboration SuperCDMS travaille sans répit pour réussir à faire la première détection directe de la matière sombre. À cet effet, la collaboration a eu recours à plusieurs expériences et simulations à diverses échelles, pouvant aller de l'usage d'un seul détecteur semi-conducteur, jusqu'à la création d'expériences à grande échelle qui cherchent à faire cette première détection directe de la matière sombre. Dans ce texte, on verra différentes méthodes pour nous aider à mieux comprendre les erreurs systématiques liées à la position du détecteur utilisé dans le cadre des expériences IMPACT@TUNL et IMPACT@MTL, soit l'usage des simulations et de la radiologie industrielle respectivement. On verra aussi comment l'implémentation de la méthode de réduction de variance connue comme échantillonnage préférentiel peut aider à améliorer l'exécution des simulations de l'expérience à grande échelle planifiée pour le laboratoire canadien SNOLAB. En outre, on verra comment l'échantillonnage préférentiel s'avère utile non seulement pour mieux profiter des ressources disponibles pour la collaboration, mais aussi pour avoir une meilleure compréhension des sources de bruit de fond qui seront présentes à SNOLAB, telles que les signaux générés par la désintégration radioactive de divers isotopes. / Dark matter is one of the biggest mysteries of modern-day cosmology. Simply put, we know much more about what it is not, rather than what it actually is. The SuperCDMS collaboration works relentlessly toward making the first direct detection of this type of matter. To this effect, multiple experiments and simulations have been performed, ranging from small-scale testing of the detectors to large-scale, long-term experiments, looking for the actual detection of dark matter. In this work, I will analyze different methods to help understand the systematic errors linked to detector position in regard to the small-scale experiments IMPACT@TUNL and IMPACT@MTL, through simulation and industrial radiography respectively. We will also see how the implementation of the variance reduction method known as importance sampling can be used to improve the simulation performance of the large-scale experiment in the Canadian laboratory SNOLAB. Additionally, we will see how this method can provide not only better management of the computing resources available to the collaboration, but also how it can be used to better the understanding of the background noises, such as the signals generated by radioactive decay of different isotopes, that will be present at SNOLAB.
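Why importance sampling pays off in rare-background simulations can be illustrated with a toy 1-D attenuation problem: the sampled free-path lengths are biased so that the rare shield-penetrating particles appear more often, and each sample is reweighted by the density ratio. This is a generic, assumption-laden sketch, not the SuperCDMS/Geant4 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 1.0          # true attenuation coefficient (per cm)
mu_b = 0.2        # biased coefficient: long paths sampled more often
thickness = 10.0  # shield thickness (cm); true penetration prob = exp(-10) ~ 4.5e-5

N = 100_000
path = rng.exponential(1.0 / mu_b, N)                  # biased free paths
weight = (mu / mu_b) * np.exp(-(mu - mu_b) * path)     # likelihood ratio of the true and biased pdfs
est = np.mean((path > thickness) * weight)             # unbiased IS estimate of the penetration probability
print(f"IS estimate: {est:.2e}  (analytic: {np.exp(-mu * thickness):.2e})")
```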
74

Dynamic Credit Models : An analysis using Monte Carlo methods and variance reduction techniques / Dynamiska Kreditmodeller : En analys med Monte Carlo-simulering och variansreduceringsmetoder

Järnberg, Emelie January 2016 (has links)
In this thesis, the credit worthiness of a company is modelled using a stochastic process. Two credit models are considered; Merton's model, which models the value of a firm's assets using geometric Brownian motion, and the distance to default model, which is driven by a two factor jump diffusion process. The probability of default and the default time are simulated using Monte Carlo and the number of scenarios needed to obtain convergence in the simulations is investigated. The simulations are performed using the probability matrix method (PMM), which means that a transition probability matrix describing the process is created and used for the simulations. Besides this, two variance reduction techniques are investigated; importance sampling and antithetic variates. / I den här uppsatsen modelleras kreditvärdigheten hos ett företag med hjälp av en stokastisk process. Två kreditmodeller betraktas; Merton's modell, som modellerar värdet av ett företags tillgångar med geometrisk Brownsk rörelse, och "distance to default", som drivs av en två-dimensionell stokastisk process med både diffusion och hopp. Sannolikheten för konkurs och den förväntade tidpunkten för konkurs simuleras med hjälp av Monte Carlo och antalet scenarion som behövs för konvergens i simuleringarna undersöks. Vid simuleringen används metoden "probability matrix method", där en övergångssannolikhetsmatris som beskriver processen används. Dessutom undersöks två metoder för variansreducering; viktad simulering (importance sampling) och antitetiska variabler (antithetic variates).
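One of the two variance reduction techniques mentioned, antithetic variates, can be sketched in a Merton-type setting: each normal driver is paired with its negation when estimating the default probability at the horizon. All parameter values below are illustrative assumptions, not the thesis' data.

```python
import numpy as np

rng = np.random.default_rng(7)
V0, D, mu, sigma, T = 100.0, 70.0, 0.05, 0.25, 1.0   # firm value, debt barrier, drift, volatility, horizon

N = 50_000
z = rng.standard_normal(N)

def terminal_value(z):
    """Firm value at T under geometric Brownian motion driven by standard normal z."""
    return V0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# antithetic pairing: average the default indicators for z and -z before taking the sample mean
pd_pairs = 0.5 * ((terminal_value(z) < D).astype(float) + (terminal_value(-z) < D).astype(float))
half_width = 1.96 * pd_pairs.std(ddof=1) / np.sqrt(N)
print(f"antithetic estimate of P(default): {pd_pairs.mean():.4f} +/- {half_width:.4f}")
```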
75

North European Power Systems Reliability / Det nordeuropeiska elsystemets tillförlitlighet

Terrier, Viktor January 2017 (has links)
The North European power system (Sweden, Finland, Norway, Denmark, Estonia, Latvia and Lithuania) is facing changes in its electricity production. The increasing share of intermittent power sources, such as wind power, makes the production less predictable. The decommissioning of large plants, for environmental or market reasons, leads to a decrease of production capacity while the demand can increase, which is detrimental to power system reliability. Investments in interconnections and new power plants can be made to strengthen the system. Evaluating the reliability becomes essential to determine the investments that have to be made. For this purpose, a model of the power system is built. The power system is divided into areas, where the demand, interconnections between areas, and intermittent generation are represented by Cumulative Distribution Functions (CDF); while conventional generation plants follow a two-state behaviour. Imports from outside the system are set equal to their installed capacity, on the assumption that the neighbouring countries can always provide enough power. The model is set up by using only publicly available data. The model is used for generating numerous possible states of the system in a Monte Carlo simulation, to estimate two reliability indices: the risk (LOLP) and the size (EPNS) of a power deficit. As a power deficit is a rare event, an excessively large number of samples is required to estimate the reliability of the system with a sufficient confidence level. Hence, a pre-simulation, called importance sampling, is run beforehand in order to improve the efficiency of the simulation. Four simulations are run on the colder months (January, February, March, November, December) to test the reliability of the current system (2015) and of three future scenarios (2020, 2025 and 2030). The tests point out that the current weakest areas (Finland and Southern Sweden) are also the ones that will face nuclear decommissioning in years to come, and highlight that the investments in interconnections and wind power considered in the scenarios are not sufficient to maintain the current reliability levels. If today's reliability levels are considered necessary, then possible solutions include more flexible demand, higher production and/or more interconnections. / Det nordeuropeiska elsystemet (Sverige, Finland, Norge, Danmark, Estland, Lettland och Litauen) står inför förändringar i sin elproduktion. Den ökande andelen intermittenta kraftkällor, såsom vindkraft, gör produktionen mindre förutsägbar. Avvecklingen av stora anläggningar, av miljö- eller marknadsskäl, leder till en minskning av produktionskapaciteten, medan efterfrågan kan öka, vilket är till nackdel för kraftsystemets tillförlitlighet. Investeringar i sammankopplingar och i nya kraftverk kan göras för att stärka systemet. Utvärdering av tillförlitligheten blir nödvändigt för att bestämma vilka investeringar som behövs. För detta ändamål byggs en modell av kraftsystemet. Kraftsystemet är uppdelat i områden, där efterfrågan, sammankopplingar mellan områden, och intermittent produktion representeras av fördelningsfunktioner; medan konventionella kraftverk antas ha ett två-tillståndsbeteende. Import från länder utanför systemet antas lika med deras installerade kapaciteter, med tanke på att grannländerna alltid kan ge tillräckligt med ström. Modellen bygger på allmänt tillgängliga uppgifter.
Modellen används för att generera ett stort antal möjliga tillstånd av systemet i en Monte Carlo-simulering för att uppskatta två tillförlitlighetsindex: risken (LOLP) och storleken (EPNS) av en effektbrist. Eftersom effektbrist är en sällsynt händelse, krävs ett mycket stort antal tester av olika tillstånd i systemet för att uppskatta tillförlitligheten med en tillräcklig konfidensnivå. Därför utnyttjas en för-simulering, kallad ”Importance Sampling”, vilken körs i förväg i syfte att förbättra effektiviteten i simuleringen. Fyra simuleringar körs för de kallare månaderna (januari, februari, mars, november, december) för att testa tillförlitligheten i nuvarande systemet (2015) samt för tre framtidsscenarier (2020, 2025 och 2030). Testerna visar att de nuvarande svagaste områdena (Finland och södra Sverige) också är de som kommer att ställas inför en kärnkraftsavveckling under de kommande åren. De indikerar även att planerade investeringar i sammankopplingar och vindkraft i scenarierna inte är tillräckliga för att bibehålla de nuvarande tillförlitlighetsnivåerna. Om dagens tillförlitlighetsnivåer antas nödvändiga, så inkluderar möjliga lösningar mer flexibel efterfrågan, ökad produktion och/eller fler sammankopplingar.
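One common way such an importance-sampling "pre-simulation" can be organised is a cross-entropy-style pilot run that adapts the sampled outage rates toward deficit-causing states before the main Monte Carlo run. The sketch below is a generic illustration under assumed unit data; it is not necessarily the scheme used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
cap = np.array([120, 120, 90, 90, 60, 60])   # unit capacities (MW), toy system of 540 MW
q = np.full(cap.size, 0.03)                   # true outage rates
load = 420.0                                  # load level (MW)
q_b = q.copy()                                # biased rates, adapted by the pilot runs

for _ in range(5):                            # pre-simulation iterations
    out = rng.random((5000, cap.size)) < q_b                  # sampled outage states under the bias
    lr = np.prod(np.where(out, q / q_b, (1 - q) / (1 - q_b)), axis=1)
    deficit = (cap * ~out).sum(axis=1) < load
    w = deficit * lr                                          # weighted deficit indicator
    if w.sum() == 0:
        continue                                              # no deficits observed yet; keep current bias
    # weighted outage frequency among deficit states becomes the new biased rate
    q_b = np.clip((w[:, None] * out).sum(axis=0) / w.sum(), q, 0.5)

print("adapted biased outage rates:", np.round(q_b, 3))
```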
76

Critical Substation Risk Assessment and Mitigation

Delport, Jacques 01 June 2018 (has links)
Substations are joints in the power system, nodes that are vital to its stable and reliable operation. They contrast with the rest of the power system in that they are a dense combination of critical components, which makes all of them simultaneously vulnerable to one isolated incident: weather, attack, or other common failure modes. Undoubtedly, the loss of these vital links will have a severe impact on the power grid, to varying degrees. This work creates a cascading model based on protection system misoperations to estimate system risk from loss-of-substation events in order to assess each substation's criticality. A continuation power flow method is utilized for estimating voltage collapse during cascades. Transient stability is included through the use of a supervised machine learning algorithm called random forests. These forests allow for fast, robust and accurate prediction of transient stability during loss-of-substation initiated cascades. Substation risk indices are incorporated into a preventative optimal power flow (OPF) to reduce the risk of critical substations. This risk-based dispatch represents an easily scalable, robust algorithm for reducing risk associated with substation losses. This new dispatch allows operators to operate at a higher-cost operating point for short periods in which substations are likely to be lost, such as during large weather events or probable attacks, and significantly reduce the system risk associated with those losses. System risk is then studied considering the interaction of a power grid utility trying to protect its critical substations under a constrained budget and a potential attacker with insider information on critical substations. This is studied under a zero-sum game-theoretic framework in which the utility is trying to confuse the attacker. A model is then developed to analyze how a utility may create a robust strategy of protection that cannot be heavily exploited while taking advantage of any mistakes potential attackers may make. / Ph. D.
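The zero-sum framing in the last paragraph can be illustrated by solving a small matrix game with linear programming, which yields the utility's mixed protection strategy. The 3x3 risk matrix and the use of scipy below are purely illustrative assumptions, not the dissertation's model or data.

```python
import numpy as np
from scipy.optimize import linprog

# risk[i, j]: system risk if the utility protects substation i and the attacker hits substation j
risk = np.array([[1.0, 8.0, 6.0],
                 [7.0, 2.0, 6.0],
                 [7.0, 8.0, 3.0]])

n = risk.shape[0]
# variables: x (protection probabilities) and v (game value); minimise v subject to
# risk.T @ x <= v for every attacker choice, sum(x) = 1, x >= 0
c = np.zeros(n + 1); c[-1] = 1.0
A_ub = np.hstack([risk.T, -np.ones((n, 1))])
b_ub = np.zeros(n)
A_eq = np.zeros((1, n + 1)); A_eq[0, :n] = 1.0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
print("utility's mixed protection strategy:", np.round(res.x[:n], 3),
      " game value:", round(res.x[-1], 3))
```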
77

異質性投資組合下的改良式重點取樣法 / Modified Importance Sampling for Heterogeneous Portfolio

許文銘 Unknown Date (has links)
衡量投資組合的稀有事件時，即使稀有事件違約的機率極低，但是卻隱含著高額資產違約時所帶來的重大損失，所以我們必須要精準地評估稀有事件的信用風險。本研究係在估計信用損失分配的尾端機率，模擬的模型包含同質模型與異質模型；然而蒙地卡羅法雖然在風險管理的計算上相當實用，但是估計機率極小的尾端機率時模擬不夠穩定，因此為增進模擬的效率，我們利用Glasserman and Li (Management Science, 51(11), 2005)提出的重點取樣法，以及根據Chiang et al. (Journal of Derivatives, 15(2), 2007)重點取樣法為基礎做延伸的改良式重點取樣法，兩種方法來對不同的投資組合做模擬，更是將改良式重點取樣法推廣至異質模型做討論，本文亦透過變異數縮減效果來衡量兩種方法的模擬效率。數值結果顯示，比起傳統的蒙地卡羅法，此兩種方法皆能達到變異數縮減，其中在同質模型下的改良式重點取樣法有很好的表現，模擬時間相當省時，而異質模型下的重點取樣法也具有良好的估計效率及模擬的穩定性。 / When measuring the portfolio credit risk of rare events, even though the default probabilities are low, a large number of simultaneous defaults implies significant losses, so rare-event portfolio credit risk must be measured accurately. In particular, our goal is to estimate the tail of the loss distribution. The models we simulate include both homogeneous and heterogeneous models. Monte Carlo simulation is a useful and widely used computational tool in risk management, but it is unstable when estimating very small tail probabilities. Hence, to improve simulation efficiency, we use the importance sampling method proposed by Glasserman and Li (Management Science, 51(11), 2005) and a modified importance sampling method that extends the importance sampling of Chiang et al. (Journal of Derivatives, 15(2), 2007). Different portfolios are simulated with these two methods, and the modified importance sampling method is further extended to the heterogeneous model. We measure the efficiency of the two methods through their variance reduction. Numerical results show that both methods achieve variance reduction relative to plain Monte Carlo. Under the homogeneous model, the modified importance sampling performs very well and is computationally fast; under the heterogeneous model, importance sampling also shows good estimation efficiency and stability.
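The exponential-twisting step behind the Glasserman and Li (2005) approach cited above can be sketched for independent obligors as follows: default probabilities are tilted so that the expected loss equals the tail threshold, and each scenario is corrected by the likelihood ratio exp(-theta*L + psi(theta)). The portfolio data are assumptions, and neither the factor-model layer nor the thesis' modified scheme is reproduced here.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(5)
m = 100
p = np.full(m, 0.01)                  # marginal default probabilities
c = rng.uniform(0.5, 1.5, m)          # exposures (loss given default)
x = 20.0                              # tail threshold: estimate P(L > x)

psi = lambda t: np.sum(np.log1p(p * (np.exp(t * c) - 1.0)))                       # cumulant generating function
dpsi = lambda t: np.sum(p * c * np.exp(t * c) / (1.0 + p * (np.exp(t * c) - 1.0)))  # its derivative
theta = brentq(lambda t: dpsi(t) - x, 0.0, 20.0)                                  # shift the mean loss to x
p_t = p * np.exp(theta * c) / (1.0 + p * (np.exp(theta * c) - 1.0))               # twisted default probabilities

N = 20_000
D = rng.random((N, m)) < p_t                               # defaults drawn under the twisted measure
L = D.astype(float) @ c                                    # portfolio losses
est = np.mean((L > x) * np.exp(-theta * L + psi(theta)))   # unbiased IS estimate of the tail probability
print(f"IS estimate of P(L > {x}): {est:.3e}")
```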
78

Efficient Monte Carlo Simulation for Counterparty Credit Risk Modeling / Effektiv Monte Carlo-simulering för modellering av motpartskreditrisk

Johansson, Sam January 2019 (has links)
In this paper, Monte Carlo simulation for CCR (Counterparty Credit Risk) modeling is investigated. A jump-diffusion model, Bates' model, is used to describe the price process of an asset, and the counterparty default probability is described by a stochastic intensity model with constant intensity. In combination with Monte Carlo simulation, the variance reduction technique importance sampling is used in an attempt to make the simulations more efficient. Importance sampling is used for simulation of both the asset price and, for CVA (Credit Valuation Adjustment) estimation, the default time. CVA is simulated for both European and Bermudan options. It is shown that a significant variance reduction can be achieved by utilizing importance sampling for asset price simulations. It is also shown that a significant variance reduction for CVA simulation can be achieved for counterparties with small default probabilities by employing importance sampling for the default times. This holds for both European and Bermudan options. Furthermore, the regression based method least squares Monte Carlo is used to estimate the price of a Bermudan option, resulting in CVA estimates that lie within an interval of feasible values. Finally, some topics of further research are suggested. / I denna rapport undersöks Monte Carlo-simuleringar för motpartskreditrisk. En jump-diffusion-modell, Bates modell, används för att beskriva prisprocessen hos en tillgång, och sannolikheten att motparten drabbas av insolvens beskrivs av en stokastisk intensitetsmodell med konstant intensitet. Tillsammans med Monte Carlo-simuleringar används variansreduktionstekinken importance sampling i ett försök att effektivisera simuleringarna. Importance sampling används för simulering av både tillgångens pris och, för estimering av CVA (Credit Valuation Adjustment), tidpunkten för insolvens. CVA simuleras för både europeiska optioner och Bermuda-optioner. Det visas att en signifikant variansreduktion kan uppnås genom att använda importance sampling för simuleringen av tillgångens pris. Det visas även att en signifikant variansreduktion för CVA-simulering kan uppnås för motparter med små sannolikheter att drabbas av insolvens genom att använda importance sampling för simulering av tidpunkter för insolvens. Detta gäller både europeiska optioner och Bermuda-optioner. Vidare, används regressionsmetoden least squares Monte Carlo för att estimera priset av en Bermuda-option, vilket resulterar i CVA-estimat som ligger inom ett intervall av rimliga värden. Slutligen föreslås några ämnen för ytterligare forskning.
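The default-time part of the importance sampling described above can be sketched in isolation: the constant intensity is increased under the sampling measure so that defaults before maturity occur more often, and each path is reweighted by the density ratio. The exposure profile and parameters below are toy assumptions, not the Bates-model exposures of the thesis.

```python
import numpy as np

rng = np.random.default_rng(11)
lam, lam_b, T, LGD = 0.002, 0.05, 1.0, 0.6     # true intensity, biased intensity, maturity, loss given default

N = 50_000
tau = rng.exponential(1.0 / lam_b, N)                      # default times under the biased intensity
lr = (lam / lam_b) * np.exp(-(lam - lam_b) * tau)          # likelihood ratio f(tau)/g(tau) of the two exponentials
exposure = np.maximum(0.0, 10.0 * (1.0 - tau / T))         # toy positive-exposure profile at the default time
cva = LGD * np.mean((tau <= T) * exposure * lr)            # unbiased IS estimate of the CVA-type expectation
print(f"IS estimate of CVA: {cva:.4e}")
```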
79

Importance sampling on the coalescent with recombination

Jenkins, Paul A. January 2008 (has links)
Performing inference on contemporary samples of homologous DNA sequence data is an important task. By assuming a stochastic model for ancestry, one can make full use of observed data by sampling from the distribution of genealogies conditional upon the sample configuration. A natural such model is Kingman's coalescent, with numerous extensions to account for additional biological phenomena. However, in this model the distribution of interest cannot be written down analytically, and so one solution is to utilize importance sampling. In this context, importance sampling (IS) simulates genealogies from an artificial proposal distribution, and corrects for this by weighting each resulting genealogy. In this thesis I investigate in detail approaches for developing efficient proposal distributions on coalescent histories, with a particular focus on a two-locus model mutating under the infinite-sites assumption and in which the loci are separated by a region of recombination. This model was originally studied by Griffiths (1981), and is a useful simplification for considering the correlated ancestries of two linked loci. I show that my proposal distribution generally outperforms an existing IS method which could be recruited to this model. Given today's sequencing technologies it is not difficult to find volumes of data for which even the most efficient proposal distributions might struggle. I therefore appropriate resampling mechanisms from the theory of sequential Monte Carlo in order to effect substantial improvements in IS applications. In particular, I propose a new resampling scheme and confirm that it ensures a significant gain in the accuracy of likelihood estimates. It outperforms an existing scheme which can actually diminish the quality of an IS simulation unless it is applied to coalescent models with care. Finally, I apply the methods developed here to an example dataset, and discuss a new measure for the way in which two gene trees are correlated.
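The resampling mechanism borrowed from sequential Monte Carlo can be illustrated on a toy state-space model: sequential importance sampling whose particles are systematically resampled whenever the effective sample size degenerates. This generic sketch is not the two-locus coalescent, nor the new resampling scheme proposed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
T_steps, N = 50, 1000
true_x = np.cumsum(rng.standard_normal(T_steps))        # hidden random walk
y = true_x + rng.standard_normal(T_steps)               # noisy observations of it

particles = np.zeros(N)
logw = np.zeros(N)

def systematic_resample(w, rng):
    """Systematic resampling: one stratified uniform per particle slot."""
    positions = (rng.random() + np.arange(len(w))) / len(w)
    return np.minimum(np.searchsorted(np.cumsum(w), positions), len(w) - 1)

for t in range(T_steps):
    particles = particles + rng.standard_normal(N)       # propose from the prior dynamics
    logw += -0.5 * (y[t] - particles) ** 2               # incremental log-weight (up to a constant)
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / np.sum(w ** 2) < N / 2:                     # resample only when the ESS drops below N/2
        particles = particles[systematic_resample(w, rng)]
        logw = np.zeros(N)

w = np.exp(logw - logw.max()); w /= w.sum()
print(f"filtered mean of final state: {np.sum(w * particles):.2f}  (true value {true_x[-1]:.2f})")
```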
80

Échantillonnage préférentiel adaptatif et méthodes bayésiennes approchées appliquées à la génétique des populations. / Adaptive multiple importance sampling and approximate bayesian computation with applications in population genetics.

Sedki, Mohammed Amechtoh 31 October 2012 (has links)
Dans cette thèse, on propose des techniques d'inférence bayésienne dans les modèles où la vraisemblance possède une composante latente. La vraisemblance d'un jeu de données observé est l'intégrale de la vraisemblance dite complète sur l'espace de la variable latente. On s'intéresse aux cas où l'espace de la variable latente est de très grande dimension et comporte des directions de différentes natures (discrètes et continues), ce qui rend cette intégrale incalculable. Le champ d'application privilégié de cette thèse est l'inférence dans les modèles de génétique des populations. Pour mener leurs études, les généticiens des populations se basent sur l'information génétique extraite des populations du présent, qui représente la variable observée. L'information incluant l'histoire spatiale et temporelle de l'espèce considérée est inaccessible en général et représente la composante latente. Notre première contribution dans cette thèse suppose que la vraisemblance peut être évaluée via une approximation numériquement coûteuse. Le schéma d'échantillonnage préférentiel adaptatif et multiple (AMIS pour Adaptive Multiple Importance Sampling) de Cornuet et al. [2012] nécessite peu d'appels au calcul de la vraisemblance et recycle ces évaluations. Cet algorithme approche la loi a posteriori par un système de particules pondérées. Cette technique est conçue pour pouvoir recycler les simulations obtenues par le processus itératif (la construction séquentielle d'une suite de lois d'importance). Dans les nombreux tests numériques effectués sur des modèles de génétique des populations, l'algorithme AMIS a montré des performances numériques très prometteuses en termes de stabilité. Ces propriétés numériques sont particulièrement adéquates pour notre contexte. Toutefois, la question de la convergence des estimateurs obtenus par cette technique reste largement ouverte. Dans cette thèse, nous montrons des résultats de convergence d'une version légèrement modifiée de cet algorithme. Sur des simulations, nous montrons que ses qualités numériques sont identiques à celles du schéma original. Dans la deuxième contribution de cette thèse, on renonce à l'approximation de la vraisemblance et on supposera seulement que la simulation suivant le modèle (suivant la vraisemblance) est possible. Notre apport est un algorithme ABC séquentiel (Approximate Bayesian Computation). Sur les modèles de la génétique des populations, cette méthode peut se révéler lente lorsqu'on vise une approximation précise de la loi a posteriori. L'algorithme que nous proposons est une amélioration de l'algorithme ABC-SMC de Del Moral et al. [2012] que nous optimisons en nombre d'appels aux simulations suivant la vraisemblance, et que nous munissons d'un mécanisme de choix de niveaux d'acceptation auto-calibré. Nous implémentons notre algorithme pour inférer les paramètres d'un scénario évolutif réel et complexe de génétique des populations. Nous montrons que pour la même qualité d'approximation, notre algorithme nécessite deux fois moins de simulations par rapport à la méthode ABC avec acceptation couramment utilisée. / This thesis consists of two parts which can be read independently. The first part is about the Adaptive Multiple Importance Sampling (AMIS) algorithm presented in Cornuet et al. (2012), which provides a significant improvement in stability and effective sample size due to the introduction of the recycling procedure.
These numerical properties are particularly adapted to the Bayesian paradigm in population genetics, where the modelization involves a large number of parameters. However, the consistency of the AMIS estimator remains largely open. In this work, we provide a novel Adaptive Multiple Importance Sampling scheme corresponding to a slight modification of the Cornuet et al. (2012) proposal that preserves the above-mentioned improvements. Finally, using limit theorems on triangular arrays of conditionally independent random variables, we give a consistency result for the final particle system returned by our new scheme. The second part of this thesis lies in the ABC paradigm. Approximate Bayesian Computation has been successfully used in population genetics models to bypass the calculation of the likelihood. These algorithms provide an accurate estimator by comparing the observed dataset to a sample of datasets simulated from the model. Although parallelization is easily achieved, computation times for ensuring a suitable approximation quality of the posterior distribution are still long. To alleviate this issue, we propose a sequential algorithm adapted from Del Moral et al. (2012) which runs twice as fast as traditional ABC algorithms. Its parameters are calibrated to minimize the number of simulations from the model.
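The AMIS recycling idea of Cornuet et al. (2012) referenced above can be sketched on a toy 1-D target: each iteration's draws are pooled with all earlier draws, reweighted against the deterministic mixture of every proposal used so far, and the proposal is refitted from the weighted sample. The Gaussian-mixture target and Gaussian proposal family are illustrative assumptions; the thesis' modified scheme is not reproduced.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
target = lambda x: 0.5 * norm.pdf(x, -3, 1) + 0.5 * norm.pdf(x, 4, 0.5)   # toy bimodal target density

mu, sd = 0.0, 5.0                      # initial Gaussian proposal, deliberately broad
samples, props = [], []

for t in range(10):
    x = rng.normal(mu, sd, 500)        # draw from the current proposal
    samples.append(x)
    props.append((mu, sd))
    all_x = np.concatenate(samples)    # recycle every draw made so far
    # deterministic-mixture denominator: average density of all proposals used so far
    mix = np.mean([norm.pdf(all_x, m, s) for m, s in props], axis=0)
    w = target(all_x) / mix
    w /= w.sum()
    mu = np.sum(w * all_x)                                   # refit the proposal from the weighted sample
    sd = np.sqrt(np.sum(w * (all_x - mu) ** 2)) + 1e-6

print(f"final proposal: mu={mu:.2f}, sd={sd:.2f}; ESS={1.0 / np.sum(w ** 2):.0f} of {all_x.size}")
```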
