41

Pricing and hedging asian options using Monte Carlo and integral transform techniques

Chibawara, Trust March 2010
Thesis (MSc (Mathematics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: In this thesis, we discuss and apply the Monte Carlo and integral transform methods in pricing options. These methods have proved to be very effective in the valuation of options, especially when acceleration techniques are introduced. By first pricing European call options we motivate the use of these methods in pricing arithmetic Asian options, which have proved to be difficult to price and hedge under the Black-Scholes framework. The arithmetic average of the prices in this framework is a sum of correlated lognormal random variables whose distribution does not admit a simple analytic expression. However, many approaches have been reported in the academic literature for pricing these options. We provide a hedging strategy by manipulating the results of Geman and Yor [42] for continuous fixed-strike arithmetic Asian call options. We then derive a double Laplace transform formula for pricing continuous Asian call options, following the approach of Fu et al. [39]. By applying the multi-Laguerre and iterated Talbot inversion techniques for Laplace transforms to the resulting pricing formula, we obtain the option prices. Finally, we discuss the shortcomings of using the Laplace transform in pricing options.
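As a rough illustration of the kind of acceleration mentioned above (a standard control-variate device, not the thesis's Laplace-transform approach), here is a Python sketch pricing an arithmetic Asian call by Monte Carlo, corrected with the closed-form price of the geometric Asian call; all market parameters are assumed:

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def asian_call_cv(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                  n=50, n_paths=200_000, seed=0):
    """Arithmetic Asian call under Black-Scholes by Monte Carlo, using the
    closed-form geometric Asian price as a control variate."""
    rng = np.random.default_rng(seed)
    dt = T / n
    t = dt * np.arange(1, n + 1)
    z = rng.standard_normal((n_paths, n))
    logS = log(S0) + np.cumsum((r - 0.5 * sigma**2) * dt
                               + sigma * sqrt(dt) * z, axis=1)
    S = np.exp(logS)
    disc = exp(-r * T)
    pay_a = disc * np.maximum(S.mean(axis=1) - K, 0.0)             # arithmetic payoff
    pay_g = disc * np.maximum(np.exp(logS.mean(axis=1)) - K, 0.0)  # geometric payoff

    # The log of the geometric average is Gaussian with mean m, variance v.
    m = log(S0) + (r - 0.5 * sigma**2) * t.mean()
    v = sigma**2 * np.minimum.outer(t, t).sum() / n**2
    d2 = (m - log(K)) / sqrt(v)
    d1 = d2 + sqrt(v)
    geo_exact = disc * (exp(m + 0.5 * v) * norm_cdf(d1) - K * norm_cdf(d2))

    # Control-variate correction: shift by the known geometric-price error.
    beta = np.cov(pay_a, pay_g)[0, 1] / pay_g.var()
    return pay_a.mean() + beta * (geo_exact - pay_g.mean())

print(asian_call_cv())
```

Because the geometric and arithmetic payoffs are highly correlated, the corrected estimator typically cuts the standard error by an order of magnitude or more at the same path count.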
42

Modeling the volatility smile for interest rate derivatives / Multi-factor stochastic volatility for interest rate modeling

Palidda, Ernesto 29 May 2015
This PhD thesis is devoted to the study of an affine term structure model in which Wishart-like processes model the stochastic variance-covariance of interest rates. This work was initially motivated by some thoughts on calibration and model risk in hedging interest rate derivatives. The ambition of our work is to build a model which reduces as much as possible the noise coming from daily re-calibration of the model to the market. It is standard market practice to hedge interest rate derivatives using models with parameters that are calibrated on a daily basis to fit the market prices of a set of well-chosen instruments (typically the instruments that will be used to hedge the derivative). The model assumes that the parameters are constant, and the model price is based on this assumption; however, since these parameters are re-calibrated, they become in fact stochastic. Therefore, calibration introduces additional terms in the price dynamics (precisely, in its drift term) which can lead to poor P&L explain and mis-hedging. The initial idea of our research is to replace the parameters by factors, assume a dynamics for these factors, and keep all the parameters involved in the model constant. Instead of calibrating the parameters to the market, we fit the values of the factors to the observed market prices. The resulting hedging portfolios, built by cancelling the price sensitivities to these factors, satisfy the self-financing property. A large part of this work has been devoted to the development of an efficient numerical framework to implement the model. We study second-order discretization schemes for Monte Carlo simulation of the model. We also study efficient methods for pricing vanilla instruments such as swaptions and caplets. In particular, we investigate expansion techniques for the prices and volatilities of caplets and swaptions. The arguments that we use to obtain the expansion rely on an expansion of the infinitesimal generator with respect to a perturbation factor. Finally, we have studied the calibration problem. As mentioned before, the idea of the model studied in this thesis is to keep the parameters constant and calibrate the values of the factors to fit the market. In particular, we need to calibrate the initial values (or the variations) of the Wishart-like process, which introduces a positive semidefinite constraint in the optimization problem. Semidefinite programming (SDP) gives a natural framework to handle this constraint.
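At first order, the hedging construction described here — zeroing the portfolio's sensitivities to the stochastic factors rather than to re-calibrated parameters — amounts to a small linear solve. A hypothetical sketch with invented sensitivity numbers:

```python
import numpy as np

# Assumed (invented) sensitivities of three hedge instruments to two
# stochastic factors; rows = factors, columns = instruments.
J = np.array([[0.85, 0.10, 0.40],
              [0.05, 0.90, 0.30]])
target = np.array([0.60, 0.55])   # factor sensitivities of the derivative

# Hedge weights w solving J @ w = target: the hedged book then has zero
# first-order exposure to each factor, which is what makes the portfolio
# self-financing in the factor model described above.
w, *_ = np.linalg.lstsq(J, target, rcond=None)
print("hedge weights:", w)
print("residual factor exposure:", target - J @ w)
```

In practice the Jacobian J would come from the model's pricing functions (e.g. by bumping each factor), but the zero-exposure condition is the same linear system.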
43

Target Discrimination Against Clutter Based on Unsupervised Clustering and Sequential Monte Carlo Tracking

January 2016
The radar performance of detecting a target and estimating its parameters can deteriorate rapidly in the presence of high clutter. This is because radar measurements due to clutter returns can be falsely detected as if originating from the actual target. Various data association methods and multiple hypothesis filtering approaches have been considered to solve this problem. Such methods, however, can be computationally intensive for real-time radar processing. This work proposes a new approach based on the unsupervised clustering of target and clutter detections before target tracking using particle filtering. In particular, Gaussian mixture modeling is first used to separate detections into two distinct Gaussian components. Using eigenvector analysis, the eccentricities of the covariance matrices of the Gaussian components are computed and compared to threshold values obtained a priori. The thresholding allows only target detections to be used for target tracking. Simulations demonstrate the performance of the new algorithm and compare it with using k-means for clustering instead of Gaussian mixture modeling. / Master's thesis, Electrical Engineering, 2016
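A minimal sketch of the clustering-and-thresholding stage as described, using scikit-learn's Gaussian mixture; the threshold value and the rule for picking the target component are assumptions, not taken from the thesis:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def eccentricity(cov):
    """Eccentricity of the 1-sigma ellipse of a 2x2 covariance matrix,
    from its eigenvalues (the 'eigenvector analysis' step)."""
    lam = np.sort(np.linalg.eigvalsh(cov))   # ascending eigenvalues
    return np.sqrt(1.0 - lam[0] / lam[1])

def keep_target_detections(detections, ecc_threshold=0.9):
    """Split 2-D detections into two Gaussian components and keep the one
    judged to be the target. The threshold and the assumption that the
    target component is the less eccentric one are placeholders."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(detections)
    labels = gmm.predict(detections)
    eccs = np.array([eccentricity(c) for c in gmm.covariances_])
    target = int(np.argmin(eccs))
    if eccs[target] >= ecc_threshold:        # no compact component found
        return detections
    return detections[labels == target]      # feed these to the tracker
```

The retained detections would then drive the sequential Monte Carlo (particle filter) tracking stage.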
44

Particle-Based Modeling of Reliability for Millimeter-Wave GaN Devices for Power Amplifier Applications

January 2018
In this work, an advanced simulation study of reliability in millimeter-wave (mm-wave) GaN devices for power amplifier (PA) applications is performed by means of a particle-based full-band Cellular Monte Carlo device simulator (CMC). The goal of the study is to obtain a systematic characterization of the performance of GaN devices operating under DC, small-signal AC and large-signal radio-frequency (RF) conditions, emphasizing the microscopic properties that correlate with degradation of device performance, such as generation of hot carriers, presence of material defects and self-heating effects. First, a review of concepts concerning GaN technology, devices, reliability mechanisms and PA design is presented in chapter 2. Then, in chapter 3, a study of non-idealities of AlGaN/GaN heterojunction diodes is performed, demonstrating that mole fraction variations and the presence of unintentional Schottky contacts are the main limiting factors for high current drive of the devices under study. Chapter 4 consists of a study of hot electron generation in GaN HEMTs, in terms of the accurate simulation of the electron energy distribution function (EDF) obtained under DC and RF operation, taking frequency and temperature variations into account. The calculated EDFs suggest that Class AB PAs operating at low frequency (10 GHz) are more robust to hot carrier effects than when operating under DC or high-frequency RF (up to 40 GHz). Also, operation under Class A yields higher EDFs than Class AB, indicating lower reliability. This study is followed in chapter 5 by the proposal of a novel π-shaped gate contact for GaN HEMTs which effectively reduces hot electron generation while preserving device performance. Finally, in chapter 6, the electro-thermal characterization of GaN-on-Si HEMTs is performed by means of an expanded CMC framework in which charge and heat transport are self-consistently coupled. After the electro-thermal model is validated against experimental data, the assessment of self-heating under lateral scaling is considered. / Doctoral dissertation, Electrical Engineering, 2018
45

Bayesian inference in plant growth models for prediction and uncertainty assessment

Chen, Yuting 27 June 2014
Plant growth models aim to describe plant development and functional processes in interaction with the environment. They offer promising perspectives for many applications, such as yield prediction for decision support or virtual experimentation in the context of breeding. This PhD focuses on solutions to enhance the predictive capacity of plant growth models, with an emphasis on advanced statistical methods. Our contributions can be summarized in four parts. Firstly, from a model design perspective, the Log-Normal Allocation and Senescence (LNAS) crop model is proposed. It describes only the essential ecophysiological processes for the biomass budget in a probabilistic framework, so as to avoid identification problems and to accentuate uncertainty assessment in model prediction. Secondly, thorough research is conducted regarding model parameterization. In a Bayesian framework, both Sequential Monte Carlo (SMC) methods and Markov chain Monte Carlo (MCMC) based methods are investigated to address the parameterization issues in the context of plant growth models, which are frequently characterized by nonlinear dynamics, scarce data and a large number of parameters. In particular, when the prior distribution is non-informative, an iterative version of the SMC and MCMC methods is introduced, with the objective of putting more emphasis on the observation data while preserving the robustness of Bayesian methods. It can be regarded as a stochastic variant of an EM-type algorithm. Thirdly, a three-step data assimilation approach is proposed to address model prediction issues. The most influential parameters are first identified by global sensitivity analysis and chosen by model selection. Subsequently, the model calibration is performed with special attention paid to the uncertainty assessment. The posterior distribution obtained from this estimation step is then considered as prior information for the prediction step, in which an SMC-based on-line estimation method such as Convolution Particle Filtering (CPF) is employed to perform data assimilation. Both state and parameter estimates are updated with the purpose of improving the prediction accuracy and reducing the associated uncertainty. Finally, from an application point of view, the proposed methodology is implemented and evaluated with two crop models, the LNAS model for sugar beet and the STICS model for winter wheat. Some indications are also given on experimental design to optimize the quality of the predictions. The applications to real case scenarios show encouraging predictive performances and open the way to potential tools for yield prediction in agriculture.
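A minimal sketch of the data-assimilation step: a bootstrap particle filter that jointly updates a biomass state and a growth-rate parameter for a toy logistic model (a stand-in for LNAS, not the thesis's implementation), with a small kernel jitter on the parameter particles in the spirit of the convolution particle filter:

```python
import numpy as np

def particle_filter(obs, n_part=5000, seed=0):
    """Bootstrap particle filter jointly tracking a biomass state x and a
    growth-rate parameter r for a toy logistic model. All constants below
    (carrying capacity, noise levels, priors) are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.5, 2.0, n_part)        # initial biomass particles
    r = rng.uniform(0.05, 0.5, n_part)       # growth-rate particles
    K, obs_sd, jitter = 50.0, 1.0, 0.005     # assumed constants
    for y in obs:
        x = x + r * x * (1 - x / K) + rng.normal(0, 0.05, n_part)  # predict
        w = np.exp(-0.5 * ((y - x) / obs_sd) ** 2) + 1e-300        # weight
        w /= w.sum()
        idx = rng.choice(n_part, n_part, p=w)                      # resample
        x = x[idx]
        r = r[idx] + rng.normal(0, jitter, n_part)                 # kernel jitter
    return x.mean(), r.mean()

# obs = np.array([1.2, 1.5, 1.9, 2.4, 3.0])
# x_hat, r_hat = particle_filter(obs)
```

Each assimilation cycle refines both the hidden state and the parameter posterior, which is what reduces the prediction uncertainty step by step.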
46

Development of a computational system based on the Geant4 code for dosimetric evaluations in radiotherapy.

OLIVEIRA, Alex Cristóvão Holanda de 29 April 2016
The incidence of cancer has grown in Brazil, as well as around the world, following the change in the age profile of the population. One of the most important and commonly used techniques in cancer treatment is radiotherapy; around 60% of new cancer cases use radiation in at least one phase of treatment. The most used equipment for radiotherapy is the linear accelerator (Linac), which produces electron or X-ray beams in the energy range from 5 to 30 MeV. The most appropriate way to irradiate a patient is determined during treatment planning. Currently, the treatment planning system (TPS) is the main and most important tool in the radiotherapy planning process. The main objective of this work was to develop a computational system based on the Geant4 Monte Carlo (MC) code for dose evaluations in photon beam radiotherapy. In addition to treatment planning, these dose evaluations can be performed for research and for quality control of equipment and TPSs. The computer system, called Quimera, consists of a graphical user interface (qGUI) and three MC applications (qLinacs, qMATphantoms and qNCTphantoms). The qGUI serves as the interface for the MC applications, creating or editing the input files, running the simulations and analyzing the results. qLinacs is used for modeling and generation of Linac beams (phase spaces). qMATphantoms and qNCTphantoms are used for dose calculations in virtual models of physical phantoms and in computed tomography (CT) images, respectively. From the manufacturer's data, models of a Varian Linac photon beam and a Varian multileaf collimator (MLC) were simulated in qLinacs. The Linac and MLC models were validated against experimental data. qMATphantoms and qNCTphantoms were validated using IAEA (International Atomic Energy Agency) phase spaces. In this first version, Quimera can be used for research, radiotherapy planning of simple treatments and quality control in photon beam radiotherapy. The MC applications work independently of the qGUI, and the qGUI can be used for handling CT images and analyzing results from other MC applications. Due to the modular structure of Quimera, new MC applications can be added, allowing the development of new research, the modeling of Linacs and MLCs from different manufacturers, the use of other techniques (electron beams, protons, heavy ions, tomotherapy, etc.) and applications in related areas (brachytherapy, radiation protection, etc.). Quimera is an initiative toward the collaborative development of a complete computer system that can be used in radiotherapy, in clinical and technical practice as well as in research.
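The modular design described — a front end that writes input files, launches independent MC applications, and analyzes their output — can be sketched as below; the executable names, file names and formats are hypothetical placeholders, not Quimera's actual interfaces:

```python
import json
import subprocess
from pathlib import Path

def run_mc_app(app, params, workdir="runs"):
    """Write an input file, launch an external MC application, and return
    the path of its output -- the same create-input / run / analyze cycle
    the qGUI performs for each MC application."""
    out = Path(workdir) / app
    out.mkdir(parents=True, exist_ok=True)
    (out / "input.json").write_text(json.dumps(params, indent=2))
    subprocess.run([f"./{app}", str(out / "input.json")], check=True)
    return out / "dose.out"

# A qLinacs-style run producing a phase space, then a phantom dose run
# consuming it (all names and parameters are invented placeholders):
# phsp = run_mc_app("qLinacs", {"energy_MV": 6, "field_cm": [10, 10]})
# dose = run_mc_app("qMATphantoms", {"phase_space": str(phsp)})
```

Keeping each MC application behind a file-based interface like this is what lets new applications be added without touching the front end.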
47

Development of a methodology for characterization of the cartridge (Cuno) filter from the IEA-R1 reactor using the Monte Carlo method

Priscila Costa 28 January 2015
The Cuno filter is part of the water treatment circuit of the IEA-R1 reactor; when saturated, it is replaced and becomes radioactive waste that must be managed. In this work, the primary characterization of the Cuno filter of the IEA-R1 nuclear reactor at IPEN was carried out using gamma spectrometry associated with the Monte Carlo method. The gamma spectrometry was performed using a hyperpure germanium (HPGe) detector. The germanium crystal represents the active detection volume of the HPGe detector, which has a region called the dead layer or inactive layer. A difference between the theoretical and experimental values when obtaining the efficiency curve of these detectors has been reported in the literature. In this study, the MCNP-4C code was used to obtain the detector efficiency calibration for the geometry of the Cuno filter, and the influence of the dead layer and of the cascade summing effect in the HPGe detector was studied. Corrections of the dead layer values were made by varying the thickness and the radius of the germanium crystal. The detector has 75.83 cm3 of active detection volume, according to information provided by the manufacturer. Nevertheless, the results showed that the actual active volume is smaller than specified, with the dead layer representing 16% of the total volume of the crystal. Analysis of the Cuno filter by gamma spectrometry enabled the identification of energy peaks. Using these peaks, three radionuclides were identified in the filter: 108mAg, 110mAg and 60Co. From the efficiency calibration obtained by the Monte Carlo method, the activity estimated for these radionuclides is of the order of MBq.
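Once the efficiency calibration is in hand, the activity follows from the standard gamma-spectrometry relation A = N / (ε · Iγ · t); a small sketch with invented peak data (the efficiency, count and timing values below are not from this work):

```python
def activity_bq(net_counts, live_time_s, efficiency, gamma_intensity):
    """Activity (Bq) from a net full-energy-peak area, the MC-derived
    full-energy-peak efficiency at that energy, and the gamma emission
    probability: A = N / (eps * I_gamma * t)."""
    return net_counts / (efficiency * gamma_intensity * live_time_s)

# Invented inputs for the 1332 keV line of 60Co (emission probability
# 0.9998), with an assumed MC efficiency of 1e-3 and a 3600 s live time:
print(activity_bq(net_counts=3.6e6, live_time_s=3600.0,
                  efficiency=1e-3, gamma_intensity=0.9998))
# ~1.0e6 Bq, i.e. of the order of MBq, as estimated above
```

An underestimated dead layer inflates the simulated efficiency ε, which is why correcting the crystal's active volume matters directly for the reported activities.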
48

Phenomenology at a future 100 TeV hadron collider

Ferrarese, Piero 03 November 2017
No description available.
49

Sampling the solutions of differential systems

Chan Shio, Christian Paul 11 December 2014
This work addresses two complementary problems in the study of differential systems with random coefficients using a simulation approach. In the first part, we look at the problem of computing the law of the solution at time t* of a differential equation with random coefficients. It is shown that even in the simplest cases one will usually obtain a random variable whose pdf cannot be computed explicitly, and for which we need to rely on Monte Carlo simulation. As this simulation may not always be possible due to the explosion of the solution in finite time, several workarounds are presented. These include displaying the histogram on a compact manifold using two charts, and approximating the distribution using a polynomial chaos expansion. The second part considers the problem of estimating the coefficients in a system of differential equations when a trajectory of the system is known at a set of times. To do this, we use a simple Monte Carlo sampling method, known as the rejection sampling algorithm. Unlike deterministic methods, it does not provide a point estimate of the coefficients directly, but rather a collection of values that "fits" the known data well. An examination of the properties of the method allows us not only to better understand how to choose the different parameters when implementing it, but also to introduce more efficient variants. These include a new approach, which we call sequential rejection sampling, as well as methods based on the Markov chain Monte Carlo and sequential Monte Carlo algorithms. Several examples are presented to illustrate the performance of all these methods.
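A minimal sketch of the rejection-sampling idea on a toy system, assuming a uniform prior and a sup-norm acceptance criterion (the model, prior ranges and tolerance are illustrative, not those studied in the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rejection_sample(t_obs, y_obs, n_draws=20_000, tol=0.5, seed=0):
    """Rejection sampling for the coefficients (a, b) of the logistic-type
    system x' = a*x - b*x^2: draw from a uniform prior, simulate, and keep
    the draws whose trajectory stays within `tol` of the data at the
    observed times."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        a, b = rng.uniform(0.0, 2.0), rng.uniform(0.0, 0.2)
        sol = solve_ivp(lambda t, x: a * x - b * x**2,
                        (t_obs[0], t_obs[-1]), [y_obs[0]], t_eval=t_obs)
        if sol.success and np.max(np.abs(sol.y[0] - y_obs)) < tol:
            accepted.append((a, b))          # compatible with the data
    return np.array(accepted)                # a cloud of values, not a point

# t = np.linspace(0.0, 10.0, 6)
# y = np.array([1.0, 2.2, 4.4, 7.6, 10.9, 13.3])
# samples = rejection_sample(t, y)
```

The output is exactly the "collection of values that fits the known data well" described above; the tolerance and prior ranges are the parameters whose choice the thesis examines.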
50

Monte Carlo Methods for Stochastic Differential Equations and their Applications

Leach, Andrew Bradford January 2017
We introduce computationally efficient Monte Carlo methods for studying the statistics of stochastic differential equations in two distinct settings. In the first, we derive importance sampling methods for data assimilation when the noise in the model and observations is small. The methods are formulated in discrete time, where the "posterior" distribution we want to sample from can be analyzed in an accessible small-noise expansion. We show that a "symmetrization" procedure akin to antithetic coupling can improve the order of accuracy of the sampling methods, which is illustrated with numerical examples. In the second setting, we develop "stochastic continuation" methods to estimate level sets for statistics of stochastic differential equations with respect to their parameters. We adapt Keller's pseudo-arclength continuation method to this setting using stochastic approximation and generalized least-squares regression. Furthermore, we show that the methods can be improved through the use of coupling to reduce the variance of the derivative estimates involved.
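A minimal sketch of the antithetic idea underlying the symmetrization procedure, for a scalar small-noise SDE discretized by Euler-Maruyama: each sampled Brownian path is paired with its sign-reversed twin and the two payoffs are averaged (the dynamics and parameters are assumed for illustration):

```python
import numpy as np

def antithetic_mc(f, x0=1.0, mu=0.1, eps=0.05, T=1.0, n=200,
                  n_paths=20_000, seed=0):
    """Monte Carlo estimate of E[f(X_T)] for dX = mu*X dt + eps*X dW by
    Euler-Maruyama, averaging each path with its noise-reversed twin
    (W -> -W). This is plain antithetic coupling, a simpler cousin of the
    symmetrization referred to above, and is effective when eps is small."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n))
    xp = np.full(n_paths, x0)
    xm = np.full(n_paths, x0)
    for k in range(n):
        xp = xp + mu * xp * dt + eps * xp * dW[:, k]   # path with +dW
        xm = xm + mu * xm * dt - eps * xm * dW[:, k]   # antithetic twin
    return 0.5 * (f(xp) + f(xm)).mean()

# est = antithetic_mc(lambda x: x**2)
```

Pairing each path with its reversed twin cancels the odd-order noise terms in the expansion, which is the mechanism behind the improved order of accuracy mentioned above.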
