11 |
The Java applet for pricing Asian options under Heston’s model using the new Ninomiya weak approximation scheme and quasi-Monte Carlo. Vasilev, Boyko, January 2008 (has links)
This study is based on a new weak-approximation scheme for stochastic differential equations, applied to the Heston stochastic volatility model. The scheme was published by Ninomiya and Ninomiya (2008) and extends Kusuoka’s approximation scheme. Ninomiya’s algorithm decomposes Kusuoka’s stochastic model into a set of ordinary differential equations with random coefficients and suggests several numerical optimisations for faster calculation. The subject of this paper is a Java applet that calculates the price of an Asian option under the Heston model.
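The pricing problem in this record can be sketched numerically. The snippet below is an illustrative stand-in, not the applet's algorithm: it uses a plain full-truncation Euler discretisation of the Heston dynamics (rather than the Ninomiya weak scheme) driven by scrambled Sobol' points, and all model parameters are hypothetical.

```python
import numpy as np
from scipy.stats import norm, qmc

def asian_call_heston_qmc(S0=100.0, K=100.0, T=1.0, r=0.02,
                          v0=0.04, kappa=1.5, theta=0.04, xi=0.3, rho=-0.7,
                          n_steps=16, m=12, seed=7):
    """Arithmetic-average Asian call under Heston, priced with a
    full-truncation Euler scheme driven by scrambled Sobol' points."""
    sobol = qmc.Sobol(2 * n_steps, scramble=True, seed=seed)
    u = sobol.random_base2(m)                    # 2^m low-discrepancy points
    z = norm.ppf(u.clip(1e-12, 1.0 - 1e-12))     # map to normal increments
    dt = T / n_steps
    n = z.shape[0]
    S = np.full(n, S0)
    v = np.full(n, v0)
    avg = np.zeros(n)
    for k in range(n_steps):
        z1 = z[:, 2 * k]
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * z[:, 2 * k + 1]
        vp = np.maximum(v, 0.0)                  # full truncation of variance
        S = S * np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
        v = v + kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2
        avg += S / n_steps
    return float(np.exp(-r * T) * np.maximum(avg - K, 0.0).mean())
```

With these (made-up) parameters the effective volatility is around 20%, so the Asian price sits well below the corresponding European call.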
|
12 |
Global sensitivity analysis of reactor parameters / Bolade Adewale Adetula. Adetula, Bolade Adewale, January 2011 (has links)
Calculations of reactor parameters of interest (such as neutron multiplication factors, decay heat,
reaction rates, etc.), are often based on models which are dependent on groupwise neutron cross
sections. The uncertainties associated with these neutron cross sections are propagated to the final
result of the calculated reactor parameters. There is a need to characterize this uncertainty and to
be able to apportion the uncertainty in a calculated reactor parameter to the different sources of
uncertainty in the groupwise neutron cross sections; this procedure is known as sensitivity analysis.
The focus of this study is the application of a modified global sensitivity analysis technique to
calculations of reactor parameters that are dependent on groupwise neutron cross–sections. Sensitivity
analysis can help in identifying the important neutron cross sections for a particular model,
and also helps in establishing best–estimate optimized nuclear reactor physics models with reduced
uncertainties.
In this study, our approach to sensitivity analysis is similar to the variance-based global
sensitivity analysis technique, which is robust, widely applicable and provides accurate
sensitivity information for most models. However, this technique requires the input variables
to be mutually independent. A modification of the technique that allows one to deal with input
variables that are block-wise correlated and normally distributed is presented.
The implementation of the modified technique involves the calculation of multi-dimensional integrals,
which can be prohibitively expensive to compute. Numerical techniques specifically suited
to the evaluation of multidimensional integrals, namely Monte Carlo, quasi-Monte Carlo and sparse
grids, are used and their efficiency is compared. The modified technique is illustrated and
tested on a two-group cross-section-dependent problem. In all the cases considered, the results obtained
with sparse grids achieved much better accuracy while using a significantly smaller number of samples. / Thesis (M.Sc. Engineering Sciences (Nuclear Engineering))--North-West University, Potchefstroom Campus, 2011.
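The variance-based technique described above can be sketched for the independent-input case (the thesis's modification for block-wise correlated inputs is not reproduced here). The snippet estimates first-order Sobol' indices with the classical pick-freeze scheme on scrambled Sobol' points; the toy model and all values are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def first_order_sobol(model, d, m=14, seed=1):
    """First-order Sobol' sensitivity indices via the pick-freeze scheme
    on scrambled Sobol' points, assuming mutually independent inputs."""
    u = qmc.Sobol(2 * d, scramble=True, seed=seed).random_base2(m)
    A, B = u[:, :d], u[:, d:]
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]        # resample only coordinate i
        # f(B) and f(AB_i) share only coordinate i, so this isolates V_i.
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Toy additive model y = x1 + 2*x2 on (0,1)^2: exact indices are 0.2 and 0.8.
S = first_order_sobol(lambda x: x[:, 0] + 2.0 * x[:, 1], d=2)
```

For this toy model the variance decomposition is exact (Var = 1/12 + 4/12), so the estimates should land very close to 0.2 and 0.8.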
|
14 |
Robustness analysis of VEGA launcher model based on effective sampling strategy. Dong, Siyi, January 2016 (has links)
An efficient robustness analysis of the VEGA launch vehicle is essential to minimize the potential for system failure during the ascent phase. Monte Carlo sampling is usually considered a reliable strategy in industry provided the sample size is large enough. However, because of the large number of uncertainties and the long response time of a single simulation, exploring the whole uncertainty space sufficiently with Monte Carlo sampling is impractical for the VEGA launch vehicle. To make the robustness analysis more efficient when the number of simulations is limited, quasi-Monte Carlo sequences (Sobol, Faure, Halton) and a heuristic algorithm (differential evolution) are proposed. Nevertheless, the affordable number of simulations is still much smaller than the minimal number of samples needed for sufficient exploration. To further improve the efficiency of the robustness analysis, redundant uncertainties are screened out by sensitivity analysis, and only the dominant uncertainties are retained. Because all simulation samples are discrete, many regions of the uncertainty space are never explored with respect to the objective function by sampling or optimization methods. To exploit this latent information, a meta-model trained by Gaussian process regression is introduced; based on the meta-model, the expected maximum objective value and the expected sensitivity of each uncertainty can be analyzed with much higher efficiency and little loss of accuracy.
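Low-discrepancy sampling of the uncertainty box, as proposed in this abstract, might look like the following sketch. The uncertainty ranges and the cheap quadratic response standing in for the expensive trajectory simulator are entirely hypothetical.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical 3-D uncertainty box (thrust offset, drag multiplier, wind
# bias); both the ranges and the response below are illustrative only.
lo = np.array([-0.02, 0.95, -5.0])
hi = np.array([0.02, 1.05, 5.0])

# Cheap quadratic response standing in for an expensive trajectory run.
def response(x):
    return 3.0 * x[0]**2 + (x[1] - 1.0)**2 + 1e-4 * x[2]**2

# Scrambled Halton points cover the box more evenly than plain random
# sampling when only a few hundred simulations are affordable.
halton = qmc.Halton(d=3, scramble=True, seed=0)
pts = qmc.scale(halton.random(512), lo, hi)
worst_case = max(response(x) for x in pts)
```

The same screening loop could wrap a real 6-DoF simulator; the worst-case value found over the low-discrepancy batch gives a cheap robustness indicator before any optimization step.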
|
15 |
Optimization under parameter uncertainties with application to product cost minimization. Kidwell, Ann-Sofi, January 2018 (has links)
This report considers optimization under parameter uncertainties. It describes the subject in its wider form, then studies two model examples, followed by an application to an ABB product. The Monte Carlo method is described and scrutinised, with the quasi-Monte Carlo method being favoured for large problems. An example illustrates how the choice of Monte Carlo method affects the efficiency of the simulation when evaluating functions of different dimensions. An overview of mathematical optimization is then given, from its simplest form to nonlinear, nonconvex optimization problems containing uncertainties. A Monte Carlo simulation is applied to the design process and cost function of a custom-made ABB transformer, where the production process is assumed to contain some uncertainties. The result of optimizing an ABB cost formula whose input parameters contain uncertainties shows that the price can vary rather than being fixed, as is often assumed, and how this could influence an accept/reject decision.
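The efficiency comparison mentioned above can be sketched as follows: plain Monte Carlo versus scrambled-Sobol' quasi-Monte Carlo on a smooth test integrand with a known integral (the integrand is illustrative, not one from the report).

```python
import numpy as np
from scipy.stats import qmc

def rmse_mc_vs_qmc(d=8, n=4096, reps=16):
    """Root-mean-square error of plain MC versus scrambled-Sobol' QMC on
    f(x) = prod_j (1 + (x_j - 0.5)/(j + 2)), whose exact integral over
    the d-dimensional unit cube is 1, over independent replicates."""
    f = lambda x: np.prod(1.0 + (x - 0.5) / (np.arange(d) + 2.0), axis=1)
    rng = np.random.default_rng(0)
    e_mc = [f(rng.random((n, d))).mean() - 1.0 for _ in range(reps)]
    e_qmc = [f(qmc.Sobol(d, scramble=True, seed=s).random(n)).mean() - 1.0
             for s in range(reps)]
    rms = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rms(e_mc), rms(e_qmc)

mc_err, qmc_err = rmse_mc_vs_qmc()   # QMC error is far smaller here
```

On smooth, moderate-dimensional integrands like this one the QMC error is typically one to two orders of magnitude below the MC error at equal sample size, which is the effect the report exploits for large problems.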
|
16 |
A Full Multigrid-Multilevel Quasi-Monte Carlo Approach for Elliptic PDE with Random Coefficients. Liu, Yang, 05 May 2019
Subsurface flow is usually subject to uncertain porous media structures. In most cases, however, we have only partial knowledge of the porous media properties. A common approach is to model the uncertain parameters as random fields, after which the expectation of a quantity of interest (QoI) can be evaluated by the Monte Carlo method.
In this study, we develop a full multigrid-multilevel Monte Carlo (FMG-MLMC) method to speed up the evaluation of the effects of random parameters on single-phase porous flows. In general, the MLMC method applies a series of discretizations of increasing resolution and computes the QoI on each of them; its success lies in effective variance reduction. We exploit the similar hierarchies of the MLMC and multigrid methods, and obtain the solution on the coarse mesh, Q_l^c, as a byproduct of the multigrid solution on the fine mesh, Q_l^f, on each level l. In the cases considered in this thesis, the theoretical computational saving is 20%. In addition, a comparison of Monte Carlo and quasi-Monte Carlo (QMC) methods reveals a smaller estimator variance and a faster convergence rate for the latter in this study.
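The variance-reduction idea behind MLMC can be sketched on a toy problem. The snippet below is a two-level estimator for E[S_T] under geometric Brownian motion with coupled coarse/fine Euler grids; it illustrates only the telescoping structure, not the FMG coupling or the elliptic PDE setting of the thesis.

```python
import numpy as np

def mlmc_two_level(n0=20000, n1=2000, T=1.0, r=0.05, sig=0.2, S0=1.0):
    """Two-level MLMC sketch for E[S_T] under geometric Brownian motion
    (exact value S0*exp(r*T)): many cheap coarse Euler paths plus a few
    coupled fine-minus-coarse corrections sharing Brownian increments."""
    rng = np.random.default_rng(4)

    def euler(steps, dW):
        dt = T / steps
        S = np.full(dW.shape[0], S0)
        for k in range(steps):
            S = S * (1.0 + r * dt + sig * dW[:, k])
        return S

    # Level 0: plain coarse estimator on its own samples.
    coarse = euler(8, rng.normal(0.0, np.sqrt(T / 8), (n0, 8)))
    # Level 1 correction: fine path and coarse path driven by the SAME
    # Brownian motion (coarse increments = sums of fine pairs), so the
    # difference has small variance and needs far fewer samples.
    dWf = rng.normal(0.0, np.sqrt(T / 16), (n1, 16))
    fine = euler(16, dWf)
    coupled = euler(8, dWf[:, 0::2] + dWf[:, 1::2])
    return float(coarse.mean() + (fine - coupled).mean())

est = mlmc_two_level()
```

The telescoping sum E[P_1] = E[P_0] + E[P_1 - P_0] is what allows most of the work to be done on the cheap level; in the thesis the coarse solution additionally comes for free as a multigrid byproduct.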
|
17 |
Chiral description and physical limit of pseudoscalar decay constants with four dynamical quarks and applicability of quasi-Monte Carlo for lattice systems. Ammon, Andreas, 10 June 2015 (has links)
This work determines the masses and decay constants of pseudoscalar mesons, in particular the pion and the D_s meson, within quantum chromodynamics (QCD). The quantities are computed in lattice QCD, a lattice-regularised formulation of QCD, with four dynamical twisted-mass fermions (up, down, strange and charm quarks), a setup that offers automatic O(a) improvement. The lattice spacing a is determined from the pion mass and decay constant by extrapolation to the physical point, defined by the physical ratio f_pi/M_pi, using chiral perturbation theory formulae that account for the specific discretisation effects of the twisted-mass formalism. The resulting values, a = 0.0899(13) fm (at beta = 1.9), a = 0.0812(11) fm (at beta = 1.95) and a = 0.0624(7) fm (at beta = 2.1), are about five per cent larger than previous determinations (Baron et al. 2010); the shift is explained mainly by an investigation of the range of up/down quark masses over which the extrapolation formulae are applicable. The physical limit of f_{D_s} is studied with formulae from heavy-meson chiral perturbation theory (HM-ChiPT). The final result, f_{D_s} = 248.9(5.3) MeV, lies slightly above previous determinations (ETMC 2009, arXiv:0904.095; HPQCD 2010, arXiv:1008.4018) and about two standard deviations below the average of experimental values (PDG 2012). A further part of the work addresses the generally difficult computation of disconnected contributions, which arise for example in the mass of the neutral pion. A new method for approximating such contributions is presented, based on the quasi-Monte Carlo (QMC) method, which offers the potential for large savings in computing time.
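The disconnected contributions mentioned in this record reduce to stochastic trace estimation, e.g. of the inverse of the lattice operator. The sketch below uses the standard Hutchinson estimator with pseudo-random Rademacher probes on a small stand-in matrix; the thesis's proposal is to replace such pseudo-random probes by quasi-Monte Carlo points.

```python
import numpy as np

def hutchinson_trace_inv(A, n_vec=2000, seed=11):
    """Stochastic estimate of tr(A^{-1}) with Rademacher probe vectors,
    the standard noise-vector approach to disconnected contributions.
    (A dense inverse stands in for one sparse solve per probe.)"""
    rng = np.random.default_rng(seed)
    Ainv = np.linalg.inv(A)
    est = 0.0
    for _ in range(n_vec):
        z = rng.choice([-1.0, 1.0], size=A.shape[0])
        est += z @ Ainv @ z        # E[z^T A^{-1} z] = tr(A^{-1})
    return est / n_vec

# Small well-conditioned matrix standing in for the lattice operator.
M = np.diag(np.arange(1.0, 9.0)) + 0.1 * np.ones((8, 8))
est = hutchinson_trace_inv(M)
```

The estimator variance depends on the off-diagonal mass of A^{-1}; this is where a better-distributed (e.g. QMC) set of probe vectors can cut the number of solves required.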
|
18 |
Méthodes statistiques pour l’estimation du rendement paramétrique des circuits intégrés analogiques et RF / Statistical methods for the parametric yield estimation of analog/RF integrated circuits. Desrumaux, Pierre-François, 08 November 2013 (has links)
Semiconductor device fabrication is a complex process subject to many sources of variability. These variations can impact the functionality and performance of analog integrated circuits, leading to yield loss, potential chip modifications, delayed time to market and reduced profit. Statistical circuit simulation methods make it possible to estimate the parametric yield of a circuit early in the design stage, so that corrections can be made before manufacturing. However, traditional methods such as the Monte Carlo method and corner simulation have limitations: plain Monte Carlo is not accurate enough when only a small number of circuits can be simulated. An accurate estimator of the parametric yield based on a small number of circuit simulations is therefore needed. In this thesis, existing statistical methods from both electronics and non-electronics publications are first described; these methods suffer from severe drawbacks, such as the need for time-consuming initial circuit simulations or poor scaling with the number of random variables. Three novel statistical methods are then proposed to accurately estimate the parametric yield of analog/RF integrated circuits from a moderate number of circuit simulations: an automatically sorted quasi-Monte Carlo method, a kernel-based control variates method and an importance sampling method. All three rely on a mathematical model of the circuit performance metric constructed from a truncated first-order Taylor expansion, a modelling technique chosen because it requires a minimal number of SPICE-like circuit simulations. Both theoretical and simulation results show that the proposed methods lead to significant speedups or improvements in accuracy compared to existing methods.
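The truncated first-order Taylor model mentioned above makes the yield analytic when the process variations are Gaussian, which can then be cross-checked by sampling. In the sketch below the performance metric, specification window and sensitivities are all hypothetical.

```python
import numpy as np
from scipy.stats import norm, qmc

def yield_first_order(g0, grad, sigma, spec_lo, spec_hi):
    """Parametric yield under a truncated first-order Taylor model
    g(x) ~ g0 + grad.(x - x0), with x ~ N(x0, diag(sigma^2)): g is then
    Gaussian, so the yield is an analytic normal probability."""
    s = float(np.sqrt(np.sum((grad * sigma) ** 2)))
    return norm.cdf((spec_hi - g0) / s) - norm.cdf((spec_lo - g0) / s)

# Hypothetical amplifier gain: nominal 20 dB, spec window 19..21 dB,
# three Gaussian process parameters with made-up sensitivities.
g0, grad, sigma = 20.0, np.array([0.8, -0.5, 0.3]), np.array([0.4, 0.5, 0.2])
y_analytic = yield_first_order(g0, grad, sigma, 19.0, 21.0)

# Cross-check by scrambled-Sobol' sampling of the same linear model.
u = qmc.Sobol(3, scramble=True, seed=5).random_base2(14)
g = g0 + (norm.ppf(u.clip(1e-12, 1.0 - 1e-12)) * sigma) @ grad
y_qmc = float(np.mean((19.0 < g) & (g < 21.0)))
```

In the thesis the gradient would come from a handful of SPICE-like simulations rather than being given, and the surrogate would feed the sorted-QMC, control-variates or importance-sampling estimators.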
|
19 |
Modélisation d’actifs industriels pour l’optimisation robuste de stratégies de maintenance / Modelling of industrial assets in view of robust maintenance optimization. Demgne, Jeanne Ady, 16 October 2015 (has links)
This work proposes new methods for assessing the risk indicators associated with an investment plan, with a view to robust maintenance optimization for a fleet of components. Quantifying these indicators requires rigorous modelling of the stochastic evolution of the lifetimes of components subject to maintenance. To that end, we use piecewise deterministic Markov processes (PDMPs), which are commonly employed in dynamic reliability to model components interacting with their environment. The indicators for comparing candidate maintenance strategies are derived from the net present value (NPV), which stands for the difference between the cumulative discounted cash flows of a reference strategy and those of a candidate maintenance strategy. From a probabilistic point of view, the NPV is the difference of two dependent random variables, which considerably complicates its study. In this thesis, quasi-Monte Carlo methods are used as alternatives to the Monte Carlo method for quantifying the probability distribution of the NPV. These methods are first applied to illustrative examples and then adapted to evaluate maintenance strategies for two systems of components in an electric power plant. Coupling these methods with a genetic algorithm makes it possible to optimize an investment plan.
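The NPV indicator described above can be sketched on a toy single-component model (not the PDMP fleet model of the thesis): exponential lifetimes are sampled by inverse transform from scrambled Sobol' points, and the NPV is the discounted reference cost minus the candidate strategy's cost. All costs and rates are illustrative, and renewals after the first intervention are ignored.

```python
import numpy as np
from scipy.stats import qmc

def npv_samples(m=13, horizon=20.0, rate=0.07, mtbf=5.0,
                c_fail=100.0, c_prev=30.0, t_prev=4.0, seed=9):
    """QMC samples of the NPV of preventive replacement at t_prev versus
    a run-to-failure reference, for one component with an exponential
    lifetime (mean mtbf); renewals after the first event are ignored."""
    u = qmc.Sobol(1, scramble=True, seed=seed).random_base2(m).ravel()
    tf = -mtbf * np.log(1.0 - u)          # inverse-CDF lifetimes
    disc = lambda t: np.exp(-rate * t)
    # Reference strategy: pay c_fail if the failure occurs in the horizon.
    ref = np.where(tf < horizon, c_fail * disc(tf), 0.0)
    # Candidate: replace preventively at t_prev unless failure comes first.
    cand = np.where(tf < t_prev, c_fail * disc(tf), c_prev * disc(t_prev))
    return ref - cand                     # NPV = reference minus candidate
```

Because the reference and candidate cash flows are driven by the same lifetime, the two terms of the NPV are dependent, which is exactly the feature that complicates the distributional study in the thesis.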
|
20 |
Randomized Quasi-Monte Carlo Methods for Density Estimation and Simulation of Markov Chains. Ben Abdellah, Amal, 02 1900 (has links)
Randomized quasi-Monte Carlo (RQMC) is often used to estimate an integral over the s-dimensional unit cube (0,1)^s, interpreted as the mathematical expectation of some random variable X. It is well known that, under appropriate conditions, RQMC estimators can converge at a faster rate than crude Monte Carlo estimators. For the simulation of Markov chains over a large number of steps with RQMC, few results exist. The most promising approach proposed to date is the array-RQMC method, which simulates n copies of the chain in parallel using an independent set of RQMC points at each step and sorts the chains with a specific sorting function after each step. This method has given empirically significant results on several examples (convergence rates much better than those observed with standard Monte Carlo), although the empirically observed rates have not yet been proved theoretically. The first part of this thesis examines how RQMC can improve the convergence rate when estimating not only the expectation of X but also its distribution; the second part examines how RQMC can be used to simulate Markov chains over many steps via the array-RQMC method. The thesis comprises four articles. In the first article, we study the efficiency gained by replacing Monte Carlo (MC) with either randomized quasi-Monte Carlo (RQMC) or stratified sampling, and show how these methods can be applied to make a sample more representative. We further show how they can help reduce the integrated variance (IV) and the mean integrated squared error (MISE) of kernel density estimators (KDEs). We provide both theoretical and empirical results on the convergence rates and show that the RQMC and stratified estimators can achieve significant IV and MISE reductions, with even faster convergence rates than MC in some situations, while leaving the bias unchanged. In the second article, we examine the combination of RQMC with a conditional Monte Carlo approach to density estimation, defined by taking the stochastic derivative of a conditional CDF of X, which provides a large improvement when applicable. Using array-RQMC to price an Asian option under an ordinary geometric Brownian motion process with fixed volatility has been attempted in the past, and a convergence rate of O(n⁻²) was observed for the variance. In the third article, we study the pricing of Asian options when the underlying process has stochastic volatility; more specifically, we examine the variance-gamma, Heston and Ornstein-Uhlenbeck stochastic volatility models, and show how applying the array-RQMC method to the pricing of Asian and European options can significantly reduce the variance. Finally, the (fixed-step) τ-leaping algorithm is an efficient sample-path method for simulating stochastic biological systems and well-stirred chemical reaction systems, for which crude Monte Carlo (MC) is a feasible approach. Simulating the Markov chain of fixed-step τ-leaping via ordinary RQMC has already been explored empirically and, as the dimension of the problem increases, the observed convergence rate of the variance falls back to that of MC. In the last article, we study the combination of array-RQMC with this algorithm and empirically demonstrate that array-RQMC provides a significant variance reduction compared to the standard MC algorithm.
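The fixed-step τ-leaping algorithm discussed in the last article can be sketched on a birth-death toy system; the version below drives independent trajectories with plain Monte Carlo (array-RQMC would instead advance all copies with one sorted RQMC point set per step). Rates and step size are illustrative.

```python
import numpy as np

def tau_leap(x0=50, birth=10.0, death=0.2, tau=0.05, n_steps=200, seed=0):
    """Fixed-step tau-leaping for the birth-death system 0 -> X (rate 10),
    X -> 0 (rate 0.2 per molecule): each step fires Poisson numbers of
    reactions. The stationary mean is birth/death = 50."""
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(n_steps):
        x += rng.poisson(birth * tau) - rng.poisson(death * x * tau)
        x = max(x, 0)                   # populations cannot go negative
    return x

# Plain MC over independent trajectories; array-RQMC would instead drive
# all copies with one sorted RQMC point set at each step.
xs = [tau_leap(seed=s) for s in range(400)]
```

Each step consumes two uniforms (one per reaction channel), so a T-step trajectory is a high-dimensional integrand; this growing dimension is exactly why ordinary RQMC loses its edge here while array-RQMC does not.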
|