1.
Importance Sampling to Accelerate the Convergence of Quasi-Monte Carlo. Hörmann, Wolfgang; Leydold, Josef. January 2007 (PDF)
Importance sampling is a well-known variance reduction technique for Monte Carlo simulation. For quasi-Monte Carlo integration with low discrepancy sequences it has been neglected in the literature, although it is easy to see that it can reduce the variation of the integrand for many important integration problems. For lattice rules importance sampling is especially valuable, as it can be used to obtain a smooth periodic integrand and thus accelerate the convergence of the integration procedure. This can clearly speed up QMC algorithms for integration problems up to dimensions 10 to 12. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
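To make the mechanism concrete, here is a minimal sketch (ours, not the report's implementation) of importance sampling paired with a low discrepancy sequence: the change of measure to an Exponential(1) density absorbs the exp(-x) factor of the integrand, so the quasi-Monte Carlo points only have to integrate the smooth, bounded weight cos(x). The integrand and all parameter choices are illustrative assumptions.

```python
# Minimal sketch (not the report's code): estimate
# I = integral_0^inf exp(-x) cos(x) dx (true value 1/2) by sampling
# the Exponential(1) importance density via the inverse-CDF transform
# of a van der Corput low discrepancy sequence.
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence."""
    pts = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts[i] = x
    return pts

u = van_der_corput(4096)      # low discrepancy points in (0, 1)
x = -np.log1p(-u)             # inverse CDF of Exponential(1)
estimate = np.cos(x).mean()   # weight f/g = cos(x) is smooth and bounded
print(estimate)               # close to 0.5
```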
2.
Anti-Aliased Low Discrepancy Samplers for Monte Carlo Estimators in Physically Based Rendering. Perrier, Hélène. 07 March 2018
When a 3D object is displayed on a computer screen, the 3D scene is transformed into a 2D image, that is, a set of colored pixels. Rendering is the discipline of finding the correct color to give each of those pixels. This is done by integrating the light arriving from every direction that the object's surface reflects back toward the pixel, weighted by a visibility function. Unfortunately, a computer cannot evaluate such an integral exactly, which leaves two options: find an analytical expression that removes the integral from the equation (a statistics-based strategy), or approximate the equation numerically by drawing samples in the integration domain and estimating the integral with Monte Carlo methods. This work focuses on numerical integration and sampling theory. Sampling is a fundamental part of numerical integration: a good sampler should generate points that cover the domain uniformly to avoid biasing the integration, and, when used in computer graphics, the point set should not exhibit any visible structure, since such structure appears as aliasing artifacts in the resulting image. Furthermore, a stochastic sampler should minimize the variance of the integration so as to converge to a correct approximation with as few samples as possible.
The many existing samplers can be roughly grouped into two families: Blue Noise samplers, which achieve low integration variance while generating unstructured point sets, but are often slow to generate a point set; and Low Discrepancy samplers, which minimize the variance of the integration and can generate and enrich a point set very quickly, but exhibit strong structural artifacts when used in rendering. Our work aimed at developing hybrid samplers that are both Blue Noise and Low Discrepancy.
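As a hedged illustration of the estimators discussed here (not code from the thesis), the sketch below computes a pixel-style Monte Carlo average over the unit square with a 2D Halton low discrepancy sampler and with pure random sampling; the radiance function is an assumed stand-in for the light arriving over a pixel's footprint.

```python
# Minimal sketch (ours, not the thesis code): a pixel estimate as a
# Monte Carlo average over [0,1)^2, comparing pure random sampling
# with the (2, 3)-Halton low discrepancy sampler.
import numpy as np

def radical_inverse(i, base):
    q, denom, x = i, 1.0, 0.0
    while q > 0:
        q, r = divmod(q, base)
        denom *= base
        x += r / denom
    return x

def halton_2d(n):
    """First n points of the (2, 3)-Halton sequence in [0, 1)^2."""
    return np.array([[radical_inverse(i + 1, 2),
                      radical_inverse(i + 1, 3)] for i in range(n)])

def radiance(u, v):
    # Assumed stand-in integrand for the light over the pixel footprint.
    return np.sin(np.pi * u) * np.cos(0.5 * np.pi * v)

n = 1024
pts = halton_2d(n)
qmc_estimate = radiance(pts[:, 0], pts[:, 1]).mean()

rng = np.random.default_rng(0)
mc_estimate = radiance(rng.random(n), rng.random(n)).mean()
# True value is 4/pi^2 (about 0.405); the Halton estimate is
# typically far closer than the random one at equal sample count.
print(qmc_estimate, mc_estimate)
```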
3.
Directional Control of Generating Brownian Path under Quasi Monte Carlo. Liu, Kai. January 2012
Quasi-Monte Carlo (QMC) methods are playing an increasingly important role in computational finance. This is attributed to the increased complexity of derivative securities and the sophistication of financial models. Simple closed-form solutions for these finance applications typically do not exist, and hence numerical methods must be used to approximate their solutions. The QMC method has been proposed as an alternative to the Monte Carlo (MC) method to accomplish this objective. Unlike MC methods, the efficiency of QMC-based methods is highly dependent on the dimensionality of the problem. In particular, numerous studies have documented, under the Black-Scholes model, the critical role of the generating matrix used to simulate the Brownian paths. Numerical results support the notion that a generating matrix that reduces the effective dimension of the underlying problem increases the efficiency of QMC. Consequently, dimension reduction methods such as principal component analysis, the Brownian bridge, linear transformation and orthogonal transformation have been proposed to further enhance QMC. Motivated by these results, we first propose a new measure to quantify the effective dimension. We then propose a new dimension reduction method which we refer to as the directional control (DC) method. The proposed DC method has the advantage that it depends explicitly on the given function of interest. Furthermore, by appropriately assigning the direction of importance of the given function, the proposed method optimally determines the generating matrix used to simulate the Brownian paths. Because of the flexibility of our proposed method, it can be shown that many of the existing dimension reduction methods are special cases of the proposed DC method. Finally, many numerical examples are provided to support the competitive efficiency of the proposed method.
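To illustrate what a generating matrix is in this context (our sketch, not the thesis's DC method), the code below builds two factorizations A of the Brownian path covariance, the forward Cholesky construction and the principal component construction; both yield valid paths W = Az from standard Gaussian input, but they allocate the path's variance across QMC coordinates very differently, which is exactly what dimension reduction methods exploit.

```python
# Minimal sketch (ours, not the thesis's DC method): two generating
# matrices A with A A^T = Sigma, the covariance of a discrete Brownian
# path. Both map N(0, I) vectors z to valid paths W = A z, but they
# distribute the path's variance over QMC coordinates differently.
import numpy as np

m = 8                                    # time steps, t_k = (k+1)/m
t = np.arange(1, m + 1) / m
Sigma = np.minimum.outer(t, t)           # Cov(W_s, W_t) = min(s, t)

A_fwd = np.linalg.cholesky(Sigma)        # forward (step-by-step) construction

eigval, eigvec = np.linalg.eigh(Sigma)   # principal component construction:
order = np.argsort(eigval)[::-1]         # leading columns carry most variance
A_pca = eigvec[:, order] * np.sqrt(eigval[order])

# Both reproduce Sigma; PCA concentrates variance in the first
# coordinates, which lowers the effective dimension seen by a QMC rule.
assert np.allclose(A_fwd @ A_fwd.T, Sigma)
assert np.allclose(A_pca @ A_pca.T, Sigma)
print(np.diag(A_pca.T @ A_pca)[:3])      # variance in the first PCA dims
```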
4.
Evaluating of path-dependent securities with low discrepancy methods. Krykova, Inna. 13 January 2004
The objective of this thesis is the implementation of Monte Carlo and quasi-Monte Carlo methods for the valuation of financial derivatives. Advantages and disadvantages of each method are stated based on both the literature and on independent computational experiments by the author. Various methods to generate pseudo-random and quasi-random sequences are implemented in a computationally uniform way to enable objective comparisons. Code is developed in VBA and C++, with the C++ code converted to a COM object to make it callable from Microsoft Excel and Matlab. From the simulated random sequences, Brownian motion paths are built using various constructions and variance-reduction techniques, including Brownian bridge and Latin hypercube. The power and efficiency of the methods are compared on four financial securities pricing problems: European options, Asian options, barrier options and mortgage-backed securities. A detailed step-by-step algorithm is given for each method (construction of pseudo- and quasi-random sequences, Brownian motion paths for some stochastic processes, variance- and dimension-reduction techniques, evaluation of some financial securities using different variance-reduction techniques, etc.).
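As a hedged sketch of the kind of valuation pipeline described (not the author's VBA/C++ code), the following prices an arithmetic-average Asian call under Black-Scholes using SciPy's scrambled Sobol generator as a stand-in for the thesis's own sequence implementations; all market parameters are assumed for illustration.

```python
# Minimal sketch (ours, not the thesis code): an arithmetic-average
# Asian call priced with a scrambled Sobol sequence. Uniform Sobol
# points are mapped to Gaussian increments via the inverse normal CDF.
import numpy as np
from scipy.stats import norm, qmc

S0, K, r, sigma, T, m = 100.0, 100.0, 0.05, 0.2, 1.0, 16  # assumed params
dt = T / m

sobol = qmc.Sobol(d=m, scramble=True, seed=42)
u = sobol.random(2**14)                        # (n, m) uniforms in (0, 1)
z = norm.ppf(u)                                # Gaussian increments
logpaths = np.cumsum((r - 0.5 * sigma**2) * dt
                     + sigma * np.sqrt(dt) * z, axis=1)
S = S0 * np.exp(logpaths)                      # prices at the m dates
payoff = np.maximum(S.mean(axis=1) - K, 0.0)   # arithmetic-average call
price = np.exp(-r * T) * payoff.mean()
print(price)                                   # roughly 5.8 here
```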
5.
Discrepancy of sequences and error estimates for the quasi-Monte Carlo method. Vesterinen, Niklas. January 2020
We present the notions of uniform distribution and discrepancy of sequences contained in the unit interval, as well as an important application of discrepancy in numerical integration by way of the quasi-Monte Carlo method. Some fundamental (and other interesting) results regarding these notions are presented, along with some detailed and instructive examples and comparisons (some of which are not often provided in the literature). We go on to analytical and numerical investigations of the asymptotic behaviour of the discrepancy (in particular for the van der Corput sequence) and of the general error estimates of the quasi-Monte Carlo method. Using the discoveries from these investigations, we give a conditional proof of the van der Corput theorem. Furthermore, we illustrate that by using low discrepancy sequences (such as the van der Corput sequence), a rather fast convergence rate of the quasi-Monte Carlo method may still be achieved, even in situations where the famous theoretical result, the Koksma inequality, has been rendered unusable.
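For a concrete view of the objects studied here (our sketch, not the thesis code), the following computes the exact star discrepancy of the first n van der Corput points using Niederreiter's closed-form formula for sorted one-dimensional point sets, making the O(log n / n) decay visible.

```python
# Minimal sketch (ours, not the thesis code): exact star discrepancy
# of the van der Corput sequence via Niederreiter's 1D formula.
import numpy as np

def van_der_corput(n, base=2):
    pts = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        pts[i] = x
    return pts

def star_discrepancy_1d(points):
    """D*_n = max_i max(i/n - x_(i), x_(i) - (i-1)/n) over sorted x_(i)."""
    x = np.sort(points)
    n = len(x)
    i = np.arange(1, n + 1)
    return np.maximum(i / n - x, x - (i - 1) / n).max()

for n in [16, 64, 256, 1024]:
    d = star_discrepancy_1d(van_der_corput(n))
    print(n, d, d * n / np.log(n))   # D*_n = O(log n / n) for van der Corput
```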
6.
Statistical methods for the parametric yield estimation of analog/RF integrated circuits. Desrumaux, Pierre-François. 08 November 2013
Semiconductor device fabrication is a complex process which is subject to various sources of variability. These variations can impact the functionality and performance of analog integrated circuits, which leads to yield loss, potential chip modifications, delayed time to market and reduced profit. Statistical circuit simulation methods make it possible to estimate the parametric yield of a circuit early in the design stage so that corrections can be made before manufacturing. However, traditional methods such as the Monte Carlo method and corner simulation have limitations, so an accurate analog yield estimate based on a small number of circuit simulations is needed. In this thesis, existing statistical methods from electronics and non-electronics publications are first described; these methods suffer from severe drawbacks such as the need for initial time-consuming circuit simulations or poor scaling with the number of random variables. Second, three novel statistical methods are proposed to accurately estimate the parametric yield of analog/RF integrated circuits based on a moderate number of circuit simulations: an automatically sorted quasi-Monte Carlo method, a kernel-based control variates method and an importance sampling method. The three methods rely on a mathematical model of the circuit performance metric constructed from a truncated first-order Taylor expansion. This modeling technique is selected because it requires a minimal number of SPICE-like circuit simulations. Both theoretical and simulation results show that the proposed methods lead to significant speedup or improvement in accuracy compared to other existing methods.
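As a hedged sketch of one of the three ingredients, importance sampling (ours, not the thesis's estimator), the code below estimates a rare failure probability for a performance metric modeled, as in the thesis, by a first-order Taylor expansion in the process variables; the gradient, spec limit and shift size are assumed for illustration.

```python
# Minimal sketch (ours, not the thesis's estimators): importance
# sampling for a rare failure probability. The circuit "fails" when a
# performance metric, modeled as a linear (first-order Taylor) function
# of the process variables, crosses a spec limit; sampling is shifted
# toward the failure region and corrected by likelihood ratios.
import numpy as np

rng = np.random.default_rng(1)
d, n = 6, 20_000
g = rng.normal(size=d)                 # assumed Taylor gradient
spec = 4.0 * np.linalg.norm(g)         # assumed spec limit, ~4 sigma out

def fails(x):
    return x @ g > spec                # linearized failure indicator

# Plain Monte Carlo: almost no failures at this spec level.
x = rng.normal(size=(n, d))
p_mc = fails(x).mean()

# Importance sampling: shift the mean along the gradient direction.
shift = 4.0 * g / np.linalg.norm(g)
y = rng.normal(size=(n, d)) + shift
# Likelihood ratio N(0, I) / N(shift, I) evaluated at the samples y.
lr = np.exp(-y @ shift + 0.5 * shift @ shift)
p_is = (fails(y) * lr).mean()

print(p_mc, p_is)   # p_is is near Phi(-4) ~ 3.2e-5, with far lower variance
```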
7.
Novel Computational Methods for the Reliability Evaluation of Composite Power Systems using Computational Intelligence and High Performance Computing Techniques. Green, Robert C., II. 24 September 2012
No description available.
8.
Applications of Heavy-Tailed distributions in finance and actuarial science. Liu, I Chien (劉議謙). Unknown Date
This thesis focuses on applications of heavy-tailed distributions in finance and actuarial science, and provides three of them. First, we refine the Lee-Carter model (1992) with heavy-tailed distributions; the results show that the Lee-Carter model with heavy-tailed innovations provides better fitting and prediction. Second, we likewise model the error term of the Renshaw and Haberman model (2006) with heavy-tailed distributions and provide an iterative fitting algorithm that generates maximum likelihood estimates under the Cox regression model. Using the RH model with non-Gaussian innovations leads to lower premiums for longevity swaps and avoids underestimating loss reserves for England and Wales. Third, we use the multivariate affine generalized hyperbolic (MAGH) distributions introduced by Schmidt et al. (2006) and the low discrepancy mesh (LDM) method introduced by Boyle et al. (2003) to price multidimensional Bermudan derivatives. The LDM estimates are higher than the corresponding estimates from the least squares method (LSM) of Longstaff and Schwartz (2001), consistent with the property that the LDM estimate is biased high while the LSM estimate is biased low; this property ensures that the true option value lies between the two bounds.
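Since the abstract leans on the LSM low-bias bound, here is a minimal sketch (ours, not the thesis code) of the Longstaff-Schwartz least squares method for a Bermudan put; the quadratic regression basis and all market parameters are illustrative assumptions.

```python
# Minimal sketch (ours, not the thesis code) of Longstaff-Schwartz LSM
# for a Bermudan put: at each exercise date the continuation value is
# regressed on polynomial basis functions of the asset price, using
# only in-the-money paths. The result is the low-biased bound the
# abstract refers to.
import numpy as np

S0, K, r, sigma, T, m, n = 36.0, 40.0, 0.06, 0.2, 1.0, 50, 100_000
dt = T / m
disc = np.exp(-r * dt)

rng = np.random.default_rng(7)
z = rng.normal(size=(n, m))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                    # value if held to maturity

for k in range(m - 2, -1, -1):             # backward induction
    cash *= disc
    itm = payoff(S[:, k]) > 0              # regress on in-the-money paths
    if itm.sum() == 0:
        continue
    x = S[itm, k]
    A = np.column_stack([np.ones_like(x), x, x**2])   # quadratic basis
    beta, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
    cont = A @ beta                        # estimated continuation value
    exercise = payoff(x) > cont
    cash[np.flatnonzero(itm)[exercise]] = payoff(x[exercise])

price = disc * cash.mean()
print(price)   # roughly 4.47, cf. Longstaff and Schwartz (2001)
```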