41 |
Efficient Simulations in Finance
Sak, Halis January 2008 (has links) (PDF)
Measuring the risk of a credit portfolio is a challenge for financial institutions because of the regulations introduced by the Basel Committee. In recent years many models and state-of-the-art methods, which utilize Monte Carlo simulation, have been proposed to solve this problem. In most of these models, factors are used to account for the correlations between obligors. We concentrate on the normal copula model, which assumes multivariate normality of the factors. Computation of value at risk (VaR) and expected shortfall (ES) for realistic credit portfolio models is subtle, since (i) there is dependence throughout the portfolio and (ii) an efficient method is required to compute tail loss probabilities and conditional expectations at multiple points simultaneously. This is why Monte Carlo simulation must be improved by variance reduction techniques such as importance sampling (IS). A new method is therefore developed for simulating tail loss probabilities and conditional expectations for a standard credit risk portfolio. The new method integrates IS with inner replications using the geometric shortcut for dependent obligors in a normal copula framework. Numerical results show that the new method outperforms naive simulation for computing tail loss probabilities and conditional expectations at a single loss level x and at the VaR. Finally, it is shown that, compared to the standard t statistic, the skewness-correction method of Peter Hall is a simple and more accurate alternative for constructing confidence intervals. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
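For illustration only, here is a minimal sketch (not the thesis's geometric-shortcut inner-replication method) of the kind of importance sampling used for such tail probabilities: a one-factor normal copula portfolio whose systematic factor is mean-shifted toward the loss region and reweighted by the likelihood ratio. All portfolio parameters and the shift are hypothetical.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    # hypothetical homogeneous portfolio in a one-factor normal copula
    n_obligors = 100
    p = np.full(n_obligors, 0.01)            # marginal default probabilities
    c = np.full(n_obligors, 1.0)             # exposures (loss given default)
    a = 0.5                                  # factor loading
    x = 10.0                                 # loss threshold for P(L > x)
    thresholds = norm.ppf(p)                 # latent-variable default thresholds

    def simulate(n_paths, mu=0.0):
        """Simulate losses with the systematic factor drawn from N(mu, 1);
        returns losses and the likelihood ratios dP/dQ of the factor shift."""
        z = rng.normal(mu, 1.0, size=n_paths)
        eps = rng.normal(size=(n_paths, n_obligors))
        latent = a * z[:, None] + np.sqrt(1.0 - a**2) * eps
        losses = (latent <= thresholds).astype(float) @ c
        lr = np.exp(-mu * z + 0.5 * mu**2)   # density ratio of N(0,1) to N(mu,1)
        return losses, lr

    n_paths = 50_000

    losses, _ = simulate(n_paths)                       # naive Monte Carlo
    print(f"naive  P(L > x) ~ {np.mean(losses > x):.2e}")

    # IS: shift the factor toward the loss region (low z means more defaults here);
    # mu = -2.5 is a heuristic choice, ideally tuned so the conditional mean loss is near x
    losses_q, lr = simulate(n_paths, mu=-2.5)
    w = (losses_q > x) * lr
    print(f"IS     P(L > x) ~ {w.mean():.2e} +/- {w.std(ddof=1) / np.sqrt(n_paths):.2e}")

The same weighted samples can be reused at several loss levels x, which is what makes IS attractive for computing tail probabilities and conditional expectations at multiple points simultaneously.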
|
42 |
Numerical methods for homogenization : applications to random media
Costaouec, Ronan 23 November 2011 (has links) (PDF)
In this thesis we investigate numerical methods for the homogenization of materials whose structures, at fine scales, are characterized by random heterogeneities. Under appropriate hypotheses, the effective properties of such materials are given by closed formulas. In practice, however, computing these properties is a difficult task because it involves solving partial differential equations with stochastic coefficients that are, in addition, posed on the whole space. In this work, we address this difficulty in two different ways. Standard discretization techniques lead to random approximate effective properties. In Part I, we aim at reducing their variance, using a well-known variance reduction technique that has already been used successfully in other domains. Part II focuses on the case when the material can be seen as a small random perturbation of a periodic material. We then show, both numerically and theoretically, that in this case computing the effective properties is much less costly than in the general case.
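The abstract does not name the variance reduction technique used in Part I; as one classical member of that family, here is a minimal antithetic-variates sketch for a generic scalar Monte Carlo expectation. The integrand is a toy stand-in, not the homogenized coefficient of the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy quantity of interest: E[f(G)] with G ~ N(0, 1); f is monotone, so the
    # antithetic pair (G, -G) yields negatively correlated estimates.
    def f(g):
        return np.exp(0.3 * g)

    n_pairs = 50_000

    # standard Monte Carlo with 2 * n_pairs independent draws (same work budget)
    g_indep = rng.normal(size=2 * n_pairs)
    mc = f(g_indep)

    # antithetic estimator: average f over each pair (g, -g)
    g = rng.normal(size=n_pairs)
    av = 0.5 * (f(g) + f(-g))

    exact = np.exp(0.3**2 / 2)
    for name, sample in [("standard MC", mc), ("antithetic ", av)]:
        stderr = sample.std(ddof=1) / np.sqrt(sample.size)
        print(f"{name}: {sample.mean():.5f} +/- {stderr:.5f}   (exact = {exact:.5f})")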
|
43 |
On improving variational inference with low-variance multi-sample estimators
Dhekane, Eeshan Gunesh 08 1900 (has links)
Les progrès de l’inférence variationnelle, tels que l’approche de variational autoencoder (VI) (Kingma and Welling (2013), Rezende et al. (2014)) et ses nombreuses modifications, se sont avérés très efficaces pour l’apprentissage des représentations latentes de données. Importance-weighted variational inference (IWVI) par Burda et al. (2015) améliore l’inférence variationnelle en utilisant plusieurs échantillons indépendants et répartis de manière identique pour obtenir des limites inférieures variationnelles plus strictes. Des articles récents tels que l’approche de hierarchical importance-weighted autoencoders (HIWVI) par Huang et al. (2019) et la modélisation de la distribution conjointe par Klys et al. (2018) démontrent l’idée de modéliser une distribution conjointe sur des échantillons pour améliorer encore l’IWVI en le rendant efficace pour l’échantillon. L’idée sous-jacente de ce mémoire est de relier les propriétés statistiques des estimateurs au resserrement des limites variationnelles. Pour ce faire, nous démontrons d’abord une borne supérieure sur l’écart variationnel en termes de variance des estimateurs sous certaines conditions. Nous prouvons que l’écart variationnel peut être fait disparaître au taux de O(1/n) pour une grande famille d’approches d’inférence variationelle. Sur la base de ces résultats, nous proposons l’approche de Conditional-IWVI (CIWVI), qui modélise explicitement l’échantillonnage séquentiel et conditionnel de variables latentes pour effectuer importance-weighted variational inference, et une approche connexe de Antithetic-IWVI (AIWVI) par Klys et al. (2018). Nos expériences sur les jeux de données d’analyse comparative, tels que MNIST (LeCun et al. (2010)) et OMNIGLOT (Lake et al. (2015)), démontrent que nos approches fonctionnent soit de manière compétitive, soit meilleures que les références IWVI et HIWVI en tant que le nombre d’échantillons augmente. De plus, nous démontrons que les résultats sont conformes aux propriétés théoriques que nous avons prouvées. En conclusion, nos travaux fournissent une perspective sur le taux d’amélioration de l’inference variationelle avec le nombre d’échantillons utilisés et l’utilité de modéliser la distribution conjointe sur des représentations latentes pour l’efficacité de l’échantillon. / Advances in variational inference, such as variational autoencoders (VI) (Kingma and Welling (2013), Rezende et al. (2014)) along with its numerous modifications, have proven highly successful for learning latent representations of data. Importance-weighted variational inference (IWVI) by Burda et al. (2015) improves the variational inference by using multiple i.i.d. samples for obtaining tighter variational lower bounds. Recent works like hierarchical importance-weighted autoencoders (HIWVI) by Huang et al. (2019) and joint distribution modeling by Klys et al. (2018) demonstrate the idea of modeling a joint distribution over samples to further improve over IWVI by making it sample efficient. The underlying idea in this thesis is to connect the statistical properties of the estimators to the tightness of the variational bounds. Towards this, we first demonstrate an upper bound on the variational gap in terms of the variance of the estimators under certain conditions. We prove that the variational gap can be made to vanish at the rate of O(1/n) for a large family of VI approaches. 
Based on these results, we propose the approach of Conditional-IWVI (CIWVI), which explicitly models the sequential and conditional sampling of latent variables to perform importance-weighted variational inference, and a related approach of Antithetic-IWVI (AIWVI) by Klys et al. (2018). Our experiments on the benchmarking datasets MNIST (LeCun et al. (2010)) and OMNIGLOT (Lake et al. (2015)) demonstrate that our approaches perform either competitively with or better than the baselines IWVI and HIWVI as the number of samples increases. Further, we demonstrate that the results are in accordance with the theoretical properties we proved. In conclusion, our work provides a perspective on the rate of improvement in VI with the number of samples used and on the utility of modeling the joint distribution over latent representations for sample efficiency in VI.
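As a concrete illustration of the importance-weighted bound these approaches build on (Burda et al. (2015)), the following sketch evaluates the bound L_K on a toy Gaussian model (not the CIWVI/AIWVI estimators themselves) and shows it tightening toward log p(x) as the number of samples K grows. The model and the deliberately imperfect proposal are made up for illustration.

    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(2)

    # Toy model: z ~ N(0, 1), x | z ~ N(z, sigma_x^2), so exactly x ~ N(0, 1 + sigma_x^2)
    sigma_x = 0.8
    x = 1.5

    def log_joint(z):
        return (-0.5 * z**2 - 0.5 * np.log(2 * np.pi)
                - 0.5 * (x - z)**2 / sigma_x**2 - 0.5 * np.log(2 * np.pi * sigma_x**2))

    # deliberately mismatched Gaussian proposal q(z | x) = N(mu_q, s_q^2)
    mu_q, s_q = 0.5 * x, 1.0

    def log_q(z):
        return -0.5 * (z - mu_q)**2 / s_q**2 - 0.5 * np.log(2 * np.pi * s_q**2)

    def iwae_bound(K, n_outer=20_000):
        """Monte Carlo estimate of the importance-weighted bound L_K = E[log mean_k w_k]."""
        z = rng.normal(mu_q, s_q, size=(n_outer, K))
        log_w = log_joint(z) - log_q(z)                    # log importance weights
        return np.mean(logsumexp(log_w, axis=1) - np.log(K))

    log_px = -0.5 * x**2 / (1 + sigma_x**2) - 0.5 * np.log(2 * np.pi * (1 + sigma_x**2))
    print(f"exact log p(x) = {log_px:.4f}")
    for K in (1, 5, 50):
        print(f"L_{K:>2} ~ {iwae_bound(K):.4f}")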
|
44 |
Variance reduction methods for numerical solution of plasma kinetic diffusion
Höök, Lars Josef January 2012 (has links)
Performing detailed simulations of plasma kinetic diffusion is a challenging task and currently requires the largest computational facilities in the world. The reason is that the physics of a confined heated plasma occurs on a broad range of temporal and spatial scales. It is therefore of interest to improve the computational algorithms alongside the development of more powerful computational resources. Kinetic diffusion processes in plasmas are commonly simulated with the Monte Carlo method, where a discrete set of particles is sampled from a distribution function and advanced in a Lagrangian frame according to a set of stochastic differential equations. The Monte Carlo method introduces computational error in the form of statistical random noise produced by the finite number of particles (or markers) N, and the error scales as αN^(−β), where β = 1/2 for the standard Monte Carlo method. This requires a large number of simulated particles in order to obtain a sufficiently low numerical noise level, so it is essential to use techniques that reduce the numerical noise. Such methods are commonly called variance reduction methods. In this thesis, we have developed new variance reduction methods with application to plasma kinetic diffusion. The methods are suitable for simulation of RF-heating and transport, but are not limited to these types of problems. We have derived a novel variance reduction method that minimizes the number of required particles from an optimization model. This implicitly reduces the variance when calculating the expected value of the distribution, since for a fixed error the optimization model ensures that a minimal number of particles is needed. Techniques that reduce the noise by improving the order of convergence have also been considered. Two different methods have been tested on a neutral beam injection scenario: the scrambled Brownian bridge method and a method here called the sorting and mixing method of Lécot and Khettabi [1999]. Both methods converge faster than the standard Monte Carlo method for a modest number of time steps, but fail to converge correctly for a large number of time steps, the range required for detailed plasma kinetic simulations. Different techniques that have the potential to improve the convergence in this range of time steps are discussed. / QC 20120314
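A minimal sketch of the αN^(−1/2) error scaling mentioned above, using a toy observable (mean kinetic energy of 1-D Maxwellian velocities) as a stand-in for a real plasma quantity; it is not the thesis's RF-heating or neutral beam setting.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy observable: mean kinetic energy E[v^2 / 2] for 1-D velocities v ~ N(0, 1)
    # in thermal units; its exact value is 0.5.  Only the error scaling matters here.
    exact = 0.5
    n_repeats = 200

    print("      N       RMSE    RMSE * sqrt(N)")
    for n_particles in (10**2, 10**3, 10**4, 10**5):
        estimates = np.empty(n_repeats)
        for r in range(n_repeats):
            v = rng.normal(size=n_particles)       # sampled marker velocities
            estimates[r] = np.mean(0.5 * v**2)     # Monte Carlo estimate
        rmse = np.sqrt(np.mean((estimates - exact)**2))
        print(f"{n_particles:>7d}   {rmse:.2e}   {rmse * np.sqrt(n_particles):.3f}")

The last column stays roughly constant across N, which is the N^(−1/2) convergence that variance reduction methods aim to beat or to shift to a smaller constant.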
|
45 |
股權連結結構型商品之評價 / Valuation of Equity-Linked Structured Note
王瑞元, Wang, Jui Yuan Unknown Date (has links)
本文整理市場上已發行結構債的現金流量型式，且利用風險中立評價法推導多資產Quanto模型，並以蒙地卡羅模擬法模擬外幣計價的結構型商品的理論價格，除了計算使用Quanto模型所求得的理論價格外，本文也比較使用Quanto模型與沒有使用Quanto模型評價商品時理論價格的差異，此外也進行商品的利率敏感度分析和相關係數敏感度分析；其後找到有效的控制變數，利用變異數縮減技術克服蒙地卡羅模擬法收斂不易的缺點，增進模擬的效率與精準程度，最後並做變異數縮減的Rubust分析，討論在何種參數的設定下變異數縮減的效果會最好，及如何透過參數的選取，如參與率與保本率，設計商品與成本分析。 / This thesis surveys the cash-flow structures of structured notes already issued in the market, derives a multi-asset Quanto model by risk-neutral valuation, and uses Monte Carlo simulation to compute the theoretical prices of foreign-currency-denominated structured products. Besides the theoretical price obtained under the Quanto model, the thesis also compares the theoretical prices obtained with and without the Quanto model, and performs interest-rate and correlation sensitivity analyses of the products. Effective control variates are then identified, and variance reduction techniques are applied to overcome the slow convergence of Monte Carlo simulation, improving the efficiency and precision of the simulation. Finally, a robustness analysis of the variance reduction is carried out to determine under which parameter settings the variance reduction works best, and to show how product design and cost analysis can be performed through the choice of parameters such as the participation rate and the principal-protection rate.
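As a rough illustration of the control-variate idea invoked here (not the thesis's multi-asset Quanto note), the following sketch prices a plain European call by Monte Carlo, using the discounted terminal price as control variate since its expectation is known to equal the initial price. All market parameters are hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)

    # hypothetical single-asset example: European call under Black-Scholes dynamics
    s0, k, r, sigma, t = 100.0, 105.0, 0.02, 0.25, 1.0
    n_paths = 100_000

    z = rng.normal(size=n_paths)
    s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    disc = np.exp(-r * t)

    payoff = disc * np.maximum(s_t - k, 0.0)    # plain Monte Carlo samples
    control = disc * s_t                        # control variate, E[control] = s0

    # optimal control-variate coefficient b* = Cov(payoff, control) / Var(control)
    b = np.cov(payoff, control)[0, 1] / np.var(control, ddof=1)
    cv = payoff - b * (control - s0)

    for name, sample in [("plain MC       ", payoff), ("control variate", cv)]:
        print(f"{name}: {sample.mean():.4f} +/- {sample.std(ddof=1) / np.sqrt(n_paths):.4f}")

The control variate works because the payoff and the terminal price are strongly correlated; the same principle is what makes a well-chosen control effective for more complex structured products.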
|
46 |
以有效率的方法進行一籃子違約交換之評價 / Efficient algorithms for basket default swap valuation
謝旻娟, Hsieh, Min Jyuan Unknown Date (has links)
相較於單一信用違約交換只能對單一信用標的進行信用保護,一籃子信用違約交換則能對一籃子的信用標的進行信用保護。此種產品的評價決定於一籃子信用標的實體的聯合機率分配,因此多個標的資產間違約相關性的衡量,對於一籃子信用違約交換的評價和風險管理是相當重要的課題。
在一個資產池中,有時可以將其切割成兩個以上的群體,各群體間彼此相互獨立,而在各群內彼此相依。我們將其視為在多因子模型下的特例,此模型提供我們更具彈性的方式去建立資產之間彼此的相關性。
在這篇文章中，我們主要以 Chiang, Yueh, and Hsieh (2007) 在單因子模型下所提出來的方法為基礎，將其延伸至多因子的模型下的特例。藉由選擇一個合適的(IS)分配，在每一次的模擬中必定會有k個違約事件發生；因此我們獲得一個有效率的方法對一籃子違約交換進行評價，此演算法不僅簡單並且其變異數較蒙地卡羅小。 / In contrast to a single-name credit default swap, which provides credit protection for a single underlying, a basket credit default swap extends the credit protection to a portfolio of obligors, with the restriction that the default of only one underlying is compensated. The price of such products depends on the joint default probability of the underlyings in the credit portfolio. Thus, the modeling of default correlation, default risk and expected loss is a key issue for the valuation and risk management of basket default swaps.
Sometimes a pool of underlying obligors can be split into two or more separate groups that are mutually independent, while the obligors within each group are dependent. These special cases provide a more flexible way to construct the correlation between two or more underlying obligors.
In this paper, our approach is based on the importance sampling (IS) method proposed by Chiang, Yueh and Hsieh (2007) under a one-factor model, which we then extend to a special case of the multi-factor model. By an appropriate choice of the importance sampling distribution, we establish a way of ensuring that for every path generated, k default events always take place. We thus obtain an efficient algorithm for basket default swap valuation. The algorithm is simple to implement and it also guarantees variance reduction.
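As a sketch of the "force k defaults on every path" idea (one simple valid construction, not necessarily that of Chiang, Yueh and Hsieh (2007)), the following estimates P(at least k defaults) for conditionally independent obligors by tilting each sequential default decision and correcting with the likelihood ratio. The default probabilities are hypothetical and can be thought of as conditional on the common factors.

    import numpy as np

    rng = np.random.default_rng(5)

    def is_prob_at_least_k_defaults(p, k, n_paths, rng):
        """Importance-sampling estimate of P(#defaults >= k) for independent default
        indicators with probabilities p.  Defaults are sampled sequentially under
        tilted probabilities q_i chosen so that every path ends with at least k
        defaults; the likelihood ratio corrects back to the original measure."""
        n = len(p)
        samples = np.empty(n_paths)
        for path in range(n_paths):
            needed, lr = k, 1.0
            for i in range(n):
                remaining = n - i
                if needed > 0:
                    q = max(p[i], needed / remaining)   # q = 1 forces a default when needed == remaining
                else:
                    q = p[i]                            # no tilting once k defaults have occurred
                if rng.random() < q:                    # default under the IS measure
                    lr *= p[i] / q
                    needed = max(needed - 1, 0)
                else:                                   # survival (only possible when q < 1)
                    lr *= (1.0 - p[i]) / (1.0 - q)
            samples[path] = lr                          # every path has >= k defaults by construction
        return samples

    p = np.full(25, 0.02)      # hypothetical (conditional) default probabilities
    k, n_paths = 5, 50_000
    samples = is_prob_at_least_k_defaults(p, k, n_paths, rng)
    print(f"IS estimate of P(N >= {k}): {samples.mean():.3e} "
          f"+/- {samples.std(ddof=1) / np.sqrt(n_paths):.3e}")

Because no simulated path is "wasted" on fewer than k defaults, the estimator has much lower variance than naive Monte Carlo for this rare event.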
|
47 |
Mathematical modelling and numerical simulation in materials science
Boyaval, Sébastien 16 December 2009 (has links) (PDF)
In a first part, we study numerical schemes using the finite-element method to discretize the Oldroyd-B system of equations, modelling a viscoelastic fluid under no-flow boundary conditions in a 2- or 3-dimensional bounded domain. The goal is to get schemes which are stable in the sense that they dissipate a free energy, thereby mimicking thermodynamical properties of dissipation similar to those actually identified for smooth solutions of the continuous model. This study adds to numerous previous ones about the instabilities observed in the numerical simulations of viscoelastic fluids (in particular those known as High Weissenberg Number Problems). To our knowledge, this is the first study that rigorously considers the numerical stability in the sense of an energy dissipation for Galerkin discretizations. In a second part, we adapt and use ideas of a numerical method initially developed in the works of Y. Maday, A.T. Patera et al., the reduced-basis method, in order to efficiently simulate some multiscale models. The principle is to numerically approximate each element of a parametrized family of complicated objects in a Hilbert space by the closest linear combination within the best linear subspace spanned by a few elements well chosen inside the same parametrized family. We apply this principle to numerical problems linked, first, to the numerical homogenization of second-order elliptic equations with two-scale oscillating diffusion coefficients; then, to the propagation of uncertainty (computations of the mean and the variance) in an elliptic problem with stochastic coefficients (a bounded stochastic field in a boundary condition of third type); and, last, to the Monte-Carlo computation of the expectations of numerous parametrized random variables, in particular functionals of parametrized Itô stochastic processes close to what is encountered in micro-macro models of polymeric fluids, with a control variate to reduce the variance. In each application, the goal of the reduced-basis approach is to speed up the computations without any loss of precision.
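A toy sketch of the reduced-basis principle described above: a parametrized family of functions is approximated by orthogonal projection onto the span of a few snapshots. In a genuine reduced-basis method the online approximation comes from solving a small reduced problem with a certified error estimate; here the projection only illustrates that a low-dimensional subspace captures the whole family. The parametrized family is invented for illustration.

    import numpy as np

    # Hypothetical parametrized family u(x; mu) = 1 / (1 + mu * x) on a grid,
    # standing in for the parametrized solutions a reduced-basis method targets.
    x = np.linspace(0.0, 1.0, 200)
    def u(mu):
        return 1.0 / (1.0 + mu * x)

    # offline stage: a few "well chosen" snapshots spanning the parameter range
    snapshot_mus = [1.0, 2.0, 4.0, 8.0, 16.0]
    snapshots = np.column_stack([u(mu) for mu in snapshot_mus])
    basis, _ = np.linalg.qr(snapshots)          # orthonormal reduced basis

    # online stage: best approximation of new parameter values in the reduced space
    test_mus = np.linspace(1.0, 16.0, 50)
    worst = 0.0
    for mu in test_mus:
        target = u(mu)
        approx = basis @ (basis.T @ target)     # orthogonal projection onto the basis
        rel_err = np.linalg.norm(target - approx) / np.linalg.norm(target)
        worst = max(worst, rel_err)

    print(f"{basis.shape[1]} basis functions, worst relative error over "
          f"{len(test_mus)} test parameters: {worst:.2e}")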
|
48 |
Estudo de algoritmos de otimização estocástica aplicados em aprendizado de máquina / Study of algorithms of stochastic optimization applied in machine learning problems
Fernandes, Jessica Katherine de Sousa 23 August 2017 (has links)
Em diferentes aplicações de Aprendizado de Máquina podemos estar interessados na minimização do valor esperado de certa função de perda. Para a resolução desse problema, Otimização estocástica e Sample Size Selection têm um papel importante. No presente trabalho se apresentam as análises teóricas de alguns algoritmos destas duas áreas, incluindo algumas variações que consideram redução da variância. Nos exemplos práticos pode-se observar a vantagem do método Stochastic Gradient Descent em relação ao tempo de processamento e memória, mas, considerando precisão da solução obtida juntamente com o custo de minimização, as metodologias de redução da variância obtêm as melhores soluções. Os algoritmos Dynamic Sample Size Gradient e Line Search with variable sample size selection apesar de obter soluções melhores que as de Stochastic Gradient Descent, a desvantagem se encontra no alto custo computacional deles. / In different Machine Learning applications we may be interested in minimizing the expected value of some loss function. Stochastic optimization and sample size selection play an important role in solving this problem. The present work presents the theoretical analysis of some algorithms from these two areas, including variations that incorporate variance reduction. In the practical examples we observe the advantage of Stochastic Gradient Descent with respect to processing time and memory; however, considering the accuracy of the obtained solution together with the cost of minimization, the variance reduction methodologies obtain the best solutions. The Dynamic Sample Size Gradient and Line Search with variable sample size selection algorithms, despite obtaining better solutions than Stochastic Gradient Descent, have the disadvantage of a high computational cost.
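As an illustration of the variance-reduction idea for stochastic gradients (the specific algorithms named above are not reproduced here), a minimal comparison of plain SGD against SVRG-style variance-reduced updates (Johnson and Zhang, 2013) on a hypothetical least-squares problem:

    import numpy as np

    rng = np.random.default_rng(6)

    # hypothetical least-squares problem: minimize (1/n) sum_i 0.5 * (a_i @ w - b_i)^2
    n, d = 1000, 20
    A = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    b = A @ w_true + 0.1 * rng.normal(size=n)

    def full_loss(w):
        return 0.5 * np.mean((A @ w - b)**2)

    def grad_i(w, i):
        return (A[i] @ w - b[i]) * A[i]

    lr = 0.01

    # plain SGD with a constant step size
    w = np.zeros(d)
    for t in range(5 * n):
        i = rng.integers(n)
        w -= lr * grad_i(w, i)
    print(f"SGD  loss: {full_loss(w):.6f}")

    # SVRG: periodically compute the full gradient at a snapshot and use it to
    # reduce the variance of each stochastic gradient
    w = np.zeros(d)
    for epoch in range(5):
        w_snap = w.copy()
        full_grad = (A.T @ (A @ w_snap - b)) / n
        for t in range(n):
            i = rng.integers(n)
            g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
            w -= lr * g
    print(f"SVRG loss: {full_loss(w):.6f}")
    print(f"least-squares optimum loss: "
          f"{full_loss(np.linalg.lstsq(A, b, rcond=None)[0]):.6f}")

Both runs use the same number of stochastic gradient evaluations; the variance-reduced variant typically lands much closer to the optimum, at the extra cost of the periodic full-gradient passes, which mirrors the accuracy-versus-cost trade-off discussed in the abstract.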
|
49 |
Dosimétrie neutron en radiothérapie : étude expérimentale et développement d'un outil personnalisé de calcul de dose Monte Carlo / Neutron dosimetry in radiotherapy : experimental study and Monte Carlo personalised dose calculation tool development
Elazhar, Halima 07 September 2018 (has links)
L’optimisation des traitements en radiothérapie vise à améliorer la précision de l’irradiation des cellules cancéreuses pour épargner le plus possible les organes environnants. Or la dose périphérique déposée dans les tissus les plus éloignés de la tumeur n’est actuellement pas calculée par les logiciels de planification de traitement, alors qu’elle peut être responsable de l’induction de cancers secondaires radio-induits. Parmi les différentes composantes, les neutrons produits par processus photo-nucléaires sont les particules secondaires pour lesquelles il y a un manque important de données dosimétriques. Une étude expérimentale et par simulation Monte Carlo de la production des neutrons secondaires en radiothérapie nous a conduit à développer un algorithme qui utilise la précision du calcul Monte Carlo pour l’estimation de la distribution 3D de la dose neutron délivrée au patient. Un tel outil permettra la création de bases de données dosimétriques pouvant être utilisées pour l’amélioration des modèles mathématiques « dose-risque » spécifiques à l’irradiation des organes périphériques à de faibles doses en radiothérapie. / Treatment optimization in radiotherapy aims at increasing the accuracy of cancer cell irradiation while sparing the surrounding healthy organs. However, the peripheral dose deposited in healthy tissues far away from the tumour is currently not calculated by treatment planning systems, even though it can be responsible for radiation-induced secondary cancers. Among the different components, the neutrons produced through photo-nuclear processes suffer from an important lack of dosimetric data. An experimental and Monte Carlo simulation study of secondary neutron production in radiotherapy led us to develop an algorithm that uses the precision of Monte Carlo calculation to estimate the 3D neutron dose delivered to the patient. Such a tool will allow the generation of dosimetric databases ready to be used to improve the “dose-risk” mathematical models specific to the low-dose irradiation of peripheral organs occurring in radiotherapy.
|
50 |
Accelerated clinical prompt gamma simulations for proton therapy / Simulations cliniques des gamma prompt accélérées pour la Hadronthérapie
Huisman, Brent 19 May 2017 (has links)
Après une introduction à l’hadronthérapie et à la détection gamma prompts, cette thèse de doctorat comprend deux contributions principales: le développement d'une méthode de simulation des gamma prompt (PG) et son application dans une étude de la détection des changements dans les traitements cliniques. La méthode de réduction de variance (vpgTLE) est une méthode d'estimation de longueur de piste en deux étapes développée pour estimer le rendement en PG dans les volumes voxélisés. Comme les particules primaires se propagent tout au long de la CT du patient, les rendements de PG sont calculés en fonction de l'énergie actuelle du primaire, du matériau du voxel et de la longueur de l'étape. La deuxième étape utilise cette image intermédiaire comme source pour générer et propager le nombre de PG dans le reste de la géométrie de la scène, par exemple dans un dispositif de détection. Pour un fantôme hétérogène et un plan de traitement CT complet par rapport à MC analogue, à un niveau de convergence de 2% d'incertitude relative sur le rendement de PG par voxel dans la région de rendement de 90%, un gain d'environ 10^3 a été atteint. La méthode s'accorde avec les simulations analogiques MC de référence à moins de 10^-4 par voxel, avec un biais négligeable. La deuxième étude majeure menée dans cette thèse portait sur l'estimation PG FOP dans les simulations cliniques. Le nombre de protons (poids spot) requis pour une estimation FOP constante a été étudié pour la première fois pour deux caméras PG optimisées, une fente multi-parallèle (MPS) et une conception de bordure de couteau (KES). Trois spots ont été choisis pour une étude approfondie et, aux poids spot prescrits, on a constaté qu'ils produisaient des résultats insuffisants, ce qui rend improbable une exploitation clinique au niveau du spot. Lorsque le poids spot est artificiellement augmenté à 10^9 primaires, la précision sur le FOP atteint le niveau millimétrique. Sur le décalage FOP, la caméra MPS fournit entre 0,71 et 1,02 mm (1σ) de précision pour les trois spots à 10^9 protons ; le KES entre 2,10 et 2,66 mm. Le regroupement de couches iso-énergétiques a été utilisé dans la détection PG en délivrance passive pour l'un des prototypes d'appareils PG. Dans le regroupement iso-profondeur, rendu possible par la délivrance active, les spots avec des chutes de dose distales similaires sont regroupés de manière à fournir des retombées bien définies, afin de contourner le mélange de parcours. Il est démontré que le regroupement de spots n'a pas nécessairement une incidence négative sur la précision par rapport au spot artificiellement augmenté, ce qui signifie qu'une certaine forme de regroupement de spots peut permettre l'utilisation clinique de ces caméras PG. Avec tous les spots ou les groupes de spots, le MPS a un meilleur signal par rapport au KES, grâce à une plus grande efficacité de détection et à un niveau de fond inférieur en raison de la sélection du temps de vol. / After an introduction to particle therapy and prompt gamma detection, this doctoral dissertation comprises two main contributions: the development of a fast prompt gammas (PGs) simulation method and its application in a study of change detectability in clinical treatments. The variance reduction method (named vpgTLE) is a two-stage track length estimation method developed to estimate the PG yield in voxelized volumes.
As primary particles are propagated throughout the patient CT, the PG yields are computed as a function of the current energy of the primary, the material in the voxel and the step length. The second stage uses this intermediate image as a source to generate and propagate the number of PGs throughout the rest of the scene geometry, e.g. into a detection device. For both a geometrical heterogeneous phantom and a complete patient CT treatment plan, with respect to analog MC, at a convergence level of 2% relative uncertainty on the PG yield per voxel in the 90% yield region, a gain of around 10^3 was achieved. The method agrees with reference analog MC simulations to within 10^-4 per voxel, with negligible bias. The second major study conducted in this PhD program was on PG FOP (fall-off position) estimation in clinical simulations. The number of protons (spot weight) required for a consistent FOP estimate was investigated for the first time for two optimized PG cameras, a multi-parallel slit (MPS) and a knife-edge design (KES). Three spots were selected for an in-depth study and, at the prescribed spot weights, were found to produce results of insufficient precision, rendering usable clinical output on the spot level unlikely. When the spot weight is artificially increased to 10^9 primaries, the FOP estimate reaches millimetric precision. On the FOP shift the MPS camera provides between 0.71 and 1.02 mm (1σ) precision for the three spots at 10^9 protons; the KES between 2.10 and 2.66 mm. Grouping iso-energy layers was employed in passive delivery PG detection for one of the PG camera prototypes. In iso-depth grouping, enabled by active delivery, spots with similar distal dose fall-offs are grouped so as to provide well-defined fall-offs as an attempt to sidestep range mixing. It is shown that grouping spots does not necessarily negatively affect the precision compared to the artificially increased spot, which means some form of spot grouping can enable clinical use of these PG cameras. With all spots or spot groups the MPS has a better signal compared to the KES, thanks to a larger detection efficiency and a lower background level due to time-of-flight selection.
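A toy 1-D sketch of the first-stage track-length scoring described above: each step of a primary adds step_length times a per-unit-length yield, which depends on the voxel material and the current energy, to that voxel, producing an intermediate PG-yield image. The geometry, material data, yield model and stopping power below are invented for illustration and are not the vpgTLE implementation or real physics data.

    import numpy as np

    # 1-D voxel line: first half "water", second half "bone" (made-up materials)
    n_voxels = 50
    voxel_size = 0.2                                       # cm
    material = np.where(np.arange(n_voxels) < 25, 0, 1)

    def yield_per_cm(mat, energy_mev):
        # hypothetical PG yield per unit track length, NOT physical data
        base = np.array([1.0e-3, 1.6e-3])[mat]
        return base * max(energy_mev - 40.0, 0.0) / 100.0

    def stopping_power(mat, energy_mev):
        # hypothetical energy loss per cm, growing as the primary slows down
        base = np.array([3.0, 4.5])[mat]
        return base * (160.0 / max(energy_mev, 1.0)) ** 0.8

    pg_yield_image = np.zeros(n_voxels)                    # stage-1 intermediate image
    energy = 160.0                                         # MeV, hypothetical primary
    for v in range(n_voxels):
        if energy <= 0.0:
            break                                          # primary has stopped
        step = voxel_size                                  # one step per voxel in this toy setup
        pg_yield_image[v] += step * yield_per_cm(material[v], energy)
        energy -= step * stopping_power(material[v], energy)

    print(f"total PG yield (arbitrary units): {pg_yield_image.sum():.3e}; "
          f"image peaks at voxel {int(np.argmax(pg_yield_image))}")

In the two-stage scheme described above, an image like pg_yield_image would then serve as the source for the second stage, which generates and transports the PGs toward the detector.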
|