71

偏常態因子信用組合下之效率估計值模擬 / Efficient Simulation in Credit Portfolio with Skew Normal Factor

林永忠, Lin, Yung Chung Unknown Date (has links)
在因子模型下,損失分配函數的估算取決於混合型聯合違約分配。蒙地卡羅是一個經常使用的計算工具。然而,一般蒙地卡羅模擬是一個不具有效率的方法,特別是在稀有事件與複雜的債務違約模型的情形下,因此,找尋可以增進效率的方法變成了一件迫切的事。 對於這樣的問題,重點採樣法似乎是一個可以採用且吸引人的方法。透過改變抽樣的機率測度,重點採樣法使估計量變得更有效率,尤其是針對相對複雜的模型。因此,我們將應用重點採樣法來估計偏常態關聯結構模型的尾部機率。這篇論文包含兩個部分。Ⅰ:應用指數扭轉法---一個經常使用且為較佳的重點採樣技巧---於條件機率。然而,這樣的程序無法確保所得的估計量有足夠的變異縮減。此結果指出,對於因子在選擇重點採樣上,我們需要更進一步的考慮。Ⅱ:進一步應用重點採樣法於因子;在這樣的問題上,已經有相當多的方法在文獻中被提出。在這些文獻中,重點採樣的方法可大略區分成兩種策略。第一種策略主要在選擇一個最好的位移。最佳的位移值可透過操作不同的估計法來求得,這樣的策略出現在Glasserman等(1999)或Glasserman與Li (2005)。 第二種策略則如同在Capriotti (2008)中的一樣,則是考慮擁有許多參數的因子密度函數作為重點採樣的候選分配。透過解出非線性優化問題,就可確立一個未受限於位移的重點採樣分配。不過,這樣的方法在尋找最佳的參數當中,很容易引起另一個效率上的問題。為了要讓此法有效率,就必須在使用此法前,對參數的穩健估計上,投入更多的工作,這將造成問題更行複雜。 本文中,我們說明了另一種簡單且具有彈性的策略。這裡,我們所提的演算法不受限在如同Gaussian模型下決定最佳位移的作法,也不受限於因子分配函數參數的估計。透過Chiang, Yueh與Hsieh (2007)文章中的主要概念,我們提供了重點採樣密度函數一個合理的推估並且找出了一個不同於使用隨機近似的演算法來加速模擬的進行。 最後,我們提供了一些單因子的理論的證明。對於多因子模型,我們也因此有了一個較有效率的估計演算法。我們利用一些數值結果來凸顯此法在效率上,是遠優於蒙地卡羅模擬。 / Under a factor model, computation of the loss density function relies on estimates of a mixture of the joint default and joint survival probabilities. Monte Carlo simulation is among the most widely used computational tools for such estimation. Nevertheless, crude Monte Carlo is inefficient, particularly for rare events and for the complex dependence between the defaults of multiple obligors, so a method to increase the efficiency of estimation is needed. Importance sampling (IS) is an attractive way to address this problem: by changing the sampling measure, IS makes an estimator more efficient, especially for complicated models. We therefore consider IS for estimating the tail probability of a skew normal copula model. This thesis consists of two parts. First, we apply exponential twisting, a standard and effective IS technique, to the conditional default probabilities. However, this procedure alone does not always guarantee sufficient variance reduction, which indicates that the choice of an IS density for the factors requires further consideration. Second, we apply IS to the factors themselves; for this problem a variety of approaches has been proposed in the literature (Capriotti 2008; Glasserman et al. 1999; Glasserman and Li 2005). These choices of IS density can be roughly classified into two strategies. The first strategy chooses an optimal shift, with the optimal drift determined by different approximation methods, as in Glasserman et al. (1999) or Glasserman and Li (2005). The second strategy, as in Capriotti (2008), considers a family of factor densities that depend on a set of real parameters; by solving a nonlinear optimization problem, an IS density that is not restricted to a drift shift is then determined. The search for the optimal parameters, however, raises another efficiency issue: to keep the method efficient, particular care must be taken over robust parameter estimation in a preliminary Monte Carlo run, which makes the method more complicated. In this thesis, we describe an alternative strategy that is straightforward and flexible enough to be applied in a Monte Carlo setting. Our algorithm is limited neither to the determination of an optimal drift in the Gaussian copula model nor to the estimation of factor-density parameters. Exploiting the concept developed for basket default swap valuation in Chiang, Yueh, and Hsieh (2007), we provide a reasonable guess of the optimal sampling density and then establish a way, different from stochastic approximation, to speed up the simulation.
Finally, we provide theoretical support for the single-factor model and take the approach a step further to the multifactor case, so that we have a rough but fast approximation that is executed entirely with Monte Carlo in general situations. We support our approach with several portfolio examples; the numerical results show that the algorithm is far more efficient than crude Monte Carlo simulation.
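As a rough illustration of the kind of importance sampling discussed in this abstract, the sketch below estimates the tail probability P(L > x) of a one-factor Gaussian copula portfolio by exponentially twisting the conditional default probabilities, in the spirit of Glasserman and Li (2005). It is not the thesis' skew-normal algorithm: the portfolio size, exposures, default probability, factor loading and loss threshold are illustrative assumptions, and no IS is applied to the factor itself, which is exactly the limitation the abstract points out for twisting alone.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(0)
m, p, c, a = 100, 0.01, 1.0, 0.5      # obligors, marginal PD, exposure, factor loading
x = 20.0                              # loss threshold: estimate P(L > x)
barrier = norm.ppf(1 - p)             # default barrier of the latent variable

def tail_prob(n, twist=True):
    """Estimate P(L > x) from n samples; optionally twist the conditional PDs."""
    vals = np.empty(n)
    for k in range(n):
        z = rng.standard_normal()                              # systematic factor
        pz = np.full(m, norm.cdf((a * z - barrier) / np.sqrt(1 - a * a)))
        cz = np.full(m, c)
        def psi_prime(t):                                      # derivative of the CGF
            w = pz * np.exp(t * cz)
            return np.sum(cz * w / (1 - pz + w))
        theta = 0.0
        if twist and psi_prime(0.0) < x:                       # set twisted mean loss to x
            theta = brentq(lambda t: psi_prime(t) - x, 0.0, 50.0)
        q = pz * np.exp(theta * cz) / (1 - pz + pz * np.exp(theta * cz))
        loss = np.sum(cz * (rng.random(m) < q))
        psi = np.sum(np.log(1 - pz + pz * np.exp(theta * cz)))
        vals[k] = (loss > x) * np.exp(-theta * loss + psi)     # likelihood-ratio weight
    return vals

for label, twist in [("crude MC", False), ("conditional twist IS", True)]:
    v = tail_prob(20000, twist)
    print("%-22s estimate %.3e   std. error %.1e"
          % (label, v.mean(), v.std(ddof=1) / np.sqrt(len(v))))
```

Because the factor Z is still drawn from its original distribution, the variance reduction of this sketch is limited by the variability of the conditional tail probability across Z, which is the motivation for the factor-level IS developed in the thesis.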
72

Méthodes de Monte Carlo stratifiées pour l'intégration numérique et la simulation numérique / Stratified Monte Carlo methods for numerical integration and simulation

Fakhereddine, Rana 26 September 2013 (has links)
Les méthodes de Monte Carlo (MC) sont des méthodes numériques qui utilisent des nombres aléatoires pour résoudre avec des ordinateurs des problèmes des sciences appliquées et des techniques. On estime une quantité par des évaluations répétées utilisant N valeurs et l'erreur de la méthode est approchée par la variance de l'estimateur. Le présent travail analyse des méthodes de réduction de la variance et examine leur efficacité pour l'intégration numérique et la résolution d'équations différentielles et intégrales. Nous présentons d'abord les méthodes MC stratifiées et les méthodes d'échantillonnage par hypercube latin (LHS : Latin Hypercube Sampling). Parmi les méthodes de stratification, nous privilégions la méthode simple (MCS) : l'hypercube unité I^s := [0; 1)^s est divisé en N sous-cubes d'égale mesure, et un point aléatoire est choisi dans chacun des sous-cubes. Nous analysons la variance de ces méthodes pour le problème de la quadrature numérique. Nous étudions particulièrement le cas de l'estimation de la mesure d'un sous-ensemble de I^s. La variance de la méthode MCS peut être majorée par O(1/N^{1+1/s}). Les résultats d'expériences numériques en dimensions 2, 3 et 4 montrent que les majorations obtenues sont précises. Nous proposons ensuite une méthode hybride entre MCS et LHS, qui possède les propriétés de ces deux techniques, avec un point aléatoire dans chaque sous-cube et les projections des points sur chacun des axes de coordonnées également réparties de manière régulière : une projection dans chacun des N sous-intervalles qui divisent I := [0; 1) uniformément. Cette technique est appelée Stratification Sudoku (SS). Dans le même cadre d'analyse que précédemment, nous montrons que la variance de la méthode SS est majorée par O(1/N^{1+1/s}) ; des expériences numériques en dimensions 2, 3 et 4 valident les majorations démontrées. Nous présentons ensuite une approche de la méthode de marche aléatoire utilisant les techniques de réduction de variance précédentes. Nous proposons un algorithme de résolution de l'équation de diffusion, avec un coefficient de diffusion constant ou non-constant en espace. On utilise des particules échantillonnées suivant la distribution initiale, qui effectuent un déplacement gaussien à chaque pas de temps. On ordonne les particules suivant leur position à chaque étape et on remplace les nombres aléatoires qui permettent de calculer les déplacements par les points stratifiés utilisés précédemment. On évalue l'amélioration apportée par cette technique sur des exemples numériques. Nous utilisons finalement une approche analogue pour la résolution numérique de l'équation de coagulation, qui modélise l'évolution de la taille de particules pouvant s'agglomérer. Les particules sont d'abord échantillonnées suivant la distribution initiale des tailles. On choisit un pas de temps et, à chaque étape et pour chaque particule, on choisit au hasard un partenaire de coalescence et un nombre aléatoire qui décide de cette coalescence. Si l'on classe les particules suivant leur taille à chaque pas de temps et si l'on remplace les nombres aléatoires par des points stratifiés, on observe une réduction de variance par rapport à l'algorithme MC usuel. / Monte Carlo (MC) methods are numerical methods that use random numbers to solve, on computers, problems from the applied sciences and engineering. One estimates a quantity by repeated evaluations using N values; the error of the method is approximated through the variance of the estimator.
In the present work, we analyze variance reduction methods and we test their efficiency for numerical integration and for solving differential or integral equations. First, we present stratified MC methods and the Latin Hypercube Sampling (LHS) technique. Among stratification strategies, we focus on the simple approach (MCS): the unit hypercube I^s := [0; 1)^s is divided into N subcubes having the same measure, and one random point is chosen in each subcube. We analyze the variance of the method for the problem of numerical quadrature. The case of the evaluation of the measure of a subset of I^s is particularly detailed. The variance of the MCS method may be bounded by O(1/N^{1+1/s}). The results of numerical experiments in dimensions 2, 3, and 4 show that the upper bounds are tight. We next propose a hybrid method between MCS and LHS that has the properties of both approaches, with one random point in each subcube and such that the projections of the points on each coordinate axis are also evenly distributed: one projection in each of the N subintervals that uniformly divide the unit interval I := [0; 1). We call this technique Sudoku Sampling (SS). Conducting the same analysis as before, we show that the variance of the SS method is bounded by O(1/N^{1+1/s}); the order of the bound is validated through the results of numerical experiments in dimensions 2, 3, and 4. Next, we present an approach to the random walk method using the variance reduction techniques previously analyzed. We propose an algorithm for solving the diffusion equation with a constant or spatially varying diffusion coefficient. One uses particles sampled from the initial distribution; they undergo a Gaussian move in each time step. The particles are renumbered according to their positions in every step, and the random numbers which give the displacements are replaced by the stratified points used above. The improvement brought by this technique is evaluated in numerical experiments. An analogous approach is finally used for numerically solving the coagulation equation; this equation models the evolution of the sizes of particles that may agglomerate. The particles are first sampled from the initial size distribution. A time step is fixed and, in every step and for each particle, a coalescence partner is chosen and a random number decides whether coalescence occurs. If the particles are ordered in every time step by increasing size and if the random numbers are replaced by stratified points, a variance reduction is observed when compared to the results of the usual MC algorithm.
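A minimal sketch of the simple stratified method (MCS) described above, for the problem the abstract highlights: estimating the measure of a subset of the unit hypercube. The dimension s = 2, the quarter-disc subset, the grid size k and the replication count are illustrative assumptions, not the thesis' test cases.

```python
import numpy as np

rng = np.random.default_rng(1)
s, k = 2, 32
N = k ** s                                   # number of subcubes, one point in each

def indicator(u):
    """Membership in A = {u in [0,1)^2 : ||u|| <= 1}, whose measure is pi/4."""
    return (np.sum(u ** 2, axis=-1) <= 1.0).astype(float)

def crude_mc():
    return indicator(rng.random((N, s))).mean()

def mcs():
    # lower-left corners of the k^s subcubes, plus one uniform point in each
    corners = np.stack(np.meshgrid(*(np.arange(k),) * s, indexing="ij"), axis=-1)
    pts = (corners.reshape(-1, s) + rng.random((N, s))) / k
    return indicator(pts).mean()

R = 200                                      # independent replications
est_mc = np.array([crude_mc() for _ in range(R)])
est_st = np.array([mcs() for _ in range(R)])
print("exact value :", np.pi / 4)
print("crude MC    : mean %.5f  variance %.2e" % (est_mc.mean(), est_mc.var(ddof=1)))
print("MCS         : mean %.5f  variance %.2e" % (est_st.mean(), est_st.var(ddof=1)))
# For such indicator integrands the MCS variance is O(1/N^{1+1/s}), vs O(1/N) for MC.
```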
73

Dataset selection for aggregate model implementation in predictive data mining

Lutu, P.E.N. (Patricia Elizabeth Nalwoga) 15 November 2010 (has links)
Data mining has become a commonly used method for the analysis of organisational data, for purposes of summarizing data in useful ways and identifying non-trivial patterns and relationships in the data. Given the large volumes of data that are collected by business, government, non-government and scientific research organizations, a major challenge for data mining researchers and practitioners is how to select relevant data for analysis in sufficient quantities, in order to meet the objectives of a data mining task. This thesis addresses the problem of dataset selection for predictive data mining. Dataset selection was studied in the context of aggregate modeling for classification. The central argument of this thesis is that, for predictive data mining, it is possible to systematically select many dataset samples and employ different approaches (different from current practice) to feature selection, training dataset selection, and model construction. When a large amount of information in a large dataset is utilised in the modeling process, the resulting models will have a high level of predictive performance and should be more reliable. Aggregate classification models, also known as ensemble classifiers, have been shown to provide a high level of predictive accuracy on small datasets. Such models are known to achieve a reduction in the bias and variance components of the prediction error of a model. The research for this thesis was aimed at the design of aggregate models and the selection of training datasets from large amounts of available data. The objectives for the model design and dataset selection were to reduce the bias and variance components of the prediction error for the aggregate models. Design science research was adopted as the paradigm for the research. Large datasets obtained from the UCI KDD Archive were used in the experiments. Two classification algorithms, See5 for classification tree modeling and K-Nearest Neighbour, were used in the experiments. The two methods of aggregate modeling that were studied are One-Vs-All (OVA) and positive-Vs-negative (pVn) modeling. While OVA is an existing method that has been used for small datasets, pVn is a new method of aggregate modeling, proposed in this thesis. Methods for feature selection from large datasets, and methods for training dataset selection from large datasets, for OVA and pVn aggregate modeling, were studied. The experiments of feature selection revealed that the use of many samples, robust measures of correlation, and validation procedures results in the reliable selection of relevant features for classification. A new algorithm for feature subset search, based on the decision rule-based approach to heuristic search, was designed, and the performance of this algorithm was compared to two existing algorithms for feature subset search. The experimental results revealed that the new algorithm makes better decisions for feature subset search. The information provided by a confusion matrix was used as a basis for the design of OVA and pVn base models, which are then combined into one aggregate model. A new construct called a confusion graph was used in conjunction with new algorithms for the design of pVn base models. A new algorithm for combining base model predictions and resolving conflicting predictions was designed and implemented. Experiments to study the performance of the OVA and pVn aggregate models revealed that the aggregate models provide a high level of predictive accuracy compared to single models.
Finally, theoretical models to depict the relationships between the factors that influence feature selection and training dataset selection for aggregate models are proposed, based on the experimental results. / Thesis (PhD)--University of Pretoria, 2010. / Computer Science / unrestricted
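The following sketch illustrates only the One-Vs-All (OVA) part of the aggregate modelling described above: one binary base model per class, with the base-model scores combined by an arg-max rule. The logistic-regression base learner and the synthetic dataset are stand-ins (the thesis uses See5 and k-Nearest Neighbour on UCI KDD datasets), and the new pVn method and the confusion-graph design are not shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic multi-class data as a stand-in for the large archive datasets.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# One binary base model per class: class c vs. the rest.
classes = np.unique(y_tr)
base_models = {c: LogisticRegression(max_iter=1000).fit(X_tr, (y_tr == c).astype(int))
               for c in classes}

# Aggregate model: combine the base-model scores and predict the arg-max class.
scores = np.column_stack([base_models[c].predict_proba(X_te)[:, 1] for c in classes])
y_hat = classes[np.argmax(scores, axis=1)]
print("OVA aggregate accuracy: %.3f" % (y_hat == y_te).mean())
```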
74

Mathematical modelling and numerical simulation in materials science / Modélisation mathématique et simulation numérique en science des matériaux

Boyaval, Sébastien 16 December 2009 (has links)
Dans une première partie, nous étudions des schémas numériques utilisant la méthode des éléments finis pour discrétiser le système d'équations Oldroyd-B modélisant un fluide viscoélastique avec conditions de collement dans un domaine borné, en dimension deux ou trois. Le but est d'obtenir des schémas stables au sens où ils dissipent une énergie libre, imitant ainsi des propriétés thermodynamiques de dissipation similaires à celles identifiées pour des solutions régulières du modèle continu. Cette étude s'ajoute à de nombreux travaux antérieurs sur les instabilités observées dans les simulations numériques d'équations viscoélastiques (dont celles connues comme étant des Problèmes à Grand Nombre de Weissenberg). A notre connaissance, c'est la première étude qui considère rigoureusement la stabilité numérique au sens de la dissipation d'une énergie pour des discrétisations de type Galerkin. Dans une seconde partie, nous adaptons et utilisons les idées d'une méthode numérique initialement développée dans des travaux de Y. Maday, A. T. Patera et al., la méthode des bases réduites, pour simuler efficacement divers modèles multi-échelles. Le principe est d'approcher numériquement chaque élément d'une collection paramétrée d'objets complexes dans un espace de Hilbert par la plus proche combinaison linéaire dans le meilleur sous-espace vectoriel engendré par quelques éléments bien choisis au sein de la même collection paramétrée. Nous appliquons ce principe pour des problèmes numériques liés : à l'homogénéisation numérique d'équations elliptiques scalaires du second ordre, avec coefficients de diffusion oscillant à deux échelles ; puis à la propagation d'incertitudes (calculs de moyenne et de variance) dans un problème elliptique avec coefficients stochastiques (un champ aléatoire borné dans une condition de bord du troisième type) ; enfin au calcul Monte-Carlo de l'espérance de nombreuses variables aléatoires paramétrées, en particulier des fonctionnelles de processus stochastiques d'Itô paramétrés proches de ce qu'on rencontre dans les modèles micro-macro de fluides polymériques, avec une variable de contrôle pour en réduire la variance. Dans chaque application, le but de l'approche bases-réduites est d'accélérer les calculs sans perte de précision. / In a first part, we study numerical schemes using the finite-element method to discretize the Oldroyd-B system of equations, modelling a viscoelastic fluid with no-slip boundary conditions in a two- or three-dimensional bounded domain. The goal is to get schemes which are stable in the sense that they dissipate a free energy, thereby mimicking thermodynamic dissipation properties similar to those identified for smooth solutions of the continuous model. This study adds to numerous previous ones about the instabilities observed in the numerical simulations of viscoelastic fluids (in particular those known as High Weissenberg Number Problems). To our knowledge, this is the first study that rigorously considers the numerical stability, in the sense of an energy dissipation, for Galerkin discretizations. In a second part, we adapt and use ideas of a numerical method initially developed in the works of Y. Maday, A. T. Patera et al., the reduced-basis method, in order to efficiently simulate some multiscale models.
The principle is to numerically approximate each element of a parametrized family of complicated objects in a Hilbert space by the closest linear combination within the best linear subspace spanned by a few elements well chosen inside the same parametrized family. We apply this principle to numerical problems linked: to the numerical homogenization of second-order elliptic equations with two-scale oscillating diffusion coefficients; then to the propagation of uncertainty (computations of the mean and the variance) in an elliptic problem with stochastic coefficients (a bounded stochastic field in a boundary condition of third type); and last to the Monte-Carlo computation of the expectations of numerous parametrized random variables, in particular functionals of parametrized Itô stochastic processes close to what is encountered in micro-macro models of polymeric fluids, with a control variate to reduce its variance. In each application, the goal of the reduced-basis approach is to speed up the computations without any loss of precision.
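A minimal sketch of the reduced-basis principle stated above: greedily pick a few "snapshots" of a parametrized family of vectors and approximate every other member by its orthogonal projection, i.e. the closest linear combination, onto their span. The toy family u_mu(x) = 1/(1 + mu*x) on a grid and the basis size are illustrative assumptions, not one of the thesis' multiscale applications.

```python
import numpy as np

# "Offline" training set: snapshots of the parametrized family on a grid.
x = np.linspace(0.0, 1.0, 500)
mus = np.linspace(1.0, 10.0, 200)
snapshots = np.array([1.0 / (1.0 + mu * x) for mu in mus])      # shape (200, 500)

def project(U, basis):
    """Orthogonal projection of the rows of U onto span(basis) (orthonormal rows)."""
    if basis.size == 0:
        return np.zeros_like(U)
    return (U @ basis.T) @ basis

basis = np.empty((0, x.size))
for _ in range(6):                           # greedy selection of 6 basis elements
    errors = np.linalg.norm(snapshots - project(snapshots, basis), axis=1)
    worst = int(np.argmax(errors))           # parameter that is worst approximated
    residual = snapshots[worst] - project(snapshots[worst:worst + 1], basis)[0]
    basis = np.vstack([basis, residual / np.linalg.norm(residual)])
    print("basis size %d, worst training error %.2e" % (len(basis), errors[worst]))

# "Online" stage: approximate an unseen parameter value by its projection.
u_new = 1.0 / (1.0 + 4.321 * x)
approx = project(u_new[None, :], basis)[0]
print("relative error at mu=4.321: %.2e"
      % (np.linalg.norm(u_new - approx) / np.linalg.norm(u_new)))
```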
75

[pt] APLICAÇÕES DO MÉTODO DA ENTROPIA CRUZADA EM ESTIMAÇÃO DE RISCO E OTIMIZAÇÃO DE CONTRATO DE MONTANTE DE USO DO SISTEMA DE TRANSMISSÃO / [en] CROSS-ENTROPY METHOD APPLICATIONS TO RISK ESTIMATE AND OPTIMIZATION OF AMOUNT OF TRANSMISSION SYSTEM USAGE

23 November 2021 (has links)
[pt] As companhias regionais de distribuição não são autossuficientes em energia elétrica para atender seus clientes, e requerem importar a potência necessária do sistema interligado. No Brasil, elas realizam anualmente o processo de contratação do montante de uso do sistema de transmissão (MUST) para o horizonte dos próximos quatro anos. Essa operação é um exemplo real de tarefa que envolve decisões sob incerteza com elevado impacto na produtividade das empresas distribuidoras e do setor elétrico em geral. O trabalho se torna ainda mais complexo diante da crescente variabilidade associada à geração de energia renovável e à mudança do perfil do consumidor. O MUST é uma variável aleatória, e ser capaz de compreender sua variabilidade é crucial para melhor tomada de decisão. O fluxo de potência probabilístico é uma técnica que mapeia as incertezas das injeções nodais e configuração de rede nos equipamentos de transmissão e, consequentemente, nas potências importadas em cada ponto de conexão com o sistema interligado. Nesta tese, o objetivo principal é desenvolver metodologias baseadas no fluxo de potência probabilístico via simulação Monte Carlo, em conjunto com a técnica da entropia cruzada, para estimar os riscos envolvidos na contratação ótima do MUST. As metodologias permitem a implementação de software comercial para lidar com o algoritmo de fluxo de potência, o que é relevante para sistemas reais de grande porte. Apresenta-se, portanto, uma ferramenta computacional prática que serve aos engenheiros das distribuidoras de energia elétrica. Resultados com sistemas acadêmicos e reais mostram que as propostas cumprem os objetivos traçados, com benefícios na redução dos custos totais no processo de otimização de contratos e dos tempos computacionais envolvidos nas estimativas de risco. / [en] Local power distribution companies are not self-sufficient in electricity to serve their customers and need to import additional energy from the interconnected bulk power system. In Brazil, they annually carry out the contracting process for the amount of transmission system usage (ATSU) for the next four years. This process is a real example of a task that involves decisions under uncertainty with a high impact on the productivity of the distribution companies and on the electricity sector in general. The task becomes even more complex in the face of the increasing variability associated with the generation of renewable energy and the changing profile of the consumer. The ATSU is a random variable, and being able to understand its variability is crucial for better decision making. Probabilistic power flow is a technique that maps the uncertainties of nodal injections and network configuration onto the transmission equipment and, consequently, onto the imported power at each connection point with the bulk power system. In this thesis, the main objective is to develop methodologies based on probabilistic power flow via Monte Carlo simulation, together with cross-entropy techniques, to assess the risks involved in the optimal contracting of the ATSU. The proposed approaches allow the inclusion of commercial software to deal with the power flow algorithm, which is relevant for large practical systems. Thus, a realistic computational tool that serves the engineers of electric distribution companies is presented.
Results with academic and real systems show that the proposals fulfill the stated objectives, with the benefit of reducing both the total costs in the contract optimization process and the computational times involved in the risk assessments.
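To make the cross-entropy (CE) ingredient concrete, the sketch below estimates a rare-event probability by adaptively tilting the sampling mean of a Gaussian vector, which is the generic CE recipe referred to above. The toy performance function, the quantile parameter rho and the sample sizes are illustrative assumptions; the thesis applies CE inside a probabilistic power flow, which is not reproduced here.

```python
import numpy as np
from math import erfc

rng = np.random.default_rng(2)
d, gamma, rho, n = 10, 15.0, 0.1, 5000       # dimension, level, elite fraction, batch

def perf(x):                                 # toy performance function
    return x.sum(axis=1)

# Cross-entropy iterations: learn an importance-sampling mean shift v.
v = np.zeros(d)
for _ in range(20):
    x = rng.standard_normal((n, d)) + v
    s = perf(x)
    level = min(gamma, np.quantile(s, 1.0 - rho))          # elite threshold
    w = np.exp(-x @ v + 0.5 * v @ v)                       # N(0,I) / N(v,I) weights
    elite = s >= level
    v = (w[elite, None] * x[elite]).sum(axis=0) / w[elite].sum()   # CE mean update
    if level >= gamma:
        break

# Final importance-sampling estimate with the learned sampling density N(v, I).
x = rng.standard_normal((n, d)) + v
s = perf(x)
w = np.exp(-x @ v + 0.5 * v @ v)
vals = (s > gamma) * w
exact = 0.5 * erfc(gamma / np.sqrt(2.0 * d))               # true value for this toy case
print("CE-IS estimate %.3e +/- %.1e   (exact %.3e)"
      % (vals.mean(), vals.std(ddof=1) / np.sqrt(n), exact))
```

The same two-stage pattern (a few adaptive batches to learn the tilted sampling distribution, then one production run) is what makes CE-based risk estimation much cheaper than crude Monte Carlo for small probabilities.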
76

Randomized Quasi-Monte Carlo Methods for Density Estimation and Simulation of Markov Chains

Ben Abdellah, Amal 02 1900 (has links)
La méthode Quasi-Monte Carlo Randomisé (RQMC) est souvent utilisée pour estimer une intégrale sur le cube unitaire (0,1)^s de dimension s. Cette intégrale est interprétée comme l'espérance mathématique d'une variable aléatoire X. Il est bien connu que, sous certaines conditions, les estimateurs d'intégrales par RQMC peuvent converger plus rapidement que les estimateurs par Monte Carlo. Pour la simulation de chaînes de Markov sur un grand nombre d'étapes en utilisant RQMC, il existe peu de résultats. L'approche la plus prometteuse proposée à ce jour est la méthode array-RQMC. Cette méthode simule, en parallèle, n copies de la chaîne en utilisant un ensemble de points RQMC aléatoires et indépendants à chaque étape et trie ces chaînes en utilisant une fonction de tri spécifique après chaque étape. Cette méthode a donné, de manière empirique, des résultats significatifs sur quelques exemples (soit, un taux de convergence bien meilleur que celui observé avec Monte Carlo standard). Par contre, les taux de convergence observés empiriquement n'ont pas encore été prouvés théoriquement. Dans la première partie de cette thèse, nous examinons comment RQMC peut améliorer, non seulement, le taux de convergence lors de l'estimation de l'espérance de X mais aussi lors de l'estimation de sa densité. Dans la deuxième partie, nous examinons comment RQMC peut être utilisé pour la simulation de chaînes de Markov sur un grand nombre d'étapes à l'aide de la méthode array-RQMC. Notre thèse contient quatre articles. Dans le premier article, nous étudions l'efficacité gagnée en remplaçant Monte Carlo (MC) par les méthodes de Quasi-Monte Carlo Randomisé (RQMC) ainsi que celle de la stratification. Nous allons ensuite montrer comment ces méthodes peuvent être utilisées pour rendre un échantillon plus représentatif. De plus, nous allons montrer comment ces méthodes peuvent aider à réduire la variance intégrée (IV) et l'erreur quadratique moyenne intégrée (MISE) pour les estimateurs de densité par noyau (KDE). Nous fournissons des résultats théoriques et empiriques sur les taux de convergence et nous montrons que les estimateurs par RQMC et par stratification peuvent atteindre des réductions significatives en IV et MISE ainsi que des taux de convergence encore plus rapides que MC pour certaines situations, tout en laissant le biais inchangé. Dans le deuxième article, nous examinons la combinaison de RQMC avec une approche Monte Carlo conditionnelle pour l'estimation de la densité. Cette approche est définie en prenant la dérivée stochastique d'une CDF conditionnelle de X et offre une grande amélioration lorsqu'elle est appliquée. L'utilisation de la méthode array-RQMC pour évaluer une option asiatique sous un processus ordinaire de mouvement brownien géométrique avec une volatilité fixe a déjà été tentée dans le passé et un taux de convergence de O(n⁻²) a été observé pour la variance. Dans le troisième article, nous étudions le prix des options asiatiques lorsque le processus sous-jacent présente une volatilité stochastique. Plus spécifiquement, nous examinons les modèles de volatilité stochastique variance-gamma, Heston ainsi que Ornstein-Uhlenbeck. Nous montrons comment l'application de la méthode array-RQMC pour la détermination du prix des options asiatiques et européennes peut réduire considérablement la variance. L'algorithme t-leaping est utilisé dans la simulation des systèmes biologiques stochastiques. La méthode Monte Carlo (MC) est une approche possible pour la simulation de ces systèmes. 
La simulation de la chaîne de Markov pour une discrétisation du temps de longueur t via la méthode quasi-Monte Carlo randomisé (RQMC) a déjà été explorée empiriquement dans plusieurs expériences numériques et les taux de convergence observés pour la variance, lorsque la dimension augmente, s'alignent avec ceux observés avec MC. Dans le dernier article, nous étudions la combinaison de array-RQMC avec cet algorithme et démontrons empiriquement que array-RQMC fournit une réduction significative de la variance par rapport à la méthode de MC standard. / The Randomized Quasi-Monte Carlo (RQMC) method is often used to estimate an integral over the s-dimensional unit cube (0,1)^s. This integral is interpreted as the mathematical expectation of some random variable X. It is well known that RQMC estimators can, under some conditions, converge at a faster rate than crude Monte Carlo estimators of the integral. For the simulation of Markov chains over a large number of steps using RQMC, few results exist. The most promising approach proposed to date is the array-RQMC method. This method simulates n copies of the chain in parallel using a set of independent RQMC points at each step, and sorts the chains using a specific sorting function after each step. This method has given empirically significant results in terms of convergence rates on a few examples (i.e., a much better convergence rate than that observed with standard Monte Carlo). However, the convergence rates observed empirically have not yet been proven theoretically. In the first part of this thesis, we examine how RQMC can improve the convergence rate when estimating not only the expectation of X, but also its distribution. In the second part, we examine how RQMC can be used for the simulation of Markov chains over a large number of steps using the array-RQMC method. Our thesis contains four articles. In the first article, we study the effectiveness of replacing Monte Carlo (MC) by either randomized quasi-Monte Carlo (RQMC) or stratification, and show how they can be applied to make samples more representative. Furthermore, we show how these methods can help to reduce the integrated variance (IV) and the mean integrated square error (MISE) for kernel density estimators (KDEs). We provide both theoretical and empirical results on the convergence rates and show that the RQMC and stratified sampling estimators can achieve significant IV and MISE reductions, with even faster convergence rates compared to MC in some situations, while leaving the bias unchanged. In the second article, we examine the combination of RQMC with a conditional Monte Carlo approach to density estimation. This approach is defined by taking the stochastic derivative of a conditional CDF of X and provides a large improvement when applied. Using array-RQMC to price an Asian option under an ordinary geometric Brownian motion process with fixed volatility has already been attempted in the past, and a convergence rate of O(n⁻²) was observed for the variance. In the third article, we study the pricing of Asian options when the underlying process has stochastic volatility. More specifically, we examine the variance-gamma, Heston, and Ornstein-Uhlenbeck stochastic volatility models. We show how applying the array-RQMC method for pricing Asian and European options can significantly reduce the variance. An efficient sample path algorithm called (fixed-step) t-leaping can be used to simulate stochastic biological systems as well as well-stirred chemical reaction systems.
The crude Monte Carlo (MC) method is a feasible approach for simulating these sample paths. Simulating the Markov chain for fixed-step t-leaping via ordinary randomized quasi-Monte Carlo (RQMC) has already been explored empirically and, as the dimension of the problem increases, the observed convergence rates of the variance fall back to those of MC. In the last article, we study the combination of array-RQMC with this algorithm and empirically demonstrate that array-RQMC provides a significant variance reduction compared to the standard MC algorithm.
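A simplified sketch of the array-RQMC mechanism described above: n copies of a Markov chain are advanced in parallel, sorted by state at every step, and driven by a stratified point set used here as a stand-in for a properly randomized QMC point set (a simplification that can leave a small bias, but keeps the sort-and-match idea visible). The geometric Brownian motion chain, the call payoff and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n, steps = 4096, 16
s0, r, sigma, dt, strike = 100.0, 0.05, 0.2, 1.0 / 16, 100.0

def simulate(sort_and_stratify):
    """Mean discounted call payoff over n chains after `steps` GBM transitions."""
    x = np.full(n, s0)
    for _ in range(steps):
        if sort_and_stratify:
            x = np.sort(x)                          # match state rank to point rank
            u = (np.arange(n) + rng.random(n)) / n  # stratified uniforms (stand-in
                                                    # for a randomized QMC point set)
        else:
            u = rng.random(n)                       # crude MC uniforms
        x = x * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * norm.ppf(u))
    return np.exp(-r * 1.0) * np.maximum(x - strike, 0.0).mean()   # T = steps*dt = 1

R = 100                                             # independent replications
mc  = np.array([simulate(False) for _ in range(R)])
arr = np.array([simulate(True)  for _ in range(R)])
print("crude MC        : mean %.4f  variance %.3e" % (mc.mean(),  mc.var(ddof=1)))
print("array-RQMC-like : mean %.4f  variance %.3e" % (arr.mean(), arr.var(ddof=1)))
```

Comparing the empirical variance of the estimator across independent replications, as done here, is also how the convergence rates reported in the abstract are measured.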
77

Stochastic mesh approximations for dynamic hedging with costs

Tremblay, Pierre-Alexandre 07 1900 (has links)
Cette thèse se concentre sur le calcul de la solution optimale d'un problème de couverture de produit dérivé en temps discret. Le problème consiste à minimiser une mesure de risque, définie comme l'espérance d'une fonction convexe du profit (ou perte) du portefeuille, en tenant compte des frais de transaction. Lorsqu'il y a des coûts, il peut être optimal de ne pas transiger. Ainsi, les solutions sont caractérisées par des frontières de transaction. En général, les politiques optimales et les fonctions de risque associées ne sont pas connues explicitement, mais une stratégie bien connue consiste à approximer les solutions de manière récursive en utilisant la programmation dynamique. Notre contribution principale est d'appliquer la méthode du maillage stochastique. Cela permet d'utiliser des processus stochastiques multi-dimensionnels pour les dynamiques de prix. On obtient aussi des estimateurs biaisés à la hausse et à la baisse, donnant une mesure de la proximité de l'optimum. Nous considérons différentes façons d'améliorer l'efficacité computationnelle. Utiliser la technique des variables de contrôle réduit le bruit qui provient de l'utilisation de prix de dérivés estimés à même le maillage stochastique. Deux autres techniques apportent des réductions complémentaires du temps de calcul : utiliser une grille unique pour les états du maillage et utiliser une procédure de "roulette russe". Dans la dernière partie de la thèse, nous présentons une application pour le cas de la fonction de risque exponentielle négative et un modèle à volatilité stochastique (le modèle de Ornstein-Uhlenbeck exponentiel). Nous étudions le comportement des solutions sous diverses configurations des paramètres du modèle et comparons la performance des politiques basées sur un maillage à celles d'heuristiques. / This thesis focuses on computing the optimal solution to a derivative hedging problem in discrete time. The problem is to minimize a risk measure, defined as the expectation of a convex function of the terminal profit and loss of the portfolio, taking transaction costs into account. In the presence of costs, it is sometimes optimal not to trade, so the solutions are characterized in terms of trading boundaries. In general, the optimal policies and the associated risk functions are not known explicitly, but a well-known strategy is to approximate the solutions recursively using dynamic programming. Our central innovation is in applying the stochastic mesh method, which was originally applied to option pricing. It allows flexibility in the price dynamics, which can be driven by a multi-dimensional stochastic process. It also yields both low- and high-biased estimators of the optimal risk, thus providing a measure of closeness to the actual optimum. We look at various ways to improve the computational efficiency. Using the control variate technique reduces the noise that comes from using derivative prices estimated on the stochastic mesh. Two additional techniques turn out to provide complementary computation time reductions: using a single grid for the mesh states and using a so-called Russian roulette procedure. In the last part of the thesis, we showcase an application to the particular case of the negative exponential risk function and a stochastic volatility model (the exponential Ornstein-Uhlenbeck model). We study the behavior of the solutions under various configurations of the model parameters and compare the performance of the mesh-based policies with that of well-known heuristics.
78

Algorithmes stochastiques pour la gestion du risque et l'indexation de bases de données de média / Stochastic algorithms for risk management and indexing of media databases

Reutenauer, Victor 22 March 2017 (has links)
Cette thèse s’intéresse à différents problèmes de contrôle et d’optimisation dont il n’existe à ce jour que des solutions approchées. D’une part nous nous intéressons à des techniques visant à réduire ou supprimer les approximations pour obtenir des solutions plus précises voire exactes. D’autre part nous développons de nouvelles méthodes d’approximation pour traiter plus rapidement des problèmes à plus grande échelle. Nous étudions des méthodes numériques de simulation d’équation différentielle stochastique et d’amélioration de calculs d’espérance. Nous mettons en œuvre des techniques de type quantification pour la construction de variables de contrôle ainsi que la méthode de gradient stochastique pour la résolution de problèmes de contrôle stochastique. Nous nous intéressons aussi aux méthodes de clustering liées à la quantification, ainsi qu’à la compression d’information par réseaux neuronaux. Les problèmes étudiés sont issus non seulement de motivations financières, comme le contrôle stochastique pour la couverture d’option en marché incomplet mais aussi du traitement des grandes bases de données de médias communément appelé Big data dans le chapitre 5. Théoriquement, nous proposons différentes majorations de la convergence des méthodes numériques d’une part pour la recherche d’une stratégie optimale de couverture en marché incomplet dans le chapitre 3, d’autre part pour l’extension de la technique de Beskos-Roberts de simulation d’équation différentielle dans le chapitre 4. Nous présentons une utilisation originale de la décomposition de Karhunen-Loève pour une réduction de variance de l’estimateur d’espérance dans le chapitre 2. / This thesis studies several stochastic control and optimization problems for which only approximate solutions are known to date. On the one hand, we develop techniques that aim to reduce or remove the approximations in order to obtain more accurate, or even exact, solutions. On the other hand, we develop new approximation methods in order to solve larger-scale problems more quickly. We study numerical methods for simulating stochastic differential equations and for improving the computation of expectations. We implement quantization-type techniques for the construction of control variates, as well as stochastic gradient methods for the solution of stochastic control problems. We are also interested in clustering methods related to quantization, and in information compression via neural networks. The problems studied come not only from financial motivations, such as stochastic control for the hedging of derivatives in incomplete markets, but also from the management of large media databases, commonly known as Big Data, in Chapter 5. On the theoretical side, we propose several upper bounds on the convergence of the numerical methods used: for the search for an optimal hedging strategy in an incomplete market in Chapter 3, and for an extension of the Beskos-Roberts method for the exact simulation of stochastic differential equations in Chapter 4. We present an original application of the Karhunen-Loève decomposition to build a control variate for the computation of expectations in Chapter 2.
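As a small, hedged illustration of the control-variate theme above, the sketch below corrects the crude Monte Carlo estimator of an (assumed) arithmetic Asian call price with a zero-mean control variate built from the first Karhunen-Loève mode of Brownian motion. This is only inspired by the Chapter 2 idea mentioned in the abstract; the product, the single-mode choice and all parameters are illustrative assumptions, not the thesis' construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 50000, 64                                   # paths, time steps on [0, 1]
s0, r, sigma, strike = 100.0, 0.05, 0.2, 100.0
t = np.arange(1, m + 1) / m
dt = 1.0 / m

# Simulate Brownian paths and the corresponding geometric Brownian motion.
w = np.cumsum(rng.standard_normal((n, m)) * np.sqrt(dt), axis=1)
s = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * w)
payoff = np.exp(-r) * np.maximum(s.mean(axis=1) - strike, 0.0)

# Control variate: projection of the path on the first Karhunen-Loeve
# eigenfunction e1(t) = sqrt(2) sin(pi t / 2).  It is a zero-mean Gaussian
# linear functional of the path, so its expectation is known exactly (zero).
e1 = np.sqrt(2.0) * np.sin(0.5 * np.pi * t)
cv = (w * e1).sum(axis=1) * dt

cov = np.cov(payoff, cv)
beta = cov[0, 1] / cov[1, 1]                       # estimated optimal coefficient
adjusted = payoff - beta * cv                      # E[cv] = 0, so no mean correction
for name, est in [("crude MC", payoff), ("KL control variate", adjusted)]:
    print("%-20s mean %.4f   std. error %.4f"
          % (name, est.mean(), est.std(ddof=1) / np.sqrt(n)))
```

In practice the coefficient beta would typically be estimated on a pilot sample to keep the estimator strictly unbiased; estimating it on the same sample, as above, introduces only a negligible bias for large n.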
79

[pt] ESTIMATIVA DE RISCOS EM REDES ELÉTRICAS CONSIDERANDO FONTES RENOVÁVEIS E CONTINGÊNCIAS DE GERAÇÃO E TRANSMISSÃO VIA FLUXO DE POTÊNCIA PROBABILÍSTICO / [en] RISK ASSESSMENT IN ELECTRIC NETWORKS CONSIDERING RENEWABLE SOURCES AND GENERATION AND TRANSMISSION CONTINGENCIES VIA PROBABILISTIC POWER FLOW

24 November 2023 (has links)
[pt] A demanda global por soluções sustentáveis para geração de energia elétrica cresceu rapidamente nas últimas décadas, sendo impulsionada por incentivos fiscais dos governos e investimentos em pesquisa e desenvolvimento de tecnologias. Isso provocou uma crescente inserção de fontes renováveis nas redes elétricas ao redor do mundo, criando novos desafios críticos para as avaliações de desempenho dos sistemas que são potencializados pela intermitência desses recursos energéticos combinada às falhas dos equipamentos de rede. Motivado por esse cenário, esta dissertação aborda a estimativa de risco de inadequação de grandezas elétricas, como ocorrências de sobrecarga em ramos elétricos ou subtensão em barramentos, através do uso do fluxo de potência probabilístico, baseado na simulação Monte Carlo e no método de entropia cruzada. O objetivo é determinar o risco do sistema não atender a critérios operativos, de forma precisa e com eficiência computacional, considerando as incertezas de carga, geração e transmissão. O método é aplicado aos sistemas testes IEEE RTS 79 e IEEE 118 barras, considerando também versões modificadas com a inclusão de uma usina eólica, e os resultados são amplamente discutidos. / [en] The global demand for sustainable solutions for electricity generation has grown rapidly in recent decades, driven by government tax incentives and investments in technology research and development. This has led to a growing penetration of renewable sources in power networks around the world, creating critical new challenges for system performance assessments, which are amplified by the intermittency of these energy resources combined with failures of network equipment. Motivated by this scenario, this dissertation addresses the estimation of the risk of inadequacy of electrical quantities, such as overloads in electrical branches or undervoltages at buses, through the use of probabilistic power flow, based on Monte Carlo simulation and the cross-entropy method. The objective is to determine the risk of the system not meeting operational criteria, accurately and with computational efficiency, considering load, generation and transmission uncertainties. The method is applied to the IEEE RTS 79 and IEEE 118-bus test systems, also considering modified versions with the inclusion of a wind power plant, and the results are discussed in detail.
80

異質性投資組合下的改良式重點取樣法 / Modified Importance Sampling for Heterogeneous Portfolio

許文銘 Unknown Date (has links)
衡量投資組合的稀有事件時,即使稀有事件違約的機率極低,但是卻隱含著高額資產違約時所帶來的重大損失,所以我們必須要精準地評估稀有事件的信用風險。本研究係在估計信用損失分配的尾端機率,模擬的模型包含同質模型與異質模型;然而蒙地卡羅法雖然在風險管理的計算上相當實用,但是估計機率極小的尾端機率時模擬不夠穩定,因此為增進模擬的效率,我們利用Glasserman and Li (Management Science, 51(11),2005)提出的重點取樣法,以及根據Chiang et al. (Journal of Derivatives, 15(2),2007)重點取樣法為基礎做延伸的改良式重點取樣法,兩種方法來對不同的投資組合做模擬,更是將改良式重點取樣法推廣至異質模型做討論,本文亦透過變異數縮減效果來衡量兩種方法的模擬效率。數值結果顯示,比起傳統的蒙地卡羅法,此兩種方法皆能達到變異數縮減,其中在同質模型下的改良式重點取樣法有很好的表現,模擬時間相當省時,而異質模型下的重點取樣法也具有良好的估計效率及模擬的穩定性。 / When measuring the rare-event credit risk of a portfolio, even though the default probabilities are low, a large number of simultaneous defaults implies significant losses, so the credit risk of rare events must be measured accurately. Our goal is to estimate the tail of the loss distribution; the models we simulate include both homogeneous and heterogeneous models. Monte Carlo simulation is a useful and widely used computational tool in risk management, but it is unstable when estimating small tail probabilities. Hence, to improve the efficiency of the simulation, we use the importance sampling method proposed by Glasserman and Li (Management Science, 51(11), 2005) and a modified importance sampling method that extends the importance sampling of Chiang et al. (Journal of Derivatives, 15(2), 2007). We simulate different portfolios with these two methods, and we further extend the modified importance sampling method to the heterogeneous model. We measure the efficiency of the two methods by their variance reduction. Numerical results show that both methods achieve variance reduction relative to crude Monte Carlo. Under the homogeneous model, the modified importance sampling method performs very well and is fast; under the heterogeneous model, importance sampling also shows good estimation efficiency and stability.
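Complementary to the exponential-twisting sketch given earlier in this listing, the following sketch applies importance sampling by shifting the mean of the systematic factor in a homogeneous one-factor Gaussian model and reports the variance-reduction factor, the efficiency measure used in this abstract. The shift rule (placing the factor mean where the conditional expected loss reaches the threshold) and all parameter values are illustrative assumptions; this is not the modified importance sampling of the thesis.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

rng = np.random.default_rng(5)
m, p, a, x = 200, 0.01, 0.5, 20          # obligors, PD, factor loading, loss level
barrier = norm.ppf(1 - p)

def cond_pd(z):
    """Conditional default probability given the systematic factor Z = z."""
    return norm.cdf((a * z - barrier) / np.sqrt(1 - a * a))

# Place the factor mean where the conditional expected loss reaches the level x.
mu = brentq(lambda z: m * cond_pd(z) - x, 0.0, 10.0)

def estimate(n, shift):
    z = rng.standard_normal(n) + shift
    lr = np.exp(-z * shift + 0.5 * shift**2)           # N(0,1) / N(shift,1)
    losses = rng.binomial(m, cond_pd(z))               # homogeneous: Binomial(m, p(Z))
    vals = (losses > x) * lr
    return vals.mean(), vals.var(ddof=1)

n = 100000
mc_mean, mc_var = estimate(n, 0.0)
is_mean, is_var = estimate(n, mu)
print("P(L > %d):  crude MC %.3e   factor-shift IS %.3e" % (x, mc_mean, is_mean))
print("variance-reduction factor: %.1f" % (mc_var / is_var))
```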
