21 |
Efficient Risk Simulations for Linear Asset Portfolios. Sak, Halis; Hörmann, Wolfgang; Leydold, Josef. January 2008 (has links) (PDF)
We consider the problem of calculating tail probabilities of the returns of linear asset portfolios. As a flexible and accurate model for the logarithmic returns we use the $t$-copula dependence structure with marginals following the generalized hyperbolic distribution. Exact calculation of the tail-loss probabilities is not possible, and even simulation leads to challenging numerical problems. Applying a new numerical inversion method for the generation of the marginals and importance sampling with a carefully selected mean shift, we develop an efficient simulation algorithm. Numerical results for a variety of realistic portfolio examples show an impressive performance gain. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
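To make the mean-shift idea concrete, here is a minimal sketch under a simplifying assumption: plain Gaussian log-returns instead of the t-copula/generalized-hyperbolic model of the thesis, with the shift chosen by a common heuristic rather than the authors' selection rule. All names and parameter values are illustrative.

```python
import numpy as np

def tail_loss_probability(weights, mu, cov, loss_level, shift, n=100_000, seed=0):
    """Estimate P(w.X <= loss_level) for X ~ N(mu, cov) by sampling from the
    shifted density N(mu + shift, cov) and reweighting with the likelihood ratio."""
    rng = np.random.default_rng(seed)
    inv_cov = np.linalg.inv(cov)
    x = rng.multivariate_normal(mu + shift, cov, size=n)
    # f(x)/g(x) for a pure Gaussian mean shift s: exp(-(x-mu)' C^-1 s + s' C^-1 s / 2)
    log_lr = -(x - mu) @ inv_cov @ shift + 0.5 * shift @ inv_cov @ shift
    returns = x @ weights
    return np.mean((returns <= loss_level) * np.exp(log_lr))

# heuristic shift (assumption): move the mean so the expected portfolio
# return equals the loss level of interest
d = 5
w = np.full(d, 1.0 / d)
mu = np.zeros(d)
cov = 0.04 * (0.7 * np.eye(d) + 0.3 * np.ones((d, d)))
loss = -0.25
shift = (loss - mu @ w) * (cov @ w) / (w @ cov @ w)
print(tail_loss_probability(w, mu, cov, loss, shift))
```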
|
22 |
Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems. Ben Rached, Nadhir. 08 October 2018 (has links)
The probability that a sum of random variables (RVs) exceeds (respectively falls below) a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression of the sum distribution does not exist, and a naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative approach is the use of variance reduction techniques, which achieve the same accuracy requirement with fewer computations.
For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that has the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. A further improvement of this technique is then developed, wherein the twisting is applied only to the components with the greatest impact on the probability of interest. Another challenging problem arises when the components are correlated and follow the Log-normal distribution. In this setting, we develop a generalized hybrid IS scheme based on mean shifting and covariance matrix scaling, and we prove that logarithmic efficiency again holds for two particular instances.
We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys logarithmic efficiency, whereas the second satisfies the bounded relative error criterion under a mild assumption but applies only to independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered: we construct an estimator combining an existing mean shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented. The first is based on IS and achieves bounded relative error under a mild assumption. The second is based on a conditional MC approach and achieves the bounded relative error property in the Generalized Gamma case and logarithmic efficiency in the Log-normal case.
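A minimal sketch of the twisting mechanics for the right tail, assuming i.i.d. exponential variates, for which hazard rate twisting reduces to an exponential change of measure; the thesis's unified treatment of arbitrary, possibly heavy-tailed distributions and its optimal twisting parameter are not reproduced, and the mean-matching choice of theta below is a heuristic assumption.

```python
import numpy as np

def right_tail_is(lam, n_vars, gamma, n_samples=100_000, seed=0):
    """Estimate P(X_1 + ... + X_n > gamma) for i.i.d. X_i ~ Exp(lam):
    the twisted law with parameter theta is Exp((1 - theta) * lam)."""
    rng = np.random.default_rng(seed)
    # heuristic twist (assumption): match the twisted sum's mean to gamma
    theta = max(0.0, 1.0 - n_vars / (lam * gamma))
    lam_t = (1.0 - theta) * lam
    x = rng.exponential(1.0 / lam_t, size=(n_samples, n_vars))
    s = x.sum(axis=1)
    # likelihood ratio: prod_i f(x_i)/g(x_i) = exp(-theta*lam*s) / (1-theta)^n
    log_lr = -theta * lam * s - n_vars * np.log(1.0 - theta)
    return np.mean((s > gamma) * np.exp(log_lr))

print(right_tail_is(lam=1.0, n_vars=4, gamma=30.0))  # P(sum of 4 Exp(1) > 30)
```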
|
23 |
Power System State Estimation and Contingency Constrained Optimal Power Flow - A Numerically Robust Implementation. Pajic, Slobodan. 01 May 2007 (has links)
The research conducted in this dissertation is divided into two main parts. The first part provides further improvements in power system state estimation, and the second part implements Contingency Constrained Optimal Power Flow (CCOPF) in a stochastic multiple contingency framework. For real-time application in modern power systems, the existing Newton-QR state estimation algorithms are too slow and numerically too fragile. This dissertation presents a new and more robust method that is based on trust region techniques. A faster method was found in the class of Krylov subspace iterative methods: the LSQR method, a robust implementation of the conjugate gradient method. Both algorithms have been tested against the widely used Newton-QR state estimator on the standard IEEE test networks. The trust region method-based state estimator was found to be very reliable under severe conditions (bad data, topological and parameter errors). This enhanced reliability justifies the additional time and computational effort required for its execution. The numerical simulations indicate that the iterative Newton-LSQR method is competitive in robustness with the classical direct Newton-QR, and the gain in computational efficiency has not come at the cost of solution reliability. The second part of the dissertation combines Sequential Quadratic Programming (SQP)-based CCOPF with Monte Carlo importance sampling to estimate the operating cost of multiple contingencies. We also developed an LP-based formulation for the CCOPF that can efficiently calculate Locational Marginal Prices (LMPs) under multiple contingencies. Based on the Monte Carlo importance sampling idea, the proposed algorithm can stochastically assess the impact of multiple contingencies on LMP-congestion prices.
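As a hedged sketch of the importance-sampling-over-contingencies idea, the following boosts the small outage probabilities at sampling time and reweights each sampled pattern; `cost_fn` is a hypothetical stand-in for a full CCOPF solve, and the scheme shown is a generic illustration rather than the dissertation's algorithm.

```python
import numpy as np

def expected_contingency_cost(p_outage, cost_fn, n=20_000, boost=20.0, seed=0):
    """Estimate the expected operating cost over random multiple-line outages:
    sample each outage under a boosted probability q and reweight the sampled
    pattern by its true-vs-boosted likelihood ratio."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p_outage, dtype=float)
    q = np.minimum(boost * p, 0.5)        # boosted sampling probabilities
    total = 0.0
    for _ in range(n):
        out = rng.random(p.size) < q      # sampled outage pattern
        lr = np.prod(np.where(out, p / q, (1 - p) / (1 - q)))
        total += lr * cost_fn(out)        # cost_fn: hypothetical CCOPF solve
    return total / n
```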
|
24 |
Parallel MCMC methods and their applications in inverse problems. Russell, Paul. January 2018
In this thesis we introduce a framework for parallel MCMC methods which we call parallel adaptive importance sampling (PAIS). At each iteration we have an ensemble of particles, from which PAIS builds a kernel density estimate (KDE). Using this KDE, we propose a new ensemble that is weighted according to standard importance sampling rules. A state-of-the-art resampling method from the optimal transportation literature, or alternatively our own novel resampling algorithm, can be used to produce an equally weighted ensemble from this weighted ensemble. This equally weighted ensemble is approximately distributed according to the target distribution and is used to progress the algorithm. The PAIS algorithm outputs a weighted sample. We introduce an adaptive scheme for PAIS which automatically tunes the scaling parameters required for efficient sampling. This adaptive tuning converges rapidly for the target distributions we have experimented with and significantly reduces the burn-in period of the algorithm. PAIS is designed to work well on computers with parallel processing units, and we demonstrate that doubling the number of available processing units more than halves the number of iterations required to reach the same accuracy. The numerical examples have been implemented on a shared memory system. PAIS is highly flexible in terms of the proposal distributions and resampling methods that can be used. Throughout the thesis we introduce a number of these proposal schemes and highlight when they may be of use. Of particular interest is the transport map based proposal scheme introduced in Chapter 7 which, while more expensive than the other schemes, allows us to sample efficiently from a wide range of complex target distributions.
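A minimal sketch of one PAIS-style iteration under stated simplifications: multinomial resampling stands in for the optimal-transport or novel resamplers mentioned above, and no adaptive tuning is shown. The KDE proposal makes the importance weights easy to compute, since the proposal density is available in closed form.

```python
import numpy as np
from scipy.stats import gaussian_kde

def pais_step(ensemble, log_target, rng):
    """One sketched PAIS-style iteration: fit a KDE to the ensemble, propose
    from it, importance-weight against the target, and resample to equal
    weights (multinomial here, standing in for the thesis's resamplers)."""
    kde = gaussian_kde(ensemble.T)                  # scipy expects (dim, n)
    proposal = kde.resample(ensemble.shape[0], seed=rng).T
    log_w = np.array([log_target(x) for x in proposal]) - kde.logpdf(proposal.T)
    w = np.exp(log_w - log_w.max())                 # stabilized IS weights
    w /= w.sum()
    idx = rng.choice(len(proposal), size=len(proposal), p=w)
    return proposal[idx]

# usage sketch: 200 particles targeting a standard 2-D Gaussian
rng = np.random.default_rng(0)
ensemble = rng.normal(size=(200, 2))
for _ in range(10):
    ensemble = pais_step(ensemble, lambda x: -0.5 * x @ x, rng)
```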
|
25 |
Kriging-based Approaches for the Probabilistic Analysis of Strip Footings Resting on Spatially Varying Soils. Thajeel, Jawad. 08 December 2017 (has links)
The probabilistic analysis of geotechnical structures involving spatially varying soil properties is generally performed using the Monte Carlo simulation methodology. This method is not suitable for computing the small failure probabilities encountered in practice, because it becomes very expensive in such cases due to the large number of simulations required to calculate accurate values of the failure probability. In this thesis, three probabilistic approaches (named AK-MCS, AK-IS and AK-SS) were developed, each based on active learning and combining Kriging with one of three simulation techniques (Monte Carlo Simulation MCS, Importance Sampling IS, or Subset Simulation SS). Within AK-MCS, a Monte Carlo simulation is performed without evaluating the whole population: the population is predicted using a Kriging meta-model defined from only a few points of the population, thus significantly reducing the computation time with respect to crude MCS. In AK-IS, the more efficient sampling technique IS is used instead of MCS. Within this approach, the small failure probability is estimated with an accuracy similar to that of AK-MCS, but using a much smaller initial population, further reducing the computation time. Finally, in AK-SS, the more efficient sampling technique SS is proposed. This technique avoids the search for design points and can therefore deal with limit state surfaces of arbitrary shape. All three methods were applied to the case of a vertically loaded strip footing resting on a spatially varying soil. The obtained results are presented and discussed.
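A minimal sketch of the AK-MCS loop; the U learning function and the stopping threshold min U >= 2 are standard choices from the active-learning Kriging literature, assumed here rather than taken from the thesis, and `perf_fn` is a hypothetical performance function (failure when it is non-positive).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ak_mcs(perf_fn, population, n_init=12, u_stop=2.0, seed=0):
    """AK-MCS sketch: train a Kriging surrogate on a few points of a Monte
    Carlo population, iteratively add the point with the smallest learning
    function U = |mu|/sigma, stop once min U >= u_stop, then read the
    failure probability off the surrogate sign."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(population), size=n_init, replace=False)
    X = population[idx]
    y = np.array([perf_fn(x) for x in X])
    gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True)
    while True:
        gp.fit(X, y)
        mu, sigma = gp.predict(population, return_std=True)
        u = np.abs(mu) / np.maximum(sigma, 1e-12)
        best = int(np.argmin(u))
        if u[best] >= u_stop:          # surrogate sign is trusted everywhere
            break
        X = np.vstack([X, population[best]])
        y = np.append(y, perf_fn(population[best]))
    return float(np.mean(mu <= 0.0))   # failure when performance <= 0
```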
|
26 |
Efficient Monte Carlo Simulations for the Estimation of Rare Events Probabilities in Wireless Communication Systems. Ben Issaid, Chaouki. 12 November 2019 (has links)
Simulation methods are used when closed-form solutions do not exist. An interesting simulation method that has been widely used in many scientific fields is the Monte Carlo method. Not only is it a simple technique for estimating the quantity of interest, it can also provide relevant information about the estimated value through its confidence interval. However, the classical Monte Carlo method is not a reasonable choice when dealing with rare event probabilities: very small probabilities require a huge number of simulation runs, and the computational time of the simulation thus increases significantly. This observation is the main motivation of the present work. In this thesis, we propose efficient importance sampling estimators to evaluate rare event probabilities. In the first part of the thesis, we consider a variety of turbulence regimes and study the outage probability of free-space optics communication systems under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach, based on the exponential twisting technique, to offer fast and accurate results. We also show that our approach extends to the multihop scenario. In the second part of the thesis, we are interested in assessing the outage probability achieved by some diversity techniques over generalized fading channels. In many circumstances, this is related to the difficult question of analyzing the statistics of a sum of random variables. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma, α-µ, κ-µ, and η-µ fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining. We show the accuracy and efficiency of our approach compared to naive Monte Carlo via selected numerical simulations in both case studies. In the last part of this thesis, we propose efficient importance sampling estimators for the left tail of positive Gaussian quadratic forms in both real and complex settings. We show that these estimators possess the bounded relative error property. These estimators are then used to estimate the outage probability of maximum ratio combining diversity receivers over correlated Nakagami-m or correlated Rician fading channels.
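As a simplified illustration of left-tail importance sampling for sums of positive RVs, here is a sketch for i.i.d. Gamma variates using a negative exponential twist; the thesis's estimators for Gamma-Gamma, α-µ, κ-µ and η-µ channels and for Gaussian quadratic forms are more involved and are not reproduced, and the mean-matching twist is a heuristic assumption.

```python
import numpy as np

def left_tail_gamma_sum(k, scale, n_vars, gamma, n_samples=100_000, seed=0):
    """Estimate P(X_1 + ... + X_n < gamma) for i.i.d. X_i ~ Gamma(k, scale)
    with a negative exponential twist chosen so the twisted sum's mean is
    gamma (valid in the rare regime gamma < n*k*scale)."""
    rng = np.random.default_rng(seed)
    theta = 1.0 / scale - n_vars * k / gamma     # negative twist
    scale_t = scale / (1.0 - theta * scale)      # twisted law: Gamma(k, scale_t)
    x = rng.gamma(k, scale_t, size=(n_samples, n_vars))
    s = x.sum(axis=1)
    # likelihood ratio: exp(-theta*s) * (1 - theta*scale)^(-n*k)
    log_lr = -theta * s - n_vars * k * np.log1p(-theta * scale)
    return np.mean((s < gamma) * np.exp(log_lr))

print(left_tail_gamma_sum(k=2.0, scale=1.0, n_vars=5, gamma=1.0))
```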
|
27 |
Monte Carlo Methods for Multifactor Portfolio Credit Risk. Lee, Yi-hsi. 08 February 2010
This study develops a dynamic importance sampling (DIS) method for numerical simulations of rare events. The DIS method is flexible, fast, and accurate; most importantly, it is very easy to implement. It can be applied to any multifactor copula model constructed from arbitrary independent random variables. First, the key common factor (KCF) is determined by the maximum value among the coefficients of the factor loadings. Second, by locating the indicator through order statistics and applying truncated sampling techniques, the probability of large losses (PLL) and the expected excess loss above threshold (EELAT) can be estimated precisely. Apart from the assumption that the factor loadings of the KCF contain no zero elements, we do not impose any restrictions on the composition of the portfolio. The DIS method developed in this study can therefore be applied to a very wide range of credit risk models. In numerical experiments comparing the method of Glasserman, Kang and Shahabuddin (2008) with the DIS method under the multifactor Gaussian copula model and a high market impact condition (marketwide factor loadings of 0.8), both the variance reduction ratio and the efficiency ratio of the DIS method are much better than those of Glasserman et al. (2008). The two methods yield comparable results when the marketwide factor loadings decrease to the range of 0.5 to 0.25. The DIS method is, however, superior to the method of Glasserman et al. (2008) in terms of practicability. Numerical simulation results demonstrate that the DIS method is feasible not only under general market conditions but also, in particular, under high market impact conditions such as credit contagion or market collapse environments. The numerical results also indicate that the DIS estimators exhibit bounded relative error.
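For orientation, a sketch of a baseline one-factor Gaussian copula estimator with a mean shift on the systematic factor, in the spirit of the Glasserman, Kang and Shahabuddin (2008) benchmark mentioned above; this is not the DIS algorithm itself, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def loss_tail_probability(p_default, exposure, loading, loss_level, shift,
                          n=200_000, seed=0):
    """One-factor Gaussian copula sketch: obligor i defaults when
    a_i*Z + sqrt(1 - a_i^2)*eps_i < Phi^{-1}(p_i). Sampling the systematic
    factor as Z ~ N(shift, 1) with shift < 0 makes large losses frequent;
    each sample is reweighted by exp(-shift*Z + shift^2/2)."""
    rng = np.random.default_rng(seed)
    c = norm.ppf(p_default)                       # default thresholds
    z = rng.normal(shift, 1.0, size=n)
    eps = rng.normal(size=(n, len(p_default)))
    latent = loading * z[:, None] + np.sqrt(1.0 - loading**2) * eps
    loss = (latent < c) @ exposure
    lr = np.exp(-shift * z + 0.5 * shift**2)
    return np.mean((loss > loss_level) * lr)

# 50 obligors, 1% default probability, marketwide loading 0.8
p = np.full(50, 0.01)
print(loss_tail_probability(p, np.ones(50), 0.8, loss_level=25, shift=-3.0))
```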
|
28 |
Monte Carlo Statistical Methods: Integration and Optimization. Pan, Tian-Tian. 10 July 2012
This paper covers Chapters 1 to 5 (except Chapter 4) of the book Monte Carlo Statistical Methods (second edition) by Robert and Casella (2004). The goal is to translate the contents of these chapters into Chinese, correct mistakes, add details to the examples, translate the algorithms of the examples into Mathematica (version 7) code, apply the simulated annealing method to the estimation of parameters from rounded data, and discuss the results. This paper provides Mathematica code for almost every example and shows the actual results, so it can serve as a companion for readers of the book or for solving problems related to these examples.
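A minimal sketch (in Python rather than the thesis's Mathematica) of simulated annealing applied to parameter estimation from rounded data, assuming normally distributed observations rounded to integers; the cooling schedule and step size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def rounded_loglik(params, y):
    """Log-likelihood of normal data observed only after rounding to integers."""
    mu, sigma = params
    if sigma <= 0:
        return -np.inf
    p = norm.cdf((y + 0.5 - mu) / sigma) - norm.cdf((y - 0.5 - mu) / sigma)
    return np.sum(np.log(np.maximum(p, 1e-300)))

def simulated_annealing(objective, x0, n_iter=5_000, t0=1.0, step=0.2, seed=0):
    """Generic maximizer: random-walk proposals accepted with the Metropolis
    rule at a geometrically cooling temperature."""
    rng = np.random.default_rng(seed)
    x, fx = np.asarray(x0, float), objective(x0)
    best, fbest = x.copy(), fx
    for i in range(n_iter):
        t = t0 * 0.999**i
        cand = x + rng.normal(0.0, step, size=x.size)
        fc = objective(cand)
        if fc > fx or rng.random() < np.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = x.copy(), fx
    return best, fbest

rng = np.random.default_rng(1)
y = np.round(rng.normal(3.2, 1.5, size=200))
est, ll = simulated_annealing(lambda p: rounded_loglik(p, y), x0=[0.0, 1.0])
print(est, ll)
```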
|
29 |
Probabilistic security management for power system operations with large amounts of wind power. Hamon, Camille. January 2015
Power systems are critical infrastructures for society. They are therefore planned and operated to provide reliable electricity delivery. The set of tools and methods for doing so is gathered under security management; these tools are designed to ensure that all operating constraints are fulfilled at all times. During the past decade, rising awareness of issues such as climate change, depletion of fossil fuels and energy security has triggered large investments in wind power. The limited predictability of wind power, in the form of forecast errors, poses a number of challenges for integrating wind power in power systems. This limited predictability increases the uncertainty already existing in power systems in the form of random occurrences of contingencies and load forecast errors. It is widely acknowledged that this added uncertainty due to wind power and other variable renewable energy sources will require new tools for security management as the penetration levels of these energy sources become significant. In this thesis, a set of tools for security management under uncertainty is developed. The key novelty in the proposed tools is that they build upon probabilistic descriptions, in terms of distribution functions, of the uncertainty. By considering the distribution functions of the uncertainty, the proposed tools can consider all possible future operating conditions captured in the probabilistic forecasts, as well as the likelihood of these operating conditions. By contrast, today's tools are based on the deterministic N-1 criterion, which considers only one future operating condition and disregards its likelihood. Given a list of contingencies selected by the system operator and probabilistic forecasts for the load and wind power, an operating risk is defined in this thesis as the sum of the probabilities of the pre- and post-contingency violations of the operating constraints, weighted by the probability of occurrence of the contingencies. For security assessment, this thesis proposes efficient Monte-Carlo methods to estimate the operating risk. Importance sampling is used to substantially reduce the computational time. In addition, sample-free analytical approximations are developed to quickly estimate the operating risk. For security enhancement, the analytical approximations are further embedded in an optimization problem that aims at obtaining the cheapest generation re-dispatch ensuring that the operating risk remains below a certain threshold. The proposed tools build upon approximations, developed in this thesis, of the stable feasible domain where all operating constraints are fulfilled.
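To make the operating-risk definition concrete, here is a minimal sketch that estimates it with a mean shift on Gaussian forecast errors; the Gaussian assumption, the `violates` constraint check (standing in for a power-flow computation), and all parameters are hypothetical illustrations, not the thesis's implementation.

```python
import numpy as np

def operating_risk(contingencies, violates, mu, cov, shift, n=20_000, seed=0):
    """Sketch of the operating-risk estimator: a contingency-probability-
    weighted sum of constraint-violation probabilities, each estimated by
    mean-shift importance sampling over Gaussian forecast errors."""
    rng = np.random.default_rng(seed)
    inv_cov = np.linalg.inv(cov)
    err = rng.multivariate_normal(mu + shift, cov, size=n)
    # Gaussian mean-shift likelihood ratio
    lr = np.exp(-(err - mu) @ inv_cov @ shift + 0.5 * shift @ inv_cov @ shift)
    risk = 0.0
    for prob_k, k in contingencies:      # (occurrence probability, contingency id)
        viol = np.array([violates(e, k) for e in err])
        risk += prob_k * np.mean(viol * lr)
    return risk
```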
|
30 |
Monte Carlo Integration Using Importance Sampling and Gibbs Sampling. Hörmann, Wolfgang; Leydold, Josef. January 2005 (has links) (PDF)
To evaluate the expectation of a simple function with respect to a complicated multivariate density, Monte Carlo integration has become the main technique. Gibbs sampling and importance sampling are the most popular methods for this task. In this contribution we propose a new, simple, general-purpose importance sampling procedure. In a simulation study we compare the performance of this method with that of Gibbs sampling and of importance sampling using a vector of independent variates. It turns out that the new procedure is much better than independent importance sampling; up to dimension five it is also better than Gibbs sampling. The simulation results indicate that for higher dimensions Gibbs sampling is superior. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
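A minimal sketch of a generic (self-normalized) importance sampling estimator for such expectations; the heavier-tailed multivariate t proposal is an illustrative choice, not the procedure proposed in the report.

```python
import numpy as np
from scipy.stats import multivariate_normal, multivariate_t

def is_expectation(h, log_f, g, n=100_000, seed=0):
    """Self-normalized IS estimate of E_f[h(X)]: draw from proposal g and
    weight by f/g; tolerates an unnormalized target density f."""
    rng = np.random.default_rng(seed)
    x = g.rvs(size=n, random_state=rng)
    log_w = log_f(x) - g.logpdf(x)
    w = np.exp(log_w - log_w.max())          # stabilized weights
    return np.sum(w * h(x)) / np.sum(w)

# example: E[||X||^2] under a correlated Gaussian target, t proposal
d = 3
cov = 0.5 * np.eye(d) + 0.5 * np.ones((d, d))
target = multivariate_normal(mean=np.zeros(d), cov=cov)
proposal = multivariate_t(loc=np.zeros(d), shape=2.0 * cov, df=5)
est = is_expectation(lambda x: (x**2).sum(axis=1), target.logpdf, proposal)
print(est, np.trace(cov))                    # estimate vs exact value
```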
|