21

Efficient Risk Simulations for Linear Asset Portfolios

Sak, Halis, Hörmann, Wolfgang, Leydold, Josef January 2008 (has links) (PDF)
We consider the problem of calculating tail probabilities of the returns of linear asset portfolios. As a flexible and accurate model for the logarithmic returns we use the $t$-copula dependence structure and marginals following the generalized hyperbolic distribution. Exact calculation of the tail-loss probabilities is not possible and even simulation leads to challenging numerical problems. Applying a new numerical inversion method for the generation of the marginals and importance sampling with a carefully selected mean shift, we develop an efficient simulation algorithm. Numerical results for a variety of realistic portfolio examples show an impressive performance gain. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
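To make the importance-sampling idea concrete, here is a minimal Python sketch of a mean-shift IS estimator for a portfolio tail-loss probability. It replaces the abstract's t-copula with generalized hyperbolic marginals by a plain multivariate t model, the shift is hand-tuned rather than carefully selected as in the paper, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simplified stand-in for the paper's model: log-returns are multivariate t
# (normal mixture) instead of t-copula + generalized hyperbolic marginals.
d, nu, scale = 5, 6, 0.01
weights = np.full(d, 1.0 / d)                       # linear, equally weighted portfolio
corr = 0.3 * np.ones((d, d)) + 0.7 * np.eye(d)
L = np.linalg.cholesky(corr)
loss_threshold = 0.03                               # tail event: loss exceeds 3%

def losses(z, w_chi):
    """Portfolio losses for standard-normal draws z and chi-square mixing w_chi."""
    returns = scale * (z @ L.T) / np.sqrt(w_chi)[:, None]
    return -(returns @ weights)

# Mean shift pointing into the loss region (hand-tuned here; the paper
# describes how to select it carefully).
v = L.T @ weights
mu = -(loss_threshold / (scale * (v @ v))) * v

n = 200_000
z = rng.standard_normal((n, d)) + mu                # proposal: shifted normal
w_chi = rng.chisquare(nu, size=n) / nu              # mixing variable left unchanged
lik_ratio = np.exp(-z @ mu + 0.5 * mu @ mu)         # N(0,I) / N(mu,I) density ratio
indicator = losses(z, w_chi) > loss_threshold

est = np.mean(lik_ratio * indicator)
se = np.std(lik_ratio * indicator, ddof=1) / np.sqrt(n)
print(f"P(loss > {loss_threshold:.0%}) ~= {est:.3e} (s.e. {se:.1e})")
```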
22

Rare Events Simulations with Applications to the Performance Evaluation of Wireless Communication Systems

Ben Rached, Nadhir 08 October 2018 (has links)
The probability that a sum of random variables (RVs) exceeds (respectively falls below) a given threshold is often encountered in the performance analysis of wireless communication systems. Generally, a closed-form expression of the sum distribution does not exist, and a naive Monte Carlo (MC) simulation is computationally expensive when dealing with rare events. An alternative approach is the use of variance reduction techniques, known for their efficiency in requiring fewer computations to achieve the same accuracy. For the right-tail region, we develop a unified hazard rate twisting importance sampling (IS) technique that has the advantage of being logarithmically efficient for arbitrary distributions under the independence assumption. A further improvement of this technique is then developed wherein the twisting is applied only to the components having more impact on the probability of interest than the others. Another challenging problem arises when the components are correlated and follow the Log-normal distribution. In this setting, we develop a generalized hybrid IS scheme based on mean shifting and covariance matrix scaling techniques, and we prove that logarithmic efficiency holds again for two particular instances. We also propose two unified IS approaches to estimate the left tail of sums of independent positive RVs. The first applies to arbitrary distributions and enjoys the logarithmic efficiency criterion, whereas the second satisfies the bounded relative error criterion under a mild assumption but is only applicable to the case of independent and identically distributed RVs. The left tail of correlated Log-normal variates is also considered. We construct an estimator combining an existing mean shifting IS approach with a control variate technique and prove that it possesses the asymptotically vanishing relative error property. A further interesting problem is the left-tail estimation of sums of ordered RVs. Two estimators are presented. The first is based on IS and achieves bounded relative error under a mild assumption. The second is based on a conditional MC approach and achieves the bounded relative error property for the Generalized Gamma case and logarithmic efficiency for the Log-normal case.
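As an illustration of hazard-rate twisting, the following sketch estimates the right-tail probability of a sum of i.i.d. heavy-tailed Weibull variables. The twisting parameter is a simple hand-picked choice, not the optimized choice for which the thesis proves logarithmic efficiency, and the Weibull model and all constants are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Right tail of a sum of i.i.d. heavy-tailed Weibull(k < 1) variables,
# P(X_1 + ... + X_N > gamma), via hazard-rate twisting.
N, k, lam, gamma = 4, 0.5, 1.0, 100.0
n_samples = 200_000

def cum_hazard(x):          # Lambda(x) = (x/lam)^k for the Weibull distribution
    return (x / lam) ** k

def inv_cum_hazard(e):      # Lambda^{-1}(e)
    return lam * e ** (1.0 / k)

# Twisting parameter: hand-picked here; the thesis derives a choice
# guaranteeing logarithmic efficiency as gamma grows.
theta = 1.0 - N / cum_hazard(gamma)

# Under hazard-rate twisting, Lambda(X) ~ Exp(1 - theta) instead of Exp(1).
E = rng.exponential(scale=1.0 / (1.0 - theta), size=(n_samples, N))
X = inv_cum_hazard(E)

# Log likelihood ratio of the original density over the twisted one, per path.
log_lr = (-theta * E - np.log(1.0 - theta)).sum(axis=1)
indicator = X.sum(axis=1) > gamma

est = np.mean(np.exp(log_lr) * indicator)
se = np.std(np.exp(log_lr) * indicator, ddof=1) / np.sqrt(n_samples)
print(f"P(sum > {gamma}) ~= {est:.3e} (s.e. {se:.1e})")
```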
23

Parallel MCMC methods and their applications in inverse problems

Russell, Paul January 2018 (has links)
In this thesis we introduce a framework for parallel MCMC methods which we call parallel adaptive importance sampling (PAIS). At each iteration we have an ensemble of particles, from which PAIS builds a kernel density estimate (KDE). We propose a new ensemble, using this KDE, that is weighted according to standard importance sampling rules. A state-of-the-art resampling method from the optimal transportation literature, or alternatively our own novel resampling algorithm, can be used to produce an equally weighted ensemble from this weighted ensemble. This equally weighted ensemble is approximately distributed according to the target distribution and is used to progress the algorithm. The PAIS algorithm outputs a weighted sample. We introduce an adaptive scheme for PAIS which automatically tunes the scaling parameters required for efficient sampling. This adaptive tuning converges rapidly for the target distributions we have experimented with and significantly reduces the burn-in period of the algorithm. PAIS has been designed to work well on computers with parallel processing units available, and we have demonstrated that doubling the number of available processing units more than halves the number of iterations required to reach the same accuracy. The numerical examples have been implemented on a shared memory system. PAIS is incredibly flexible in terms of the proposal distributions and resampling methods we can use. Throughout the thesis we introduce a number of these proposal schemes and highlight when they may be of use. Of particular interest is the transport map based proposal scheme introduced in Chapter 7 which, while more expensive than the other schemes, allows us to sample efficiently from a wide range of complex target distributions.
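A stripped-down, single-process sketch of one PAIS-style iteration might look as follows: fit a KDE to the current ensemble, propose from it, weight by target over proposal, and resample. Plain multinomial resampling stands in for the optimal-transport or novel resampling schemes of the thesis, and the banana-shaped target is only a toy stand-in for a real posterior.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

def log_target(x):
    """Unnormalised log-density of a toy banana-shaped target (posterior stand-in)."""
    x1, x2 = x[:, 0], x[:, 1]
    return -0.5 * (x1 ** 2 / 4.0 + (x2 + 0.5 * x1 ** 2 - 2.0) ** 2)

def pais_iteration(ensemble, n_new):
    """One simplified PAIS step: KDE proposal, IS weights, multinomial resampling."""
    kde = gaussian_kde(ensemble.T)                 # kernel density estimate of the ensemble
    proposal = kde.resample(n_new).T               # propose a new ensemble from the KDE
    log_w = log_target(proposal) - kde.logpdf(proposal.T)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    idx = rng.choice(n_new, size=n_new, p=w)       # resample to an equally weighted ensemble
    return proposal[idx], proposal, w

ensemble = rng.standard_normal((200, 2)) * 3.0     # crude initial ensemble
for _ in range(20):                                # iterations, the first few acting as burn-in
    ensemble, weighted_particles, weights = pais_iteration(ensemble, 200)

# The weighted sample from the final iteration estimates posterior expectations.
print("posterior mean estimate:", weighted_particles.T @ weights)
```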
24

Kriging-based Approaches for the Probabilistic Analysis of Strip Footings Resting on Spatially Varying Soils

Thajeel, Jawad 08 December 2017 (has links)
The probabilistic analysis of geotechnical structures involving spatially varying soil properties is generally performed using Monte Carlo simulation. This method is not suitable for the computation of the small failure probabilities encountered in practice because it becomes very time-expensive in such cases, due to the large number of simulations required to calculate accurate values of the failure probability. Three probabilistic approaches (named AK-MCS, AK-IS and AK-SS) based on active learning and combining Kriging with one of three simulation techniques (Monte Carlo Simulation MCS, Importance Sampling IS or Subset Simulation SS) were developed. Within AK-MCS, a Monte Carlo simulation is performed without evaluating the whole population. Indeed, the population is predicted using a Kriging meta-model which is defined using only a few points of the population, thus significantly reducing the computation time with respect to crude MCS. In AK-IS, a more efficient sampling technique 'IS' is used instead of 'MCS'. In the framework of this approach, the small failure probability is estimated with a similar accuracy to AK-MCS but using a much smaller initial population, thus further reducing the computation time. Finally, in AK-SS, a more efficient sampling technique 'SS' is proposed. This technique avoids the search for design points and thus can deal with arbitrary shapes of the limit state surfaces. All three methods were applied to the case of a vertically loaded strip footing resting on a spatially varying soil. The obtained results are presented and discussed.
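The AK-MCS idea can be sketched on a toy analytic limit state, using a scikit-learn Gaussian process as the Kriging meta-model together with the usual U learning function; the limit-state function, stopping threshold and all sizes below are illustrative assumptions, not the strip-footing model of the thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)

def g(x):
    """Toy limit-state function (failure when g <= 0); a stand-in for the
    strip-footing model with spatially varying soil studied in the thesis."""
    return 3.0 - x[:, 0] - 0.3 * x[:, 1] ** 2

n_mc = 50_000
population = rng.standard_normal((n_mc, 2))      # Monte Carlo population (standard-normal inputs)

# Initial design of experiments: only a handful of true model evaluations.
doe_idx = rng.choice(n_mc, size=12, replace=False)
X_doe, y_doe = population[doe_idx], g(population[doe_idx])

gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)

for _ in range(100):                              # active-learning loop
    gp.fit(X_doe, y_doe)
    mu, sigma = gp.predict(population, return_std=True)
    U = np.abs(mu) / np.maximum(sigma, 1e-12)     # U learning function
    best = np.argmin(U)
    if U[best] >= 2.0:                            # common AK-MCS stopping criterion
        break
    X_doe = np.vstack([X_doe, population[best]])  # evaluate the true model once more
    y_doe = np.append(y_doe, g(population[best:best + 1]))

pf = np.mean(mu <= 0.0)                           # failure probability from the surrogate
print(f"P_f ~= {pf:.3e} after {len(y_doe)} limit-state evaluations")
```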
25

Efficient Monte Carlo Simulations for the Estimation of Rare Events Probabilities in Wireless Communication Systems

Ben Issaid, Chaouki 12 November 2019 (has links)
Simulation methods are used when closed-form solutions do not exist. An interesting simulation method that has been widely used in many scientific fields is the Monte Carlo method. Not only is it a simple technique for estimating the quantity of interest, but it can also provide relevant information about the value to be estimated through its confidence interval. However, the classical Monte Carlo method is not a reasonable choice when dealing with rare event probabilities. In fact, very small probabilities require a huge number of simulation runs, and thus the computational time of the simulation increases significantly. This observation lies behind the main motivation of the present work. In this thesis, we propose efficient importance sampling estimators to evaluate rare event probabilities. In the first part of the thesis, we consider a variety of turbulence regimes, and we study the outage probability of free-space optics communication systems under a generalized pointing error model with both a nonzero boresight component and different horizontal and vertical jitter effects. More specifically, we use an importance sampling approach, based on the exponential twisting technique, to offer fast and accurate results. We also show that our approach extends to the multihop scenario. In the second part of the thesis, we are interested in assessing the outage probability achieved by some diversity techniques over generalized fading channels. In many circumstances, this is related to the difficult question of analyzing the statistics of the sum of random variables. More specifically, we propose robust importance sampling schemes that efficiently evaluate the outage probability of diversity receivers over Gamma-Gamma, α-µ, κ-µ, and η-µ fading channels. The proposed estimators satisfy the well-known bounded relative error criterion for both maximum ratio combining and equal gain combining cases. We show the accuracy and the efficiency of our approach compared to naive Monte Carlo via selected numerical simulations in both case studies. In the last part of this thesis, we propose efficient importance sampling estimators for the left tail of positive Gaussian quadratic forms in both real and complex settings. We show that these estimators possess the bounded relative error property. These estimators are then used to estimate the outage probability of maximum ratio combining diversity receivers over correlated Nakagami-m or correlated Rician fading channels.
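A much simpler instance of the same outage-probability problem, i.i.d. unit-mean exponential branch SNRs rather than the Gamma-Gamma or α-µ channels studied in the thesis, shows the structure of a left-tail IS estimator; the rate-scaling proposal and all constants are assumptions made for this sketch.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)

# Outage probability of an L-branch combiner with i.i.d. unit-mean exponential
# branch SNRs (Rayleigh fading): outage when the sum of SNRs falls below gamma_th.
L, gamma_th, n = 8, 0.5, 200_000

# Proposal: exponentials with rate r > 1, chosen so the proposal mean of the sum
# sits at the threshold (a simple left-tail shift).
r = L / gamma_th
X = rng.exponential(scale=1.0 / r, size=(n, L))

# Log likelihood ratio of the original (rate-1) density over the proposal, per path.
log_lr = ((r - 1.0) * X - np.log(r)).sum(axis=1)
indicator = X.sum(axis=1) < gamma_th

est = np.mean(np.exp(log_lr) * indicator)
se = np.std(np.exp(log_lr) * indicator, ddof=1) / np.sqrt(n)
print(f"P(outage) ~= {est:.3e} (s.e. {se:.1e})")
print("exact (Gamma cdf for comparison):", gamma.cdf(gamma_th, a=L))
```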
26

Monte Carlo Methods for Multifactor Portfolio Credit Risk

Lee, Yi-hsi 08 February 2010 (has links)
This study develops a dynamic importance sampling (DIS) method for numerical simulations of rare events. The DIS method is flexible, fast, and accurate. Most importantly, it is very easy to implement. It can be applied to any multifactor copula model constructed from arbitrary independent random variables. First, the key common factor (KCF) is determined by the maximum value among the coefficients of the factor loadings. Second, by searching the indicator through order statistics and applying truncated sampling techniques, the probability of large losses (PLL) and the expected excess loss above a threshold (EELAT) can be estimated precisely. Except for the assumption that the factor loadings of the KCF contain no zero elements, we do not impose any restrictions on the composition of the portfolio. The DIS method developed in this study can therefore be applied to a very wide range of credit risk models. Comparing numerical experiments between the method of Glasserman, Kang and Shahabuddin (2008) and the DIS method developed in this study, under the multifactor Gaussian copula model and a high-market-impact condition (marketwide factor loading of 0.8), both the variance reduction ratio and the efficiency ratio of the DIS model are much better than those of Glasserman et al. (2008). The two methods give comparable results when the marketwide factor loading decreases to the range of 0.5 to 0.25. However, the DIS method is superior to the method of Glasserman et al. (2008) in terms of practicability. Numerical simulation results demonstrate that the DIS method is feasible not only under general market conditions but also, in particular, under high-market-impact conditions, especially in credit contagion or market collapse environments. The numerical results also indicate that the DIS estimators exhibit bounded relative error.
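The flavour of factor-shift importance sampling for portfolio credit risk can be conveyed by the following one-factor Gaussian copula sketch. It shifts the single common factor with a hand-tuned mean rather than implementing the dynamic IS, key-common-factor and truncated-sampling machinery of this study, and every parameter is illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# One-factor Gaussian copula portfolio: obligor m defaults when
# a*Z + sqrt(1 - a^2)*eps_m < Phi^{-1}(p).  We estimate P(loss > x) by
# importance sampling on the common factor Z (mean shift), a simplified
# cousin of the dynamic IS on the key common factor.
M, p, a, loss_threshold = 1000, 0.01, 0.8, 500     # obligors, PD, factor loading, large-loss level
default_barrier = norm.ppf(p)

mu = -3.2            # shift Z toward the default region (hand-tuned; the study automates this step)
n = 100_000

Z = rng.standard_normal(n) + mu
lr = np.exp(-Z * mu + 0.5 * mu ** 2)               # N(0,1) / N(mu,1) density ratio
# Conditional default probabilities and losses given Z (unit exposure per obligor).
p_z = norm.cdf((default_barrier - a * Z) / np.sqrt(1.0 - a ** 2))
port_losses = rng.binomial(M, p_z)                 # conditionally independent defaults

indicator = port_losses > loss_threshold
est = np.mean(lr * indicator)
se = np.std(lr * indicator, ddof=1) / np.sqrt(n)
print(f"P(loss > {loss_threshold}) ~= {est:.3e} (s.e. {se:.1e})")
```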
27

Probabilistic security management for power system operations with large amounts of wind power

Hamon, Camille January 2015 (has links)
Power systems are critical infrastructures for society. They are therefore planned and operated to provide reliable electricity delivery. The set of tools and methods to do so are gathered under security management and are designed to ensure that all operating constraints are fulfilled at all times. During the past decade, rising awareness about issues such as climate change, depletion of fossil fuels and energy security has triggered large investments in wind power. The limited predictability of wind power, in the form of forecast errors, poses a number of challenges for integrating wind power in power systems. This limited predictability increases the uncertainty already existing in power systems in the form of random occurrences of contingencies and load forecast errors. It is widely acknowledged that this added uncertainty due to wind power and other variable renewable energy sources will require new tools for security management as the penetration levels of these energy sources become significant. In this thesis, a set of tools for security management under uncertainty is developed. The key novelty in the proposed tools is that they build upon probabilistic descriptions, in terms of distribution functions, of the uncertainty. By considering the distribution functions of the uncertainty, the proposed tools can consider all possible future operating conditions captured in the probabilistic forecasts, as well as the likelihood of these operating conditions. By contrast, today's tools are based on the deterministic N-1 criterion that only considers one future operating condition and disregards its likelihood. Given a list of contingencies selected by the system operator and probabilistic forecasts for the load and wind power, an operating risk is defined in this thesis as the sum of the probabilities of the pre- and post-contingency violations of the operating constraints, weighted by the probability of occurrence of the contingencies. For security assessment, this thesis proposes efficient Monte Carlo methods to estimate the operating risk. Importance sampling is used to substantially reduce the computational time. In addition, sample-free analytical approximations are developed to quickly estimate the operating risk. For security enhancement, the analytical approximations are further embedded in an optimization problem that aims at obtaining the cheapest generation re-dispatch that ensures that the operating risk remains below a certain threshold. The proposed tools build upon approximations, developed in this thesis, of the stable feasible domain where all operating constraints are fulfilled.
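To illustrate how such an operating risk could be estimated, here is a deliberately tiny sketch: one transmission corridor, a normal net-load forecast-error model, and a handful of contingencies, with a mean-shifted IS estimator of each violation probability. The network model, contingency list and all numbers are invented for the example and do not come from the thesis.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy operating-risk computation: one transmission corridor, a normally
# distributed net-load forecast error (load minus wind), and contingencies
# that each reduce the corridor's transfer limit.
forecast_flow, error_std = 900.0, 60.0                   # MW
contingencies = {                                        # probability, post-contingency limit (MW)
    "none":   (0.98, 1200.0),
    "line A": (0.015, 1050.0),
    "line B": (0.005, 980.0),
}

def violation_prob_is(limit, n=50_000):
    """P(forecast_flow + error > limit) by mean-shifted importance sampling."""
    mu = max(limit - forecast_flow, 0.0)                 # shift the error toward the violation region
    e = rng.normal(mu, error_std, size=n)
    lr = np.exp((-2.0 * e * mu + mu ** 2) / (2.0 * error_std ** 2))   # N(0,s^2)/N(mu,s^2) ratio
    return np.mean(lr * (forecast_flow + e > limit))

# Operating risk: contingency-probability-weighted sum of violation probabilities.
operating_risk = sum(p_c * violation_prob_is(limit) for p_c, limit in contingencies.values())
print(f"operating risk ~= {operating_risk:.3e}")
```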
28

Monte Carlo Integration Using Importance Sampling and Gibbs Sampling

Hörmann, Wolfgang, Leydold, Josef January 2005 (has links) (PDF)
To evaluate the expectation of a simple function with respect to a complicated multivariate density, Monte Carlo integration has become the main technique. Gibbs sampling and importance sampling are the most popular methods for this task. In this contribution we propose a new simple general-purpose importance sampling procedure. In a simulation study we compare the performance of this method with the performance of Gibbs sampling and of importance sampling using a vector of independent variates. It turns out that the new procedure is much better than independent importance sampling; up to dimension five it is also better than Gibbs sampling. The simulation results indicate that for higher dimensions Gibbs sampling is superior. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
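A small-scale version of this comparison, for a bivariate normal target and a simple integrand, might be coded as follows; the target, proposal spread and integrand are assumptions chosen only to show the two estimators side by side.

```python
import numpy as np

rng = np.random.default_rng(7)

# Target: bivariate normal with correlation rho; quantity of interest E[f(X)].
rho, n = 0.8, 20_000
f = lambda x1, x2: x1 ** 2 + x2                           # simple integrand, exact value 1

def log_target(x1, x2):
    return -(x1 ** 2 - 2 * rho * x1 * x2 + x2 ** 2) / (2 * (1 - rho ** 2))

# (a) Importance sampling with a vector of independent N(0, 1.5^2) variates.
s = 1.5
x1, x2 = rng.normal(0, s, n), rng.normal(0, s, n)
log_w = log_target(x1, x2) + (x1 ** 2 + x2 ** 2) / (2 * s ** 2)   # unnormalised log weights
w = np.exp(log_w - log_w.max())
is_estimate = np.sum(w * f(x1, x2)) / np.sum(w)                    # self-normalised IS estimate

# (b) Gibbs sampling using the exact full conditionals of the bivariate normal.
g1 = g2 = 0.0
draws = np.empty((n, 2))
for i in range(n):
    g1 = rng.normal(rho * g2, np.sqrt(1 - rho ** 2))
    g2 = rng.normal(rho * g1, np.sqrt(1 - rho ** 2))
    draws[i] = g1, g2
gibbs_estimate = f(draws[:, 0], draws[:, 1]).mean()

print(f"E[f] exact = 1.000, IS = {is_estimate:.3f}, Gibbs = {gibbs_estimate:.3f}")
```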
29

Polytopes Arising from Binary Multi-way Contingency Tables and Characteristic Imsets for Bayesian Networks

Xi, Jing 01 January 2013 (has links)
The main theme of this dissertation is the study of polytopes arising from binary multi-way contingency tables and characteristic imsets for Bayesian networks. First, we study three-way tables whose entries are independent Bernoulli random variables with canonical parameters under no-three-way-interaction generalized linear models. Here, we use the sequential importance sampling (SIS) method with the conditional Poisson (CP) distribution to sample binary three-way tables with the sufficient statistics, i.e., all two-way marginal sums, fixed. Compared with the Markov chain Monte Carlo (MCMC) approach with a Markov basis (MB), the SIS procedure has the advantage that it does not require expensive or prohibitive pre-computations. Note that this problem can also be considered as estimating the number of lattice points inside the polytope defined by the zero-one and two-way marginal constraints. The theorems in Chapter 2 give the parameters for the CP distribution on each column when it is sampled. In this chapter, we also present the algorithms, the simulation results, and the results for Samson's monks data. Bayesian networks, a part of the family of probabilistic graphical models, are widely applied in many areas and much work has been done in model selection for Bayesian networks. The second part of this dissertation investigates the problem of finding the optimal graph by using characteristic imsets, where characteristic imsets are defined as 0-1 vector representations of Bayesian networks which are unique up to Markov equivalence. Characteristic imset polytopes are defined as the convex hull of all characteristic imsets we consider. It was proven that the problem of finding the optimal Bayesian network for a specific dataset can be converted to a linear programming problem over the characteristic imset polytope [51]. In Chapter 3, we first consider characteristic imset polytopes for all diagnosis models and show that these polytopes are direct products of simplices. Then we give the combinatorial description of all edges and all facets of these polytopes. At the end of this chapter, we generalize these results to the characteristic imset polytopes for all Bayesian networks with a fixed underlying ordering of nodes. Chapter 4 includes discussion and future work on these two topics.
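The counting-by-SIS idea can be illustrated on a simpler object, binary two-way tables with fixed row and column sums, with a uniform proposal over feasible columns instead of the conditional Poisson proposal of Chapter 2; the margins below are made up for the example, and dead-end samples simply contribute weight zero.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)

# Sequential importance sampling estimate of the number of 0-1 tables with
# fixed row and column sums -- a two-way toy version of the three-way problem.
row_sums = [3, 2, 2, 1]
col_sums = [2, 2, 2, 1, 1]

def one_sis_sample():
    remaining = list(row_sums)
    weight = 1
    for c in col_sums:
        # All ways to place c ones in rows that still have capacity.
        feasible = [rows for rows in combinations(range(len(remaining)), c)
                    if all(remaining[r] > 0 for r in rows)]
        if not feasible:                       # dead end: contributes weight zero
            return 0
        weight *= len(feasible)                # importance weight of the uniform choice
        for r in feasible[rng.integers(len(feasible))]:
            remaining[r] -= 1
    return weight

n = 20_000
estimates = [one_sis_sample() for _ in range(n)]
print("estimated number of tables:", np.mean(estimates))
```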
30

Non-Gaussian State Space Models for Count Data: The Durbin and Koopman Methodology

MAYTE SUAREZ FARINAS 15 February 2006 (has links)
The aim of this thesis is to present and investigate the methodology of Durbin and Koopman (DK) used to estimate non-Gaussian state space time series models, within the context of structural models. DK's approach is based on evaluating the likelihood using efficient Monte Carlo simulation, by means of importance sampling and variance reduction techniques such as antithetic variables and control variables. It also integrates known techniques from the Gaussian case, such as the Kalman smoother and the simulation smoothing algorithm. Once the model hyperparameters are estimated, the state, which encapsulates the model's components, is estimated by evaluating its posterior mode. Approximations are then proposed to evaluate the mean and variance of the predictive distribution. Applications are considered using the Poisson model.
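As a very rough illustration of simulated-likelihood evaluation for a Poisson state space model, the sketch below uses the state-transition density itself as the importance density, with antithetic paths. This is far cruder than the Gaussian approximating model that Durbin and Koopman construct, and the model, sample sizes and parameter grid are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(9)

# Crude simulated-likelihood sketch for a Poisson local-level model:
#   alpha_t = alpha_{t-1} + eta_t,  eta_t ~ N(0, q),   y_t ~ Poisson(exp(alpha_t)).

def simulate_data(T=30, q=0.05, seed=123):
    r = np.random.default_rng(seed)
    alpha = np.cumsum(np.sqrt(q) * r.standard_normal(T))
    return r.poisson(np.exp(alpha))

def log_likelihood_is(y, q, n_paths=4000):
    T = len(y)
    eps = np.sqrt(q) * rng.standard_normal((n_paths // 2, T))
    eps = np.vstack([eps, -eps])                     # antithetic variates for variance reduction
    alpha = np.cumsum(eps, axis=1)                   # state paths drawn from the transition density
    # Poisson log-likelihood of y given each path, dropping the log(y_t!) terms.
    log_w = (y * alpha - np.exp(alpha)).sum(axis=1)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))    # log of the IS likelihood estimate

y = simulate_data()
for q in (0.01, 0.05, 0.2):
    print(f"q = {q:4.2f}  approx. log-likelihood (up to a constant): {log_likelihood_is(y, q):8.2f}")
```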
