31

Polytopes Arising from Binary Multi-way Contingency Tables and Characteristic Imsets for Bayesian Networks

Xi, Jing 01 January 2013 (has links)
The main theme of this dissertation is the study of polytopes arising from binary multi-way contingency tables and characteristic imsets for Bayesian networks. Firstly, we study three-way tables whose entries are independent Bernoulli random variables with canonical parameters under no-three-way-interaction generalized linear models. Here, we use the sequential importance sampling (SIS) method with the conditional Poisson (CP) distribution to sample binary three-way tables with the sufficient statistics, i.e., all two-way marginal sums, fixed. Compared with the Markov chain Monte Carlo (MCMC) approach with a Markov basis (MB), the SIS procedure has the advantage that it does not require expensive or prohibitive pre-computations. Note that this problem can also be considered as estimating the number of lattice points inside the polytope defined by the zero-one and two-way marginal constraints. The theorems in Chapter 2 give the parameters for the CP distribution on each column when it is sampled. In this chapter, we also present the algorithms, the simulation results, and the results for Samson's monks data.

Bayesian networks, a part of the family of probabilistic graphical models, are widely applied in many areas, and much work has been done on model selection for Bayesian networks. The second part of this dissertation investigates the problem of finding the optimal graph by using characteristic imsets, where characteristic imsets are defined as 0-1 vector representations of Bayesian networks that are unique up to Markov equivalence. Characteristic imset polytopes are defined as the convex hulls of all characteristic imsets we consider. It was proven that the problem of finding the optimal Bayesian network for a specific dataset can be converted to a linear programming problem over the characteristic imset polytope [51]. In Chapter 3, we first consider characteristic imset polytopes for all diagnosis models and show that these polytopes are direct products of simplices. Then we give the combinatorial description of all edges and all facets of these polytopes. At the end of this chapter, we generalize these results to the characteristic imset polytopes for all Bayesian networks with a fixed underlying ordering of nodes. Chapter 4 includes discussion and future work on these two topics.
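A minimal sketch of the conditional Poisson building block (illustrative only: Chapter 2 derives the CP parameters that make the draw efficient, which are not reproduced here; the Bernoulli parameters and column sum below are hypothetical). A binary column with a fixed sum can be drawn by conditioning independent Bernoulli draws on their total, here via simple rejection:

```python
import numpy as np

def sample_conditional_poisson(p, k, rng, max_tries=100_000):
    """Draw a 0-1 vector with independent Bernoulli(p_i) entries,
    conditioned on summing to k (a conditional Poisson draw),
    implemented by plain rejection for clarity."""
    p = np.asarray(p, dtype=float)
    for _ in range(max_tries):
        x = (rng.random(p.size) < p).astype(int)
        if x.sum() == k:
            return x  # in SIS, each accepted column also carries an importance weight
    raise RuntimeError("rejection failed; use a recursive CP sampler instead")

rng = np.random.default_rng(0)
# hypothetical column of 6 cells with its marginal sum fixed at 2
col = sample_conditional_poisson([0.3, 0.5, 0.2, 0.7, 0.4, 0.1], k=2, rng=rng)
print(col, col.sum())
```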
32

Non-Gaussian State Space Models for Count Data: The Durbin and Koopman Methodology

Suarez Farinas, Mayte 15 February 2006 (has links)
The aim of this thesis is to present and investigate the Durbin and Koopman (DK) methodology for estimating non-Gaussian state space time series models, within the context of structural models. The DK approach is based on evaluating the likelihood by efficient Monte Carlo simulation, using importance sampling together with variance-reduction techniques such as antithetic variables and control variables. It also integrates well-known techniques from the Gaussian case, such as the Kalman filter smoother and the simulation smoother algorithm. Once the model hyperparameters are estimated, the state, which encapsulates the model's components, is estimated by evaluating its posterior mode. We then propose approximations for evaluating the mean and variance of the predictive distribution. Applications using the Poisson model are considered.
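As a small sketch of the antithetic-variables device named above (generic Monte Carlo, not the DK simulation smoother itself): pairing each standard normal draw z with -z and averaging the pair lowers the variance of the estimate whenever the integrand is monotone.

```python
import numpy as np

def antithetic_mc(h, n_pairs, rng):
    """Estimate E[h(Z)], Z ~ N(0, 1), from antithetic pairs (z, -z)."""
    z = rng.standard_normal(n_pairs)
    vals = 0.5 * (h(z) + h(-z))  # pair averages are negatively correlated for monotone h
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_pairs)

rng = np.random.default_rng(1)
est, se = antithetic_mc(np.exp, 10_000, rng)  # true value: exp(0.5) ~ 1.6487
print(est, se)
```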
33

Sampling from Linear Multivariate Densities

Hörmann, Wolfgang, Leydold, Josef January 2009 (has links) (PDF)
It is well known that the generation of random vectors with non-independent components is difficult. Nevertheless, we propose a new and very simple generation algorithm for multivariate linear densities over point-symmetric domains. Among other applications, it can be used to design a simple decomposition-rejection algorithm for multivariate concave distributions. / Series: Research Report Series / Department of Statistics and Mathematics
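One plausible reading of such a symmetry-based sampler, assuming a density f(x) proportional to a + b'x that is positive on a point-symmetric domain D (a sketch under those assumptions, not necessarily the authors' exact algorithm): each pair {x, -x} carries constant total density 2a, so a uniform draw from D is kept or reflected with the appropriate probability.

```python
import numpy as np

def sample_linear_density(a, b, sample_uniform, n, rng):
    """Draw from f(x) ~ a + b.x on a point-symmetric domain D
    (x in D implies -x in D, and a > |b.x| on D so f > 0):
    keep a uniform point u with probability (a + b.u) / (2a),
    otherwise return its reflection -u."""
    b = np.asarray(b, dtype=float)
    out = []
    for _ in range(n):
        u = sample_uniform(rng)
        if rng.random() < (a + b @ u) / (2.0 * a):
            out.append(u)
        else:
            out.append(-u)
    return np.array(out)

def unif_ball(rng):
    """Uniform point in the unit disk (a point-symmetric domain)."""
    while True:
        u = rng.uniform(-1, 1, size=2)
        if u @ u <= 1.0:
            return u

rng = np.random.default_rng(2)
xs = sample_linear_density(1.0, [0.8, 0.0], unif_ball, 5_000, rng)
print(xs[:, 0].mean())  # about 0.2, reflecting the tilt toward x_0 > 0
```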
34

Monte Carlo Methods for Stochastic Differential Equations and their Applications

Leach, Andrew Bradford January 2017 (has links)
We introduce computationally efficient Monte Carlo methods for studying the statistics of stochastic differential equations in two distinct settings. In the first, we derive importance sampling methods for data assimilation when the noise in the model and observations is small. The methods are formulated in discrete time, where the "posterior" distribution we want to sample from can be analyzed in an accessible small noise expansion. We show that a "symmetrization" procedure akin to antithetic coupling can improve the order of accuracy of the sampling methods, which is illustrated with numerical examples. In the second setting, we develop "stochastic continuation" methods to estimate level sets for statistics of stochastic differential equations with respect to their parameters. We adapt Keller's pseudo-arclength continuation method to this setting using stochastic approximation and generalized least squares regression. Furthermore, we show that the methods can be improved through the use of coupling methods to reduce the variance of the derivative estimates involved.
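A minimal sketch of antithetic coupling for an SDE functional (the dissertation's "symmetrization" refines this idea and is not reproduced here): every Euler-Maruyama path is paired with the path driven by the sign-flipped Brownian increments.

```python
import numpy as np

def euler_antithetic(mu, sigma, x0, T, n_steps, n_paths, f, rng):
    """Estimate E[f(X_T)] for dX = mu(X) dt + sigma(X) dW by
    Euler-Maruyama, averaging each path with its antithetic twin."""
    dt = T / n_steps
    x_plus = np.full(n_paths, x0, dtype=float)
    x_minus = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x_plus += mu(x_plus) * dt + sigma(x_plus) * dw
        x_minus += mu(x_minus) * dt - sigma(x_minus) * dw  # flipped increment
    vals = 0.5 * (f(x_plus) + f(x_minus))
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_paths)

# toy Ornstein-Uhlenbeck example: dX = -X dt + 0.5 dW, estimate E[X_T^2]
rng = np.random.default_rng(3)
est, se = euler_antithetic(lambda x: -x, lambda x: 0.5 * np.ones_like(x),
                           x0=1.0, T=1.0, n_steps=200, n_paths=20_000,
                           f=lambda x: x**2, rng=rng)
print(est, se)
```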
35

Better Confidence Intervals for Importance Sampling

Sak, Halis, Hörmann, Wolfgang, Leydold, Josef January 2010 (has links) (PDF)
It is well known that for highly skewed distributions the standard method of using the t statistic for the confidence interval of the mean does not give robust results. This is an important problem for importance sampling (IS), as its final distribution is often skewed due to a heavy-tailed weight distribution. In this paper, we first explain Hall's transformation and its variants for correcting the confidence interval of the mean, and then evaluate the performance of these methods on two numerical examples from finance that have closed-form solutions. Finally, we assess the performance of these methods on credit risk examples. Our numerical results suggest that Hall's transformation or one of its variants can be safely used to correct the two-sided confidence intervals of financial simulations. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
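A hedged sketch of the skewness-correction family studied here: the simple Johnson (1978) modification shifts the centre of the t interval by mu3_hat / (6 s^2 n); Hall's transformation refines the same idea with a monotone cubic in the t statistic, which is not reproduced below.

```python
import numpy as np
from scipy import stats

def johnson_skew_corrected_ci(x, level=0.95):
    """Confidence interval for the mean with Johnson's (1978)
    skewness correction: shift the centre by mu3_hat / (6 s^2 n)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    m, s = x.mean(), x.std(ddof=1)
    mu3 = ((x - m) ** 3).mean()        # third central moment estimate
    shift = mu3 / (6.0 * s**2 * n)
    half = stats.t.ppf(0.5 + level / 2, df=n - 1) * s / np.sqrt(n)
    return m + shift - half, m + shift + half

rng = np.random.default_rng(4)
w = rng.lognormal(0.0, 1.5, size=200)  # heavily skewed, like IS weights
print(johnson_skew_corrected_ci(w))
```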
36

Non-parametric inference of risk measures

Ahn, Jae Youn 01 May 2012 (has links)
Responding to the changes in the insurance environment of the past decade, insurance regulators globally have been revamping valuation and capital regulations. This thesis is concerned with the design and analysis of the statistical inference procedures used to implement these new and upcoming insurance regulations, and with their study in a more general setting, in order to lend further insight into their performance in practical situations.

The quantitative measure of risk used in these new and upcoming regulations is the risk measure known as the Tail Value-at-Risk (T-VaR). In implementing these regulations, insurance companies often have to estimate the T-VaR of product portfolios from the output of a simulation of their cash flows. The distributions of the underlying economic variables are either estimated or prescribed by regulation. In this situation, the computational complexity of estimating the T-VaR arises from the complexity of determining the portfolio cash flows for a given realization of the economic variables. A technique that has proved promising in such settings is importance sampling. While the asymptotic behavior of the natural non-parametric estimator of the T-VaR under importance sampling has been conjectured, the literature has lacked a rigorous result. The main goal of the first part of the thesis is to give a precise weak convergence result describing the asymptotic behavior of this estimator under importance sampling. Our method also establishes such a result for the natural non-parametric estimator of the Value-at-Risk, another popular risk measure, under weaker assumptions than those used in the literature. We also report on a simulation study conducted to examine the quality of these asymptotic approximations in small samples.

The Haezendonck-Goovaerts class of risk measures corresponds to a premium principle that is a multiplicative analog of the zero utility principle, and is thus of significant academic interest. From a practical point of view, our interest in this class of risk measures arose primarily from the fact that the T-VaR is, in a sense, a minimal member of the class. Hence, a study of the natural non-parametric estimator for these risk measures will lend further insight into statistical inference for the T-VaR. Analysis of the asymptotic behavior of the generalized estimator has proved elusive, largely because, unlike the T-VaR, it lacks a closed-form expression. Our main goal in the second part of this thesis is to study the asymptotic behavior of this estimator. In order to conduct a simulation study, we needed an efficient algorithm to compute the Haezendonck-Goovaerts risk measures with precise error bounds; the lack of such an algorithm has clearly been noticed in the literature and has impeded the quality of simulation results. In this part we also design and analyze an algorithm for computing these risk measures, and in the process derive some fundamental bounds on the solutions to the optimization problem underlying them. We have also implemented our algorithm in the R software environment and included its source code in the Appendix.
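The natural non-parametric T-VaR estimator under importance sampling can be sketched as a weighted quantile followed by a weighted tail average (an illustrative reconstruction, not the thesis's code; the exponential proposal and target below are hypothetical):

```python
import numpy as np

def tvar_is(losses, weights, alpha=0.99):
    """VaR and T-VaR estimates from importance-sampling output;
    `weights` are likelihood ratios of target to proposal density."""
    order = np.argsort(losses)
    x, w = np.asarray(losses, float)[order], np.asarray(weights, float)[order]
    cdf = np.cumsum(w) / w.sum()                # weighted empirical cdf
    var = x[np.searchsorted(cdf, alpha)]        # weighted alpha-quantile
    tail = np.clip(cdf - alpha, 0.0, None)      # cdf mass above alpha
    mass = np.diff(np.concatenate([[0.0], tail]))
    tvar = (mass * x).sum() / (1.0 - alpha)     # average loss beyond VaR
    return var, tvar

rng = np.random.default_rng(5)
x = rng.exponential(2.0, size=100_000)          # proposal oversamples the tail
w = np.exp(-x) / (np.exp(-x / 2.0) / 2.0)       # Exp(1) target over Exp(1/2) proposal
print(tvar_is(x, w))  # true values: VaR ~ 4.61, T-VaR ~ 5.61
```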
37

Asymptotic approaches in financial risk management

Genin, Adrien 21 September 2018 (has links)
This thesis focuses on three problems from the area of financial risk management, using various asymptotic approaches. The first part presents an importance sampling algorithm for Monte Carlo pricing of Asian options in exponential Lévy models. The optimal importance sampling measure is computed using techniques from the theory of large deviations. The second part uses the Laplace method to study the tail behavior of the sum of n dependent positive random variables following a log-normal mixture distribution, with applications to portfolio risk management. Finally, the last part employs the notion of multivariate regular variation to analyze the tail behavior of a random vector with heavy-tailed components whose dependence structure is modeled by a Gaussian copula. As an application, we consider the tail behavior of a portfolio of options in the Black-Scholes model.
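A minimal sketch of the drift-shift flavour of such an importance sampling scheme, restricted to the Black-Scholes special case rather than a general exponential Lévy model: tilt the driving Brownian motion toward the rare payoff region and reweight by the Girsanov likelihood ratio. The shift theta below is ad hoc, whereas the thesis derives the optimal measure from large deviations theory.

```python
import numpy as np

def asian_call_is(s0, K, r, sigma, T, theta, n_steps, n_paths, rng):
    """Importance-sampling price of an arithmetic Asian call under
    Black-Scholes: simulate W with extra drift theta and weight by
    dP/dQ_theta = exp(-theta * W_T + theta^2 * T / 2)."""
    dt = T / n_steps
    dw = theta * dt + np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    w_path = np.cumsum(dw, axis=1)
    t = dt * np.arange(1, n_steps + 1)
    s = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * w_path)
    payoff = np.maximum(s.mean(axis=1) - K, 0.0)
    lr = np.exp(-theta * w_path[:, -1] + 0.5 * theta**2 * T)
    vals = np.exp(-r * T) * payoff * lr
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_paths)

rng = np.random.default_rng(6)
# deep out-of-the-money strike: plain Monte Carlo sees almost no positive payoffs
print(asian_call_is(100, 160, 0.02, 0.2, 1.0, theta=2.0,
                    n_steps=64, n_paths=50_000, rng=rng))
```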
38

Implementation and Visualization of Importance Sampling in Deep Learning

Knutsson, Alex, Unnebäck, Jakob January 2023 (has links)
Artificial neural networks are networks made up of thousands, and sometimes millions or more, of nodes, also referred to as neurons. Due to the sheer scale of a network, training can become very compute-intensive: every sample must be evaluated through the network during training, and the gradients must be updated based on each sample's loss. Like humans, neural networks find some samples more difficult to interpret correctly than others. By feeding the network more difficult samples while avoiding samples it has already mastered, the training process can be executed more efficiently. In the medical field, neural networks are used, among other applications, to identify malignant cancer in tissue samples. In such a use case, increasing the performance of a model by 1-2 percentage points could have a huge impact on saving lives through the correct discovery of malignant cancer. In this thesis project, different importance sampling methods are evaluated and tested on multiple networks and datasets. The results show how importance sampling can be utilized to reach a higher accuracy faster and save time. Beyond comparing importance sampling methods, different thresholds and criteria for deciding when to start importance sampling are also evaluated. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
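A common way to realize such loss-based selection (a generic sketch, not the thesis's implementation) is to draw minibatch indices with probability proportional to each sample's last-seen loss and reweight by 1 / (N * p_i) so the mean-gradient estimate stays unbiased:

```python
import numpy as np

def importance_batch(losses, batch_size, rng, floor=1e-3):
    """Sample a minibatch proportionally to cached per-sample losses
    (floored so easy samples are never fully starved) and return the
    unbiasedness weights 1 / (N * p_i)."""
    p = np.maximum(losses, floor)
    p = p / p.sum()
    idx = rng.choice(losses.size, size=batch_size, replace=True, p=p)
    weights = 1.0 / (losses.size * p[idx])
    return idx, weights

rng = np.random.default_rng(7)
loss_cache = rng.gamma(2.0, 1.0, size=1_000)  # stand-in per-sample losses
idx, w = importance_batch(loss_cache, batch_size=32, rng=rng)
# in a training step: grad = (w[:, None] * per_sample_grads[idx]).mean(axis=0)
```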
39

Sequential Imputation and Linkage Analysis

Skrivanek, Zachary 20 December 2002 (has links)
No description available.
40

Estimation of Probability of Failure for Damage-Tolerant Aerospace Structures

Halbert, Keith January 2014 (has links)
The majority of aircraft structures are designed to be damage-tolerant, such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired; it is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair at future inspections. Without these estimates, maintenance costs cannot be accurately predicted. Moreover, estimation of failure probabilities is now a regulatory requirement for some aircraft.

The set of methods concerned with the probabilistic formulation of this problem is collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly, through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools is lacking in several ways: the tools may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates that incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available.

This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight by flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios; it is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model-updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems.
Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft. / Statistics
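A generic sequential importance sampling filter for a hidden Markov model can be sketched in a few lines (illustrative only; the released R package and its PDTA-specific crack-growth state model are not reproduced): particles are pushed through the state transition and their weights are multiplied by each observation's likelihood.

```python
import numpy as np
from scipy.stats import norm

def sis_filter(transition, likelihood, x0_sampler, ys, n_particles, rng):
    """Sequential importance sampling for a hidden Markov model."""
    x = x0_sampler(n_particles, rng)
    logw = np.zeros(n_particles)
    for y in ys:
        x = transition(x, rng)                     # propagate particles
        logw += np.log(likelihood(y, x) + 1e-300)  # weight by observation likelihood
        # (a full particle filter would resample here when weights degenerate)
    w = np.exp(logw - logw.max())
    return x, w / w.sum()

# toy example: hidden AR(1) state observed with noise
rng = np.random.default_rng(8)
true_x, ys = 0.0, []
for _ in range(50):
    true_x = 0.9 * true_x + rng.normal(0, 0.5)
    ys.append(true_x + rng.normal(0, 1.0))

x, w = sis_filter(
    transition=lambda x, rng: 0.9 * x + rng.normal(0, 0.5, x.size),
    likelihood=lambda y, x: norm.pdf(y, loc=x, scale=1.0),
    x0_sampler=lambda n, rng: rng.normal(0, 1, n),
    ys=ys, n_particles=2_000, rng=rng)
print((w * x).sum(), true_x)  # filtered mean vs. the true final state
```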
