31

Polytopes Arising from Binary Multi-way Contingency Tables and Characteristic Imsets for Bayesian Networks

Xi, Jing 01 January 2013 (has links)
The main theme of this dissertation is the study of polytopes arising from binary multi-way contingency tables and characteristic imsets for Bayesian networks. Firstly, we study three-way tables whose entries are independent Bernoulli random variables with canonical parameters under no-three-way-interaction generalized linear models. Here, we use the sequential importance sampling (SIS) method with the conditional Poisson (CP) distribution to sample binary three-way tables with the sufficient statistics, i.e., all two-way marginal sums, fixed. Compared with the Markov chain Monte Carlo (MCMC) approach with a Markov basis (MB), the SIS procedure has the advantage that it does not require expensive or prohibitive pre-computations. Note that this problem can also be considered as estimating the number of lattice points inside the polytope defined by the zero-one and two-way marginal constraints. The theorems in Chapter 2 give the parameters for the CP distribution on each column when it is sampled. In this chapter, we also present the algorithms, the simulation results, and the results for Sampson's monks data.

Bayesian networks, part of the family of probabilistic graphical models, are widely applied in many areas, and much work has been done on model selection for Bayesian networks. The second part of this dissertation investigates the problem of finding the optimal graph by using characteristic imsets, which are defined as 0-1 vector representations of Bayesian networks that are unique up to Markov equivalence. Characteristic imset polytopes are defined as the convex hulls of all characteristic imsets we consider. It was proven that the problem of finding the optimal Bayesian network for a specific dataset can be converted to a linear programming problem over the characteristic imset polytope [51]. In Chapter 3, we first consider characteristic imset polytopes for all diagnosis models and show that these polytopes are direct products of simplices. Then we give a combinatorial description of all edges and all facets of these polytopes. At the end of this chapter, we generalize these results to the characteristic imset polytopes for all Bayesian networks with a fixed underlying ordering of nodes. Chapter 4 includes discussion and future work on these two topics.
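To make the sequential sampling concrete, here is a minimal sketch that counts binary two-way tables with fixed margins by SIS. Each column's set of ones is drawn uniformly from the currently feasible row sets, rather than from the conditional Poisson proposal the dissertation derives, and all names are illustrative:

```python
import itertools
import random

def sis_count_binary_tables(row_sums, col_sums, n_samples=10_000):
    """Estimate the number of 0-1 tables with the given margins by
    sequential importance sampling: fill one column at a time, pick the
    set of rows receiving a 1 uniformly among currently feasible sets,
    and weight each completed table by 1 / (proposal probability)."""
    m = len(row_sums)
    total = 0.0
    for _ in range(n_samples):
        remaining = list(row_sums)   # 1s still owed to each row
        weight = 1.0
        for c in col_sums:
            feasible = [i for i in range(m) if remaining[i] > 0]
            choices = list(itertools.combinations(feasible, c))
            # (the dissertation's CP proposal would weight these choices
            #  non-uniformly and prune infeasible continuations)
            if not choices:
                weight = 0.0         # dead end: contributes nothing
                break
            rows = random.choice(choices)
            weight *= len(choices)   # multiply by 1 / q(this column)
            for i in rows:
                remaining[i] -= 1
        if weight > 0 and all(r == 0 for r in remaining):
            total += weight
    return total / n_samples

# toy check: 2x2 binary tables with margins (1,1), (1,1) -- exactly 2 exist
print(sis_count_binary_tables([1, 1], [1, 1]))
```

Dead-end paths contribute weight zero, so the estimate stays unbiased; the point of the CP proposal in the dissertation is to make such dead ends rare.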
32

Non-Gaussian State Space Models for Count Data: The Durbin and Koopman Methodology

Suarez Farinas, Mayte 15 February 2006 (has links)
The aim of this thesis is to present and investigate the methodology of Durbin and Koopman (DK) for estimating non-Gaussian state space time series models, within the context of structural models. DK's approach is based on evaluating the likelihood via efficient Monte Carlo simulation, by means of importance sampling and variance-reduction techniques such as antithetic variables and control variates. It also integrates techniques known from the Gaussian case, such as the Kalman smoother and the simulation smoothing algorithm. Once the model's hyperparameters are estimated, the state, which contains the model's components, is estimated by evaluating its posterior mode. We then propose approximations for evaluating the mean and variance of the predictive distribution. Applications using the Poisson model are considered.
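As a rough illustration of the likelihood evaluation, the sketch below collapses the state space model to a single latent Gaussian level with Poisson counts, estimating the likelihood integral by importance sampling from a mode-and-curvature Gaussian fit with antithetic draws. The actual DK method builds its proposal with the Kalman smoother and the simulation smoother; this toy, including all names, is our own simplification:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def is_loglik_poisson(y, n_draws=5000, seed=0):
    """Importance-sampling estimate of the log-likelihood (up to the
    constant sum(log y_t!)) in a toy model: theta ~ N(0, 1) and
    y_t ~ Poisson(exp(theta)). The Gaussian proposal is fitted at the
    posterior mode (Laplace approximation) and draws come in
    antithetic pairs, as in the DK variance-reduction recipe."""
    y = np.asarray(y, dtype=float)
    T = len(y)

    def neg_log_post(theta):
        return -(np.sum(y) * theta - T * np.exp(theta) - 0.5 * theta**2)

    mode = minimize_scalar(neg_log_post).x
    sd = 1.0 / np.sqrt(T * np.exp(mode) + 1.0)   # 1/sqrt(curvature at mode)

    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_draws // 2)
    thetas = np.concatenate([mode + sd * z, mode - sd * z])  # antithetics

    log_p = (np.sum(y) * thetas - T * np.exp(thetas)
             - 0.5 * thetas**2 - 0.5 * np.log(2 * np.pi))
    log_q = (-0.5 * ((thetas - mode) / sd) ** 2
             - np.log(sd) - 0.5 * np.log(2 * np.pi))
    log_w = log_p - log_q                        # importance weights
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # stable log-mean-exp

print(is_loglik_poisson([3, 1, 4, 2, 2]))
```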
33

Sampling from Linear Multivariate Densities

Hörmann, Wolfgang, Leydold, Josef January 2009 (has links) (PDF)
It is well known that the generation of random vectors with non-independent components is difficult. Nevertheless, we propose a new and very simple generation algorithm for multivariate linear densities over point-symmetric domains. Among other applications, it can be used to design a simple decomposition-rejection algorithm for multivariate concave distributions. / Series: Research Report Series / Department of Statistics and Mathematics
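A plausible reading of why such an algorithm can be "very simple" is the mirror trick sketched below: a linear density f on a point-symmetric domain satisfies f(x) + f(-x) = 2f(0), so a uniformly drawn point can be kept or reflected with the right probability and no draw is ever wasted. This is our reconstruction under that assumption, not necessarily the paper's exact algorithm:

```python
import numpy as np

def sample_linear_density(a, c, dim, rng):
    """One draw from a density proportional to f(x) = c + a.x on the
    point-symmetric cube [-1, 1]^dim (f must be nonnegative there).
    Since f(x) + f(-x) = 2c is constant, a uniform point X is kept
    with probability f(X) / (2c) and reflected to -X otherwise; the
    result has density proportional to f, with no rejections at all."""
    x = rng.uniform(-1.0, 1.0, size=dim)
    if rng.uniform() < (c + a @ x) / (2.0 * c):
        return x
    return -x

rng = np.random.default_rng(1)
a = np.array([0.3, -0.2, 0.4])
draws = np.array([sample_linear_density(a, 1.0, 3, rng) for _ in range(20_000)])
print(draws.mean(axis=0))   # should be close to a / 3 on this cube
```

The acceptance probability plus the reflection together reproduce the target density exactly, so every uniform draw is used.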
34

Monte Carlo Methods for Stochastic Differential Equations and their Applications

Leach, Andrew Bradford January 2017 (has links)
We introduce computationally efficient Monte Carlo methods for studying the statistics of stochastic differential equations in two distinct settings. In the first, we derive importance sampling methods for data assimilation when the noise in the model and observations is small. The methods are formulated in discrete time, where the "posterior" distribution we want to sample from can be analyzed in an accessible small noise expansion. We show that a "symmetrization" procedure akin to antithetic coupling can improve the order of accuracy of the sampling methods, which is illustrated with numerical examples. In the second setting, we develop "stochastic continuation" methods to estimate level sets for statistics of stochastic differential equations with respect to their parameters. We adapt Keller's pseudo-arclength continuation method to this setting using stochastic approximation and generalized least squares regression. Furthermore, we show that the methods can be improved through the use of coupling methods to reduce the variance of the derivative estimates involved.
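The continuation step rests on noisy root-finding; a minimal sketch of the stochastic-approximation ingredient (one Robbins-Monro solve, not the full pseudo-arclength scheme, and with a toy model of our own choosing) might look like this:

```python
import numpy as np

def robbins_monro_level(simulate, target, p0, n_iter=2000, gain=0.5, seed=0):
    """Find the parameter p at which the noisy statistic E[simulate(p)]
    crosses `target`, by Robbins-Monro stochastic approximation; a toy
    stand-in for one step of the 'stochastic continuation' idea, which
    couples such solves along a pseudo-arclength path."""
    rng = np.random.default_rng(seed)
    p = p0
    for k in range(1, n_iter + 1):
        stat = simulate(p, rng)
        p -= (gain / k) * (stat - target)   # step toward the level set
    return p

# toy model: X ~ N(p, 1); solve E[X^2] = 5, whose positive root is p = 2
p_hat = robbins_monro_level(lambda p, rng: (p + rng.normal())**2, 5.0, p0=1.0)
print(p_hat)   # should be close to 2
```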
35

Better Confidence Intervals for Importance Sampling

Sak, Halis, Hörmann, Wolfgang, Leydold, Josef January 2010 (has links) (PDF)
It is well known that for highly skewed distributions the standard method of using the t statistic for the confidence interval of the mean does not give robust results. This is an important problem for importance sampling (IS), as its final distribution is often skewed due to a heavy-tailed weight distribution. In this paper, we first explain Hall's transformation and its variants for correcting the confidence interval of the mean, and then evaluate the performance of these methods on two numerical examples from finance which have closed-form solutions. Finally, we assess the performance of these methods on credit risk examples. Our numerical results suggest that Hall's transformation or one of its variants can be safely used in correcting the two-sided confidence intervals of financial simulations. (author's abstract) / Series: Research Report Series / Department of Statistics and Mathematics
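As a sketch of the kind of correction being evaluated, the snippet below implements our understanding of Hall's cubic transformation, T = t + a + 2at^2 + (4/3)a^2 t^3 with a = skewness/(6 sqrt(n)); the degenerate zero-skewness case a = 0 is left unguarded and the function names are ours:

```python
import numpy as np
from scipy import stats

def hall_ci(x, level=0.95):
    """Two-sided CI for the mean of a skewed sample via Hall's cubic
    transformation T = t + a + 2*a*t**2 + (4/3)*a**2*t**3 with
    a = skewness / (6*sqrt(n)); T is monotone in t and inverts in
    closed form, so the interval endpoints are explicit."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean, s = x.mean(), x.std(ddof=1)
    a = stats.skew(x) / (6.0 * np.sqrt(n))
    z = stats.norm.ppf(0.5 + level / 2.0)

    def f_inv(u):
        # from (1 + 2*a*t)**3 = 1 + 6*a*(u - a)
        return (np.cbrt(1.0 + 6.0 * a * (u - a)) - 1.0) / (2.0 * a)

    half = s / np.sqrt(n)
    return mean - half * f_inv(z), mean - half * f_inv(-z)

rng = np.random.default_rng(2)
sample = rng.lognormal(size=200)    # heavily skewed, like IS output
print(hall_ci(sample))
```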
36

Non-parametric inference of risk measures

Ahn, Jae Youn 01 May 2012 (has links)
Responding to the changes in the insurance environment of the past decade, insurance regulators globally have been revamping the valuation and capital regulations. This thesis is concerned with the design and analysis of statistical inference procedures that are used to implement these new and upcoming insurance regulations, and with their study in a more general setting, to lend further insight into their performance in practical situations. The quantitative measure of risk used in these new and upcoming regulations is the risk measure known as the Tail Value-at-Risk (T-VaR). In implementing these regulations, insurance companies often have to estimate the T-VaR of product portfolios from the output of a simulation of their cash flows. The distributions for the underlying economic variables are either estimated or prescribed by regulations. In this situation the computational complexity of estimating the T-VaR arises from the complexity of determining the portfolio cash flows for a given realization of economic variables. A technique that has proved promising in such settings is importance sampling. While the asymptotic behavior of the natural non-parametric estimator of the T-VaR under importance sampling has been conjectured, the literature has lacked an honest result. The main goal of the first part of the thesis is to give a precise weak convergence result describing the asymptotic behavior of this estimator under importance sampling. Our method also establishes such a result for the natural non-parametric estimator of the Value-at-Risk, another popular risk measure, under weaker assumptions than those used in the literature. We also report on a simulation study conducted to examine the quality of these asymptotic approximations in small samples.

The Haezendonck-Goovaerts class of risk measures corresponds to a premium principle that is a multiplicative analog of the zero utility principle, and is thus of significant academic interest. From a practical point of view, our interest in this class of risk measures arose primarily from the fact that the T-VaR is, in a sense, a minimal member of the class. Hence, a study of the natural non-parametric estimator for these risk measures will lend further insight into the statistical inference for the T-VaR. Analysis of the asymptotic behavior of the generalized estimator has proved elusive, largely because, unlike the T-VaR, it lacks a closed-form expression. Our main goal in the second part of this thesis is to study the asymptotic behavior of this estimator. In order to conduct a simulation study, we needed an efficient algorithm to compute the Haezendonck-Goovaerts risk measure with precise error bounds. The lack of such an algorithm has clearly been noticed in the literature, and has impeded the quality of simulation results. In this part we also design and analyze an algorithm for computing these risk measures. In the process of doing so, we also derive some fundamental bounds on the solutions to the optimization problem underlying these risk measures. We have also implemented our algorithm in the R software environment, and included its source code in the Appendix.
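The natural non-parametric estimator of the T-VaR under importance sampling can be sketched as a weighted quantile followed by a weighted tail average; the toy below, an exponential loss sampled from a heavier-tailed proposal, is our illustration rather than anything taken from the thesis:

```python
import numpy as np

def tvar_is(losses, weights, alpha=0.99):
    """Natural non-parametric T-VaR estimate from an importance-sampled
    simulation: sort the losses, find the weighted alpha-quantile (the
    VaR), then take the weight-averaged mean of the tail beyond it."""
    order = np.argsort(losses)
    x = np.asarray(losses, dtype=float)[order]
    w = np.asarray(weights, dtype=float)[order]
    cum = np.cumsum(w) / np.sum(w)
    k = np.searchsorted(cum, alpha)      # first index at/past the quantile
    return x[k], np.average(x[k:], weights=w[k:])

# toy check: true losses ~ Exp(1), sampled from the heavier-tailed Exp(0.5)
rng = np.random.default_rng(3)
theta = 0.5
x = rng.exponential(1.0 / theta, size=100_000)
w = np.exp(-x) / (theta * np.exp(-theta * x))   # likelihood ratio p/q
print(tvar_is(x, w))   # exact values: VaR = ln(100) ~ 4.61, T-VaR ~ 5.61
```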
37

Asymptotic approaches in financial risk management

Genin, Adrien 21 September 2018 (has links)
This thesis focuses on three problems from the area of financial risk management, using various asymptotic approaches. The first part presents an importance sampling algorithm for the Monte Carlo pricing of exotic options in exponential Lévy models. The optimal importance sampling measure is computed using techniques from the theory of large deviations. The second part uses the Laplace method to study the tail behavior of the sum of n dependent positive random variables following a log-normal mixture distribution, with applications to portfolio risk management. Finally, the last part employs the notion of multivariate regular variation to analyze the tail behavior of a random vector with heavy-tailed components, whose dependence structure is modeled by a Gaussian copula. As an application, we consider the tail behavior of a portfolio of options in the Black-Scholes model.
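The mechanism is easiest to see in the plain Black-Scholes model, where shifting the drift of the Gaussian driver makes rare in-the-money paths common and the payoff is reweighted by the likelihood ratio; the thesis instead derives a large-deviations-optimal measure in exponential Lévy models. A hedged sketch:

```python
import numpy as np

def call_price_is(S0, K, r, sigma, T, mu_shift, n=100_000, seed=4):
    """Monte Carlo price of a European call under Black-Scholes, with
    importance sampling by shifting the Gaussian driver: Z ~ N(mu_shift, 1)
    and the payoff is reweighted by the density ratio
    N(0,1)/N(mu_shift,1) = exp(-mu_shift*Z + mu_shift**2 / 2)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(mu_shift, 1.0, size=n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0.0)
    lr = np.exp(-mu_shift * z + 0.5 * mu_shift**2)   # likelihood ratio
    return np.exp(-r * T) * np.mean(payoff * lr)

# deep out-of-the-money call: plain MC sees few positive payoffs,
# the shifted measure sees many
print(call_price_is(100.0, 180.0, 0.02, 0.2, 1.0, mu_shift=3.0))
```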
38

Implementation and Visualization of Importance sampling in Deep learning

Knutsson, Alex, Unnebäck, Jakob January 2023 (has links)
Artificial neural networks are networks made up of thousands, and sometimes millions or more, of nodes, also referred to as neurons. Due to the sheer scale of a network, the task of training it can become very compute-intensive, because all samples must be evaluated through the network during training and the gradients updated based on each sample's loss. Like humans, neural networks find some samples more difficult to interpret correctly than others. By feeding the network more difficult samples while avoiding samples it has already mastered, the training process can be executed more efficiently. In the medical field, neural networks are used, among other things, to identify malignant cancer in tissue samples. In such a use case, increasing the performance of a model by 1-2 percentage points could have a huge impact on saving lives by correctly discovering malignant cancer. In this thesis project, different importance sampling methods are evaluated and tested on multiple networks and datasets. The results show how importance sampling can be utilized to reach a higher accuracy faster and save time. Beyond the importance sampling methods themselves, different thresholds and methods for deciding when to start importance sampling are also evaluated. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University
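A minimal version of loss-proportional sampling with bias-correcting weights, one of several schemes a study like this could test (the naming and the probability floor are our choices), could look like:

```python
import numpy as np

def importance_batch(last_losses, batch_size, rng, floor=1e-3):
    """Draw a training batch with probability proportional to each
    sample's most recent loss, returning the weights w_i = 1/(N*p_i)
    that keep the resulting gradient estimate unbiased."""
    p = np.maximum(np.asarray(last_losses, dtype=float), floor)
    p /= p.sum()
    idx = rng.choice(len(p), size=batch_size, replace=True, p=p)
    weights = 1.0 / (len(p) * p[idx])    # undo the sampling bias
    return idx, weights

rng = np.random.default_rng(5)
last_losses = rng.exponential(size=1000)   # stand-in for per-sample losses
idx, w = importance_batch(last_losses, 32, rng)
# mean(w_i * grad_i) over the batch estimates the full-dataset mean gradient
```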
39

Appearance-driven Material Design

Colbert, Mark 01 January 2008 (has links)
In the computer graphics production environment, artists often must tweak specific lighting and material parameters to match a mind's eye vision of the appearance of a 3D scene. However, the interaction between a material and a lighting environment is often too complex to cognitively predict without visualization. Therefore, artists operate in a design cycle, where they tweak the parameters, wait for a visualization, and repeat, seeking to obtain a desired look. We propose the use of appearance-driven material design. Here, artists directly design the appearance of reflected light for a specific view, surface point, and time. In this thesis, we discuss several methods for appearance-driven design with homogeneous materials, spatially-varying materials, and appearance-matching materials, where each uses a unique modeling and optimization paradigm. Moreover, we present a novel treatment of the illumination integral using sampling theory that can utilize the computational power of the graphics processing unit (GPU) to provide real-time visualization of the appearance of various materials illuminated by complex environment lighting. As a system, the modeling, optimization and rendering steps all operate on arbitrary geometry and in detailed lighting environments, while still providing instant feedback to the designer. Thus, our approach allows materials to play an active role in the process of set design and story-telling, a capability that was, until now, difficult to achieve due to the unavailability of interactive tools appropriate for artists.
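One standard instance of importance-sampling the illumination integral, far simpler than the GPU treatment described here but illustrating the same principle, is cosine-weighted hemisphere sampling for a diffuse surface, where the cosine factor cancels against the sampling density (a sketch under that simplification):

```python
import numpy as np

def diffuse_radiance(env, albedo, n=10_000, seed=6):
    """Reflected radiance of a Lambertian surface (normal = +z) under an
    environment light env(direction) -> radiance. Directions are drawn
    cosine-proportionally, so with pdf = cos/pi the estimator
    (albedo/pi) * mean(env * cos / pdf) collapses to albedo * mean(env)."""
    rng = np.random.default_rng(seed)
    u1, u2 = rng.uniform(size=n), rng.uniform(size=n)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2      # cosine-weighted hemisphere
    dirs = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)],
                    axis=1)
    return albedo * np.mean([env(d) for d in dirs])

# toy sky: brightest overhead, falling off toward the horizon
print(diffuse_radiance(lambda d: 2.0 * max(d[2], 0.0), albedo=0.5))
```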
40

Sequential Imputation and Linkage Analysis

Skrivanek, Zachary 20 December 2002 (has links)
No description available.
