61

Advanced Monte Carlo Methods with Applications in Finance

Joshua Chi Chun Chan Unknown Date (has links)
The main objective of this thesis is to develop novel Monte Carlo techniques with emphasis on various applications in finance and economics, particularly in the fields of risk management and asset returns modeling. New stochastic algorithms are developed for rare-event probability estimation, combinatorial optimization, parameter estimation and model selection. The contributions of this thesis are fourfold. Firstly, we study an NP-hard combinatorial optimization problem, the Winner Determination Problem (WDP) in combinatorial auctions, where buyers can bid on bundles of items rather than bidding on them sequentially. We present two randomized algorithms, namely, the cross-entropy (CE) method and the ADAptive Multilevel splitting (ADAM) algorithm, to solve two versions of the WDP. Although an efficient deterministic algorithm has been developed for one version of the WDP, it is not applicable to the other version considered. In addition, the proposed algorithms are straightforward and easy to program, and do not require specialized software. Secondly, two major applications of conditional Monte Carlo for estimating rare-event probabilities are presented: a complex bridge network reliability model and several generalizations of the widely popular normal copula model used in managing portfolio credit risk. We show how certain efficient conditional Monte Carlo estimators developed for simple settings can be extended to handle complex models involving hundreds or thousands of random variables. In particular, by utilizing an asymptotic description of how the rare event occurs, we derive algorithms that are not only easy to implement, but also compare favorably to existing estimators. Thirdly, we make a contribution on the methodological front by proposing an improvement of the standard CE method for estimation. The improved method is relevant, as recent research has shown that in some high-dimensional settings the likelihood ratio degeneracy problem becomes severe and the importance sampling estimator obtained from the CE algorithm becomes unreliable. In contrast, the performance of the improved variant does not deteriorate as the dimension of the problem increases. Its utility is demonstrated via a high-dimensional estimation problem in risk management, namely, a recently proposed t-copula model for credit risk. We show that even in this high-dimensional model that involves hundreds of random variables, the proposed method performs remarkably well, and compares favorably to existing importance sampling estimators. Furthermore, the improved CE algorithm is then applied to estimating the marginal likelihood, a quantity that is fundamental in Bayesian model comparison and Bayesian model averaging. We present two empirical examples to demonstrate the proposed approach. The first example involves women's labor market participation, and we compare three different binary response models in order to find the one that best fits the data. The second example utilizes two vector autoregressive (VAR) models to analyze the interdependence and structural stability of four U.S. macroeconomic time series: GDP growth, unemployment rate, interest rate, and inflation. Lastly, we contribute to the growing literature of asset returns modeling by proposing several novel models that explicitly take into account various recent findings in the empirical finance literature. Specifically, two classes of stylized facts are particularly important. The first set is concerned with the marginal distributions of asset returns. 
One prominent feature of asset returns is that the tails of their distributions are heavier than those of the normal---large returns (in absolute value) occur much more frequently than one might expect from a normally distributed random variable. Another robust empirical feature of asset returns is skewness, where the tails of the distributions are not symmetric---large losses are observed more frequently than large gains. The second set of stylized facts is concerned with the dependence structure among asset returns. Recent empirical studies have cast doubt on the adequacy of the linear dependence structure implied by the multivariate normal specification. For example, data from various asset markets, including equities, currencies and commodities markets, indicate the presence of extreme co-movement in asset returns, and this observation is again incompatible with the usual assumption that asset returns are jointly normally distributed. In light of the aforementioned empirical findings, we consider various novel models that generalize the usual normal specification. We develop efficient Markov chain Monte Carlo (MCMC) algorithms to estimate the proposed models. Moreover, since the number of plausible models is large, we perform a formal Bayesian model comparison to determine the model that best fits the data. In this way, we can directly compare the two approaches of modeling asset returns: copula models and the joint modeling of returns.
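
To make the cross-entropy idea concrete, the sketch below estimates a rare-event probability for a generic sum of exponentials by iteratively tilting a parametric importance sampling density toward the rare event. It is an illustrative toy example, not the thesis's algorithm: the dimension, threshold, sample sizes and the choice of an exponential sampling family are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem: estimate p = P(S > gamma) where S = X1 + ... + Xd,
# X_i ~ Exp(rate 1) i.i.d.  All constants below are assumptions for the sketch.
d, gamma = 10, 40.0
N, rho = 10_000, 0.1                  # sample size and elite fraction per CE iteration

# CE iterations: adaptively raise the level and refit the sampling rate v,
# where the importance density is Exp(rate v) for each component.
v, level = 1.0, -np.inf
for _ in range(50):                   # iteration cap is a safety assumption
    X = rng.exponential(scale=1.0 / v, size=(N, d))
    S = X.sum(axis=1)
    level = min(gamma, np.quantile(S, 1.0 - rho))          # (1 - rho)-quantile or target
    elite = X[S >= level]
    W = np.exp(-(1.0 - v) * elite.sum(axis=1)) / v**d      # LR of Exp(1) w.r.t. Exp(v)
    v = d * W.sum() / (W * elite.sum(axis=1)).sum()        # CE update (weighted MLE)
    if level >= gamma:
        break

# Final importance sampling estimate with the fitted tilting parameter
X = rng.exponential(scale=1.0 / v, size=(N, d))
S = X.sum(axis=1)
W = np.exp(-(1.0 - v) * S) / v**d
est = W * (S >= gamma)
p_hat = est.mean()
rel_err = est.std() / (p_hat * np.sqrt(N))
print(f"tilted rate v = {v:.3f}, p_hat = {p_hat:.3e}, rel. error = {rel_err:.3f}")
```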
62

The Generalized Splitting method for Combinatorial Counting and Static Rare-Event Probability Estimation

Zdravko Botev Unknown Date (has links)
This thesis is divided into two parts. In the first part we describe a new Monte Carlo algorithm for the consistent and unbiased estimation of multidimensional integrals and the efficient sampling from multidimensional densities. The algorithm is inspired by the classical splitting method and can be applied to general static simulation models. We provide examples from rare-event probability estimation, counting, optimization, and sampling, demonstrating that the proposed method can outperform existing Markov chain sampling methods in terms of convergence speed and accuracy. In the second part we present a new adaptive kernel density estimator based on linear diffusion processes. The proposed estimator builds on existing ideas for adaptive smoothing by incorporating information from a pilot density estimate. In addition, we propose a new plug-in bandwidth selection method that is free from the arbitrary normal reference rules used by existing methods. We present simulation examples in which the proposed approach outperforms existing methods in terms of accuracy and reliability.
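
As an illustration of the splitting idea that inspires the method above, the toy sketch below estimates a Gaussian tail probability with fixed intermediate levels, resampling the survivors at each level and rejuvenating them with Metropolis moves. The target, levels, particle count and Markov kernel are assumptions for the sketch; the generalized splitting method of the thesis constructs levels and kernels far more carefully.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy target: p = P(X1 + ... + Xd > gamma) with X_i i.i.d. N(0, 1).
d, gamma = 10, 12.0
levels = np.linspace(4.0, gamma, 5)        # illustrative intermediate thresholds
N = 5_000                                  # particles per level

def mh_move(x, level, steps=5, sigma=0.5):
    """Random-walk Metropolis targeting N(0, I) conditioned on sum(x) >= level."""
    for _ in range(steps):
        prop = x + sigma * rng.standard_normal(x.shape)
        ok = prop.sum(axis=1) >= level
        log_alpha = 0.5 * (x**2 - prop**2).sum(axis=1)
        accept = ok & (np.log(rng.random(len(x))) < log_alpha)
        x[accept] = prop[accept]
    return x

X = rng.standard_normal((N, d))
p_hat = 1.0
for i, level in enumerate(levels):
    hit = X.sum(axis=1) >= level
    p_hat *= hit.mean()                    # conditional level-crossing probability
    if not hit.any() or i == len(levels) - 1:
        break
    # Split: resample the survivors back up to N particles, then move them
    X = X[hit][rng.integers(0, hit.sum(), size=N)]
    X = mh_move(X, level)

exact = norm.sf(gamma / np.sqrt(d))        # sum of d standard normals is N(0, d)
print(f"splitting estimate: {p_hat:.3e}, exact: {exact:.3e}")
```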
63

On the use of transport and optimal control methods for Monte Carlo simulation

Heng, Jeremy January 2016 (has links)
This thesis explores ideas from transport theory and optimal control to develop novel Monte Carlo methods to perform efficient statistical computation. The first project considers the problem of constructing a transport map between two given probability measures. In the Bayesian formalism, this approach is natural when one introduces a curve of probability measures connecting the prior to the posterior by tempering the likelihood function. The main idea is to move samples from the prior using an ordinary differential equation (ODE), constructed by solving the Liouville partial differential equation (PDE) which governs the time evolution of measures along the curve. In this work, we first study the regularity that solutions of the Liouville equation should satisfy to guarantee the validity of this construction. We place an emphasis on understanding these issues, as this explains the difficulties associated with solutions that have been previously reported. After ensuring that the flow transport problem is well-defined, we give a constructive solution. However, this result is only formal, as the representation is given in terms of integrals which are intractable. For computational tractability, we propose a novel approximation of the PDE which yields an ODE whose drift depends on the full conditional distributions of the intermediate distributions. Even when the ODE is time-discretized and the full conditional distributions are approximated numerically, the resulting distribution of mapped samples can be evaluated and used as a proposal within Markov chain Monte Carlo and sequential Monte Carlo (SMC) schemes. We then illustrate experimentally that the resulting algorithm can outperform state-of-the-art SMC methods at a fixed computational complexity. The second project aims to exploit ideas from optimal control to design more efficient SMC methods. The key idea is to control the proposal distribution induced by a time-discretized Langevin dynamics so as to minimize the Kullback-Leibler divergence of the extended target distribution from the proposal. The optimal value functions of the resulting optimal control problem can then be approximated using algorithms developed in the approximate dynamic programming (ADP) literature. We introduce a novel iterative scheme to perform ADP, provide a theoretical analysis of the proposed algorithm and demonstrate that the latter can provide significant gains over state-of-the-art methods at a fixed computational complexity.
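
For context, the sketch below implements the standard tempered-likelihood SMC baseline that the abstract refers to: samples are moved from the prior towards the posterior along a curve of measures obtained by raising the likelihood to increasing powers, with resampling and Metropolis rejuvenation at each step. The conjugate Gaussian model and all tuning constants are illustrative assumptions; the thesis's flow-transport and optimal-control proposals replace the generic moves used here.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy conjugate model (an assumption for the sketch): theta ~ N(0, 1) prior,
# y | theta ~ N(theta, 1) with n observations.  The tempering path is
# pi_t(theta) proportional to prior(theta) * likelihood(theta)^lambda_t.
y = rng.normal(1.5, 1.0, size=20)
lambdas = np.linspace(0.0, 1.0, 21)
N = 2_000

def loglik(theta):
    return norm.logpdf(y[None, :], loc=theta[:, None], scale=1.0).sum(axis=1)

theta = rng.standard_normal(N)                   # samples from the prior
logw = np.zeros(N)
for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
    logw += (lam - lam_prev) * loglik(theta)     # incremental importance weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Resample, reset weights, then rejuvenate with one random-walk Metropolis step
    theta = theta[rng.choice(N, size=N, p=w)]
    logw = np.zeros(N)
    prop = theta + 0.3 * rng.standard_normal(N)
    log_alpha = (lam * loglik(prop) + norm.logpdf(prop)) - \
                (lam * loglik(theta) + norm.logpdf(theta))
    accept = np.log(rng.random(N)) < log_alpha
    theta[accept] = prop[accept]

# For this conjugate model the exact posterior mean is n * ybar / (n + 1)
print(f"SMC posterior mean: {theta.mean():.3f}, exact: {len(y) * y.mean() / (len(y) + 1):.3f}")
```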
64

Mathematical methods for portfolio management

Ondo, Guy-Roger Abessolo 08 1900 (has links)
Portfolio Management is the process of allocating an investor's wealth to investment opportunities over a given planning period. Not only should Portfolio Management be treated within a multi-period framework, but one should also take into consideration the stochastic nature of related parameters. After a short review of key concepts from Finance Theory, e.g. utility function, risk attitude, Value-at-Risk estimation methods, and mean-variance efficiency, this work describes a framework for the formulation of the Portfolio Management problem in a Stochastic Programming setting. Classical solution techniques for the resolution of the resulting Stochastic Programs (e.g. L-shaped Decomposition, Approximation of the probability function) are presented. These are discussed within both the two-stage and the multi-stage case with a special emphasis on the former. A description of how Importance Sampling and EVPI are used to improve the efficiency of classical methods is presented. Postoptimality Analysis, a sensitivity analysis method, is also described. / Statistics / M. Sc. (Operations Research)
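
As a minimal illustration of the Value-at-Risk estimation methods reviewed in the abstract, the sketch below computes Monte Carlo VaR and CVaR for a fixed-weight portfolio under an assumed multivariate normal return model; the weights, means and covariance matrix are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative portfolio loss model (all numbers are assumptions for the sketch):
# returns of three assets are jointly normal with the mean and covariance below.
w = np.array([0.5, 0.3, 0.2])                      # portfolio weights
mu = np.array([0.002, 0.001, 0.0015])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]]) * 1e-2

alpha, N = 0.99, 100_000
returns = rng.multivariate_normal(mu, cov, size=N)
loss = -(returns @ w)                              # portfolio loss per scenario

var = np.quantile(loss, alpha)                     # Value-at-Risk at level alpha
cvar = loss[loss >= var].mean()                    # expected loss beyond the VaR
print(f"VaR_{alpha:.0%} = {var:.4f},  CVaR_{alpha:.0%} = {cvar:.4f}")
```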
65

Contributions aux méthodes de Monte Carlo et leur application au filtrage statistique / Contributions to Monte Carlo methods and their application to statistical filtering

Lamberti, Roland 22 November 2018 (has links)
This thesis deals with integration calculus in the context of Bayesian inference and Bayesian statistical filtering. More precisely, we focus on Monte Carlo integration methods. We first revisit the importance sampling with resampling mechanism, then its extension to the dynamic setting known as particle filtering, and finally conclude our work with a multi-target tracking application. Firstly, we consider the problem of estimating some moment of a probability density, known up to a constant, via Monte Carlo methodology. We start by proposing a new estimator affiliated with the normalized importance sampling estimator but using two proposal densities rather than a single one. We then revisit the importance sampling with resampling mechanism as a whole in order to produce Monte Carlo samples that are independent, contrary to the classical mechanism, which enables us to develop two new estimators. Secondly, we consider the dynamic aspect in the framework of sequential Bayesian inference. We adapt to this framework our new independent resampling technique, previously developed in a static setting. This yields the particle filtering with independent resampling mechanism, which we reinterpret as a special case of auxiliary particle filtering. Because of the increased sampling cost required by this technique, we next propose a semi-independent resampling procedure which enables us to control this additional cost. Lastly, we consider an application of multi-target tracking within a sensor network using a new Bayesian model, and empirically analyze the results obtained in this application by our new particle filtering algorithm as well as by a sequential Markov chain Monte Carlo algorithm.
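
The classical mechanism revisited in the thesis can be sketched as follows: draw from a proposal, weight, form the self-normalized importance sampling estimate, then resample. The toy one-dimensional target and proposal are assumptions for the sketch; it illustrates why the resampled draws are dependent in the classical scheme, which is the point of departure for the independent resampling variants proposed above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Classical sampling-importance-resampling (SIR).  Toy target (an assumption for
# the sketch): unnormalized density p(x) proportional to exp(-x^2/2) * (1 + sin(3x)^2),
# proposal q = N(0, 2^2).
def log_p_unnorm(x):
    return -0.5 * x**2 + np.log1p(np.sin(3 * x) ** 2)

N = 50_000
x = rng.normal(0.0, 2.0, size=N)                      # draws from the proposal q
logw = log_p_unnorm(x) - norm.logpdf(x, scale=2.0)    # unnormalized weights p/q
w = np.exp(logw - logw.max())
w /= w.sum()

# Self-normalized importance sampling estimate of E_p[X^2]
m2_is = np.sum(w * x**2)

# Resampling step: the resampled draws are approximately p-distributed but, in
# this classical scheme, statistically dependent through the shared weights.
x_res = x[rng.choice(N, size=N, p=w)]
m2_sir = np.mean(x_res**2)

print(f"self-normalized IS: {m2_is:.4f},  after resampling: {m2_sir:.4f}")
```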
66

Fotorealistické zobrazování 3D scén / Photorealistic Rendering of 3D Scenes

Vlnas, Michal January 2020 (has links)
This thesis proposes a sampling concept, aimed especially at path-tracing-like algorithms, for faster convergence of the rendered scene. It uses a local approximation of the radiance in the scene based on hemispherical harmonics, which allows a more effective way of casting rays from a given surface. In the first part, the basics of photorealistic rendering are introduced together with commonly used algorithms for image synthesis. The mathematical apparatus used in this thesis is defined in the second part. Subsequently, existing solutions in this area are presented, and the following chapter summarizes state-of-the-art methods in this field. The rest of the thesis focuses on the proposal and implementation of the aforementioned extension.
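
For reference, the sketch below shows the standard baseline that guided sampling schemes of this kind improve upon: cosine-weighted hemisphere sampling around a surface normal, the usual importance sampling strategy for diffuse surfaces in a path tracer. It is not the hemispherical-harmonics scheme proposed in the thesis, and the constants are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def cosine_weighted_hemisphere(normal, n_samples):
    """Sample directions on the hemisphere around `normal` with pdf cos(theta)/pi.

    A guided sampler, such as one fitted to a local radiance approximation,
    would replace this fixed pdf with one adapted to the incoming light.
    """
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    # Local frame: z axis along the normal, disk point lifted onto the hemisphere
    local = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)], axis=1)

    # Build an orthonormal basis (t, b, n) around the surface normal
    n = normal / np.linalg.norm(normal)
    helper = np.array([0.0, 1.0, 0.0]) if abs(n[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
    t = np.cross(helper, n)
    t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return local @ np.stack([t, b, n])             # rotate into world space

dirs = cosine_weighted_hemisphere(np.array([0.0, 0.0, 1.0]), 4)
print(dirs)                                        # unit vectors with z >= 0 here
```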
67

Vícestupňové stochastické programování s CVaR: modely, algoritmy a robustnost / Multi-Stage Stochastic Programming with CVaR: Modeling, Algorithms and Robustness

Kozmík, Václav January 2015 (has links)
We formulate a multi-stage stochastic linear program with three different risk measures based on CVaR and discuss their properties, such as time consistency. The stochastic dual dynamic programming algorithm is described and its drawbacks in the risk-averse setting are demonstrated. We present a new approach to evaluating policies in multi-stage risk-averse programs, which aims to eliminate the biggest drawback: the lack of a reasonable upper bound estimator. Our approach is based on an importance sampling scheme, which is thoroughly analyzed. A general variance reduction scheme for mean-risk sampling with CVaR is provided. In order to evaluate the robustness of the presented models we extend the contamination technique to the case of large-scale programs, where a precise solution cannot be obtained. Our computational results are based on a simple multi-stage asset allocation model and confirm the usefulness of the presented procedures, as well as give additional insights into the behavior of more complex models. Keywords: Multi-stage stochastic programming, stochastic dual dynamic programming, importance sampling, contamination, CVaR
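
The CVaR risk measure underlying these models has a convenient sample-based form via the Rockafellar-Uryasev representation, sketched below on an illustrative lognormal loss sample (the loss distribution and confidence level are assumptions for the sketch).

```python
import numpy as np

rng = np.random.default_rng(6)

# Sample-based CVaR via the Rockafellar-Uryasev representation
#   CVaR_a(L) = min over t of  t + E[(L - t)_+] / (1 - a),
# whose minimizer t is the alpha-quantile (VaR).  The lognormal loss sample
# below is purely illustrative.
alpha = 0.95
L = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)

t = np.quantile(L, alpha)                          # VaR at level alpha
cvar = t + np.mean(np.maximum(L - t, 0.0)) / (1.0 - alpha)

# Cross-check against the direct average of the tail beyond VaR
cvar_direct = L[L >= t].mean()
print(f"VaR = {t:.4f}, CVaR (RU form) = {cvar:.4f}, CVaR (tail mean) = {cvar_direct:.4f}")
```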
68

Reliability Analysis of Linear Dynamic Systems by Importance Sampling-Separable Monte Carlo Technique

Thapa, Badal January 2020 (has links)
No description available.
69

GPU-Accelerated Monte Carlo Geometry Processing for Gradient-Domain Methods

Mossberg, Linus January 2021 (has links)
This thesis extends the utility of the Monte Carlo approach to PDE-based methods presented in the paper Monte Carlo Geometry Processing. In particular, we implement this method on the GPU using CUDA, and investigate more viable methods of estimating the source integral when solving Poisson's equation with intricate source terms. This is the case for a large group of gradient-domain methods in computer graphics, where source terms are represented by discrete volumetric data on regular grids. We develop unbiased source integral estimators like image-based importance sampling (IBIS) and biased estimators like source integral caching (SIC) and evaluate these against existing GPU-accelerated finite difference solvers for gradient-domain applications. By decoupling the source integration step from the WoS algorithm, we find that the SIC method can improve performance by several orders of magnitude, making it competitive with existing finite difference solvers in many cases. We further investigate the viability of distance fields for accelerated distance queries and find that these can provide significant performance improvements compared to BVHs without meaningfully affecting bias. / The thesis work was carried out at the Department of Science and Technology (ITN), Faculty of Science and Engineering, Linköping University.
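
A minimal CPU version of the walk-on-spheres (WoS) recursion that the thesis accelerates on the GPU is sketched below for the Laplace equation on the unit disk (no source term, so the source-integration step discussed above is omitted); the boundary data, tolerance and walk count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Walk-on-spheres estimator for the Laplace equation on the unit disk with
# Dirichlet boundary data g.  Since g below is harmonic, the exact solution
# inside the disk equals g as well, which gives a simple correctness check.
def g(p):
    return p[..., 0] ** 2 - p[..., 1] ** 2

def distance_to_boundary(p):                # distance to the unit circle
    return 1.0 - np.linalg.norm(p, axis=-1)

def wos(x, n_walks=20_000, eps=1e-3, max_steps=200):
    p = np.tile(np.asarray(x, dtype=float), (n_walks, 1))
    for _ in range(max_steps):
        d = distance_to_boundary(p)
        active = d > eps
        if not active.any():
            break
        # Jump to a uniformly random point on the largest empty circle around p
        theta = 2.0 * np.pi * rng.random(active.sum())
        step = np.stack([np.cos(theta), np.sin(theta)], axis=1) * d[active, None]
        p[active] += step
    # Terminate each walk at the closest boundary point and average g there
    q = p / np.linalg.norm(p, axis=-1, keepdims=True)
    return g(q).mean()

x = (0.3, 0.4)
print(f"WoS estimate u{x} = {wos(x):.4f},  exact = {x[0]**2 - x[1]**2:.4f}")
```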
70

Hierarchical Adaptive Quadrature and Quasi-Monte Carlo for Efficient Fourier Pricing of Multi-Asset Options

Samet, Michael 11 July 2023 (has links)
Efficiently pricing multi-asset options is a challenging problem in computational finance. Although classical Fourier methods are extremely fast in pricing single asset options, maintaining the tractability of Fourier techniques for multi-asset option pricing is still an area of active research. Fourier methods rely on explicit knowledge of the characteristic function of the underlying stochastic price process, allowing the option price to be computed by evaluating a multidimensional integral in the Fourier domain. The high smoothness of the integrand in the Fourier space motivates the exploration of deterministic quadrature methods that are highly efficient under certain regularity assumptions, such as adaptive sparse grids quadrature (ASGQ) and Randomized Quasi-Monte Carlo (RQMC). However, when designing a numerical quadrature method for most of the existing Fourier pricing approaches, two key factors affecting the complexity should be carefully controlled: (i) the choice of the vector of damping parameters that ensure Fourier integrability and control the regularity class of the integrand, and (ii) the high dimensionality of the integration problem. To address these challenges, in the first part of this thesis we propose a rule for choosing the damping parameters, resulting in smoother integrands. Moreover, we explore the effect of sparsification and dimension adaptivity in alleviating the curse of dimensionality. Despite the efficiency of ASGQ, its error estimates are very hard to compute. In cases where error quantification is of high priority, in the second part of this thesis, we design an RQMC-based method for the (inverse) Fourier integral computation. RQMC integration is known to be highly efficient for high-dimensional integration problems with sufficiently regular integrands, and it further allows for the computation of probabilistic error estimates. Nonetheless, using RQMC requires an appropriate transformation of the unbounded integration domain to the hypercube, which may result in a transformed integrand with singularities at the boundaries and consequently deteriorate the rate of convergence. To preserve the nice properties of the transformed integrand, we propose a model-dependent domain transformation to avoid these corner singularities and retain the optimal efficiency of RQMC. The effectiveness of the proposed optimal damping rule, the designed domain transformation procedure, and their combination with ASGQ and RQMC are demonstrated via several numerical experiments and computational comparisons to the MC approach and the COS method.
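
To illustrate the RQMC machinery referred to above, the sketch below estimates a smooth Gaussian expectation by mapping scrambled Sobol points to the unbounded domain with the inverse normal CDF, and uses independent randomizations to obtain a probabilistic error estimate. The integrand and the standard-normal transformation are illustrative stand-ins for the Fourier pricing integrand and the model-dependent domain transformation proposed in the thesis.

```python
import numpy as np
from scipy.stats import norm, qmc

# Randomized QMC sketch: estimate E[exp(-||Z||^2 / 4)] for Z ~ N(0, I_d), whose
# exact value is (2/3)^(d/2), by mapping scrambled Sobol points to R^d.
d, m, n_rand = 6, 12, 16                     # dimension, 2^m points, randomizations
exact = (2.0 / 3.0) ** (d / 2.0)

estimates = []
for seed in range(n_rand):
    sobol = qmc.Sobol(d=d, scramble=True, seed=seed)
    u = sobol.random_base2(m=m)              # 2^m scrambled Sobol points in the unit cube
    u = np.clip(u, 1e-12, 1.0 - 1e-12)       # guard against boundary values
    z = norm.ppf(u)                          # transform to the unbounded domain R^d
    estimates.append(np.mean(np.exp(-0.25 * (z**2).sum(axis=1))))

estimates = np.asarray(estimates)
err = estimates.std(ddof=1) / np.sqrt(n_rand)   # probabilistic error estimate
print(f"RQMC: {estimates.mean():.6f} +/- {err:.1e}   (exact {exact:.6f})")
```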
