481

SVI estimation of the implied volatility by Kalman filter.

Burnos, Sergey, Ngow, ChaSing January 2010 (has links)
Understanding and modelling the dynamics of the implied volatility smile is essential for trading, pricing and risk management of option portfolios. We suggest a linear Kalman filter for updating the Stochastic Volatility Inspired (SVI) model of the volatility. From a risk management perspective we generate 1-day-ahead forecasts of the profit and loss (P&L) of option portfolios. We compare the estimation of the implied volatility using the SVI model with a cubic polynomial model and find that the SVI Kalman filter outperforms the alternative.
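The abstract does not spell out the filtering equations, but the idea can be sketched as follows: treat the SVI parameters as a latent state with random-walk dynamics and update them from observed total implied variances on a grid of log-moneyness points, using the raw SVI slice w(k) = a + b(rho(k - m) + sqrt((k - m)^2 + sigma^2)). This is a minimal, hypothetical sketch with a finite-difference linearisation; all function and variable names are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: updating SVI parameters (a, b, rho, m, sigma) with a
# linearised Kalman step from observed total implied variances.
import numpy as np

def svi_total_variance(params, k):
    """Raw SVI slice: w(k) = a + b*(rho*(k-m) + sqrt((k-m)^2 + sigma^2))."""
    a, b, rho, m, sig = params
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sig ** 2))

def kalman_update(x, P, k_grid, w_obs, Q, R):
    """One predict/update cycle with random-walk dynamics and a linearised observation."""
    x_pred, P_pred = x, P + Q                      # predict step (random walk)
    # Linearise the observation map around the predicted state (finite differences).
    H = np.zeros((len(k_grid), len(x)))
    eps = 1e-6
    for j in range(len(x)):
        dx = np.zeros_like(x); dx[j] = eps
        H[:, j] = (svi_total_variance(x_pred + dx, k_grid)
                   - svi_total_variance(x_pred - dx, k_grid)) / (2 * eps)
    y = w_obs - svi_total_variance(x_pred, k_grid)  # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.solve(S, np.eye(len(k_grid)))
    return x_pred + K @ y, (np.eye(len(x)) - K @ H) @ P_pred

# Example: track five SVI parameters across an 11-point log-moneyness grid.
x0 = np.array([0.04, 0.1, -0.4, 0.0, 0.2])
P0 = np.eye(5) * 0.01
k = np.linspace(-0.5, 0.5, 11)
w = svi_total_variance(x0, k) + 0.001 * np.random.randn(11)
x1, P1 = kalman_update(x0, P0, k, w, Q=np.eye(5) * 1e-4, R=np.eye(11) * 1e-4)
```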
482

Choosing and Implementing a Quality Management System at Statistics Sweden

Lisai, Dan January 2008 (has links)
In today’s society we are surrounded by large amounts of information, quick decisions and high expectations to perform successfully in everything we do. As a statistical agency, Statistics Sweden is responsible for producing some of the information that is used for decision-making in society and is therefore under constant internal and external pressure to perform well. The responsibility to produce high-quality statistics for all customers and users is not a simple one. What is the quality of the statistics produced? How do we assure and control the quality of the statistics? Do we use our resources efficiently? These are important questions that need to be addressed. One way of addressing these and other issues is to work with quality in a systematic fashion. Thus there is a need for a Quality Management System, i.e., a systematic way to handle quality issues of all kinds in all parts of the organization, and to continue the journey towards the vision of being a “world class statistical agency”. This Master’s thesis is a description and discussion of the efforts to find a suitable Quality Management System. The thesis starts with a discussion of the vague quality concept, continues with a description of numerous frameworks, methods and systems related to quality management, together with their pros and cons, and ends with a recommendation for Statistics Sweden. The recommendation is to use the EFQM Excellence Model as a quality framework, Six Sigma as a toolbox for improvement projects, and modern internal auditing methods for evaluation and follow-up. Finally, issues related to the implementation of the system are discussed.
483

Contributions to Bayesian wavelet shrinkage

Remenyi, Norbert 07 November 2012 (has links)
This thesis provides contributions to research in Bayesian modeling and shrinkage in the wavelet domain. Wavelets are a powerful tool to describe phenomena rapidly changing in time, and wavelet-based modeling has become a standard technique in many areas of statistics, and more broadly, in science and engineering. Bayesian modeling and estimation in the wavelet domain have found useful applications in nonparametric regression, image denoising, and many other areas. In this thesis, we build on the existing techniques and propose new methods for applications in nonparametric regression, image denoising, and partially linear models. The thesis consists of an overview chapter and four main topics. In Chapter 1, we provide an overview of recent developments and the current status of Bayesian wavelet shrinkage research. The chapter contains an extensive literature review consisting of almost 100 references. The main focus of the overview chapter is on nonparametric regression, where the observations come from an unknown function contaminated with Gaussian noise. We present many methods which employ model-based and adaptive shrinkage of the wavelet coefficients through Bayes rules. These include new developments such as dependence models, complex wavelets, and Markov chain Monte Carlo (MCMC) strategies. Some applications of Bayesian wavelet shrinkage, such as curve classification, are discussed. In Chapter 2, we propose the Gibbs Sampling Wavelet Smoother (GSWS), an adaptive wavelet denoising methodology. We use the traditional mixture prior on the wavelet coefficients, but also formulate a fully Bayesian hierarchical model in the wavelet domain that accounts for the uncertainty of the prior parameters by placing hyperpriors on them. Since a closed-form solution for the Bayes estimator does not exist, the procedure is computational: the posterior mean is computed via MCMC simulation. We show how to efficiently develop a Gibbs sampling algorithm for the proposed model. The developed procedure is fully Bayesian, is adaptive to the underlying signal, and provides good denoising performance compared to state-of-the-art methods. Application of the method is illustrated on a real data set arising from the analysis of metabolic pathways, where an iterative shrinkage procedure is developed to preserve the mass balance of the metabolites in the system. We also show how the methodology can be extended to complex wavelet bases. In Chapter 3, we propose a wavelet-based denoising methodology based on a Bayesian hierarchical model using a double Weibull prior. The interesting feature is that, in contrast to the mixture priors traditionally used by some state-of-the-art methods, the wavelet coefficients are modeled by a single density. Two estimators are developed, one based on the posterior mean and the other based on the larger posterior mode, and we show how to calculate these estimators efficiently. The methodology provides good denoising performance, comparable even to state-of-the-art methods that use a mixture prior and an empirical Bayes setting of hyperparameters; this is demonstrated by simulations on standard test functions. An application to a real-world data set is also considered. In Chapter 4, we propose a wavelet shrinkage method based on a neighborhood of wavelet coefficients, which includes two neighboring coefficients and a parental coefficient. The methodology is called Lambda-neighborhood wavelet shrinkage, motivated by the shape of the considered neighborhood.
We propose a Bayesian hierarchical model using a contaminated exponential prior on the total mean energy in the Lambda-neighborhood. The hyperparameters in the model are estimated by the empirical Bayes method, and the posterior mean, median, and Bayes factor are obtained and used in the estimation of the total mean energy. Shrinkage of the neighboring coefficients is based on the ratio of the estimated and observed energy. The proposed methodology is comparable and often superior to several established wavelet denoising methods that utilize neighboring information, which is demonstrated by extensive simulations. An application to a real-world data set from inductance plethysmography is considered, and an extension to image denoising is discussed. In Chapter 5, we propose a wavelet-based methodology for estimation and variable selection in partially linear models. The inference is conducted in the wavelet domain, which provides a sparse and localized decomposition appropriate for nonparametric components with various degrees of smoothness. A hierarchical Bayes model is formulated on the parameters of this representation, where the estimation and variable selection are performed by a Gibbs sampling procedure. For both the parametric and nonparametric parts of the model we use point-mass-at-zero contamination priors with a double exponential spread distribution. In this sense we extend the model of Chapter 2 to partially linear models. Only a few papers in the area of partially linear wavelet models exist, and we show that the proposed methodology is often superior to the existing methods with respect to the task of estimating model parameters. Moreover, the method is able to perform Bayesian variable selection by a stochastic search for the parametric part of the model.
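For intuition on the mixture-prior shrinkage mentioned in Chapters 2 and 5, the following is a minimal sketch of the closed-form posterior mean of a single wavelet coefficient under a point-mass-at-zero spike and a Gaussian slab with known variances. It is a simplified stand-in, not the GSWS Gibbs sampler or the double Weibull model of the thesis, and all names are illustrative.

```python
# Illustrative sketch: posterior-mean shrinkage of a wavelet coefficient under a
# point-mass-at-zero / Gaussian-slab mixture prior (not the thesis's GSWS sampler).
import numpy as np
from scipy.stats import norm

def mixture_shrink(d, sigma2, tau2, p_slab):
    """Posterior mean of theta given d ~ N(theta, sigma2), where
    theta = 0 with prob 1 - p_slab and theta ~ N(0, tau2) with prob p_slab."""
    d = np.asarray(d, dtype=float)
    dens_slab = norm.pdf(d, scale=np.sqrt(sigma2 + tau2))   # marginal under the slab
    dens_spike = norm.pdf(d, scale=np.sqrt(sigma2))         # marginal under the spike
    post_slab = p_slab * dens_slab / (p_slab * dens_slab + (1 - p_slab) * dens_spike)
    return post_slab * (tau2 / (tau2 + sigma2)) * d          # shrink towards zero

# Small coefficients are shrunk heavily, large ones hardly at all.
print(mixture_shrink([0.1, 3.0], sigma2=1.0, tau2=10.0, p_slab=0.2))
```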
484

Statistical methods for function estimation and classification

Kim, Heeyoung 20 June 2011 (has links)
This thesis consists of three chapters. The first chapter focuses on adaptive smoothing splines for fitting functions with varying roughness. In the first part of the first chapter, we study an asymptotically optimal procedure to choose the value of a discretized version of the variable smoothing parameter in adaptive smoothing splines. With the choice given by the multivariate version of generalized cross-validation, the resulting adaptive smoothing spline estimator is shown to be consistent and asymptotically optimal under some general conditions. In the second part, we derive the asymptotically optimal local penalty function, which is subsequently used for the derivation of the locally optimal smoothing spline estimator. In the second chapter, we propose a Lipschitz-regularity-based statistical model, and apply it to coordinate measuring machine (CMM) data to estimate the form error of a manufactured product and to determine the optimal sampling positions of CMM measurements. Our proposed wavelet-based model takes advantage of the fact that the Lipschitz regularity holds for the CMM data. The third chapter focuses on the classification of functional data which are known to be well separable within a particular interval. We propose an interval-based classifier. We first estimate a baseline of each class via convex optimization, and then identify an optimal interval that maximizes the difference among the baselines. Our interval-based classifier is constructed based on the identified optimal interval. The derived classifier can be implemented via a low-order-of-complexity algorithm.
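As a much-simplified illustration of smoothing-parameter selection: the thesis studies adaptive splines with a spatially varying penalty chosen by a multivariate generalized cross-validation, whereas the sketch below picks a single global smoothing factor by plain K-fold cross-validation; all names are illustrative.

```python
# Illustrative sketch: choose one global smoothing factor for a smoothing spline
# by K-fold cross-validation (a simplification of the adaptive procedure).
import numpy as np
from scipy.interpolate import UnivariateSpline

def cv_smoothing_spline(x, y, s_grid, n_folds=5):
    """Return the spline fitted with the smoothing factor minimising CV error."""
    idx = np.arange(len(x))
    folds = np.array_split(idx, n_folds)
    cv_err = []
    for s in s_grid:
        err = 0.0
        for fold in folds:
            train = np.setdiff1d(idx, fold)                    # held-in indices, sorted
            spl = UnivariateSpline(x[train], y[train], s=s)
            err += np.mean((spl(x[fold]) - y[fold]) ** 2)      # held-out error
        cv_err.append(err / n_folds)
    best_s = s_grid[int(np.argmin(cv_err))]
    return UnivariateSpline(x, y, s=best_s), best_s

# Example on noisy data whose roughness varies along x.
x = np.linspace(0, 1, 200)
y = np.sin(8 * np.pi * x ** 2) + 0.2 * np.random.randn(200)
spline, s_hat = cv_smoothing_spline(x, y, s_grid=np.linspace(1, 20, 20))
```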
485

Test Cycle Optimization using Regression Analysis

Meless, Dejen January 2010 (has links)
Industrial robots make up an important part of today’s industry and are assigned to a range of different tasks. Needless to say, businesses need to rely on their machine park to function as planned, avoiding stops in production due to machine failures. This is where fault detection methods play a very important part. In this thesis a specific fault detection method based on signal analysis is considered. When testing a robot for faults, a specific test cycle (trajectory) is executed in order to be able to compare test data from different test occasions. Furthermore, different test cycles yield different measurements to analyse, which may affect the performance of the analysis. The question posed is: can we find an optimal test cycle so that the fault is best revealed in the test data? The goal of this thesis is to use regression analysis to investigate how the presently executed test cycle in a specific diagnosis method relates to the faults that are monitored (in this case a so-called friction fault) and to decide whether a different one should be recommended. The data also include representations of two disturbances. The results from the regression show that the variation in the test quantities utilised in the diagnosis method is explained by neither the friction fault nor the test cycle; the disturbances had too large an effect on the test quantities. This made it impossible to recommend a different (optimal) test cycle based on the analysis.
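A toy sketch of the kind of regression question asked here: regress a diagnosis test quantity on the fault magnitude and a test-cycle indicator and inspect how much variation they explain. The data below are simulated so that an unmodelled disturbance dominates, mimicking the thesis's finding of low explanatory power; all variable names are hypothetical.

```python
# Illustrative sketch: does fault level or test-cycle choice explain the variation
# in a diagnosis test quantity? (Simulated data, hypothetical names.)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
friction_fault = rng.uniform(0, 1, n)      # simulated fault magnitude
test_cycle = rng.integers(0, 2, n)         # 0/1 indicator for two candidate cycles
disturbance = rng.normal(0, 1, n)          # unmodelled disturbance
test_quantity = 0.1 * friction_fault + 2.0 * disturbance + rng.normal(0, 0.5, n)

# Regress only on the factors of interest; the disturbance is left out on purpose.
X = sm.add_constant(np.column_stack([friction_fault, test_cycle]))
fit = sm.OLS(test_quantity, X).fit()
print(fit.rsquared)   # low R^2: fault and test cycle explain little of the variation
print(fit.params)
```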
486

Pricing and Hedging of Defaultable Models

Antczak, Magdalena, Leniec, Marta January 2011 (has links)
Modelling defaultable contingent claims has attracted a lot of interest in recent years, motivated in particular by the late-2000s financial crisis. Various approaches to the subject have been proposed in several papers. This thesis tries to summarize these results and derive explicit formulas for the prices of financial derivatives with credit risk. It is divided into two main parts. The first is devoted to the well-known theory of modelling default risk, while the second presents the results on pricing defaultable claims that we obtained ourselves.
487

Energy Derivatives Pricing

Prostakova, Irina, Tazov, Alexander January 2011 (has links)
In this paper we examine the pricing of energy derivatives. Previous studies considered the same source of uncertainty for the spot and the futures prices; we investigate the problem of futures pricing with two independent sources of risk. In general the structure of the oil and gas futures markets is closely related to certain stock indices. Therefore, we develop a model for the futures market and for compound derivatives priced in accordance with the corresponding index. We derive a framework for energy derivatives pricing, compute the price of the European call option on futures and the corresponding hedging strategy, calculate the price of the European call option adjusted for an index level, and study the American put option on futures and the corresponding hedging strategies.
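For reference, the classical single-factor benchmark for options on futures is the Black (1976) formula; the sketch below prices a European call on a futures contract under that model. It is not the thesis's two-source-of-risk framework, only the standard building block it extends.

```python
# Illustrative sketch: Black (1976) price of a European call on a futures contract.
import numpy as np
from scipy.stats import norm

def black76_call(F, K, T, r, sigma):
    """Discounted expected payoff of a call on futures price F under Black-76."""
    d1 = (np.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return np.exp(-r * T) * (F * norm.cdf(d1) - K * norm.cdf(d2))

# Example: futures at 100, strike 95, six months to expiry, 30% volatility.
print(black76_call(F=100.0, K=95.0, T=0.5, r=0.02, sigma=0.30))
```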
488

The Ising Model on a Heavy Gravity Portfolio Applied to Default Contagion

Zhao, Yang, Zhang, Min January 2011 (has links)
In this paper we introduce a model of default contagion in the financial market. The structure of the companies is represented by a Heavy Gravity Portfolio, where we assume there are N sectors in the market and in each sector i there is one big trader and n_i supply companies. The supply companies in each sector are directly influenced by the big trader, and the big traders are also pairwise interacting with each other. This development of the Ising model is called the Heavy Gravity Portfolio and, based on it, the relation between the expectation and the correlation of company defaults is derived by means of simulations utilising the Gibbs sampler. Finally, methods for maximum likelihood estimation and for a likelihood ratio test of the interaction parameter in the model are derived.
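For intuition on the Gibbs sampler mentioned above, here is a generic sketch of Gibbs sampling in a small Ising-type model with pairwise couplings and external fields, where a state of +1 is read as default. The star-shaped coupling between one "big trader" and its suppliers is a toy example, not the thesis's Heavy Gravity Portfolio specification.

```python
# Illustrative sketch: Gibbs sampling of default indicators in a generic Ising-type model.
import numpy as np

def gibbs_ising(J, h, n_sweeps=1000, rng=None):
    """J: symmetric coupling matrix (zero diagonal), h: external fields;
    states s_i in {-1, +1}, with +1 interpreted as default."""
    rng = rng or np.random.default_rng(0)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    samples = np.empty((n_sweeps, n), dtype=int)
    for t in range(n_sweeps):
        for i in range(n):
            local_field = h[i] + J[i] @ s - J[i, i] * s[i]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * local_field))   # P(s_i = +1 | rest)
            s[i] = 1 if rng.random() < p_up else -1
        samples[t] = s
    return samples

# Toy example: one big trader (node 0) coupled to three supply companies.
J = np.zeros((4, 4)); J[0, 1:] = J[1:, 0] = 0.8
h = np.array([-0.2, -0.5, -0.5, -0.5])
draws = gibbs_ising(J, h, n_sweeps=2000)
print(draws.mean(axis=0))    # Monte Carlo estimate of E[s_i] for each company
```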
489

Monitoring Exchange Rates by Statistical Process Control

Ko, Byeonggeon, Gao, Yang January 2011 (has links)
The exchange rate market has traditionally played a key role in the financial market. The variation of the exchange rate, which is called volatility, is also an important feature for studying the exchange rate market, because increased volatility may have a negative effect on a nation's economy by increasing the uncertainty in the exchange market. In this paper the volatility of the exchange rate is considered by means of a Heterogeneous Autoregressive Conditional Heteroskedasticity (HARCH) model, which explains the volatility of the exchange rate market well. In addition, it is assumed that at a random time point a change of a parameter in the distribution of the random process under observation may occur. Methods such as the Shewhart method, the Cumulative Sum (CUSUM) method and the Exponentially Weighted Moving Average (EWMA) method are investigated within the frames of this change-point problem. In order to evaluate them, the Average Run Length (ARL) and the Conditional Expected Delay (CED) are used as performance measures.
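The CUSUM and EWMA recursions used in such change-point monitoring can be sketched as follows; the reference value, smoothing constant and alarm thresholds below are arbitrary illustrations, not values calibrated to a target ARL.

```python
# Illustrative sketch: one-sided CUSUM and EWMA recursions for detecting an upward
# shift in a monitored series (e.g., a volatility proxy such as squared returns).
import numpy as np

def cusum_alarm(x, target, k, h):
    """One-sided CUSUM: S_t = max(0, S_{t-1} + x_t - target - k); alarm when S_t > h."""
    s = 0.0
    for t, xt in enumerate(x):
        s = max(0.0, s + xt - target - k)
        if s > h:
            return t
    return None

def ewma_alarm(x, target, lam, h):
    """EWMA: Z_t = lam*x_t + (1-lam)*Z_{t-1}; alarm when Z_t - target > h."""
    z = target
    for t, xt in enumerate(x):
        z = lam * xt + (1 - lam) * z
        if z - target > h:
            return t
    return None

# Example: the monitored quantity shifts upward at time 100.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(1.0, 0.3, 100), rng.normal(1.5, 0.3, 100)])
print(cusum_alarm(x, target=1.0, k=0.25, h=2.0),
      ewma_alarm(x, target=1.0, lam=0.1, h=0.3))
```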
490

Comparison of modes of convergence in a particle system related to the Boltzmann equation

Petersson, Mikael January 2010 (has links)
The distribution of particles in a rarefied gas in a vessel can be described by the Boltzmann equation. As an approximation of the solution to this equation, Caprino, Pulvirenti and Wagner [3] constructed a random N-particle system. In the equilibrium case, they prove in [3] that the L1-distance between the density function of k particles in the N-particle process and the k-fold product of the solution to the stationary Boltzmann equation is of order 1/N. They do this in order to show that the N-particle system converges to the system described by the stationary Boltzmann equation as the number of particles tends to infinity. This is different from the standard approach of describing convergence of an N-particle system. Usually, convergence in distribution of random measures or weak convergence of measures over the space of probability measures is used. The purpose of the present thesis is to compare different modes of convergence of the N-particle system as N tends to infinity assuming stationarity.
