Roth, Michael, Ozkan, Emre, Gustafsson, Fredrik
We consider the filtering problem in linear state space models with heavy-tailed process and measurement noise. Our work is based on Student's t distribution, for which we give a number of useful results. The derived filtering algorithm is a generalization of the ubiquitous Kalman filter and reduces to it as a special case. Both the Kalman filter and the new algorithm are compared on a challenging tracking example in which a maneuvering target is observed in clutter. / MC Impulse
24 June 2008
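The filtering recursion summarized in the abstract can be illustrated with a single measurement update. The sketch below assumes, as a simplification, that the state prior and the measurement noise are Student's t with a common degrees-of-freedom parameter, so the update is Kalman-like with an extra residual-dependent rescaling of the posterior scale matrix; the function name and the exact moment-matching details are illustrative, not taken from the paper.

```python
import numpy as np

def t_filter_measurement_update(x, P, nu, y, H, R):
    """One measurement update of a Student's t filter (illustrative sketch).

    Assumes the state prior St(x, P, nu) and measurement noise St(0, R, nu)
    share the degrees-of-freedom parameter nu; y = H x + e.
    """
    d = y.shape[0]
    S = H @ P @ H.T + R                        # innovation scale matrix
    K = P @ H.T @ np.linalg.inv(S)             # Kalman-like gain
    r = y - H @ x                              # innovation (residual)
    delta2 = float(r @ np.linalg.solve(S, r))  # squared Mahalanobis distance
    x_post = x + K @ r
    # The t update rescales the posterior scale matrix by a factor that
    # grows with the residual, which is what yields outlier robustness.
    P_post = (nu + delta2) / (nu + d) * (P - K @ S @ K.T)
    nu_post = nu + d
    return x_post, P_post, nu_post
```

As the degrees of freedom grow, the factor (nu + delta2)/(nu + d) tends to 1 and the ordinary Kalman update is recovered, mirroring the abstract's statement that the filter reduces to the Kalman filter as a special case.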
Professional analysts should offer more interpretations derived from data, since empirical rules and domain knowledge aid statistical inference in econometrics. Approaches from classical statistical analysis draw judgments purely from historical data; their advantage is objectivity, but they have a serious drawback: they ignore logically relevant extra information. This paper applies Bayesian methods, whose essential characteristic is the acceptance of subjective views and empirical rules, to the study of unit root tests in the exchange rate market. Furthermore, the distribution of the data may directly affect the classical inference procedures in common use, such as the Dickey-Fuller and Phillips-Perron tests. To address these shortcomings, this paper does not take normally distributed disturbances for granted: heavy-tailed distributions are common in time series on stocks and other investment targets. Hence, we apply a more general model to Bayesian unit root testing. We use the model of Schotman and van Dijk (1991), assume independent Student-t distributed disturbances to revise the unit root test, and then apply it to the exchange rate market. This is the motivation of this paper.
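As a minimal frequentist illustration of the setting (not the paper's Bayesian procedure), the sketch below simulates an AR(1) process with Student-t innovations and estimates the autoregressive coefficient by ordinary least squares; under a unit root the estimate sits close to one, and heavy-tailed innovations are exactly the feature that standard Dickey-Fuller-type critical values do not account for. All names and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, n, df, rng):
    """AR(1) series y_t = rho * y_{t-1} + e_t with Student-t innovations."""
    e = rng.standard_t(df, size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + e[t]
    return y

def ols_rho(y):
    """OLS estimate of rho in the regression of y_t on y_{t-1}."""
    y_lag, y_cur = y[:-1], y[1:]
    return float(y_lag @ y_cur / (y_lag @ y_lag))

# Under a unit root (rho = 1) with heavy-tailed (df = 3) innovations, the
# OLS estimate is near 1, but its finite-sample distribution is nonstandard.
y = simulate_ar1(1.0, 2000, df=3, rng=rng)
print(round(ols_rho(y), 3))
```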
The Transformed Rejection Method for Generating Random Variables, an Alternative to the Ratio of Uniforms Method / Hörmann, Wolfgang; Derflinger, Gerhard / January 1994
Theoretical considerations and empirical results show that the one-dimensional quality of non-uniform random numbers is bad and the discrepancy is high when they are generated by the ratio of uniforms method combined with linear congruential generators. This observation motivates the suggestion to replace the ratio of uniforms method by transformed rejection (also called exact approximation or almost exact inversion), as the above problem does not occur for this method. Using the function $G(x)=\left(\frac{a}{1-x}+b\right)x$ with appropriate $a$ and $b$ as an approximation of the inverse distribution function, the transformed rejection method can be used for the same distributions as the ratio of uniforms method. The resulting algorithms for the normal, the exponential and the t-distribution are short and easy to implement. Looking at the number of uniform deviates required, at the code length and at the speed, the suggested algorithms are superior to the ratio of uniforms method and compare well with other algorithms suggested in the literature. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
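The approximate inverse $G(x)=\left(\frac{a}{1-x}+b\right)x$ can be turned into a working rejection sampler. The sketch below targets the unit exponential distribution, picks $a$ and $b$ by matching $F^{-1}(u)=-\log(1-u)$ at two interior points (an illustrative choice; the paper derives tuned constants per distribution), and bounds the acceptance ratio on a grid:

```python
import numpy as np

rng = np.random.default_rng(1)

# Choose a and b so that G(u) = (a/(1-u) + b) u matches the exponential
# inverse CDF -log(1-u) at u = 0.5 and u = 0.9 (illustrative fit only).
u1, u2 = 0.5, 0.9
A = np.array([[u1 / (1 - u1), u1], [u2 / (1 - u2), u2]])
a, b = np.linalg.solve(A, np.array([-np.log(1 - u1), -np.log(1 - u2)]))

def G(u):
    return (a / (1.0 - u) + b) * u

def G_prime(u):
    return a / (1.0 - u) ** 2 + b

def f(x):                                    # unit exponential density
    return np.exp(-x)

# Rejection bound M >= sup f(G(u)) * G'(u), found on a fine grid with margin.
ug = np.linspace(1e-6, 1.0 - 1e-6, 100_000)
M = 1.05 * np.max(f(G(ug)) * G_prime(ug))

def transformed_rejection(n):
    """Sample Exp(1) via transformed rejection: propose X = G(U) with U
    uniform, accept when V * M <= f(G(U)) * G'(U)."""
    out = []
    while len(out) < n:
        U = rng.random(2 * n)
        V = rng.random(2 * n)
        X = G(U)
        acc = V * M <= f(X) * G_prime(U)
        out.extend(X[acc].tolist())
    return np.array(out[:n])
```

Because $G$ is increasing from 0 to infinity on $(0,1)$ when $a,b>0$, the proposal covers the whole support, and the usual rejection argument shows the accepted draws have exactly the target density.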
Model-based clustering allows for the identification of subgroups in a data set through the use of finite mixture models. When applied to high-dimensional microarray data, we can discover groups of genes characterized by their gene expression profiles. In this thesis, a mixture of skew-t factor analyzers is introduced for the clustering of high-dimensional data. Notably, we make use of a version of the skew-t distribution which has not previously appeared in mixture-modelling literature. Allowing a constraint on the factor loading matrix leads to two mixtures of skew-t factor analyzers models. These models are implemented using the alternating expectation-conditional maximization algorithm for parameter estimation with an Aitken's acceleration stopping criterion used to determine convergence. The Bayesian information criterion is used for model selection and the performance of each model is assessed using the adjusted Rand index. The models are applied to both real and simulated data, obtaining clustering results which are equivalent or superior to those of established clustering methods.
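The adjusted Rand index used above to assess clustering performance can be computed directly from the pair-counting contingency table. A minimal self-contained sketch (the thesis itself presumably relies on an existing implementation):

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_true, labels_pred):
    """Adjusted Rand index via pair counting: agreement on pairs,
    corrected for chance and normalized to a maximum of 1."""
    pairs = Counter(zip(labels_true, labels_pred))
    rows = Counter(labels_true)
    cols = Counter(labels_pred)
    index = sum(comb(n, 2) for n in pairs.values())
    sum_rows = sum(comb(n, 2) for n in rows.values())
    sum_cols = sum(comb(n, 2) for n in cols.values())
    expected = sum_rows * sum_cols / comb(len(labels_true), 2)
    max_index = (sum_rows + sum_cols) / 2
    return (index - expected) / (max_index - expected)

true_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
predicted   = [1, 1, 1, 0, 0, 2, 2, 2, 2]  # relabeled, one point misassigned
print(round(adjusted_rand_index(true_labels, predicted), 3))
```

Note that the index is invariant to relabeling of the clusters, which is why it suits comparing a fitted mixture's partition against known groups.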
We document how sampling from a conditional Student's t distribution is implemented in stochvol. Moreover, a simple example using EUR/CHF exchange rates illustrates how to use the augmented sampler. We conclude with results and implications. (author's abstract)
Heracleous, Maria S.
02 October 2003
Over the last twenty years or so, the dynamic volatility literature has produced a wealth of univariate and multivariate GARCH-type models. While the univariate models have been relatively successful in empirical studies, they suffer from a number of weaknesses, such as unverifiable parameter restrictions, existence-of-moment conditions, and the retention of Normality. These problems are naturally more acute in the multivariate GARCH-type models, which in addition suffer from overparameterization. This dissertation uses the Student's t distribution and follows the Probabilistic Reduction (PR) methodology to modify and extend the univariate and multivariate volatility models viewed as alternatives to the GARCH models. Its most important advantage is that it gives rise to internally consistent statistical models that, unlike the GARCH formulations, do not require ad hoc parameter restrictions. Chapters 1 and 2 provide an overview of the dissertation and of recent developments in the volatility literature. In Chapter 3 we provide an empirical illustration of the PR approach for modeling univariate volatility. Estimation results suggest that the Student's t AR model is a parsimonious and statistically adequate representation of exchange rate returns and Dow Jones returns data. Econometric modeling based on the Student's t distribution introduces an additional variable, the degrees-of-freedom parameter. In Chapter 4 we focus on two questions relating to this parameter. A simulation study is used to examine: (i) the ability of the kurtosis coefficient to accurately capture the implied degrees of freedom, and (ii) the ability of the Student's t GARCH model to estimate the true degrees-of-freedom parameter accurately. Simulation results reveal that the kurtosis coefficient and the Student's t GARCH model (Bollerslev, 1987) provide biased and inconsistent estimators of the degrees-of-freedom parameter.
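The moment-based link behind the first of Chapter 4's questions is that a t distribution with $\nu > 4$ degrees of freedom has excess kurtosis $6/(\nu - 4)$, so inverting the sample kurtosis gives a crude estimator of $\nu$. The Monte Carlo sketch below (illustrative only; not the dissertation's simulation design) shows how noisy this inversion is at moderate sample sizes:

```python
import numpy as np

rng = np.random.default_rng(42)

def nu_from_kurtosis(x):
    """Method-of-moments estimate of the t degrees of freedom:
    since excess kurtosis of t_nu is 6/(nu - 4) for nu > 4,
    invert to nu_hat = 4 + 6 / (sample excess kurtosis)."""
    x = np.asarray(x)
    z = x - x.mean()
    m2 = np.mean(z ** 2)
    m4 = np.mean(z ** 4)
    excess = m4 / m2 ** 2 - 3.0
    return 4.0 + 6.0 / excess

# Repeat the estimate on fresh t_8 samples of size 500.
nu_true, n, reps = 8.0, 500, 200
estimates = [nu_from_kurtosis(rng.standard_t(nu_true, size=n))
             for _ in range(reps)]
print(round(float(np.median(estimates)), 2))
```

The spread of the replicated estimates around the true value of 8 illustrates why the kurtosis coefficient is an unreliable route to the degrees-of-freedom parameter, consistent with the chapter's conclusion.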
Chapter 5 develops the Student's t Dynamic Linear Regression (DLR) model, which allows us to explain univariate volatility in terms of (i) volatility in the past history of the series itself and (ii) volatility in other relevant exogenous variables. Empirical results of this chapter suggest that the Student's t DLR model provides a promising way to model volatility. The main advantage of this model is that it is defined in terms of observable random variables and their lags, and not in terms of the errors, as is the case with the GARCH models. This makes the inclusion of relevant exogenous variables a natural part of the model setup. In Chapter 6 we propose the Student's t VAR model, which deals effectively with several key issues raised in the multivariate volatility literature. In particular, it ensures positive definiteness of the variance-covariance matrix without requiring any unrealistic coefficient restrictions, and it provides a parsimonious description of the conditional variance-covariance matrix by jointly modeling the conditional mean and variance functions. / Ph. D.
Master of Science / Department of Statistics / Weixin Yao / In this report, we propose a robust mixture of regressions model based on the t-distribution, extending the mixture of t-distributions proposed by Peel and McLachlan (2000) to the regression setting. This new mixture of regressions model is robust to outliers in the y direction but not to outliers at high-leverage points. To combat this, we also propose a modified version of the method, which fits the mixture of regressions based on the t-distribution after adaptively trimming the high-leverage points. We further propose adaptively choosing the degrees of freedom of the t-distribution using the profile likelihood. The proposed robust mixture regression estimator has high efficiency due to this adaptive choice. We demonstrate the effectiveness of the proposed method and compare it with some existing methods through a simulation study.
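The high-leverage points mentioned above are the ones the modified method trims before fitting. A minimal sketch of one common trimming rule based on hat-matrix leverages (a stand-in for the report's adaptive trimming; the threshold and names are illustrative):

```python
import numpy as np

def leverage(X):
    """Diagonal of the hat matrix H = X (X'X)^{-1} X'; h_ii measures how
    far observation i lies from the bulk of the predictors. Computed via
    the QR decomposition for numerical stability."""
    Q, _ = np.linalg.qr(X)
    return np.sum(Q ** 2, axis=1)

def trim_high_leverage(X, y, c=3.0):
    """Drop points whose leverage exceeds c times the average leverage
    p/n, a common rule of thumb."""
    n, p = X.shape
    keep = leverage(X) <= c * p / n
    return X[keep], y[keep]

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
X[0, 1] = 25.0                        # one high-leverage point in x
y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
X_t, y_t = trim_high_leverage(X, y)
print(X.shape[0] - X_t.shape[0])      # number of points trimmed
```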
Master of Science / Department of Statistics / Weixing Song / A robust estimation procedure for mixture errors-in-variables linear regression models is proposed in this report by assuming that the error terms follow a t-distribution. The estimation procedure is implemented with an EM algorithm, based on the fact that the t-distribution is a scale mixture of a normal distribution with a Gamma mixing distribution. The finite-sample performance of the proposed algorithm is evaluated in extensive simulation studies, and a comparison is made with the MLE procedure under the normality assumption.
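The scale-mixture representation that drives such EM algorithms is: if $w \sim \mathrm{Gamma}(\nu/2,\ \mathrm{rate}=\nu/2)$ and $x \mid w \sim N(0, 1/w)$, then marginally $x \sim t_\nu$. A quick numerical check of this fact (illustrative; the report's EM details are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
nu, n = 5.0, 200_000

# Draw Gamma mixing weights (shape nu/2, rate nu/2, i.e. scale 2/nu),
# then normals with variance 1/w; marginally this is Student's t_nu.
w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
x_mix = rng.normal(size=n) / np.sqrt(w)
x_ref = rng.standard_t(nu, size=n)    # direct t draws for comparison

# The two samples should agree quantile by quantile up to Monte Carlo error.
q = np.linspace(0.05, 0.95, 19)
gap = float(np.max(np.abs(np.quantile(x_mix, q) - np.quantile(x_ref, q))))
print(gap < 0.08)
```

In the EM algorithm this representation makes the complete-data model conditionally Gaussian, so the M-step reduces to weighted least squares with weights supplied by the E-step.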
Talarico, Alina Marcondes
23 January 2014
Proficiency Testing (PT) programs are used by society to assess the competence and reliability of laboratories in carrying out specific measurements. Many PT groups have been established by INMETRO, among them the engine-testing group. Each group is formed by laboratories that measure the same artifact, and their measurements are compared through statistical methods. The engine group chose a 1.0 gasoline engine, kindly provided by GM Powertrain, as the artifact. The artifact's power was measured at ten rotation speeds by six laboratories. Here, motivated by this data set, we extend the comparative calibration model of Barnett (1969) to assess the compatibility of the laboratories under the Student's t distribution, and we present the results obtained from applications and simulations on this data set.
Ali Mohamed, Khadar
The purpose of this study is to compare the accuracy of four different VaR models: Historical Simulation (HS), Simple Moving Average (SMA), Exponentially Weighted Moving Average (EWMA), and Exponentially Weighted Historical Simulation (EWHS). These VaR models are applied to one underlying asset, Brent Blend oil, at the 95%, 99%, and 99.9% confidence levels. Concerning the asset's returns, the models are studied under two different distributional assumptions, namely the Student's t-distribution and the normal distribution.
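A minimal sketch of two of the ingredients compared above: historical simulation and a parametric EWMA VaR under the normal versus a unit-variance Student-t assumption. The simulated return series, the RiskMetrics-style lambda = 0.94 choice, and the quantile constants (taken from standard tables) are all illustrative; SMA and EWHS are omitted.

```python
import numpy as np

rng = np.random.default_rng(11)

def var_historical(returns, alpha=0.99):
    """Historical simulation: VaR is minus the empirical lower quantile."""
    return -np.quantile(returns, 1.0 - alpha)

def ewma_sigma(returns, lam=0.94):
    """EWMA volatility with the usual RiskMetrics decay lambda = 0.94."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return np.sqrt(var)

# 99% quantile constants from standard tables: 2.326 for the normal,
# 3.365 for t with 5 degrees of freedom (rescaled to unit variance below).
Z_99, T5_99, NU = 2.326, 3.365, 5.0

# Illustrative heavy-tailed return series, scaled to unit-variance t draws.
returns = 0.02 * rng.standard_t(NU, size=1000) * np.sqrt((NU - 2) / NU)
sigma = ewma_sigma(returns)
var_normal = sigma * Z_99
var_t = sigma * T5_99 * np.sqrt((NU - 2) / NU)
print(var_t > var_normal)   # the t assumption yields the larger 99% VaR
```

Even with both distributions standardized to the same variance, the t quantile exceeds the normal one, which is why the distributional assumption matters at high confidence levels.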