21 |
GARCH models applied on Swedish Stock Exchange Indices
Blad, Wiktor; Nedic, Vilim. January 2019 (has links)
In the financial industry, measuring risk has become increasingly important. One of the most common quantitative measures for assessing risk is Value-at-Risk (VaR). VaR helps to measure the extreme risks that an investor is exposed to. In addition to providing information about the expected loss, VaR was introduced in the regulatory frameworks of Basel I and II as a standardized measure of market risk. Due to the necessity of measuring VaR accurately, this thesis aims to contribute to the research field of applying GARCH models to financial time series in order to forecast the conditional variance and obtain accurate VaR estimates. The findings of this thesis are that GARCH models which incorporate the asymmetric effect of positive and negative returns perform better than a standard GARCH. Furthermore, leptokurtic distributions have been found to outperform the normal distribution. In addition to the various models and distributions, different rolling-window lengths have been used to examine how the forecasts differ with window length.
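As a rough illustration of the kind of computation this abstract describes (not the author's code), the sketch below filters returns through a GARCH(1,1) variance recursion with assumed parameter values and converts the one-step-ahead volatility into a parametric 99% VaR using a Student-t quantile. In practice the parameters would be estimated by maximum likelihood over a rolling window.

```python
import numpy as np
from scipy import stats

def garch_var(returns, omega=1e-6, alpha=0.08, beta=0.90, nu=6.0, level=0.99):
    """One-step-ahead VaR from a GARCH(1,1) filter with Student-t innovations.

    The parameter values (omega, alpha, beta, nu) are illustrative assumptions,
    not estimates from the thesis; the mean return is taken to be zero.
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = r.var()                      # initialise at the sample variance
    for t in range(len(r)):
        sigma2[t + 1] = omega + alpha * r[t] ** 2 + beta * sigma2[t]
    sigma_next = np.sqrt(sigma2[-1])         # forecast volatility for day T+1
    # lower-tail quantile of a Student-t rescaled to unit variance
    q = stats.t.ppf(1 - level, df=nu) * np.sqrt((nu - 2) / nu)
    return -q * sigma_next                   # VaR reported as a positive loss

rng = np.random.default_rng(0)
simulated = rng.standard_t(6, size=1000) * 0.01   # stand-in for index returns
print(f"1-day 99% VaR: {garch_var(simulated):.4%}")
```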
|
22 |
Diagnóstico de influência bayesiano em modelos de regressão da família t-assimétrica / Bayesian influence diagnostic in skew-t family linear regression models
Silva, Diego Wesllen da. 05 May 2017 (has links)
O modelo de regressão linear com erros na família de distribuições t-assimétrica, que contempla as distribuições normal, t-Student e normal assimétrica como casos particulares, tem sido considerado uma alternativa robusta ao modelo normal. Para concluir qual modelo é, de fato, mais robusto, é importante ter um método tanto para identificar uma observação como discrepante quanto para aferir a influência que esta observação terá em nossas estimativas. Nos modelos de regressão bayesianos, uma das medidas de identificação de observações discrepantes mais conhecidas é a conditional predictive ordinate (CPO). Analisamos a influência dessas observações nas estimativas tanto de forma global, isto é, no vetor completo de parâmetros do modelo, quanto de forma marginal, apenas nos parâmetros regressores. Consideramos a norma L1 e a divergência Kullback-Leibler como medidas de influência das observações nas estimativas dos parâmetros. Além disso, encontramos as distribuições condicionais completas de todos os modelos para o uso do algoritmo de Gibbs, obtendo, assim, amostras da distribuição a posteriori dos parâmetros. Tais amostras são utilizadas no cálculo do CPO e das medidas de divergência estudadas. A principal contribuição deste trabalho é obter as medidas de influência global e marginal calculadas para os modelos t-Student, normal assimétrico e t-assimétrico. Na aplicação em dados reais originais e contaminados, observamos que, em geral, o modelo t-Student é uma alternativa robusta ao modelo normal. Por outro lado, o modelo t-assimétrico não é, em geral, uma alternativa robusta ao modelo normal. A capacidade de robustificação do modelo t-assimétrico está diretamente ligada à posição do resíduo do ponto discrepante em relação à distribuição dos resíduos. / The linear regression model with errors in the skew-t family, which includes the normal, Student-t and skew-normal distributions as particular cases, has been considered a robust alternative to the normal model. To conclude which model is in fact more robust, it is important to have a method both to identify an observation as an outlier and to assess the influence of this observation on the estimates. In Bayesian regression models, one of the best-known measures for identifying an outlier is the conditional predictive ordinate (CPO). We analyze the influence of these observations on the estimates both globally, that is, on the complete parameter vector of the model, and marginally, only on the regression parameters. We consider the L1 norm and the Kullback-Leibler divergence as influence measures of the observations on the parameter estimates. Using the Bayesian approach, we derive the full conditional distributions of all the models for use in the Gibbs sampler, thus obtaining samples from the posterior distribution of the parameters. These samples are used in the calculation of the CPO and the studied divergence measures. The major contribution of this work is to present the global and marginal influence measures calculated for the Student-t, skew-normal and skew-t models. In the application to original and contaminated real data, we observed that, in general, the Student-t model is a robust alternative to the normal model. However, the skew-t model is not, in general, a robust alternative to the normal model. The robustification capability of the skew-t model is directly linked to the position of the residual of the outlier in relation to the distribution of the residuals.
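To make the CPO concrete, here is a minimal sketch (not taken from the thesis) of how it is typically computed from Gibbs-sampler output for a normal linear regression: the CPO of observation i is the harmonic mean, over posterior draws, of its likelihood, and small values flag potential outliers. The data and posterior draws below are fabricated placeholders.

```python
import numpy as np
from scipy import stats

def cpo(y, X, beta_draws, sigma_draws):
    """Conditional predictive ordinate of each observation in a normal linear
    regression, computed from posterior draws (e.g. Gibbs-sampler output):
    CPO_i = [ mean_m 1 / f(y_i | beta^(m), sigma^(m)) ]^{-1}."""
    mu = beta_draws @ X.T                                   # (draws, n) fitted means
    dens = stats.norm.pdf(y, loc=mu, scale=sigma_draws[:, None])
    return 1.0 / np.mean(1.0 / dens, axis=0)                # harmonic mean per observation

# Toy example with simulated data and fake posterior draws (illustrative only)
rng = np.random.default_rng(1)
n, M = 50, 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)
y[0] += 4.0                                                 # plant an outlier
beta_draws = np.array([1.0, 2.0]) + 0.05 * rng.normal(size=(M, 2))
sigma_draws = 0.5 + 0.02 * np.abs(rng.normal(size=M))
print("indices of the three smallest CPOs:",
      np.argsort(cpo(y, X, beta_draws, sigma_draws))[:3])
```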
|
24 |
Multiple hypothesis testing and multiple outlier identification methods
Yin, Yaling. 13 April 2010
Traditional multiple hypothesis testing procedures, such as that of Benjamini and Hochberg, fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this thesis it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses, as proposed by Black, gives a procedure with superior power.
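The two procedures being compared can be stated compactly in code. The sketch below is a generic illustration of the Benjamini-Hochberg step-up rule and of Storey's fixed-rejection-region FDR estimate, not the thesis's own implementation; the tuning constants (alpha, gamma, lambda) are placeholder choices.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Fixed-error-rate BH step-up procedure: reject the hypotheses whose
    sorted p-values fall below the line alpha * rank / m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank meeting its threshold
        reject[order[:k + 1]] = True
    return reject

def storey(pvals, gamma=0.05, lam=0.5):
    """Storey's fixed rejection region [0, gamma]: reject p <= gamma and
    estimate the FDR of that region, with pi0 estimated from p-values > lam."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    pi0 = np.sum(p > lam) / ((1.0 - lam) * m)  # estimated proportion of true nulls
    fdr_hat = pi0 * gamma * m / max(np.sum(p <= gamma), 1)
    return p <= gamma, fdr_hat

# Example: mostly null p-values with a few very small ones
rng = np.random.default_rng(2)
pv = np.concatenate([rng.uniform(size=95), rng.uniform(0, 0.002, size=5)])
rej_storey, fdr_hat = storey(pv)
print("BH rejections:", benjamini_hochberg(pv).sum())
print("Storey rejections:", rej_storey.sum(), "estimated FDR:", round(fdr_hat, 3))
```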
Multiple hypothesis testing can also be applied to regression diagnostics. In this thesis, a Bayesian method is proposed to test multiple hypotheses, of which the i-th null and alternative hypotheses are that the i-th observation is not an outlier versus it is, for i=1,...,m. In the proposed Bayesian model, it is assumed that outliers have a mean shift, where the proportion of outliers and the mean shift respectively follow a Beta prior distribution and a normal prior distribution. It is proved in the thesis that for the proposed model, when there exists more than one outlier, the marginal distributions of the deletion residual of the i-th observation under both null and alternative hypotheses are doubly noncentral t distributions. The outlyingness of the i-th observation is measured by the marginal posterior probability that the i-th observation is an outlier given its deletion residual. An importance sampling method is proposed to calculate this probability. This method requires the computation of the density of the doubly noncentral F distribution, which is approximated using Patnaik's approximation. An algorithm is proposed in this thesis to examine the accuracy of Patnaik's approximation. The comparison of this algorithm's output with Patnaik's approximation shows that the latter can save massive computation time without losing much accuracy.
The proposed Bayesian multiple outlier identification procedure is applied to some simulated data sets. Various simulation and prior parameters are used to study the sensitivity of the posteriors to the priors. The area under the ROC curve (AUC) is calculated for each combination of parameters. A factorial design analysis on AUC is carried out by choosing various simulation and prior parameters as factors. The resulting AUC values are high for the various selected parameters, indicating that the proposed method can identify the majority of outliers within tolerable errors. The results of the factorial design show that the priors do not have much effect on the marginal posterior probability as long as the sample size is not too small.
In this thesis, the proposed Bayesian procedure is also applied to a real data set obtained by Kanduc et al. in 2008. The proteomes of thirty viruses examined by Kanduc et al. are found to share a high number of pentapeptide overlaps with the human proteome. In a linear regression analysis of the level of viral overlap with the human proteome against the length of the viral proteome, Kanduc et al. report that among the thirty viruses, human T-lymphotropic virus 1, Rubella virus, and hepatitis C virus present relatively higher levels of overlap with the human proteome than the predicted level. The results obtained using the proposed procedure indicate that the four viruses with extremely large sizes (Human herpesvirus 4, Human herpesvirus 6, Variola virus, and Human herpesvirus 5) are more likely to be the outliers than the three reported viruses. The results with the four extreme viruses deleted confirm the claim of Kanduc et al.
|
25 |
Value at Risk: A Standard Tool in Measuring Risk: A Quantitative Study on Stock Portfolio
Ofe, Hosea; Okah, Peter. January 2011 (has links)
The role of risk management has gained momentum in recent years, most notably after the recent financial crisis. This thesis uses a quantitative approach to evaluate the theory of Value at Risk (VaR), which is considered a benchmark for measuring financial risk. The thesis makes use of both parametric and non-parametric approaches to evaluate the effectiveness of VaR as a standard tool in measuring the risk of a stock portfolio. This study uses the normal distribution, Student's t-distribution, historical simulation and the exponentially weighted moving average at the 95% and 99% confidence levels on the stock returns of Sony Ericsson, the three-month Swedish Treasury bill (STB3M) and Nordea Bank. The evaluations of the VaR models are based on the Kupiec (1995) test. From a general perspective, the results of the study indicate that VaR as a proxy of risk measurement has some imprecision in its estimates. However, this imprecision is not the same for all the approaches. The results indicate that models which assume normality of the return distribution perform worse at both confidence levels than models which assume fatter tails or have leptokurtic characteristics. Another interesting finding is that during periods of high volatility, such as the financial crisis of 2008, the imprecision of VaR estimates increases. For the parametric approaches, the t-distribution VaR estimates were accurate at the 95% confidence level, while the normal distribution approach produced inaccurate estimates at the 95% confidence level. However, both approaches were unable to provide accurate estimates at the 99% confidence level. For the non-parametric approaches, the exponentially weighted moving average outperformed the historical simulation approach at the 95% confidence level, while at the 99% confidence level both approaches tend to perform equally. The results of this study thus question the reliability of VaR as a standard tool in measuring the risk of a stock portfolio. They also suggest that more research should be done to improve the accuracy of VaR approaches, given that the role of risk management in today's business environment is greater than ever before. The study suggests that VaR should be complemented with other risk measures, such as extreme value theory and stress testing, and that more than one backtesting technique should be used to test the accuracy of VaR.
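For readers unfamiliar with the backtest used here, the following sketch (a generic illustration, not the authors' code) computes a RiskMetrics-style EWMA VaR series and applies Kupiec's (1995) proportion-of-failures test to the resulting violations; the smoothing constant 0.94 and the simulated returns are assumptions.

```python
import numpy as np
from scipy import stats

def ewma_var(returns, lam=0.94, level=0.99):
    """RiskMetrics-style VaR: EWMA variance with decay lam and a normal quantile."""
    r = np.asarray(returns, dtype=float)
    s2 = np.empty(len(r))
    s2[0] = r.var()
    for t in range(1, len(r)):
        s2[t] = lam * s2[t - 1] + (1 - lam) * r[t - 1] ** 2
    return -stats.norm.ppf(1 - level) * np.sqrt(s2)   # positive loss threshold

def kupiec_pof(violations, p):
    """Kupiec (1995) proportion-of-failures LR test of a VaR model.
    violations: boolean array of days on which the loss exceeded the VaR;
    p: nominal tail probability (0.01 for a 99% VaR)."""
    T, x = len(violations), int(np.sum(violations))
    ll_null = (T - x) * np.log(1 - p) + x * np.log(p)
    ll_alt = ((T - x) * np.log(1 - x / T) + x * np.log(x / T)) if 0 < x < T else 0.0
    lr = -2 * (ll_null - ll_alt)
    return lr, 1 - stats.chi2.cdf(lr, df=1)           # chi-square(1) p-value

rng = np.random.default_rng(3)
rets = rng.standard_t(5, size=1250) * 0.01            # stand-in for stock returns
var99 = ewma_var(rets, level=0.99)
hits = -rets > var99                                  # losses exceeding the VaR
lr, pval = kupiec_pof(hits, p=0.01)
print(f"violations: {hits.sum()} of {len(rets)}, LR = {lr:.2f}, p-value = {pval:.3f}")
```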
|
27 |
An Investigation of Distribution Functions
Su, Nan-cheng. 24 June 2008
The study of properties of probability distributions has always been a persistent theme of statistics and of applied probability. This thesis deals with an investigation of distribution functions under the following two topics: (i) characterization of distributions based on record values and order statistics, (ii) properties of the skew-t distribution.
Within the extensive characterization literature there are several results involving properties of record values and order statistics. Although many well-known results have already been developed, it is still of great interest to find new characterizations of distributions based on record values and order statistics. In the first part, we provide the conditional distribution of any record value given the maximum order statistic and study characterizations of distributions based on record values and the maximum order statistic. We also give some characterizations of the mean value function within the class of order statistics point processes, by using certain relations between the conditional moments of the jump times or current lives. These results can be applied to characterize the uniform distribution using the sequence of order statistics, and the exponential distribution using the sequence of record values, respectively.
Azzalini (1985, 1986) introduced the skew-normal distribution, which includes the normal distribution as a special case, shares several of its properties, and yet is skewed. This class of distributions is useful in studying robustness and for modeling skewness. Since then, skew-symmetric distributions have been proposed by many authors. In the second part, the so-called generalized skew-t distribution is defined and studied. Examples of distributions in this class, generated by the ratio of two independent skew-symmetric distributions, are given. We also investigate properties of the skew-symmetric distribution.
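For orientation, the densities mentioned here have simple closed forms. The sketch below evaluates Azzalini's skew-normal density 2*phi(x)*Phi(alpha*x) and the Azzalini-Capitanio form of the skew-t density; the generalized skew-t studied in this thesis is a broader construction, so this is only a reference point, not the thesis's definition.

```python
import numpy as np
from scipy import stats

def skew_normal_pdf(x, alpha):
    """Azzalini (1985) skew-normal density: 2 * phi(x) * Phi(alpha * x)."""
    return 2.0 * stats.norm.pdf(x) * stats.norm.cdf(alpha * x)

def skew_t_pdf(x, alpha, nu):
    """Azzalini-Capitanio skew-t density:
    2 * t_nu(x) * T_{nu+1}(alpha * x * sqrt((nu + 1) / (nu + x^2)))."""
    w = alpha * x * np.sqrt((nu + 1.0) / (nu + x ** 2))
    return 2.0 * stats.t.pdf(x, df=nu) * stats.t.cdf(w, df=nu + 1)

x = np.linspace(-4, 4, 5)
print(skew_normal_pdf(x, alpha=3.0))
print(stats.skewnorm.pdf(x, a=3.0))      # scipy's built-in skew-normal agrees
print(skew_t_pdf(x, alpha=3.0, nu=5))
```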
|
28 |
Mensuração de risco de mercado com modelo Arma-Garch e distribuição T assimétrica / Market risk measurement with an ARMA-GARCH model and a skewed t distribution
Mori, Renato Seiti. 22 August 2017
A proposta do estudo é aplicar ao Ibovespa um modelo paramétrico de VaR de 1 dia, com distribuição dos retornos dinâmica, que procura apreciar características empíricas comumente apresentadas por séries financeiras, como clusters de volatilidade e leptocurtose. O processo de retornos é modelado como um ARMA com erros GARCH que seguem distribuição t assimétrica. A metodologia foi comparada com o RiskMetrics e com modelos ARMA-GARCH com distribuição dos erros normal e t. Os modelos foram estimados diariamente usando uma janela móvel de 1008 dias. Foi verificado pelos backtests de Christoffersen e de Diebold, Gunther e Tay que, dentre os modelos testados, o ARMA(2,2)-GARCH(2,1) com distribuição t assimétrica apresentou os melhores resultados. / The aim of the study is to apply to the Ibovespa a parametric 1-day VaR model with a dynamic return distribution, designed to address empirical features commonly observed in financial series, such as volatility clustering and leptokurtosis. The return process is modeled as an ARMA with GARCH errors that follow a skewed t distribution. The methodology was compared with RiskMetrics and with ARMA-GARCH models with normally and t-distributed errors. The models were re-estimated daily using a rolling window of 1008 days. According to the backtests of Christoffersen and of Diebold, Gunther and Tay, the ARMA(2,2)-GARCH(2,1) with skewed t distribution produced the best results among the tested models.
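The following is a generic sketch of the Christoffersen independence backtest mentioned in this abstract (not the author's code): it checks whether VaR violations cluster in time by comparing a first-order Markov model of the hit sequence against independence. The example hit sequence is an assumption.

```python
import numpy as np
from scipy import stats

def christoffersen_independence(hits):
    """Christoffersen LR test of independence for a VaR violation sequence.
    hits: boolean array marking the days on which the loss exceeded the VaR."""
    h = np.asarray(hits, dtype=int).tolist()
    pairs = list(zip(h[:-1], h[1:]))
    n00, n01 = pairs.count((0, 0)), pairs.count((0, 1))
    n10, n11 = pairs.count((1, 0)), pairs.count((1, 1))
    pi01 = n01 / max(n00 + n01, 1)            # P(hit | no hit yesterday)
    pi11 = n11 / max(n10 + n11, 1)            # P(hit | hit yesterday)
    pi = (n01 + n11) / max(len(pairs), 1)     # unconditional hit probability

    def loglik(p, zeros, ones):               # with the convention 0 * log(0) = 0
        out = 0.0
        if zeros:
            out += zeros * np.log(1 - p)
        if ones:
            out += ones * np.log(p)
        return out

    ll_null = loglik(pi, n00 + n10, n01 + n11)
    ll_alt = loglik(pi01, n00, n01) + loglik(pi11, n10, n11)
    lr = -2 * (ll_null - ll_alt)
    return lr, 1 - stats.chi2.cdf(lr, df=1)

# Example: evenly scattered violations should not look dependent
hits = np.zeros(1000, dtype=bool)
hits[::97] = True
print(christoffersen_independence(hits))
```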
|
29 |
A heteroscedastic volatility model with Fama and French risk factors for portfolio returns in Japan / En heteroskedastisk volatilitetsmodell med Fama och French-riskfaktorer för portföljavkastning i Japan
Wallin, Edvin; Chapman, Timothy. January 2021 (has links)
This thesis has used the Fama and French five-factor model (FF5M) and proposed an alternative model. The proposed model is named the Fama and French five-factor heteroscedastic Student's model (FF5HSM). The model utilises an ARMA model for the returns, with the FF5M factors incorporated, and a GARCH(1,1) model for the volatility. The FF5HSM uses return data from the FF5M's portfolio construction for the Japanese stock market and the five risk factors. The portfolios capture different levels of market capitalisation, and the factors capture market risk. The ARMA modelling is used to address the autocorrelation present in the data. To deal with the heteroscedasticity in the daily returns of stocks, a GARCH(1,1) model has been used. This order of GARCH model is regarded as reasonable in the academic literature for this type of data. Another finding in earlier research is that asset returns do not satisfy the normality assumption of a standard regression model. Therefore, the skewed Student's t-distribution has been assumed for the error terms. The results indicate that the FF5HSM has a better in-sample fit than the FF5M. The FF5HSM addresses the heteroscedasticity and autocorrelation in the data and minimises them, depending on the portfolio. Regarding forecasting, both the FF5HSM and the FF5M are accurate models, depending on which portfolio the model is applied to.
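A rough two-step sketch of the kind of model described here is shown below: an OLS regression of portfolio returns on the five factors for the conditional mean, followed by a GARCH(1,1) with skewed Student's t errors fitted to the residuals via the arch package. This is a simplified stand-in (joint factor-ARMA-GARCH estimation is not attempted), and the data and column names are simulated placeholders rather than the thesis's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from arch import arch_model

# Simulated stand-ins for daily portfolio returns and the five factors;
# the column names are placeholders, not the thesis's data set.
rng = np.random.default_rng(4)
n = 1500
factors = pd.DataFrame(rng.normal(scale=0.01, size=(n, 5)),
                       columns=["MKT_RF", "SMB", "HML", "RMW", "CMA"])
ret = 0.0002 + factors @ np.array([1.0, 0.3, 0.2, 0.1, 0.1]) \
      + rng.normal(scale=0.01, size=n)

# Step 1: factor regression for the conditional mean
mean_fit = sm.OLS(ret, sm.add_constant(factors)).fit()

# Step 2: GARCH(1,1) with skewed Student's t errors on the mean residuals
# (residuals scaled by 100 to help the optimizer)
vol_fit = arch_model(100 * mean_fit.resid, mean="Zero", vol="GARCH",
                     p=1, q=1, dist="skewt").fit(disp="off")
print(mean_fit.params.round(3))
print(vol_fit.params.round(4))
```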
|
30 |
INTROSTAT (Statistics textbook)
Underhill, Les; Bradfield, Dave. January 2013 (has links)
IntroSTAT was designed to meet the needs of students, primarily those in business, commerce and management, for a course in applied statistics. IntroSTAT is designed as a lecture-book; one of its aims is to maximize the time spent explaining concepts and doing examples. The book is commonly used as part of first-year courses in statistics.
|