About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Flexible Extremal Dependence Models for Multivariate and Spatial Extremes

Zhang, Zhongwei 11 1900
Classical models for multivariate or spatial extremes are mainly based upon the asymptotically justified max-stable or generalized Pareto processes. These models are suitable when asymptotic dependence is present. However, recent environmental data applications suggest that asymptotic independence is equally important, so there is a pressing need for flexible subasymptotic models. This dissertation makes four major contributions to subasymptotic modeling of multivariate and spatial extremes. Firstly, it proposes a new spatial copula model for extremes based on the multivariate generalized hyperbolic distribution; the extremal dependence of this distribution is revisited and a corrected theoretical description is provided. Secondly, it thoroughly investigates the extremal dependence of stochastic processes driven by exponential-tailed Lévy noise, showing that the discrete approximation models, which are linear transformations of a random vector with independent components, bridge asymptotic independence and asymptotic dependence in a novel way, whilst the exact stochastic processes exhibit only asymptotic independence. Thirdly, it explores two different notions of optimal prediction for extremes and compares the classical linear kriging predictor with the conditional mean predictor for certain non-Gaussian models. Finally, it proposes a multivariate skew-elliptical link model for correlated, highly imbalanced (extreme) binary responses, and shows that the regression coefficients have a closed-form unified skew-elliptical posterior under an elliptical prior.
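The distinction between asymptotic dependence and asymptotic independence drawn above can be probed empirically with the tail-dependence coefficient χ(u) = P(F(Y) > u | F(X) > u). The following is a minimal sketch, not taken from the dissertation; the rank-based margin transform and the simulated data are illustrative only:

```python
import numpy as np

def chi_hat(x, y, u=0.95):
    """Empirical tail-dependence coefficient chi(u) = P(F(Y) > u | F(X) > u).

    chi(u) -> 0 as u -> 1 suggests asymptotic independence;
    a positive limit suggests asymptotic dependence.
    """
    n = len(x)
    # Rank-transform each margin to approximately Uniform(0, 1)
    fx = np.argsort(np.argsort(x)) / n
    fy = np.argsort(np.argsort(y)) / n
    exceed_x = fx > u
    if not exceed_x.any():
        return 0.0
    return np.mean(exceed_x & (fy > u)) / np.mean(exceed_x)

rng = np.random.default_rng(0)
z, w = rng.normal(size=5000), rng.normal(size=5000)
c_dep = chi_hat(z, z)   # comonotone pair: exactly 1
c_ind = chi_hat(z, w)   # independent pair: close to 1 - u = 0.05
print(c_dep, c_ind)
```

In practice χ(u) is examined over a grid of u values approaching 1, since the finite-u estimate mixes tail and bulk behaviour.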
32

Managing the extremes : An application of extreme value theory to financial risk management

Strömqvist, Zakris, Petersen, Jesper January 2016
Using traditional parametric approaches based on GARCH and EGARCH to model the conditional volatility, we calculate univariate one-day-ahead predictions of Value-at-Risk (VaR) under varying distributional assumptions. The accuracy of these predictions is then compared to that of a semiparametric approach based on results from extreme value theory, and we find that the semiparametric approach yields more accurate VaR predictions. For the 95% VaR, the EGARCH's ability to incorporate the asymmetric behaviour of return volatility proves most useful. For higher quantiles, however, we show that what matters most for predictive accuracy is the distributional assumption on the innovations, where the normal distribution falls behind distributions that allow for thicker tails. Both the semiparametric approach and the conditional volatility models based on the t-distribution outperform the normal, especially at higher quantiles. As for the comparison between the semiparametric approach and the conditional volatility models with t-distributed innovations, the results are mixed. However, the evidence indicates that there certainly is a place for extreme value theory in financial risk management.
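The EVT side of such a comparison typically rests on a peaks-over-threshold fit of the generalized Pareto distribution. Below is a hedged sketch of an unconditional POT VaR estimate; the threshold quantile, seed and simulated "returns" are illustrative, and the GARCH filtering step used in conditional approaches is omitted:

```python
import numpy as np
from scipy.stats import genpareto

def pot_var(losses, p=0.99, threshold_quantile=0.90):
    """Peaks-over-threshold VaR: fit a GPD to excesses over a high
    threshold u, then invert the standard tail formula
    VaR_p = u + (sigma/xi) * ((n/N_u * (1-p))**(-xi) - 1).
    """
    u = np.quantile(losses, threshold_quantile)
    excess = losses[losses > u] - u
    # scipy's shape parameter c corresponds to the EVT shape xi
    xi, _, sigma = genpareto.fit(excess, floc=0.0)
    n, nu = len(losses), len(excess)
    return u + sigma / xi * ((n / nu * (1 - p)) ** (-xi) - 1)

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=20000)   # heavy-tailed pseudo-returns
var99 = pot_var(losses, p=0.99)
print(var99)
```

For t-distributed data with 4 degrees of freedom the true 99% quantile is about 3.75, so the estimate should land in that neighbourhood.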
33

Statistical inference for inequality measures based on semi-parametric estimators

Kpanzou, Tchilabalo Abozou 12 1900
Thesis (PhD)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: Measures of inequality, also used as measures of concentration or diversity, are very popular in economics, especially for measuring inequality in income or wealth within and between populations. However, they have applications in many other fields, e.g. ecology, linguistics, sociology, demography, epidemiology and information science. A large number of measures have been proposed to measure inequality; examples include the Gini index, the generalized entropy, the Atkinson and the quintile share ratio measures. Inequality measures are inherently dependent on the tails of the population (underlying distribution), and their estimators are therefore typically sensitive to data from these tails (nonrobust). For example, income distributions often exhibit a long right tail, leading to the frequent occurrence of large values in samples. Since the usual estimators are based on the empirical distribution function, they are usually nonrobust to such large values. Furthermore, heavy-tailed distributions often occur in real-life data sets, so remedial action needs to be taken in such cases. The remedial action can be either a trimming of the extreme data or a modification of the (traditional) estimator to make it more robust to extreme observations. In this thesis we follow the second option, modifying the traditional empirical distribution function as estimator to make it more robust. Using results from extreme value theory, we develop more reliable distribution estimators in a semi-parametric setting. These new estimators of the distribution then form the basis for more robust estimators of the measures of inequality. These estimators are developed for the four most popular classes of measures, viz. Gini, generalized entropy, Atkinson and quintile share ratio. Properties of such estimators are studied especially via simulation.
Using limiting distribution theory and the bootstrap methodology, approximate confidence intervals were derived. Through the various simulation studies, the proposed estimators are compared to the standard ones in terms of mean squared error, relative impact of contamination, confidence interval length and coverage probability. In these studies the semi-parametric methods show a clear improvement over the standard ones. The theoretical properties of the quintile share ratio have not been studied much. Consequently, we also derive its influence function as well as the limiting normal distribution of its nonparametric estimator. These results have not previously been published. In order to illustrate the methods developed, we apply them to a number of real life data sets. Using such data sets, we show how the methods can be used in practice for inference. In order to choose between the candidate parametric distributions, use is made of a measure of sample representativeness from the literature. These illustrations show that the proposed methods can be used to reach satisfactory conclusions in real life problems.
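For reference, the standard empirical Gini estimator that such robust proposals are measured against can be computed directly from the sorted sample. A minimal sketch (this is the classical nonrobust estimator, not the thesis's semi-parametric one):

```python
import numpy as np

def gini(x):
    """Empirical Gini index via the sorted-data formula
    G = 2 * sum(i * x_(i)) / (n * sum(x)) - (n + 1) / n,
    where x_(i) are the order statistics (ascending)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    return 2.0 * np.sum(i * x) / (n * np.sum(x)) - (n + 1) / n

# Equal incomes -> perfect equality, G = 0
print(gini([5, 5, 5, 5]))      # 0.0
# One person holds everything -> G approaches 1 as n grows
print(gini([0, 0, 0, 100]))    # 0.75
```

Because each observation enters the sum with a rank weight, a single very large income moves the estimate substantially, which is exactly the sensitivity the thesis addresses.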
34

Improved estimation procedures for a positive extreme value index

Berning, Thomas Louw 12 1900
Thesis (PhD (Statistics))--University of Stellenbosch, 2010. / ENGLISH ABSTRACT: In extreme value theory (EVT) the emphasis is on extreme (very small or very large) observations. The crucial parameter when making inferences about extreme quantiles, is called the extreme value index (EVI). This thesis concentrates on only the right tail of the underlying distribution (extremely large observations), and specifically situations where the EVI is assumed to be positive. A positive EVI indicates that the underlying distribution of the data has a heavy right tail, as is the case with, for example, insurance claims data. There are numerous areas of application of EVT, since there are a vast number of situations in which one would be interested in predicting extreme events accurately. Accurate prediction requires accurate estimation of the EVI, which has received ample attention in the literature from a theoretical as well as practical point of view. Countless estimators of the EVI exist in the literature, but the practitioner has little information on how these estimators compare. An extensive simulation study was designed and conducted to compare the performance of a wide range of estimators, over a wide range of sample sizes and distributions. A new procedure for the estimation of a positive EVI was developed, based on fitting the perturbed Pareto distribution (PPD) to observations above a threshold, using Bayesian methodology. Attention was also given to the development of a threshold selection technique. One of the major contributions of this thesis is a measure which quantifies the stability (or rather instability) of estimates across a range of thresholds. This measure can be used to objectively obtain the range of thresholds over which the estimates are most stable. It is this measure which is used for the purpose of threshold selection for the proposed PPD estimator. 
A case study of five insurance claims data sets illustrates how data sets can be analyzed in practice. It is shown to what extent discretion can/should be applied, as well as how different estimators can be used in a complementary fashion to give more insight into the nature of the data and the extreme tail of the underlying distribution. The analysis is carried out from the point of raw data, to the construction of tables which can be used directly to gauge the risk of the insurance portfolio over a given time frame.
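The threshold-sensitivity problem described in this entry can be illustrated with the classical Hill estimator of a positive EVI, computed across several choices of k (the number of top order statistics). This sketch uses simulated Pareto data and is not the PPD-based procedure the thesis proposes:

```python
import numpy as np

def hill(x, k):
    """Hill estimator of a positive extreme value index:
    H_k = (1/k) * sum_{i=1}^{k} log(X_(n-i+1) / X_(n-k))."""
    xs = np.sort(x)[::-1]                 # descending order statistics
    return np.mean(np.log(xs[:k] / xs[k]))

rng = np.random.default_rng(2)
# Classical Pareto sample with tail index alpha = 2, so true EVI = 0.5
x = rng.pareto(2.0, size=10000) + 1.0
hs = {k: hill(x, k) for k in (50, 200, 1000)}
print(hs)   # estimates vary with k but hover around 0.5
```

Plotting such estimates against k (a "Hill plot") and looking for a stable region is the informal version of the stability measure the thesis formalizes.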
35

Empirical Bayes estimation of the extreme value index in an ANOVA setting

Jordaan, Aletta Gertruida 04 1900
Thesis (MComm)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Extreme value theory (EVT) involves the development of statistical models and techniques in order to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature. However, these estimators are only applicable in the single sample setting. The aim of this study is to obtain an improved estimator of the EVI that is applicable to an ANOVA setting. An ANOVA setting lends itself naturally to empirical Bayes (EB) estimators, which are the main estimators under consideration in this study. EB estimators have not received much attention in the literature. The study begins with a literature study, covering the areas of application of EVT, Bayesian theory and EB theory. Different estimation methods of the EVI are discussed, focusing also on possible methods of determining the optimal threshold. Specifically, two adaptive methods of threshold selection are considered. A simulation study is carried out to compare the performance of different estimation methods, applied only in the single sample setting. First order and second order estimation methods are considered. In the case of second order estimation, possible methods of estimating the second order parameter are also explored. With regard to obtaining an estimator that is applicable to an ANOVA setting, a first order EB estimator and a second order EB estimator of the EVI are derived. A case study of five insurance claims portfolios is used to examine whether the two EB estimators improve the accuracy of estimating the EVI, when compared to viewing the portfolios in isolation. The results showed that the first order EB estimator performed better than the Hill estimator.
However, the second order EB estimator did not perform better than the “benchmark” second order estimator, namely fitting the perturbed Pareto distribution to all observations above a pre-determined threshold by means of maximum likelihood estimation.
36

Extreme value theory : from a financial risk management perspective

Baldwin, Sheena 03 1900
Thesis (MBA)--Stellenbosch University, 2004. / ENGLISH ABSTRACT: Risk managers and regulators are primarily concerned with ensuring that there is sufficient capital to withstand the effects of adverse movements in market prices. The accurate prediction of the maximum amount that a financial institution can expect to lose over a specified period is essential to guard against catastrophic losses that can threaten the viability of an individual firm or the stability of entire markets. Value-at-risk (VaR) is a quantile-based measure of risk that is widely used for calculating the capital adequacy requirements of banks and other financial institutions. However, the current models for price risk tend to underestimate the risk of catastrophic losses because the entire return distribution is used to calculate the value-at-risk. By contrast, Extreme Value Theory uses only the largest observations to model the tails of a distribution, which should provide a better fit for estimates of extreme quantiles and probabilities. The semi-parametric Hill (1975) estimator has often been used to fit the tails of financial returns, but its performance is heavily dependent on the number k_n of order statistics used in the estimation process, and the estimator can be very biased if this choice is suboptimal. Since k_n depends on unknown properties of the tail, it has to be estimated from the sample. The first truly data-driven method for choosing an optimal number of order statistics adaptively was introduced by Beirlant, Dierckx, Goegebeur and Matthys (1999) and modified by Beirlant, Dierckx and Starica (2000) and by Matthys and Beirlant (2000b). Their methods are based on an exponential regression model developed independently by Beirlant et al. (1999) and Feuerverger and Hall (1999) to reduce the bias found in the Hill estimator.
The reduced bias of these adaptive estimators and the associated estimator for extreme quantiles developed by Matthys and Beirlant (2000b) makes these estimators attractive from a risk management point of view, but more work needs to be done on characterising their finite sample properties before they can be used in practice. In particular, it is crucially important to establish the smallest sample size that will yield reliable estimates of extreme quantiles and probabilities and to determine the widths and coverage probabilities of confidence intervals. This study project reviews the probability and statistical theory of univariate Extreme Value Theory from a financial risk management perspective. It is clear from a survey of the literature that the most worthwhile direction to pursue in terms of practical research will be intimately connected with developments in the fast-moving field of EVT, with a future emphasis not only on fully evaluating the existing models, but on creating even less biased and more precise models. Keywords and phrases: Extreme value index, Pareto-type distributions, maximum likelihood estimation, bias reduction, exponential regression model, market risk.
37

Fitting extreme value distributions to the Zambezi river flood water levels recorded at Katima Mulilo in Namibia.

Kamwi, Innocent Silibelo January 2005
The aim of this research project was to estimate parameters for the distribution of annual maximum flood levels of the Zambezi River at Katima Mulilo, using the maximum likelihood method. The study explored the Zambezi's annual maximum flood heights at Katima Mulilo by fitting the Gumbel, Weibull and generalized extreme value distributions and evaluating their goodness of fit.
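A maximum likelihood fit of the Gumbel and GEV distributions of the kind described can be sketched with scipy.stats. The simulated "annual maxima" below merely stand in for the Katima Mulilo series, which is not reproduced here:

```python
import numpy as np
from scipy.stats import genextreme, gumbel_r

# Stand-in for 60 years of annual maximum flood levels (illustrative data)
annual_max = gumbel_r.rvs(loc=5.0, scale=1.2, size=60, random_state=4)

# Gumbel fit (GEV with shape fixed at 0) by maximum likelihood
g_loc, g_scale = gumbel_r.fit(annual_max)

# Full GEV fit; note scipy's shape parameter c equals minus the EVT shape xi
c, loc, scale = genextreme.fit(annual_max)

# 100-year return level from the fitted GEV: the quantile at 1 - 1/100
rl = genextreme.ppf(1 - 1 / 100, c, loc, scale)
print(g_loc, g_scale, rl)
```

Comparing the two fits (e.g. via a likelihood ratio test on the shape parameter, or a goodness-of-fit statistic) is the natural next step, mirroring the evaluation described in the abstract.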
38

Bivariate extreme value analysis of commodity prices

Joyce, Matthew 21 April 2017
The crude oil, natural gas, and electricity markets are among the most widely traded and talked about commodity markets across the world. Over the past two decades each commodity has seen price volatility due to political, economic, social, and technological reasons. With that comes a significant amount of risk that both corporations and governments must account for to ensure expected cash flows and to minimize losses. This thesis analyzes the portfolio risk of the major US commodity hubs for crude oil, natural gas and electricity by applying Extreme Value Theory to historical daily price returns between 2003 and 2013. The risk measures used are Value-at-Risk and Expected Shortfall, estimated by fitting the Generalized Pareto Distribution to the data using the peaks-over-threshold method. We consider both the univariate and bivariate cases in order to determine the effects that price shocks within and across commodities will have in a mixed portfolio. The results show that electricity is the most volatile, and therefore most risky, commodity of the three markets considered for both positive and negative returns. In addition, we find that the univariate and bivariate results are statistically indistinguishable, leading to the conclusion that for the three markets analyzed during this period, price shocks in one commodity do not directly impact the volatility of another commodity's price.
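The VaR and Expected Shortfall pair from a peaks-over-threshold GPD fit can be sketched as follows; the threshold quantile, seed and simulated return series are illustrative, not the commodity data used in the thesis:

```python
import numpy as np
from scipy.stats import genpareto

def pot_var_es(losses, p=0.99, threshold_quantile=0.90):
    """GPD-based VaR and ES via peaks-over-threshold. For shape xi < 1,
    ES_p = VaR_p / (1 - xi) + (sigma - xi * u) / (1 - xi)."""
    u = np.quantile(losses, threshold_quantile)
    exc = losses[losses > u] - u
    xi, _, sigma = genpareto.fit(exc, floc=0.0)
    n, nu = len(losses), len(exc)
    var_p = u + sigma / xi * ((n / nu * (1 - p)) ** (-xi) - 1)
    es_p = var_p / (1 - xi) + (sigma - xi * u) / (1 - xi)
    return var_p, es_p

rng = np.random.default_rng(5)
returns = rng.standard_t(df=3, size=50000)   # heavy-tailed pseudo-returns
var99, es99 = pot_var_es(returns)
print(var99, es99)   # ES exceeds VaR at the same level
```

ES always exceeds VaR at the same level here because it averages losses beyond the VaR quantile, which is why it is the preferred measure for tail risk in a mixed portfolio.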
39

How Low Can You Go? : Quantitative Risk Measures in Commodity Markets

Forsgren, Johan January 2016
The volatility-model approach to forecasting Value at Risk is complemented with modelling of Expected Shortfall using an extreme value approach. Using three models from the GARCH family (GARCH, EGARCH and GJR-GARCH) and assuming two conditional distributions, the Gaussian (normal) and Student's t, to make predictions of VaR, the forecasts are used as a threshold for assigning losses to the distribution tail. The Expected Shortfalls are estimated assuming that the violations of VaR follow the Generalized Pareto distribution, and the estimates are evaluated. The results indicate that the most efficient model for making predictions of VaR is the asymmetric GJR-GARCH, and that assuming the t distribution generates conservative forecasts. In conclusion, there is evidence that the commodities are characterized by asymmetry and conditional normality. Since no comparison is made, the EVT approach cannot be deemed either superior or inferior to standard approaches to Expected Shortfall modelling, although the data intensity of the method suggests that a standard approach may be preferable.
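The conditional-volatility side of this setup rests on the GARCH(1,1) variance recursion. A minimal sketch with illustrative parameter values (not estimates from the thesis's commodity data), combined with a one-day-ahead Gaussian 95% VaR:

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    sigma2 = np.empty(len(r))
    sigma2[0] = np.var(r)                 # common initialization choice
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(7)
r = rng.normal(scale=0.01, size=1000)     # illustrative daily returns
s2 = garch11_filter(r, omega=1e-6, alpha=0.05, beta=0.90)

# One-day-ahead forecast and Gaussian 95% VaR: VaR = z_{0.95} * sigma_{t+1}
sigma_next = np.sqrt(1e-6 + 0.05 * r[-1] ** 2 + 0.90 * s2[-1])
var95 = 1.645 * sigma_next
print(var95)
```

In the semiparametric variant described above, the Gaussian quantile step is replaced by a GPD fit to the standardized residuals' tail, keeping the same volatility filter.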
40

Order-statistics-based inferences for censored lifetime data and financial risk analysis

Sheng, Zhuo January 2013
This thesis focuses on applying order-statistics-based inferences to lifetime analysis and financial risk measurement. The first problem arises from fitting the Weibull distribution to progressively censored and accelerated life-test data. A new order-statistics-based inference is proposed for both parameter and confidence interval estimation. The second problem can be summarised as adopting the inference used in the first problem for fitting the generalised Pareto distribution, especially when the sample size is small. With some modifications, the proposed inference is compared with classical methods and several relatively new methods from the recent literature. The third problem studies a distribution-free approach for forecasting financial volatility, which is essentially the standard deviation of financial returns. Classical models of this approach use the interval between two symmetric extreme quantiles of the return distribution as a proxy for volatility. Two new models are proposed, which use intervals of expected shortfalls and of expectiles instead of intervals of quantiles. The models are compared using empirical stock index data. Finally, attention is drawn to heteroskedastic quantile regression. The proposed joint modelling approach, which makes use of the parametric link between quantile regression and the asymmetric Laplace distribution, provides simultaneous estimation of the regression quantile and of the log-linear heteroskedastic scale. Furthermore, the use of the expectation of the check function as a measure of quantile deviation is discussed.
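The check function mentioned in the final problem is the loss whose expected value is minimized by the corresponding quantile, which is precisely the parametric link to the asymmetric Laplace likelihood. A small sketch verifying this numerically for the 0.9-quantile of simulated data (illustrative, not the thesis's joint model):

```python
import numpy as np

def check_loss(u, tau):
    """Quantile-regression check function rho_tau(u) = u * (tau - 1{u < 0}).
    Minimizing E[rho_tau(Y - c)] over c recovers the tau-quantile of Y."""
    return u * (tau - (u < 0))

rng = np.random.default_rng(6)
y = rng.normal(size=100000)
tau = 0.9

# Brute-force minimization over a grid of candidate quantile values c
grid = np.linspace(-3, 3, 601)
losses = [np.mean(check_loss(y - c, tau)) for c in grid]
c_hat = grid[int(np.argmin(losses))]
print(c_hat)   # close to the 0.9-quantile of N(0,1), about 1.28
```

Replacing the constant c with a linear predictor x'β turns this into quantile regression; exponentiating the asymmetric Laplace likelihood of the same loss yields the joint estimation route the abstract describes.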
