51

Modelling heavy rainfall over time and space

Khuluse, Sibusisiwe Audrey 06 June 2011 (has links)
Extreme Value Theory finds application in problems concerning low probability but high consequence events. In hydrology the study of heavy rainfall is important in regional flood risk assessment. In particular, the N-year return level is a key output of an extreme value analysis, hence care needs to be taken to ensure that the model is accurate and that the level of imprecision in the parameter estimates is made explicit. Rainfall is a process that evolves over time and space. Therefore, it is anticipated that at extreme levels the process would continue to show temporal and spatial correlation. In this study interest is in whether any trends in heavy rainfall can be detected for the Western Cape. The focus is on obtaining the 50-year daily winter rainfall return level and investigating whether this quantity is homogeneous over the study area. The study is carried out in two stages. In the first stage, the point process approach to extreme value theory is applied to arrive at return level estimates at each of the fifteen sites. Stationarity is assumed for the series at each station, thus an issue to deal with is that of short-range temporal correlation of threshold exceedances. The proportion of exceedances is found to be smaller (approximately 0.01) for stations towards the east such as Jonkersberg, Plettenbergbay and Tygerhoek. This can be attributed to rainfall values being mostly low, with few instances where large amounts of rainfall were observed. Looking at the parameters of the point process extreme value model, the location parameter estimate appears stable over the region, in contrast to the scale parameter estimate, which increases in a south-easterly direction. While the model is shown to fit the exceedances at each station adequately, the degree of uncertainty is large for stations such as Tygerhoek, where the maximum observed rainfall value is approximately twice as large as the other high rainfall values. This situation was also observed at other stations, and in such cases removal of these high rainfall values was avoided to minimize the risk of obtaining inaccurate return level estimates. The key result is an N-year rainfall return level estimate at each site. Interest is in mapping an estimate of the 50-year daily winter rainfall return level; however, to evaluate the adequacy of the model at each site the 25-year return level is considered, since a 25-year return period is well within the range of the observed data. The 25-year daily winter rainfall return level estimate for Ladismith is the smallest at 22.42 mm. This can be attributed to the station's generally low observed winter rainfall values. In contrast, the return level estimate for Tygerhoek is high, almost six times larger than that of Ladismith at 119.16 mm. Visually the design values show differences between sites, therefore it is of interest to investigate whether these differences can be modelled. The second stage is the geostatistical analysis of the 50-year 24-hour rainfall return level. The aim here is to quantify the degree of spatial variation in the 50-year 24-hour rainfall return level estimates and to use that association to predict values at unobserved sites within the study region. A tool for quantifying spatial variation is the variogram model. Estimation of the parameters of this model requires a sufficiently large sample, which is a challenge in this study since there are only fifteen stations and therefore only fifteen observations for the geostatistical analysis. 
To address this challenge, observations are expanded in space and time and then standardized to create a larger pool of data from which the variogram is estimated. The obtained estimates are used in ordinary and universal kriging to derive the 50-year 24-hour winter rainfall return level maps. It is shown that the 50-year daily winter design rainfall over most of the Western Cape lies between 40 mm and 80 mm, but rises sharply as one moves towards the east coast of the region. This is largely due to the influence of the large design values obtained for Tygerhoek. In ordinary kriging, prediction uncertainty is lowest around observed values and grows with distance from these points. Overall, the prediction uncertainty maps show that ordinary kriging performs better than universal kriging, where a linear regional trend in design values is included.
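For context, the N-year return level in a threshold-exceedance analysis has a standard closed form, sketched below with a generalized Pareto fit to the excesses. This is a minimal illustration on toy gamma-distributed daily winter "rainfall", not the thesis's point process fit to the fifteen station records; the point process model is, however, equivalent to the GPD-plus-exceedance-rate parameterization used here.

```python
import numpy as np
from scipy import stats

def return_level(exceedances, threshold, n_obs, n_years, N):
    """N-year return level from a GPD fit to threshold excesses (sketch)."""
    xi, _, sigma = stats.genpareto.fit(exceedances - threshold, floc=0)
    zeta = len(exceedances) / n_obs      # empirical P(X > threshold)
    m = N * (n_obs / n_years)            # number of observations in N years
    if abs(xi) > 1e-6:
        return threshold + (sigma / xi) * ((m * zeta) ** xi - 1)
    return threshold + sigma * np.log(m * zeta)  # xi = 0 (Gumbel) limit

# Toy data: 40 "winters" of 92 daily rainfall values (mm)
rng = np.random.default_rng(0)
rain = rng.gamma(shape=0.4, scale=8.0, size=40 * 92)
u = np.quantile(rain, 0.95)              # stand-in for a chosen threshold
print(return_level(rain[rain > u], u, rain.size, n_years=40, N=50))
```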
52

Microstructure-sensitive extreme value probabilities of fatigue in advanced engineering alloys

Przybyla, Craig Paul 07 July 2010 (has links)
A novel microstructure-sensitive extreme value probabilistic framework is introduced to evaluate material performance/variability for damage evolution processes (e.g., fatigue, fracture, creep). This framework employs newly developed extreme value marked correlation functions (EVMCF) to identify the coupled microstructure attributes (e.g., phase/grain size, grain orientation, grain misorientation) that have the greatest statistical relevance to the extreme value response variables (e.g., stress, elastic/plastic strain) that describe the damage evolution processes of interest. This improves on previous approaches, which characterized the extreme value response variables of the damage evolution process of interest using the extreme value distributions of a single microstructure attribute only, giving no consideration to how coupled microstructure attributes affect the distributions of the extreme value response. This framework also utilizes computational modeling techniques to identify correlations between microstructure attributes that significantly raise or lower the magnitudes of the damage response variables of interest, through the simulation of multiple statistical volume elements (SVEs). Each SVE for a given response is constructed to be a statistical sample of the entire microstructure ensemble (i.e., bulk material); therefore, the response of interest in each SVE is not expected to be the same. This is in contrast to computational simulation of a single representative volume element (RVE), which is often untenably large for response variables that depend on extreme value microstructure attributes. This framework has been demonstrated in the context of characterizing microstructure-sensitive high cycle fatigue (HCF) variability due to the processes of fatigue crack formation (nucleation and microstructurally small crack growth) in polycrystalline metallic alloys. Specifically, the framework is exercised to estimate the local driving forces for fatigue crack formation, to validate these with limited existing experiments, and to explore how the extreme value probabilities of certain fatigue indicator parameters (FIPs) affect overall variability in fatigue life in the HCF regime. Various FIPs have been introduced and used previously as a means to quantify the potential for fatigue crack formation based on experimentally observed mechanisms. Distributions of the extreme value FIPs are calculated for multiple SVEs simulated via the FEM with crystal plasticity constitutive relations. By using crystal plasticity relations, the FIPs can be computed based on the cyclic plastic strain on the scale of the individual grains. These simulated SVEs are instantiated such that they are statistically similar to real microstructures in terms of the crystallographic microstructure attributes that are hypothesized to have the most influence on the extreme value HCF response. The polycrystalline alloys considered here include the Ni-base superalloy IN100 and the Ti alloy Ti-6Al-4V. In applying this framework to study the microstructure-dependent variability of HCF in these alloys, the extreme value distributions of the FIPs and associated extreme value marked correlations of crystallographic microstructure attributes are characterized. This information can then be used to rank order multiple variants of the microstructure for a specific material system for relative HCF performance, or to design new microstructures hypothesized to exhibit improved performance. 
This framework makes it possible to limit the (presently) large number of experiments required to characterize scatter in HCF, and it lends quantitative support to designing improved, fatigue-resistant materials and to accelerating the insertion of modified and new materials into service.
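As a hedged illustration of the statistical machinery in this abstract, the sketch below fits a generalized extreme value distribution to the largest fatigue indicator parameter from each statistical volume element. The FIP values are synthetic stand-ins (Gumbel-distributed toys), not outputs of the crystal plasticity simulations described in the thesis, and the critical level of 3.5e-3 is an assumed number.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in: the maximum fatigue indicator parameter (FIP) from
# each simulated statistical volume element (SVE). Real values would come
# from crystal plasticity FEM simulations, not from this toy generator.
rng = np.random.default_rng(1)
max_fips = rng.gumbel(loc=2.0e-3, scale=4.0e-4, size=100)  # one per SVE

# Fit a generalized extreme value distribution to the per-SVE extremes.
shape, loc, scale = stats.genextreme.fit(max_fips)

# Probability that the extreme FIP in a new SVE exceeds an assumed
# critical level associated with fatigue crack formation.
p_exceed = stats.genextreme.sf(3.5e-3, shape, loc, scale)
print(f"P(max FIP > 3.5e-3) = {p_exceed:.3f}")
```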
53

Využití teorie extrémních hodnot při řízení operačních rizik / Extreme Value Theory in Operational Risk Management

Vojtěch, Jan January 2009 (has links)
Currently, financial institutions are supposed to analyze and quantify a new type of banking risk, known as operational risk. Financial institutions are exposed to this risk in their everyday activities. The main objective of this work is to construct an acceptable statistical model for computing the capital requirement. Such a model must respect the specific character of losses arising from operational risk events. The fundamental task is to find a suitable distribution that describes the probabilistic behaviour of losses arising from this type of risk. Strong use is made of the Pickands-Balkema-de Haan theorem from extreme value theory: roughly speaking, the distribution of a random variable exceeding a given high threshold converges in distribution to the generalized Pareto distribution. The theorem is subsequently used in estimating a high percentile from a simulated distribution. The simulated distribution is a compound model for the aggregate loss random variable, constructed as a combination of a frequency distribution for the number of losses and a so-called severity distribution for the individual loss. The proposed model is then used to estimate a final quantile, which represents the required amount of capital. This capital requirement is the amount of funds the bank is supposed to retain in order to make up for a projected lack of funds; there is a given, commonly quite small, probability that the capital charge will be exceeded. Although a combination of some frequency distribution and some severity distribution is the common way to deal with this problem, the final application is often problematic. Typically, the severity distribution is a combination of two or three distributions, for instance lognormal distributions with different location and scale parameters. Models like these usually lack any theoretical background, and in particular the connecting of the distribution functions is often not conducted properly. This work deals with both problems. In addition, maximum likelihood estimates are derived for a lognormal distribution for which F_LN(u) = p holds, where u and p are given. The results achieved can be used in the everyday practice of financial institutions for operational risk quantification. They can also be used for the analysis of a variety of sample data with so-called heavy tails, where standard distributions offer no help. As an integral part of this work, a CD with the source code of each function used in the model is included. All of these functions were created in the S-PLUS statistical programming environment. The fourth annex gives a complete description of each function, its purpose, and the general syntax for possible use in solving other kinds of problems.
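A minimal sketch of the compound model described above follows, assuming hypothetical parameter values throughout: Poisson loss frequency, a lognormal severity body spliced to a generalized Pareto tail at the threshold u (the body/tail split probability p plays the role of the thesis's condition F_LN(u) = p, imposed here simply by construction of the splice), and a Monte Carlo estimate of the high quantile that plays the role of the capital requirement. The thesis's own functions were written in S-PLUS; Python is used here purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical parameters (illustrative, not the thesis's fitted values)
lam = 25.0                  # Poisson frequency: losses per year
u, p = 1.0e6, 0.95          # splice threshold and body weight
mu, sigma = 12.0, 1.4       # lognormal severity body
xi, beta = 0.6, 8.0e5       # GPD severity tail (Pickands-Balkema-de Haan)

def draw_severity(n):
    """Spliced severity: lognormal below u (weight p), GPD excess above."""
    v = rng.uniform(size=n)
    body = v < p
    out = np.empty(n)
    f_u = stats.lognorm.cdf(u, sigma, scale=np.exp(mu))
    # Body: lognormal draws conditioned on falling below u (inverse CDF)
    out[body] = stats.lognorm.ppf(v[body] / p * f_u, sigma, scale=np.exp(mu))
    # Tail: GPD excesses over the threshold u
    out[~body] = u + stats.genpareto.ppf((v[~body] - p) / (1 - p), xi, scale=beta)
    return out

# Simulate the aggregate annual loss and read off a high quantile,
# which plays the role of the capital requirement.
agg = np.array([draw_severity(rng.poisson(lam)).sum() for _ in range(50_000)])
print("99.9% capital charge:", np.quantile(agg, 0.999))
```

A splice like this need not be continuous in density at u; making the connection of the distribution functions proper is exactly one of the issues the thesis addresses.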
54

Neparametrické metody odhadu parametrů rozdělení extrémního typu / Non-parametric estimation of parameters of extreme value distribution

Blachut, Vít January 2013 (has links)
This diploma thesis is concerned with extreme value distributions. The first part formulates and proves the limit theorem for the distribution of the maximum. Basic properties of the class of extreme value distributions are then described. The key topic of the thesis is non-parametric estimation of the extreme value index. The Hill and moment estimators are derived first; based on results from mathematical analysis, an alternative choice of the optimal sample fraction using a bootstrap-based method is suggested for them. The estimators of the extreme value index are compared in simulations from suitably chosen distributions that are close to the distribution of a given rainfall data series. A suitable estimator and a choice of the optimal sample fraction are recommended for this time series; choosing the sample fraction is among the most difficult tasks in extreme value theory.
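For reference, the Hill estimator discussed above has a one-line form, and its strong dependence on the sample fraction k/n (the problem the bootstrap-based choice addresses) is easy to demonstrate. A minimal sketch on toy Pareto data with true extreme value index 0.5, not the thesis's rainfall series:

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the extreme value index from the k largest values."""
    xs = np.sort(x)[::-1]                      # descending order statistics
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

# Toy heavy-tailed sample: Pareto with tail index 2, so the true EVI is 0.5
rng = np.random.default_rng(3)
x = rng.pareto(2.0, size=2000) + 1.0

# The estimate varies strongly with the sample fraction k/n, which is why
# the choice of the optimal k is treated with a bootstrap-based method.
for k in (25, 50, 100, 200, 400):
    print(k, round(hill(x, k), 3))
```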
55

Metody odhadu parametrů rozdělení extrémního typu s aplikacemi / Extreme Value Distribution Parameter Estimation and its Application

Holešovský, Jan January 2016 (has links)
The thesis is focused on extreme value theory and its applications. Initially, the extreme value distribution is introduced and its properties are discussed. On this basis, the two models most commonly used for extreme value analysis are described, i.e. the block maxima model and the Pareto-distribution threshold model. The first has the advantage of robustness, but recently the threshold model has been mostly preferred. Although the threshold choice strongly affects the estimation quality of the model, optimal threshold selection remains an unsolved issue of this approach. The thesis therefore focuses on techniques for proper threshold identification, mainly on adaptive methods suitable for use in practice. For this purpose a simulation study was performed, and the acquired knowledge was applied to the analysis of precipitation records from the South Moravian region. Further, the thesis deals with extreme value estimation within a stationary-series framework. Usually, an observed time series needs to be declustered to obtain approximately independent observations. Advanced theory for stationary series makes it possible to avoid the entire separation procedure. In this context the commonly applied separation techniques turn out to be quite inappropriate in most cases, and estimates based on the theory of stationary series achieve better precision.
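Threshold identification, the central difficulty named above, is often supported by the sample mean-excess function, which is approximately linear above a threshold where the generalized Pareto approximation of the excesses holds. A small sketch, assuming toy exponential data (for which the mean excess is constant), not the South Moravian precipitation records:

```python
import numpy as np

def mean_excess(x, thresholds):
    """Sample mean-excess function; roughly linear in u where a
    generalized Pareto approximation of the excesses is adequate."""
    return [np.mean(x[x > u] - u) for u in thresholds]

# Toy exponential data: the theoretical mean excess is constant (= 5)
rng = np.random.default_rng(4)
x = rng.exponential(scale=5.0, size=5000)
us = np.quantile(x, [0.80, 0.85, 0.90, 0.95, 0.98])
for u, me in zip(us, mean_excess(x, us)):
    print(f"u = {u:6.2f}   mean excess = {me:5.2f}")
```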
56

Fitting extreme value distributions to the Zambezi River flood water levels recorded at Katima Mulilo in Namibia (1965-2003)

Kamwi, Innocent Silibelo January 2005 (has links)
Magister Scientiae - MSc / This study sought to identify and fit the appropriate extreme value distribution to flood data using the method of maximum likelihood, to examine the uncertainty of the estimated parameters, and to evaluate the goodness of fit of the identified model. The study revealed that the three-parameter Weibull and the generalised extreme value (GEV) distributions fit the data very well. Standard errors for the estimated parameters were calculated from the empirical information matrix. An upper limit to the flood levels followed from the fitted distribution.
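For illustration, a maximum likelihood GEV fit of the kind described can be done as below. The annual maxima are simulated stand-ins, not the Katima Mulilo record; note that for a fitted shape of the bounded (Weibull) type the distribution has a finite upper endpoint, which is the kind of upper limit to flood levels the study reports.

```python
import numpy as np
from scipy import stats

# Simulated stand-in for 39 annual maximum flood levels (m); the study
# itself uses the Zambezi River record at Katima Mulilo, 1965-2003.
rng = np.random.default_rng(5)
annual_max = stats.genextreme.rvs(c=0.25, loc=5.5, scale=0.8,
                                  size=39, random_state=rng)

# Maximum likelihood fit of the GEV (scipy's shape c equals -xi)
c, loc, scale = stats.genextreme.fit(annual_max)

# For c > 0 the fitted GEV is of the bounded (Weibull) type and has a
# finite upper endpoint: an upper limit to the flood levels.
if c > 0:
    print("upper limit:", loc + scale / c)

# 100-year flood level: the 0.99 quantile of the annual-maximum law
print("100-year level:", stats.genextreme.ppf(0.99, c, loc, scale))
```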
57

Managing the extremes : An application of extreme value theory to financial risk management

Strömqvist, Zakris, Petersen, Jesper January 2016 (has links)
We compare the traditional GARCH models with a semiparametric approach based on extreme value theory and find that the semiparametric approach yields more accurate predictions of Value-at-Risk (VaR). Using traditional parametric approaches based on GARCH and EGARCH to model the conditional volatility, we calculate univariate one-day ahead predictions of Value-at-Risk (VaR) under varying distributional assumptions. The accuracy of these predictions is then compared to that of a semiparametric approach, based on results from extreme value theory. For the 95% VaR, the EGARCH’s ability to incorporate the asymmetric behaviour of return volatility proves most useful. For higher quantiles, however, we show that what matters most for predictive accuracy is the underlying distributional assumption of the innovations, where the normal distribution falls behind other distributions which allow for thicker tails. Both the semiparametric approach and the conditional volatility models based on the t-distribution outperform the normal, especially at higher quantiles. As for the comparison between the semiparametric approach and the conditional volatility models with t-distributed innovations, the results are mixed. However, the evidence indicates that there certainly is a place for extreme value theory in financial risk management.
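A hedged sketch of the semiparametric step is given below: a generalized Pareto tail is fitted to the standardized residuals of a volatility model, and the tail quantile is rescaled by the one-day-ahead volatility forecast. Fitting the GARCH/EGARCH filter itself is assumed to happen elsewhere; the residuals, the forecast volatility and the tail fraction k are all toy inputs rather than the thesis's data.

```python
import numpy as np
from scipy import stats

def evt_var(std_resid, sigma_next, mu_next=0.0, alpha=0.99, k=100):
    """One-day-ahead VaR: GPD tail fitted to the k largest standardized
    losses, with the tail quantile rescaled by the volatility forecast.
    Sketch only; estimating the GARCH/EGARCH filter is assumed done."""
    losses = np.sort(-std_resid)[::-1]    # standardized losses, descending
    u = losses[k]                         # threshold: (k+1)-th largest loss
    xi, _, beta = stats.genpareto.fit(losses[:k] - u, floc=0)
    n = len(std_resid)
    z_alpha = u + (beta / xi) * (((1 - alpha) / (k / n)) ** (-xi) - 1)
    return mu_next + sigma_next * z_alpha

# Toy inputs: standardized t(4) residuals and a 1.8% volatility forecast
rng = np.random.default_rng(6)
resid = stats.t.rvs(df=4, size=2500, random_state=rng) / np.sqrt(2.0)
print("99% VaR:", evt_var(resid, sigma_next=0.018))
```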
58

An empirical comparison of extreme value modelling procedures for the estimation of high quantiles

Engberg, Alexander January 2016 (has links)
The peaks over threshold (POT) method provides an attractive framework for estimating the risk of extreme events such as severe storms or large insurance claims. However, the conventional POT procedure, where the threshold excesses are modelled by a generalized Pareto distribution, suffers from small samples and subjective threshold selection. In recent years, two alternative approaches have been proposed in the form of mixture models that estimate the threshold and a folding procedure that generates larger tail samples. In this paper the empirical performances of the conventional POT procedure, the folding procedure and a mixture model are compared by modelling data sets on fire insurance claims and hurricane damage costs. The results show that the folding procedure gives smaller standard errors of the parameter estimates and in some cases more stable quantile estimates than the conventional POT procedure. The mixture model estimates are dependent on the starting values in the numerical maximum likelihood estimation, and are therefore difficult to compare with those from the other procedures. The conclusion is that none of the procedures is overall better than the others but that there are situations where one method may be preferred.
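The conventional POT procedure referred to above, together with bootstrap standard errors for the generalized Pareto parameters (the quantity on which the folding procedure is reported to improve), can be sketched as follows. The claim amounts are simulated lognormal toys, not the paper's fire insurance or hurricane data sets, and the 90% threshold is an arbitrary (subjective) choice:

```python
import numpy as np
from scipy import stats

# Toy claim severities; the paper models fire insurance claims and
# hurricane damage costs instead.
rng = np.random.default_rng(7)
claims = stats.lognorm.rvs(s=1.2, scale=2e4, size=1500, random_state=rng)

# Conventional POT: fix a (subjective) threshold, fit a GPD to the excesses
u = np.quantile(claims, 0.90)
excess = claims[claims > u] - u
xi, _, beta = stats.genpareto.fit(excess, floc=0)

# Bootstrap standard errors of the GPD parameter estimates
boot = []
for _ in range(500):
    resample = rng.choice(excess, size=excess.size, replace=True)
    b_xi, _, b_beta = stats.genpareto.fit(resample, floc=0)
    boot.append((b_xi, b_beta))
se_xi, se_beta = np.std(boot, axis=0)
print(f"xi = {xi:.3f} (se {se_xi:.3f}), beta = {beta:.0f} (se {se_beta:.0f})")
```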
59

Statistical inference for inequality measures based on semi-parametric estimators

Kpanzou, Tchilabalo Abozou 12 1900 (has links)
Thesis (PhD)--Stellenbosch University, 2011. / ENGLISH ABSTRACT: Measures of inequality, also used as measures of concentration or diversity, are very popular in economics, especially for measuring the inequality in income or wealth within a population and between populations. However, they have applications in many other fields, e.g. in ecology, linguistics, sociology, demography, epidemiology and information science. A large number of measures have been proposed to measure inequality. Examples include the Gini index, the generalized entropy, the Atkinson and the quintile share ratio measures. Inequality measures are inherently dependent on the tails of the population (underlying distribution) and therefore their estimators are typically sensitive to data from these tails (nonrobust). For example, income distributions often exhibit a long tail to the right, leading to the frequent occurrence of large values in samples. Since the usual estimators are based on the empirical distribution function, they are usually nonrobust to such large values. Furthermore, heavy-tailed distributions often occur in real life data sets; remedial action therefore needs to be taken in such cases. The remedial action can be either a trimming of the extreme data or a modification of the (traditional) estimator to make it more robust to extreme observations. In this thesis we follow the second option, modifying the traditional empirical distribution function as estimator to make it more robust. Using results from extreme value theory, we develop more reliable distribution estimators in a semi-parametric setting. These new estimators of the distribution then form the basis for more robust estimators of the measures of inequality. These estimators are developed for the four most popular classes of measures, viz. Gini, generalized entropy, Atkinson and quintile share ratio. Properties of such estimators are studied, especially via simulation. Using limiting distribution theory and the bootstrap methodology, approximate confidence intervals are derived. Through the various simulation studies, the proposed estimators are compared to the standard ones in terms of mean squared error, relative impact of contamination, confidence interval length and coverage probability. In these studies the semi-parametric methods show a clear improvement over the standard ones. The theoretical properties of the quintile share ratio have not been studied much. Consequently, we also derive its influence function as well as the limiting normal distribution of its nonparametric estimator. These results have not previously been published. In order to illustrate the methods developed, we apply them to a number of real life data sets. Using such data sets, we show how the methods can be used in practice for inference. In order to choose between the candidate parametric distributions, use is made of a measure of sample representativeness from the literature. These illustrations show that the proposed methods can be used to reach satisfactory conclusions in real life problems.
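Two of the inequality measures named above have simple nonparametric estimators, and their nonrobustness to a single extreme observation, the motivation for the semi-parametric estimators of the thesis, is easy to exhibit. A sketch on toy Pareto incomes (the robust, EVT-based versions developed in the thesis are not reproduced here):

```python
import numpy as np

def gini(x):
    """Nonparametric Gini index from the empirical distribution."""
    xs = np.sort(x)
    n = xs.size
    return 2 * np.sum(np.arange(1, n + 1) * xs) / (n * xs.sum()) - (n + 1) / n

def qsr(x):
    """Quintile share ratio: total income of the richest 20% divided by
    the total income of the poorest 20%."""
    q20, q80 = np.quantile(x, [0.2, 0.8])
    return x[x >= q80].sum() / x[x <= q20].sum()

# Heavy-tailed toy incomes; one added extreme value moves both measures,
# illustrating the nonrobustness the semi-parametric estimators address.
rng = np.random.default_rng(8)
income = rng.pareto(2.5, size=10_000) * 3e4
print("Gini:", round(gini(income), 3), " QSR:", round(qsr(income), 2))
income_cont = np.append(income, 5.0e8)      # a single extreme observation
print("Gini:", round(gini(income_cont), 3), " QSR:", round(qsr(income_cont), 2))
```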
60

Empirical Bayes estimation of the extreme value index in an ANOVA setting

Jordaan, Aletta Gertruida 04 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2014. / ENGLISH ABSTRACT: Extreme value theory (EVT) involves the development of statistical models and techniques in order to describe and model extreme events. In order to make inferences about extreme quantiles, it is necessary to estimate the extreme value index (EVI). Numerous estimators of the EVI exist in the literature. However, these estimators are only applicable in the single sample setting. The aim of this study is to obtain an improved estimator of the EVI that is applicable to an ANOVA setting. An ANOVA setting lends itself naturally to empirical Bayes (EB) estimators, which are the main estimators under consideration in this study. EB estimators have not received much attention in the literature. The study begins with a literature study, covering the areas of application of EVT, Bayesian theory and EB theory. Different estimation methods of the EVI are discussed, focusing also on possible methods of determining the optimal threshold. Specifically, two adaptive methods of threshold selection are considered. A simulation study is carried out to compare the performance of different estimation methods, applied only in the single sample setting. First order and second order estimation methods are considered. In the case of second order estimation, possible methods of estimating the second order parameter are also explored. With regards to obtaining an estimator that is applicable to an ANOVA setting, a first order EB estimator and a second order EB estimator of the EVI are derived. A case study of five insurance claims portfolios is used to examine whether the two EB estimators improve the accuracy of estimating the EVI, when compared to viewing the portfolios in isolation. The results showed that the first order EB estimator performed better than the Hill estimator. However, the second order EB estimator did not perform better than the “benchmark” second order estimator, namely fitting the perturbed Pareto distribution to all observations above a pre-determined threshold by means of maximum likelihood estimation.
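As a rough illustration of the idea, the sketch below computes Hill estimates of the EVI for several simulated portfolios and shrinks them toward the pooled mean. The shrinkage weight is a made-up constant; a genuine empirical Bayes estimator, like the ones derived in the thesis, would estimate it from the between- and within-portfolio variation.

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the extreme value index (EVI)."""
    xs = np.sort(x)[::-1]
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

# Five toy "portfolios" with different true EVIs (1/a for Pareto index a)
rng = np.random.default_rng(9)
portfolios = [rng.pareto(a, size=800) + 1.0 for a in (1.5, 2.0, 2.5, 3.0, 4.0)]
hills = np.array([hill(p, k=80) for p in portfolios])

# Schematic EB-style shrinkage toward the pooled mean; the weight w is a
# made-up constant, whereas a genuine EB estimator would derive it from
# the between- and within-portfolio variation.
w = 0.7
shrunk = w * hills + (1 - w) * hills.mean()
for h, s in zip(hills, shrunk):
    print(f"Hill: {h:.3f}  ->  EB-shrunk: {s:.3f}")
```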
