21

Distribuição generalizada de chuvas máximas no Estado do Paraná. / Local and regional frequency analysis by LH-moments and generalized distributions

Pansera, Wagner Alessandro 07 December 2013
The purpose of hydrologic frequency analysis is to relate the magnitude of events to their frequency of occurrence through a probability distribution. The generalized probability distributions (generalized extreme value, generalized logistic and generalized Pareto) can be used to study extreme hydrological events. Several methodologies exist for estimating the parameters of probability distributions; L-moments are often used because of their computational simplicity. The reliability of quantiles with high return periods can be increased by using LH-moments, or higher-order L-moments. L-moments have been widely studied, but the literature on LH-moments is scarce, so further research is needed in this area. Therefore, in this study, LH-moments were studied under two approaches commonly used in hydrology: (i) local frequency analysis (LFA) and (ii) regional frequency analysis (RFA). In addition, a database of 227 rainfall stations (annual daily maxima) in Paraná State, covering 1976 to 2006, was assembled. The LFA was subdivided into two steps: (i) Monte Carlo simulations and (ii) application of the results to the database. The main result of the Monte Carlo simulations was that LH-moments make the 0.99 and 0.995 quantiles less biased. The simulations also supported the development of an algorithm to perform LFA with the generalized distributions. The algorithm was applied to the database and fitted all 227 series studied. In the RFA, the 227 stations were divided into 11 groups and regional growth curves were obtained; local quantiles were then derived from these curves. The difference between the local quantiles obtained via RFA and those obtained via LFA was quantified and can reach approximately 33 mm for a return period of 100 years.
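To make the L-moment machinery in this abstract concrete, the sketch below fits a GEV distribution to a sample of annual maxima using ordinary L-moments (the simplest case of the LH-moment family), via Hosking's probability-weighted moments and his rational approximation for the shape parameter. It is a minimal illustration, not the thesis's LH-moment algorithm; the synthetic rainfall values and the single-station setup are hypothetical.

```python
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments (Hosking, 1990)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                     # mean, L-scale, L-skewness

def gev_from_lmoments(l1, l2, t3):
    """GEV parameters (location xi, scale alpha, shape k) in Hosking's parameterisation."""
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c ** 2           # rational approximation for the shape
    alpha = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 - alpha * (1 - gamma(1 + k)) / k
    return xi, alpha, k

def gev_quantile(p, xi, alpha, k):
    """Quantile of non-exceedance probability p, e.g. p = 0.99 for T = 100 years."""
    return xi + alpha / k * (1 - (-np.log(p)) ** k)

# Usage with a synthetic series standing in for one station's annual daily maxima (mm).
rng = np.random.default_rng(0)
annual_max = 60 + 25 * rng.gumbel(size=31)     # hypothetical values, 1976-2006
params = gev_from_lmoments(*sample_lmoments(annual_max))
print("100-year quantile:", gev_quantile(0.99, *params))
```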
22

Pricing and Modeling Heavy Tailed Reinsurance Treaties - A Pricing Application to Risk XL Contracts / Prissättning och modellering av långsvansade återförsäkringsavtal - En prissättningstillämpning på Risk XL kontrakt

Abdullah Mohamad, Ormia, Westin, Anna January 2023
Estimating the risk of a loss occurring for policyholders is a difficult task in the insurance industry. It is even more difficult to price that risk for reinsurance companies, which insure the primary insurers. Insurance bought by an insurance company, the cedent, from another insurance company, the reinsurer, is called treaty reinsurance, and this type of reinsurance is the main focus of this thesis. A very common risk to insure is fire in municipal and commercial properties, and this is the risk priced here. The thesis evaluates Länsförsäkringar AB's current pricing model, which calculates the risk premium for Risk XL contracts, with the goal of finding areas of improvement for tail risk pricing. The risk premium is commonly calculated with one of three types of pricing models: experience rating, exposure rating and frequency-severity rating. This thesis focuses on frequency-severity pricing, a model that assumes independence between the frequency and the severity of losses and therefore splits the two into separate models; it is very commonly used when pricing Risk XL contracts. The risk premium is calculated using loss data from two insurance companies, one Norwegian and one Finnish. The risk is priced with the help of extreme value theory, mainly using the method of moments to model the frequency of losses and the peaks-over-threshold model to model their severity. To model the frequency of losses with the method of moments, two distributions are compared: the Poisson and the negative binomial distribution. Several distributions can be used to model the severity of losses; to evaluate which one fits best, two goodness-of-fit tests are applied, the Kolmogorov-Smirnov and the Anderson-Darling test. The peaks-over-threshold model is used together with the Pareto distribution. With the help of the Hill estimator, a threshold $u$ is calculated, which governs the tail of the Pareto curve. To estimate the remaining parameters of the generalized Pareto distribution, the maximum likelihood and least squares methods are used. Lastly, the bootstrap method is used to estimate the uncertainty in the price calculated from the estimated parameters. From this, empirical percentiles are computed and used as guidelines for the range within which the risk premium should lie for both data sets to be considered fairly priced.
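As a rough illustration of the frequency-severity idea described above, the sketch below prices a per-risk XL layer by Monte Carlo, with a Poisson frequency for losses above a threshold and a generalized Pareto severity for the excesses, plus a simple bootstrap of the simulated premium. All parameter values, the layer and the threshold are hypothetical; this is not Länsförsäkringar AB's pricing model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical parameters: in practice the frequency would come from the method of
# moments and the severity from a POT fit of the cedents' loss data.
lam = 4.0                  # expected number of losses above the threshold per year
u = 2.0e6                  # POT threshold
xi, sigma = 0.35, 1.5e6    # GPD shape and scale of the excesses over u

# Per-risk XL layer: the reinsurer pays min(max(X - retention, 0), limit) per loss.
retention, limit = 5.0e6, 10.0e6

def simulate_annual_reinsured_loss(n_years=20_000):
    totals = np.empty(n_years)
    counts = rng.poisson(lam, size=n_years)                 # frequency model
    for i, n in enumerate(counts):
        losses = u + stats.genpareto.rvs(xi, scale=sigma, size=n, random_state=rng)
        totals[i] = np.clip(losses - retention, 0.0, limit).sum()   # layer recoveries
    return totals

annual = simulate_annual_reinsured_loss()
print("risk premium (expected annual layer loss):", annual.mean())

# A naive bootstrap of the simulated annual totals, purely to show the resampling
# mechanics; the thesis bootstraps the observed loss data instead.
boot = [rng.choice(annual, size=annual.size, replace=True).mean() for _ in range(200)]
print("95% interval for the premium estimate:", np.percentile(boot, [2.5, 97.5]))
```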
23

The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting: In the Light of the Fundamental Review of the Trading Book / Bakåttest av VaR och ES i marknadsriskmodeller

Dalne, Katja January 2017
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes using Expected Shortfall (ES) as the risk measure instead of the currently used Value at Risk (VaR), as well as applying varying liquidity horizons based on the risk levels of the assets involved. A major difficulty in implementing the FRTB lies in the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible, since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest of Righi and Ceretta gives an indication of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective does not necessarily remain so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective a risk measure model with a normal copula and a hybrid distribution, with the generalized Pareto distribution in the tails and the empirical distribution in the center, along with GARCH filtration is the most accurate one, whereas from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7 together with GARCH filtration is the most accurate one for implementation. Thus, when implementing the FRTB, a bank will need to compromise between obtaining a good VaR model, potentially resulting in conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was carried out at SAS Institute, an American IT company that, among other things, develops software for risk management; targeted customers are banks and other financial institutions. Investigating the FRTB is a potential advantage for the company when approaching customers that are to implement the regulatory framework in the near future.
Keywords: risk management, financial time series, Value at Risk, Expected Shortfall, Monte Carlo simulation, GARCH modelling, copulas, hybrid distributions, generalized Pareto distribution, extreme value theory, backtesting, liquidity horizons, Basel regulatory framework
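A minimal sketch of the VaR/ES mechanics discussed above is given below: historical VaR and ES estimates plus the standard Kupiec proportion-of-failures backtest for VaR exceedances. The Righi and Ceretta ES backtest used in the thesis is more involved and is not reproduced here; the Student's t returns are synthetic.

```python
import numpy as np
from scipy import stats

def var_es(returns, level=0.975):
    """Historical VaR and ES at the given level, reported as positive loss numbers."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, level)
    return var, losses[losses >= var].mean()

def kupiec_pof(returns, var_forecasts, level=0.975):
    """Kupiec proportion-of-failures test for a series of VaR forecasts."""
    losses = -np.asarray(returns)
    n = len(losses)
    x = int(np.sum(losses > var_forecasts))          # number of VaR exceedances
    p = 1 - level                                    # expected exceedance probability
    phat = min(max(x / n, 1e-12), 1 - 1e-12)         # clamp to avoid log(0)
    lr = -2 * ((n - x) * np.log(1 - p) + x * np.log(p)
               - (n - x) * np.log(1 - phat) - x * np.log(phat))
    return x, lr, stats.chi2.sf(lr, df=1)            # p-value from chi-squared(1)

# Usage on synthetic heavy-tailed returns (Student's t with 7 degrees of freedom).
rng = np.random.default_rng(2)
ret = stats.t.rvs(df=7, scale=0.01, size=1000, random_state=rng)
var, es = var_es(ret)
print("VaR:", var, "ES:", es)
print("Kupiec (exceedances, LR, p-value):", kupiec_pof(ret, np.full(ret.size, var)))
```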
24

Contribution de la Théorie des Valeurs Extrêmes à la gestion et à la santé des systèmes / Contribution of extreme value theory to systems management and health

Diamoutene, Abdoulaye 26 November 2018
The operation of a system may, in general, be affected at any time by an unforeseen incident. When such an incident has major consequences for the integrity of the system and the quality of its products, it is said to fall within the scope of extreme events. Researchers are therefore increasingly interested in modelling such events, with studies on the reliability of systems and the prediction of the various risks that can hinder their proper functioning. This thesis is set in that perspective. We use extreme value theory (EVT) and extreme order statistics as decision-support tools for modelling and risk management in machining and aviation. Specifically, we model the surface roughness of machined parts and the reliability of the associated cutting tool using extreme order statistics. We also use the peaks-over-threshold (POT) approach to predict potential victims in American General Aviation (AGA) following extreme accidents. In addition, systems subject to environmental factors or covariates are most often modelled with proportional hazards models based on the hazard function. In such models the baseline hazard is typically of Weibull type, which is monotonic; yet the analysis of some systems, such as the cutting tool in industry, has shown that a system can deteriorate in one phase and improve in the next. Modifications have therefore been made to the Weibull distribution to obtain non-monotonic baseline hazards, more specifically increasing-then-decreasing hazard functions. Despite these changes, taking extreme operating conditions into account and the overestimation of risk remain problematic. Starting from the standard Gumbel distribution, we therefore propose an increasing-then-decreasing baseline hazard that accounts for extreme operating conditions, and we establish the corresponding mathematical proofs; an example of application in industry is also given. The thesis is organized into four chapters, preceded by a general introduction and followed by a general conclusion. The first chapter recalls basic notions of extreme value theory. The second chapter covers the basic concepts of survival analysis, particularly those relating to reliability analysis, and proposes an increasing-then-decreasing hazard function in the proportional hazards model. The third chapter deals with the use of extreme order statistics in machining, notably the detection of defective parts in batches, the reliability of the cutting tool and the modelling of the best roughness surfaces. The last chapter addresses the prediction of potential victims in American General Aviation from historical data using the peaks-over-threshold approach.
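The peaks-over-threshold prediction mentioned in the abstract can be illustrated with the standard GPD return-level computation (as in Coles, 2001): fit a generalized Pareto distribution to the exceedances over a threshold and extrapolate to a given return period. The sketch below uses synthetic data and hypothetical counts, not the AGA accident records.

```python
import numpy as np
from scipy import stats

def pot_return_level(data, u, m, n_per_year):
    """m-year return level from a GPD fitted to exceedances over threshold u;
    n_per_year is the average number of observations recorded per year."""
    data = np.asarray(data, dtype=float)
    excess = data[data > u] - u
    zeta_u = excess.size / data.size                         # exceedance rate
    shape, _, scale = stats.genpareto.fit(excess, floc=0.0)  # GPD fit, location fixed at 0
    # level exceeded on average once every m years (Coles, 2001, eq. 4.13)
    return u + scale / shape * ((m * n_per_year * zeta_u) ** shape - 1.0)

# Usage on a synthetic heavy-tailed severity series (hypothetical numbers, not AGA data).
rng = np.random.default_rng(3)
severity = 2.0 * rng.pareto(2.5, size=3000)
print("50-year return level:", pot_return_level(severity, u=5.0, m=50, n_per_year=100))
```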
25

Outliers detection in mixtures of dissymmetric distributions for data sets with spatial constraints / Détection de valeurs aberrantes dans des mélanges de distributions dissymétriques pour des ensembles de données avec contraintes spatiales

Planchon, Viviane 29 May 2007
In soil chemical analyses, the frequency distributions of some elements are markedly asymmetric, with a pronounced spread to the right or to the left. A high frequency of extreme values is also observed, and a mixture of several distributions may be encountered within a single geographical unit, owing to the presence of various soil types. For outlier detection and the establishment of detection limits, an original procedure has therefore been developed; it estimates the extreme quantiles above and below which observations are considered outliers. These detection limits are estimated separately from the right and left tails of the distribution. A first estimation is carried out for each elementary geographical unit in order to determine an appropriate truncation level. A spatial classification then groups contiguous homogeneous geographical units, so that robust limit values can be estimated from an optimal number of observations.
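A simplified version of the two-sided detection-limit idea is sketched below: extreme quantiles are estimated separately from the right and left tails (here with a generalized Pareto approximation above a high threshold), and observations outside these limits are flagged. The thesis's actual estimator, truncation-level selection and spatial grouping are not reproduced; the sample and quantile levels are hypothetical.

```python
import numpy as np
from scipy import stats

def tail_detection_limits(x, p_tail=0.10, p_out=0.001):
    """Detection limits from GPD approximations fitted to each tail separately.
    Observations above the upper limit or below the lower limit are flagged."""
    x = np.asarray(x, dtype=float)

    def upper_limit(v):
        u = np.quantile(v, 1 - p_tail)                         # tail threshold
        shape, _, scale = stats.genpareto.fit(v[v > u] - u, floc=0.0)
        # quantile of exceedance probability p_out under the GPD tail approximation
        return u + scale / shape * ((p_tail / p_out) ** shape - 1.0)

    return -upper_limit(-x), upper_limit(x)                    # (lower, upper) limits

# Usage on a skewed, mixed sample standing in for one soil-chemistry variable.
rng = np.random.default_rng(4)
sample = np.concatenate([rng.lognormal(2.0, 0.5, 900), rng.lognormal(3.0, 0.7, 100)])
lo, hi = tail_detection_limits(sample)
print("limits:", lo, hi, " flagged:", int(np.sum((sample < lo) | (sample > hi))))
```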
26

Modelling of extremes

Hitz, Adrien January 2016
This work focuses on statistical methods to understand how frequently rare events occur and what the magnitude of extreme values, such as large losses, is. It lies in a field called extreme value analysis, whose scope is to support scientific decision making when extreme observations are of particular importance, such as in environmental applications, insurance and finance. In the univariate case, I propose new techniques to model tails of discrete distributions and illustrate them in an application to word frequency and multiple birth data. Suitably rescaled, the limiting tails of some discrete distributions are shown to converge to a discrete generalized Pareto distribution and a generalized Zipf distribution, respectively. In the multivariate high-dimensional case, I suggest modeling tail dependence between random variables by a graph such that its nodes correspond to the variables and shocks propagate through the edges. Relying on the ideas of graphical models, I prove that if the variables satisfy a new notion called asymptotic conditional independence, then the density of the joint distribution can be simplified and expressed in terms of lower-dimensional functions. This generalizes the Hammersley-Clifford theorem and enables us to infer tail distributions from observations in reduced dimension. As an illustration, extreme river flows are modeled by a tree graphical model whose structure appears to recover almost exactly the actual river network. A fundamental concept when studying limiting tail distributions is regular variation. I propose a new notion in the multivariate case called one-component regular variation, for which Karamata's theorem and the representation theorem, two important results in the univariate case, are generalized. Eventually, I turn my attention to website visit data and fit a censored copula Gaussian graphical model allowing the visualization of users' behavior by a graph.
27

Développement d'un outil statistique pour évaluer les charges maximales subies par l'isolation d'une cuve de méthanier au cours de sa période d'exploitation / Development of a statistical tool to determine sloshing loads to be applied on cargo containment system of a LNG carrier for structural strength assessment

Fillon, Blandine 19 December 2014
This thesis focuses on statistical tools for assessing maximum sloshing loads in LNG tanks. Depending on the ship's characteristics, its cargo and the sailing conditions, a sloshing phenomenon is observed inside the LNG tanks. The sloshing loads supported by the tank structure are determined from impact pressure measurements performed on a model test rig, and the pressure maxima per impact extracted from these measurements are investigated. A test is equivalent to 5 hours at full scale, which is not sufficient to determine pressure maxima associated with long return periods (40 years). A probabilistic model is therefore needed to extrapolate the pressure maxima; usually a Weibull model is used. Since the focus is on the extreme values of the samples, fits are also performed with the generalized extreme value distribution and the generalized Pareto distribution, using the block maxima and peaks-over-threshold methods. The originality of this work lies in the use of an alternative measurement system, more relevant than the usual system for capturing pressure maxima, and in the availability of 480 hours of measurements for the same test conditions. This provides a reference distribution for the pressure maxima, which is used to assess the relevance of the selected probabilistic models. Particular attention is paid to assessing the quality of the fits with statistical tests and to quantifying the uncertainties of the estimated values. The methodology has been implemented in a software tool called Stat_R, which facilitates the manipulation and processing of the results.
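To illustrate the extrapolation problem described above, the sketch below fits both a Weibull distribution to synthetic per-impact pressure maxima and a generalized Pareto distribution to the excesses over a high threshold, then compares the implied 40-year values. The impact counts, units and parameters are hypothetical, and this is not the Stat_R implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Synthetic per-impact pressure maxima standing in for one model test (arbitrary units).
p_max = stats.weibull_min.rvs(0.9, scale=1.2, size=800, random_state=rng)

# Number of impacts expected in 40 years, scaling the 5 h (full-scale) test duration.
impacts_per_year = p_max.size * (24 * 365) / 5.0
n_40y = 40 * impacts_per_year
p_level = 1.0 - 1.0 / n_40y        # non-exceedance probability of the 40-year maximum

# (i) Weibull fit to all per-impact maxima, as in the usual approach.
c, loc, scale = stats.weibull_min.fit(p_max, floc=0.0)
weibull_40y = stats.weibull_min.ppf(p_level, c, loc=loc, scale=scale)

# (ii) GPD fit to the excesses over a high threshold (POT), focusing on the tail.
u = np.quantile(p_max, 0.90)
shape, _, sig = stats.genpareto.fit(p_max[p_max > u] - u, floc=0.0)
zeta = np.mean(p_max > u)
gpd_40y = u + sig / shape * ((n_40y * zeta) ** shape - 1.0)

print("40-year pressure maximum, Weibull:", weibull_40y, " POT/GPD:", gpd_40y)
```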
28

Modelling equity risk and extremal dependence: A survey of four African Stock Markets

Samuel, Richard Abayomi 18 May 2019
Department of Statistics / MSc (Statistics) / The ripple effect of a stock market crash due to extremal dependence is a global issue receiving key attention, and it is at the core of all modelling efforts in risk management. Two methods of extreme value theory (EVT) were used in this study to model equity risk and extremal dependence in the tails of stock market indices from four African emerging markets: South Africa, Nigeria, Kenya and Egypt. The first is the bivariate threshold-excess model and the second is the point process approach. With regard to the univariate analysis, the first finding of the study is that, in descending order, volatility with persistence is highest in the South African market, followed by the Egyptian market, then the Nigerian market and lastly the Kenyan equity market. In terms of risk hierarchy, the Egyptian EGX 30 market is the most risk-prone, followed by the South African JSE-ALSI market, then the Nigerian NIGALSH market, while the least risky is the Kenyan NSE 20 market. It is therefore concluded that risk is not a by-product of volatility in these markets. For the bivariate modelling, the extremal dependence findings indicate that the regional equity markets of the African continent present a huge investment platform for investors and traders, and offer tremendous opportunity for portfolio diversification and investment synergies between markets. These synergistic opportunities arise because the markets are asymptotically (extremally) independent, or only (very) weakly asymptotically dependent, and negatively dependent. This outcome is consistent with the findings of Alagidede (2008), who analysed the same markets using co-integration analysis. The bivariate threshold-excess and point process models are appropriate for modelling the markets' risks. For modelling the extremal dependence, however, given the same marginal threshold quantile, the point process approach has access to more of the extreme observations than the bivariate threshold-excess model, owing to its wider sphere of coverage. / NRF
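A simple companion diagnostic to the bivariate models named in this abstract is the empirical tail-dependence coefficient chi(u), estimated on rank-transformed margins; values staying near zero as u approaches 1 point to asymptotic (extremal) independence, as reported for these markets. The sketch below uses synthetic return series, not the four African index series.

```python
import numpy as np
from scipy import stats

def empirical_chi(x, y, u=0.95):
    """Empirical chi(u) = P(F_Y(Y) > u | F_X(X) > u) on rank-transformed margins.
    Values staying near 0 as u -> 1 suggest asymptotic (extremal) independence."""
    fx = stats.rankdata(x) / (len(x) + 1)          # pseudo-observations, uniform margins
    fy = stats.rankdata(y) / (len(y) + 1)
    return np.mean((fx > u) & (fy > u)) / np.mean(fx > u)

# Usage on synthetic daily returns for two indices (not the actual African index data).
rng = np.random.default_rng(6)
common = stats.t.rvs(df=5, size=2500, random_state=rng)
r1 = 0.3 * common + stats.t.rvs(df=5, size=2500, random_state=rng)
r2 = 0.3 * common + stats.t.rvs(df=5, size=2500, random_state=rng)
for u in (0.90, 0.95, 0.99):
    print(u, empirical_chi(-r1, -r2, u))           # joint lower-tail (loss) dependence
```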
