181

Fitting extreme value distributions to the Zambezi River flood water levels recorded at Katima Mulilo in Namibia (1965-2003)

Kamwi, Innocent Silibelo January 2005 (has links)
Magister Scientiae - MSc / This study sought to identify and fit the appropriate extreme value distribution to flood data using the method of maximum likelihood, to examine the uncertainty of the estimated parameters, and to evaluate the goodness of fit of the identified model. The study revealed that the three-parameter Weibull and the generalised extreme value (GEV) distributions fit the data very well. Standard errors for the estimated parameters were calculated from the empirical information matrix. An upper limit to the flood levels followed from the fitted distribution.
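A minimal, hypothetical sketch of the fitting step this abstract describes (not the thesis's code or data): maximum likelihood estimation of a GEV distribution for annual flood maxima with scipy, followed by a high quantile and, when the shape parameter is negative, the finite upper endpoint that corresponds to an upper limit on flood levels. The water levels below are simulated stand-ins.

```python
# Hypothetical sketch (not the thesis's code or data): fit a GEV distribution to
# annual flood maxima by maximum likelihood with scipy, read off a high quantile,
# and, if the shape parameter is negative, report the finite upper endpoint.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# 39 invented annual maxima (metres), standing in for the 1965-2003 record.
annual_max = stats.genextreme.rvs(0.2, loc=5.0, scale=0.4, size=39, random_state=rng)

# scipy's shape parameter c relates to the usual GEV shape xi by xi = -c.
c, loc, scale = stats.genextreme.fit(annual_max)
xi = -c
print(f"xi = {xi:.3f}, mu = {loc:.3f}, sigma = {scale:.3f}")

# 100-year flood level: the 0.99 quantile of the fitted distribution.
print(f"100-year level: {stats.genextreme.ppf(0.99, c, loc=loc, scale=scale):.2f} m")

# For xi < 0 (Weibull domain of attraction) the fitted GEV has a finite upper endpoint.
if xi < 0:
    print(f"upper limit on flood levels: {loc - scale / xi:.2f} m")
```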
182

Extreme value analysis of non-stationary time series: Quantifying climate change using observational data throughout Germany

Müller, Philipp 11 March 2019 (has links)
The overall subject of this thesis is the massively parallel application of extreme value analysis (EVA) to climatological time series. In this branch of statistics one strives to learn about the tails of a distribution and its upper quantiles, like the so-called 50 year return level, an event realized on average only once during its return period of 50 years. Since most studies focus only on average statistics, yet it is the extreme events that have the biggest impact on our lives, such an analysis is key for a proper understanding of climate change. In this approach a time series is separated into blocks, whose maxima can be described using the generalized extreme value (GEV) distribution for sufficiently large block sizes. Unfortunately, the estimation of its parameters was not possible on a massively parallel scale with any available software package, since they are all affected by conceptual problems in the maximum likelihood fit. Both the logarithms in the negative log-likelihood of the GEV distribution and the theoretical limitations on one of its parameters give rise to regions in the parameter space inaccessible to the optimization routines, causing them to produce numerical artifacts. I resolved this issue by incorporating all constraints into the optimization using the augmented Lagrangian method. With my implementation in the open source package **climex** it is now possible to analyze large climatological data sets. In this thesis I used temperature and precipitation data from measurement stations provided by the German weather service (DWD) and the ERA-Interim reanalysis data set, and analyzed them using both a qualitative method based on time windows and a more quantitative one relying on the class of vector generalized linear models (VGLM). Due to climate change, a general shift of the temperature towards higher values, and thus more hot and fewer cold extremes, would be expected. Indeed, I found the location parameters of the GEV distributions, which can be thought of as the mean event size at a return period of approximately the block size of one year, to increase for both the daily maximum and minimum temperatures. But the overall changes are far more complex and depend on the geographical location as well as the considered return period, which is quite unexpected. For example, for the 100 year return levels of the daily maximum temperatures a decrease was found in the east and the center of Germany for both the raw series and their anomalies, as well as a quite strong reduction for the raw series in the very south of Germany. The VGLM-based non-stationary EVA resulted in significant trends in the GEV parameters for the daily maximum temperatures of almost all stations, and for about half of them in the case of the daily minima. So there is statistically sound evidence for a change in the extreme temperatures and, surprisingly, it is not exclusively towards higher values. The analysis yielded several significant trends featuring a negative slope in the 10 year return levels. The analysis of the temperature data of the ERA-Interim reanalysis data set yielded quite surprising results too. While in some parts of the globe, especially on land, the 10 year return levels were found to increase, they in general decrease in most parts of the earth and almost entirely over the sea. But since we found a huge discrepancy between the results of the analysis using the station data within Germany and the results obtained for the corresponding grid points of the reanalysis data set, we cannot be sure whether the patterns in the return levels of the ERA-Interim data are trustworthy.
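As a hedged illustration of the return levels discussed above (this is the standard block-maxima formula, not code from the thesis or the **climex** package), the T-year return level follows directly from fitted GEV parameters:

```python
# Minimal sketch: the T-year return level of a GEV distribution with location mu,
# scale sigma and shape xi, i.e. the level exceeded on average once every T blocks.
import math

def gev_return_level(mu: float, sigma: float, xi: float, T: float) -> float:
    y = -math.log(1.0 - 1.0 / T)      # reduced variate for exceedance probability 1/T
    if abs(xi) < 1e-8:                # Gumbel limit as xi -> 0
        return mu - sigma * math.log(y)
    return mu - (sigma / xi) * (1.0 - y ** (-xi))

# Example with made-up parameters: a 50-year return level.
print(gev_return_level(mu=30.2, sigma=1.8, xi=-0.15, T=50))
```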
183

Climate Change Effects on Rainfall Intensity-Duration-Frequency (IDF) Curves for the Town of Willoughby (HUC-12) Watershed Using Various Climate Models

Mainali, Samir 18 July 2023 (has links)
No description available.
184

Correction d'estimateurs de la fonction de Pickands et estimateur bayésien

Chalifoux, Kevin 01 1900 (has links)
Estimating a bivariate extreme-value copula is equivalent to estimating A, its associated Pickands function. This function A: [0,1] \( \rightarrow \) [0,1] must satisfy the constraints $$\max\{1-t, t \} \leq A(t) \leq 1, \hspace{3mm} t\in[0,1]$$ $$\text{A is convex.}$$ Many estimators have been proposed to estimate A, but few satisfy these constraints. The main contribution of this thesis is the introduction of a simple correction technique for Pickands function estimators so that the corrected estimators respect the required constraints. The proposed correction uses a new property of the bivariate extreme-value random vector, combined with the convex hull of the obtained estimator, to guarantee that the Pickands function constraints are satisfied. The second contribution of this thesis is to present a nonparametric Bayesian estimator of the Pickands function based on the form introduced by Capéraà, Fougères and Genest (1997). The estimator uses Dirichlet processes to estimate the cumulative distribution function of a transformation of the bivariate extreme-value random vector. Simulation studies and a comparison with popular estimators provide a measure of performance for the proposed correction and Bayesian estimator. The analysis is done on 18 bivariate extreme-value distributions. The correction reduces the mean squared error across all distributions. The Bayesian estimator has the lowest mean squared error of all the considered estimators.
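A rough sketch of one way to enforce the Pickands constraints is shown below: clip a pointwise estimate into the admissible band and take its greatest convex minorant (lower convex hull). This is only an illustration of constraint enforcement on assumed inputs, not the correction technique or the Bayesian estimator developed in the thesis.

```python
# Hedged sketch (not the thesis's correction): force a pointwise Pickands estimate
# A_hat(t) to satisfy max(t, 1-t) <= A(t) <= 1 and convexity by clipping it into the
# bounds and then taking the lower convex hull of the clipped points.
import numpy as np

def correct_pickands(t: np.ndarray, a_hat: np.ndarray) -> np.ndarray:
    # Clip into the admissible band; the bounds force A(0) = A(1) = 1.
    a = np.clip(a_hat, np.maximum(t, 1.0 - t), 1.0)
    # Lower convex hull via Andrew's monotone chain (lower part only).
    hull = []
    for p in zip(t, a):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or above the chord.
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    # Evaluate the piecewise-linear convex minorant back on the grid.
    return np.interp(t, hx, hy)

t = np.linspace(0.0, 1.0, 101)
raw = 1.0 - 0.35 * np.sin(np.pi * t) + 0.02 * np.sin(8 * np.pi * t)  # a wiggly, invalid estimate
print(correct_pickands(t, raw).min())
```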
185

Metal Additive Manufacturing Defects Analysis and Prediction of Their Effect on Fatigue Performance

Sanaei, Niloofar January 2020 (has links)
No description available.
186

Prognostisering av dimensionerande grundvattennivå : En fallstudie av Chalmersmodellen och hur referensrör med olika hydrogeologiska egenskaper påverkar modellens tillförlitlighet / Predicting extreme groundwater levels : A case study of the Chalmers model and how reference wells with different hydrogeological characteristics impact the precision of the model

Cedergren, Andrea January 2022 (has links)
Groundwater and its varying levels can have a great impact on both constructions and their surroundings, and high groundwater levels can involve risks such as uplifting forces and landslides. Due to these risks it is important to predict and estimate the probability of extreme groundwater levels. However, when the necessary long-term measurements are not available, alternative methods are needed, such as the Chalmers model. The Chalmers model is used for calculating extreme groundwater levels by combining a short measurement series from an observation well with the data from a reference well. For the results to be as accurate as possible, the two wells must share similar characteristics; according to the method, the reference well should be located under similar conditions, within 50 km of the observation site, and have more than 20 years of measurements. The aim of this study is to investigate the reliability of the Chalmers model when predicting the groundwater level for a specific return period. The focus is on how the choice of reference wells with different hydrogeological characteristics influences the accuracy of the model. A case study is conducted on two station sites (Sockenplan and Station Sofia) for the extension of the metro in the southern part of Stockholm, Sweden, upon which the Chalmers model is implemented. The different characteristics of the observation and reference wells (aquifer type and topographic position) are tested to evaluate the accuracy of the model. The accuracy is evaluated by using extreme value analysis as an alternative calculation model, assumed to be more precise, and comparing the differences in the estimated extreme groundwater levels. The measurements used as reference in the Chalmers model are public groundwater-level data from the Geological Survey of Sweden (SGU). Data processing and calculations are performed in Python. This study highlights the difficulties in determining the accuracy of the Chalmers model when predicting extreme groundwater levels, and no specific expected accuracy could be determined. Generally, the model appears to underestimate extreme groundwater levels. Furthermore, if the observation well and reference well are located in a confined aquifer and between inflow and outflow areas, a higher precision can be expected; the uncertainty of the model increases for unconfined aquifers. The results also imply that if the reference well and the observation well are selected based on similar hydrogeological characteristics and a covariation of groundwater levels over time and between highest and lowest levels, a higher accuracy can be expected.
187

A Statistical Framework for Distinguishing Between Aleatory and Epistemic Uncertainties in the Best-Estimate Plus Uncertainty (BEPU) Nuclear Safety Analyses

Pun-Quach, Dan 11 1900 (has links)
In 1988, the US Nuclear Regulatory Commission approved an amendment that allowed the use of best-estimate methods. This led to increased development and application of Best Estimate Plus Uncertainty (BEPU) safety analyses. However, a greater burden was placed on the licensee to justify all uncertainty estimates. A review of the current state of the BEPU methods indicates that there exist a number of significant criticisms, which limit the BEPU methods from reaching their full potential as a comprehensive licensing basis. The most significant criticism relates to the lack of a formal framework for distinguishing between aleatory and epistemic uncertainties. This has led to a prevalent belief that such separation of uncertainties is done for convenience, rather than out of necessity. In this thesis, we address the above concerns by developing a statistically rigorous framework to characterize the different uncertainty types. This framework is grounded on the philosophical concepts of knowledge. Considering the Plato problem, we explore the use of probability as a means to gain knowledge, which allows us to relate the inherent distinctness in knowledge with the different uncertainty types for any complex physical system. This framework is demonstrated using nuclear analysis problems, and we show through the use of structural models that the separation of these uncertainties leads to more accurate tolerance limits relative to existing BEPU methods. In existing BEPU methods, where such a distinction is not applied, the total uncertainty is essentially treated as the aleatory uncertainty. Thus, the resulting estimated percentile is much larger than the actual (true) percentile of the system's response. Our results support the premise that the separation of these two distinct uncertainty types is necessary and leads to more accurate estimates of the reactor safety margins. / Thesis / Doctor of Philosophy (PhD)
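To make the aleatory/epistemic distinction concrete, here is a generic two-loop Monte Carlo sketch, not the statistical framework developed in the thesis: epistemic parameters are sampled in an outer loop and aleatory variability in an inner loop, so a high aleatory percentile can be reported with a stated epistemic confidence instead of lumping both uncertainty types together. The response model, distributions, and numbers are invented.

```python
# Generic two-loop Monte Carlo sketch (not the thesis's framework): outer loop over
# epistemic draws, inner loop over aleatory draws, yielding a 95/95-style limit.
import numpy as np

rng = np.random.default_rng(0)

def peak_clad_temp(bias, n_aleatory, rng):
    # Stand-in response model: an invented nominal temperature plus aleatory scatter,
    # shifted by an epistemic model-bias term.
    return 1200.0 + bias + rng.normal(0.0, 25.0, size=n_aleatory)

n_epistemic, n_aleatory = 500, 1000
inner_p95 = np.empty(n_epistemic)
for i in range(n_epistemic):
    bias = rng.normal(0.0, 15.0)                      # epistemic: imperfect knowledge of the bias
    samples = peak_clad_temp(bias, n_aleatory, rng)   # aleatory: irreducible variability
    inner_p95[i] = np.quantile(samples, 0.95)

print("95/95-style limit:", np.quantile(inner_p95, 0.95))
# Collapsing both loops into one and taking a single 95th percentile would treat the
# total uncertainty as aleatory, which is the issue the thesis raises about some BEPU methods.
```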
188

[en] ANALYSIS OF EXTREME VALUES THEORY AND MONTE CARLO SIMULATION FOR THE CALCULATION OF VALUE-AT-RISK IN STOCK PORTFOLIOS / [pt] ANÁLISE DA TEORIA DOS VALORES EXTREMOS E DA SIMULAÇÃO DE MONTE CARLO PARA O CÁLCULO DO VALUE-AT-RISK EM CARTEIRAS DE INVESTIMENTOS DE ATIVOS DE RENDA VARIÁVEL

GUSTAVO JARDIM DE MORAIS 16 July 2018 (has links)
After the recent financial crises that hit financial markets around the world, most notably that of 2008/2009, but also the crisis in Eastern Europe in July 2007, the Russian moratorium in October 1998 and, in Brazil, the change of the exchange rate regime in January 1999, financial institutions incurred large losses in each of these events, and one of the main questions raised about financial models concerned risk management. The various methods for calculating Value-at-Risk, as well as the simulations and scenarios produced by analysts, could neither predict the magnitude of these events nor prevent the crises from worsening. For this reason, this study addresses financial risk management systems, since they can and must be improved to avert even greater financial catastrophes. Although the literature on the subject is vast, the methodologies for calculating value at risk are neither exact nor free of flaws. In this context, there is a need to develop and improve risk management tools that can assist in a better allocation of the available resources, assessing the level of risk to which an investment is exposed and its compatibility with the expected return.
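For context on the Value-at-Risk quantity discussed in this abstract, the sketch below shows a plain Monte Carlo estimate of 99% one-day VaR under an assumed multivariate normal return model; the weights, means, and covariance are hypothetical, and this is not the methodology evaluated in the dissertation.

```python
# Illustrative Monte Carlo VaR sketch under an assumed multivariate normal model.
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([0.4, 0.35, 0.25])                 # hypothetical portfolio weights
mu = np.array([0.0004, 0.0003, 0.0005])               # assumed daily mean returns
cov = np.array([[4.0, 1.5, 1.0],
                [1.5, 3.0, 0.8],
                [1.0, 0.8, 5.0]]) * 1e-4              # assumed daily return covariance

sim_returns = rng.multivariate_normal(mu, cov, size=100_000) @ weights
var_99 = -np.quantile(sim_returns, 0.01)              # 99% VaR as a positive loss fraction
print(f"99% one-day VaR: {var_99:.2%} of portfolio value")
# Extreme value theory enters when this normal assumption understates the tails,
# which is the comparison the dissertation sets up.
```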
189

Value at risk and expected shortfall : traditional measures and extreme value theory enhancements with a South African market application

Dicks, Anelda 12 1900 (has links)
Thesis (MComm)--Stellenbosch University, 2013. / Accurate estimation of Value at Risk (VaR) and Expected Shortfall (ES) is critical in the management of extreme market risks. These risks occur with small probability, but the financial impacts could be large. Traditional models to estimate VaR and ES are investigated. Following usual practice, 99% 10-day VaR and ES measures are calculated. A comprehensive theoretical background is first provided and then the models are applied to the Africa Financials Index from 29/01/1996 to 30/04/2013. The models considered include independent, identically distributed (i.i.d.) models and Generalized Autoregressive Conditional Heteroscedasticity (GARCH) stochastic volatility models. Extreme Value Theory (EVT) models that focus especially on extreme market returns are also investigated. For this, the Peaks Over Threshold (POT) approach to EVT is followed. For the calculation of VaR, various scaling methods from one day to ten days are considered and their performance evaluated. The GARCH models fail to converge during periods of extreme returns. During these periods, EVT forecast results may be used. As a novel approach, this study considers the augmentation of the GARCH models with EVT forecasts. The two-step procedure of pre-filtering with a GARCH model and then applying EVT, as suggested by McNeil (1999), is also investigated. This study identifies some of the practical issues in model fitting. It is shown that no single forecasting model is universally optimal and the choice will depend on the nature of the data. For this data series, the best approach was to augment the GARCH stochastic volatility models with EVT forecasts during periods where the former do not converge. Model performance is judged by the actual number of VaR and ES violations compared to the expected number. The expected number is taken as the number of return observations over the entire sample period, multiplied by 0.01 for the 99% VaR and ES calculations.
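The peaks-over-threshold step mentioned in this abstract can be sketched as follows; the formulas are the standard GPD tail estimators for VaR and ES (as in McNeil, 1999), while the loss series and threshold choice are invented and this is not the study's code.

```python
# Minimal peaks-over-threshold sketch: fit a generalized Pareto distribution to losses
# above a threshold and plug the standard tail formulas in to get 99% VaR and ES.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
losses = stats.t.rvs(df=4, size=5000, random_state=rng) * 0.01   # heavy-tailed stand-in daily losses

u = np.quantile(losses, 0.95)                          # threshold at the empirical 95th percentile
exceed = losses[losses > u] - u
xi, _, beta = stats.genpareto.fit(exceed, floc=0.0)    # GPD shape xi and scale beta
n, n_u = losses.size, exceed.size

q = 0.99
var_q = u + (beta / xi) * (((1 - q) * n / n_u) ** (-xi) - 1.0)
es_q = var_q / (1.0 - xi) + (beta - xi * u) / (1.0 - xi)          # requires xi < 1
print(f"99% VaR = {var_q:.3%}, 99% ES = {es_q:.3%}")
```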
190

用極值理論分析次級房貸風暴的衝擊-以全球市場為例 / Using extreme value theory to analyze the US sub-prime mortgage crisis on the global stock market

彭富忠, Peng, Fu Chung Unknown Date (has links)
The US sub-prime mortgage crisis greatly affected not only the US economy but also other countries in the world. This thesis employs extreme value theory and Value at Risk (VaR) analysis to assess the impact of the US sub-prime mortgage crisis on various stock markets of the MSCI indexes, covering 10 countries and 7 regions. It is reasonable to guess that VaR values should increase after the crisis. The empirical analyses on these indexes conclude that (1) the American market indexes not only fail to agree with the guess after the crisis, but four American indexes are identical; (2) not all the Asian market indexes are consistent with the guess; (3) the European market indexes agree with the guess; (4) MSCI AC PACIFIC, NEW ZEALAND, and AUSTRALIA are consistent with the guess; (5) the behavior of the positive log returns differs from that of the negative returns in some MSCI indexes. Overall, the impacts of the US sub-prime mortgage crisis on these countries are not the same.
