
Risk Measures and Dependence Modeling in Financial Risk Management

Eriksson, Kristofer January 2014 (has links)
In financial risk management it is essential to be able to model dependence in markets and portfolios in an accurate and efficient way. A high positive dependence between assets in a portfolio can be devastating, especially in times of crisis, since losses will most likely occur at the same time in all assets of such a portfolio. Dependence is therefore directly linked to the risk of the portfolio. The risk can be estimated by several different risk measures, for example Value-at-Risk and Expected Shortfall. This paper studies different ways to measure risk and model dependence, both theoretically and empirically. The main focus is on copulas, a way to model and construct complex dependencies. Copulas are a useful tool since they allow the user to specify the marginal distributions separately and then link them together with the copula. However, copulas can be quite complex to understand, and it is not trivial to know which copula to use. An implemented copula model might give the user a "black-box" feeling and a severe model risk if the user trusts the model too much and is unaware of what is going on. Another approach is to use linear correlation, which is also a way to measure dependence. This is a simpler model and as such is believed to be easier for all users to understand. However, linear correlation is only easy to interpret in the case of elliptical distributions, and when we move away from this assumption (as is usually the case for financial data), some clear drawbacks and pitfalls become apparent. A third model, historical simulation, uses the historical returns of the portfolio and estimates the risk from these data without making any parametric assumptions about the dependence; the dependence is assumed to be incorporated in the historical evolution of the portfolio. This model is very simple and very popular, but it is more limited than the previous two by the assumption that history will repeat itself, and it needs many more historical observations to yield good results. Here we face the risk that the market dynamics have changed when looking too far back in history. In this paper some different copula models are implemented and compared to the historical simulation approach by estimating risk with Value-at-Risk and Expected Shortfall. The parameters of the copulas are also investigated under calm and stressed market periods; this information is useful when performing stress tests. The empirical study indicates that it is difficult to distinguish the parameters between the stressed and calm market periods. The overall conclusion is that the choice of model depends on our beliefs about the future distribution: if we believe that the distribution is elliptical, then a correlation model is good; if it is believed to have a complex dependence structure, then the user should turn to a copula model; and if we can assume that history will repeat itself, then historical simulation is advantageous.
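
A minimal sketch of the comparison this abstract describes, assuming a toy two-asset portfolio with synthetic returns: historical-simulation Value-at-Risk and Expected Shortfall next to a Gaussian copula with Student-t margins. The asset weights, confidence level and margins are illustrative assumptions, not the thesis's actual setup.

```python
# Hedged sketch (not the thesis's code): historical simulation vs a Gaussian
# copula with Student-t margins for portfolio VaR / Expected Shortfall.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# synthetic "historical" daily returns for two assets (illustrative)
n_hist = 1000
returns = rng.multivariate_normal([0.0, 0.0],
                                  [[1e-4, 6e-5], [6e-5, 1e-4]], size=n_hist)
weights = np.array([0.5, 0.5])
alpha = 0.99                                   # confidence level

def var_es(losses, alpha):
    """Empirical Value-at-Risk and Expected Shortfall of a loss sample."""
    var = np.quantile(losses, alpha)
    return var, losses[losses >= var].mean()

# 1) historical simulation: no parametric assumptions at all
hist_losses = -(returns @ weights)
print("historical VaR/ES:", var_es(hist_losses, alpha))

# 2) Gaussian copula: fit Student-t margins, estimate the copula correlation
#    from normal scores of the ranks, then simulate and re-aggregate
t_params = [stats.t.fit(returns[:, i]) for i in range(2)]
u_hist = np.column_stack([stats.rankdata(returns[:, i]) / (n_hist + 1)
                          for i in range(2)])
rho = np.corrcoef(stats.norm.ppf(u_hist).T)

n_sim = 100_000
z = rng.multivariate_normal([0.0, 0.0], rho, size=n_sim)  # copula draw
u = stats.norm.cdf(z)                                      # uniform marginals
sim = np.column_stack([stats.t.ppf(u[:, i], *t_params[i]) for i in range(2)])
print("copula     VaR/ES:", var_es(-(sim @ weights), alpha))
```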

Extreme value distribution quantile estimation

Buck, Debra L. January 1983 (has links)
This thesis considers estimation of the quantiles of the smallest extreme value distribution, sometimes referred to as the log-Weibull distribution. The estimators considered are linear combinations of two order statistics. A table of the best linear unbiased estimates (BLUEs) is presented for sample sizes two through twenty. These estimators are compared to the asymptotic estimators of Kubat and Epstein (1980).
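
As a rough illustration of the two-order-statistic idea (not the thesis's BLUE table), the sketch below builds an unbiased quantile estimator for the smallest extreme value (Gumbel-min) distribution from two order statistics, using simulated order-statistic means; the BLUE would additionally choose the pair and weights to minimise variance. Sample size, target quantile and the chosen order statistics are illustrative assumptions.

```python
# Hedged sketch: quantile of the smallest extreme value ("log-Weibull")
# distribution estimated by a linear combination of two order statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p = 10, 0.10                # sample size and target quantile (assumptions)
i, j = 1, 8                    # indices of the two order statistics (0-based)

# standardized order-statistic means m_k = E[Z_(k)], Z ~ Gumbel-min(0, 1)
z = np.sort(stats.gumbel_l.rvs(size=(200_000, n), random_state=rng), axis=1)
m = z.mean(axis=0)
z_p = stats.gumbel_l.ppf(p)    # standardized target quantile

# unbiasedness under the location-scale structure: a + b = 1, a*m_i + b*m_j = z_p
a = (z_p - m[j]) / (m[i] - m[j])
b = 1.0 - a

# check on samples from Gumbel-min with location 5, scale 2
mu, sigma = 5.0, 2.0
x = np.sort(stats.gumbel_l.rvs(loc=mu, scale=sigma, size=(50_000, n),
                               random_state=rng), axis=1)
estimates = a * x[:, i] + b * x[:, j]
print("true quantile:", stats.gumbel_l.ppf(p, loc=mu, scale=sigma))
print("mean estimate:", estimates.mean())
```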

Extreme Value Theory with an Application to Bank Failures through Contagion

Nikzad, Rashid 03 October 2011 (has links)
This study attempts to quantify the shocks to a banking network and analyze the transfer of shocks through the network. We consider two sources of shocks: external shocks due to market and macroeconomic factors, which impact the entire banking system, and idiosyncratic shocks due to the failure of a single bank. The external shocks are estimated using two methods: (i) non-parametric simulation of the time series of shocks that occurred to the banking system in the past, and (ii) extreme value theory (EVT) to model the tail of the shocks. The external shocks considered in this study are due to exchange rate and treasury bill rate volatility, and an ARMA/GARCH model is used to extract i.i.d. residuals for this purpose. In the next step, the probability of bank failures in the system is studied using Monte Carlo simulation. We calibrate the model so that the network resembles the Canadian banking system.
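
A hedged sketch of the two-step idea in this abstract (not the study's code): filter a series with an AR(1)-GARCH(1,1) model to obtain approximately i.i.d. standardized residuals, then fit a generalized Pareto distribution (GPD) to their upper tail as the EVT model of extreme shocks. The synthetic series and the 95% threshold are illustrative assumptions; the sketch needs numpy, scipy and the arch package.

```python
# Hedged sketch: ARMA/GARCH filtering followed by a peaks-over-threshold GPD fit.
import numpy as np
from scipy import stats
from arch import arch_model

rng = np.random.default_rng(2)

# synthetic stand-in for e.g. exchange-rate or T-bill-rate changes (percent)
x = np.zeros(2000)
sig2 = 0.5
for t in range(1, x.size):
    sig2 = 0.05 + 0.10 * x[t - 1] ** 2 + 0.85 * sig2   # GARCH(1,1) variance
    x[t] = 0.2 * x[t - 1] + np.sqrt(sig2) * rng.standard_normal()

# step 1: AR(1)-GARCH(1,1) filter -> standardized residuals (roughly i.i.d.)
res = arch_model(x, mean="AR", lags=1, vol="GARCH", p=1, q=1).fit(disp="off")
z = np.asarray(res.std_resid)
z = z[~np.isnan(z)]

# step 2: GPD fit to exceedances of the residuals over a high threshold
u = np.quantile(z, 0.95)
shape, _, scale = stats.genpareto.fit(z[z > u] - u, floc=0)
print(f"threshold={u:.3f}, GPD shape={shape:.3f}, scale={scale:.3f}")

# tail probability of a shock twice the threshold: P(Z > 2u)
big = 2 * u
p_tail = (z > u).mean() * stats.genpareto.sf(big - u, shape, loc=0, scale=scale)
print(f"P(shock > {big:.2f}) ~ {p_tail:.5f}")
```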

Generalized extreme value and mixed logit models: empirical applications to vehicle accident severities

Milton, John Calvin. January 2006 (has links)
Thesis (Ph. D.)--University of Washington, 2006. / Vita. Includes bibliographical references (leaves 87-96).

Fitting extreme value distributions to the Zambezi river flood water levels recorded at Katima Mulilo in Namibia

Kamwi, Innocent Silibelo January 2005 (has links)
Magister Scientiae - MSc / The aim of this research project was to estimate parameters for the distribution of annual maximum flood levels of the Zambezi River at Katima Mulilo. The estimation of parameters was done using the maximum likelihood method. The study explored the Zambezi's annual maximum flood heights at Katima Mulilo by fitting the Gumbel, Weibull and generalized extreme value distributions and evaluating their goodness of fit. / South Africa
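
A small sketch of the fitting exercise described here (not the thesis's code): fit Gumbel, three-parameter Weibull and GEV distributions to annual maxima by maximum likelihood and compare their fit. The simulated "annual maximum levels" stand in for the Katima Mulilo record and are an illustrative assumption.

```python
# Hedged sketch: maximum likelihood fits of candidate extreme value
# distributions to annual maxima, compared by AIC and a KS statistic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max = stats.genextreme.rvs(c=0.1, loc=5.0, scale=0.8, size=60,
                                  random_state=rng)   # stand-in flood levels (m)

candidates = {
    "Gumbel":  stats.gumbel_r,
    "Weibull": stats.weibull_min,
    "GEV":     stats.genextreme,
}

for name, dist in candidates.items():
    params = dist.fit(annual_max)                      # maximum likelihood
    loglik = np.sum(dist.logpdf(annual_max, *params))
    aic = 2 * len(params) - 2 * loglik
    ks = stats.kstest(annual_max, dist.cdf, args=params).statistic
    rl100 = dist.ppf(1 - 1 / 100, *params)             # 100-year return level
    print(f"{name:7s} AIC={aic:7.2f}  KS={ks:.3f}  100-yr level={rl100:.2f}")
```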

Hurricane Loss Modeling and Extreme Quantile Estimation

Yang, Fan 26 January 2012 (has links)
This thesis reviewed various heavy-tailed distributions and Extreme Value Theory (EVT) to estimate the catastrophic losses simulated from the Florida Public Hurricane Loss Projection Model (FPHLPM). We compared risk measures such as Probable Maximum Loss (PML) and Tail Value at Risk (TVaR) of the selected distributions with empirical estimates to capture the characteristics of the loss data as well as its tail distribution. The generalized Pareto distribution (GPD) is the main focus for modeling the tail losses in this application. We found that the hurricane loss data generated from the FPHLPM were consistent with historical losses and were not as heavy-tailed as expected; the tail of the stochastic annual maximum losses can be explained by an exponential distribution. This thesis also touched on the philosophical implications of small-probability, high-impact events such as Black Swans and discussed the limitations of quantifying catastrophic losses for future inference using statistical methods.
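
A hedged sketch of the risk measures named above (not the FPHLPM or the thesis's code): empirical and GPD-based estimates of a Probable Maximum Loss, read here as a high quantile of annual losses, and of Tail Value at Risk. The simulated losses, threshold and return period are illustrative assumptions.

```python
# Hedged sketch: empirical vs GPD (peaks-over-threshold) PML and TVaR.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
annual_loss = stats.lognorm.rvs(s=1.2, scale=50.0, size=5000,
                                random_state=rng)     # stand-in losses ($M)
p = 0.99                                              # ~100-year PML level

# empirical estimates
pml_emp = np.quantile(annual_loss, p)
tvar_emp = annual_loss[annual_loss >= pml_emp].mean()

# GPD fit to exceedances above the 90th percentile
u = np.quantile(annual_loss, 0.90)
xi, _, beta = stats.genpareto.fit(annual_loss[annual_loss > u] - u, floc=0)
p_u = (annual_loss > u).mean()                        # P(loss > u)

# standard GPD tail formulas (valid for xi < 1)
pml_gpd = u + (beta / xi) * (((1 - p) / p_u) ** (-xi) - 1)
tvar_gpd = pml_gpd / (1 - xi) + (beta - xi * u) / (1 - xi)
print(f"empirical: PML={pml_emp:.1f}  TVaR={tvar_emp:.1f}")
print(f"GPD      : PML={pml_gpd:.1f}  TVaR={tvar_gpd:.1f}")
```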

Variational Open Set Recognition

Buquicchio, Luke J. 08 May 2020 (has links)
In traditional classification problems, all classes in the test set are assumed to also occur in the training set, also referred to as the closed-set assumption. However, in practice new classes may occur in the test set, which reduces the performance of machine learning models trained under the closed-set assumption. Machine learning models should be able to accurately classify instances of classes known during training while concurrently recognizing instances of previously unseen classes (the open-set assumption). The open-set assumption is motivated by real-world applications of classifiers in which it is improbable that sufficient data can be collected a priori on all possible classes to reliably train for them. For example, motivated by the DARPA WASH project at WPI, a disease classifier trained on data collected prior to the outbreak of COVID-19 might erroneously diagnose patients with the flu rather than the novel coronavirus. State-of-the-art open-set methods based on Extreme Value Theory (EVT) fail to adequately model class distributions with unequal variances. We propose the Variational Open-Set Recognition (VOSR) model, which leverages all class-belongingness probabilities to reject unknown instances. To realize the VOSR model, we design a novel Multi-Modal Variational Autoencoder (MMVAE) that learns well-separated Gaussian mixture distributions with equal variances in its latent representation. During training, VOSR maps instances of known classes to high-probability regions of class-specific components. By enforcing a large distance between these latent components during training, VOSR then assumes unknown data lie in the low-probability space between components and uses a multivariate form of Extreme Value Theory to reject unknown instances. Our VOSR framework outperforms state-of-the-art open-set classification methods with a 15% F1 score increase on a variety of benchmark datasets.
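
For orientation only, here is a toy sketch of the generic EVT-style rejection idea that such open-set methods build on; it is emphatically not the VOSR/MMVAE model. A GPD is fitted to the largest training distances from each class mean, and a test point whose distance is implausible under that tail model is flagged as unknown. The two-class setup, features and 5% rejection level are illustrative assumptions.

```python
# Hedged toy sketch: EVT (GPD) tail model on per-class distances used to
# reject "unknown" instances in an open-set setting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# two known classes in a 2-D feature space (illustrative)
known = {c: rng.normal(loc=mu, scale=0.5, size=(200, 2))
         for c, mu in enumerate([(0.0, 0.0), (3.0, 3.0)])}
means = {c: x.mean(axis=0) for c, x in known.items()}

# per class: GPD fit to the upper tail of training distances to the class mean
tails = {}
for c, x in known.items():
    d = np.linalg.norm(x - means[c], axis=1)
    u = np.quantile(d, 0.90)
    xi, _, beta = stats.genpareto.fit(d[d > u] - u, floc=0)
    tails[c] = (u, xi, beta)

def classify(point, reject_level=0.05):
    """Return a known class index, or -1 for 'unknown'."""
    best_c = min(means, key=lambda c: np.linalg.norm(point - means[c]))
    u, xi, beta = tails[best_c]
    d = np.linalg.norm(point - means[best_c])
    if d > u and stats.genpareto.sf(d - u, xi, loc=0, scale=beta) < reject_level:
        return -1                                 # too far out in the tail
    return best_c

print(classify(np.array([0.2, -0.1])))            # near class 0 -> 0
print(classify(np.array([10.0, -8.0])))           # far from both -> -1 (unknown)
```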

Modelling temperature in South Africa using extreme value theory

Nemukula, Murendeni M. January 2018 (has links)
Dissertation submitted for the Master of Science degree in Mathematical Statistics in the Faculty of Science, School of Statistics and Actuarial Science, University of the Witwatersrand, Johannesburg, January 2018 / This dissertation focuses on demonstrating the use of extreme value theory in modelling temperature in South Africa. The purpose of modelling temperature is to investigate the frequency of occurrence of extremely low and extremely high temperatures and how they influence the demand for electricity over time. The data comprise a time series of average hourly temperatures collected by the South African Weather Service over the period 2000-2010 and supplied by Eskom. The generalized extreme value distribution (GEVD) for the r largest order statistics is fitted to the average maximum daily temperature (non-winter season) using the maximum likelihood estimation method and used to estimate extreme high temperatures, which result in high demand for electricity due to the use of cooling systems. The estimate of the shape parameter reveals evidence that the Weibull family of distributions is an appropriate fit to the data. A frequency analysis of extreme temperatures is carried out and the results show that most of the extreme temperatures are experienced during the months of January, February, November and December of each year. The generalized Pareto distribution (GPD) is first used for modelling the average minimum daily temperatures for the period January 2000 to August 2010. A penalized regression cubic smoothing spline is used as a time-varying threshold. We then extract excesses above the cubic regression smoothing spline and fit a non-parametric mixture model to get a sufficiently high threshold. The data exhibit evidence of short-range dependence and high seasonality, which leads to declustering of the excesses above the threshold and fitting the GPD to cluster maxima. The estimate of the shape parameter shows that the Weibull family of distributions is appropriate in modelling the upper tail of the distribution. The stationary GPD and piecewise linear regression models are used in modelling the influence of temperatures above the reference point of 22°C on the demand for electricity. The stationary and non-stationary point process models are fitted and used in determining the frequency of occurrence of extremely high temperatures. The orthogonal and reparameterization approaches of determining the frequency and intensity of extremes have been used to establish that extremely hot days occur in frequencies of 21 and 16 days per annum, respectively. Given that temperature is established as a major driver of electricity demand, this dissertation is relevant to the system operators, planners and decision makers in Eskom and most utility and engineering companies. Our results are further useful to Eskom since it is during the non-winter period that they plan for maintenance of their power plants. Modelling temperature is important for the South African economy since the electricity sector is considered one of the most weather-sensitive sectors of the economy. Over and above this, the modelling approaches presented in this dissertation are relevant for modelling heat waves, which impose several impacts on the energy, the economy and the health of our citizens. / XL2018
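
A brief sketch of one step described above (not the dissertation's code): a peaks-over-threshold analysis with simple runs declustering, fitting the GPD to cluster maxima. The simulated temperatures, the constant threshold (the dissertation uses a cubic smoothing spline as a time-varying threshold) and the three-day run gap are illustrative assumptions.

```python
# Hedged sketch: POT with runs declustering and a GPD fit to cluster maxima.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
days = np.arange(3653)                                 # roughly ten years
temp = 15 + 8 * np.sin(2 * np.pi * days / 365.25) + rng.normal(0, 3, days.size)

u = np.quantile(temp, 0.95)                            # fixed high threshold
exceed_idx = np.flatnonzero(temp > u)

# runs declustering: a new cluster starts after a gap of more than 3 days
clusters, current = [], [exceed_idx[0]]
for i_prev, i in zip(exceed_idx[:-1], exceed_idx[1:]):
    if i - i_prev > 3:
        clusters.append(current)
        current = []
    current.append(i)
clusters.append(current)

cluster_maxima = np.array([temp[c].max() for c in clusters])
xi, _, beta = stats.genpareto.fit(cluster_maxima - u, floc=0)
print(f"{len(clusters)} clusters; GPD shape={xi:.3f} "
      "(a negative shape points to a Weibull-type tail)")
```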

Market Timing strategy through Reinforcement Learning

HE, Xuezhong January 2021 (has links)
This dissertation implements an optimal trading strategy based on machine learning methods and extreme value theory (EVT) to obtain an excess return on investments in the capital market. The trading strategy outperforms the benchmark S&P 500 index with higher returns and lower volatility through effective market timing. In addition, this dissertation starts by modeling market tail risk using EVT and reinforcement learning, distinguishing it from the traditional value-at-risk approach. I used EVT to extract the characteristics of the tail risk, which are inputs for reinforcement learning. This process proves to be effective in market timing, and the trading strategy can avoid market crashes and achieve a long-term excess return. In sum, this study makes several contributions. First, it takes a new approach to analyzing the stock price (in this dissertation, I use the S&P 500 index as the stock). I combined EVT and reinforcement learning to study the price tail risk and predict stock crashes efficiently, which is a new method for tail risk research. Thus, I can predict a stock crash or provide the probability of risk, and then a trading strategy can be built. The second contribution is that this dissertation provides a dynamic market timing trading strategy that can significantly outperform the market index with lower volatility and a higher Sharpe ratio. Moreover, the dynamic trading process can give investors an intuitive sense of the stock market and help in decision-making. Third, the success of the strategy shows that the combination of EVT and reinforcement learning can predict stock crashes very well, which is a notable improvement in the study of extreme events and deserves further study. / Business Administration/Finance
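
As a loose illustration of the feature-extraction step only (the reinforcement-learning agent itself is omitted, and this is not the dissertation's code): on a rolling window of returns, fit a GPD to the loss tail and compute the implied probability of a crash-sized daily loss, which could serve as part of the agent's state. The synthetic return series, window length and 5% crash size are illustrative assumptions.

```python
# Hedged sketch: rolling EVT (GPD) tail-risk feature from a return series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
returns = stats.t.rvs(df=4, scale=0.01, size=3000,
                      random_state=rng)               # stand-in for index returns

def crash_probability(window, crash=-0.05, tail_frac=0.10):
    """P(next-day return < crash) from a GPD fit to the loss tail of the window."""
    losses = -window                                  # losses counted as positive
    u = np.quantile(losses, 1 - tail_frac)            # tail threshold
    xi, _, beta = stats.genpareto.fit(losses[losses > u] - u, floc=0)
    return (losses > u).mean() * stats.genpareto.sf(-crash - u, xi,
                                                    loc=0, scale=beta)

window_len = 500
features = [crash_probability(returns[t - window_len:t])
            for t in range(window_len, len(returns), 20)]   # every 20 days
print("tail-risk feature, first values:", np.round(features[:5], 4))
```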
