21

New statistical models for extreme values

Eljabri, Sumaya Saleh M. January 2013 (has links)
Extreme value theory (EVT) has wide applicability in several areas such as hydrology, engineering, science and finance. Across the world, we can see the disruptive effects of flooding due to heavy rains or storms. Many countries suffer from natural disasters such as heavy rains, storms and floods, and also from higher temperatures leading to desertification. One of the best known extraordinary natural disasters is the 1931 Huang He flood, a series of floods along the Huang He river between July and November 1931 that led to around 4 million deaths in China. Several publications focus on finding the best model for such events and on predicting their behaviour. The normal, log-normal, Gumbel, Weibull, Pearson type, 4-parameter Kappa, Wakeby and GEV distributions have all been presented as statistical models for extreme events. However, the GEV and GP distributions seem to be the most widely used models for extreme events; in spite of that, these models have been misused as models for extreme values in many areas. The aim of this dissertation is to create new modifications of univariate extreme value models. The modifications developed in this dissertation fall into two parts. In the first part, we generalise the GEV and GP distributions, referring to the results as the Kumaraswamy GEV and Kumaraswamy GP distributions; the major benefit of these models is their ability to fit skewed data better than other models. The other idea in this study comes from Chen, presented in Proceedings of the International Conference on Computational Intelligence and Software Engineering, pp. 1-4. However, the cumulative distribution and probability density functions of that distribution do not appear to be valid functions; the correction of this model is presented in Chapter 6. The major problem in extreme event models is their ability to fit the tails of the data. In Chapter 7, the corrected Chen model is combined with the GEV distribution to introduce a new model for extreme values, referred to as the new extreme value (NEV) distribution. It appears to be more flexible than the GEV distribution.
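The Kumaraswamy generalisation mentioned above follows the standard Kumaraswamy-G construction, G(x) = 1 - (1 - F(x)^a)^b, with F the baseline CDF. A minimal sketch under that assumption, using scipy's GEV as the baseline (the shape values are illustrative, not taken from the thesis):

```python
import numpy as np
from scipy.stats import genextreme

def kum_gev_cdf(x, a, b, xi=0.1, loc=0.0, scale=1.0):
    # Kumaraswamy-G CDF with a GEV baseline:
    #   G(x) = 1 - (1 - F(x)**a)**b.
    # scipy's genextreme shape is c = -xi (sign flipped vs. the usual xi).
    F = genextreme.cdf(x, -xi, loc=loc, scale=scale)
    return 1.0 - (1.0 - F**a) ** b

def kum_gev_pdf(x, a, b, xi=0.1, loc=0.0, scale=1.0):
    # Density from differentiating G:
    #   g(x) = a*b*f(x)*F(x)**(a-1)*(1 - F(x)**a)**(b-1).
    F = genextreme.cdf(x, -xi, loc=loc, scale=scale)
    f = genextreme.pdf(x, -xi, loc=loc, scale=scale)
    return a * b * f * F ** (a - 1.0) * (1.0 - F**a) ** (b - 1.0)

x = np.linspace(-1.0, 6.0, 5)
print(kum_gev_cdf(x, a=2.0, b=0.5))   # a = b = 1 recovers the plain GEV
```

Setting a = b = 1 recovers the plain GEV; the two extra shape parameters are what give the family its added flexibility for skewed data.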
22

Improvements to the computational pipeline in crystal plasticity estimates of high cycle fatigue of microstructures

Kern, Paul Calvin 27 May 2016 (has links)
The objective of this work is to provide various improvements to the modeling and uncertainty quantification of fatigue lives of materials as understood via simulation of crystal plasticity models applied to synthetically reconstructed microstructures. A computational framework has been developed to automate standardized analysis of crystal plasticity models in the high cycle fatigue regime. This framework incorporates synthetic microstructure generation, simulation preparation, execution and post-processing to analyze statistical distributions related to fatigue properties. Additionally, an improved crack nucleation and propagation approach has been applied to Al 7075-T6 to improve predictive capabilities of the crystal plasticity model for fatigue in various loading regimes. Finally, sensitivities of fatigue response to simulation and synthetic microstructure properties have been explored to provide future guidance for the study of fatigue quantification based on crystal plasticity models.
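A minimal sketch of the kind of automated pipeline the abstract describes (generate, simulate, post-process) on a toy model; every function name and the fatigue indicator below are hypothetical stand-ins, since the thesis's actual microstructure generator and crystal plasticity solver are full FEM tools:

```python
import numpy as np

def generate_microstructure(n_grains, seed):
    # Synthetic reconstruction step: random crystal orientations
    # (Euler angles) standing in for a statistically equivalent grain set.
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 2.0 * np.pi, size=(n_grains, 3))

def run_crystal_plasticity(orientations, strain_amplitude=1e-3):
    # Execution step: a toy orientation-dependent response standing in
    # for the CPFEM solve; returns a fatigue indicator value per grain.
    sensitivity = np.abs(np.sin(orientations)).prod(axis=1)
    return strain_amplitude * sensitivity

def fatigue_study(n_microstructures=200, n_grains=500):
    # Post-processing step: collect the worst-grain (extreme) fatigue
    # indicator across many synthetic microstructures to build the
    # statistical distribution the framework analyzes.
    return np.array([
        run_crystal_plasticity(generate_microstructure(n_grains, s)).max()
        for s in range(n_microstructures)
    ])

fips = fatigue_study()
print("mean extreme FIP:", fips.mean(),
      "95th percentile:", np.quantile(fips, 0.95))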
23

Does copula beat linearity? : Comparison of copulas and linear correlation in portfolio optimization.

Blom, Joakim, Wargclou, Joakim January 2016 (has links)
Modern portfolio theory (MPT) is an investment theory introduced by Harry Markowitz in 1952 that describes how risk-averse investors can optimize their portfolios. The objective of MPT is to assemble a portfolio by maximizing the expected return given a level of market risk or minimizing the market risk given an expected return. Although MPT has gained popularity over the years, it has also been criticized for several theoretical and empirical shortcomings, such as using variance as a measure of risk, measuring dependence with linear correlation, and assuming that returns are normally distributed when in fact empirical data suggests otherwise. When moving away from the assumption that returns are elliptically distributed, for example normally distributed, we cannot use linear correlation as an accurate measure of dependence. Copulas are a flexible tool for modeling dependence of random variables and enable us to separate the marginals from any joint distribution in order to extract the dependence structure. The objective of this paper was to examine the applicability of a copula-CVaR framework in portfolio optimization compared to the traditional MPT. Further, we studied how the presence of memory, when calibrating the copulas, affects portfolio optimization. The marginals for the copula-based portfolios were constructed using extreme value theory, and the market risk was measured by Conditional Value-at-Risk. We implemented a dynamic investing strategy where the portfolios were optimized on a monthly basis with two different lengths of rolling calibration window. The portfolios were backtested over a sample period from 2000 to 2016 and compared against two benchmarks: a Markowitz portfolio based on normally distributed returns and an equally weighted, non-optimized portfolio. The results demonstrated that portfolio optimization is often preferable to choosing an equally weighted portfolio. However, the results also indicated that the copula-based portfolios do not always beat the traditional Markowitz portfolio. Furthermore, the results indicated that the choice of calibration window length affects the selected portfolios and consequently also the performance. This result was supported both by the performance metrics and by the stability of the estimated copula parameters.
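A minimal sketch of a copula-CVaR allocation of the kind described, assuming a Gaussian copula calibrated from rank correlations and empirical marginals on synthetic data; in the thesis the marginals are EVT-based, so the empirical quantile below is a stand-in:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# Toy daily returns for three assets (placeholder for real data).
returns = rng.multivariate_normal(
    [0.0004, 0.0003, 0.0002],
    [[1e-4, 5e-5, 2e-5], [5e-5, 1.2e-4, 3e-5], [2e-5, 3e-5, 9e-5]],
    size=1000,
)

# 1. Gaussian copula calibrated from Spearman rank correlations.
rho_s = stats.spearmanr(returns)[0]
rho = 2.0 * np.sin(np.pi * rho_s / 6.0)   # Spearman -> Pearson for this copula
z = rng.multivariate_normal(np.zeros(3), rho, size=20000)
u = stats.norm.cdf(z)                     # copula samples on [0, 1]^3

# 2. Map back through the marginals (an EVT-fitted marginal could
#    replace the empirical quantile here, as in the thesis).
scenarios = np.column_stack(
    [np.quantile(returns[:, j], u[:, j]) for j in range(3)]
)

def cvar(w, alpha=0.95):
    # Conditional Value-at-Risk of portfolio losses at level alpha.
    losses = -scenarios @ w
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

# 3. Minimize CVaR subject to full investment and no short sales.
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(cvar, np.ones(3) / 3, bounds=[(0, 1)] * 3, constraints=cons)
print("weights:", res.x.round(3), "CVaR:", cvar(res.x))
```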
24

Dynamic extreme value theory (DEVT): a dynamic approach for obtaining value-at-risk (VaR).

January 2006 (has links)
by Leung Tsun Ip.
Thesis (M.Phil.)--Chinese University of Hong Kong, 2006. Includes bibliographical references (leaves 72-78). Abstracts in English and Chinese.
Contents:
  1. Introduction (p.1)
  2. Literature Review (p.6)
    2.1 Development of estimation of Value-at-Risk (VaR) (p.6)
    2.2 Methods to evaluate VaR (p.9)
      2.2.1 Non-parametric Method (p.9)
      2.2.2 Semi-parametric Method (p.11)
      2.2.3 Parametric Method (p.12)
  3. Extreme Value Theory (EVT) (p.16)
    3.1 Introduction of Extreme Value Theory (EVT) (p.16)
      3.1.1 Block Maxima Approach (p.18)
      3.1.2 Peaks over Threshold (POT) Approach (p.21)
      3.1.3 Comparison between Block Maxima and POT Approach (p.22)
    3.2 Numerical Illustration (p.23)
      3.2.1 Data (p.23)
      3.2.2 Diagnosis (p.24)
  4. Dynamic Extreme Value Theory (DEVT) (p.29)
    4.1 Theoretical Framework of DEVT (p.29)
    4.2 Estimation of Parameters (p.32)
    4.3 Determination of Threshold Level (p.37)
    4.4 Estimation of zq (p.44)
  5. Backtesting and Time Aggregation (p.49)
    5.1 Backtesting DEVT (p.49)
    5.2 Time Aggregation (p.55)
  6. Case Study: China Aviation Oil Singapore (CAO) Incident (p.61)
    6.1 Background Information (p.61)
    6.2 Data Analysis (p.63)
    6.3 Suggestion (p.68)
  7. Discussion (p.71)
  References (p.72)
  A. Appendix (p.79)
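A minimal sketch of the two EVT approaches compared in Chapter 3, block maxima (GEV) versus peaks over threshold (GPD), together with the standard POT quantile formula for VaR; the data, block size and threshold are illustrative:

```python
import numpy as np
from scipy.stats import genextreme, genpareto

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=2500)   # heavy-tailed toy loss series

# Block maxima: fit a GEV to the maximum of each block of 100 observations.
block_max = losses.reshape(25, 100).max(axis=1)
c_gev, loc_gev, scale_gev = genextreme.fit(block_max)

# Peaks over threshold (POT): fit a GPD to exceedances over a high threshold.
u = np.quantile(losses, 0.95)
excess = losses[losses > u] - u
xi, _, beta = genpareto.fit(excess, floc=0.0)   # scipy's shape c equals xi

# Standard POT quantile estimator with n observations and N_u exceedances:
#   VaR_q = u + (beta/xi) * (((n/N_u) * (1 - q))**(-xi) - 1)
n, n_u, q = len(losses), len(excess), 0.99
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
print(f"GEV shape (xi): {-c_gev:.3f}, POT VaR at {q:.0%}: {var_q:.3f}")
```

POT uses the data more efficiently than block maxima (every exceedance contributes, not just one point per block), which is one reason Chapter 3's comparison favours it for threshold-based VaR estimation.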
25

Extreme-day return as a measure of stock market volatility : comparative study developed vs. emerging capital markets of the world

Kabir, Muashab, Ahmed, Naeem January 2010 (has links)
This paper uses a new measure of volatility based on extreme-day return occurrences and examines the relative prevailing volatility among worldwide stock markets during 1997-2009. Several global stock market indexes of countries categorized as emerging and developed capital markets are utilized. Additionally, this study investigates well-known anomalies, namely the Monday effect and the January effect. Further, correlation analysis of co-movement and extent of integration highlights the opportunities for international diversification among those markets. Evidence during this time period suggests volatility is not a phenomenon of emerging capital markets alone. Emerging markets offer opportunities for higher returns during volatility. Cross-correlation analysis shows markets have become more integrated during this time frame; still, opportunities for higher returns prevail through global portfolio diversification.
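A minimal sketch of an occurrence-based volatility measure in the spirit of the paper: count the fraction of days whose standardized return exceeds a cutoff. The cutoff and the standardization are assumptions for illustration, not the paper's exact definition:

```python
import numpy as np

def extreme_day_ratio(returns, k=2.58):
    # Fraction of days whose return lies more than k sample standard
    # deviations from the mean; k = 2.58 is roughly the two-sided 1%
    # normal cutoff (an assumed cutoff, not the paper's exact choice).
    r = np.asarray(returns)
    z = (r - r.mean()) / r.std(ddof=1)
    return float(np.mean(np.abs(z) > k))

rng = np.random.default_rng(2)
emerging = 0.02 * rng.standard_t(df=3, size=3000)   # fat-tailed toy index
developed = rng.normal(scale=0.01, size=3000)       # thinner-tailed toy index
print(extreme_day_ratio(emerging), extreme_day_ratio(developed))
```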
26

Extreme Value Theory with an Application to Bank Failures through Contagion

Nikzad, Rashid 03 October 2011 (has links)
This study attempts to quantify the shocks to a banking network and analyze the transfer of shocks through the network. We consider two sources of shocks: external shocks due to market and macroeconomic factors which impact the entire banking system, and idiosyncratic shocks due to failure of a single bank. The external shocks will be estimated by using two methods: (i) non-parametric simulation of the time series of shocks that occurred to the banking system in the past, and (ii) using the extreme value theory (EVT) to model the tail part of the shocks. The external shocks we considered in this study are due to exchange rate and treasury bill rate volatility. Also, an ARMA/GARCH model is used to extract iid residuals for this purpose. In the next step, the probability of the failure of banks in the system is studied by using Monte Carlo simulation. We calibrate the model such that the network resembles the Canadian banking system.
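A minimal sketch of the two-step procedure described above, an ARMA/GARCH filter to extract approximately iid residuals followed by a GPD fit to the tail; it assumes the third-party `arch` package, and the shock simulator at the end is an illustrative reading of the non-parametric-body-plus-EVT-tail idea rather than the study's exact method:

```python
import numpy as np
import pandas as pd
from arch import arch_model          # assumed third-party dependency
from scipy.stats import genpareto

rng = np.random.default_rng(3)
# Toy series standing in for exchange-rate or T-bill-rate changes.
shocks = pd.Series(rng.standard_t(df=5, size=2000))

# 1. ARMA/GARCH filter (here AR(1)-GARCH(1,1)) to get ~iid residuals.
fit = arch_model(100 * shocks, mean="AR", lags=1, vol="GARCH",
                 p=1, q=1).fit(disp="off")
std_resid = np.asarray(fit.resid / fit.conditional_volatility)
std_resid = std_resid[~np.isnan(std_resid)]

# 2. EVT on the tail: fit a GPD to exceedances over a high threshold.
u = np.quantile(std_resid, 0.95)
xi, _, beta = genpareto.fit(std_resid[std_resid > u] - u, floc=0.0)

# 3. Monte Carlo shocks: resample the body non-parametrically, draw the
#    tail from the fitted GPD (empirical body + parametric EVT tail).
def simulate_shocks(n):
    body = rng.choice(std_resid[std_resid <= u], size=n)
    tail = u + genpareto.rvs(xi, scale=beta, size=n, random_state=rng)
    is_tail = rng.random(n) < np.mean(std_resid > u)
    return np.where(is_tail, tail, body)

print(simulate_shocks(5))
```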
28

Downside Risk Constraints and Currency Hedging in International Portfolios: the Asian and Late-2000 Crisis

Zhou, Ying December 2010 (has links)
Mean-variance (MV) optimization is the traditional method for international portfolio selection, and it rests on the assumption of normally distributed returns. However, during economic recessions portfolio returns turn out to have fat-tailed distributions. We therefore explore Roy's safety-first (SF) criterion and apply extreme value theory to the historical data. We demonstrate how such portfolios would have performed during the Asian crisis, the IT bubble bust and the financial crisis separately. We also compare the SF portfolio's performance to the MV portfolio's performance, to check which of the two outperforms during busts and booms of the economy. The Asian crisis was marked by great currency devaluation and lower currency return on equity. The dot-com bubble bust was known for a sharp plummet in the stock market, while the financial crisis brought large falls in the US stock market and elsewhere. These are extreme events of the world capital markets, which in some way contribute to the non-normal distribution of returns. Simulated results over the 1997-2010 period, covering six busts and booms (the Asian crisis, the period after the Asian crisis, the IT bubble bust, the period after the IT bubble bust, the financial crisis and the period after the financial crisis), indicate that the SF portfolio outperforms the MV portfolio most of the time; this result is especially pronounced for Indonesia and Thailand.
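A minimal sketch of Roy's safety-first (SF) criterion for comparison with MV. Under normality, minimizing the shortfall probability P(R_p < r_L) is equivalent to maximizing (E[R_p] - r_L)/sigma_p; the thesis combines SF with extreme value theory rather than this normal shortcut, and the data and disaster level r_L below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
# Toy monthly returns for three markets (placeholder for real index data).
mu_true = [0.010, 0.008, 0.012]
cov_true = np.diag([0.03, 0.02, 0.05]) ** 2 + 2e-4
R = rng.multivariate_normal(mu_true, cov_true, size=240)
mu, cov = R.mean(axis=0), np.cov(R.T)

r_L = 0.0   # "disaster level" return; the choice is illustrative

def neg_roy_ratio(w):
    # Roy's SF criterion: minimize P(R_p < r_L). Under normality this
    # is equivalent to maximizing (E[R_p] - r_L) / sigma_p.
    return -(w @ mu - r_L) / np.sqrt(w @ cov @ w)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(neg_roy_ratio, np.ones(3) / 3, bounds=[(0, 1)] * 3,
               constraints=cons)
print("SF-optimal weights:", res.x.round(3))
```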
29

Applying RAROC, Value-at-Risk and Extreme Value Theory to Performance Measurement of Financial Holding Companies.

Chou, Cheng-Yi 07 July 2006 (has links)
none
30

Risk Measures and Dependence Modeling in Financial Risk Management

Eriksson, Kristofer January 2014 (has links)
In financial risk management it is essential to be able to model dependence in markets and portfolios in an accurate and efficient way. A high positive dependence between assets in a portfolio can be devastating, especially in times of crisis, since losses will most likely occur at the same time in all assets of such a portfolio. The dependence is therefore directly linked to the risk of the portfolio. The risk can be estimated by several different risk measures, for example Value-at-Risk and Expected Shortfall. This paper studies some different ways to measure risk and model dependence, both theoretically and empirically. The main focus is on copulas, which are a way to model and construct complex dependencies. Copulas are a useful tool since they allow the user to separately specify the marginal distributions and then link them together with the copula. However, copulas can be quite complex to understand and it is not trivial to know which copula to use. An implemented copula model might give the user a "black-box" feeling and a severe model risk if the user trusts the model too much and is unaware of what is going on. Another model uses linear correlation, which is also a way to measure dependence. This is a simpler model, and as such it is believed to be easier for all users to understand. However, linear correlation is only easy to understand in the case of elliptical distributions, and when we move away from this assumption (which is usually the case for financial data), some clear drawbacks and pitfalls become apparent. A third model, called historical simulation, uses the historical returns of the portfolio and estimates the risk from these data without making any parametric assumptions about the dependence; the dependence is assumed to be incorporated in the historical evolution of the portfolio. This model is very easy and very popular, but it is more limited than the previous two models by the assumption that history will repeat itself, and it needs many more historical observations to yield good results. Here we face the risk that the market dynamics have changed when looking too far back in history. In this paper some different copula models are implemented and compared to the historical simulation approach by estimating risk with Value-at-Risk and Expected Shortfall. The parameters of the copulas are also investigated under calm and stressed market periods; this information about the parameters is useful when performing stress tests. The empirical study indicates that it is difficult to distinguish the parameters between the stressed and calm market periods. The overall conclusion is that which model to use depends on our beliefs about the future distribution: if we believe that the distribution is elliptical then a correlation model is good; if it is believed to have a complex dependence then the user should turn to a copula model; and if we can assume that history will repeat itself then historical simulation is advantageous.
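A minimal sketch of the historical-simulation estimates of the two risk measures named above, Value-at-Risk and Expected Shortfall, computed directly from the empirical loss distribution; the confidence level and toy return series are illustrative:

```python
import numpy as np

def hist_var_es(portfolio_returns, alpha=0.99):
    # Historical simulation: no parametric or dependence assumptions,
    # the risk measures come straight from the empirical loss distribution.
    losses = -np.asarray(portfolio_returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()   # Expected Shortfall: mean tail loss
    return var, es

rng = np.random.default_rng(5)
r = 0.01 * rng.standard_t(df=4, size=1500)   # toy portfolio return series
var, es = hist_var_es(r)
print(f"99% VaR: {var:.4f}, 99% ES: {es:.4f}")
```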
