131

Modelování kybernetického rizika pomocí kopula funkcí / Cyber risk modelling using copulas

Spišiak, Michal January 2020 (has links)
Cyber risk, or data breach risk, can be estimated similarly to other types of operational risk. First, we identify problems with cyber risk models in the existing literature. A large dataset consisting of 5,713 loss events enables us to apply extreme value theory. We adopt goodness-of-fit tests adjusted for distribution functions with estimated parameters. These tests are often overlooked in the literature even though they are essential for correct results. We model aggregate losses in three different industries separately and then combine them using a copula. A t-test reveals that potential one-year global losses due to data breach risk are larger than the GDP of the Czech Republic. Moreover, one-year global cyber risk measured with a 99% CVaR amounts to 2.5% of global GDP. Unlike others, we compare risk measures with other quantities, which allows a wider audience to understand the magnitude of cyber risk. An estimate of global data breach risk is a useful indicator not only for insurers, but also for any organization processing sensitive data.
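As a rough illustration of the pipeline this abstract describes, the sketch below fits generalized Pareto severities per industry, simulates annual aggregate losses, couples the three industries with a Gaussian copula and reads off a 99% CVaR. All parameter values, industry names and the choice of a Gaussian copula are assumptions made for the example; the thesis works with its own dataset of 5,713 loss events and its own model choices.

    import numpy as np
    from scipy.stats import genpareto, norm

    rng = np.random.default_rng(0)

    def simulate_aggregate(freq_lambda, shape, scale, u, n_years, rng):
        """Annual aggregate losses: Poisson claim counts, GPD excesses above the threshold u."""
        counts = rng.poisson(freq_lambda, size=n_years)
        return np.array([
            u * k + genpareto.ppf(rng.uniform(size=k), shape, scale=scale).sum()
            for k in counts
        ])

    n_years = 100_000
    # Hypothetical per-industry parameters: (annual frequency, GPD shape, GPD scale, threshold).
    params = {"healthcare": (40, 0.6, 2.0, 1.0), "finance": (25, 0.7, 3.0, 1.0), "retail": (35, 0.5, 1.5, 1.0)}
    margins = [simulate_aggregate(*p, n_years, rng) for p in params.values()]

    # Combine the three industries through a Gaussian copula (illustrative correlation matrix).
    corr = np.array([[1.0, 0.4, 0.3], [0.4, 1.0, 0.35], [0.3, 0.35, 1.0]])
    u_cop = norm.cdf(rng.multivariate_normal(np.zeros(3), corr, size=n_years))
    idx = (u_cop * (n_years - 1)).astype(int)
    total = sum(np.sort(m)[idx[:, j]] for j, m in enumerate(margins))   # coupled global annual loss

    alpha = 0.99
    var = np.quantile(total, alpha)
    cvar = total[total >= var].mean()
    print(f"99% VaR = {var:.1f}, 99% CVaR = {cvar:.1f} (in the same units as the simulated losses)")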
132

Pricing and Modeling Heavy Tailed Reinsurance Treaties - A Pricing Application to Risk XL Contracts / Prissättning och modellering av långsvansade återförsäkringsavtal - En prissättningstillämpning på Risk XL kontrakt

Abdullah Mohamad, Ormia, Westin, Anna January 2023 (has links)
Estimating the risk of a loss occurring for insurance takers is a difficult task in the insurance industry. It is an even more difficult task to price that risk for reinsurance companies, which insure the primary insurers. Insurance that is bought by an insurance company, the cedent, from another insurance company, the reinsurer, is called treaty reinsurance. This type of reinsurance is the main focus of this thesis. A very common risk to insure is the risk of fire in municipal and commercial properties, which is the risk priced here. The thesis evaluates Länsförsäkringar AB's current pricing model, which calculates the risk premium for Risk XL contracts; the goal is to find areas of improvement for tail risk pricing. The risk premium is commonly calculated using one of three types of pricing models: experience rating, exposure rating and frequency-severity rating. This thesis focuses on frequency-severity rating, a model that assumes independence between the frequency and the severity of losses and therefore splits the two into separate models. It is a very common model for pricing Risk XL contracts. The risk premium is calculated with the help of loss data from two insurance companies, one Norwegian and one Finnish. The main focus is to price the risk with the help of extreme value theory, using the method of moments to model the frequency of losses and the peaks-over-threshold model to model their severity. To model the estimated frequency of losses with the method of moments, two distributions are compared, the Poisson and the negative binomial distribution. Several distributions can be used to model the severity of losses; to decide which is optimal, two goodness-of-fit tests are applied, the Kolmogorov-Smirnov and the Anderson-Darling test. The peaks-over-threshold model is a model that can be used with the Pareto distribution. With the help of the Hill estimator we calculate a threshold $u$, which regulates the tail of the Pareto curve. The remaining parameters of the generalized Pareto distribution are estimated with maximum likelihood and the least squares method. Lastly, the bootstrap method is used to estimate the uncertainty in the price calculated with the estimated parameters. From this, empirical percentiles are computed and set as guidelines for the range within which the risk premium should lie in order for both data sets to be considered fairly priced.
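To make the frequency-severity idea concrete, here is a small, self-contained sketch of per-risk XL pricing with a peaks-over-threshold severity model: a GPD is fitted to excesses over a high threshold, exceedance counts are simulated, and the expected annual loss to an illustrative layer is computed. The loss data, the threshold choice (a fixed empirical quantile rather than the Hill-estimator-based choice used in the thesis) and the layer figures are all assumptions made for the example.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(1)

    # Hypothetical large fire losses reported to the reinsurer (one value per claim, illustrative units).
    losses = 0.5 * (rng.pareto(1.8, size=400) + 1.0)
    years_observed = 10
    lam = len(losses) / years_observed                 # average number of reported claims per year

    # Peaks over threshold: fit a GPD to the excesses over a threshold u.
    u = np.quantile(losses, 0.90)
    excess = losses[losses > u] - u
    shape, _, scale = genpareto.fit(excess, floc=0.0)
    lam_u = lam * (losses > u).mean()                  # expected number of claims above u per year

    # Frequency above u: Poisson here; the thesis also checks a negative binomial via method of moments.
    n_sims = 20_000
    counts = rng.poisson(lam_u, size=n_sims)

    # Expected annual loss to a per-risk XL layer "limit xs attachment" (illustrative figures).
    attachment, limit = u + 2.0, 5.0
    layer_total = np.zeros(n_sims)
    for i, k in enumerate(counts):
        if k:
            sev = u + genpareto.ppf(rng.uniform(size=k), shape, scale=scale)
            layer_total[i] = np.clip(sev - attachment, 0.0, limit).sum()

    print(f"threshold u = {u:.2f}, GPD shape = {shape:.2f}, risk premium ~ {layer_total.mean():.3f}")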
133

Applying Peaks-Over-Threshold for Increasing the Speed of Convergence of a Monte Carlo Simulation / Peaks-Over-Threshold tillämpat på en Monte Carlo simulering för ökad konvergenshastighet

Jakobsson, Eric, Åhlgren, Thor January 2022 (has links)
This thesis investigates applying the semi-parametric peaks-over-threshold method to data generated from a Monte Carlo simulation when estimating the financial risk measures Value-at-Risk and Expected Shortfall. The goal is to achieve faster convergence than a plain Monte Carlo simulation when assessing extreme events that represent the worst outcomes of a financial portfolio. Faster convergence makes it possible to reduce the number of iterations in the Monte Carlo simulation, and thus to estimate the risk measures more efficiently for the portfolio manager. The financial portfolio consists of US life insurance policies offered on the secondary market, gathered by our partner RessCapital. The method is evaluated on three portfolios with different defining characteristics. Part I analyses the selection of an optimal threshold. The accuracy and precision of peaks-over-threshold are compared to a Monte Carlo simulation with 10,000 iterations, using a simulation with 100,000 iterations as the reference value. Depending on the risk measure and the percentile of interest, different optimal thresholds are selected. Part II presents the results obtained with the optimal thresholds from Part I. Peaks-over-threshold performed significantly better than a Monte Carlo simulation with 10,000 iterations for Value-at-Risk. The results for Expected Shortfall did not show a clear improvement in precision, but did show improvement in accuracy. Value-at-Risk and Expected Shortfall at the 99.5th percentile achieved a greater error reduction than at the 99th. The results therefore align well with theory: the rarer the event considered, the better the peaks-over-threshold method performs. In conclusion, applying peaks-over-threshold can be useful when looking to reduce the number of iterations, since it does increase the speed of convergence of a Monte Carlo simulation. The result is, however, dependent on the rarity of the event of interest and on the level of precision and accuracy required.
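The sketch below shows the core of the approach as described: take losses produced by a Monte Carlo simulation, fit a GPD to the excesses over a high threshold, and compute Value-at-Risk and Expected Shortfall from the standard POT formulas, next to the purely empirical estimates. The simulated loss distribution, the threshold quantile and the confidence level are illustrative assumptions, not the thesis's portfolio or its optimal thresholds.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(2)

    # Stand-in for 10,000 Monte Carlo portfolio losses (heavy-tailed, illustrative).
    losses = rng.standard_t(df=4, size=10_000)

    def pot_var_es(losses, u_quantile, alpha):
        """Semi-parametric VaR/ES: empirical body, GPD tail above the threshold (valid for shape < 1)."""
        u = np.quantile(losses, u_quantile)
        xi, _, beta = genpareto.fit(losses[losses > u] - u, floc=0.0)
        p_u = (losses > u).mean()                               # tail probability at the threshold
        var = u + beta / xi * (((1 - alpha) / p_u) ** (-xi) - 1)
        es = var / (1 - xi) + (beta - xi * u) / (1 - xi)
        return var, es

    alpha = 0.995
    var_pot, es_pot = pot_var_es(losses, u_quantile=0.95, alpha=alpha)
    var_emp = np.quantile(losses, alpha)
    es_emp = losses[losses >= var_emp].mean()
    print(f"POT:       VaR = {var_pot:.3f}, ES = {es_pot:.3f}")
    print(f"Empirical: VaR = {var_emp:.3f}, ES = {es_emp:.3f}")

With only 10,000 draws the empirical 99.5% Expected Shortfall rests on roughly 50 observations, which is where the smoother GPD tail can reduce the estimation noise the abstract refers to.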
134

The Performance of Market Risk Models for Value at Risk and Expected Shortfall Backtesting : In the Light of the Fundamental Review of the Trading Book / Bakåttest av VaR och ES i marknadsriskmodeller

Dalne, Katja January 2017 (has links)
The global financial crisis that took off in 2007 gave rise to several adjustments of the risk regulation for banks. An extensive adjustment, to be implemented in 2019, is the Fundamental Review of the Trading Book (FRTB). It proposes using Expected Shortfall (ES) as the risk measure instead of the currently used Value at Risk (VaR), as well as applying varying liquidity horizons based on the risk levels of the assets involved. A major difficulty of implementing the FRTB lies in the backtesting of ES. Righi and Ceretta propose a robust ES backtest based on Monte Carlo simulation. It is flexible since it does not assume any probability distribution and can be performed without waiting for an entire backtesting period. Implementing some commonly used VaR backtests as well as the ES backtest of Righi and Ceretta gives a picture of which risk models are the most accurate from both a VaR and an ES backtesting perspective. It can be concluded that a model that is satisfactory from a VaR backtesting perspective is not necessarily so from an ES backtesting perspective, and vice versa. Overall, the models that are satisfactory from a VaR backtesting perspective turn out to be probably too conservative from an ES backtesting perspective. Considering the confidence levels proposed by the FRTB, from a VaR backtesting perspective a risk measure model with a normal copula and a hybrid distribution, with the generalized Pareto distribution in the tails and the empirical distribution in the centre, along with GARCH filtration, is the most accurate one, while from an ES backtesting perspective a risk measure model with a univariate Student's t distribution with ν ≈ 7 together with GARCH filtration is the most accurate one for implementation. Thus, when implementing the FRTB, a bank will need to compromise between obtaining a good VaR model, potentially resulting in conservative ES estimates, and obtaining a less satisfactory VaR model, possibly resulting in more accurate ES estimates. The thesis was performed at SAS Institute, an American IT company that, among other things, develops software for risk management. Targeted customers are banks and other financial institutions. Investigating the FRTB is a potential advantage for the company when approaching customers that are to implement the regulatory framework in the near future. / Keywords: risk management, financial time series, Value at Risk, Expected Shortfall, Monte Carlo simulation, GARCH modelling, copulas, hybrid distributions, generalized Pareto distribution, extreme value theory, backtesting, liquidity horizons, the Basel framework
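As one concrete example of the "commonly used VaR backtests" mentioned above (not the Righi and Ceretta ES backtest itself), the sketch below implements Kupiec's proportion-of-failures test, which checks whether the number of VaR exceptions is consistent with the chosen confidence level. The return series and the deliberately simple constant VaR forecast are illustrative assumptions.

    import numpy as np
    from scipy.stats import chi2

    def kupiec_pof(returns, var_forecasts, alpha=0.99):
        """Kupiec proportion-of-failures test. var_forecasts are positive loss quantiles;
        an exception occurs when the realised loss exceeds the forecast VaR."""
        exceptions = (-returns) > var_forecasts
        n, x = len(returns), int(exceptions.sum())
        p = 1 - alpha
        if x in (0, n):                                  # degenerate likelihood ratio
            lr = -2 * n * (np.log(1 - p) if x == 0 else np.log(p))
        else:
            phat = x / n
            lr = -2 * ((n - x) * np.log((1 - p) / (1 - phat)) + x * np.log(p / phat))
        return x, lr, chi2.sf(lr, df=1)                  # exceptions, LR statistic, p-value

    rng = np.random.default_rng(3)
    rets = 0.01 * rng.standard_t(df=5, size=250)         # one year of daily returns (illustrative)
    var99 = np.full(250, np.quantile(-rets, 0.99))       # in practice this comes from the risk model
    print(kupiec_pof(rets, var99, alpha=0.99))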
135

Dynamic portfolio construction and portfolio risk measurement

Mazibas, Murat January 2011 (has links)
The research presented in this thesis addresses different aspects of dynamic portfolio construction and portfolio risk measurement. It brings together research on dynamic portfolio optimization, replicating portfolio construction, dynamic portfolio risk measurement and volatility forecasting. The overall aim of this research is threefold. First, it examines the portfolio construction and risk measurement performance of a broad set of volatility forecast and portfolio optimization models. Second, in an effort to improve their forecast accuracy and portfolio construction performance, it proposes new models or new formulations of available models. Third, in order to enhance the replication performance of hedge fund returns, it introduces a replication approach that has the potential to be used in numerous applications in investment management. In order to achieve these aims, Chapter 2 addresses risk measurement in dynamic portfolio construction. In this chapter, further evidence on the use of multivariate conditional volatility models in hedge fund risk measurement and portfolio allocation is provided by using monthly returns of hedge fund strategy indices for the period 1990 to 2009. Building on Giamouridis and Vrontos (2007), a broad set of multivariate GARCH models, as well as the simpler exponentially weighted moving average (EWMA) estimator of RiskMetrics (1996), are considered. It is found that, while multivariate GARCH models provide some improvements in portfolio performance over static models, they are generally dominated by the EWMA model. In particular, in addition to providing better risk-adjusted performance, the EWMA model leads to dynamic allocation strategies that have substantially lower turnover and could therefore be expected to involve lower transaction costs. Moreover, it is shown that these results are robust across the low-volatility and high-volatility sub-periods. Chapter 3 addresses optimization in dynamic portfolio construction. This chapter examines the advantages of introducing alternative optimization frameworks over the mean-variance framework in constructing hedge fund portfolios for a fund of funds. Using monthly return data of hedge fund strategy indices for the period 1990 to 2011, the standard mean-variance approach is compared with approaches based on CVaR, CDaR and Omega, for both conservative and aggressive hedge fund investors. In order to estimate portfolio CVaR, CDaR and Omega, a semi-parametric approach is proposed, in which the marginal density of each hedge fund index is first modelled using extreme value theory and the joint density of hedge fund index returns is constructed using a copula-based approach. Hedge fund returns are then simulated from this joint density in order to compute CVaR, CDaR and Omega. The semi-parametric approach is compared with the standard non-parametric approach, in which the quantiles of the marginal density of portfolio returns are estimated empirically and used to compute CVaR, CDaR and Omega. Two main findings are reported. The first is that CVaR-, CDaR- and Omega-based optimization offers a significant improvement in terms of risk-adjusted portfolio performance over mean-variance optimization. The second is that, for all three risk measures, semi-parametric estimation of the optimal portfolio offers a very significant improvement over non-parametric estimation. The results are robust to the choice of target return and the estimation period.
Chapter 4 seeks improvements in portfolio risk measurement by addressing volatility forecasting. In this chapter, two new univariate Markov regime-switching models based on intraday range are introduced. A regime-switching conditional volatility model is combined with a robust measure of volatility based on intraday range in a framework for volatility forecasting. The chapter proposes a one-factor and a two-factor model that combine useful properties of range, regime switching, nonlinear filtration and GARCH frameworks. Incremental improvements in volatility forecasting performance are sought by employing regime switching in a conditional volatility setting with enhanced information content on true volatility. Weekly S&P 500 index data for 1982-2010 are used. Models are evaluated using a number of volatility proxies that approximate true integrated volatility. The forecast performance of the proposed models is compared to renowned return-based and range-based models, namely the EWMA of RiskMetrics, the hybrid EWMA of Harris and Yilmaz (2009), the GARCH of Bollerslev (1988), the CARR of Chou (2005), the FIGARCH of Baillie et al. (1996) and the MRSGARCH of Klaassen (2002). It is found that the proposed models produce more accurate out-of-sample forecasts, contain more information about true volatility and exhibit similar or better performance when used for value-at-risk comparison. Chapter 5 seeks improvements in risk measurement for better dynamic portfolio construction. This chapter proposes multivariate versions of the one- and two-factor MRSACR models introduced in the fourth chapter. In these models, useful properties of regime-switching models, nonlinear filtration and range-based estimators are combined with a multivariate setting based on static and dynamic correlation estimates. In comparing the out-of-sample forecast performance of these models, eminent return- and range-based volatility models are employed as benchmarks. A hedge fund portfolio construction exercise is conducted in order to investigate the out-of-sample portfolio performance of the proposed models. The out-of-sample performance of each model is also tested using a number of statistical tests; in particular, a broad range of statistical tests and loss functions are utilized in evaluating the forecast performance of the variance-covariance matrix of each portfolio. It is found that, in terms of statistical test results, the proposed models offer significant improvements in forecasting the true volatility process, and, in terms of the risk and return criteria employed, they perform better than the benchmark models. The proposed models construct hedge fund portfolios with higher risk-adjusted returns and lower tail risks, and offer superior risk-return trade-offs and better active management ratios. However, in most cases these improvements come at the expense of higher portfolio turnover and rebalancing expenses. Chapter 6 addresses dynamic portfolio construction for better hedge fund return replication and proposes a new approach. In this chapter, a method for hedge fund replication is proposed that uses a factor-based model supplemented with a series of risk and return constraints that implicitly target all the moments of the hedge fund return distribution.
The approach is used to replicate the monthly returns of ten broad hedge fund strategy indices, using long-only positions in ten equity, bond, foreign exchange and commodity indices, all of which can be traded using liquid, investible instruments such as futures, options and exchange-traded funds. In out-of-sample tests, the proposed approach provides an improvement over the pure factor-based model, offering a closer match to both the return performance and the risk characteristics of the hedge fund strategy indices.
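The EWMA estimator that dominates the multivariate GARCH models in Chapter 2 is simple to state; the sketch below shows a RiskMetrics-style covariance update on illustrative hedge-fund-index returns, followed by a toy minimum-variance allocation. The data, decay factor, warm-up window and the simple minimum-variance objective (rather than the CVaR/CDaR/Omega objectives described above) are assumptions made for the example.

    import numpy as np

    def ewma_covariance(returns, lam=0.94, warmup=20):
        """RiskMetrics-style EWMA covariance: S_t = lam * S_{t-1} + (1 - lam) * r_t r_t'."""
        S = np.cov(returns[:warmup].T)                 # warm-up estimate from the first observations
        for r in returns[warmup:]:
            S = lam * S + (1 - lam) * np.outer(r, r)
        return S

    def min_variance_weights(S):
        """Unconstrained minimum-variance weights summing to one; a stand-in for the
        richer dynamic allocation objectives used in the thesis."""
        ones = np.ones(S.shape[0])
        w = np.linalg.solve(S, ones)
        return w / w.sum()

    rng = np.random.default_rng(4)
    # Illustrative monthly returns for 5 hedge fund strategy indices over 240 months.
    cov = 0.0004 * (0.7 * np.eye(5) + 0.3)
    rets = rng.multivariate_normal(np.zeros(5), cov, size=240)
    print(np.round(min_variance_weights(ewma_covariance(rets)), 3))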
136

Risks in Commodity and Currency Markets

Bozovic, Milos 17 April 2009 (has links)
This thesis analyzes market risk factors in commodity and currency markets. It focuses on the impact of extreme events on the prices of financial products traded in these markets and on the overall market risk faced by investors. The first chapter develops a simple two-factor jump-diffusion model for the valuation of contingent claims on commodities in order to investigate the pricing implications of shocks that are exogenous to this market. The second chapter analyzes the nature and pricing implications of abrupt changes in exchange rates, as well as the ability of these changes to explain the shapes of option-implied volatility "smiles". Finally, the third chapter employs the notion that key results of univariate extreme value theory can be applied separately to the principal components of the ARMA-GARCH residuals of a multivariate return series. The proposed approach yields more precise Value at Risk forecasts than conventional multivariate methods, while maintaining the same efficiency.
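The third chapter's idea, univariate EVT applied separately to the principal components of ARMA-GARCH residuals, can be sketched as follows. Here the residuals are simply simulated heavy-tailed noise rather than output from fitted ARMA-GARCH models, the threshold is a fixed quantile, and only the per-component POT quantiles are shown; the full method would simulate from each component's semi-parametric distribution and rotate the draws back through the loadings to obtain portfolio-level VaR.

    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(5)

    # Stand-in for ARMA-GARCH standardised residuals of a 4-asset return series.
    resid = rng.standard_t(df=5, size=(2500, 4))
    resid = resid - resid.mean(axis=0)

    # Principal components: uncorrelated, so univariate EVT can be applied to each one separately.
    evals, evecs = np.linalg.eigh(np.cov(resid.T))
    pcs = resid @ evecs                                  # component scores, one column per component

    alpha, u_q = 0.99, 0.95
    for j in range(pcs.shape[1]):
        comp_loss = -pcs[:, j]                           # treat the lower tail as the loss tail
        u = np.quantile(comp_loss, u_q)
        xi, _, beta = genpareto.fit(comp_loss[comp_loss > u] - u, floc=0.0)
        p_u = (comp_loss > u).mean()
        var_j = u + beta / xi * (((1 - alpha) / p_u) ** (-xi) - 1)   # POT quantile per component
        print(f"component {j}: 99% VaR of the score = {var_j:.3f}")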
137

Modeling sea-level rise uncertainties for coastal defence adaptation using belief functions / Utilisation des fonctions de croyance pour la modélisation des incertitudes dans les projections de l'élévation du niveau marin pour l'adaptation côtière

Ben Abdallah, Nadia 12 March 2014 (has links)
Coastal adaptation is an imperative to deal with the elevation of the global sea level caused by ongoing global warming. However, when defining adaptation actions, coastal engineers encounter substantial uncertainties in the assessment of future hazards and risks. These uncertainties may stem from limited knowledge (e.g., about the magnitude of future sea-level rise) or from the natural variability of some quantities (e.g., extreme sea conditions). A proper consideration of these uncertainties is of principal concern for efficient design and adaptation. The objective of this work is to propose a methodology for uncertainty analysis based on the theory of belief functions, an uncertainty formalism that offers greater flexibility for handling both aleatory and epistemic uncertainties than probabilities. In particular, it allows experts' incomplete knowledge (quantiles, intervals, etc.) to be represented more faithfully, and multi-source evidence to be combined while taking into account dependences and reliabilities. Statistical evidence can be modelled by likelihood-based belief functions, which are simply the translation of some inference principles into evidential terms. By exploiting the mathematical equivalence between belief functions and random intervals, uncertainty can be propagated through models by Monte Carlo simulation. We use this method to quantify uncertainty in future projections of the elevation of the global sea level by 2100 and evaluate its impact on some coastal risk indicators used in coastal design. Sea-level rise projections are derived from physical modelling, expert elicitation and historical sea-level measurements. Then, within a methodologically oriented case study, we assess the impact of climate change on extreme sea conditions and evaluate the reinforcement of a typical coastal defence asset so that its functional performance is maintained.
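A minimal sketch of the belief-function machinery the abstract relies on: evidence about sea-level rise is encoded as a random interval (nested focal intervals with masses), pushed through a monotone impact model by Monte Carlo, and summarised by belief and plausibility bounds. The focal intervals, masses, impact model and design value are all invented for the illustration.

    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical consonant evidence on global sea-level rise by 2100 (metres): nested intervals + masses.
    focal_intervals = np.array([[0.3, 0.6], [0.2, 0.9], [0.1, 1.2]])
    masses = np.array([0.5, 0.3, 0.2])

    def reinforcement(slr):
        """Toy impact model: required dike reinforcement grows nonlinearly with sea-level rise."""
        return 1.5 * slr ** 1.3

    # Monte Carlo propagation of the random interval: draw a focal interval according to its mass and
    # push both endpoints through the monotone model, giving a random interval on the output.
    n = 50_000
    idx = rng.choice(len(masses), size=n, p=masses)
    lo = reinforcement(focal_intervals[idx, 0])
    hi = reinforcement(focal_intervals[idx, 1])

    # Belief and plausibility that the needed reinforcement stays below a given design value.
    design = 1.0
    belief = (hi <= design).mean()          # the whole output interval lies below the design value
    plausibility = (lo <= design).mean()    # at least part of the output interval lies below it
    print(f"Bel = {belief:.3f}, Pl = {plausibility:.3f}")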
138

Modelling equity risk and external dependence: A survey of four African Stock Markets

Samuel, Richard Abayomi 18 May 2019 (has links)
Department of Statistics / MSc (Statistics) / The ripple effect of a stock market crash due to extremal dependence is a global issue with key attention and it is at the core of all modelling efforts in risk management. Two methods of extreme value theory (EVT) were used in this study to model equity risk and extremal dependence in the tails of stock market indices from four African emerging markets: South Africa, Nigeria, Kenya and Egypt. The first is the "bivariate-threshold-excess model" and the second is the "point process approach". With regard to the univariate analysis, the first finding in the study shows, in descending hierarchy, that volatility with persistence is highest in the South African market, followed by the Egyptian market, then the Nigerian market and lastly the Kenyan equity market. In terms of risk hierarchy, the Egyptian EGX 30 market is the most risk-prone, followed by the South African JSE-ALSI market, then the Nigerian NIGALSH market, and the least risky is the Kenyan NSE 20 market. It is therefore concluded that risk is not a brainchild of volatility in these markets. For the bivariate modelling, the extremal dependence findings indicate that the African continent's regional equity markets present a huge investment platform for investors and traders, and offer tremendous opportunity for portfolio diversification and investment synergies between markets. These synergistic opportunities are due to the markets being asymptotically (extremally) independent or (very) weakly asymptotically dependent and negatively dependent. This outcome is consistent with the findings of Alagidede (2008), who analysed these same markets using co-integration analysis. The bivariate-threshold-excess and point process models are appropriate for modelling the markets' risks. For modelling the extremal dependence, however, given the same marginal threshold quantile, the point process has more access to the extreme observations due to its wider sphere of coverage than the bivariate-threshold-excess model. / NRF
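A simple empirical diagnostic behind statements about asymptotic (extremal) dependence or independence is the coefficient chi(q) = P(Y above its q-quantile | X above its q-quantile): if it drifts towards zero as q approaches one, the pair looks asymptotically independent. The sketch below computes it on simulated, weakly dependent data standing in for two of the market index return series; the thesis itself uses the bivariate-threshold-excess and point process models rather than this raw estimator.

    import numpy as np

    def chi_u(x, y, q):
        """Empirical tail dependence chi(q) = P(Y > F_Y^{-1}(q) | X > F_X^{-1}(q))."""
        ux, uy = np.quantile(x, q), np.quantile(y, q)
        joint = np.mean((x > ux) & (y > uy))
        return joint / (1 - q)

    rng = np.random.default_rng(7)
    # Illustrative stand-ins for daily returns of two markets (e.g. JSE-ALSI and EGX 30), weakly dependent.
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.2], [0.2, 1.0]], size=3000)
    x, y = z[:, 0], z[:, 1]

    for q in (0.90, 0.95, 0.99):
        print(f"chi({q}) = {chi_u(x, y, q):.3f}")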
139

Simulation-Based Portfolio Optimization with Coherent Distortion Risk Measures / Simuleringsbaserad portföljoptimering med koherenta distortionsriskmått

Prastorfer, Andreas January 2020 (has links)
This master's thesis studies portfolio optimization using linear programming algorithms. The contribution of the thesis is an extension of the convex framework for portfolio optimization with Conditional Value-at-Risk introduced by Rockafellar and Uryasev. The extended framework considers risk measures belonging to the intersection of the classes of coherent risk measures and distortion risk measures, known as coherent distortion risk measures. The risk measures in this class considered here are Conditional Value-at-Risk, the Wang transform, the Block Maxima and the Dual Block Maxima measures. The extended portfolio optimization framework is applied to a reference portfolio consisting of stocks, options and a bond index, all from the Swedish market. The returns of the assets in the reference portfolio are modelled with elliptical distributions and with normal copulas with asymmetric marginal return distributions. The portfolio optimization framework is simulation based and measures risk using scenarios simulated from the assumed portfolio distribution model. To model the return data with asymmetric distributions, the tails of the marginal distributions are fitted with generalized Pareto distributions, and the dependence structure between the assets is captured using a normal copula. The results obtained from the optimizations are compared across the different distributional return assumptions for the portfolio and the four risk measures. A Markowitz solution to the problem is computed using the mean absolute deviation as the risk measure; this benchmark solution is the one against which the optimal solutions obtained with the coherent distortion risk measures are compared. The coherent distortion risk measures have the attractive property of assigning user-defined weights to different parts of the loss distribution and hence of treating increasingly severe losses as greater risks. The user-defined loss-weighting property and the asymmetric return distribution models are used to find optimal portfolios that account for extreme losses. An important finding of this project is that optimal solutions for asset returns simulated from asymmetric distributions are associated with greater risks, which is a consequence of more accurate modelling of the distribution tails. Furthermore, weighting larger losses with increasingly larger weights shows that the portfolio risk is greater, and a safer position is taken.
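The convex framework being extended starts from the Rockafellar-Uryasev linear programme for minimising CVaR over simulated scenarios; a minimal long-only version is sketched below on invented scenario data, without the distortion-risk-measure extension, the copula/GPD scenario model or the asset universe used in the thesis.

    import numpy as np
    from scipy.optimize import linprog

    def min_cvar_weights(scenarios, alpha=0.95):
        """Minimise CVaR_alpha of portfolio losses over return scenarios (rows), long-only, fully invested."""
        n_scen, n_assets = scenarios.shape
        # Decision vector: [w (n_assets), zeta (1), z (n_scen)]; objective zeta + mean excess / (1 - alpha).
        c = np.concatenate([np.zeros(n_assets), [1.0], np.full(n_scen, 1.0 / ((1 - alpha) * n_scen))])
        # Constraints z_i >= -w.r_i - zeta  <=>  -r_i.w - zeta - z_i <= 0, plus z >= 0 via bounds.
        A_ub = np.hstack([-scenarios, -np.ones((n_scen, 1)), -np.eye(n_scen)])
        b_ub = np.zeros(n_scen)
        A_eq = np.concatenate([np.ones(n_assets), [0.0], np.zeros(n_scen)]).reshape(1, -1)
        b_eq = np.array([1.0])
        bounds = [(0, None)] * n_assets + [(None, None)] + [(0, None)] * n_scen
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        return res.x[:n_assets], res.fun

    rng = np.random.default_rng(8)
    # Illustrative scenarios: 2,000 simulated returns for 4 assets with occasional common downward shocks.
    base = rng.multivariate_normal(np.zeros(4), 0.0025 * (0.6 * np.eye(4) + 0.4), size=2000)
    scenarios = base - 0.02 * (rng.random((2000, 1)) < 0.05)
    weights, cvar = min_cvar_weights(scenarios, alpha=0.95)
    print(np.round(weights, 3), round(cvar, 4))

Replacing the equal per-scenario weights in the objective with a distortion-based weighting of the ordered losses is, loosely speaking, where the coherent distortion risk measures enter the extended framework.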
140

Tail Risk Protection via reproducible data-adaptive strategies

Spilak, Bruno 15 February 2024 (has links)
This dissertation shows the potential of machine learning methods for managing tail risk in a non-stationary and high-dimensional setting. For this, we compare in a robust manner data-dependent approaches from parametric or non-parametric statistics with data-adaptive methods. As these methods need to be reproducible to ensure trust and transparency, we start by proposing a new platform, called Quantinar, which aims to set a new standard for academic publications. In the second chapter, we turn to the core subject of this thesis and compare various parametric, local parametric and non-parametric methods to create a dynamic trading strategy that protects against tail risk in the Bitcoin cryptocurrency. In the third chapter, we propose a new portfolio allocation method, called NMFRB, that deals with high dimensions thanks to a dimension reduction technique, convex Non-negative Matrix Factorization. This technique allows us to find latent, interpretable portfolios that are diversified out-of-sample. We show in two universes that the proposed method outperforms other classical machine-learning-based methods, such as Hierarchical Risk Parity (HRP), in terms of risk-adjusted returns. We also test the robustness of our results via Monte Carlo simulation. Finally, the last chapter combines our previous approaches to develop a tail-risk protection strategy for portfolios: we extend NMFRB to tail-risk measures, we address the non-linear relationships between assets during tail events by developing a specific non-linear latent factor model, and we develop a dynamic tail-risk protection strategy that deals with the non-stationarity of asset returns using classical econometric models. We show that our strategy is successful at reducing large drawdowns and outperforms other modern tail-risk protection strategies, such as the Value-at-Risk-spread strategy. We verify our findings by performing various data snooping tests.
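A stripped-down example of the general kind of dynamic tail-risk protection overlay discussed in the last chapter (not the NMFRB model or the latent factor construction): exposure to a risky asset is scaled so that a one-day VaR forecast stays below a cap, with the remainder held in cash, and the maximum drawdown is compared to buy-and-hold. The simulated return process, the normal-quantile VaR forecast and the cap are assumptions made for the illustration.

    import numpy as np

    rng = np.random.default_rng(9)

    # Illustrative daily returns with volatility clustering (toy GARCH(1,1)-style simulation).
    n = 2500
    vol, ret = np.empty(n), np.empty(n)
    vol[0] = 0.01
    ret[0] = vol[0] * rng.standard_normal()
    for t in range(1, n):
        vol[t] = np.sqrt(0.02 * 0.01 ** 2 + 0.08 * ret[t - 1] ** 2 + 0.90 * vol[t - 1] ** 2)
        ret[t] = vol[t] * rng.standard_normal()

    # Protection rule: scale exposure so the one-day 99% VaR forecast stays at the cap, rest in cash (0%).
    var_cap = 0.02
    var_forecast = 2.33 * vol                               # normal-quantile VaR from the volatility forecast
    exposure = np.clip(var_cap / var_forecast, 0.0, 1.0)    # vol[t] only uses information up to t-1
    protected = exposure * ret

    def max_drawdown(returns):
        nav = np.cumprod(1.0 + returns)
        return float(np.max(1.0 - nav / np.maximum.accumulate(nav)))

    print("buy & hold max drawdown:", round(max_drawdown(ret), 3))
    print("protected  max drawdown:", round(max_drawdown(protected), 3))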
