11

Risk Measures and Dependence Modeling in Financial Risk Management

Eriksson, Kristofer January 2014 (has links)
In financial risk management it is essential to be able to model dependence in markets and portfolios in an accurate and efficient way. High positive dependence between the assets in a portfolio can be devastating, especially in times of crisis, since losses will then most likely occur at the same time in all assets. The dependence is therefore directly linked to the risk of the portfolio. The risk can be estimated by several different risk measures, for example Value-at-Risk and Expected Shortfall. This paper studies different ways to measure risk and model dependence, both theoretically and empirically. The main focus is on copulas, a way to model and construct complex dependencies. Copulas are a useful tool since they allow the user to specify the marginal distributions separately and then link them together with the copula. However, copulas can be quite complex to understand, and it is not trivial to know which copula to use. An implemented copula model might give the user a "black-box" feeling and severe model risk if the user trusts the model too much and is unaware of what is going on. Another model is linear correlation, which is also a way to measure dependence. This is a simpler model and as such is believed to be easier for all users to understand. However, linear correlation is only easy to interpret in the case of elliptical distributions, and when we move away from this assumption (as is usually the case for financial data), some clear drawbacks and pitfalls appear. A third model, called historical simulation, uses the historical returns of the portfolio and estimates the risk from these data without making any parametric assumptions about the dependence. The dependence is assumed to be incorporated in the historical evolution of the portfolio.
This model is very simple and very popular, but it relies more heavily than the previous two models on the assumption that history will repeat itself, and it needs many more historical observations to yield good results. Here we face the risk that the market dynamics have changed when looking too far back in history. In this paper some different copula models are implemented and compared to the historical simulation approach by estimating risk with Value-at-Risk and Expected Shortfall. The parameters of the copulas are also investigated under calm and stressed market periods; this information is useful when performing stress tests. The empirical study indicates that it is difficult to distinguish the parameters between the stressed and calm market periods. The overall conclusion is that the choice of model depends on our beliefs about the future distribution. If we believe that the distribution is elliptical, then a correlation model is good; if it is believed to have a complex dependence structure, then the user should turn to a copula model; and if we can assume that history will repeat itself, then historical simulation is advantageous.
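The risk measures named in this abstract can be illustrated concretely. Below is a minimal sketch (illustrative code, not from the thesis, with a made-up loss sample) of historical-simulation estimates of Value-at-Risk and Expected Shortfall:

```python
import math

# Minimal sketch (not code from the thesis): historical-simulation estimates
# of Value-at-Risk (VaR) and Expected Shortfall (ES). Losses are signed
# numbers with positive = loss; VaR_alpha is the empirical alpha-quantile of
# the losses, and ES_alpha is the mean loss at or beyond that quantile.

def historical_var(losses, alpha=0.95):
    """Empirical alpha-quantile: the ceil(alpha*n)-th smallest loss."""
    ordered = sorted(losses)
    idx = max(0, math.ceil(alpha * len(ordered)) - 1)
    return ordered[idx]

def historical_es(losses, alpha=0.95):
    """Mean of the losses at or beyond the alpha-quantile."""
    var = historical_var(losses, alpha)
    tail = [x for x in losses if x >= var]
    return sum(tail) / len(tail)

losses = [-2.0, -1.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0, 10.0]
print(historical_var(losses, 0.90))  # 6.0
print(historical_es(losses, 0.90))   # 8.0 (mean of 6.0 and 10.0)
```

The copula-based alternatives the thesis compares differ only in how the loss sample is generated: simulated from a fitted dependence model rather than taken from raw history.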
12

Automated Market Making: Theory and Practice

Othman, Abraham M 15 May 2012 (has links)
Market makers are unique entities in a market ecosystem. Unlike other participants that have exposure (either speculative or endogenous) to potential future states of the world, market making agents either endeavor to secure a risk-free profit or to facilitate trade that would otherwise not occur. In this thesis we present a principled theoretical framework for market making along with applications of that framework to different contexts. We begin by presenting a synthesis of two concepts—automated market making from the artificial intelligence literature and risk measures from the finance literature—that were developed independently. This synthesis implies that the market making agents we develop in this thesis also correspond to better ways of measuring the riskiness of a portfolio—an important application in quantitative finance. We then present the results of the Gates Hillman Prediction Market (GHPM), a fielded large-scale test of automated market making that successfully predicted the opening date of the new computer science buildings at CMU. Ranging over 365 possible opening days, the market’s large event partition required new advances like a novel span-based elicitation interface. The GHPM uncovered some practical flaws of automated market makers; we investigate how to rectify these failures by describing several classes of market makers that are better at facilitating trade in Internet prediction markets. We then shift our focus to notions of profit, and how a market maker can trade to maximize its own account. We explore applying our work to one of the largest and most heavily-traded markets in the world by recasting market making as an algorithmic options trading strategy. Finally, we investigate optimal market makers for fielding wagers when good priors are known, as in sports betting or insurance.
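A standard concrete instance of the cost-function market makers this literature builds on is Hanson's Logarithmic Market Scoring Rule (LMSR). The sketch below is illustrative and not code from the thesis:

```python
import math

# Sketch of Hanson's Logarithmic Market Scoring Rule (LMSR), the canonical
# cost-function market maker. The market maker maintains a convex cost
# function C(q) over the outstanding share vector q; a trader pays
# C(q_new) - C(q_old) to move the market from q_old to q_new.
# (Illustrative code, not taken from the thesis.)

def lmsr_cost(q, b=100.0):
    """C(q) = b * log(sum_i exp(q_i / b)); b controls liquidity."""
    m = max(q)  # subtract the max for numerical stability
    return m + b * math.log(sum(math.exp((qi - m) / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    m = max(q)
    weights = [math.exp((qi - m) / b) for qi in q]
    return weights[i] / sum(weights)

q = [0.0, 0.0]                     # two outcomes, no shares sold yet
print(round(lmsr_price(q, 0), 3))  # 0.5: symmetric starting prices
trade = [10.0, 0.0]                # buy 10 shares of outcome 0
cost = lmsr_cost([a + d for a, d in zip(q, trade)]) - lmsr_cost(q)
print(5.0 < cost < 10.0)           # True: cost exceeds 10 * 0.5 as the price rises
```

The connection the thesis draws is that cost functions like C above are, up to sign conventions, exactly the convex risk measures of the finance literature.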
13

Dynamic yacht strategy optimisation

Tagliaferri, Francesca January 2015 (has links)
Yacht races are won by good sailors racing fast boats. A good skipper takes decisions at key moments of the race based on the anticipated wind behaviour and on his position in the racing area relative to the competitors. His aim is generally to complete the race before all his opponents or, when this is not possible, to perform better than some of them. In the past two decades some methods have been proposed to compute optimal strategies for a yacht race. Those strategies aim to minimize the expected time needed to complete the race and are based on the assumption that the faster a yacht, the higher the number of races it will win (and opponents it will defeat). In a match race, however, only two yachts are competing. A skipper's aim is therefore to complete the race before his opponent rather than in the shortest possible time. This means that being faster on average does not necessarily mean winning the majority of races. This thesis investigates the possibility of computing a sailing strategy for a match race that can defeat an opponent who follows a fixed strategy minimising the expected time of completion of the race. The proposed method includes two novel aspects in the strategy computation. First, a short-term wind forecast, based on an Artificial Neural Network (ANN) model, is performed in real time during the race using the wind measurements collected on board. Second, depending on the relative position with respect to the opponent, decisions with different levels of risk aversion are computed; the risk attitude is modeled using coherent risk measures. The proposed algorithm is implemented in a computer program and is tested by simulating match races between identical boats following progressively refined strategies. Results presented in this thesis show that the intuitive idea of taking more risk when losing and adopting a conservative attitude when winning is confirmed by the risk model used.
The performance of the ANN for short-term wind forecasting is tested on both wind speed and wind direction. It is shown that, for time steps of the order of seconds and given adequate computational power, ANNs perform better than linear models (persistence models, ARMA) and other nonlinear models (support vector machines). The outcome of the simulated races confirms that maximising the probability of winning a match race does not necessarily correspond to minimising the expected time needed to complete the race.
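The persistence model used as a baseline above is simple enough to state in code. A sketch with made-up wind-speed samples (not the thesis's implementation):

```python
# The persistence baseline: forecast that the next measurement equals the
# last observed one. Illustrative code with hypothetical wind samples,
# not the thesis's implementation.

def persistence_forecast(series, horizon=1):
    """Repeat the last observed value over the forecast horizon."""
    return [series[-1]] * horizon

def mean_abs_error(forecasts, actuals):
    """Mean absolute error between forecasts and realized values."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

wind = [5.0, 5.2, 5.1, 5.4, 5.3]                  # hypothetical samples (m/s)
pred = persistence_forecast(wind[:-1])             # forecast the final point
print(round(mean_abs_error(pred, wind[-1:]), 2))   # 0.1
```

Any ANN, ARMA, or SVM forecaster has to beat this trivial baseline out of sample to justify its extra complexity, which is the comparison the thesis reports.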
14

Retrocession for Portfolio Optimization in Reinsurance / Retrocession för optimering av återförsäkringsportföljer

Rasmusson, Erik January 2014 (has links)
Reinsurance is the insurance protection of an insurance company. Retrocession is reinsurance for a portfolio of reinsurance contracts. Reinsurance portfolios can comprise several thousand contracts that may be contingent on the same events, which makes retrocession a complex decision. This thesis develops an optimization model for retrocession, where the aim is to maximize the expected result while satisfying constraints on risk. A review and development of risk measures that can be included in the model is performed. The optimization model is implemented and applied to a large portfolio of reinsurance contracts using mathematical programming algorithms. Results suggest that a benefit amounting to several percent of the annual expected result may be obtained by applying optimal retrocession to the reinsurance portfolio. The results depend on several assumptions that, if not fulfilled, may diminish the benefit.
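The "maximize expected result subject to risk constraints" formulation can be sketched at toy scale. All numbers below are hypothetical, and the thesis solves a far larger model with mathematical programming rather than exhaustive search:

```python
import itertools
import statistics

# Toy sketch (purely hypothetical numbers, not the thesis's model): choose
# which retrocession covers to buy so that the expected net result is
# maximized subject to a worst-case loss limit over a set of scenarios.

scenarios = [0.0, 5.0, 20.0, 60.0]   # equally likely gross-loss scenarios
premium_income = 12.0
covers = [                            # (premium paid, recovery per scenario)
    (3.0, [0.0, 0.0, 10.0, 30.0]),
    (2.0, [0.0, 2.0, 5.0, 10.0]),
]
risk_limit = -20.0                    # worst scenario result must stay above this

def evaluate(chosen):
    """Expected and worst-case net result over the scenarios."""
    nets = []
    for s, gross in enumerate(scenarios):
        recovered = sum(recoveries[s] for _, recoveries in chosen)
        paid = sum(premium for premium, _ in chosen)
        nets.append(premium_income - gross + recovered - paid)
    return statistics.mean(nets), min(nets)

best = None
for r in range(len(covers) + 1):
    for chosen in itertools.combinations(covers, r):
        expected, worst = evaluate(chosen)
        if worst >= risk_limit and (best is None or expected > best[0]):
            best = (expected, chosen)

print(best[0])       # 0.0: expected result of the only feasible choice
print(len(best[1]))  # 2: both covers are bought
```

With thousands of contracts the feasible set cannot be enumerated, which is why the thesis turns to mathematical programming; the objective and constraint structure are the same.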
15

Coherent And Convex Measures Of Risk

Yildirim, Irem 01 September 2005 (has links) (PDF)
One of the financial risks an agent has to deal with is market risk. Market risk is caused by the uncertainty attached to asset values. There exist various measures that try to model market risk, the most widely accepted one being Value-at-Risk. However, Value-at-Risk does not encourage portfolio diversification in general, whereas a consistent risk measure has to do so. In this work, risk measures satisfying these consistency conditions are examined on a theoretical basis. Different types of coherent and convex risk measures are investigated. Moreover, the extension of coherent risk measures to multiperiod settings is discussed.
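The point that Value-at-Risk "does not encourage diversification" is the failure of subadditivity. A classic textbook counterexample (standard in the literature, not from this thesis) can be checked directly:

```python
# Classic counterexample: Value-at-Risk is not subadditive, so it can
# penalize diversification. Two independent bonds each default with
# probability 0.04, losing 100; individually VaR_0.95 is 0, but the
# diversified two-bond portfolio has VaR_0.95 = 100.
# (Standard textbook example, not code from the thesis.)

def var_discrete(dist, alpha):
    """dist: (loss, probability) pairs. Smallest L with P(loss <= L) >= alpha."""
    cumulative = 0.0
    for loss, prob in sorted(dist):
        cumulative += prob
        if cumulative >= alpha - 1e-12:
            return loss
    return max(loss for loss, _ in dist)

one_bond = [(0.0, 0.96), (100.0, 0.04)]
two_bonds = [(0.0, 0.96 ** 2),           # neither defaults
             (100.0, 2 * 0.96 * 0.04),   # exactly one defaults
             (200.0, 0.04 ** 2)]         # both default

print(var_discrete(one_bond, 0.95))   # 0.0: each bond alone looks riskless
print(var_discrete(two_bonds, 0.95))  # 100.0: VaR(X+Y) > VaR(X) + VaR(Y)
```

Coherent measures such as Expected Shortfall are subadditive by construction and so cannot exhibit this failure, which is the motivation for the consistency axioms the thesis examines.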
16

Análise dos modelos baseados em lower partial moments: um estudo empírico para o Ibovespa e Dow Jones através da distância Hansen-Jagannathan

Herrera, Christian Jonnatan Jacobsen Soto 01 March 2017 (has links)
This dissertation empirically tests, through in-sample optimizations, the downside risk models Sortino, Upside Potential Ratio, Omega and Kappa, comparing them with the traditional CAPM derived from the mean-variance frontier, using the shares listed on the Ibovespa and Dow Jones (DJIA) to construct market portfolios for each model. These two classes of models differ in their assumptions and in how they measure risk. While the CAPM considers only the first two moments of the return distribution, the other measures take into account the higher moments of the distribution. The Hansen-Jagannathan distance, which measures the error of the Stochastic Discount Factor (SDF) generated by the models, showed a great distinction between the models in the two markets. While the CAPM performed better on the Dow Jones, the downside risk models presented better results for the Ibovespa, suggesting an advantage in using such models in markets with lower liquidity and greater asymmetry.
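As a rough illustration of the Hansen-Jagannathan distance used above (hypothetical two-asset sample, not the dissertation's data):

```python
import math

# Rough illustration (hypothetical two-asset sample, not the dissertation's
# data) of the Hansen-Jagannathan distance: sqrt(w' G^{-1} w), where
# w = E[m R] - 1 are the pricing errors of a candidate SDF m on gross
# returns R, and G = E[R R'] is the second-moment matrix of returns.

R = [(1.02, 1.10), (0.98, 0.85), (1.01, 1.15), (0.99, 0.95)]  # gross returns
m = [0.97, 1.02, 0.96, 1.01]                                  # candidate SDF

n = len(R)
w = [sum(m[t] * R[t][i] for t in range(n)) / n - 1.0 for i in range(2)]
G = [[sum(R[t][i] * R[t][j] for t in range(n)) / n
      for j in range(2)] for i in range(2)]

# Invert the 2x2 second-moment matrix explicitly
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det, G[0][0] / det]]

hj = math.sqrt(sum(w[i] * Ginv[i][j] * w[j]
                   for i in range(2) for j in range(2)))
print(round(hj, 4))  # a small positive number; lower = smaller pricing errors
```

Each asset-pricing model implies its own SDF series m; comparing the resulting distances is how the dissertation ranks the models in the two markets.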
17

Path-dependent Risk Measures - Theory and Applications

Möller, Philipp Maximilian 12 January 2021 (has links)
No description available.
18

Development and evaluation of stress tests: Utilizing stress tests to complement the current ex-ante analysis at Second Swedish National Pension Fund

Antonsen Åberg, Andreas January 2017 (has links)
Stress tests are regularly mentioned on the financial markets, where some institutions must perform them as a regulatory requirement and others use them as an optional way to complement their predictions. Stress tests are used to see how robust a financial instrument or a portfolio is in various scenarios. The challenge is to construct a stress test that is sufficiently extreme while still plausible. The objective of this work is to study various stress testing methods that can be applied at the Second Swedish National Pension Fund (AP2) in connection with their prediction of market risks. Two different methods are implemented with various scenarios, and thus separate analyses are performed for each method. Hence, the methods are not compared against each other; instead each method is analyzed individually, with advantages and disadvantages based on the choice of method and type of scenarios. The results of the first method, the historical stress test, show that the stressed portfolio would decrease in value under the specified scenario. For the second method, the coherent stress test, the results vary across the different scenarios.
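A historical stress test of the kind described here can be sketched in a few lines. Holdings and crisis-window returns below are hypothetical, not the fund's data:

```python
# Sketch of a historical stress test (hypothetical holdings and
# crisis-window returns, not the fund's data): re-price the current
# portfolio under asset returns observed in a past crisis window.

holdings = {"equities": 600.0, "bonds": 300.0, "fx": 100.0}       # market values
crisis_returns = {"equities": -0.35, "bonds": 0.05, "fx": -0.10}  # e.g. a 2008 window

stressed_pnl = sum(value * crisis_returns[asset]
                   for asset, value in holdings.items())
stressed_value = sum(holdings.values()) + stressed_pnl
print(round(stressed_pnl, 2))    # -205.0: loss in the stressed scenario
print(round(stressed_value, 2))  # 795.0
```

The coherent stress test differs in how the scenario is chosen: rather than replaying one historical window, it evaluates the portfolio over a plausibility-constrained set of hypothetical scenarios.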
19

A Bayesian approach to financial model calibration, uncertainty measures and optimal hedging

Gupta, Alok January 2010 (has links)
In this thesis we address problems associated with financial modelling from a Bayesian point of view. Specifically, we look at the problems of calibrating financial models, measuring the model uncertainty of a claim and choosing an optimal hedging strategy. Throughout the study, the local volatility model is used as a working example to clarify the proposed methods. The thesis assumes a prior probability density for the unknown parameter of the model we try to calibrate; this prior regularises the ill-posedness of the calibration problem. Observations of market prices are then used to update the prior, using Bayes' law, and give a posterior probability density for the unknown model parameter. The resulting Bayes estimators are shown to be consistent for finite-dimensional model parameters. The posterior density is then used to compute the Bayesian model average price. In tests on local volatility models it is shown that this price is closer to the price given by the true model than the prices of comparable calibration methods. The second part of the thesis focuses on quantifying model uncertainty. Using the framework of market risk measures, we propose axioms for new classes of model uncertainty measures. As in the market risk case, we prove representation theorems for coherent and convex model uncertainty measures. Example measures from the latter class are provided using the Bayesian posterior. These are used to value the model uncertainty for a range of financial contracts priced in the local volatility model. In the final part of the thesis we propose a method for selecting, from a set of candidate models, the model that optimises the hedging of a specified financial contract. In particular, we choose the model whose corresponding price and hedge optimise some hedging performance indicator.
The selection problem is solved using Bayesian loss functions to encapsulate the loss from using one model to price and hedge when the true model is a different model. Linkages are made with convex model uncertainty measures and traditional utility functions. Numerical experiments on a stochastic volatility model and the local volatility model show that the Bayesian strategy can outperform traditional strategies, especially for exotic options.
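The calibration scheme described above can be sketched on a toy one-parameter model. The parameter grid, pricing function, and quotes below are all hypothetical, not the thesis's local volatility setup:

```python
import math

# Toy sketch of Bayesian calibration (one hypothetical parameter and pricing
# function, not the thesis's local volatility setup): a discrete prior over
# theta is updated with noisy market quotes via Bayes' law, and the posterior
# gives the Bayesian model average price.

def model_price(theta):
    """Hypothetical pricing function, linear in the unknown parameter."""
    return 10.0 + 5.0 * theta

thetas = [0.0, 0.25, 0.5, 0.75, 1.0]
prior = [0.2] * len(thetas)          # flat prior regularises the calibration
observed = [12.4, 12.6, 12.5]        # noisy market quotes (made up)
sigma = 0.5                          # assumed observation-noise level

# posterior weight proportional to prior times Gaussian likelihood
post = []
for theta, p in zip(thetas, prior):
    log_lik = sum(-(y - model_price(theta)) ** 2 / (2 * sigma ** 2)
                  for y in observed)
    post.append(p * math.exp(log_lik))
z = sum(post)
post = [weight / z for weight in post]

bma_price = sum(weight * model_price(t) for weight, t in zip(post, thetas))
print(max(zip(post, thetas))[1])  # 0.5: the posterior mode
print(round(bma_price, 2))        # 12.5: Bayesian model average price
```

The thesis works with infinite-dimensional parameters (local volatility surfaces), where the update is a functional version of the same prior-times-likelihood computation.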
20

New aspects of product risk measurement and management in the U.S. life and health insurance industries

Shi, Bo 13 July 2012 (has links)
Product risk is important to firms’ enterprise risk management. This dissertation focuses on product risk in the U.S. life insurance and health insurance industries. In particular, we add new dimensions to the measurement of product risk for these industries, and we explore how these industries manage product risk in a context of other enterprise risks. In this dissertation, we identify new product risks, propose new measures, and study the management of these risks. In the life insurance industry, we identify a new type of product risk, the guarantee risk, caused by variable annuities with guaranteed living benefits (VAGLB). We propose a value-at-risk type measure inspired by the risk-based capital C3 Phase II to quantify the guarantee risk. In the health insurance industry, where the degree of uncertainty varies for different types of health insurance policies, we develop four exposure-based risk measures to capture health insurers’ product risks. Then we study how life and health insurers manage product risks (and asset risks) by using capital in the context of other risks and appropriate controls. We add to the literature in the life insurance industry by examining the relationship between capital and risks when the guarantee risk is accounted for. In the health insurance industry, to our knowledge, no similar research on the relationship between capital and risks has been conducted. In view of the current topicality of health insurance, our research therefore adds a timely contribution to the understanding of health insurer risk management in an era of health care reform. Capital structure theories, transaction cost economics, and insurers’ risk-taking behaviors provide the theoretical foundation for our research. As to methodology, we implement standard capital structure models for the life and health insurance industries using data from the National Association of Insurance Commissioners (NAIC) annual filings of life/health insurers and health insurers. 
Simultaneous-equations modeling is used to model life and health insurers' enterprise risk management, and the estimation is conducted with generalized estimating equations (GEE). We find that both U.S. life/health insurers and health insurers prudently build up capital as they experience more product risk and asset risk, controlling for the other enterprise risks. We also find that life/health insurers may be using derivatives as a partial substitute for capital when managing the new product risk caused by VAGLB, the guarantee risk.