11 |
Modelling Risk in Real-Life Multi-Asset Portfolios / Riskmodellering av verkliga portföljer med varierande tillgångsklasser. Hahn, Karin; Backlund, Axel. January 2023 (has links)
We develop a risk factor model based on data from a large number of portfolios spanning multiple asset classes. The risk factors are selected based on economic theory, through an analysis of the asset holdings, as well as on statistical tests. As many assets have limited historical data available, we implement regularisation and analyse its impact on handling sparsity. Based on the factor model, two parametric methods for calculating Value-at-Risk (VaR) for a portfolio are developed: one with constant volatility and one with a CCC-GARCH volatility updating scheme. These methods are evaluated through backtesting on daily and weekly returns of a selected set of portfolios whose contents reflect the larger majority well. A historical data approach for calculating VaR serves as a benchmark model. We find that under daily returns, the historical data method outperforms the factor models in terms of VaR violation rates; however, none of the methods yields independent violations. Under weekly returns, both factor models produce more accurate violation rates than the historical data model, with the CCC-GARCH model also yielding independent VaR violations for almost all portfolios due to its ability to adjust VaR estimates upwards in periods of increased market volatility. We conclude that if weekly VaR estimates are acceptable, tailored risk factor models provide accurate measures of portfolio risk.
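The violation-rate backtest summarised in this abstract can be illustrated with a minimal historical-simulation sketch. The window length, confidence level and synthetic return series below are assumptions chosen for illustration, not the thesis's data or implementation:

```python
# Minimal historical-simulation VaR backtest: estimate VaR from a rolling
# window of past returns, then count violations (days on which the realised
# loss exceeded the VaR estimate). Illustrative only.
import random

def historical_var(returns, alpha=0.99):
    """VaR as the empirical (1 - alpha) quantile of returns, sign-flipped."""
    ordered = sorted(returns)
    idx = int((1 - alpha) * len(ordered))
    return -ordered[idx]

def backtest(returns, window=250, alpha=0.99):
    """Observed VaR violation rate over the out-of-sample period."""
    violations = 0
    trials = 0
    for t in range(window, len(returns)):
        var_t = historical_var(returns[t - window:t], alpha)
        trials += 1
        if returns[t] < -var_t:  # loss exceeded the VaR estimate
            violations += 1
    return violations / trials

random.seed(1)
rets = [random.gauss(0.0, 0.01) for _ in range(1500)]
rate = backtest(rets)
# For a 99% VaR, an accurate model yields a violation rate near 1%.
print(round(rate, 4))
```

Independence of violations, which the abstract also evaluates, would additionally require checking that violations do not cluster in time (e.g. a Christoffersen-style test) rather than only counting them.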
12 |
Modelling Risk Dependencies and Propagation in Supply Chains. Morteza Beigi, Leila. 04 1900 (has links)
Today's highly integrated supply chains are exposed to various types of risks which disrupt the normal flow of goods or services within a supply chain network. Since most of these individual risks are interconnected, a mitigation strategy to tackle one risk may result in the exacerbation of another. Risk dependencies have been modelled using two approaches in the financial insurance literature: (i) random variables, and (ii) copulas. In this dissertation these studies are reviewed and extended, and applications of these models to different supply chain network configurations are presented. Then, a Poisson process model for risk propagation is proposed. Unlike existing models, the transition rate of the proposed model not only expresses the time dependency but also captures other possible dependencies in the network. Finally, the thesis is summarized and general directions and suggestions for future research on risk dependency and propagation modelling are provided. / Master of Science (MSc)
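A toy version of the propagation idea above: each node's disruption is a Poisson-type event whose rate increases with the number of already-disrupted upstream neighbours, so the transition rate captures network dependence as well as time. The rates, topology and simulation below are illustrative assumptions, not the thesis's model:

```python
# Next-event simulation of state-dependent disruption arrivals on a small
# supply network. A node's hazard rate = BASE_RATE plus SPILLOVER per
# disrupted upstream neighbour (all values made up for illustration).
import random

random.seed(42)
upstream = {0: [], 1: [0], 2: [0], 3: [1, 2]}  # node -> upstream suppliers
BASE_RATE, SPILLOVER = 0.1, 0.8

def simulate(horizon=100.0):
    """Return the disruption time of each node that fails before `horizon`."""
    t, down = 0.0, {}
    while len(down) < len(upstream) and t < horizon:
        # Current hazard rate of every still-operating node.
        rates = {n: BASE_RATE + SPILLOVER * sum(u in down for u in upstream[n])
                 for n in upstream if n not in down}
        total = sum(rates.values())
        t += random.expovariate(total)          # time to the next disruption
        r, acc = random.random() * total, 0.0   # choose which node it hits
        for n, rate in rates.items():
            acc += rate
            if r <= acc:
                down[n] = t
                break
    return down

times = simulate()
print({n: round(ts, 2) for n, ts in times.items()})
```

Because the rates are recomputed after every event, a disruption upstream immediately raises the hazard of the nodes that depend on it, which is the dependence the abstract describes.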
13 |
Modelling Credit Spread Risk in the Banking Book (CSRBB) / Modellering av kreditspreadrisken i bankboken (CSRBB). Pahne, Elsa; Åkerlund, Louise. January 2023 (has links)
Risk measurement tools and strategies have until recently been calibrated for a low-for-long interest rate environment. In the current higher interest rate environment, however, banking supervisory entities have intensified their regulatory pressure on institutions to enhance their assessment and monitoring of interest rate risk and credit spread risk. The European Banking Authority (EBA) has released updated guidelines on the assessment and monitoring of Credit Spread Risk in the Banking Book (CSRBB), which will replace the current guidelines by 31 December 2023. The new guidelines identify CSRBB as a risk category separate from Interest Rate Risk in the Banking Book (IRRBB), and specify the inclusion of liabilities in the risk calculations. This paper proposes a CSRBB model that conforms to the updated EBA guidelines. The model uses a historical simulation Value at Risk (HSVaR) and Expected Shortfall (ES) approach, and includes a 90-day holding period, as suggested by Finansinspektionen (FI). To assess the effectiveness of the model, it is compared with a standardised model of FI and subjected to backtesting. Additionally, the paper suggests modifications to the model to obtain more conservative results.
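The HSVaR and ES pair described in this abstract can be sketched in a few lines. The 97.5% level and the square-root-of-time scaling to a 90-day holding period are common conventions and assumptions here, not necessarily the thesis's calibration, and the P&L series is synthetic:

```python
# Historical-simulation VaR and Expected Shortfall from an empirical P&L
# series, plus a naive holding-period scaling. Illustrative sketch only.
import math

def hs_var_es(pnl, alpha=0.975):
    """Historical-simulation VaR and ES at level alpha from a P&L series."""
    losses = sorted((-x for x in pnl), reverse=True)  # largest losses first
    k = max(1, int(len(losses) * (1 - alpha)))        # tail size
    var = losses[k - 1]                               # alpha-quantile loss
    es = sum(losses[:k]) / k                          # mean loss beyond VaR
    return var, es

daily_pnl = [0.4, -0.1, -1.2, 0.3, -0.8, 0.9, -2.1, 0.2, -0.5, 1.1,
             -0.3, 0.6, -1.7, 0.1, -0.9, 0.5, -0.2, 0.8, -1.1, 0.7]
var_1d, es_1d = hs_var_es(daily_pnl)
var_90d = var_1d * math.sqrt(90)  # square-root-of-time scaling (assumption)
print(var_1d, es_1d)
```

ES is by construction at least as large as VaR at the same level, which is one reason regulation such as the FRTB favours it as a tail-risk measure.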
14 |
Risques liés de crédit et dérivés de crédit / Dependent credit risks and credit derivatives. Harb, Étienne Gebran. 08 October 2011
The first part of this thesis deals with the valuation of credit risk. After an introductory chapter providing a technical synthesis of risk models, we model the dependence between default risks with copulas, which help to put credit risk measures on a firmer footing. This technical tool provides a full description of the dependence structure; one can exploit the possibility of writing any joint distribution function as a copula taking the marginal distributions as arguments. We approach copulas in the now-familiar probabilistic terms, and then from an algebraic perspective, an approach which is in some respects more inclusive than the probabilistic one. Afterwards, we present a general credit derivative pricing model based on Cherubini and Luciano (2003) and Luciano (2003). We price a "vulnerable" Credit Default Swap, taking counterparty risk into account, and incorporate the Credit Valuation Adjustment (CVA) advocated by Basel III to optimise the economic capital allocation. We recover the general representation of a product with counterparty risk, which goes back to Sorensen and Bollier (1994); unlike in the papers mentioned above, the payment of protection does not necessarily occur at the end of the contract. We approach the dependence between counterparty risk and that of the reference entity with copulas. We study the sensitivity of the CDS in extreme dependence cases using a mixture copula defined in terms of the "extreme" copulas. By varying Spearman's rho, one can explore the whole range of positive and negative association, while the mixture copula still provides closed-form prices. The resulting model is close to market practice and easy to calibrate; we provide an application on credit market data. We then highlight the role of credit derivatives as hedging instruments, but also as risk factors, since they are accused of being responsible for the subprime crisis. Finally, we analyse the subprime crisis and the sovereign debt crisis, which likewise grew out of the collapse of the U.S. mortgage market, and study the public debt sustainability of the heavily indebted peripheral countries of the eurozone by 2016.
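The mixture-copula construction described in this abstract can be illustrated with a convex combination of the Fréchet-Hoeffding bounds and the independence copula. Because Spearman's rho is linear in the copula, the mixture's rho is simply w_M - w_W, so varying the weights sweeps the full range of dependence. The weights and evaluation point below are arbitrary illustrative choices, not the thesis's calibration:

```python
# Convex mixture of "extreme" copulas:
#   C(u, v) = w_m * M(u, v) + w_p * P(u, v) + w_w * W(u, v)
# where M = min(u, v) (comonotonic), P = u*v (independence) and
# W = max(u + v - 1, 0) (countermonotonic).
def frechet_mixture(u, v, w_m, w_p, w_w):
    """Evaluate the mixture copula at (u, v) for weights summing to 1."""
    assert abs(w_m + w_p + w_w - 1.0) < 1e-9 and min(w_m, w_p, w_w) >= 0
    m = min(u, v)               # perfect positive dependence
    p = u * v                   # independence
    w = max(u + v - 1.0, 0.0)   # perfect negative dependence
    return w_m * m + w_p * p + w_w * w

def spearman_rho(w_m, w_p, w_w):
    """Rho is linear in the copula: rho(M)=1, rho(P)=0, rho(W)=-1."""
    return w_m * 1.0 + w_p * 0.0 + w_w * (-1.0)

c = frechet_mixture(0.3, 0.7, 0.5, 0.3, 0.2)
rho = spearman_rho(0.5, 0.3, 0.2)
print(c, rho)
```

Any value of the mixture stays between the Fréchet bounds W(u, v) and M(u, v), which is what makes the family convenient for stress-testing extreme dependence while keeping closed-form prices.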
15 |
Uncertainty in Aquatic Toxicological Exposure-Effect Models: the Toxicity of 2,4-Dichlorophenoxyacetic Acid and 4-Chlorophenol to Daphnia carinata. Dixon, William J. (bill.dixon@dse.vic.gov.au). January 2005 (has links)
Uncertainty is pervasive in risk assessment. In ecotoxicological risk assessments, it arises from such sources as a lack of data, the simplification and abstraction of complex situations, and ambiguities in assessment endpoints (Burgman 2005; Suter 1993). When evaluating and managing risks, uncertainty needs to be explicitly considered in order to avoid erroneous decisions and to be able to make statements about the confidence that we can place in risk estimates. Although informative, previous approaches to dealing with uncertainty in ecotoxicological modelling have been found to be limited, inconsistent and often based on assumptions that may be false (Ferson & Ginzburg 1996; Suter 1998; Suter et al. 2002; van der Hoeven 2004; van Straalen 2002a; Verdonck et al. 2003a). In this thesis a Generalised Linear Modelling approach is proposed as an alternative, congruous framework for the analysis and prediction of a wide range of ecotoxicological effects. This approach was used to investigate the results of toxicity experiments on the effect of 2,4-Dichlorophenoxyacetic Acid (2,4-D) formulations and 4-Chlorophenol (4-CP, an associated breakdown product) on Daphnia carinata. Differences between frequentist Maximum Likelihood (ML) and Bayesian Markov-Chain Monte-Carlo (MCMC) approaches to statistical reasoning and model estimation were also investigated. These approaches are inferentially disparate and place different emphasis on aleatory and epistemic uncertainty (O'Hagan 2004). Bayesian MCMC and Probability Bounds Analysis methods for propagating uncertainty in risk models are also compared for the first time. For simple models, Bayesian and frequentist approaches to Generalised Linear Model (GLM) estimation were found to produce very similar results when non-informative prior distributions were used for the Bayesian models. 
Potency estimates and regression parameters were found to be similar for identical models, signifying that Bayesian MCMC techniques are at least a suitable and objective replacement for frequentist ML for the analysis of exposure-response data. Applications of these techniques demonstrated that Amicide formulations of 2,4-D are more toxic to Daphnia than their unformulated Technical Acid parent. Different results were obtained from Bayesian MCMC and ML methods when more complex models and data structures were considered. In the analysis of 4-CP toxicity, the treatment of two different factors as fixed or random in standard and mixed-effect models was found to affect variance estimates to the degree that different conclusions would be drawn from the same model fit to the same data. Associated discrepancies in the treatment of overdispersion between ML and Bayesian MCMC analyses were also found to affect results. Bayesian MCMC techniques were found to be superior to the ML ones employed for the analysis of complex models because they enabled the correct formulation of hierarchical (nested) data structures within a binomial logistic GLM. Application of these techniques to the results of 4-CP toxicity testing on two strains of Daphnia carinata found that between-experiment variability was greater than that within experiments or between strains. Perhaps surprisingly, this indicated that long-term laboratory culture had not significantly affected the sensitivity of one strain when compared to cultures of another strain that had recently been established from field populations. The results from this analysis highlighted the need for repetition of experiments, proper model formulation in complex analyses, and careful consideration of the effects of pooling data on the characterisation of variability and uncertainty.
The GLM framework was used to develop three-dimensional surface models of the effects of pulse exposures of different lengths, and of subsequent delayed toxicity, of 4-CP on Daphnia. These models described the relationship between exposure duration and intensity (concentration) on toxicity, and were constructed for both pulse and delayed effects. Statistical analysis of these models found that significant delayed effects occurred following the full range of pulse exposure durations, and that both exposure duration and intensity interacted significantly and concurrently with the delayed effect. These results indicated that failure to consider delayed toxicity could lead to significant underestimation of the effects of pulse exposure, and therefore increase uncertainty in risk assessments. A number of new approaches to modelling ecotoxicological risk and to propagating uncertainty were also developed and applied in this thesis. In the first of these, a method for describing and propagating uncertainty in conventional Species Sensitivity Distribution (SSD) models was described. This utilised Probability Bounds Analysis to construct a nonparametric 'probability box' on an SSD based on EC05 estimates and their confidence intervals. Predictions from this uncertain SSD and the confidence interval extrapolation methods described by Aldenberg and colleagues (2000; 2002a) were compared. It was found that the extrapolation techniques underestimated the width of uncertainty (confidence) intervals by 63% and the upper bound by 65%, when compared to the Probability Bounds (P-Bounds) approach, which was based on actual confidence estimates derived from the original data. An alternative formulation of ecotoxicological risk modelling was also proposed, based on a binomial GLM. In this formulation, the model is first fit to the available data in order to derive mean and uncertainty estimates for the parameters.
This 'uncertain' GLM is then used to predict the risk of effect from possible or observed exposure distributions. This risk is described as a whole distribution, with a central tendency and uncertainty bounds derived from the original data and the exposure distribution (if this is also 'uncertain'). Bayesian and P-Bounds approaches to propagating uncertainty in this model were compared using an example of the risk of exposure to a hypothetical (uncertain) distribution of 4-CP for the two Daphnia strains studied. This comparison found that the Bayesian and P-Bounds approaches produced very similar mean and uncertainty estimates, with the P-Bounds intervals always being wider than the Bayesian ones. This difference is due to the two approaches' different methods for dealing with dependencies between model parameters, and confirms that the P-Bounds approach is better suited to situations where data and knowledge are scarce. The advantages of the Bayesian risk assessment and uncertainty propagation method developed here are that it allows calculation of the likelihood of any effect occurring, not just the (probability) bounds, and that the same software (WinBUGS) and model construction may be used to fit regression models and predict risks simultaneously. The GLM risk modelling approaches developed here are able to explain a wide range of response shapes (including hormesis) and underlying (non-normal) distributions, and do not involve expressing the exposure-response as a probability distribution, hence solving a number of problems found with previous formulations of ecotoxicological risk. The approaches developed can also be easily extended to describe communities, and to include modifying factors, mixed effects, population growth, carrying capacity and a range of other variables of interest in ecotoxicological risk assessments.
While the lack of data on the toxicological effects of chemicals is the most significant source of uncertainty in ecotoxicological risk assessments today, methods such as those described here can assist by quantifying that uncertainty so that it can be communicated to stakeholders and decision makers. As new information becomes available, these techniques can be used to develop more complex models that will help to bridge the gap between the bioassay and the ecosystem.
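As a pocket illustration of the binomial logistic GLM central to this abstract, the sketch below fits logit(p) = b0 + b1*dose to grouped exposure-response counts by Newton-Raphson. The dose-response data are synthetic, and this pure-Python fit merely stands in for the ML and Bayesian MCMC machinery (e.g. WinBUGS) actually used in the thesis:

```python
# Maximum-likelihood fit of a binomial logistic GLM to grouped
# exposure-response data, using Newton-Raphson on the two parameters.
import math

# (dose, number exposed, number responding) -- synthetic data
data = [(0.0, 20, 1), (1.0, 20, 4), (2.0, 20, 9), (3.0, 20, 15), (4.0, 20, 19)]

def fit_logistic(data, iters=25):
    """ML fit of logit(p) = b0 + b1*dose for grouped binomial data."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, n, k in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = n * p * (1.0 - p)           # binomial information weight
            g0 += k - n * p                 # score w.r.t. intercept
            g1 += (k - n * p) * x           # score w.r.t. slope
            h00 += w; h01 += w * x; h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step: H^{-1} g
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_logistic(data)
ec50 = -b0 / b1  # dose at which half the test population responds
print(round(b1, 3), round(ec50, 3))
```

A Bayesian MCMC fit of the same model with non-informative priors would, as the abstract reports, give very similar potency estimates for data this simple; the differences appear with hierarchical data structures.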
16 |
Optimalizační modelování rizik v GAMSu / Optimization Risk Modelling in GAMS. Kutílek, Vladislav. January 2021 (has links)
The diploma thesis deals with the possibilities of using the optimization modelling software system GAMS in risk management. In line with the assignment, emphasis is placed on a detailed introduction to the program for those interested in its use in risk engineering applications. The first part of the thesis provides the background needed to understand what the GAMS program is and what it is used for. The next part gives instructions on how to download, install and activate the program, and describes its user interface. Using a project on the distribution of lung ventilators, the thesis explains the basic approaches to risk modelling in GAMS on a deterministic model. These are followed by more complex wait-and-see models, which contain probability parameters, and here-and-now models, where we work with demand scenarios and verify whether a solution meets the requirements of the other scenarios, or calculate the costs of covering the highest demand. The two-stage model is also a here-and-now model, but it is significantly more complex in size and range of input data, including additional price parameters for pieces of lung ventilators added to or removed from the order.
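The here-and-now ventilator-distribution idea sketched above can be illustrated as a tiny two-stage problem solved by enumeration: choose an order quantity now, then pay recourse costs per scenario once demand is known. All prices, demands and probabilities below are made-up illustrative data, and in GAMS the same structure would be written as a stochastic program rather than enumerated:

```python
# Two-stage decision under demand scenarios: the first-stage variable is the
# order quantity; second-stage recourse is a shortage penalty for added
# pieces and a salvage credit for removed (unused) pieces.
scenarios = [(80, 0.3), (100, 0.5), (130, 0.2)]  # (demand, probability)
UNIT_COST = 10.0      # cost per ventilator ordered now (first stage)
SHORTAGE_COST = 40.0  # penalty per unmet unit (second stage)
SALVAGE = 3.0         # value recovered per unused unit (second stage)

def expected_cost(order):
    """Expected two-stage cost of ordering `order` units up front."""
    cost = UNIT_COST * order
    for demand, prob in scenarios:
        short = max(demand - order, 0)  # pieces that must be added
        spare = max(order - demand, 0)  # pieces that can be removed
        cost += prob * (SHORTAGE_COST * short - SALVAGE * spare)
    return cost

# Enumeration works here only because the decision is one integer quantity.
best = min(range(0, 151), key=expected_cost)
print(best, round(expected_cost(best), 2))
```

With these numbers the high shortage penalty pushes the optimal here-and-now order up to the largest demand scenario, exactly the trade-off the two-stage model formalises.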
17 |
Vliv nejistoty modelů projektů na investiční rozhodování / The Impact of Uncertainty of Project Models on Investment Decision Making. Pískatá, Petra. January 2020 (has links)
This doctoral thesis broadly analyses the process of investment decision-making. Its individual parts research models used for planning, analysing and evaluating investment projects, as well as models used for the final decision on realising the investment. Investing activity is present in all phases of the world economic cycle. The capital sources used for financing investment projects are scarce and must be handled with care. For this reason, many supportive methodologies and models are employed in managing investments, and instruments have been developed to mitigate potential project risks. However, even the use of these instruments and models cannot guarantee the expected results: there are uncertainties, errors and inaccuracies in the process that can thwart investment decisions. The aim of the thesis is to analyse the investment decision-making process (from the initial idea to the realisation of the investment project) and to identify the main uncertainties, i.e. the factors influencing the success or error rate of models for investment project planning as well as the decision on their realisation. The main outcome of the thesis is an overview of these factors and recommendations on how to work with them to make the process as effective as possible. Another output is an analysis of, and recommendations for, the use of financing sources and the mix of instruments that should be used to mitigate the potential impact of the risks connected to all investment projects.
18 |
Modelling Credit Spread Risk with a Focus on Systematic and Idiosyncratic Risk / Modellering av Kredit Spreads Risk med Fokus på Systematisk och Idiosynkratisk Risk. Korac Dalenmark, Maximilian. January 2023 (has links)
This thesis presents an application of Principal Component Analysis (PCA) and Hierarchical PCA (HPCA) to credit spreads. The aim is to identify the underlying factors that drive the behavior of credit spreads, as well as the leftover idiosyncratic risk, which is crucial for risk management and for pricing credit derivatives. The study employs a dataset of credit spreads from the Swedish market for different maturities and ratings, split into covered bonds and corporate bonds, and performs PCA to extract the dominant factors that explain the variation in the former set. The results show that most of the systematic movements in Swedish covered bonds can be extracted using a mean, which coincides with the first principal component. The report further explores the idiosyncratic risk of the credit spreads to deepen the understanding of credit spread dynamics and to improve risk management in credit portfolios, specifically with regard to new regulation in the form of the Fundamental Review of the Trading Book (FRTB). The thesis also explores a more general model of corporate bonds using HPCA and K-means clustering. Due to data issues this model is less explored, but there are useful findings, specifically regarding the feasibility of using clustering in combination with HPCA.
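The first-principal-component finding above can be reproduced on simulated data: when a single common factor dominates, the leading eigenvector of the spread covariance matrix has near-equal loadings, i.e. it acts like a cross-sectional mean. The toy data and pure-Python power-iteration solver below are assumptions for illustration, not the thesis's Swedish market dataset:

```python
# Simulate N spread series driven by one common factor plus small
# idiosyncratic noise, then extract the first PC by power iteration.
import math
import random

random.seed(7)
T, N = 500, 6  # observations x series
common = [random.gauss(0.0, 1.0) for _ in range(T)]
X = [[common[t] + random.gauss(0.0, 0.2) for _ in range(N)] for t in range(T)]

# Column-demeaned sample covariance matrix.
means = [sum(row[j] for row in X) / T for j in range(N)]
cov = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in X) / (T - 1)
        for j in range(N)] for i in range(N)]

# Power iteration for the leading eigenvector (first PC loadings).
v = [1.0] * N
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(N)) for i in range(N)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]

print([round(x, 3) for x in v])
```

With the common factor dominating, every loading comes out close to 1/sqrt(N) ≈ 0.408, so projecting onto the first PC is essentially taking the cross-sectional mean of the spreads, which is the systematic-movement result the abstract reports.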