On the use of Value-at-Risk based models for the Fixed Income market as a risk measure for Central Counterparty clearing. Kallur, Oskar. January 2016.
In this thesis the use of VaR-based models is investigated for the purpose of setting margin requirements for Fixed Income portfolios. VaR-based models have become one of the standard ways for Central Counterparties to determine the margin requirements for different types of portfolios. However, there are many different ways to implement a VaR-based model in practice, especially for Fixed Income portfolios. The models presented in this thesis are based on Filtered Historical Simulation (FHS). Furthermore, a model that combines FHS with a Student's t copula to model the correlation between instruments in a portfolio is presented. All models are backtested using historical data dating from 1998 to 2016. The FHS models seem to produce reasonably accurate VaR estimates. However, there are other market-related properties that must be fulfilled for a model to be used to set margin requirements. These properties are investigated and discussed. / This thesis investigates the use of VaR-based models for setting margin requirements for Fixed Income portfolios. VaR-based models have become a standard method for Central Counterparties to compute margin requirements for different types of portfolios. There are many different ways to compute VaR in practice, especially for Fixed Income portfolios. The models presented in this thesis are based on Filtered Historical Simulation (FHS). In addition, a model is presented that combines FHS with a Student's t copula to model the correlation between different instruments. All models are backtested on historical data from 1998 to 2016. The models give reasonable VaR estimates in the backtests. However, there are other market-related properties that a model must satisfy in order to be used for setting margins. These properties are investigated and discussed.
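As a rough illustration of the FHS idea described above, the following sketch computes a one-day VaR by devolatilising historical returns with a volatility filter, rescaling the standardised residuals to the current volatility, and reading off an empirical quantile. It is a minimal sketch on synthetic data, not the thesis's implementation: the EWMA filter, the decay factor 0.94 and the Student's t returns are assumptions made for the example (a GARCH filter is the more common choice in FHS).

```python
import numpy as np

def fhs_var(returns, lam=0.94, alpha=0.01):
    """One-day Value-at-Risk via a simple Filtered Historical Simulation.

    returns: 1-D array of historical returns.
    lam:     EWMA decay factor used as the volatility filter (assumption;
             a GARCH filter is more common in practice).
    alpha:   tail probability, e.g. 0.01 for 99% VaR.
    """
    returns = np.asarray(returns, dtype=float)
    n = len(returns)
    sigma2 = np.empty(n)
    sigma2[0] = returns[:20].var()              # crude initialisation
    for t in range(1, n):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    sigma = np.sqrt(sigma2)

    # Devolatilise: the standardised residuals keep the empirical shape.
    z = returns / sigma

    # Volatility forecast for the next day.
    sigma_now = np.sqrt(lam * sigma2[-1] + (1 - lam) * returns[-1] ** 2)

    # Re-scale the residuals to today's volatility and take the empirical quantile.
    simulated = z * sigma_now
    return -np.quantile(simulated, alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    r = 0.01 * rng.standard_t(df=5, size=2000)  # synthetic fat-tailed returns
    print("99% one-day VaR:", fhs_var(r, alpha=0.01))
```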
On particle-based online smoothing and parameter inference in general hidden Markov models. Westerborn, Johan. January 2015.
This thesis consists of two papers studying online inference in general hidden Markov models using sequential Monte Carlo methods. The first paper presents a novel algorithm, the particle-based, rapid incremental smoother (PaRIS), aimed at efficiently performing online approximation of smoothed expectations of additive state functionals in general hidden Markov models. The algorithm has, under weak assumptions, linear computational complexity and very limited memory requirements. The algorithm is also furnished with a number of convergence results, including a central limit theorem. The second paper focuses on the problem of online estimation of parameters in a general hidden Markov model. The proposed algorithm is based on a forward implementation of the classical expectation-maximization algorithm and uses PaRIS to make the computations efficient. / This thesis consists of two papers treating inference in hidden Markov chains with general state space via sequential Monte Carlo methods. The first paper presents a new algorithm, PaRIS, aimed at efficiently computing particle-based online estimates of smoothed expectations of additive state functionals. Under weak assumptions, the algorithm has a computational complexity that grows only linearly with the number of particles, as well as very limited memory requirements. In addition, a number of convergence results, such as a central limit theorem, are derived for this algorithm. The second paper focuses on online estimation of model parameters in general hidden Markov chains. The presented algorithm can be seen as a combination of PaRIS and a recently proposed online implementation of the classical EM algorithm.
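The sketch below illustrates the flavour of a PaRIS-style update on a toy linear-Gaussian model: a bootstrap particle filter is run forward, and each particle carries an auxiliary statistic that is updated by drawing a few indices from the backward kernel. Everything here is a simplification for illustration: the model, the choice of additive functional h(x_prev, x_cur) = x_cur, and the exact (quadratic-cost) backward sampling are assumptions; the actual PaRIS algorithm attains linear complexity by accept-reject sampling from the backward kernel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear-Gaussian state-space model (illustration only):
#   X_t = phi * X_{t-1} + sigma * V_t,   Y_t = X_t + tau * W_t
phi, sigma, tau = 0.9, 0.5, 1.0
T, N, Ntilde = 200, 500, 2            # time steps, particles, backward draws

x = np.zeros(T); y = np.zeros(T)
x[0] = rng.normal(); y[0] = x[0] + tau * rng.normal()
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.normal()
    y[t] = x[t] + tau * rng.normal()

def log_g(yt, xt):                    # observation log-density (up to a constant)
    return -0.5 * ((yt - xt) / tau) ** 2

def log_m(xprev, xcur):               # transition log-density (up to a constant)
    return -0.5 * ((xcur - phi * xprev) / sigma) ** 2

# Online estimation of E[ sum_t X_t | Y_{0:T-1} ] with PaRIS-style statistics.
part = rng.normal(size=N)                       # initial particles
lw = log_g(y[0], part)
w = np.exp(lw - lw.max()); w /= w.sum()
stats = part.copy()                             # tau_0^i = h_0(x_0^i) = x_0^i

for t in range(1, T):
    anc = rng.choice(N, size=N, p=w)            # bootstrap resampling
    new = phi * part[anc] + sigma * rng.normal(size=N)
    new_stats = np.empty(N)
    for i in range(N):                          # backward-sampling update
        lb = np.log(np.maximum(w, 1e-300)) + log_m(part, new[i])
        b = np.exp(lb - lb.max()); b /= b.sum()
        js = rng.choice(N, size=Ntilde, p=b)
        new_stats[i] = np.mean(stats[js] + new[i])   # h(x_prev, x_cur) = x_cur
    lw = log_g(y[t], new)
    w = np.exp(lw - lw.max()); w /= w.sum()
    part, stats = new, new_stats

print("particle estimate of E[sum_t X_t | Y]:", np.sum(w * stats))
print("realised sum of hidden states        :", x.sum())
```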
Stochastic modelling in disability insurance. Löfdahl, Björn. January 2013.
This thesis consists of two papers related to the stochastic modelling of disability insurance. In the first paper, we propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the models to Swedish disability claims data. In the second paper, we consider a large, homogeneous portfolio of life or disability annuity policies. The policies are assumed to be independent conditional on an external stochastic process representing the economic environment. Using a conditional law of large numbers, we establish the connection between risk aggregation and claims reserving for large portfolios. Further, we derive a partial differential equation for moments of present values. Moreover, we show how statistical multi-factor intensity models can be approximated by one-factor models, which allows for solving the PDEs very efficiently. Finally, we give a numerical example where moments of present values of disability annuities are computed using finite difference methods.
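As a toy illustration of modelling the logistic transform of an inception probability with basis functions and a stochastic risk factor, the sketch below fits a binomial GLM with a logit link to simulated policyholder data. The covariates (age, an age-squared basis term and a generic risk factor) and all numbers are assumptions for the example, not the thesis's data or model specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic, purely illustrative data: one row per policyholder-year.
n = 5000
age = rng.uniform(25, 60, n)
risk_factor = rng.normal(size=n)            # stands in for a stochastic economic risk factor
lin = -5.0 + 0.05 * age + 0.4 * risk_factor # "true" logit of inception, invented for the example
p = 1.0 / (1.0 + np.exp(-lin))
incepted = rng.binomial(1, p)

# Basis functions of age (linear and quadratic terms) plus the risk factor.
X = sm.add_constant(np.column_stack([age, age ** 2, risk_factor]))
model = sm.GLM(incepted, X, family=sm.families.Binomial())
fit = model.fit()
print(fit.summary())

# Predicted one-year inception probability for a 45-year-old in a neutral economy.
x_new = np.array([[1.0, 45.0, 45.0 ** 2, 0.0]])
print("predicted inception probability:", fit.predict(x_new))
```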
Large deviations for weighted empirical measures and processes arising in importance sampling. Nyquist, Pierre. January 2013.
This thesis consists of two papers related to large deviation results associated with importance sampling algorithms. As the need for efficient computational methods increases, so does the need for theoretical analysis of simulation algorithms. This thesis is mainly concerned with algorithms using importance sampling. Both papers make theoretical contributions to the development of a new approach for analyzing the efficiency of importance sampling algorithms by means of large deviation theory. In the first paper of the thesis, the efficiency of an importance sampling algorithm is studied using a large deviation result for the sequence of weighted empirical measures that represent the output of the algorithm. The main result is stated in terms of the Laplace principle for the weighted empirical measure arising in importance sampling, and it can be viewed as a weighted version of Sanov's theorem. This result is used to quantify the performance of an importance sampling algorithm over a collection of subsets of a given target set, as well as for quantile estimates. The method of proof is the weak convergence approach to large deviations developed by Dupuis and Ellis. The second paper studies moderate deviations of the empirical process analogue of the weighted empirical measure arising in importance sampling. Using moderate deviation results for empirical processes, the moderate deviation principle is proved for weighted empirical processes that arise in importance sampling. This result can be thought of as the empirical process analogue of the main result of the first paper, and the proof is established using standard techniques for empirical processes and Banach space valued random variables. The moderate deviation principle for the importance sampling estimator of the tail of a distribution follows as a corollary. From this, moderate deviation results are established for importance sampling estimators of two risk measures: the quantile process and Expected Shortfall. The results are proved using a delta method for large deviations established by Gao and Zhao (2011), together with more classical results from the theory of large deviations. The thesis begins with an informal discussion of stochastic simulation, in particular importance sampling, followed by short mathematical introductions to large deviations and importance sampling.
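For readers unfamiliar with the objects involved, the following sketch shows an importance sampling estimator and the associated weighted empirical measure in the simplest setting: estimating a Gaussian tail probability and an extreme quantile by exponential tilting. The target distribution, the tilt and the quantile level are assumptions chosen for the example; they are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Estimate p = P(X > a) for X ~ N(0, 1) with a far in the tail, where
# plain Monte Carlo rarely observes the event.
a, n = 4.0, 100_000

# Exponentially tilted sampling distribution N(a, 1); the likelihood ratio
# of N(0, 1) with respect to N(a, 1) is L(x) = exp(-a*x + a**2 / 2).
x = rng.normal(loc=a, size=n)
weights = np.exp(-a * x + 0.5 * a ** 2)

p_is = np.mean(weights * (x > a))
print("importance sampling estimate of P(X > 4):", p_is)

# The weighted empirical measure (1/n) * sum_i L(x_i) * delta_{x_i} is the
# object whose large deviations the first paper studies.  Reading off its
# tail gives, for example, an extreme-quantile estimate of the original N(0, 1).
tail_level = 1e-5
order = np.argsort(x)[::-1]                  # samples in descending order
tail_mass = np.cumsum(weights[order]) / n    # estimates P(X > x_(k))
k = np.searchsorted(tail_mass, tail_level)
print("estimated (1 - 1e-5)-quantile:", x[order][k])   # exact value is about 4.265
```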
Pricing Interest Rate Derivatives in the Multi-Curve Framework with a Stochastic Basis. El Menouni, Zakaria. January 2015.
The financial crisis of 2007/2008 has brought about many changes in the interest rate market in particular, as it has forced practitioners to review and modify the former pricing procedures and methodologies. As a consequence, the Multi-Curve framework has been adopted to deal with the inconsistencies of the framework used so far, namely the single-curve method. We propose to study this new framework in detail by focusing on a set of interest rate derivatives such as deposits, swaps and caplets. We then explore a stochastic approach to modelling the Libor-OIS basis spread, which has appeared since the beginning of the crisis and is now the quantity of interest to which many researchers dedicate their work (F. Mercurio, M. Bianchetti and others). A discussion follows this study to shed light on the challenges and difficulties related to the modelling of the basis spread. / The great financial crisis of 2007/2008 showed that new valuation methods for interest rate derivatives are necessary. The multi-curve method, introduced as a solution to the problems that the crisis made visible, especially concerning the basis spread, has given rise to new challenges and concerns. This work explores the new multi-curve method together with a stochastic model for the basis spread. The conclusions and the discussion of the presented results highlight the remaining challenges in modelling the basis spread.
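To make the separation of discounting and projection concrete, here is a minimal two-curve valuation of a vanilla payer swap: cash flows are discounted on an OIS curve while the floating leg is projected from a separate Ibor curve. Flat curves, annual payments and all rate levels are assumptions made for the example; a real implementation would bootstrap both curves from market quotes.

```python
import numpy as np

# Two-curve valuation of a vanilla payer swap: discount on the OIS curve,
# project the floating leg from a separate Ibor curve.  Flat curves and
# annual payments on both legs are simplifications for the example.
ois_zero = 0.010        # continuously compounded OIS zero rate (assumed)
ibor_zero = 0.016       # zero rate of the Ibor projection curve (assumed)
fixed_rate = 0.015
maturity = 5            # years
notional = 1_000_000
accrual = 1.0           # year fraction of each period

times = np.arange(1, maturity + 1, dtype=float)
df_ois = np.exp(-ois_zero * times)              # OIS discount factors
p_end = np.exp(-ibor_zero * times)              # projection-curve pseudo discount factors
p_start = np.exp(-ibor_zero * (times - accrual))

# Simply compounded forward Ibor fixings implied by the projection curve.
fwd = (p_start / p_end - 1.0) / accrual

float_leg = notional * np.sum(df_ois * fwd * accrual)
fixed_leg = notional * fixed_rate * np.sum(df_ois * accrual)
print("payer swap value :", round(float_leg - fixed_leg, 2))
print("fair swap rate   :", round(np.sum(df_ois * fwd * accrual) / np.sum(df_ois * accrual), 5))
```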
Analysis of Hedging Strategies for Hydro Power on the Nordic Power Market. Gunnvald, Patrik; Joelsson, Viktor. January 2015.
Hydro power is the largest source of electricity generation in the Nordic region today. This production is heavily dependent on the weather, since the weather dictates the availability and the amount of power that can be produced. Vattenfall as a company has an incentive to avoid volatile revenue streams, as this facilitates economic planning and has a positive effect on its credit rating, and thus also on its bottom line. Vattenfall is a large producer of hydro power with the potential to move the power market, which adds further complexity to the problem. In this thesis the authors develop new hedging strategies that hedge more efficiently. By efficiency is meant the same risk, or standard deviation, at a lower cost, or alternatively formulated, lower risk at the same cost. In order to enable comparison and make claims about efficiency, a reference solution is developed that reflects the current hedging strategy. To achieve higher efficiency we focus on finding dynamic hedging strategies. First, a prototype model is suggested to facilitate the construction of the solution methods and to determine whether a further investigation is worthwhile. As the results from this initial prototype model showed that there was substantial room for efficiency improvement, a larger main model, with parameters estimated from data, is constructed that captures the real-world scenario much better. Four different solution methods are developed and applied to this main model setup. The results are then compared to the reference strategy. We find that even though the efficiency gain was smaller than first expected from the prototype model results, using these new hedging strategies could reduce costs by 1.5 % to 5 %. Although the final choice of hedging strategy might be down to the end user, we suggest the strategy called BW to reduce costs and improve efficiency. The paper also discusses, among other things, the solution methods and hedging strategies, the notion of optimality, and the impact of the parameters in the model.
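The risk-versus-cost comparison underlying the notion of efficiency can be illustrated with a toy example: a producer sells part of its expected production forward and the rest at spot, and each strategy is scored by the standard deviation of revenue and by the expected cost of hedging. The price and production distributions, the forward discount and the candidate hedge volumes are all invented numbers; they bear no relation to the thesis's models or to Vattenfall's positions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy risk/cost comparison of static hedge volumes for a hydro producer.
n = 50_000
spot = 30 + 8 * rng.standard_normal(n)                         # EUR/MWh at delivery
prod = np.maximum(1_000 + 150 * rng.standard_normal(n), 0.0)   # MWh produced
forward = 29.5                                                 # forward trades below expected spot
hedge_cost_per_mwh = spot.mean() - forward                     # expected cost of hedging one MWh

def revenue(hedge_volume):
    # Sell hedge_volume forward today, the remaining production at spot.
    return hedge_volume * forward + (prod - hedge_volume) * spot

for name, h in [("no hedge", 0.0),
                ("half expected production", 0.5 * prod.mean()),
                ("full expected production", prod.mean())]:
    r = revenue(h)
    print(f"{name:26s} revenue std {r.std():9.0f}   "
          f"expected hedge cost {h * hedge_cost_per_mwh:8.0f}")
```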
On Calibrating an Extension of the Chen Model. Möllberg, Martin. January 2015.
There are many ways of modeling stochastic processes of short-term interest rates. One way is to use one-factor models, which may be easy to use and easy to calibrate. Another way is to use a three-factor model in striving for a higher degree of congruency with real-world market data. Calibrating such models may, however, take much more effort. One of the main questions here is which models give a better fit to the data in question. Another question is whether the use of a three-factor model can result in a better fit compared to one-factor models. This is investigated by using the Efficient Method of Moments to calibrate a three-factor model with a Lévy process. This model is an extension of the Chen Model. The calibration is done with Euribor 6-month interest rates, and these rates are also used with the Vasicek and Cox-Ingersoll-Ross (CIR) models. These two one-factor models are calibrated using Maximum Likelihood Estimation. Chi-square goodness-of-fit tests are also performed for all models. The findings indicate that the Vasicek and CIR models fail to describe the stochastic process of the Euribor 6-month rate. However, the result from the goodness-of-fit test of the three-factor model gives support for that model.
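As an illustration of the one-factor calibrations mentioned above, the sketch below estimates the Vasicek parameters by conditional maximum likelihood, using the fact that the exact discretisation of the model is an AR(1) process whose conditional MLE coincides with a least-squares fit. The data are simulated; in the thesis the input would be the Euribor 6-month series, and the three-factor extension calibrated by the Efficient Method of Moments is not attempted here.

```python
import numpy as np

rng = np.random.default_rng(5)

# Conditional ML calibration of the Vasicek model
#   dr_t = kappa * (theta - r_t) dt + sigma dW_t
# via its exact AR(1) discretisation.  Parameters below are invented.
kappa_true, theta_true, sigma_true = 0.8, 0.03, 0.01
dt, n = 1.0 / 252.0, 5000

r = np.empty(n)
r[0] = 0.02
b_true = np.exp(-kappa_true * dt)
sd_true = sigma_true * np.sqrt((1 - b_true ** 2) / (2 * kappa_true))
for t in range(1, n):
    r[t] = theta_true + (r[t - 1] - theta_true) * b_true + sd_true * rng.normal()

# Exact discretisation is an AR(1): r_{t+1} = a + b * r_t + eps.
# The conditional MLE of (a, b) is the least-squares fit.
X = np.column_stack([np.ones(n - 1), r[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, r[1:], rcond=None)
resid = r[1:] - X @ np.array([a_hat, b_hat])
var_hat = resid.var()

kappa_hat = -np.log(b_hat) / dt
theta_hat = a_hat / (1.0 - b_hat)
sigma_hat = np.sqrt(var_hat * 2 * kappa_hat / (1.0 - b_hat ** 2))
print(f"kappa {kappa_hat:.3f}  theta {theta_hat:.4f}  sigma {sigma_hat:.4f}")
```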
Logistic regression modelling for STHR analysis. Olsén, Johan. January 2014.
Coronary artery disease (CAD) is a common condition which can impair the quality of life and lead to cardiac infarctions. Traditional criteria during exercise tests are good but far from perfect. Many patients with inconclusive tests are referred to radiological examinations. By finding better evaluation criteria for the exercise test we can save a lot of money and let patients avoid unnecessary examinations. Computers record large amounts of numerical data during the exercise test. In this retrospective study, 267 patients with inconclusive exercise tests and performed radiological examinations were included. The purpose was to use clinical considerations as well as mathematical statistics to find new diagnostic criteria. We created a few new parameters and evaluated them together with previously used parameters. For women we found some interesting univariable results where the new parameters discriminated better than the formerly used ones. However, the number of females with observed CAD was small (14), which made it impossible to obtain strong significance. For men we computed a multivariable model, using logistic regression, which discriminates far better than the traditional parameters for these patients. The area under the ROC curve was 0.90 (95 % CI: 0.83-0.97), which corresponds to excellent to outstanding discrimination in a group initially included due to their inconclusive results. If the model can be shown to hold for another population, it could contribute a lot to the diagnosis of this common medical condition.
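A stripped-down version of the multivariable modelling step might look like the sketch below: fit a logistic regression on a few exercise-test parameters and evaluate discrimination by the area under the ROC curve on held-out patients. The predictors and outcome here are simulated stand-ins; the study's actual ST/HR-based variables and patient data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Synthetic stand-in for exercise-test parameters (four hypothetical predictors).
n = 267
X = rng.normal(size=(n, 4))
logit = -1.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2]  # invented relationship
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))                  # 1 = observed CAD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]
print("coefficients:", model.coef_.round(2))
print("AUC on held-out data:", round(roc_auc_score(y_te, probs), 3))
```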
Forecasting of Self-Rated Health Using Hidden Markov Algorithm. Loso, Jesper. January 2014.
In this thesis a model was developed for predicting a person's monthly average of self-rated health (SRH) for the following month. It was based on statistics from a form constructed by HealthWatch. The model used is a hidden Markov algorithm based on hidden Markov models, where the hidden part is the future value of self-rated health. The emissions were based on five of the eleven questions that make up the HealthWatch form. The questions are answered on a scale from zero to one hundred. The model predicts in which of three intervals of SRH the respondent will most likely answer, on average, during the following month. The final model has an accuracy of 80 %.
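To show the mechanics of such a forecast, the sketch below runs the forward algorithm of a small discrete hidden Markov model whose hidden states are three SRH intervals and predicts the most likely interval for the next month. The transition matrix, the emission probabilities and the single discretised answer used as the emission are all invented; the thesis's model uses five HealthWatch questions and its own estimated probabilities.

```python
import numpy as np

# Toy one-step-ahead prediction with a discrete hidden Markov model.
# Hidden states: three SRH intervals (low / medium / high).
A = np.array([[0.70, 0.25, 0.05],     # state transition matrix (assumed)
              [0.20, 0.60, 0.20],
              [0.05, 0.25, 0.70]])
B = np.array([[0.60, 0.30, 0.10],     # emission probabilities per state (assumed)
              [0.25, 0.50, 0.25],
              [0.10, 0.30, 0.60]])
pi = np.array([1 / 3, 1 / 3, 1 / 3])  # initial state distribution

obs = [0, 1, 1, 2, 2, 2]              # discretised answers over six months

# Forward algorithm: filtered distribution of the current hidden state.
alpha = pi * B[:, obs[0]]
alpha /= alpha.sum()
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]
    alpha /= alpha.sum()

# Predicted distribution of next month's SRH interval.
pred = alpha @ A
labels = ["low", "medium", "high"]
print("predicted SRH interval:", labels[int(np.argmax(pred))],
      "with probabilities", pred.round(3))
```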
Liquidity and corporate bond pricing on the Swedish market. Nguyen Andersson, Peter. January 2014.
In this thesis a corporate bond valuation model based on Dick-Nielsen, Feldhütter, and Lando (2011) and Chen, Lesmond, and Wei (2007) is examined. The aim is for the model to price corporate bond spreads and, in particular, to capture the price effects of liquidity as well as credit risk. The valuation model is based on linear regression and is applied to the Swedish market with data provided by Handelsbanken. Two measures of liquidity are analyzed: the bid-ask spread and the number of zero-trading days. The investigation shows that the bid-ask spread outperforms the zero-trading days measure in both significance and robustness. The valuation model with the bid-ask spread explains 59 % of the cross-sectional variation and has a standard error of 56 bps in its pricing predictions of corporate spreads. A reduced version of the valuation model is also developed to address simplicity and target a larger group of users. The reduced model is shown to maintain a large proportion of the explanatory power while including fewer and simpler variables. / In this thesis a valuation model for corporate bonds is examined, based on the studies by Dick-Nielsen, Feldhütter, and Lando (2011) and Chen, Lesmond, and Wei (2007). The purpose of the model is to price corporate bonds with precision and, in particular, to handle the price effects of liquidity and credit risk. The valuation model is based on linear regression and is applied to the Swedish market. The underlying data in the investigation is provided by Handelsbanken. Two measures of liquidity are analyzed: the bid-ask spread and the zero-trading days. The investigation shows that the bid-ask spread measure outperforms the zero-trading days measure in both significance and robustness. The valuation model, with the bid-ask spread as liquidity measure, explains 59 % of the variation, measured as adjusted R-squared. The standard error of the model is 56 basis points. Furthermore, a reduced version of the valuation model is developed with the aim of being more practically useful and accessible to a larger group of users. The investigation shows that the reduced model retains a large part of the explanatory power of the original model while including fewer and simpler variables.
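The regression step can be sketched as an ordinary least-squares fit of observed spreads on a liquidity measure and simple credit and maturity controls, as below. The data are simulated and the regressors are only meant to mirror the kind of inputs such a model uses; the thesis's actual variable set, ratings treatment and Handelsbanken data are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Illustrative cross-sectional regression of corporate bond spreads on a
# liquidity measure (bid-ask spread) and simple credit/maturity controls.
n = 400
bid_ask_bps = rng.gamma(shape=2.0, scale=15.0, size=n)    # bid-ask spread in bps
rating_num = rng.integers(1, 10, size=n)                  # 1 = AAA ... 9 = B (hypothetical scale)
time_to_maturity = rng.uniform(0.5, 10.0, size=n)         # years

# "Observed" spreads generated from an invented relationship plus noise.
spread_bps = (20 + 0.8 * bid_ask_bps + 25 * rating_num
              + 3 * time_to_maturity + rng.normal(0, 50, n))

X = sm.add_constant(np.column_stack([bid_ask_bps, rating_num, time_to_maturity]))
fit = sm.OLS(spread_bps, X).fit()
print(fit.summary())
print("adjusted R-squared:", round(fit.rsquared_adj, 2))
```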