471 | The New Standardized Approach for Measuring Counterparty Credit Risk. Jonsson, Sara; Rönnlund, Beatrice. January 2014
This study investigates the differences in the calculation of exposure at default (EAD) between the current exposure method (CEM) and the new standardized approach for measuring counterparty credit risk exposures (SA-CCR) for over-the-counter (OTC) derivatives. The study intends to analyze the consequences of using different approaches for netting as well as the differences in EAD between asset classes. After implementing both models and calculating EAD on real trades of a Swedish commercial bank, it was obvious that SA-CCR has a higher level of complexity than its predecessor. The results from this study indicate that SA-CCR gives a lower EAD than CEM because of its greater recognition of netting, but a higher EAD when netting is not allowed. Foreign exchange derivatives are affected to a higher extent than interest rate derivatives in this particular study. Foreign exchange derivatives got lower EAD both when netting was allowed and when it was not allowed under SA-CCR. A change of method for calculating EAD from CEM to SA-CCR could result in lower minimum capital requirements.
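As a rough illustration of the two exposure measures compared in this abstract, the sketch below contrasts the CEM add-on formula with the SA-CCR formula EAD = alpha(RC + PFE) for a single netting set; the trade figures, add-on levels and function names are illustrative assumptions, not values from the thesis or a full Basel calibration.

```python
import math

# Hedged sketch: EAD under CEM vs. SA-CCR for one netting set.
# Add-on levels and trade data are illustrative assumptions, not Basel-calibrated.

def ead_cem(mtm, gross_addon, ngr):
    """Current Exposure Method: EAD = replacement cost + add-on,
    with netting recognised only partially through the net-to-gross ratio (NGR)."""
    rc = max(mtm, 0.0)
    net_addon = (0.4 + 0.6 * ngr) * gross_addon
    return rc + net_addon

def ead_sa_ccr(mtm, collateral, aggregate_addon, alpha=1.4):
    """SA-CCR (unmargined): EAD = alpha * (RC + PFE), PFE = multiplier * aggregate add-on."""
    rc = max(mtm - collateral, 0.0)
    floor = 0.05
    multiplier = min(1.0, floor + (1.0 - floor) * math.exp(
        (mtm - collateral) / (2.0 * (1.0 - floor) * aggregate_addon)))
    return alpha * (rc + multiplier * aggregate_addon)

# Illustrative netting set: MtM +2 MUSD, gross add-on 5 MUSD, NGR 0.4, no collateral.
print(ead_cem(mtm=2.0, gross_addon=5.0, ngr=0.4))                 # CEM EAD
print(ead_sa_ccr(mtm=2.0, collateral=0.0, aggregate_addon=5.0))   # SA-CCR EAD
```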
472 | On Lapse risk factors in Solvency II. Boros, Daniel. January 2014
In the wake of the sub-prime crisis of 2008, the European Insurance and Occupational Pensions Authority issued the Solvency II directive, aiming at replacing the obsolete Solvency I framework by 2016. Among the quantitative requirements of Solvency II is a measure of an insurance firm's solvency risk, the solvency capital requirement (SCR). It aims at establishing the amount of equity the company needs to hold to be able to meet its insurance obligations with a probability of 0.995 over the coming year. The SCR of a company is essentially built up by the SCRs induced by a set of quantifiable risks. Among these are risks originating from the take-up rate of contractual options, lapse risks. In this thesis, the contractual options of a life insurer have been identified and risk factors aiming at capturing the arising risks are suggested. It is concluded that a risk factor estimating the size of mass-transfer events captures the risk arising through the resulting rescaling of the balance sheet. Further, a risk factor modeling the deviation from the company's assumption for the yearly transfer rate is introduced to capture the risks induced by the characteristics of traditional life insurance and unit-linked insurance contracts upon transfer. The risk factors are modeled so as to introduce co-dependence with equity returns as well as interest rates of various durations, and the model parameters are estimated using statistical methods on Norwegian transfer-frequency data obtained from Finans Norge. The univariate and multivariate properties of the models are investigated in a scenario setting, and it is concluded that the suggested models provide predominantly plausible results for the mass-lapse risk factors. However, the performance of the models for the risk factors aiming at capturing deviations in the transfer assumptions is questionable, which is why two means of increasing their validity are proposed.
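As background for the 0.995 figure above, the SCR is usually formulated as a one-year 99.5% value-at-risk of basic own funds (BOF), and module SCRs are commonly aggregated through a correlation matrix. The rendering below is a standard textbook formulation, stated as an assumption rather than as the thesis's own notation:

\[
\mathrm{SCR} = \operatorname{VaR}_{0.995}\!\left(\mathrm{BOF}_0 - \mathrm{BOF}_1\right),
\qquad
\mathrm{SCR}_{\text{total}} = \sqrt{\sum_{i,j} \mathrm{Corr}_{ij}\,\mathrm{SCR}_i\,\mathrm{SCR}_j},
\]

where BOF_0 and BOF_1 denote basic own funds at the start and end of the year.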
473 | Allocation Methods for Alternative Risk Premia Strategies / Allokeringsmetoder för alternativa riskpremier. Drugge, Daniel. January 2014
We use regime-switching and regression tree methods to evaluate the performance of risk premia strategies provided by Deutsche Bank and of strategies constructed from U.S. research data from the Fama-French library. The regime-switching method uses the Baum-Welch algorithm at its core and splits return data into a normal and a turbulent regime. Each regime is independently evaluated for risk and the estimates are then weighted together according to the expected value of the upcoming regime. The regression tree methods identify macro-economic states in which the risk premia perform well or poorly and use these results to allocate between risk premia strategies. The regime-switching method proves to be mostly unimpressive, but its results are boosted by investing less in risky assets as the probability of an upcoming turbulent regime becomes larger. This proves to be highly effective for all time periods and for both data sources. The regression tree method proves the most effective under the assumption that all macro-economic data are known in the same month they refer to. Since this is an unrealistic assumption, the best method seems to be to evaluate the performance of the risk premia strategy using macro-economic data from the previous quarter.
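To make the regime-weighting step concrete, the sketch below blends per-regime volatility estimates by the predicted probabilities of the next regime and scales down the risky allocation as the turbulent-regime probability grows; the transition matrix and volatilities are made-up numbers, not the thesis's Baum-Welch estimates.

```python
import numpy as np

# Hedged sketch of the regime-weighting idea: per-regime risk estimates are
# blended by the predicted probability of the next regime. All numbers are
# illustrative assumptions, not estimates from the thesis.

P = np.array([[0.95, 0.05],       # transition matrix: normal -> (normal, turbulent)
              [0.20, 0.80]])      #                     turbulent -> (normal, turbulent)
sigma = np.array([0.08, 0.25])    # annualised volatility estimated within each regime

def blended_risk(current_state_probs):
    """Weight per-regime volatilities by the probabilities of the next regime."""
    next_probs = current_state_probs @ P
    return next_probs, float(next_probs @ sigma)

def risky_weight(turbulent_prob, max_weight=1.0):
    """Simple de-risking rule: scale down the risky allocation as the
    probability of an upcoming turbulent regime grows."""
    return max_weight * (1.0 - turbulent_prob)

probs, vol = blended_risk(np.array([0.7, 0.3]))
print(probs, vol, risky_weight(probs[1]))
```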
474 | Estimation of Loss Given Default for Low Default Portfolios. Dahlin, Fredrik; Storkitt, Samuel. January 2014
The Basel framework allows banks to assess their credit risk using their own estimates of Loss Given Default (LGD). However, for a Low Default Portfolio (LDP), estimating LGD is difficult due to the shortage of default data. This study evaluates different LGD estimation approaches in an LDP setting by using pooled industry data obtained from a subset of the PECDC LGD database. Based on the characteristics of an LDP, a workout LGD approach is suggested. Six estimation techniques, including OLS regression, Ridge regression, two techniques combining logistic regressions with OLS regressions, and two tree models, are tested. All tested models give similar error levels when tested against the data, but the tree models might produce rather different estimates for specific exposures compared to the other models. Using historical averages yields worse results than the tested models within and out of sample, but not considerably worse out of time.
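The sketch below shows two of the six tested estimator families, OLS and ridge regression, fitted to synthetic workout-LGD data; the features, penalty level and coefficients are invented for illustration and no PECDC data are involved.

```python
import numpy as np

# Hedged sketch of two of the tested estimators (OLS and ridge regression) on
# synthetic workout-LGD data; features and coefficients are made up.

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))                      # e.g. collateralisation, seniority, ...
true_beta = np.array([0.3, -0.2, 0.1, 0.0, 0.05])
lgd = np.clip(0.4 + X @ true_beta + rng.normal(scale=0.1, size=n), 0.0, 1.0)

X1 = np.column_stack([np.ones(n), X])            # add intercept
ols_beta = np.linalg.lstsq(X1, lgd, rcond=None)[0]

lam = 1.0                                        # ridge penalty (illustrative)
I = np.eye(p + 1); I[0, 0] = 0.0                 # do not penalise the intercept
ridge_beta = np.linalg.solve(X1.T @ X1 + lam * I, X1.T @ lgd)

print(np.round(ols_beta, 3))
print(np.round(ridge_beta, 3))
```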
475 | Semi-Markov modelling in a Gibbs sampling algorithm for NIALM. Monin Nylund, Jean-Alexander. January 2014
Residential households in the EU are estimated to have a savings potential of around 27% [1]. The question remains how to realize this savings potential. Non-Intrusive Appliance Load Monitoring (NIALM) aims to disaggregate the energy signals of individual household appliances using only measurements of the total household power load. The core of this thesis has been the implementation of an extension to a Gibbs sampling model with Hidden Markov Models for energy disaggregation. The goal has been to improve overall performance by including the duration times of electrical appliances in the probabilistic model. The final algorithm was evaluated against the base algorithm, but the results remained at best inconclusive, due to the model's inherent limitations. The work was performed at the Swedish company Watty. Watty develops the first energy data analytics tool that can automate the energy efficiency process in buildings.
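The duration idea can be illustrated as follows: a plain hidden Markov model implies geometrically distributed state durations, whereas the semi-Markov extension draws an explicit duration for each appliance state. The distributions and parameters below are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

# Hedged sketch of the duration modelling behind a hidden semi-Markov extension:
# a plain HMM implies geometric state durations, while the semi-Markov variant
# samples an explicit duration (here Poisson) for each appliance state.

rng = np.random.default_rng(1)

def sample_hmm_duration(stay_prob):
    """Implicit HMM duration: geometric in the self-transition probability."""
    return rng.geometric(1.0 - stay_prob)

def sample_hsmm_duration(mean_minutes):
    """Explicit semi-Markov duration, e.g. Poisson-distributed ON time."""
    return 1 + rng.poisson(mean_minutes - 1)

print([sample_hmm_duration(0.95) for _ in range(5)])   # heavily skewed, mode at 1
print([sample_hsmm_duration(30) for _ in range(5)])    # concentrated around 30 minutes
```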
476 | Prediktering av VD-löner i svenska onoterade aktiebolag / Prediction of CEO salaries in Swedish unlisted companies. Edberg, Erik. January 2015
Lönen till den verkställande direktören bestäms i motsats till kollektivarbetares individuellt och oberoende av kollektivavtal. Lönen bestäms av företagets styrelse och utgår utifrån en uppskattad värdering av bland annat komplexiteten i arbetet, VD:ns personliga egenskaper, marknadens värdering av liknande uppdrag och tillgången på tänkbara kandidater. Uppsatsens syfte är att konstruera en modell för att prediktera marknadslönen för en befintlig eller blivande VD. Vidare studeras utformningen av ersättningsstrukturen i syfte att finna den ersättningsstruktur som maximerar VD:ns prestation. Ur studien framgår att det är möjligt att prediktera VD-lönen för anställda VD:ar i onoterade aktiebolag med 64% förklaringsgrad. Variationen i VD-lönen förklaras av sex oberoende variabler, fyra variabler som representerar uppdragets karaktär och två variabler som är prestationsrelaterade. Högst förklaringsgrad ger variabeln Omsättning, vilken förklarar knappt 40% av variationen i VD-lönen. Av studien framgår att den optimala ersättningsstrukturen ser olika ut för olika företag. Vidare ges rekommendationer för vad den rörliga ersättningen bör baseras på i syfte att maximera VD:ns prestation. / In contrast to union labour's, the CEO's remuneration is set individually and independently of collective agreements. The company board determines the remuneration. It is based on an estimated valuation of variables such as job characteristics, the personal qualities of the CEO, the market valuation of similar tasks and the availability of possible candidates. The purpose of this thesis is to create a model to predict the market remuneration for a current or forthcoming CEO. Further, the compensation structure is examined, aiming to find the compensation structure that maximizes the CEO's performance. This thesis shows that it is possible to predict the remuneration of employed CEOs in unlisted corporations with an explanatory power of 64 percent. The variance is explained by six covariates, four covariates representing job characteristics and two related to company performance. The highest explanatory power is given by the covariate turnover, which alone explains just below 40 percent of the remuneration variance. This study shows that the optimal compensation structure is different for different companies. Further, recommendations are given for what the variable remuneration should be based on in order to maximize the CEO's performance.
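As an illustration of the kind of model the abstract describes, the sketch below fits an OLS regression of CEO salary on a few covariates and reports the share of variance explained ("förklaringsgrad"); the covariates, units and data are hypothetical, and only the role of turnover and the idea of an explanatory power correspond to what the abstract states.

```python
import numpy as np

# Hedged sketch: OLS fit of CEO salary on invented covariates, reporting R².
# All data and coefficients are hypothetical, not from the thesis.

rng = np.random.default_rng(2)
n = 500
turnover = rng.lognormal(mean=4.0, sigma=1.0, size=n)        # MSEK, hypothetical
employees = rng.poisson(50, size=n) + 1
profit_margin = rng.normal(0.08, 0.05, size=n)
salary = (300 + 120 * np.log(turnover) + 1.5 * employees
          + 800 * profit_margin + rng.normal(0, 80, size=n))  # kSEK/year, hypothetical

X = np.column_stack([np.ones(n), np.log(turnover), employees, profit_margin])
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)
resid = salary - X @ beta
r2 = 1.0 - resid.var() / salary.var()
print(np.round(beta, 2), round(r2, 3))
```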
477 | Predicting Bankruptcy with Machine Learning Models. Åkerblom, Thea. January 2022
This thesis explores the predictive power of different machine learning algorithms for Swedish firm defaults. Both firm-specific variables and macroeconomic variables are used to calculate the estimated probabilities of firm default. Four different algorithms are used to predict default: Random Forest, AdaBoost, a feed-forward neural network and a long short-term memory neural network (LSTM). These models are compared to a classical logistic classification model that acts as a benchmark. The data used is a panel data set of quarterly observations. The study is done on data for the period 2000 to 2018. To evaluate the models, precision and recall are calculated and compared between the models. The LSTM model performs the best of all five fitted models and correctly classifies 60% of all defaults in the test data. The data is supplied by the Riksbank, the Swedish central bank. It consists of two data sets, one from Upplysningscentralen AB with firm-specific variables, and one from the Riksbank with the macroeconomic variables. Keywords: LSTM, Neural Network, AdaBoost, Random Forest, Machine Learning, Default, Panel Data, Longitudinal Data, Risk, Prediction, Precision, Recall
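The two evaluation metrics named in the abstract can be computed directly from a confusion matrix, as in this small sketch; the counts are invented and are not results from the thesis.

```python
# Hedged sketch of the evaluation metrics used above; the counts are invented.

def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP), recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: out of 100 true defaults the model flags 60 correctly (recall 0.60)
# and also raises 90 false alarms.
print(precision_recall(tp=60, fp=90, fn=40))   # -> (0.4, 0.6)
```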
478 | Furstenberg's conjecture and measure rigidity for some classes of non-abelian affine actions on tori / Furstenbergs förmodan och måttrigiditet för några klasser av icke-abelska affina verkningar på torusar. Zickert, Gustav. January 2015
In 1967 Furstenberg proved that the set {2^n 3^m α (mod 1) | n, m ∈ ℕ} is dense in the circle for any irrational α. He also made the following famous measure rigidity conjecture: the only ergodic measures on the circle invariant under both x → 2x and x → 3x are the Lebesgue measure and measures supported on a finite set. In this thesis we discuss both Furstenberg's theorem and his conjecture, as well as the partial solution of the latter given by Rudolph. Following Matheus' presentation of Avila's ideas for a proof of a weak version of Rudolph's theorem, we prove a result on extending measure preservation from a semigroup action to a larger semigroup action. Using this result we obtain restrictions on the set of invariant measures for certain classes of non-abelian affine actions on tori. We also study some general properties of affine abelian and non-abelian actions and we show that analogues of Furstenberg's theorem hold for affine actions on the circle. / 1967 bevisade Furstenberg att mängden {2^n 3^m α (mod 1) | n, m ∈ ℕ} är tät i cirkeln för alla irrationella tal α. Furstenberg ligger även bakom följande berömda förmodan: de enda ergodiska måtten på cirkeln som är invarianta under både x → 2x och x → 3x är Lebesguemåttet och mått med ändligt stöd. I det här examensarbetet behandlar vi Furstenbergs sats, Furstenbergs förmodan och Rudolphs sats. Vi följer Matheus presentation av Avilas idéer för ett bevis av en svag variant av Rudolphs sats och vi bevisar att en måttbevarande semigruppverkan under vissa antaganden kan utökas till en semigruppverkan av en större semigrupp. Med hjälp av detta resultat erhåller vi begränsningar av mängden av mått invarianta under vissa klasser av icke-abelska affina verkningar på torusen. Vi studerar även allmänna egenskaper hos affina abelska och icke-abelska verkningar och vi visar att satser analoga med Furstenbergs sats håller för affina verkningar på cirkeln.
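Furstenberg's density theorem quoted above can be illustrated numerically: the largest gap between orbit points 2^n 3^m α (mod 1) shrinks as more exponents are allowed. The script below is a rough demonstration with α = √2; it is an illustration only and not part of the thesis.

```python
import numpy as np

# Hedged numerical illustration of Furstenberg's density theorem: the orbit
# {2^n 3^m * alpha mod 1} of an irrational alpha fills the circle, so the
# largest gap between orbit points shrinks as more n, m are allowed.
alpha = np.sqrt(2)

def largest_gap(max_n, max_m):
    pts = np.sort([(2**n * 3**m * alpha) % 1.0
                   for n in range(max_n) for m in range(max_m)])
    gaps = np.diff(np.concatenate([pts, [pts[0] + 1.0]]))  # wrap around the circle
    return gaps.max()

for k in (3, 5, 7, 9):
    print(k, largest_gap(k, k))   # the gap decreases towards 0 as k grows
```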
479 | Spectral Data Processing for Steel Industry / Spektral databehandling för stålindustrin. Bisot, Clémence. January 2015
For the steel industry, knowing and understanding the characteristics of a steel strip surface at every step of the production process is a key element in controlling final product quality. Today, as quality requirements increase, this task becomes more and more important. The surface of new steel grades with complex chemical compositions has behaviors that are especially hard to master. For those grades in particular, surface control is critical and difficult. One of the promising techniques for assessing the problem of surface quality control is spectral analysis. Over the last few years, ArcelorMittal, the world's leading integrated steel and mining company, has led several projects to investigate the possibility of using devices to measure the light spectrum of their products at different stages of the production. The large amount of data generated by these devices makes it absolutely necessary to develop efficient data treatment pipelines to get meaningful information out of the recorded spectra. In this thesis, we developed mathematical models and statistical tools to treat signals measured with spectrometers in the framework of different research projects. / För stålindustrin, att veta och förstå ytegenskaperna på ett stålband vid varje steg i produktionsprocessen är en nyckelfaktor för att styra slutproduktens kvalitet. Den senaste tidens ökande kvalitetskraven har gjort denna uppgift allt mer viktigare. Ytan på nya stål kvaliteter med komplexa kemiska sammansättningar har egenskaper som är särskilt svårt att hantera. För dess kvaliteter är ytkontroll kritisk och svår. En av de tekniker som används för att kontrollera ytans kvalitet är spektrum analys. Arcelor Mittal, världens ledande integrerade stål- och gruvföretag, har under de senaste åren lett flera projekt för att undersöka möjligheten att använda mätinstrument för att mäta spektrum ljuset från sin produkt i olika stadier av produktionen. I denna avhandling har vi utvecklat matematiska modeller och statistiska verktyg för att kunna handskas med signaler som är uppmätta med spektrometrar inom ramen av olika forskningsprojekt hos Arcelor Mittal.
480 | Graphical lasso for covariance structure learning in the high dimensional setting / Graphical lasso för kovariansstrukturs inlärning i högdimensionell miljö. Fransson, Viktor. January 2015
This thesis considers the estimation of undirected Gaussian graphical models, especially in the high-dimensional setting where the true observations are assumed to be non-Gaussian. The first aim is to present and compare the performance of existing Gaussian graphical model estimation methods. Furthermore, since the models rely heavily on the normality assumption, various methods for relaxing this assumption are presented. In addition to the existing methods, a modified version of the joint graphical lasso method is introduced which capitalizes on the strengths of the community Bayes method. The community Bayes method is used to partition the features (or variables) of datasets consisting of several classes into several communities which are estimated to be mutually independent within each class, which allows the calculations of the joint graphical lasso method to be split into several smaller parts. The method is also inspired by the cluster graphical lasso and is applicable to both Gaussian and non-Gaussian data, assuming that the normality assumption is relaxed. Results show that the introduced cluster joint graphical lasso method outperforms competing methods, producing graphical models which are easier to comprehend due to the added information obtained from the clustering step of the method. The cluster joint graphical lasso is applied to a real dataset consisting of p = 12582 features, which resulted in a computation gain of a factor of 35 compared to the competing method, which is very significant when analysing large datasets. The method also allows for parallelization, where computations can be spread across several computers, greatly increasing computational efficiency. / Denna rapport behandlar uppskattningen av oriktade Gaussiska grafiska modeller speciellt i högdimensionell miljö där dom verkliga observationerna antas vara icke-Gaussiska fördelade. Det första målet är att presentera och jämföra prestandan av befintliga metoder för uppskattning av Gaussiska grafiska modeller. Eftersom modellerna är starkt beroende av normalantagandet, så kommer flertalet metoder för att relaxa normalantagandet att presenteras. Utöver dom befintliga metoderna, kommer en modifierad version av joint graphical lasso att introduceras som bygger på styrkan av community Bayes metod. Community Bayes metod används för att partitionera variabler från datamängder som består av flera klasser i flera samhällen (eller communities) som antas vara oberoende av varandra i varje klass. Detta innebär att beräkningarna av joint graphical lasso kan delas upp i flera mindre problem. Metoden är också inspirerad av cluster graphical lasso och applicerbar för både Gaussisk och icke-gaussisk data, förutsatt att det normala antagandet är relaxed. Resultaten visar att den introducerade cluster joint graphical lasso metoden utklassar konkurrerande metoder, som producerar grafiska modeller som är lättare att förstå på grund av den extra information som erhålls från klustringssteget av metoden. Joint graphical lasso appliceras även på en verklig datauppsättning bestående av p = 12582 variabler som resulterade i minskad beräkningstid av en faktor 35 vid jämförelse av konkurrerande metoder. Detta är mycket betydande när man analyserar stora datamängder. Metoden möjliggör också parallellisering där beräkningar kan spridas över flera datorer vilket ytterligare kraftigt ökar beräkningseffektiviteten.
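As a minimal sketch of the building block behind the methods discussed above, the snippet below fits a graphical lasso to synthetic data generated from a chain graph and reads the estimated edges off the zero pattern of the precision matrix; it assumes scikit-learn's GraphicalLasso and does not reproduce the thesis's cluster or joint extensions or the community Bayes step.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hedged sketch: a plain graphical lasso fit on synthetic chain-graph data.
# The sparsity pattern of the estimated precision matrix defines the graph.

rng = np.random.default_rng(3)
n, p = 500, 10
prec = np.eye(p)
for i in range(p - 1):
    prec[i, i + 1] = prec[i + 1, i] = 0.4      # ground-truth chain dependence
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=n)

model = GraphicalLasso(alpha=0.05).fit(X)      # alpha controls sparsity
edges = (np.abs(model.precision_) > 1e-4) & ~np.eye(p, dtype=bool)
print(edges.sum() // 2, "estimated edges;", p - 1, "true edges in the chain")
```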