41

The use of effect sizes in credit rating models

Steyn, Hendrik Stefanus 12 1900 (has links)
The aim of this thesis was to investigate the use of effect sizes to report the results of statistical credit rating models in a more practical way. Rating systems in the form of statistical probability models like logistic regression models are used to forecast the behaviour of clients and guide business in rating clients as “high” or “low” risk borrowers. Therefore, model results were reported in terms of statistical significance as well as business language (practical significance), which business experts can understand and interpret. In this thesis, statistical results were expressed as effect sizes like Cohen's d, which puts the results into standardised and measurable units that can be reported practically. These effect sizes indicated the strength of correlations between variables, the contribution of variables to the odds of defaulting, the overall goodness-of-fit of the models and the models' ability to discriminate between high and low risk customers. / Statistics / M. Sc. (Statistics)
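For readers unfamiliar with the effect size named above, a minimal sketch of Cohen's d follows. The score distributions, group sizes and seed are hypothetical, not taken from the thesis.

```python
import numpy as np

def cohens_d(x, y):
    """Standardised mean difference between two groups, using the pooled SD."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Hypothetical score distributions for non-defaulting vs defaulting clients
rng = np.random.default_rng(0)
good = rng.normal(650, 50, 1000)
bad = rng.normal(600, 55, 80)
print(f"Cohen's d = {cohens_d(good, bad):.2f}")  # around 0.8 counts as a large effect
```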
43

Credit Risk Modeling And Credit Default Swap Pricing Under Variance Gamma Process

Anar, Hatice 01 August 2008 (has links) (PDF)
In this thesis, the structural model in credit risk and credit derivatives are studied under both the Black-Scholes setting and the Variance Gamma (VG) setting. Using a Variance Gamma process, the distribution of the firm value process becomes asymmetric and leptokurtic. Also, the jump structure of VG processes allows random default times of the reference entities. Among structural models, the main emphasis is placed on the Black-Cox model, by building a relation between the survival probabilities of the Black-Cox model and the value of a binary down-and-out barrier option. The survival probabilities under the VG setting are calculated via a Partial Integro-Differential Equation (PIDE). Some applications of binary down-and-out barrier options, default probabilities and Credit Default Swap par spreads are also illustrated in this study.
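The relation described above has a closed form in the Black-Scholes setting: the Black-Cox survival probability is a first-passage probability for a drifted Brownian motion, which (up to discounting) is what a binary down-and-out barrier option pays. The sketch below covers only this case, with hypothetical firm parameters; the VG setting needs the PIDE mentioned in the abstract and is not shown.

```python
import numpy as np
from scipy.stats import norm

def black_cox_survival(V0, L, mu, sigma, T):
    """P(firm value stays above barrier L up to time T) when the firm value
    follows a geometric Brownian motion (the Black-Scholes setting).
    Uses the first-passage formula for Brownian motion with drift."""
    b = np.log(V0 / L)           # log-distance to the default barrier
    m = mu - 0.5 * sigma ** 2    # drift of the log firm value
    s = sigma * np.sqrt(T)
    return norm.cdf((b + m * T) / s) - np.exp(-2 * m * b / sigma ** 2) * norm.cdf((-b + m * T) / s)

# Hypothetical firm: value 100, barrier 70, drift 5%, volatility 25%, 5-year horizon
print(black_cox_survival(100.0, 70.0, 0.05, 0.25, 5.0))
```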
44

Adaptation des techniques actuelles de scoring aux besoins d'une institution de crédit : le CFCAL-Banque / Adaptation of current scoring techniques to the needs of a credit institution : the Crédit Foncier et Communal d'Alsace et de Lorraine (CFCAL-banque)

Kouassi, Komlan Prosper 26 July 2013 (has links)
Les institutions financières sont, dans l’exercice de leurs fonctions, confrontées à divers risques, entre autres le risque de crédit, le risque de marché et le risque opérationnel. L’instabilité de ces facteurs fragilise ces institutions et les rend vulnérables aux risques financiers qu’elles doivent, pour leur survie, être à même d’identifier, analyser, quantifier et gérer convenablement. Parmi ces risques, celui lié au crédit est le plus redouté par les banques compte tenu de sa capacité à générer une crise systémique. La probabilité de passage d’un individu d’un état non risqué à un état risqué est ainsi au cœur de nombreuses questions économiques. Dans les institutions de crédit, cette problématique se traduit par la probabilité qu’un emprunteur passe d’un état de "bon risque" à un état de "mauvais risque". Pour cette quantification, les institutions de crédit recourent de plus en plus à des modèles de credit-scoring. Cette thèse porte sur les techniques actuelles de credit-scoring adaptées aux besoins d’une institution de crédit, le CFCAL-banque, spécialisé dans les prêts garantis par hypothèques. Nous présentons en particulier deux modèles non paramétriques (SVM et GAM) dont nous comparons les performances en termes de classification avec celles du modèle logit traditionnellement utilisé dans les banques. Nos résultats montrent que les SVM sont plus performants si l’on s’intéresse uniquement à la capacité de prévision globale. Ils exhibent toutefois des sensibilités inférieures à celles des modèles logit et GAM. En d’autres termes, ils prévoient moins bien les emprunteurs défaillants. Dans l’état actuel de nos recherches, nous préconisons les modèles GAM qui ont certes une capacité de prévision globale moindre que les SVM, mais qui donnent des sensibilités, des spécificités et des performances de prévision plus équilibrées. En mettant en lumière des modèles ciblés de scoring de crédit, en les appliquant sur des données réelles de crédits hypothécaires, et en les confrontant au travers de leurs performances de classification, cette thèse apporte une contribution empirique à la recherche relative aux modèles de credit-scoring. / Financial institutions face a variety of risks in carrying out their functions, such as credit, market and operational risk. These risks are related not only to the nature of their activities but also to external factors, and the instability of these factors makes the institutions vulnerable to financial risks that they must appropriately identify, analyze, quantify and manage. Among these risks, credit risk is the most prominent due to its ability to generate a systemic crisis. The probability for an individual to switch from a riskless to a risky state is thus central to many economic issues. In credit institutions, this problem is reflected in the probability for a borrower to switch from a state of “good risk” to a state of “bad risk”. For this quantification, banks increasingly rely on credit-scoring models. This thesis focuses on current credit-scoring techniques tailored to the needs of a credit institution, the CFCAL-banque, specialized in mortgage loans. We present in particular two nonparametric models (SVM and GAM) and compare their classification performance with that of the logit model traditionally used in banks. Our results show that SVMs are more effective if we focus only on global prediction performance. However, SVM models give lower sensitivities than logit and GAM models; in other words, their predictions for defaulted borrowers are not as satisfactory as those of logit or GAM models. In the present state of our research, we recommend GAM models: although their global prediction capability is lower than that of SVMs, they give more balanced sensitivities, specificities and prediction performance. This thesis does not exhaustively cover scoring techniques for credit risk management, but by highlighting targeted credit-scoring models, applying them to real mortgage data and comparing their classification performance, it makes an empirical and methodological contribution to research on scoring models for credit risk management.
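As a rough illustration of the comparison described above, the sketch below fits a logit model and an SVM to synthetic, imbalanced data and reports accuracy, sensitivity and specificity. The CFCAL data are not public, so the numbers are illustrative only, and the GAM is omitted (it would need an extra library such as pyGAM).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic imbalanced credit data standing in for the (unavailable) CFCAL portfolio
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9, 0.1], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

for name, clf in [("logit", LogisticRegression(max_iter=1000)),
                  ("SVM", SVC(kernel="rbf", class_weight="balanced"))]:
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    print(f"{name}: accuracy={accuracy_score(y_te, y_hat):.3f}, "
          f"sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```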
45

Modeling credit risk for an SME loan portfolio: An Error Correction Model approach

Lindgren, Jonathan January 2017 (has links)
Sedan den globala finanskrisen 2008 har flera stora regelverk införts för att säkerställa att banker hanterar risker på sunt sätt. Bland dessa regelverk är Basel II som infört kapitalkrav för kreditrisk som baseras på Sannolikhet för Fallissemang och Förlust Givet Fallissemang. Basel II Advanced Internal Ratings-Based Approach ger banker möjligheten att skatta dessa riskmått för enskilda portföljer och göra interna kreditriskvärderingar. I överensstämmelse med denna metod undersöker denna uppsats användningen av en Error Correction Model för modellering av Sannolikhet för Fallissemang, en modell som visat sin styrka inom stresstestning. Vidare implementeras en funktion för Förlust Givet Fallissemang som binder samman Sannolikhet för Fallissemang och Förlust Givet Fallissemang med systematisk risk. Error Correction Modellen modellerar Sannolikhet för Fallissemang för en SME-portfölj från en av de "fyra stora" bankerna i Sverige. Modellen utvärderas och stresstestas med Europeiska Bankmyndighetens stresstestscenario 2016 och analyseras, med lovande resultat. / Since the global financial crisis of 2008, several major regulations have been implemented to ensure that banks follow sound risk management. Among these are the Basel II Accords, which introduced capital requirements for credit risk. The core measures of credit risk evaluation are the Probability of Default and the Loss Given Default. The Basel II Advanced Internal Ratings-Based Approach allows banks to model these measures for individual portfolios and make their own evaluations. This thesis, in compliance with that approach, evaluates the use of an Error Correction Model when modeling the Probability of Default, a model proven to be strong in stress testing. Furthermore, a Loss Given Default function is implemented that ties Probability of Default and Loss Given Default to systematic risk. The Error Correction Model is applied to an SME portfolio from one of the "big four" banks in Sweden. The model is evaluated and stress tested with the European Banking Authority's 2016 stress test scenario and analyzed, with promising results.
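The abstract does not give the thesis's exact specification, so the sketch below shows only a minimal two-step Engle-Granger error correction model on synthetic data: a long-run regression whose residual (the error-correction term) feeds the short-run equation.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in: a unit-root macro driver and a cointegrated (logit-scale) PD series
rng = np.random.default_rng(42)
T = 120
macro = np.cumsum(rng.normal(0, 1, T))
pd_logit = 0.5 * macro + rng.normal(0, 0.5, T)

# Step 1: estimate the long-run (cointegrating) relation by OLS
long_run = sm.OLS(pd_logit, sm.add_constant(macro)).fit()
ect = long_run.resid  # error-correction term

# Step 2: regress short-run changes on the lagged error-correction term
dy, dx = np.diff(pd_logit), np.diff(macro)
X = sm.add_constant(np.column_stack([dx, ect[:-1]]))
ecm = sm.OLS(dy, X).fit()
print(ecm.params)  # the ECT coefficient should be negative (reversion to the long run)
```

Stressing the model then amounts to feeding a scenario path for the macro driver through the fitted equations, which is how an ECM lends itself to the EBA-style stress testing the abstract mentions.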
46

Predicting Subprime Customers' Probability of Default Using Transaction and Debt Data from NPLs / Predicering av högriskkunders sannolikhet för fallissemang baserat på transaktions- och lånedata på nödlidande lån

Wong, Lai-Yan January 2021 (has links)
This thesis aims to predict the probability of default (PD) of non-performing loan (NPL) customers using transaction and debt data, as part of developing a credit scoring model for Hoist Finance. Many NPL customers face financial exclusion due to default and are therefore considered bad customers. Hoist Finance is a company that manages NPLs and believes that not all conventionally considered subprime customers are high-risk customers, and it wants to offer them financial inclusion through favourable loans. In this thesis, logistic regression was used to model the PD of NPL customers at Hoist Finance based on 12 months of data. Different feature selection (FS) methods were explored, and the best model utilized l1-regularization for FS and predicted with 85.71% accuracy that 6,277 out of 27,059 customers had a PD between 0% and 10%, which supports this belief. Analysis of the PD showed that it increased almost linearly with an increase in debt quantity, original total claim amount or number of missed payments. The analysis also showed that the payment behaviour in the last quarter had the most predictive power. At the same time, analysis of the type II error showed that the model was unable to capture some bad payment behaviour, due to placing too large an emphasis on the last quarter. / Det här examensarbetet syftar till att predicera sannolikheten för fallissemang för nödlidande lånekunder genom transaktions- och lånedata. Detta som en del av kreditvärdighetsmodellering för Hoist Finance. På engelska kallas sannolikheten för fallissemang för "probability of default" (PD) och nödlidande lån kallas för "non-performing loan" (NPL). Många NPL-kunder står inför ekonomisk uteslutning på grund av att de konventionellt betraktas som kunder med dålig kreditvärdighet. Hoist Finance är ett företag som förvaltar nödlidande lån och påstår att inte alla konventionellt betraktade "dåliga" kunder är högriskkunder. Därför vill Hoist Finance inkludera dessa kunder ekonomiskt genom att erbjuda gynnsamma lån. I detta examensarbete har logistisk regression använts för att predicera PD på nödlidande lånekunder på Hoist Finance baserat på 12 månaders data. Olika metoder för urval av attribut undersöktes och den bästa modellen utnyttjade lasso för urval. Denna modell predicerade med 85,71 % noggrannhet att 6 277 av 27 059 kunder har en PD mellan 0 % och 10 %, vilket stödjer påståendet. Från analys av PD visade det sig att PD ökade nästan linjärt med avseende på ökning i antingen kvantitet av lån, det ursprungliga totala lånebeloppet eller antalet missade betalningar. Analysen visade också att betalningsbeteendet under det sista kvartalet hade störst prediktivt värde. Genom analys av typ II-felet visade det sig samtidigt att modellen hade svårigheter att fånga vissa dåliga betalningsbeteenden just på grund av att för stor vikt lades på det sista kvartalet.
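A minimal sketch of the l1-regularized logistic regression used for feature selection above; the data, the regularization strength C and the 10% PD cut-off are placeholders rather than Hoist Finance's actual choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 12 months of transaction/debt features and a default flag
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 24))
y = (X[:, 0] + rng.normal(size=1000) > 1).astype(int)

# The l1 penalty shrinks uninformative coefficients to exactly zero,
# so feature selection is built into the fit; C = 0.1 is a placeholder
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
model.fit(X, y)

pd_hat = model.predict_proba(X)[:, 1]          # estimated PD per customer
print((pd_hat <= 0.10).sum(), "customers with PD between 0% and 10%")
```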
47

Peeking Through the Leaves : Improving Default Estimation with Machine Learning : A transparent approach using tree-based models

Hadad, Elias, Wigton, Angus January 2023 (has links)
In recent years the development and implementation of AI and machine learning models have increased dramatically, with the availability of quality data paving the way for sophisticated AI models. Financial institutions use many models in their daily operations. They are, however, heavily regulated and need to follow the regulations set by central banks' auditing standards and the financial supervisory authorities. One of these standards is the disclosure of expected credit losses in the financial statements of banks, called IFRS 9. Banks must measure the expected credit shortfall in line with regulations set up by the EBA and FSA. In this master thesis, we collaborate with a Swedish bank to evaluate different machine learning models for predicting defaults in an unsecured credit portfolio. The default probability is a key variable in the expected credit loss equation. The goal is not only to develop a valid model to predict these defaults but to create and evaluate different models based on their performance and transparency. With regulatory challenges within AI, the need to introduce transparency in models is part of the process. When banks use models, there is a requirement on transparency, which refers to how easily a model can be understood in terms of its architecture, calculations, feature importance and the logic behind its decision-making process. We have compared the commonly used logistic regression model to three machine learning models, decision tree, random forest and XGBoost, to show the performance and transparency differences between the machine learning models and the industry standard. We have introduced a transparency evaluation tool called the transparency matrix to shed light on the different transparency requirements of machine learning models. The results show that all of the tree-based machine learning models are a better choice of algorithm when estimating defaults compared to the traditional logistic regression. This is shown in the AUC score as well as the R2 metric. We also show that as models increase in complexity there is a performance-transparency trade-off: the more complex our models get, the better their predictions. / Under de senaste åren har utvecklingen och implementeringen av AI- och maskininlärningsmodeller ökat dramatiskt. Tillgången till kvalitetsdata banar vägen för sofistikerade AI-modeller. Finansiella institutioner använder många modeller i sin dagliga verksamhet. De är dock starkt reglerade och måste följa de regler som fastställs av centralbankernas revisionsstandard och finansiella tillsynsmyndigheter. En av dessa standarder är offentliggörandet av förväntade kreditförluster i bankernas finansiella rapporter, kallad IFRS 9. Banker måste mäta den förväntade kreditförlusten i linje med regler som fastställs av EBA och FSA. I denna uppsats samarbetar vi med en svensk bank för att utvärdera olika maskininlärningsmodeller för att förutsäga fallissemang i en blankokreditsportfölj. Sannolikheten för fallissemang är en viktig variabel i ekvationen för förväntade kreditförluster. Målet är inte bara att utveckla en bra modell för att prediktera fallissemang, utan också att skapa och utvärdera olika modeller baserat på deras prestanda och transparens. Med de utmaningar som finns inom AI är behovet av att införa transparens i modeller en del av processen. När banker använder modeller finns det krav på transparens som hänvisar till hur enkelt en modell kan förstås med sin arkitektur, beräkningar, variabelpåverkan och logik bakom beslutsprocessen. Vi har jämfört den vanligt använda modellen logistisk regression med tre maskininlärningsmodeller: Decision trees, Random forest och XGBoost. Vi vill visa skillnaderna i prestanda och transparens mellan maskininlärningsmodeller och branschstandarden. Vi har introducerat ett verktyg för transparensutvärdering som kallas transparensmatris för att belysa de olika transparenskraven för maskininlärningsmodeller. Resultaten visar att alla trädbaserade maskininlärningsmodeller är ett bättre val av modell vid prediktion av fallissemang jämfört med den traditionella logistiska regressionen. Detta visas i AUC-score samt R2-värdet. Vi visar också att när modeller blir mer komplexa uppstår en kompromiss mellan prestanda och transparens; ju mer komplexa våra modeller blir, desto bättre blir deras prediktioner.
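A sketch of the model comparison described above on synthetic imbalanced data. To keep the example dependency-free, sklearn's GradientBoostingClassifier stands in for XGBoost, and the resulting AUC values are illustrative, not the thesis's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced portfolio (about 5% defaults); not the bank's data
X, y = make_classification(n_samples=10000, n_features=15, weights=[0.95, 0.05], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "random forest": RandomForestClassifier(n_estimators=300, random_state=7),
    "gradient boosting": GradientBoostingClassifier(random_state=7),  # stand-in for XGBoost
}
for name, m in models.items():
    auc = roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

The ordering of the four models also mirrors the transparency ranking the abstract discusses: the single tree and the logit coefficients can be read directly, while the ensembles trade that readability for predictive power.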
48

Applying the Shadow Rating Approach: A Practical Review / Tillämpning av skuggrating-modellen: En praktisk studie

Barry, Viktor, Stenfelt, Carl January 2023 (has links)
The combination of regulatory pressure and rare but impactful defaults makes up the domain of low-default portfolios, a central and complex topic that lacks clear industry standards. A novel approach that utilizes external data to create a Shadow Rating model has been proposed by Ulrich Erlenmaier. It addresses the lack of data by estimating a probability of default curve from an external rating scale and subsequently training a statistical model to estimate the credit rating of obligors. This thesis first explores the capabilities of the Cohort model and the Pluto and Tasche model to estimate the probability of default associated with banks and financial institutions through the use of external data. Second, it implements a multinomial logistic regression model, an ordinal logistic regression model, Classification and Regression Trees, and a Random Forest model, and evaluates their ability to correctly estimate the credit rating of companies in a portfolio of banks and financial institutions using financial data. Results suggest that the Cohort model is superior in modelling the underlying data, given a Gini coefficient of 0.730 for the base case, as opposed to Pluto and Tasche's 0.260. Moreover, the Random Forest model displays marginally higher performance across all metrics (such as an accuracy of 57%, a mean absolute error of 0.67 and a multiclass receiver operating characteristic of 0.83). However, given its lower degree of interpretability, the simpler ordinal logistic regression model (50%, 0.80 and 0.81, respectively) may be preferred for its clear interpretability and explainability. / Kombinationen av regulatoriskt påtryck och få men påverkande fallissemang utgör tillsammans området lågfallissemangsportföljer, vilket är ett centralt men komplext ämne med avsaknad av tydliga industristandarder. En metod som använder extern data för att skapa en skuggrating-modell har föreslagits av Ulrich Erlenmaier. Den adresserar problemet av bristande data genom att använda externa ratings för att estimera en kurva över sannolikheten. Sedermera implementeras en statistisk modell som estimerar kreditratingen av låntagare. Denna uppsats ämnar för det första att utforska möjligheterna för kohortmodellen samt Pluto-och-Tasche-modellen att estimera sannolikheten för fallissemang associerat med banker och finansiella institutioner genom användandet av extern data. För det andra implementeras statistiska modeller genom nominell logistisk regression, ordinal logistisk regression, klassificerings- och regressionsträd samt Random Forest. Sedermera utvärderas modellernas förmåga att förutse kreditratings för företag från en portfölj av banker och finansiella institutioner. Resultat föreslår att kohortmodellen är att föredra vid modellering av underliggande data, givet en Ginikoefficient på 0.730 för grundfallet, till skillnad från Pluto och Tasches resultat på 0.260. Vidare genererade Random Forest marginellt bättre resultat över alla utvärderingskriterier (till exempel, 57% träffsäkerhet, 0.67 mean absolute error och 0.83 multiclass receiver operating characteristic). Däremot har den en lägre tolkningsbarhet så att ordinal logistisk regression (med respektive värden 50%, 0.80 och 0.81) skulle kunna föredras, givet dess tydlighet och transparens.
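The cohort estimator mentioned above is simple enough to sketch: the PD assigned to each external rating grade is the default frequency observed in that grade's cohort. The counts below are invented for illustration; the Pluto and Tasche estimator, whose value lies in handling grades with zero observed defaults, is not shown.

```python
import numpy as np

def cohort_pd(defaults, obligors):
    """Cohort estimator: PD of a grade = observed default frequency in that grade."""
    return defaults / obligors

# Hypothetical one-year cohort counts per external rating grade
grades = ["AAA", "AA", "A", "BBB", "BB", "B"]
defaults = np.array([0, 1, 3, 12, 45, 130])
obligors = np.array([5000, 8000, 12000, 9000, 6000, 3000])
for g, pd_hat in zip(grades, cohort_pd(defaults, obligors)):
    print(f"{g}: PD = {pd_hat:.4%}")
```

Note the AAA grade gets a PD of exactly zero here, which is the weakness that motivates the Pluto and Tasche most-prudent-estimation alternative compared in the thesis.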
49

Evolution des méthodes de gestion des risques dans les banques sous la réglementation de Bale III : une étude sur les stress tests macro-prudentiels en Europe / Evolution of risk management methods in banks under Basel III regulation : a study on macroprudential stress tests in Europe

Dhima, Julien 11 October 2019 (has links)
Notre thèse consiste à expliquer, en apportant quelques éléments théoriques, les imperfections des stress tests macro-prudentiels d’EBA/BCE, et de proposer une nouvelle méthodologie de leur application ainsi que deux stress tests spécifiques en complément. Nous montrons que les stress tests macro-prudentiels peuvent être non pertinents lorsque les deux hypothèses fondamentales du modèle de base de Gordy-Vasicek utilisé pour évaluer le capital réglementaire des banques en méthodes internes (IRB) dans le cadre du risque de crédit (portefeuille de crédit asymptotiquement granulaire et présence d’une seule source de risque systématique qui est la conjoncture macro-économique), ne sont pas respectées. Premièrement, il existe des portefeuilles concentrés pour lesquels les macro-stress tests ne sont pas suffisants pour mesurer les pertes potentielles, voire inefficaces si ces portefeuilles impliquent des contreparties non cycliques. Deuxièmement, le risque systématique peut provenir de plusieurs sources ; le modèle actuel à un facteur empêche la répercussion propre des chocs « macro ». Nous proposons un stress test spécifique de crédit qui permet d’appréhender le risque spécifique de crédit d’un portefeuille concentré, et un stress test spécifique de liquidité qui permet de mesurer l’impact des chocs spécifiques de liquidité sur la solvabilité de la banque. Nous proposons aussi une généralisation multifactorielle de la fonction d’évaluation du capital réglementaire en IRB, qui permet d’appliquer les chocs des macro-stress tests sur chaque portefeuille sectoriel, en stressant de façon claire, précise et transparente les facteurs de risque systématique l’impactant. Cette méthodologie permet une répercussion propre de ces chocs sur la probabilité de défaut conditionnelle des contreparties de ces portefeuilles et donc une meilleure évaluation de la charge en capital de la banque. / This thesis explains, with some theoretical elements, the imperfections of the EBA/ECB macro-prudential stress tests, and proposes a new methodology for their application as well as two complementary specific stress tests. We show that macro-prudential stress tests may be irrelevant when the two basic assumptions of the Gordy-Vasicek core model used to assess banks' regulatory capital under the internal ratings-based (IRB) approach for credit risk (an asymptotically granular credit portfolio and a single source of systematic risk, namely the macroeconomic environment) are not respected. First, there exist concentrated portfolios for which macro stress tests are not sufficient to measure potential losses, and are even ineffective when these portfolios involve non-cyclical counterparties. Second, systematic risk can come from several sources, and the current one-factor model does not allow a proper pass-through of "macro" shocks. We propose a specific credit stress test, which makes it possible to capture the specific credit risk of a concentrated portfolio, and a specific liquidity stress test, which makes it possible to measure the impact of specific liquidity shocks on the bank's solvency. We also propose a multifactor generalization of the IRB regulatory capital function, which allows the macro stress test shocks to be applied to each sectoral portfolio, stressing in a clear, precise and transparent way the systematic risk factors impacting it. This methodology passes these shocks through properly to the conditional probability of default of the counterparties in these portfolios and therefore gives a better evaluation of the bank's capital charge.
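The multifactor generalization described above can be sketched from the Vasicek conditional PD formula, PD(z) = Φ((Φ⁻¹(PD) − wᵀz) / √(1 − wᵀw)), with the single loading √ρ replaced by a vector of factor loadings w. The unconditional PD, loadings and shock sizes below are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def conditional_pd(pd, loadings, z):
    """Vasicek-type conditional PD given systematic factor realisations z_k.
    loadings: factor weights w_k with sum(w_k**2) < 1 (idiosyncratic risk remains)."""
    w, z = np.asarray(loadings), np.asarray(z)
    return norm.cdf((norm.ppf(pd) - w @ z) / np.sqrt(1.0 - w @ w))

# One-factor Basel-style case: rho = 0.15, macro factor at its 99.9% adverse level
print(conditional_pd(0.02, [np.sqrt(0.15)], [norm.ppf(0.001)]))
# Hypothetical two-factor sectoral stress: a macro shock plus a sector-specific shock
print(conditional_pd(0.02, [0.25, 0.20], [-2.0, -2.5]))
```

With several factors, each sectoral portfolio can be stressed through the factor it actually loads on, which is the pass-through property the abstract argues the one-factor model lacks.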
50

信用違約機率之預測─Robust Logistic Regression / Prediction of the probability of credit default: Robust Logistic Regression

林公韻, Lin, Kung-yun Unknown Date (has links)
本研究所使用違約機率(Probability of Default, 以下簡稱PD)的預測方法為Robust Logistic Regression(穩健羅吉斯迴歸),本研究發展且應用這個方法是基於下列兩個觀察:1. 極端值常常出現在橫剖面資料,而且對於實證結果往往有很大地影響,因而極端值必須要被謹慎處理。2. 當使用Logit Model(羅吉斯模型)估計違約率時,卻忽略極端值。試圖不讓資料中的極端值對估計結果產生重大的影響,進而提升預測的準確性,是本研究使用Logit Model並混合Robust Regression(穩健迴歸)的目的所在,而本研究是第一篇使用Robust Logistic Regression來進行PD預測的研究。 變數的選取上,本研究使用Z-SCORE模型中的變數,此外,在考慮公司的營收品質之下,亦針對公司的應收帳款週轉率而對相關變數做了調整。 本研究使用了一些信用風險模型效力驗證的方法來比較模型預測效力的優劣,本研究的實證結果為:針對樣本內資料,使用Robust Logistic Regression對於整個模型的預測效力的確有提升的效果;當營收品質成為模型變數的考量因素後,能讓模型有較高的預測效力。最後,本研究亦提出了一些重要的未來研究建議,以供後續的研究作為參考。 / The method implemented for PD (Probability of Default) calculation in this study is Robust Logistic Regression. We develop and apply this method based on two observations: 1. Outliers often appear in cross-sectional data and can strongly influence empirical results, so they must be handled carefully. 2. When the logit model is used to estimate default rates, outliers are ignored. The purpose of combining the logit model with robust regression in this study is to keep outliers in the data from dominating the estimates and thereby improve predictive accuracy; this study is the first to implement Robust Logistic Regression in PD calculation. For variable selection, the same variables as in the Z-SCORE model are used. Furthermore, the quality of a company's revenue is also considered, so the related variables are adjusted by the company's accounts receivable turnover ratio. Several validation methodologies for credit risk models are used to compare predictive power. The empirical results show that, on the in-sample data, implementing Robust Logistic Regression indeed improves the model's predictive ability, and that accounting for revenue quality in the model variables further raises predictive power. Finally, some important suggestions are given for subsequent research.
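The abstract does not spell out the robust estimator, so the sketch below shows one simple way to robustify a logistic fit: flag design-space outliers with a robust covariance estimate and give them zero weight. This illustrates the idea of limiting outlier influence rather than reproducing the thesis's method.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0]) + rng.normal(size=500) > 0).astype(int)
X[:10] += 15  # inject gross outliers into the design space after labelling

# Flag leverage outliers via robust (squared) Mahalanobis distances, then drop their weight
d2 = MinCovDet(random_state=1).fit(X).mahalanobis(X)
w = np.where(d2 > chi2.ppf(0.975, df=X.shape[1]), 0.0, 1.0)

plain = LogisticRegression().fit(X, y)
robust = LogisticRegression().fit(X, y, sample_weight=w)
print("plain coef: ", plain.coef_.round(2))
print("robust coef:", robust.coef_.round(2))
```

The down-weighted fit recovers coefficients close to the data-generating ones, while the plain fit is pulled toward the contaminated rows, which is the effect the thesis aims to avoid.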
