About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Modelos preditivos para LGD / Predictive models for LGD

João Flávio Andrade Silva 04 May 2018 (has links)
Financial institutions that intend to use the advanced Internal Ratings Based (IRB) approach must develop methods to estimate the LGD (Loss Given Default) risk component. Proposals for modeling PD (Probability of Default) have been presented since the 1950s; by contrast, LGD forecasting received greater attention only after the publication of the Basel II Accord. The literature on LGD remains small compared with that on PD, and there is no method as efficient, in terms of accuracy and interpretability, as logistic regression is for PD. Regression models for LGD play a key role in the risk management of financial institutions. Given this importance, this work proposes a methodology for quantifying the LGD risk component. Considering the characteristics reported for the distribution of LGD, and the flexible shapes the beta distribution can assume, we propose estimating LGD with a zero-inflated bimodal beta regression model. We develop the zero-inflated bimodal beta distribution, present some of its properties, including moments, define maximum likelihood estimators, and construct the corresponding regression model. We present asymptotic confidence intervals and hypothesis tests for this model, as well as model selection criteria, and carry out a simulation study to evaluate the performance of the maximum likelihood estimators of the distribution's parameters. For comparison with our proposal we selected beta regression and inflated beta regression, the more usual approaches, and the SVR algorithm, owing to the significant superiority reported in other studies.
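The zero-inflated bimodal beta model described above combines a point mass at zero with a two-component beta mixture on (0, 1). A minimal illustrative simulation follows; all parameter values are assumptions for the sketch, not estimates from the thesis:

```python
import random

# Sketch: LGD is 0 with probability p0, otherwise drawn from a mixture of
# Beta(a1, b1) (low-loss mode) and Beta(a2, b2) (high-loss mode).
# All parameter values below are illustrative assumptions.

def zib_beta_sample(rng, p0=0.3, w=0.5, a1=2.0, b1=8.0, a2=8.0, b2=2.0):
    """Draw one LGD value from the zero-inflated bimodal beta model."""
    if rng.random() < p0:
        return 0.0
    if rng.random() < w:
        return rng.betavariate(a1, b1)
    return rng.betavariate(a2, b2)

def zib_beta_mean(p0=0.3, w=0.5, a1=2.0, b1=8.0, a2=8.0, b2=2.0):
    """Closed-form mean: (1 - p0) * (w * a1/(a1+b1) + (1-w) * a2/(a2+b2))."""
    return (1.0 - p0) * (w * a1 / (a1 + b1) + (1.0 - w) * a2 / (a2 + b2))

rng = random.Random(42)
draws = [zib_beta_sample(rng) for _ in range(200_000)]
sim_mean = sum(draws) / len(draws)
print(round(zib_beta_mean(), 4))  # ~0.35, the analytic mean under these parameters
print(round(sim_mean, 2))
```

The simulated mean matches the closed-form moment, which is the kind of check the thesis's simulation study performs for its maximum likelihood estimators.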
12

A Dual-Lens Approach to Loss Given Default Estimation: Traditional Methods and Variable Analysis / En metod med två linser för att uppskatta Loss Given Default: Traditionella metoder och variabelanalys

Jaeckel, William, Versteegh, Nicolai January 2023 (has links)
This report examines different approaches to estimating Loss Given Default through a comparison of traditional estimation methods, together with a deeper variable analysis of micro, small, and medium-sized companies using primarily regression decision trees. The comparative study concluded that estimating Loss Given Default depends heavily on business-specific factors and data variety. While regression models offer interpretability and machine learning techniques offer superior prediction, model selection should balance complexity, computational demands, ease of implementation, and overall performance. From the variable analysis, loan size and guarantor property ownership emerged as the key drivers of a lower Loss Given Default.
13

A multi-gene symbolic regression approach for predicting LGD: A benchmark comparative study

Tuoremaa, Hanna January 2023 (has links)
Under the Basel Accords for measuring regulatory capital requirements, the credit risk parameters probability of default (PD), exposure at default (EAD), and loss given default (LGD) may be measured with a bank's own estimates under the internal ratings-based approach. The estimated parameters are also the foundation for understanding the actual risk in a bank's credit portfolio, so the predictive performance of such models is worth examining. Predictive models for the credit risk parameter LGD have shown low performance, and LGD values are generally hard to estimate. The main purpose of this thesis is to analyse the predictive performance of a multi-gene genetic programming approach to symbolic regression against three benchmark regression models. The goal of multi-gene symbolic regression is to capture the underlying relationship in the data through a linear combination of a set of generated mathematical expressions. The benchmark models are Logit Transformed Regression, Beta Regression, and Regression Tree, all frequently used in the area. The data used to compare the models is a set of randomly selected, de-identified loans from the portfolios of underlying U.S. residential mortgage-backed securities retrieved from International Finance Research. The conclusion from implementing and comparing the models is that the credit risk parameter LGD remains difficult to estimate: the symbolic regression approach did not yield better predictive ability than the benchmark models and did not appear to find the underlying relationship in the data. The benchmark models are more user-friendly, are easier to implement, and require less computational complexity than symbolic regression.
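The core idea of multi-gene symbolic regression, a linear combination of generated mathematical expressions, can be sketched as follows. Real multi-gene GP evolves the gene pool with genetic operators; here the pool is fixed for illustration, the data are synthetic, and all names are assumptions of the sketch:

```python
import math
import random

# Sketch: fit y as an intercept plus a linear combination of "genes"
# (candidate expressions of x), via least squares on synthetic data.

def gauss_solve(A, b):
    """Solve A x = b with Gaussian elimination and partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Candidate genes: an illustrative primitive set (fixed, not evolved).
genes = [lambda x: x, lambda x: x * x, lambda x: math.sqrt(x), lambda x: math.log1p(x)]

rng = random.Random(0)
xs = [rng.uniform(0.0, 4.0) for _ in range(300)]
ys = [0.7 * math.sqrt(x) + 0.2 * x * x + rng.gauss(0.0, 0.01) for x in xs]

# Design matrix: intercept + one column per gene; fit by normal equations.
X = [[1.0] + [g(x) for g in genes] for x in xs]
d = len(X[0])
A = [[sum(row[i] * row[j] for row in X) for j in range(d)] for i in range(d)]
b = [sum(X[t][i] * ys[t] for t in range(len(xs))) for i in range(d)]
coef = gauss_solve(A, b)

pred = [sum(c * v for c, v in zip(coef, row)) for row in X]
mean_y = sum(ys) / len(ys)
r2 = 1.0 - sum((ys[i] - pred[i]) ** 2 for i in range(len(ys))) / \
     sum((v - mean_y) ** 2 for v in ys)
print(round(r2, 3))  # close to 1.0 since the true function lies in the gene span
```

When the true relationship lies in the span of the genes, the fit is near perfect; the thesis's finding is that on real LGD data no such span is found.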
14

Credit Risk in the Macroprudential Framework: Three Essays

Seidler, Jakub January 2012 (has links)
Charles University in Prague, Faculty of Social Sciences, Institute of Economic Studies. Dissertation: Credit Risk in the Macroprudential Framework: Three Essays. Author: PhDr. Jakub Seidler. Supervisor: prof. Ing. Oldřich Dědek, CSc. Academic Year: 2011/2012. Abstract: This thesis focuses on proper credit risk identification with respect to macroprudential policies, which should mitigate systemic risk accumulation and contribute to higher financial stability of the financial sector. The first essay deals with a key credit risk parameter, Loss Given Default (LGD). We illustrate how LGD can be estimated with the help of an adjusted Mertonian structural approach. We present a derivation of the formula for expected LGD and a sensitivity analysis with respect to the company's other structural parameters. Finally, we estimate five-year expected LGDs for companies listed on the Prague Stock Exchange and find that the average LGD for the analyzed sample is around 20-50%. The second essay examines how to determine whether the observed level of private sector credit is excessive in the context of the "countercyclical capital buffer", a macroprudential tool proposed in the new regulatory framework of Basel III by the Basel Committee on Banking Supervision. An empirical analysis of selected Central and...
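The structural idea behind the first essay can be illustrated with the plain textbook Merton setup (the thesis uses an adjusted version): asset value V_T is lognormal, default occurs when V_T falls below the debt face value D, and recovery is V_T / D, giving E[LGD | default] = 1 − V0·e^{rT}·Φ(−d1) / (D·Φ(−d2)). Parameters below are illustrative assumptions:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_expected_lgd(V0, D, r, sigma, T):
    """Expected LGD conditional on default under plain Merton assumptions:
    V_T lognormal, default iff V_T < D, recovery V_T / D.
    Returns (expected LGD given default, probability of default)."""
    sT = sigma * math.sqrt(T)
    d1 = (math.log(V0 / D) + (r + 0.5 * sigma ** 2) * T) / sT
    d2 = d1 - sT
    pd = norm_cdf(-d2)                                   # P(V_T < D)
    ev_given_def = V0 * math.exp(r * T) * norm_cdf(-d1) / pd
    return 1.0 - ev_given_def / D, pd

lgd, pd = merton_expected_lgd(V0=120.0, D=100.0, r=0.02, sigma=0.35, T=5.0)
print(round(lgd, 3), round(pd, 3))
```

The closed form can be verified against a direct Monte Carlo simulation of the lognormal asset value, which is a useful sanity check on any such derivation.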
15

Model Risk Management and Ensemble Methods in Credit Risk Modeling

Sexton, Sean January 2022 (has links)
The number of statistical and mathematical credit risk models that financial institutions use and manage has steadily increased in recent years as a result of international and domestic regulatory pressures. This thesis examines the evolution of model risk management and provides guidance on how to effectively build and manage different bagging and boosting machine learning techniques for estimating expected credit losses. It examines the pros and cons of these machine learning models and benchmarks them against more conventional models used in practice. It also examines methods for improving their interpretability in order to gain comfort and acceptance from auditors and regulators. To the best of the author's knowledge, there are no academic publications that review, compare, and provide effective model risk management guidance on these machine learning techniques for the purpose of estimating expected credit losses. This thesis is intended for academics, practitioners, auditors, and regulators working in the model risk management and expected credit loss forecasting space. / Dissertation / Doctor of Philosophy (PhD)
16

A taxa de recuperação de créditos ruins em bancos comerciais privados brasileiros / The recovery rate of bad loans in Brazilian private commercial banks

Araújo, Evaristo Donato 07 April 2004 (has links)
Credit risk arises from the possibility that the debtor will not pay the debt at the maturity date and in the promised amount. When the debtor does not pay the debt in full, he or she is said to be in default, and the creditor incurs a loss. The loss can be reduced, however, if the debtor pays part of the debt. Measuring a debtor's probability of default has been the subject of studies for decades, whereas measuring how much can be received from a defaulted credit, the recovery rate, has received attention only recently, and mostly for large companies in United States financial markets. We defined a recovery rate based on the financial reports of Brazilian commercial banks and tracked the path of this variable pari passu with the default rate, defined from the same reports. We established a theoretical framework and formed hypotheses on how variables such as default rates and other credit quality indicators, economic activity indicators, nominal and real interest rates, and capital markets indicators could explain variations in the recovery rate we defined. We gathered semiannual information from 46 Brazilian private commercial banks covering the period between June 1994 and December 2002. These institutions were segmented by their share of the credit volume of the private banking industry in Brazil and by the origin of their capital. Statistical models were run on explanatory variables based on the original data and on variables obtained from principal components analysis. The models were able to explain most of the variation observed in the recovery rate we defined for the segments studied. The best models showed that variations in the recovery rate could be explained by default rates and other indicators of credit quality, economic activity indicators, and capital markets indicators.
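The principal-components step mentioned above can be sketched for the two-variable case, where the first component of the covariance matrix has a closed form. The data below are synthetic correlated "credit quality indicators", not the study's data:

```python
import math
import random

# Sketch: extract the first principal component of two correlated indicators
# and report the share of total variance it explains. Synthetic data.

rng = random.Random(9)
n = 500
f = [rng.gauss(0.0, 1.0) for _ in range(n)]            # common factor
a = [f[i] + rng.gauss(0.0, 0.5) for i in range(n)]     # indicator 1
b = [f[i] + rng.gauss(0.0, 0.5) for i in range(n)]     # indicator 2

def cov(u, v):
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    return sum((u[i] - mu) * (v[i] - mv) for i in range(len(u))) / (len(u) - 1)

saa, sbb, sab = cov(a, a), cov(b, b), cov(a, b)
# Largest eigenvalue of the 2x2 covariance matrix [[saa, sab], [sab, sbb]].
lam = 0.5 * (saa + sbb + math.sqrt((saa - sbb) ** 2 + 4.0 * sab * sab))
vx, vy = sab, lam - saa                # eigenvector direction for lam
norm = math.hypot(vx, vy)
vx, vy = vx / norm, vy / norm
share = lam / (saa + sbb)              # variance explained by the first PC
pc1 = [vx * a[i] + vy * b[i] for i in range(n)]   # scores usable as a regressor
print(round(share, 2))
```

With a strong common factor the first component carries most of the variance, which is why regressions on principal components can summarise many correlated indicators.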
17

Modeling credit risk for an SME loan portfolio: An Error Correction Model approach

Lindgren, Jonathan January 2017 (has links)
Since the global financial crisis of 2008, several major regulations have been implemented to ensure that banks follow sound risk management. Among these are the Basel II Accords, which impose capital requirements for credit risk. The core measures of credit risk evaluation are the Probability of Default and the Loss Given Default. The Basel II Advanced Internal Ratings-Based Approach allows banks to model these measures for individual portfolios and make their own evaluations. In compliance with that approach, this thesis evaluates the use of an Error Correction Model, a model proven strong in stress testing, for modeling the Probability of Default. Furthermore, a Loss Given Default function is implemented that ties Probability of Default and Loss Given Default to systematic risk. The Error Correction Model is implemented on an SME portfolio from one of the "big four" banks in Sweden. The model is evaluated and stress tested with the European Banking Authority's 2016 stress test scenario and analyzed, with promising results.
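An Error Correction Model of the kind used above ties short-run changes to deviations from a long-run equilibrium. A minimal two-step Engle-Granger sketch on synthetic cointegrated data follows; the data-generating process and all parameters are illustrative assumptions, not the thesis's specification:

```python
import random

# Sketch: simulate y cointegrated with x (y = 2x + u, u stationary AR(1)),
# then fit dy_t = a0 + alpha * ect_{t-1} + g * dx_t, expecting alpha < 0.

def ols(X, y):
    """Least squares via normal equations and Gaussian elimination."""
    d = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(d)] for i in range(d)]
    bb = [sum(X[t][i] * y[t] for t in range(len(y))) for i in range(d)]
    M = [A[i] + [bb[i]] for i in range(d)]
    for c in range(d):
        p = max(range(c, d), key=lambda rr: abs(M[rr][c]))
        M[c], M[p] = M[p], M[c]
        for rr in range(c + 1, d):
            f = M[rr][c] / M[c][c]
            for k in range(c, d + 1):
                M[rr][k] -= f * M[c][k]
    beta = [0.0] * d
    for rr in range(d - 1, -1, -1):
        beta[rr] = (M[rr][d] - sum(M[rr][k] * beta[k]
                                   for k in range(rr + 1, d))) / M[rr][rr]
    return beta

rng = random.Random(7)
n = 600
x, u = [0.0], [0.0]
for _ in range(n - 1):
    x.append(x[-1] + rng.gauss(0.0, 1.0))        # x: random walk
    u.append(0.5 * u[-1] + rng.gauss(0.0, 1.0))  # u: stationary AR(1)
y = [2.0 * x[t] + u[t] for t in range(n)]        # y cointegrated with x

# Step 1: estimate the long-run relation y = b0 + b1 * x.
b0, b1 = ols([[1.0, x[t]] for t in range(n)], y)
ect = [y[t] - b0 - b1 * x[t] for t in range(n)]  # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term.
X2 = [[1.0, ect[t - 1], x[t] - x[t - 1]] for t in range(1, n)]
dy = [y[t] - y[t - 1] for t in range(1, n)]
a0, alpha, g = ols(X2, dy)
print(round(alpha, 2), round(g, 2))  # alpha negative (about -0.5 here by construction)
```

The negative coefficient on the lagged error-correction term is the defining feature: deviations from equilibrium are pulled back, which is what makes the model attractive for stress testing.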
18

Estimation of Loss Given Default Distributions for Non-Performing Loans Using Zero-and-One Inflated Beta Regression Type Models / Estimering av förluster vid fallissemang för icke-presterade lån genom applicering av utvidgad betaregression

Ljung, Carolina, Svedberg, Maria January 2020 (has links)
This thesis investigates three techniques for estimating the loss given default of non-performing consumer loans, as a contribution to a credit risk evaluation model compliant with the regulations stipulated by the Basel Accords, which regulate the capital requirements of European financial institutions. First, multiple linear regression is applied; thereafter, zero-and-one inflated beta regression is implemented in two versions, with and without Bayesian inference. The model performances confirm that modeling loss given default data is challenging; however, the results show that zero-and-one inflated beta regression is superior to the other models in predicting LGD. It should be recognized that all models had difficulties distinguishing low-risk loans, while the prediction accuracy for riskier loans, which result in larger losses, was higher. For future research, it is recommended to include macroeconomic variables in the models to capture economic downturn conditions, and to adopt decision trees, for example by applying machine learning.
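The zero-and-one inflated beta model used above adds point masses at both endpoints, so the mean decomposes as E[LGD] = p1 + (1 − p0 − p1)·a/(a + b). A minimal sketch with assumed parameter values, checked by simulation:

```python
import random

# Sketch: LGD is 0 with probability p0 (full recovery), 1 with probability p1
# (total loss), and Beta(a, b) otherwise. Parameter values are assumptions.

def zoib_sample(rng, p0=0.25, p1=0.15, a=2.0, b=3.0):
    v = rng.random()
    if v < p0:
        return 0.0
    if v < p0 + p1:
        return 1.0
    return rng.betavariate(a, b)

def zoib_mean(p0=0.25, p1=0.15, a=2.0, b=3.0):
    """E[LGD] = p1 + (1 - p0 - p1) * a / (a + b)."""
    return p1 + (1.0 - p0 - p1) * a / (a + b)

rng = random.Random(3)
draws = [zoib_sample(rng) for _ in range(200_000)]
print(round(zoib_mean(), 3))              # ~0.39 under the assumed parameters
print(round(sum(draws) / len(draws), 2))
```

The endpoint probabilities p0 and p1 are what the regression versions model with covariates, alongside the beta mean for losses strictly between 0 and 1.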
19

Portfolio Risk Modelling in Venture Debt / Kreditriskmodellering inom Venture Debt

Eriksson, John, Holmberg, Jacob January 2023 (has links)
This thesis project is an experimental study of how to approach quantitative portfolio credit risk modelling in Venture Debt portfolios. Facing a lack of applicable default data from ArK and publicly available sets, and seeking to capture companies that fail to service debt obligations before defaulting per se, we present an approach to risk modeling based on trends in revenue. The main framework revolves around driving a Monte Carlo simulation with copulas to predict future revenue scenarios across a portfolio of early-stage technology companies. Three models, a random Gaussian walk, a linear dynamic system, and an autoregressive integrated moving average (ARIMA) time series, are implemented and evaluated in terms of their influence on portfolio Value-at-Risk. The model performance confirms that modeling portfolio risk in Venture Debt is challenging, especially due to the lack of sufficient data and the resulting heavy reliance on assumptions. However, the empirical results for Value-at-Risk and Expected Shortfall are in line with expectations. The evaluated portfolio is still at an early stage, with a majority of assets not yet in their repayment period, and consequently the spread of potential losses within one year is very tight. It should further be recognized that the scope, in terms of explanatory variables for sales and model complexity, has been narrowed and simplified for computational benefits, transparency, and communicability. The main conclusion drawn is that alternative approaches to modeling Venture Debt risk are fully possible and should improve in reliability and accuracy as more data feed the model. For future research, it is recommended to incorporate macroeconomic variables as well as similar-company analysis to better capture macro, funding, and sector conditions. It is further suggested to extend the set of financial and operational explanatory variables for sales through machine learning or neural networks.
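The copula-driven Monte Carlo idea above can be sketched with a one-factor Gaussian copula: correlated revenue shocks for a small portfolio, a loss when a company's shock breaches a default threshold, and empirical VaR and Expected Shortfall. Portfolio composition, correlation, severity, and thresholds are all illustrative assumptions:

```python
import math
import random

# Sketch: one-factor Gaussian copula Monte Carlo for portfolio VaR / ES.
# All portfolio figures below are assumed for illustration.

EXPOSURES = [10.0, 8.0, 6.0, 4.0]  # loan amounts (assumed)
LGD = 0.8                          # assumed loss severity on default
RHO = 0.4                          # common-factor correlation (assumed)
DEFAULT_Z = -1.5                   # revenue shock threshold triggering default

def simulate_losses(n_paths, rng):
    losses = []
    for _ in range(n_paths):
        common = rng.gauss(0.0, 1.0)  # shared systematic factor
        loss = 0.0
        for ead in EXPOSURES:
            z = math.sqrt(RHO) * common + \
                math.sqrt(1.0 - RHO) * rng.gauss(0.0, 1.0)
            if z < DEFAULT_Z:         # revenue shock breaches threshold
                loss += ead * LGD
        losses.append(loss)
    return losses

def var_es(losses, level=0.95):
    """Empirical Value-at-Risk and Expected Shortfall at the given level."""
    s = sorted(losses)
    k = int(level * len(s))
    return s[k], sum(s[k:]) / len(s[k:])

rng = random.Random(11)
losses = simulate_losses(50_000, rng)
var95, es95 = var_es(losses)
print(var95, round(es95, 2))
```

Expected Shortfall is the average loss beyond the VaR quantile, so it is never smaller than VaR, a property worth asserting in any implementation.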
20

Loss Given Default Estimation with Machine Learning Ensemble Methods / Estimering av förlust vid fallissemang med ensembelmetoder inom maskininlärning

Velka, Elina January 2020 (has links)
This thesis evaluates the performance of three machine learning methods in predicting Loss Given Default (LGD). LGD can be seen as the opposite of the recovery rate, i.e. the ratio of an outstanding loan that the loan issuer would not be able to recover if the customer were to default. The methods investigated are decision trees, random forests, and boosted methods. All of the methods performed well in predicting the cases where the loan is not recovered at all, LGD = 1 (100%), or is totally recovered, LGD = 0 (0%). When the models were evaluated on a dataset from which the observations with LGD = 1 had been removed, a significant decrease in performance was observed. The random forest model built on an unbalanced training dataset showed better performance on the test dataset that included values of LGD = 1, while the random forest model built on a balanced training dataset performed better on the test set from which the observations with LGD = 1 had been removed. The boosted models evaluated in this study showed less accurate predictions than the other methods. Overall, the random forest models performed slightly better than the decision tree models, but the computational time (the cost) was considerably longer when running the random forest models. Decision tree models are therefore suggested for prediction of Loss Given Default.
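The regression-tree idea behind the decision tree and random forest models above can be reduced to its smallest unit: a one-split "stump" that chooses the threshold minimising squared error and predicts the mean LGD on each side. Real trees apply this split search recursively, and random forests average many such trees on bootstrap samples. Data here are synthetic and illustrative:

```python
import random

# Sketch: a single-split regression stump for LGD on one binary feature
# (secured vs unsecured), on synthetic data.

def fit_stump(xs, ys):
    """Return (threshold, left_mean, right_mean) minimising total SSE."""
    best = None
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    for cut in range(1, len(xs)):
        left = [ys[order[i]] for i in range(cut)]
        right = [ys[order[i]] for i in range(cut, len(xs))]
        ml = sum(left) / len(left)
        mr = sum(right) / len(right)
        sse = sum((v - ml) ** 2 for v in left) + \
              sum((v - mr) ** 2 for v in right)
        if best is None or sse < best[0]:
            thr = (xs[order[cut - 1]] + xs[order[cut]]) / 2.0
            best = (sse, thr, ml, mr)
    return best[1], best[2], best[3]

rng = random.Random(5)
# Synthetic portfolio: unsecured loans (x = 0) tend toward full loss,
# secured loans (x = 1) tend toward full recovery.
xs, ys = [], []
for _ in range(400):
    secured = rng.random() < 0.5
    xs.append(1.0 if secured else 0.0)
    ys.append(rng.betavariate(2, 8) if secured else rng.betavariate(8, 2))

thr, ml, mr = fit_stump(xs, ys)
mean_y = sum(ys) / len(ys)
sse_const = sum((v - mean_y) ** 2 for v in ys)
sse_stump = sum((v - (ml if x <= thr else mr)) ** 2 for x, v in zip(xs, ys))
print(round(thr, 1), round(ml, 2), round(mr, 2))  # split near 0.5
```

The stump cuts the squared error far below the constant-mean baseline whenever the feature separates the loss modes, which is exactly the behaviour that makes trees effective on bimodal LGD data.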
