  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Transformações da probabilidade de default: do mundo neutro a risco para o mundo real / Transformations of the probability of default: from the risk-neutral world to the real world

Frota, Diego Peterlevitz 24 August 2015 (has links)
This work covers the fundamentals of the relation between the risk-neutral measure and the real-world measure, presenting some known methods for transforming the probability measure associated with each of these two contexts. We show how bonds can be used to estimate the probability of default of their issuers, and explain why this estimate does not, at first, reflect the historically observed data. Using data from Brazilian companies, we estimate the ratio between the risk-neutral and the real-world probability of default. These results, when compared with other similar studies, suggest that the risk premium of Brazilian companies is higher than that of American companies.
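The ratio studied in this thesis can be illustrated with a minimal sketch (all numbers hypothetical): under a reduced-form view, a bond's credit spread implies a risk-neutral default intensity, whose implied PD is then compared with a historically observed real-world PD.

```python
import math

def risk_neutral_pd(spread: float, horizon: float, recovery: float = 0.0) -> float:
    """Approximate risk-neutral PD implied by a credit spread, using the
    reduced-form identity spread ~= (1 - recovery) * default intensity."""
    intensity = spread / (1.0 - recovery)        # constant hazard rate
    return 1.0 - math.exp(-intensity * horizon)  # PD over the horizon

# Hypothetical inputs: a 300 bp one-year spread, 40% recovery,
# and a 1% historically observed (real-world) PD.
pd_q = risk_neutral_pd(0.03, 1.0, recovery=0.4)
pd_p = 0.01
ratio = pd_q / pd_p  # risk-neutral / real-world PD ratio, typically > 1
```

With these illustrative inputs the risk-neutral PD exceeds the historical one, reflecting the risk premium the thesis measures for Brazilian issuers.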
32

Uma abordagem Forward-Looking para estimar a PD segundo IFRS9 / A Forward Looking Approach to estimate PD according to IFRS9

Luiz Henrique Outi Kauffmann 20 November 2017 (has links)
This work discusses the PD estimation methodologies used in the financial industry and places them in the context of IFRS 9 and its credit risk requirements. Historically, large universal banks have used a variety of econometric methodologies to model the Probability of Default (PD); one of the most traditional is logistic regression. However, the Expected Credit Loss calculation required by IFRS 9 forces a change of estimation paradigm towards a forward-looking approach. Many institutions and consultancies interpret this as the inclusion of projected factors and variables in the estimation process, so that historical data alone are no longer used to predict default. Within this context, an approach is proposed that combines the estimation of the Probability of Default with the inclusion of a forward-looking factor.
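The idea of adding a forward-looking factor to a logistic-regression PD can be sketched as follows; this is a synthetic illustration (all variable names, coefficients and data are hypothetical, not taken from the thesis), where a projected macro variable enters the model alongside borrower-level features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Borrower-level feature plus a projected (forward-looking) macro factor.
debt_ratio = rng.uniform(0, 1, n)
gdp_forecast = rng.normal(0.02, 0.01, n)  # projected GDP growth per contract
true_logit = -2.0 + 3.0 * debt_ratio - 40.0 * gdp_forecast
default = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

X = np.column_stack([debt_ratio, gdp_forecast])
model = LogisticRegression().fit(X, default)

# PD for the same borrower under a pessimistic vs. a benign macro scenario.
pd_stress = model.predict_proba([[0.8, -0.01]])[0, 1]
pd_base = model.predict_proba([[0.8, 0.03]])[0, 1]
```

The same fitted model then yields scenario-dependent PDs, which is the forward-looking behaviour IFRS 9 asks for.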
33

Formováni cen a výnosností obchodovatelných dluhopisů neobchodovatelných emitentů - "dluhopisové IPO" / Price and return formation of the primary bond issued by nonmarket issuers- Bond's IPO

Sushkova, Alina January 2015 (has links)
The diploma thesis focuses on primary bond issues by non-financial companies on the Prague Stock Exchange (PSE). The theoretical part describes the main parameters of the securities and the financial indicators of the issuing companies that build the risk premium, and discusses options for the risk-free base. The application part evaluates the major factors influencing bond prices and yields, using the example of issues carried out on the PSE.
34

Probability of Default Term Structure Modeling : A Comparison Between Machine Learning and Markov Chains

Englund, Hugo, Mostberg, Viktor January 2022 (has links)
In recent years, numerous so-called Buy Now, Pay Later companies have emerged: financial institutions offering short-term consumer credit contracts. As these institutions have gained popularity, the credit risk they undertake has increased vastly. Simultaneously, the IFRS 9 regulatory requirements must be complied with. Specifically, the Probability of Default (PD) for the entire lifetime of such a contract must be estimated. The collection of incremental PDs over the entire course of the contract is called the PD term structure. Accurate estimates of the PD term structure are desirable since they aid in steering business decisions based on a given risk appetite while staying compliant with current regulations. In this thesis, the efficiency of Machine Learning within PD term structure modeling is examined. Two categories of Machine Learning algorithms, in five variations each, are evaluated: (1) Deep Neural Networks; and (2) Gradient Boosted Trees. The Machine Learning models are benchmarked against a traditional Markov Chain model. The performance of the models is measured by a set of calibration and discrimination metrics, evaluated at each time point of the contract as well as aggregated over the entire time horizon. The results show that Machine Learning can be used efficiently within PD term structure modeling. The Deep Neural Networks outperform the Markov Chain model in all performance metrics, whereas the Gradient Boosted Trees are better in all but one. For short-term predictions, the Machine Learning models barely outperform the Markov Chain model; for long-term predictions, however, they are superior.
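The Markov Chain benchmark can be sketched minimally: with an absorbing default state, the cumulative PD at each time point is the probability mass that has reached that state. The transition matrix and state names below are hypothetical, not the thesis's calibration.

```python
import numpy as np

# Hypothetical monthly transition matrix over states: current, late, default.
# The last state (default) is absorbing.
P = np.array([
    [0.92, 0.06, 0.02],
    [0.40, 0.45, 0.15],
    [0.00, 0.00, 1.00],
])

def pd_term_structure(P: np.ndarray, start_state: int, horizon: int) -> list[float]:
    """Cumulative PD at each step: probability of having been absorbed into
    the default state (last index) within t steps of the chain."""
    dist = np.eye(P.shape[0])[start_state]  # one-hot initial distribution
    curve = []
    for _ in range(horizon):
        dist = dist @ P
        curve.append(float(dist[-1]))
    return curve

curve = pd_term_structure(P, start_state=0, horizon=12)
```

Because default is absorbing, the resulting curve is non-decreasing, which is exactly the shape a PD term structure must have.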
35

BNPL Probability of Default Modeling Including Macroeconomic Factors: A Supervised Learning Approach

Hardin, Patrik, Ingre, Robert January 2021 (has links)
In recent years, the Buy Now Pay Later (BNPL) consumer credit industry associated with e-commerce has rapidly emerged as an alternative to credit cards and traditional consumer credit products. In parallel, the IFRS 9 regulation introduced in 2018 requires creditors to become more proactive in forecasting their Expected Credit Losses and to include the impact of macroeconomic factors. This study evaluates several supervised statistical learning methods for modeling the Probability of Default (PD) of BNPL credit contracts. Furthermore, it analyzes to what extent macroeconomic factors impact the predictions under the requirements of IFRS 9, and was carried out as a case study with the Swedish fintech firm Klarna. The results suggest that XGBoost produces the highest predictive power measured in Precision-Recall and ROC Area Under Curve, with ROC values between 0.80 and 0.91 in three modeled scenarios. Moreover, the inclusion of macroeconomic variables generally improves the Precision-Recall Area Under Curve. Real GDP growth, housing prices, and the unemployment rate are frequently among the most important macroeconomic factors. The findings are in line with previous research on similar industries and contribute to the literature on PD modeling in the BNPL industry, where little previous research was identified.
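A gradient-boosted PD model with macro features can be sketched as follows. This uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost, on fully synthetic data; the feature names, coefficients and AUC are illustrative assumptions, not the study's results.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
# Contract-level features plus a macro factor (unemployment) per contract.
income = rng.lognormal(10, 0.5, n)
utilization = rng.uniform(0, 1, n)
unemployment = rng.normal(0.07, 0.02, n)
true_logit = (-3.0 + 2.5 * utilization + 30.0 * unemployment
              - 0.1 * (np.log(income) - 10))
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
X = np.column_stack([income, utilization, unemployment])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

Swapping the macro column for scenario-specific forecasts would give the scenario-conditional PDs that IFRS 9 expects.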
36

Model Risk Management and Ensemble Methods in Credit Risk Modeling

Sexton, Sean January 2022 (has links)
The number of statistical and mathematical credit risk models that financial institutions use and manage due to international and domestic regulatory pressures in recent years has steadily increased. This thesis examines the evolution of model risk management and provides some guidance on how to effectively build and manage different bagging and boosting machine learning techniques for estimating expected credit losses. It examines the pros and cons of these machine learning models and benchmarks them against more conventional models used in practice. It also examines methods for improving their interpretability in order to gain comfort and acceptance from auditors and regulators. To the best of this author’s knowledge, there are no academic publications which review, compare, and provide effective model risk management guidance on these machine learning techniques with the purpose of estimating expected credit losses. This thesis is intended for academics, practitioners, auditors, and regulators working in the model risk management and expected credit loss forecasting space. / Dissertation / Doctor of Philosophy (PhD)
37

The use of effect sizes in credit rating models

Steyn, Hendrik Stefanus 12 1900 (has links)
The aim of this thesis was to investigate the use of effect sizes to report the results of statistical credit rating models in a more practical way. Rating systems in the form of statistical probability models, such as logistic regression models, are used to forecast the behaviour of clients and to guide business in rating clients as "high" or "low" risk borrowers. Model results were therefore reported in terms of statistical significance as well as in business language (practical significance), which business experts can understand and interpret. In this thesis, statistical results were expressed as effect sizes, such as Cohen's d, which put the results into standardised and measurable units that can be reported practically. These effect sizes indicated the strength of correlations between variables, the contribution of variables to the odds of defaulting, the overall goodness-of-fit of the models, and the models' ability to discriminate between high and low risk customers. / Statistics / M. Sc. (Statistics)
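The Cohen's d mentioned above is a standardised mean difference; a minimal sketch with made-up score samples (the numbers are illustrative, not data from the thesis):

```python
import math

def cohens_d(group_a: list[float], group_b: list[float]) -> float:
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical credit scores for non-defaulted vs. defaulted clients.
d = cohens_d([620, 640, 610, 650, 630], [560, 580, 555, 575, 570])
```

A large d means the score separates the two groups well, which is the "practical significance" the thesis reports alongside p-values.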
39

Credit Risk Modeling And Credit Default Swap Pricing Under Variance Gamma Process

Anar, Hatice 01 August 2008 (has links) (PDF)
In this thesis, the structural model of credit risk and credit derivatives is studied under both the Black-Scholes setting and the Variance Gamma (VG) setting. Under a Variance Gamma process, the distribution of the firm value process becomes asymmetric and leptokurtic. In addition, the jump structure of VG processes allows random default times for the reference entities. Among structural models, the main emphasis is placed on the Black-Cox model, by building a relation between the survival probabilities of the Black-Cox model and the value of a binary down-and-out barrier option. The survival probabilities under the VG setting are calculated via a Partial Integro-Differential Equation (PIDE). Some applications to binary down-and-out barrier options, default probabilities, and Credit Default Swap par spreads are also illustrated.
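For intuition about structural models in the Black-Scholes setting, the simplest case (Merton's model, a precursor of Black-Cox) gives a closed-form PD; the firm parameters below are hypothetical, and this sketch does not cover the barrier feature or the VG/PIDE machinery of the thesis.

```python
import math
from statistics import NormalDist

def merton_pd(V: float, D: float, mu: float, sigma: float, T: float) -> float:
    """Default probability in the classic Merton structural model:
    P(V_T < D) when firm value V follows geometric Brownian motion."""
    d2 = (math.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return NormalDist().cdf(-d2)

# Hypothetical firm: assets 120, debt face value 100, 8% drift,
# 25% asset volatility, one-year horizon.
pd = merton_pd(120, 100, 0.08, 0.25, 1.0)
```

In Black-Cox, default can instead occur at the first passage of a barrier before maturity, which is what links survival probabilities to down-and-out barrier options.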
40

Adaptation des techniques actuelles de scoring aux besoins d'une institution de crédit : le CFCAL-Banque / Adaptation of current scoring techniques to the needs of a credit institution : the Crédit Foncier et Communal d'Alsace et de Lorraine (CFCAL-banque)

Kouassi, Komlan Prosper 26 July 2013 (has links)
Financial institutions face a variety of risks in their operations, including credit risk, market risk and operational risk. These risks relate not only to the nature of the activities performed, but also to predictable external factors, whose instability makes institutions vulnerable to financial risks that they must, for their survival, appropriately identify, analyze, quantify and manage. Among these risks, credit risk is the most feared by banks, given its capacity to generate a systemic crisis. The probability of an individual switching from a riskless to a risky state is thus central to many economic questions; in credit institutions, it takes the form of the probability that a borrower switches from a state of "good risk" to a state of "bad risk". For this quantification, credit institutions increasingly rely on credit-scoring models. This thesis focuses on current credit-scoring techniques tailored to the needs of a credit institution, the CFCAL-banque, which specializes in mortgage-backed loans. We present in particular two nonparametric models (SVM and GAM) and compare their classification performance with that of the logit model traditionally used in banks. Our results show that SVMs perform better if one focuses only on global prediction accuracy; however, they exhibit lower sensitivities than the logit and GAM models. In other words, they are worse at predicting defaulting borrowers. In the current state of our research, we recommend the GAM models, which have a somewhat lower global prediction capability than SVMs but give more balanced sensitivities, specificities and prediction performance. By highlighting targeted credit-scoring models, applying them to real mortgage credit data, and comparing them through their classification performance, this thesis makes an empirical and methodological contribution to research on credit-scoring models.
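The sensitivity/specificity comparison this kind of study performs can be sketched on synthetic data; the models below (logit and SVM via scikit-learn) and the data-generating process are illustrative assumptions, not the thesis's CFCAL dataset or its GAM model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))  # three synthetic borrower features
true_logit = -1.5 + X @ np.array([1.2, -0.8, 0.5])
y = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))  # 1 = defaulted
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def sensitivity_specificity(model):
    """Fit, predict, and return (sensitivity, specificity) on the test set."""
    preds = model.fit(X_tr, y_tr).predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, preds).ravel()
    return tp / (tp + fn), tn / (tn + fp)

sens_logit, spec_logit = sensitivity_specificity(LogisticRegression())
sens_svm, spec_svm = sensitivity_specificity(SVC())
```

Comparing sensitivity (defaulters caught) rather than only overall accuracy is what drives the thesis's preference for the more balanced models.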
