71

Statistical Modeling for Credit Ratings

Vana, Laura, 01 August 2018
This thesis deals with the development, implementation and application of statistical modeling techniques that can be employed in the analysis of credit ratings. Credit ratings are one of the most widely used measures of credit risk and are relevant for a wide array of financial market participants, from investors, as part of their investment decision process, to regulators and legislators as a means of measuring and limiting risk. The majority of credit ratings are produced by the "Big Three" credit rating agencies Standard & Poor's, Moody's and Fitch. Especially in light of the 2007-2009 financial crisis, these rating agencies have been strongly criticized for failing to assess risk accurately and for the lack of transparency in their rating methodology. Nevertheless, they continue to play a powerful role in financial markets and have a huge impact on the cost of funding. These points of criticism call for the development of modeling techniques that can 1) facilitate an understanding of the factors that drive the rating agencies' evaluations and 2) generate insights into the rating patterns that these agencies exhibit. This dissertation consists of three research articles.

The first article focuses on variable selection and assessment of variable importance in accounting-based models of credit risk. The credit risk measure employed in the study is derived from credit ratings assigned by the rating agencies Standard & Poor's and Moody's. To deal with the lack of theoretical foundation specific to this type of model, state-of-the-art statistical methods are employed. Different models are compared based on a predictive criterion, and model uncertainty is accounted for in a Bayesian setting. Parsimonious models are identified after applying the proposed techniques.

The second article proposes the class of multivariate ordinal regression models for the modeling of credit ratings. The model class is motivated by the fact that correlated ordinal data arises naturally in the context of credit ratings. From a methodological point of view, we extend existing model specifications in several directions, by allowing, among other things, for a flexible covariate-dependent correlation structure between the continuous variables underlying the ordinal credit ratings. The proposed models are estimated using composite likelihood methods. Applying this model class to a dataset of multiple credit ratings yields insights into the heterogeneity among the "Big Three". A comprehensive simulation study on the performance of the estimators is provided.

The third article deals with the implementation and application of the model class introduced in the second article. In order to make the class of multivariate ordinal regression models more accessible, the R package mvord and the complementary paper included in this dissertation have been developed. The mvord package is freely available on the Comprehensive R Archive Network (CRAN) and enhances the ready-to-use statistical software for the analysis of correlated ordinal data. In creating the package, a strong emphasis was placed on a user-friendly and flexible design that allows end users to estimate sophisticated models from the implemented model class with ease. The package appeals to practitioners and researchers who deal with correlated ordinal data in various areas of application, ranging from credit risk to medicine and psychology.
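As a quick illustration of the kind of model the third article implements, the sketch below fits a joint ordinal probit model for ratings from several raters with mvord. The data frame ratings_long, its column names and the chosen covariates are hypothetical placeholders, and the exact interface may differ across package versions:

```r
# Sketch: multivariate ordinal regression with mvord (CRAN).
# 'ratings_long' is a hypothetical long-format data set with one row per
# firm-rater pair: an ordinal response 'rating', indices 'firm' and 'rater',
# and firm-level covariates 'leverage' and 'profitability'.
library(mvord)

fit <- mvord(
  formula = MMO(rating, firm, rater) ~ 0 + leverage + profitability,
  data    = ratings_long,
  link    = mvprobit(),                 # multivariate probit link
  error.structure = cor_general(~ 1)    # general correlation among raters
)

summary(fit)   # thresholds, coefficients, correlation parameters per rater pair
```

The composite likelihood estimation described in the second article runs under the hood, so the user only specifies the formula, the link and the error structure.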
72

Uncertainty management in parameter identification

Sui, Liqi, 23 January 2017
In order to obtain more predictive and accurate simulations of mechanical behaviour in practical environments, ever more complex material models have been developed. The characterization of material properties therefore remains a top-priority objective; it requires dedicated identification methods and tests under conditions as close as possible to the real ones. This thesis aims at developing an effective identification methodology for finding material property parameters that takes advantage of all available information. The information used for the identification is theoretical, experimental and empirical: the theoretical information is linked to the mechanical models, whose uncertainty is epistemic; the experimental information consists of full-field kinematic measurements obtained during the test, whose uncertainty is aleatory; the empirical information is the prior information, whose uncertainty is epistemic as well. The main difficulty is that the available information is not always reliable and that the corresponding uncertainties are heterogeneous. This difficulty is overcome by introducing the theory of belief functions. By offering a general framework to represent and quantify heterogeneous uncertainties, it improves the performance of the identification. A strategy based on belief functions is proposed to identify the macro- and micro-scale elastic properties of multi-structure materials. In this strategy, model and measurement uncertainties are analysed and quantified. The strategy is subsequently extended to take prior information into consideration and to quantify its corresponding uncertainty.
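For readers unfamiliar with belief functions, the combination step at the heart of the theory is Dempster's rule; the standard form below is textbook background rather than the thesis's specific formulation. Two information sources with mass functions m_1 and m_2 over a frame of discernment Theta are merged as:

```latex
% Dempster's rule of combination for mass functions m_1, m_2 on a frame \Theta
\[
(m_1 \oplus m_2)(A) \;=\; \frac{1}{1-K} \sum_{B \cap C = A} m_1(B)\, m_2(C),
\qquad \emptyset \neq A \subseteq \Theta,
\]
% where K measures the conflict between the two sources:
\[
K \;=\; \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C).
\]
```

The normalisation by 1 - K is what lets heterogeneous, partially conflicting sources (models, measurements, prior knowledge) be fused within one framework.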
73

Evaluation of empirical approaches to estimate the variability of erosive inputs in river catchments

Gericke, Andreas, 09 December 2013
This dissertation addresses the uncertainty, sensitivity and limitations of large-scale erosion models. The modelling framework consists of the universal soil loss equation (USLE), sediment delivery ratios (SDR) and European data. For several European regions, the relevance of the uncertainty in topographic model parameters, USLE factors and critical yields of suspended solids for the applicability of empirical models to predict sediment yields and SDR of river catchments is systematically evaluated. The comparison of alternative model parameters as well as calibration and validation data shows that even basic modelling decisions are associated with great uncertainties; calibrated models therefore have to be well documented to avoid misapplication. Although a careful choice of non-topographic algorithms can improve the model quality in regional applications, there is no universally best solution. The results also show that SDR models should always be calibrated and evaluated against both sediment yields and SDR. With this approach, a new European soil loss map and an improved SDR model for river catchments north of the Alps and in Southeast Europe are derived. For other parts of Europe, the SDR model is of limited use. The studies on the annual variability of soil erosion reveal that seasonally weighted rainfall data are more appropriate than unweighted data. Despite satisfactory model results, neither careful algorithm choice nor model improvements overcome the limitations of pan-European SDR models. These limitations stem from the mismatch between the modelled soil loss processes and the processes that actually contribute to the observed or critical sediment loads, as well as from the extraordinary sediment mobilisation during floods. Further research on integrating non-USLE processes and heavy-rainfall data, and on disaggregating critical yields, is therefore needed.
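As background, the two standard relations the modelling framework builds on are reproduced below in their textbook form; the thesis's regional parameterisations will differ in detail:

```latex
% Universal soil loss equation (USLE): mean annual soil loss A from
% rainfall erosivity R, soil erodibility K, slope length/steepness LS,
% cover management C and support practice P.
\[
A = R \cdot K \cdot LS \cdot C \cdot P
\]
% Sediment delivery ratio of a catchment: the fraction of gross erosion E
% that reaches the catchment outlet as sediment yield SY.
\[
\mathit{SDR} = \frac{SY}{E}
\]
```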
74

Estimating and Correcting the Effects of Model Selection Uncertainty

Nguefack Tsague, Georges Lucioni Edison, 03 February 2006
No description available.
75

Macroeconometrics with high-dimensional data

Zeugner, Stefan, 12 September 2012
CHAPTER 1:
The default g-priors predominant in Bayesian Model Averaging tend to over-concentrate posterior mass on a tiny set of models - a feature we denote as the 'supermodel effect'. To address it, we propose a 'hyper-g' prior specification whose data-dependent shrinkage adapts posterior model distributions to data quality. We demonstrate the asymptotic consistency of the hyper-g prior and its interpretation as a goodness-of-fit indicator. Moreover, we highlight the similarities between hyper-g and 'Empirical Bayes' priors, and introduce closed-form expressions essential to computational feasibility. The robustness of the hyper-g prior is demonstrated via simulation analysis and by comparing four vintages of economic growth data.

CHAPTER 2:
Ciccone and Jarocinski (2010) show that inference in Bayesian Model Averaging (BMA) can be highly sensitive to small data perturbations. In particular, they demonstrate that the importance attributed to potential growth determinants varies tremendously over different revisions of international income data. They conclude that 'agnostic' priors appear too sensitive for this strand of growth empirics. In response, we show that the instability found owes much to a specific BMA set-up: first, comparing the same countries across data revisions improves robustness; second, much of the remaining variation can be reduced by applying an equally 'agnostic' but flexible prior.

CHAPTER 3:
This chapter explores the link between the leverage of the US financial sector, of households and of non-financial businesses, and real activity. We document that leverage is negatively correlated with the future growth of real activity, and positively linked to the conditional volatility of future real activity and of equity returns. The joint information in sectoral leverage series is more relevant for predicting future real activity than the information contained in any individual leverage series. Using in-sample regressions and out-of-sample forecasts, we show that the predictive power of leverage is roughly comparable to that of macro and financial predictors commonly used by forecasters. Leverage information would not have allowed the 'Great Recession' of 2008-2009 to be predicted any better than conventional macro/financial predictors would.

CHAPTER 4:
Model averaging has proven popular for inference with many potential predictors in small samples. However, it is frequently criticized for a lack of robustness with respect to prediction and inference. This chapter explores the reasons for such robustness problems and proposes to address them by transforming the subset of potential 'control' predictors into principal components in suitable datasets. A simulation analysis shows that this approach yields robustness advantages over both standard model averaging and principal-component-augmented regression. Moreover, we devise a prior framework that extends model averaging to uncertainty over the set of principal components and show that it offers considerable improvements with respect to the robustness of estimates and inference about the importance of covariates. Finally, we empirically benchmark our approach against popular model averaging and PC-based techniques in evaluating financial indicators as alternatives to established macroeconomic predictors of real economic activity.
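For reference, the prior structure Chapter 1 builds on can be written in its standard textbook form; the notation follows Liang et al. (2008) and may differ in detail from the thesis:

```latex
% Zellner's g-prior for the coefficients of candidate model M_j
% with design matrix X_j:
\[
\beta_j \mid g, \sigma^2, M_j \;\sim\; N\!\bigl(0,\; g\,\sigma^2\,(X_j^\top X_j)^{-1}\bigr).
\]
% The hyper-g prior replaces a fixed g with a prior on g, for a > 2:
\[
p(g) \;=\; \frac{a-2}{2}\,(1+g)^{-a/2}, \qquad g > 0,
\]
% equivalent to a Beta(1, a/2 - 1) prior on the shrinkage factor g/(1+g).
```

Because the shrinkage factor g/(1+g) gets its own prior, the amount of shrinkage adapts to data quality rather than being fixed in advance, which is what counteracts the supermodel effect described above.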
