621 |
Marketing control through statistics / Thompson, Paul Christian, January 1936
This item was digitized by the Internet Archive.
|
622 |
Statistical Modeling for Credit Ratings / Vana, Laura, 01 August 2018
This thesis deals with the development, implementation and application of statistical modeling techniques which can be employed in the analysis of credit ratings.
Credit ratings are one of the most widely used measures of credit risk and are relevant for a wide array of financial market participants, from investors, as part of their investment decision process, to regulators and legislators as a means of measuring and limiting risk. The majority of credit ratings are produced by the "Big Three" credit rating agencies Standard & Poor's, Moody's and Fitch. Especially in light of the 2007-2009 financial crisis, these rating agencies have been strongly criticized for failing to assess risk accurately and for the lack of transparency in their rating methodology. However, they continue to play a powerful role as financial market participants and have a large impact on the cost of funding. These points of criticism call for the development of modeling techniques that can (1) facilitate an understanding of the factors that drive the rating agencies' evaluations and (2) generate insights into the rating patterns that these agencies exhibit.
This dissertation consists of three research articles.
The first one focuses on variable selection and assessment of variable importance in accounting-based models of credit risk. The credit risk measure employed in the study is derived from credit ratings assigned
by the rating agencies Standard & Poor's and Moody's. To deal with the lack of theoretical foundation specific to this type of model, state-of-the-art statistical methods are employed. Different models are compared based on a predictive criterion, and model uncertainty is
accounted for in a Bayesian setting. Parsimonious
models are identified after applying the proposed techniques.
The second paper proposes the class of multivariate ordinal regression models for the modeling of credit ratings. The model class is motivated by the fact that correlated ordinal data arises naturally in the context of credit ratings. From a methodological point of view, we
extend existing model specifications in several directions, among others by allowing for a flexible covariate-dependent correlation structure between the continuous variables underlying the ordinal
credit ratings. The estimation of the proposed models is performed using composite likelihood methods. Insights into the heterogeneity among the "Big Three" are gained when applying this model class to the multiple credit ratings dataset. A comprehensive simulation study on the performance of the estimators is provided.
The third research paper deals with the implementation and application of the model class introduced in the second article. In order to make the class of multivariate ordinal regression models more accessible, the R package mvord and the complementary paper included in this dissertation have been developed. The mvord package is available on the "Comprehensive R Archive Network" (CRAN) for free download and enhances the available ready-to-use statistical software for the analysis of correlated ordinal data. In the creation of the package a strong emphasis has
been placed on a user-friendly and flexible design, which allows end users to estimate sophisticated models from the implemented model class with relative ease. The package is aimed at practitioners and researchers who deal with correlated ordinal data in various areas of application, ranging from credit risk to medicine and psychology.
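To give a flavour of the intended workflow, the following sketch (not taken from the thesis) shows how a multivariate ordinal regression for ratings from several agencies might be specified with mvord; the data frame, variable names and covariates are hypothetical, and the exact arguments should be checked against the package documentation.

```r
# Hypothetical long-format data: one row per firm-year and rater, with an
# ordered factor 'rating' and firm-level covariates.
library(mvord)

fit <- mvord(
  formula = MMO(rating, firm_year, rater) ~ 0 + leverage + profitability + size,
  data    = ratings_long,
  link    = mvprobit()     # multivariate probit link; mvlogit() is an alternative
)
summary(fit)               # thresholds, regression coefficients, error correlations
```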
|
623 |
Statistical models for pharmaceutical extremes / Papastathopoulos, Ioannis, January 2015
Drug toxicity is usually triggered by the occurrence of a combination of extreme values of laboratory variables that are collected in clinical trials. Drug-induced liver injury (DILI) has been the most frequent single cause of safety-related drug marketing withdrawals over the past 50 years. The importance of assessing the safety of a drug is illustrated through its pre-marketing evaluation. Safety testing is ubiquitous in all phases of clinical trials, and early detection of toxicity is key to preventing severe adverse events as well as to reducing the huge financial cost of the long-term pre-marketing screening of a new drug. The current applied and methodological interest is in univariate and multivariate extreme value models that are typically fitted to a fraction of the data and form the basis of all subsequent predictions and inferential aspects for the problem under study. Typical challenges that arise in pharmaceutical applications are, among others, the limited source of information, commonly measured by the sample size, and the accurate estimation of the underlying dependence structure. These challenges motivate the present thesis, which focuses on constructing and improving extreme value models that have direct potential application to the pharmaceutical industry. In particular, in this thesis we focus on (i) providing alternatives to univariate extreme value threshold models that can be fitted at lower thresholds and improve the stability of the parameter estimates as well as the efficiency of the estimators; (ii) introducing additional constraints for, and slight changes in, the model formulation and parameter space of two commonly used multivariate extreme value approaches, namely the conditional extremal dependence model and the component-wise maxima approach; these changes are aimed at overcoming complications that have been experienced when using these models, in terms of modelling negatively associated random variables, overcoming identifiability problems, and avoiding invalid inferences; (iii) extending the conditional extremal dependence model to incorporate subject-specific knowledge and a natural ordering between doses in the estimation of the probability of DILI; and (iv) exploring techniques from multivariate analysis and constructing diagnostic measures for estimating the graphical structure of extreme multivariate events.
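As a hedged illustration of the kind of univariate threshold model referred to above (not code from the thesis), the sketch below fits a generalized Pareto distribution to exceedances of a simulated laboratory variable over a high threshold by maximum likelihood, using only base R; the data, threshold choice and starting values are arbitrary.

```r
set.seed(1)
x <- rexp(500)                      # stand-in for a laboratory variable
u <- quantile(x, 0.90)              # threshold: here the empirical 90% quantile
y <- x[x > u] - u                   # exceedances over the threshold

# Negative log-likelihood of the GPD with scale sigma (on the log scale) and shape xi.
nll <- function(par) {
  sigma <- exp(par[1]); xi <- par[2]
  if (abs(xi) < 1e-8) return(sum(log(sigma) + y / sigma))   # exponential limit as xi -> 0
  z <- 1 + xi * y / sigma
  if (any(z <= 0)) return(Inf)                              # outside the support
  sum(log(sigma) + (1 / xi + 1) * log(z))
}

fit <- optim(c(log(sd(y)), 0.1), nll)
c(sigma = exp(fit$par[1]), xi = fit$par[2])                 # point estimates
```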
|
624 |
Statistical Learning in a Bilingual Environment / Tsui, Sin Mei, 30 August 2018
Statistical learning refers to the ability to track regular patterns in sensory input from ambient environments. This learning mechanism can exploit a wide range of statistical structures (e.g., frequency, distribution, and co-occurrence probability). Given its regularities and hierarchical structures, language is essentially a pattern-based system, and therefore researchers have argued that statistical learning is fundamental to language acquisition (e.g., Saffran, 2003). Indeed, young infants and adults can find words in artificial languages by tracking syllable co-occurrence probabilities and extracting words on that basis (e.g., Saffran, Aslin & Newport, 1996a). However, prior studies have mainly focused on whether learners can statistically segment words from a single language; whether learners can segment words from two artificial languages remains largely unknown. Given that the majority of the global population is bilingual (Grosjean, 2010), it is necessary to study whether learners can make use of the statistical learning mechanism to segment words from two language inputs, which is the focus of this thesis. I examined adult and infant learners to answer three questions: (1) Can learners make use of French and English phonetic cues within a single individual's speech to segment words successfully from two languages? (2) Do bilinguals outperform monolinguals? (3) Do specific factors, such as cognitive ability or bilingual experience, underlie any potential bilingual advantage in word segmentation across two languages?
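As a toy illustration of the co-occurrence statistic at work (the materials here are hypothetical, not those used in the thesis), the sketch below builds a continuous syllable stream from three artificial words and computes forward transitional probabilities; within-word transitions come out near 1, while transitions across word boundaries are much lower.

```r
set.seed(7)
words  <- list(c("bi", "da", "ku"), c("pa", "do", "ti"), c("go", "la", "bu"))
stream <- unlist(sample(rep(words, 30)))       # 90 words concatenated without pauses

tab <- table(s1 = head(stream, -1), s2 = tail(stream, -1))
tp  <- tab / rowSums(tab)                      # forward transitional probability P(s2 | s1)

tp["bi", "da"]   # within-word transition: "bi" is always followed by "da"
tp["ku", "pa"]   # word-boundary transition: roughly 1/3 here
```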
In Study 1, adult learners could generally make use of French and English phonetic cues to segment words from two overlapping artificial languages. Importantly, simultaneous bilinguals who had learned French and English from birth segmented more correct words than monolinguals, multilinguals, and sequential French-English bilinguals. Early bilingual experience may lead learners to be more flexible when processing information in new environments and/or more sensitive to subtle cues that mark changes in the language input. Further, individuals' cognitive abilities were not related to segmentation performance, suggesting that the observed simultaneous bilingual segmentation advantage is not related to any bilingual cognitive advantage (Bialystok, Craik, & Luk, 2012).
In Study 2, I tested 9.5-month-olds, who are currently discovering words in their natural environment, in an infant version of the adult task. Surprisingly, monolingual, but not bilingual, infants successfully used French and English phonetic cues to segment words from two languages. The observed difference in segmentation may be related to how infants process native and non-native phonetic cues, as the French phonetic cues are non-native to monolingual infants but native to bilingual infants. Finally, the observed difference in segmentation ability was again not driven by cognitive skills.
In sum, the current thesis provides evidence that both adults and infants can make use of phonetic cues to statistically segment words from two languages. The implications of why early bilingualism plays a role in determining learners' segmentation ability are discussed.
|
625 |
Informative censoring in transplantation statistics / Staplin, Natalie, January 2012
Observations are informatively censored when there is dependence between the time to the event of interest and the time to censoring. When considering the time to death of patients on the waiting list for a transplant, particularly a liver transplant, patients that are removed for transplantation are potentially informatively censored, as generally the most ill patients are transplanted. If this censoring is assumed to be non-informative then any inferences may be misleading. The existing methods in the literature that account for informative censoring are applied to data to assess their suitability for the liver transplantation setting. As the amount of dependence between the time to failure and time to censoring variables cannot be identified from the observed data, estimators that give bounds on the marginal survival function for a given range of dependence values are considered. However, the bounds are too wide to be of use in practice. Sensitivity analyses are also reviewed, as these allow us to assess how inferences are affected by assuming differing amounts of dependence and whether methods that account for informative censoring are necessary. Of the other methods considered, inverse probability of censoring weighted (IPCW) estimators were found to be the most useful in practice. Sensitivity analyses for parametric models are less computationally intensive than those for Cox models, although they are not suitable for all sets of data. Therefore, we develop a sensitivity analysis for piecewise exponential models that is still quick to apply. These models are flexible enough to be suitable for a wide range of baseline hazards. The sensitivity analysis suggests that for the liver transplantation setting the inferences about time to failure are sensitive to informative censoring. A simulation study is carried out that shows that the sensitivity analysis is accurate in many situations, although not when there is a large proportion of censoring in the data set. Finally, a method to calculate the survival benefit of liver transplantation is adapted to make it more suitable for UK data. This method calculates the expected change in post-transplant mortality relative to waiting list mortality. It uses IPCW methods to account for the informative censoring encountered when estimating waiting list mortality, to ensure the estimated survival benefit is as accurate as possible.
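The sketch below illustrates, under simplifying assumptions, how IPCW can be set up for this problem: removal for transplant is treated as the censoring event, each patient's probability of still being on the list at their own follow-up time is predicted from a Cox model, and its inverse is used as a weight in the waiting-list mortality model. All variable names (waitlist, time, died, transplanted, severity) are hypothetical, the weights here are fixed rather than time-varying, and this is not the thesis code.

```r
library(survival)

# Censoring model: time to removal for transplantation, given disease severity.
cens_fit <- coxph(Surv(time, transplanted) ~ severity, data = waitlist)

# Probability of not yet having been removed at each patient's own follow-up time.
surv_c <- sapply(seq_len(nrow(waitlist)), function(i) {
  sf <- survfit(cens_fit, newdata = waitlist[i, ])
  summary(sf, times = waitlist$time[i])$surv
})

waitlist$w <- 1 / pmax(surv_c, 0.05)   # truncate small probabilities to stabilise the weights

# Weighted model for waiting-list mortality.
death_fit <- coxph(Surv(time, died) ~ severity, data = waitlist, weights = w)
summary(death_fit)
```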
|
626 |
Essays in statistical arbitrage / Alsayed, Hamad, January 2014
This three-paper thesis explores the important relationship between arbitrage and price efficiency. Chapter 3 investigates the risk-bearing capacity of arbitrageurs under varying degrees and types of risk. A novel stochastic process is introduced to the literature that is capable of jointly capturing fundamental risk factors which are absent from extant specifications. Using stochastic optimal control theory, the degree to which arbitrageurs' investment behaviour is affected by aversion to these risks is analytically characterized, as well as conditions under which arbitrageurs cut losses, effectively exacerbating pricing disequilibria. Chapter 4 explores the role of arbitrage in enforcing price parity between cross-listed securities. This work employs an overlooked mechanism by which arbitrage can maintain parity, namely pairs-trading, which is cheaper to implement than the mechanism most commonly employed in the literature on cross-listed securities. This work shows that arbitrage is successful at enforcing parity between cross-listed securities, and also documents the main limits to arbitrage in this market setting. Chapter 5 examines the extent to which arbitrage contributes to the flow of information across markets. It is shown that microscopic lead/lag relationships of the order of a few hundred milliseconds exist across three major international index futures. Importantly, these delays last long enough, and induce pricing anomalies large enough, to compensate arbitrageurs for appropriating pricing disequilibria. These results accord with the view that temporary disequilibria incentivise arbitrageurs to correct pricing anomalies.
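As a rough, self-contained illustration of the pairs-trading mechanism referred to in Chapter 4 (simulated prices, not the data or strategy used in the thesis), the sketch below forms a spread between two cross-listed log-price series and opens a position when the spread's z-score is extreme; the hedge ratio, thresholds and full-sample standardisation are illustrative simplifications.

```r
set.seed(42)
n   <- 500
p_a <- cumsum(rnorm(n, sd = 0.01)) + 4          # hypothetical log price of listing A
p_b <- p_a + rnorm(n, sd = 0.02)                # cross-listing B tracks A with noise

beta   <- coef(lm(p_b ~ p_a))[2]                # hedge ratio from a simple regression
spread <- p_b - beta * p_a
z      <- (spread - mean(spread)) / sd(spread)

position <- ifelse(z > 2, -1, ifelse(z < -2, 1, 0))   # short the spread when rich, long when cheap
head(data.frame(z = round(z, 2), position), 10)
```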
|
627 |
Logic and lattices for a statistics advisor / O'Keefe, Richard A., January 1987
The work partially reported here concerned the development of a prototype Expert System for giving advice about Statistics experiments, called ASA, and an inference engine to support ASA, called ABASE. This involved discovering what knowledge was necessary for performing the task at a satisfactory level of competence, working out how to represent this knowledge in a computer, and how to process the representations efficiently. Two areas of Statistical knowledge are described in detail: the classification of measurements and statistical variables, and the structure of elementary statistical experiments. A knowledge representation system based on lattices is proposed, and it is shown that such representations are learnable by computer programs, and lend themselves to particularly efficient implementation. ABASE was influenced by MBASE, the inference engine of MECHO [Bundy et al 79a]. Both are theorem provers working on typed function-free Horn clauses, with controlled creation of new entities. Their type systems and proof procedures are radically different, though, and ABASE is "conversational" while MBASE is not.
|
628 |
The statistical mechanics of dense fluids / Tildesley, D. J., January 1976
No description available.
|
629 |
Statistical aspects of credit scoring / Henley, William Edward, January 1994
This thesis is concerned with statistical aspects of credit scoring, the process of determining how likely an applicant for credit is to default with repayments. In Chapters 1-4 a detailed introduction to credit scoring methodology is presented, including evaluation of previously published work on credit scoring and a review of discrimination and classification techniques. In Chapter 5 we describe different approaches to measuring the absolute and relative performance of credit scoring models. Two significance tests are proposed for comparing the bad rate amongst the accepts (or the error rate) from two classifiers. In Chapter 6 we consider different approaches to reject inference, the procedure of allocating class membership probabilities to the rejects. One reason for needing reject inference is to reduce the sample selection bias that results from using a sample consisting only of accepted applicants to build new scorecards. We show that the characteristic vectors for the rejects do not contain information about the parameters of the observed data likelihood, unless extra information or assumptions are included. Methods of reject inference which incorporate additional information are proposed. In Chapter 7 we make comparisons of a range of different parametric and nonparametric classification techniques for credit scoring: linear regression, logistic regression, projection pursuit regression, Poisson regression, decision trees and decision graphs. We conclude that classifier performance is fairly insensitive to the particular technique adopted. In Chapter 8 we describe the application of the k-nearest-neighbour (k-NN) method to credit scoring. We propose using an adjusted version of the Euclidean distance metric, which is designed to incorporate knowledge of class separation contained in the data. We evaluate properties of the k-NN classifier through empirical studies and make comparisons with existing techniques.
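To make the idea concrete, here is a minimal sketch (not the thesis implementation) of a k-NN credit-scoring rule whose distance metric stretches the direction separating the class means, a simple stand-in for the class-separation adjustment described above; the values of k and the stretch factor lambda are arbitrary illustrative choices.

```r
# train_y is coded 1 = defaulter (bad), 0 = good payer; rows of test_x are scored applicants.
knn_adjusted <- function(train_x, train_y, test_x, k = 11, lambda = 4) {
  train_x <- as.matrix(train_x); test_x <- as.matrix(test_x)
  # direction in which the two classes are separated (difference of class means), normalised
  d <- colMeans(train_x[train_y == 1, , drop = FALSE]) -
       colMeans(train_x[train_y == 0, , drop = FALSE])
  d <- d / sqrt(sum(d^2))
  # adjusted squared distance: ||x - t||^2 + lambda * ((x - t)' d)^2
  apply(test_x, 1, function(x) {
    diffs <- sweep(train_x, 2, x)              # each training point minus the test point
    dist2 <- rowSums(diffs^2) + lambda * as.vector(diffs %*% d)^2
    mean(train_y[order(dist2)[1:k]])           # proportion of defaulters among the k neighbours
  })
}
```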
|
630 |
Statistical modelling by neural networks / Fletcher, Lizelle, 30 June 2002
In this thesis the two disciplines of Statistics and Artificial Neural Networks
are combined into an integrated study of a data set of a weather modification
experiment.
An extensive literature study on artificial neural network methodology has
revealed the strongly interdisciplinary nature of the research and the applications
in this field.
As artificial neural networks are becoming increasingly popular with data
analysts, statisticians are becoming more involved in the field. A recursive
algorithm is developed to optimize the number of hidden nodes in a feedforward
artificial neural network to demonstrate how existing statistical techniques
such as nonlinear regression and the likelihood-ratio test can be applied in
innovative ways to develop and refine neural network methodology.
This pruning algorithm is an original contribution to the field of artificial
neural network methodology that simplifies the process of architecture selection,
thereby reducing the number of training sessions needed to find
a model that fits the data adequately.
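The following sketch illustrates the flavour of such a likelihood-ratio comparison under simplifying assumptions (Gaussian errors, degrees of freedom taken as the difference in the number of weights); it uses the nnet package and is not the recursive algorithm developed in the thesis.

```r
library(nnet)

# Compare a network with h hidden nodes against one with h + 1 hidden nodes,
# treating each net as a nonlinear regression with Gaussian errors.
lr_test_hidden <- function(x, y, h) {
  small <- nnet(x, y, size = h,     linout = TRUE, trace = FALSE, maxit = 500)
  large <- nnet(x, y, size = h + 1, linout = TRUE, trace = FALSE, maxit = 500)
  n    <- length(y)
  stat <- n * log(small$value / large$value)   # value is the residual sum of squares (decay = 0)
  df   <- length(large$wts) - length(small$wts)
  c(statistic = stat, p.value = pchisq(stat, df, lower.tail = FALSE))
}
# Hidden nodes would be added (or pruned) only while the larger model improves the fit significantly.
```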
In addition, a statistical model to classify weather modification data is developed
using both a feedforward multilayer perceptron artificial neural network
and a discriminant analysis. The two models are compared and the effectiveness
of applying an artificial neural network model to a relatively small
data set is assessed.
The formulation of the problem, the approach that has been followed to
solve it and the novel modelling application all combine to make an original
contribution to the interdisciplinary fields of Statistics and Artificial Neural
Networks as well as to the discipline of meteorology. / Mathematical Sciences / D. Phil. (Statistics)
|