11

Random effects models for ordinal data

Lee, Arier Chi-Lun January 2009 (has links)
One of the most frequently encountered types of data is that in which the response variables are measured on an ordinal scale. Although there have been substantial developments in statistical techniques for the analysis of ordinal data, methods appropriate for repeatedly assessed ordinal data collected from field experiments are limited. A series of biennial field screening trials for evaluating cultivar resistance of potato to the disease late blight, caused by the fungus Phytophthora infestans (Mont.) de Bary, has been conducted by the New Zealand Institute of Crop and Food Research since 1983. In each trial, the progression of late blight was visually assessed several times during the planting season using a nine-point ordinal scale based on the percentage of necrotic tissue. As in many other agricultural field experiments, spatial differences between the experimental units are one of the major concerns in the analysis of data from the potato late blight trials. The aim of this thesis is to construct a statistical model which can be used to analyse the data collected from the series of potato late blight trials. We review existing methodologies for analysing ordinal data with mixed effects, particularly those within the Bayesian framework. Using data collected from the potato late blight trials, we develop a Bayesian hierarchical model for the analysis of repeatedly assessed ordinal scores with spatial effects; in particular, the time dependence of the scores assessed on the same experimental unit is modelled by a sigmoid logistic curve. The data demonstrated the importance of spatial effects in agricultural field trials; these effects cannot be neglected when analysing such data. Although statistical methods can be refined to account for the complexity of the data, appropriate trial design still plays a central role in field experiments. / Accompanying dataset is at http://hdl.handle.net/2292/5240
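The abstract names its two modelling ingredients without giving equations: an ordinal score linked to a latent scale, and a sigmoid logistic curve for the time trend of each experimental unit. The sketch below illustrates that combination; the function names, parameter values and cumulative-logit link are illustrative assumptions, not the thesis's exact specification.

```python
import numpy as np
from scipy.special import expit  # logistic function

def latent_mean(t, asymptote, rate, midpoint, unit_effect=0.0):
    """Sigmoid logistic disease-progression curve for one experimental unit (assumed form)."""
    return asymptote * expit(rate * (t - midpoint)) + unit_effect

def ordinal_probs(eta, cutpoints):
    """Category probabilities P(Y = k), k = 1..K, under a cumulative-logit link."""
    cum = expit(np.append(cutpoints, np.inf) - eta)   # P(Y <= k) for k = 1..K
    return np.diff(np.insert(cum, 0, 0.0))            # successive differences give P(Y = k)

# Illustration: probabilities of the nine blight scores for one plot at day 60
cuts = np.linspace(-4.0, 4.0, 8)   # eight cutpoints for a nine-point scale (assumed values)
eta = latent_mean(t=60, asymptote=6.0, rate=0.08, midpoint=55.0, unit_effect=0.3)
print(ordinal_probs(eta, cuts).round(3))   # sums to 1
```

A hierarchical version would place priors on the curve parameters and add spatially structured random effects per plot, which is the direction the abstract describes.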
15

Some properties of measures of disagreement and disorder in paired ordinal data

Högberg, Hans January 2010 (has links)
The measures studied in this thesis were a measure of disorder, D, and a measure of the individual part of the disagreement, the measure of relative rank variance, RV, proposed by Svensson in 1993. The measure of disorder is a useful measure of order consistency in paired assessments on scales with different numbers of possible values. The measure of relative rank variance is useful for evaluating reliability and for evaluating change in qualitative outcome variables. Paper I gives an overview of methods used in the analysis of dependent ordinal data and compares the methods with respect to their assumptions, specifications, applicability, and implications for use. Paper II applies and compares the results of some standard models, tests, and measures on two different research problems. The sampling distribution of the measure of disorder was studied both analytically and by a simulation experiment in Paper III. The asymptotic normal distribution was shown by the theory of U-statistics, and the simulation experiments for finite sample sizes and various amounts of disorder showed that the sampling distribution was approximately normal for sample sizes of about 40 to 60 for moderate sizes of D, and for smaller sample sizes for substantial sizes of D. The sampling distribution of the relative rank variance was studied in a simulation experiment in Paper IV. The simulation experiment showed that the sampling distribution was approximately normal for sample sizes of 60-100 for moderate sizes of RV, and for smaller sample sizes for substantial sizes of RV. In Paper V a procedure for inference regarding relative rank variances from two or more samples was proposed. Pairwise comparison using the jackknife technique for variance estimation, and the use of the normal distribution as an approximation for inference about parameters in independent samples (based on the results of Paper IV), were demonstrated. Moreover, applications of the Kruskal-Wallis test for independent samples and Friedman's test for dependent samples were conducted. / Statistical methods for ordinal data
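The inference procedure of Paper V rests on a standard leave-one-out jackknife variance estimate. A minimal sketch of that step is below; the statistic passed in is a placeholder, since the exact computation of RV is not reproduced in the abstract.

```python
import numpy as np

def jackknife_variance(data, statistic):
    """Leave-one-out jackknife variance estimate of a statistic computed on paired data."""
    n = len(data)
    theta = np.array([statistic(np.delete(data, i, axis=0)) for i in range(n)])
    return (n - 1) / n * np.sum((theta - theta.mean()) ** 2)

# Illustration with a placeholder statistic (this stand-in is not the RV computation itself)
rng = np.random.default_rng(1)
pairs = rng.integers(1, 6, size=(50, 2))        # 50 paired ordinal ratings on a 5-point scale
stat = lambda d: np.mean(d[:, 0] - d[:, 1])     # placeholder summary of within-pair differences
print(jackknife_variance(pairs, stat))
```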
16

Die statistische Auswertung von ordinalen Daten bei zwei Zeitpunkten und zwei Stichproben / The Statistical Analysis of Ordinal Data at two Timepoints and two Groups

Siemer, Alexander 03 April 2002 (has links)
No description available.
17

Semiparametric Bayesian Approach using Weighted Dirichlet Process Mixture For Finance Statistical Models

Sun, Peng 07 March 2016 (has links)
The Dirichlet process mixture (DPM) has been widely used as a flexible prior in the nonparametric Bayesian literature, and the weighted Dirichlet process mixture (WDPM) can be viewed as an extension of the DPM which relaxes model distribution assumptions. However, the WDPM requires the specification of weight functions and can incur an extra computational burden. In this dissertation, we develop more efficient and flexible WDPM approaches under three research topics. The first is semiparametric cubic spline regression, where we adopt a nonparametric prior for the error terms in order to automatically handle heterogeneity of measurement errors or an unknown mixture distribution. The second provides an innovative way to construct the weight function and illustrates several desirable properties and the computational efficiency of this weight under a semiparametric stochastic volatility (SV) model. The last develops a WDPM approach for the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model (as an alternative to the SV model) and proposes a new model evaluation approach for GARCH that produces easier-to-interpret results than the canonical marginal likelihood approach. In the first topic, the response variable is modeled as the sum of three parts. One part is a linear function of covariates that enter the model parametrically. The second part is an additive nonparametric model: covariates whose relationships to the response variable are unclear are included nonparametrically using Lancaster and Šalkauskas bases. The third part is the error terms, whose means and variances are assumed to follow nonparametric priors. We therefore call our model dual-semiparametric regression, because nonparametric components are used both for the mean and for the error terms. Instead of assuming that all of the error terms follow the same prior, as in the DPM, our WDPM provides multiple candidate priors for each observation to select from with certain probabilities. Such a probability (or weight) is modeled from relevant predictive covariates using a Gaussian kernel. We propose several different WDPMs using different weights that depend on distances in the covariates. We provide efficient Markov chain Monte Carlo (MCMC) algorithms and compare our WDPMs to the parametric model and the DPM model in terms of Bayes factors in simulation and empirical studies. In the second topic, we propose an innovative way to construct the weight function for the WDPM and apply it to the SV model. The SV model is adopted for time series data where the constant-variance assumption is violated. One essential issue is specifying the distribution of the conditional return. We assume a WDPM prior for the conditional return and propose a new way to model the weights. Our approach has several advantages, including computational efficiency compared to the weight constructed using a Gaussian kernel. We list six properties of the proposed weight function and provide proofs of them. Because of the additional Metropolis-Hastings steps introduced by the WDPM prior, we find conditions which ensure the uniform geometric ergodicity of the transition kernel in our MCMC. Due to the existence of zero values in asset price data, our SV model is semiparametric: we employ a WDPM prior for non-zero values and a parametric prior for zero values. In the third project, we develop a WDPM approach for GARCH-type models and compare different types of weight functions, including the innovative method proposed in the second topic.
The GARCH model can be viewed as an alternative to SV for analyzing daily stock price data where the constant-variance assumption does not hold. While the response variable of our SV models is the transformed log return (based on a log-square transformation), GARCH models the log return itself. This means that, theoretically speaking, we are able to predict stock returns using GARCH models, while this is not feasible with SV models, because SV models ignore the sign of the log returns and provide predictive densities for the squared log return only. Motivated by this property, we propose a new model evaluation approach, called back testing return (BTR), particularly for GARCH. The BTR approach produces model evaluation results which are easier to interpret than the marginal likelihood, and it is straightforward to draw conclusions about model profitability from it. Since the BTR approach is only applicable to GARCH, we also illustrate how to properly calculate the marginal likelihood to compare GARCH and SV. Based on our MCMC algorithms and model evaluation approaches, we have conducted a large number of model fittings to compare the models in both simulation and empirical studies. / Ph. D.
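For readers unfamiliar with the distinction drawn above, the sketch below shows the classical Gaussian GARCH(1,1) recursion that models the log return directly. The parameter names and values are illustrative assumptions; the dissertation itself replaces the Gaussian innovation distribution with a (weighted) Dirichlet process mixture prior.

```python
import numpy as np

def garch11_loglik(params, returns):
    """Gaussian log-likelihood of a GARCH(1,1) model for log returns.

    r_t = sigma_t * eps_t,   sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
    """
    omega, alpha, beta = params
    n = len(returns)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(returns)                      # initialise with the sample variance
    for t in range(1, n):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + returns ** 2 / sigma2)

# Illustration on simulated returns with assumed parameter values
rng = np.random.default_rng(0)
r = rng.normal(scale=0.01, size=500)
print(garch11_loglik((1e-6, 0.05, 0.90), r))
```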
18

Statistical Modeling for Credit Ratings

Vana, Laura 01 August 2018 (has links) (PDF)
This thesis deals with the development, implementation and application of statistical modeling techniques which can be employed in the analysis of credit ratings. Credit ratings are one of the most widely used measures of credit risk and are relevant for a wide array of financial market participants, from investors, as part of their investment decision process, to regulators and legislators as a means of measuring and limiting risk. The majority of credit ratings are produced by the "Big Three" credit rating agencies Standard & Poor's, Moody's and Fitch. Especially in the light of the 2007-2009 financial crisis, these rating agencies have been strongly criticized for failing to assess risk accurately and for the lack of transparency in their rating methodology. However, they continue to maintain a powerful role as financial market participants and have a huge impact on the cost of funding. These points of criticism call for the development of modeling techniques that can 1) facilitate an understanding of the factors that drive the rating agencies' evaluations, and 2) generate insights into the rating patterns that these agencies exhibit. This dissertation consists of three research articles. The first one focuses on variable selection and assessment of variable importance in accounting-based models of credit risk. The credit risk measure employed in the study is derived from credit ratings assigned by the rating agencies Standard & Poor's and Moody's. To deal with the lack of theoretical foundation specific to this type of model, state-of-the-art statistical methods are employed. Different models are compared based on a predictive criterion, and model uncertainty is accounted for in a Bayesian setting. Parsimonious models are identified after applying the proposed techniques. The second paper proposes the class of multivariate ordinal regression models for the modeling of credit ratings. The model class is motivated by the fact that correlated ordinal data arise naturally in the context of credit ratings. From a methodological point of view, we extend existing model specifications in several directions, among others by allowing for a flexible covariate-dependent correlation structure between the continuous variables underlying the ordinal credit ratings. The estimation of the proposed models is performed using composite likelihood methods. Insights into the heterogeneity among the "Big Three" are gained when applying this model class to the multiple credit ratings dataset. A comprehensive simulation study on the performance of the estimators is provided. The third research paper deals with the implementation and application of the model class introduced in the second article. In order to make the class of multivariate ordinal regression models more accessible, the R package mvord and the complementary paper included in this dissertation have been developed. The mvord package is available on the Comprehensive R Archive Network (CRAN) for free download and enhances the available ready-to-use statistical software for the analysis of correlated ordinal data. In the creation of the package a strong emphasis has been put on a user-friendly and flexible design, which allows end users to easily estimate sophisticated models from the implemented model class. The end users the package appeals to are practitioners and researchers who deal with correlated ordinal data in various areas of application, ranging from credit risk to medicine or psychology.
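The composite likelihood estimation mentioned for the second paper is built from bivariate probabilities of pairs of ordinal ratings. Below is a minimal sketch of one pairwise log-likelihood contribution under a probit link with a fixed latent correlation; the function names, cutpoint values and the finite bounds standing in for infinity are assumptions for illustration, and the actual mvord implementation in R is considerably more general (covariate-dependent correlations, several raters, composite-likelihood weights).

```python
import numpy as np
from scipy.stats import multivariate_normal

def pair_loglik(y1, y2, eta1, eta2, cuts1, cuts2, rho):
    """Pairwise log-likelihood contribution for one pair of ordinal ratings (probit link)."""
    c1 = np.concatenate(([-8.0], cuts1, [8.0]))      # +/-8 stands in for +/-infinity
    c2 = np.concatenate(([-8.0], cuts2, [8.0]))
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda a, b: multivariate_normal.cdf([a, b], mean=[0.0, 0.0], cov=cov)
    lo1, hi1 = c1[y1 - 1] - eta1, c1[y1] - eta1       # latent rectangle for rater 1
    lo2, hi2 = c2[y2 - 1] - eta2, c2[y2] - eta2       # latent rectangle for rater 2
    p = F(hi1, hi2) - F(lo1, hi2) - F(hi1, lo2) + F(lo1, lo2)
    return np.log(max(p, 1e-300))

# Illustration: ratings (3, 2) on a 5-point scale from two raters
cuts = np.array([-1.5, -0.5, 0.5, 1.5])
print(pair_loglik(3, 2, eta1=0.4, eta2=0.1, cuts1=cuts, cuts2=cuts, rho=0.6))
```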
19

Statistical inference for joint modelling of longitudinal and survival data

Li, Qiuju January 2014 (has links)
In longitudinal studies, data collected within a subject or cluster are by their very nature correlated, and special care is needed to account for such correlation in the analysis. Within the framework of longitudinal studies, three topics are discussed in this thesis. In chapter 2, the joint modelling of a multivariate longitudinal process consisting of different types of outcomes is discussed. In the large cohort study of the UK North Staffordshire osteoarthritis project, longitudinal trivariate outcomes of continuous, binary and ordinal data are observed at baseline, year 3 and year 6. Instead of analysing each process separately, joint modelling is proposed for the trivariate outcomes to account for the inherent association by introducing random effects and the covariance matrix G. The influence of the covariance matrix G on statistical inference for the fixed-effects parameters is investigated within the Bayesian framework. The study shows that jointly modelling the multivariate longitudinal process reduces bias and provides more reliable results than modelling each process separately. Together with the longitudinal measurements taken intermittently, a counting process of events in time is often observed during a longitudinal study. It is of interest to investigate the relationship between the time to event and the longitudinal process; on the other hand, measurements of the longitudinal process may be truncated by terminating events, such as death. Thus, it may be crucial to model the survival and longitudinal data jointly. It is common to propose linear mixed-effects models for a longitudinal process of continuous outcomes and a Cox regression model for the survival data to characterize the relationship between the time to event and the longitudinal process, under some standard assumptions. In chapter 3, we investigate the influence on statistical inference for the survival data when the assumption of mutual independence of the random errors in the linear mixed-effects model of the longitudinal process is violated. The study is conducted using the conditional score estimation approach, which provides robust estimators and has computational advantages. A generalised sufficient statistic of the random effects is proposed to account for the correlation remaining among the random errors, which is characterized by the data-driven method of modified Cholesky decomposition. The simulation study shows that doing so provides nearly unbiased estimation and efficient statistical inference. Chapter 4 seeks to account for both the current and the past information of the longitudinal process in the survival part of the joint model. In the last 15 to 20 years it has been popular, or even standard, to assume that the longitudinal process affects the counting process of events in time only through its current value, which, as recognised by investigators in more recent studies, is not necessarily true. An integral over the trajectory of the longitudinal process, along with a weight curve, is proposed to account for both the current and past information, to improve inference and reduce the underestimation of the effects of the longitudinal process on the hazard. A plausible approach to statistical inference for the proposed models is presented in the chapter, along with a real data analysis and a simulation study.
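The modified Cholesky decomposition named for chapter 3 factors a covariance matrix into autoregressive coefficients and innovation variances. A generic sketch is below; this is the standard decomposition, not the thesis's estimation code, and the variable names are illustrative.

```python
import numpy as np

def modified_cholesky(sigma):
    """Modified Cholesky decomposition: T @ sigma @ T.T = D, with T unit lower-triangular
    (rows hold negated autoregressive coefficients) and D diagonal (innovation variances)."""
    sigma = np.asarray(sigma, dtype=float)
    n = sigma.shape[0]
    T = np.eye(n)
    D = np.zeros(n)
    D[0] = sigma[0, 0]
    for j in range(1, n):
        # Coefficients from regressing the j-th variable on its predecessors
        phi = np.linalg.solve(sigma[:j, :j], sigma[:j, j])
        T[j, :j] = -phi
        D[j] = sigma[j, j] - sigma[j, :j] @ phi
    return T, np.diag(D)

# Check on an AR(1)-like covariance matrix (illustrative values)
S = np.array([[1.0, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 1.0]])
T, D = modified_cholesky(S)
print(np.allclose(T @ S @ T.T, D))   # True
```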
