1.
Statistical considerations of noninferiority, bioequivalence and equivalence testing in biosimilars studies. Xu, Siyan. 22 January 2016.
In recent years, the development of follow-on biological products (biosimilars) has received increasing attention. This dissertation covers statistical methods related to three topics in demonstrating biosimilarity: non-inferiority (NI), bioequivalence (BE) and equivalence. For NI, one of the key requirements is the constancy assumption, namely that the effect of the reference treatment is the same in the current NI trial as in the historical superiority trials. However, if a covariate interacts with the treatment arms, then changes in the distribution of this covariate will violate the constancy assumption. We propose a modified covariate-adjusted fixed margin method and recommend it based on its performance characteristics in comparison with other methods. The second topic concerns BE inference for log-normally distributed data. Two drugs are bioequivalent if the difference in a pharmacokinetic (PK) parameter between the two products falls within prespecified margins. In the presence of unspecified variances, existing methods such as the two one-sided tests (TOST) and Bayesian analysis in the BE setting limit our knowledge of the extent to which BE inference is affected by the variability of the PK parameter. We propose a likelihood approach that retains the unspecified variances in the model and partitions the entire likelihood function into two components: an F-statistic function for the variances and a t-statistic function for the difference in the PK parameter. The advantage of the proposed method over existing methods is that it helps identify the range of variances in which BE is more likely to be achieved. In the third topic, we extend the proposed likelihood method to equivalence inference, where data are often normally distributed. Here we demonstrate an additional advantage of the proposed method over current analysis methods such as the likelihood ratio test and Bayesian analysis in the equivalence setting. The proposed likelihood method produces results that are the same as or comparable to those of current methods in the general case where model parameters are independent, but it yields better results in special cases where model parameters are dependent, for example when the ratio of variances is directly proportional to the ratio of means. Our results suggest that the proposed likelihood method is a better alternative to current analysis methods for BE and equivalence inference.
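As an illustration of the standard approach the thesis builds on, the sketch below runs the two one-sided tests (TOST) on simulated log-normal PK data. It assumes a parallel-group design and the conventional 0.80-1.25 margins; it does not reproduce the thesis's likelihood-partition method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
log_test = rng.normal(loc=4.00, scale=0.25, size=24)   # log(AUC), test product
log_ref = rng.normal(loc=4.05, scale=0.25, size=24)    # log(AUC), reference product

theta_L, theta_U = np.log(0.80), np.log(1.25)          # conventional BE margins on the log scale
n1, n2 = len(log_test), len(log_ref)
diff = log_test.mean() - log_ref.mean()
sp2 = ((n1 - 1) * log_test.var(ddof=1) + (n2 - 1) * log_ref.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

p_lower = 1 - stats.t.cdf((diff - theta_L) / se, df)   # H0: diff <= log(0.80)
p_upper = stats.t.cdf((diff - theta_U) / se, df)       # H0: diff >= log(1.25)
p_tost = max(p_lower, p_upper)                         # BE concluded if p_tost < alpha
print(f"log-scale difference: {diff:.3f}, TOST p-value: {p_tost:.4f}")
```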
2.
A covariate model in finite mixture survival distributions. Soegiarso, Restuti Widayati. January 1992.
No description available.
3.
Feature distribution learning for covariate shift adaptation using sparse filtering. Zennaro, Fabio. January 2017.
This thesis studies a family of unsupervised learning algorithms called feature distribution learning and their extension to perform covariate shift adaptation. Unsupervised learning is one of the most active areas of research in machine learning, and a central challenge in this field is to develop simple and robust algorithms able to work in real-world scenarios. A traditional assumption of machine learning is the independence and identical distribution of data. Unfortunately, in realistic conditions this assumption is often unmet and the performance of traditional algorithms may be severely compromised. Covariate shift adaptation has therefore developed as a lively sub-field concerned with designing algorithms that can account for covariate shift, that is, for a difference between the distributions of training and test samples. The first part of this dissertation focuses on a family of unsupervised learning algorithms that has recently been proposed and has shown promise: feature distribution learning; in particular, sparse filtering, the most representative feature distribution learning algorithm, has commanded interest because of its simplicity and state-of-the-art performance. Despite its success and its frequent adoption, sparse filtering lacks any strong theoretical justification. This research asks how feature distribution learning can be rigorously formalized and how the dynamics of sparse filtering can be explained. These questions are answered by first putting forward a new definition of feature distribution learning based on concepts from information theory and optimization theory; relying on this, a theoretical analysis of sparse filtering is carried out, which is validated on both synthetic and real-world data sets. In the second part, the use of feature distribution learning algorithms to perform covariate shift adaptation is considered. Indeed, because of their definition and apparent insensitivity to the problem of modelling data distributions, feature distribution learning algorithms seem particularly well suited to dealing with covariate shift. This research asks whether and how feature distribution learning may be fruitfully employed to perform covariate shift adaptation. After making explicit the conditions for successful covariate shift adaptation, a theoretical analysis of sparse filtering and of another novel algorithm, periodic sparse filtering, is carried out; this allows for the determination of the specific conditions under which these algorithms successfully work. Finally, a comparison of these sparse filtering-based algorithms against other traditional algorithms aimed at covariate shift adaptation is offered, showing that the novel algorithm achieves competitive performance. In conclusion, this thesis provides a new rigorous framework to analyse and design feature distribution learning algorithms; it sheds light on the hidden assumptions behind sparse filtering, offering a clear understanding of its conditions of success; and it uncovers the potential and the limitations of sparse filtering-based algorithms in performing covariate shift adaptation. These results are relevant both for researchers interested in furthering the understanding of unsupervised learning algorithms and for practitioners interested in deploying feature distribution learning in an informed way.
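For reference, a minimal NumPy/SciPy sketch of sparse filtering (Ngiam et al., 2011), the algorithm analysed in the thesis, on toy data. The soft-absolute activation and row/column normalisation follow the original paper; the problem size and optimiser settings are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 100))          # 16 input dimensions, 100 examples
n_features, eps = 8, 1e-8

def objective(w_flat):
    W = w_flat.reshape(n_features, X.shape[0])
    F = np.sqrt((W @ X) ** 2 + eps)                              # soft absolute value
    F = F / np.sqrt((F ** 2).sum(axis=1, keepdims=True) + eps)   # normalise each feature (row)
    F = F / np.sqrt((F ** 2).sum(axis=0, keepdims=True) + eps)   # normalise each example (column)
    return F.sum()                                               # L1 penalty on normalised features

w0 = rng.normal(size=n_features * X.shape[0])
res = minimize(objective, w0, method="L-BFGS-B", options={"maxiter": 100})  # numerical gradients suffice for a toy
print("final objective:", round(res.fun, 3))
```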
4.
Nonparametric Estimation and Inference for the Copula Parameter in Conditional Copulas. Acar, Elif Fidan. 14 January 2011.
The primary aim of this thesis is the elucidation of covariate effects on the dependence structure of random variables in bivariate or multivariate models. We develop a unified approach via a conditional copula model in which the copula is parametric and its parameter varies with the covariate. We propose a nonparametric procedure based on local likelihood to estimate the functional relationship between the copula parameter and the covariate, derive the asymptotic properties of the proposed estimator, and outline the construction of pointwise confidence intervals. We also contribute a novel conditional copula selection method based on cross-validated prediction errors and a generalized likelihood ratio-type test to determine whether the copula parameter varies significantly; we derive the asymptotic null distribution of this test. Using subsets of the Matched Multiple Birth and Framingham Heart Study datasets, we demonstrate the performance of these procedures via analyses of gestational age-specific twin birth weights and of the impact of a change in body mass index on the dependence between two consecutive pulse pressures taken from the same subject.
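A simplified sketch of the estimation idea, assuming a Gaussian copula and a local-constant (rather than the thesis's local-polynomial) fit on simulated data:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0, 1, n)
rho_true = np.tanh(2 * x - 1)                    # copula parameter varies with the covariate
z1 = rng.normal(size=n)
z2 = rho_true * z1 + np.sqrt(1 - rho_true ** 2) * rng.normal(size=n)
u, v = norm.cdf(z1), norm.cdf(z2)                # copula-scale observations
q1, q2 = norm.ppf(u), norm.ppf(v)

def local_rho(x0, h=0.1):
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights around x0
    def neg_loglik(eta):
        r = np.tanh(eta)                         # link keeps rho in (-1, 1)
        ll = (-0.5 * np.log(1 - r ** 2)
              + (2 * r * q1 * q2 - r ** 2 * (q1 ** 2 + q2 ** 2)) / (2 * (1 - r ** 2)))
        return -(w * ll).sum()                   # weighted Gaussian-copula log-likelihood
    return np.tanh(minimize_scalar(neg_loglik).x)

print("rho(0.25) ~", round(local_rho(0.25), 2), "; rho(0.75) ~", round(local_rho(0.75), 2))
```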
5.
Finding the Cutpoint of a Continuous Covariate in a Parametric Survival Analysis Model. Joshi, Kabita. 01 January 2016.
In many clinical studies, continuous variables such as age, blood pressure and cholesterol are measured and analyzed. Clinicians often prefer to categorize these continuous variables into groups, such as low- and high-risk groups. The goal of this work is to find the cutpoint of a continuous variable at which the transition from the low-risk to the high-risk group occurs. Different methods for finding such a cutpoint have been published in the literature. We extend the method of Contal and O'Quigley (1999), which is based on the log-rank test, and the method of Klein and Wu (2004), which is based on the score test. Since the log-rank test is nonparametric and the score test is parametric, we are interested in whether an extension of the parametric procedure performs better when the population distribution is known. We have developed a method that uses parametric score residuals to find the cutpoint. The performance of the proposed method is compared with the existing methods of Contal and O'Quigley and of Klein and Wu by estimating the bias and mean squared error of the estimated cutpoints in simulated data under different scenarios.
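A sketch of the log-rank cutpoint scan underlying Contal and O'Quigley's approach, on simulated data. The thesis's parametric score-residual method is not reproduced here, and a real analysis would also adjust the maximally selected statistic for multiple testing.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 300
age = rng.uniform(30, 80, n)
time = rng.exponential(scale=np.where(age > 60, 2.0, 5.0))   # true change in risk at age 60
event = rng.uniform(size=n) < 0.8                            # crude ~20% censoring, for illustration

best_stat, best_cut = -np.inf, None
for cut in np.quantile(age, np.linspace(0.1, 0.9, 41)):      # avoid extreme candidate cuts
    low = age <= cut
    res = logrank_test(time[low], time[~low], event[low], event[~low])
    if res.test_statistic > best_stat:
        best_stat, best_cut = res.test_statistic, cut
print(f"estimated cutpoint: {best_cut:.1f} (log-rank statistic {best_stat:.1f})")
```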
6.
Predictor Selection in Linear Regression: L1 regularization of a subset of parameters and comparison of L1 regularization and stepwise selection. Hu, Qing. 11 May 2007.
Background: Feature selection, also known as variable selection, is a technique that selects a subset from a large collection of possible predictors to improve the prediction accuracy of a regression model. The first objective of this project is to investigate for which data structures the LASSO outperforms the forward stepwise method. The second objective is to develop a feature selection method, feature selection by L1 regularization of a subset of parameters (LRSP), which selects the model by combining prior knowledge about the inclusion of some covariates, if any, with the information collected from the data. Mathematically, LRSP minimizes the residual sum of squares subject to the constraint that the sum of the absolute values of a subset of the coefficients is less than a constant. In this project, LRSP is compared with the LASSO, forward selection and ordinary least squares to investigate their relative performance for different data structures. Results: Simulation results indicate that for a moderate number of small effects, forward selection outperforms the LASSO in both prediction accuracy and variable selection performance when the variance of the model error term is smaller, regardless of the correlations among the covariates; forward selection also performs better in variable selection when the error variance is larger but the correlations among the covariates are smaller. LRSP was shown to be an efficient method for problems in which prior knowledge about the inclusion of covariates is available, and it can also be applied to problems with nuisance parameters, such as linear discriminant analysis.
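A minimal sketch of the LRSP objective, implemented here by proximal gradient descent (ISTA) with soft-thresholding applied only to the penalized subset of coefficients; the data, penalty value and solver are illustrative, not taken from the project.

```python
import numpy as np

def lrsp(X, y, penalized, lam=1.0, n_iter=2000):
    """Least squares with an L1 penalty on only the coefficients flagged in `penalized`."""
    n, p = X.shape
    beta = np.zeros(p)
    step = n / np.linalg.norm(X, ord=2) ** 2        # 1/L for the (1/2n)*RSS gradient
    for _ in range(n_iter):
        beta = beta - step * (X.T @ (X @ beta - y) / n)
        shrunk = np.sign(beta) * np.maximum(np.abs(beta) - step * lam, 0.0)
        beta = np.where(penalized, shrunk, beta)    # soft-threshold the penalized subset only
    return beta

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 6))
y = X @ np.array([2.0, 0.0, 0.0, 1.5, 0.0, -1.0]) + rng.normal(size=100)
mask = np.array([False, True, True, False, True, True])   # keep beta_0 and beta_3 unpenalized
print(np.round(lrsp(X, y, mask, lam=0.5), 2))
```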
7.
Choosing covariates in the analysis of cluster randomised trials. Wright, Neil D. January 2015.
Covariate adjustment is common in the analysis of randomised trials, and can increase statistical power without increasing sample size. Published research on covariate adjustment, and guidance for choosing covariates, focusses on trials where individuals are randomised to treatments. In cluster randomised trials (CRTs), clusters of individuals are randomised. Valid analyses of CRTs account for the structure imposed by cluster randomisation. There is limited published research on the effects of covariate adjustment, or guidance for choosing covariates, in analyses of CRTs. I summarise existing guidance for choosing covariates in individually randomised trials and CRTs, and review the methods used to investigate the effects of covariate adjustment. I review the use of adjusted analyses in published CRTs. I use simulation, analytic methods, and analyses of trial data to investigate the effects of covariate adjustment in mixed models, and use these results to form guidance for choosing covariates in analyses of CRTs. Guidance to choose covariates a priori and to adjust for covariates used to stratify randomisation is also applicable to CRTs. I provide guidance specific to CRTs using linear and logistic mixed models. Cluster size, the intra-cluster correlations (ICCs) of the outcome and covariate, and the strength of the relationship between the outcome and covariate influence the power of adjusted analyses and the precision of treatment effect estimates. An a priori estimate of the product of cluster size and the ICC of the outcome can be used to assist in choosing covariates. When this product is close to one, adjusting for a cluster-level covariate or for a covariate with a negligible ICC provides similar increases in power. For smaller values of this product, adjusting for a cluster-level covariate gives minimal increases in power. The use of separate within-cluster and contextual covariate effect parameters may increase power further in some circumstances.
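A toy simulation in this spirit, comparing treatment-effect standard errors from a linear mixed model with and without a cluster-level covariate; all parameter values are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
k, m, icc = 30, 20, 0.05                         # clusters, cluster size, outcome ICC
cluster = np.repeat(np.arange(k), m)
treat = np.repeat(rng.permutation(np.r_[np.ones(k // 2), np.zeros(k // 2)]), m)
x_clust = np.repeat(rng.normal(size=k), m)       # a cluster-level covariate
u = np.repeat(rng.normal(0, np.sqrt(icc), k), m) # cluster random effects
y = 0.3 * treat + 0.5 * x_clust + u + rng.normal(0, np.sqrt(1 - icc), k * m)
df = pd.DataFrame(dict(y=y, treat=treat, x=x_clust, cluster=cluster))

for formula in ("y ~ treat", "y ~ treat + x"):   # unadjusted vs cluster-level adjusted
    fit = smf.mixedlm(formula, df, groups=df["cluster"]).fit()
    print(formula, "-> SE(treat) =", round(fit.bse["treat"], 4))
```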
8.
Effect fusion using model-based clustering. Malsiner-Walli, Gertraud; Pauger, Daniela; Wagner, Helga. 01 April 2018.
In social and economic studies many of the collected variables are measured on a nominal scale, often with a large number of categories. The definition of categories can be ambiguous, and different classification schemes using either a finer or a coarser grid are possible. Categorization has an impact when such a variable is included as a covariate in a regression model: too fine a grid results in imprecise estimates of the corresponding effects, whereas too coarse a grid misses important effects, resulting in biased effect estimates and poor predictive performance.
To achieve an automatic grouping of the levels of a categorical covariate with essentially the same effect, we adopt a Bayesian approach and specify the prior on the level effects as a location mixture of spiky normal components. Model-based clustering of the effects during MCMC sampling makes it possible to simultaneously detect categories that have essentially the same effect size and to identify variables with no effect at all. Fusion of level effects is induced by a prior on the mixture weights which encourages empty components. The properties of this approach are investigated in simulation studies. Finally, the method is applied to analyse the effects of high-dimensional categorical predictors on income in Austria.
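The paper's method is a fully Bayesian MCMC scheme; as a rough empirical analogue of the grouping idea only, one can cluster noisy level-effect estimates with a mixture model whose Dirichlet prior favours emptying unneeded components, as in this sketch with hypothetical data.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(6)
true_effects = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 2.5, 2.5, 2.5])  # 10 levels, 3 true groups
est_effects = true_effects + rng.normal(0, 0.1, size=true_effects.size)      # noisy level-effect estimates

# small weight_concentration_prior pushes weight onto few components (others stay empty)
bgm = BayesianGaussianMixture(n_components=5, weight_concentration_prior=0.01,
                              random_state=0, max_iter=500).fit(est_effects.reshape(-1, 1))
labels = bgm.predict(est_effects.reshape(-1, 1))
print("fused level groups:", labels)   # levels sharing a label would receive one common effect
```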
9.
The impact of differential censoring and covariate relationships on propensity score performance in a time-to-event setting: a simulation study. Hinman, Jessica. 01 January 2017.
Objective: To assess the ability of propensity score methods to maintain covariate balance and minimize bias in the estimation of treatment effect in a time-to-event setting.
Data Sources: Data generated from a simulation model
Study Design: Simulation study
Data Collection: Six scenarios with varying covariate relationships to treatment and outcome, each examined under two different censoring prevalences
Principal Findings: As time elapses, the baseline covariate balance achieved between treated and untreated groups through propensity score methods trends toward imbalance, particularly in settings with high rates of censoring. Furthermore, there is a high degree of variability in the performance of different propensity score models with respect to effect estimation.
Conclusions: Caution should be used when incorporating propensity score analysis methods in survival analyses. In these settings, if model over-parameterization is a concern, Cox regression stratified on propensity-score-matched pairs often provides more accurate conditional treatment effect estimates than unstratified matched or inverse-probability-of-treatment (IPT) weighted Cox regression models.
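A sketch of the favoured design on simulated data: greedy 1:1 nearest-neighbour propensity score matching followed by a Cox model stratified on matched pairs (lifelines and scikit-learn assumed available; refinements such as a matching caliper are omitted).

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 1000
x = rng.normal(size=(n, 3))
p_treat = 1 / (1 + np.exp(-(x @ np.array([0.8, -0.5, 0.3]))))
treat = rng.uniform(size=n) < p_treat
hazard = np.exp(0.5 * treat + x @ np.array([0.6, 0.4, -0.3]))
time = rng.exponential(1 / hazard)
cens = np.quantile(time, 0.7)                       # administrative censoring, ~30%
event = time < cens
time = np.minimum(time, cens)

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
treated, control = np.where(treat)[0], np.where(~treat)[0]
pairs, used = [], set()
for i in treated:                                   # greedy 1:1 nearest-neighbour matching on PS
    if len(used) == len(control):
        break
    j = min((c for c in control if c not in used), key=lambda c: abs(ps[i] - ps[c]))
    pairs.append((i, j)); used.add(j)

rows = [(idx, pid) for pid, (i, j) in enumerate(pairs) for idx in (i, j)]
idxs = [r[0] for r in rows]
df = pd.DataFrame({"time": time[idxs], "event": event[idxs],
                   "treat": treat[idxs].astype(int), "pair": [r[1] for r in rows]})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event", strata=["pair"])
print(cph.summary[["coef", "se(coef)"]])
```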
10.
A Comparison of Two Methods of Adjusted Attributable Fraction Estimation as Applied to the Four Major Smoking Related Causes of Death, in Canada, in 2005. Baliunas, Dalia Ona. 19 January 2012.
The main objective of this thesis was to compare two methods of calculating adjusted attributable fractions and attributable deaths, as applied to smoking exposure and four health outcomes (lung cancer, ischaemic heart disease, chronic obstructive pulmonary disease and cerebrovascular disease) for Canadians aged 30 years or older in 2005. An additional objective was to calculate variance estimates for the evaluation of precision; such estimates have not been published for Canada to date.
Attributable fractions were calculated using the fully adjusted method and the partial adjustment method. The fully adjusted method requires confounder-stratum-specific (stratified) estimates of relative risk, along with accompanying variance estimates; these had not previously been published and were derived from the Cancer Prevention Study II cohort. Estimates of the prevalence of smoking in Canada were obtained from the Canadian Community Health Survey 2005. Variance estimates were calculated using a Monte Carlo simulation.
The fully adjusted method produced smaller attributable fractions in each of the eight disease- and sex-specific categories than the partial adjustment method, suggesting an upward bias in the partially adjusted attributable fractions for the relationship between cigarette smoking and cause-specific mortality in Canadian men and women. Summed across both sexes and the four smoking-related causes of death, the number of deaths attributable to smoking was estimated to be 25,684 using the fully adjusted method and 28,466 using the partial adjustment method: an upward bias of over ten percent, or 2,782 deaths.
It is theoretically desirable to use methods that can fully adjust for confounding and effect modification. However, given the large datasets and the access to original data that these methods require, they may not be feasible for everyone who would wish to use them.
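The two calculations can be contrasted in a few lines. The stratum-weighting scheme shown is one common formulation of full adjustment, and all numbers are hypothetical, not the thesis's CPS-II or CCHS estimates.

```python
def levin(p, rr):
    """Levin's attributable fraction for exposure prevalence p and relative risk rr."""
    return p * (rr - 1) / (1 + p * (rr - 1))

# hypothetical confounder strata (say, age groups): (smoking prevalence, RR, share of cases)
strata = [(0.30, 12.0, 0.2), (0.22, 15.0, 0.5), (0.15, 20.0, 0.3)]

af_partial = levin(0.22, 15.0)                          # overall prevalence + one adjusted RR
af_full = sum(w * levin(p, rr) for p, rr, w in strata)  # combine stratum-specific AFs
print(f"partially adjusted AF: {af_partial:.3f}, fully adjusted AF: {af_full:.3f}")
```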