1 |
Assessing time-by-covariate interactions in Cox proportional hazards regression models using cubic spline functions / Hess, Kenneth Robert. Hardy, Robert J. Unknown Date (has links)
Source: Dissertation Abstracts International, Volume: 54-08, Section: B, page: 3941. Supervisor: Robert J. Hardy. Includes bibliographical references (leaves 109-114).
|
2 |
Random effects in survival analysis / Putcha, Venkata Rama Prasad January 2000 (has links)
No description available.
|
3 |
none / Wang, Chung-Hsin 27 August 2002 (has links)
No description available.
|
4 |
Analysis of failure time data under risk set sampling and missing covariates / Qi, Lihong. January 2003 (has links)
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (p. 141-146).
|
5 |
Variable importance in predictive models : separating borrowing information and forming contrasts / Rudser, Kyle D. January 2007 (has links)
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 150-154).
|
6 |
Estimating the functional form of the effect of a continuous covariate on survival time / Holländer, Norbert. Unknown Date (has links) (PDF)
Dortmund, Univ., Diss., 2002.
|
7 |
CURE RATE AND DESTRUCTIVE CURE RATE MODELS UNDER PROPORTIONAL HAZARDS LIFETIME DISTRIBUTIONS / Barui, Sandip 11 1900 (has links)
Cure rate models are widely used to model time-to-event data in the presence of long-term survivors. Since their introduction by Boag (1949), cure rate models have gained significance over time due to remarkable advancements in the drug industry resulting in cures for a number of diseases. In this thesis, cure rate models are considered under a competing risk scenario wherein the initial number of competing causes follows a Conway-Maxwell-Poisson (COM-Poisson) distribution, under the assumption of proportional hazards (PH) lifetimes for the susceptibles. This provides a natural extension of the work of Balakrishnan & Pal (2013), who considered independently and identically distributed (i.i.d.) lifetimes in this setup. By linking covariates to the lifetime through the PH assumption, we obtain a flexible cure rate model. First, the baseline hazard is assumed to be of the Weibull form. Parameter estimation is carried out using the EM algorithm, and standard errors are estimated using Louis' method. The performance of the estimation method is assessed through a simulation study. A model discrimination study is performed using likelihood-based and information-based criteria, since the COM-Poisson model includes the geometric, Poisson and Bernoulli distributions as special cases. The details are covered in Chapter 2.
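For orientation (this is an illustrative sketch of my own, not the thesis code), the key structural quantity in such cure rate models is the population survival function S_pop(t) = E[S(t|x)^M], i.e. the probability generating function of the COM-Poisson count M of competing causes evaluated at the susceptible survival S(t|x); the cure fraction is P(M = 0). A minimal Python sketch with a Weibull PH susceptible survival, where the parameter names (lam, nu, shape, scale) are hypothetical:

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def com_poisson_pgf(s, lam, nu, m_max=200):
    # E[s^M] for M ~ COM-Poisson(lam, nu), truncating the series at m_max
    m = np.arange(m_max + 1)
    log_w = m * np.log(lam) - nu * gammaln(m + 1)   # unnormalized log-pmf
    log_num = logsumexp(log_w + m * np.log(np.maximum(s, 1e-300)))
    return np.exp(log_num - logsumexp(log_w))

def pop_survival(t, x, beta, shape, scale, lam, nu):
    # Susceptible survival under a Weibull PH model:
    # S(t | x) = exp(-(t/scale)^shape * exp(x' beta))
    s_susc = np.exp(-((t / scale) ** shape) * np.exp(x @ beta))
    # Long-term survivors: S_pop(t) = E[S(t|x)^M] -> P(M = 0) as t -> infinity
    return com_poisson_pgf(s_susc, lam, nu)

x = np.array([0.5, -1.0]); beta = np.array([0.3, 0.2])
print(pop_survival(5.0, x, beta, shape=1.5, scale=10.0, lam=1.5, nu=0.8))
print(com_poisson_pgf(0.0, lam=1.5, nu=0.8))   # cure fraction P(M = 0) = 1/Z(lam, nu)
```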
As a natural extension of this work, we next approximate the baseline hazard by a piecewise linear approximation (PLA) and estimate it non-parametrically for the COM-Poisson cure rate model under the PH setup. The corresponding simulation study and model discrimination results are presented in Chapter 3. Lastly, we consider a destructive cure rate model, introduced by Rodrigues et al. (2011), and study it under the PH assumption for the lifetimes of the susceptibles. Here, the initial number of competing causes is modeled by a weighted Poisson distribution. We then focus on three special cases, viz., the destructive exponentially weighted Poisson, destructive length-biased Poisson and destructive negative binomial cure rate models; all corresponding results are presented in Chapter 4. / Thesis / Doctor of Philosophy (PhD)
|
8 |
Duration Data Analysis in Longitudinal Survey / Boudreau, Christian January 2003 (has links)
Considerable amounts of event history data are collected through longitudinal surveys. These surveys have many particularities that result from the dynamic nature of the population under study and from the fact that the data are collected under complex survey designs involving clustering and stratification. These particularities include attrition, the seam effect, censoring, left-truncation, and complications in variance estimation due to the complex survey design. This thesis focuses on the last two points.
Statistical methods based on the stratified Cox proportional hazards model, which account for intra-cluster dependence when the sampling design is non-informative, are proposed. This is achieved using the theory of estimating equations in conjunction with empirical process theory. Issues concerning analytic inference from survey data and the use of weighted versus unweighted procedures are also discussed. The proposed methodology is applied to data from the U.S. Survey of Income and Program Participation (SIPP) and the Canadian Survey of Labour and Income Dynamics (SLID).
Finally, different statistical methods for handling left-truncated sojourns are explored and compared. These include the conditional partial likelihood and other methods based on the exponential or Weibull distributions.
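As a rough illustration of this kind of fit (not the thesis's estimators), a stratified Cox model with survey weights and a cluster-grouped sandwich variance can be sketched with Python's lifelines package; the data frame and column names below are hypothetical, and lifelines' robust variance is a generic sandwich estimator rather than the survey-design-specific variance developed in the thesis:

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical longitudinal-survey extract: one row per sojourn.
df = pd.DataFrame({
    "duration": [12.0, 30.5, 7.2, 24.1, 15.8, 9.3, 21.7, 4.6],  # sojourn length
    "event":    [1, 0, 1, 1, 0, 1, 1, 1],   # 1 = transition observed, 0 = censored
    "stratum":  ["A", "A", "A", "A", "B", "B", "B", "B"],        # design stratum
    "psu":      [1, 1, 2, 2, 3, 3, 4, 4],   # primary sampling unit (cluster)
    "weight":   [1.3, 0.8, 1.1, 0.9, 1.0, 1.2, 0.7, 1.4],       # survey weight
    "x1":       [0.2, -1.0, 0.5, 1.4, -0.3, 0.8, -0.6, 1.1],    # covariate
})

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="duration",
    event_col="event",
    strata=["stratum"],    # separate baseline hazard within each stratum
    weights_col="weight",  # survey weights in the partial-likelihood score
    cluster_col="psu",     # group the sandwich variance by sampling cluster
    robust=True,           # sandwich (robust) standard errors
)
cph.print_summary()
```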
|
9 |
Advances in Model Selection Techniques with Applications to Statistical Network Analysis and Recommender Systems / Franco Saldana, Diego January 2016 (links)
This dissertation focuses on developing novel model selection techniques, the process by which a statistician selects one of a number of competing models of varying dimensions, under an array of statistical assumptions on the observed data. Traditionally, researchers have advanced two main reasons for preferring model selection strategies over classical maximum likelihood estimates (MLEs). The first is prediction accuracy: by shrinking some model parameters or setting them to zero, one sacrifices the unbiasedness of the MLE for a reduced variance, which in turn leads to an overall improvement in predictive performance. The second is interpretability of the selected models in the presence of a large number of predictors: in order to obtain a parsimonious representation of the relationship between the response and the covariates, we are willing to sacrifice some of the smaller details brought in by spurious predictors.
In the first part of this work, we revisit the family of variable selection techniques known as sure independence screening (SIS) procedures for generalized linear models and the Cox proportional hazards model. By cleverly combining some of their most powerful variants, we propose new extensions based on sample splitting, data-driven thresholding, and combinations thereof. A publicly available package for the R statistical software shows that our enhanced variable selection procedures achieve considerable improvements in model selection, at competitive computational cost, relative to traditional penalized likelihood methods applied directly to the full set of covariates.
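For intuition, vanilla sure independence screening ranks predictors by a marginal utility and keeps roughly the top n/log(n) of them before any penalized fit is run. A bare-bones Python sketch of that basic screening step (my own illustration of the linear-model case via marginal correlations, not the package's enhanced variants; GLM and Cox versions would rank by marginal fit magnitudes instead):

```python
import numpy as np

def sis_screen(X, y, d=None):
    """Vanilla sure independence screening: rank predictors by absolute
    marginal correlation with the response and retain the top d,
    where d defaults to ceil(n / log(n))."""
    n, p = X.shape
    if d is None:
        d = int(np.ceil(n / np.log(n)))
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    marginal = np.abs(Xs.T @ ys) / n          # |marginal correlations|
    keep = np.argsort(marginal)[::-1][:d]     # indices of the top-d predictors
    return np.sort(keep)

# toy example: 100 observations, 1000 predictors, 3 of them active
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1000))
y = X[:, 0] - 2 * X[:, 1] + 0.5 * X[:, 2] + rng.standard_normal(100)
print(sis_screen(X, y))   # should usually contain indices 0, 1, 2
```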
Next, we develop model selection techniques within the framework of statistical network analysis for two frequent problems arising in the context of stochastic blockmodels: selecting the number of communities and detecting change-points. In the second part of this work, we propose a composite likelihood approach for selecting the number of communities in stochastic blockmodels and their variants, with robustness against possible misspecification of the underlying conditional independence assumptions of the stochastic blockmodel. Several simulation studies, as well as two real data examples, demonstrate the superiority of our composite likelihood approach over the traditional Bayesian Information Criterion and variational Bayes solutions. In the third part of this thesis, we extend the analysis from static network data to dynamic stochastic blockmodels, where the model selection task is to segment a time-varying network into temporal and spatial components via a change-point detection hypothesis test. We propose a test statistic based on aggregating data across the different temporal layers through kernel-weighted adjacency matrices computed before and after each candidate change-point, and illustrate the approach on synthetic data and the Enron email corpus.
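A stripped-down sketch of the aggregation idea, under assumptions of my own (the thesis's exact statistic and normalization may differ): smooth the adjacency matrices before and after a candidate change-point with kernel weights, then compare the two weighted means in spectral norm.

```python
import numpy as np

def changepoint_stat(A, t, h=2.0):
    """A: array of shape (T, n, n), adjacency matrices over T snapshots.
    Kernel-weight the layers before and after candidate change-point t
    (Gaussian kernel, bandwidth h) and compare the two weighted means
    in spectral norm. Larger values suggest a change near t."""
    T = A.shape[0]
    times = np.arange(T)
    w_pre = np.exp(-0.5 * ((times[:t] - (t - 1)) / h) ** 2)
    w_post = np.exp(-0.5 * ((times[t:] - t) / h) ** 2)
    A_pre = np.tensordot(w_pre / w_pre.sum(), A[:t], axes=1)
    A_post = np.tensordot(w_post / w_post.sum(), A[t:], axes=1)
    return np.linalg.norm(A_pre - A_post, ord=2)   # largest singular value

def scan(A, h=2.0):
    # scan all interior candidate change-points; return argmax and max
    stats = [changepoint_stat(A, t, h) for t in range(1, A.shape[0])]
    return int(np.argmax(stats)) + 1, max(stats)
```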
The matrix completion problem consists of recovering a low-rank data matrix from a small sample of its entries. In the final part of this dissertation, we extend prior work on nuclear norm regularization for matrix completion by incorporating a continuum of penalty functions between the convex nuclear norm and the nonconvex rank function. We propose an algorithmic framework for computing a family of nonconvex penalized matrix completion problems with warm starts, and present a systematic study of the resulting spectral thresholding operators. We demonstrate that this nonconvex regularization framework leads to improved model selection, finding low-rank solutions with better predictive performance on a wide range of synthetic data and on the well-known Netflix recommender-system data.
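To make the spectral-thresholding idea concrete, here is a minimal soft-impute-style iteration in the spirit of Mazumder et al. (2010), using soft singular-value thresholding (the nuclear-norm end of the continuum); the nonconvex penalties studied in the thesis would swap in a different thresholding operator at the marked line. This is an illustrative sketch, not the dissertation's algorithm:

```python
import numpy as np

def soft_impute(M, mask, lam, n_iters=200, tol=1e-5):
    """Complete M (observed where mask is True) by repeatedly filling
    the missing entries with the current low-rank estimate and
    soft-thresholding the singular values at lam."""
    Z = np.where(mask, M, 0.0)
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(np.where(mask, M, Z), full_matrices=False)
        s_thr = np.maximum(s - lam, 0.0)   # soft threshold; a nonconvex penalty
        Z_new = (U * s_thr) @ Vt           # would reshape s differently here
        if np.linalg.norm(Z_new - Z) < tol * max(1.0, np.linalg.norm(Z)):
            return Z_new
        Z = Z_new
    return Z

# toy example: rank-2 matrix with roughly half of the entries observed
rng = np.random.default_rng(1)
L = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 40))
mask = rng.random((50, 40)) < 0.5
M_hat = soft_impute(L, mask, lam=1.0)
print(np.linalg.norm((M_hat - L)[~mask]) / np.linalg.norm(L[~mask]))
```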
|