About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.

1. Assessing time-by-covariate interactions in Cox proportional hazards regression models using cubic spline functions

Hess, Kenneth Robert. Hardy, Robert J. Unknown Date
Source: Dissertation Abstracts International, Volume: 54-08, Section: B, page: 3941. Supervisor: Robert J. Hardy. Includes bibliographical references (leaves 109-114).

2. Analysis of failure time data under risk set sampling and missing covariates

Qi, Lihong. January 2003
Thesis (Ph. D.)--University of Washington, 2003. / Vita. Includes bibliographical references (p. 141-146).

3. Variable importance in predictive models: separating borrowing information and forming contrasts

Rudser, Kyle D. January 2007
Thesis (Ph. D.)--University of Washington, 2007. / Vita. Includes bibliographical references (p. 150-154).

4. Advances in Model Selection Techniques with Applications to Statistical Network Analysis and Recommender Systems

Franco Saldana, Diego. January 2016
This dissertation develops novel model selection techniques, the process by which a statistician selects one of a number of competing models of varying dimensions, under an array of different statistical assumptions on the observed data. Researchers have traditionally advanced two main reasons for preferring model selection strategies over classical maximum likelihood estimates (MLEs). The first is prediction accuracy: by shrinking or setting to zero some model parameters, one sacrifices the unbiasedness of MLEs for a reduced variance, which in turn leads to an overall improvement in predictive performance. The second is interpretability of the selected models in the presence of a large number of predictors: to obtain a parsimonious representation of the relationship between the response and the covariates, we are willing to sacrifice some of the smaller details brought in by spurious predictors.

In the first part of this work, we revisit the family of variable selection techniques known as sure independence screening procedures for generalized linear models and the Cox proportional hazards model. Combining some of their most powerful variants, we propose new extensions based on sample splitting, data-driven thresholding, and combinations thereof. A publicly available package developed in the R statistical software demonstrates that our enhanced variable selection procedures offer considerable improvements in model selection, at competitive computational cost, over traditional penalized likelihood methods applied directly to the full set of covariates.

Next, we develop model selection techniques within the framework of statistical network analysis for two problems that frequently arise in the context of stochastic blockmodels: community number selection and change-point detection. In the second part of this work, we propose a composite likelihood based approach for selecting the number of communities in stochastic blockmodels and their variants, designed to be robust against possible misspecification of the underlying conditional independence assumptions of the stochastic blockmodel. Several simulation studies, as well as two real data examples, demonstrate the superiority of our composite likelihood approach over the traditional Bayesian Information Criterion and variational Bayes solutions.

In the third part of this thesis, we extend our analysis from static network data to dynamic stochastic blockmodels, where the model selection task is the segmentation of a time-varying network into temporal and spatial components via a change-point detection hypothesis test. We propose a test statistic that aggregates data across the temporal layers through kernel-weighted adjacency matrices computed before and after each candidate change-point, and we illustrate the approach on synthetic data and the Enron email corpus.

The matrix completion problem consists of recovering a low-rank data matrix from a small sample of its entries. In the final part of this dissertation, we extend prior work on nuclear norm regularization methods for matrix completion by incorporating a continuum of penalty functions between the convex nuclear norm and the nonconvex rank function. We propose an algorithmic framework for computing a family of nonconvex penalized matrix completion problems with warm starts, and we present a systematic study of the resulting spectral thresholding operators. Our nonconvex regularization framework yields improved model selection, finding low-rank solutions with better predictive performance on a wide range of synthetic data and the famous Netflix recommender system data.
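The screening idea behind the first part is simple enough to sketch. Below is a minimal illustration of plain sure independence screening for a linear model, assuming a continuous response and ranking covariates by absolute marginal correlation; the function name, the threshold d = n/log(n), and the simulated data are illustrative and not taken from the dissertation's R package.

```r
## Minimal sketch of sure independence screening (SIS): rank covariates
## by absolute marginal correlation with the response and keep the top d.
sis_screen <- function(X, y, d = floor(nrow(X) / log(nrow(X)))) {
  utility <- abs(cor(X, y))                  # marginal utility per covariate
  keep <- order(utility, decreasing = TRUE)[seq_len(d)]
  sort(keep)                                 # indices of retained covariates
}

set.seed(1)
n <- 100; p <- 1000
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 2 * X[, 2] + rnorm(n)          # only covariates 1 and 2 are active
head(sis_screen(X, y))                       # the active indices survive screening
```

A penalized likelihood method would then be run on the d retained covariates only, which is the source of the computational savings the abstract describes.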

5. Beyond the Cox model: extensions of the model and alternative estimators

Sasieni, Peter D. January 1989
Thesis (Ph. D.)--University of Washington, 1989. / Vita. Includes bibliographical references ([217]-228).

6. Generalized estimating equations for censored multivariate failure time data

Cai, Jianwen, January 1992
Thesis (Ph. D.)--University of Washington, 1992. / Vita. Includes bibliographical references (leaves [135]-138).

7. Semi-parametric analysis of failure time data from case-control family studies on candidate genes

Chen, Lu, January 2004
Thesis (Ph. D.)--University of Washington, 2004. / Vita. Includes bibliographical references (p. 97-102).

8. Semiparametric Estimation of a Gaptime-Associated Hazard Function

Teravainen, Timothy. January 2014
This dissertation proposes a suite of novel Bayesian semiparametric estimators for a proportional hazard function associated with the gaptimes, or inter-arrival times, of a counting process in survival analysis. The Cox model is applied and extended in order to identify the subsequent effect of an event on future events in a system with renewal. The estimators may also be applied, without changes, to model the effect of a point treatment on subsequent events, as well as the effect of an event on subsequent events in neighboring subjects. These Bayesian semiparametric estimators are used to analyze the survival and reliability of the New York City electric grid. In particular, the phenomenon of "infant mortality," whereby electrical supply units are prone to immediate recurrence of failure, is flexibly quantified as a period of increased risk. In this setting, the Cox model removes the significant confounding effect of seasonality. Without this correction, infant mortality would be misestimated due to the exogenously increased failure rate during summer months and times of high demand. The structural assumptions of the Bayesian estimators allow the use and interpretation of sparse event data without the rigid constraints of standard parametric models used in reliability studies.
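As a rough frequentist stand-in for the estimators described above (not the dissertation's Bayesian machinery), one can build gaptimes from consecutive failure times and fit a Cox model with a seasonal covariate using the standard R survival package; all names and the simulated data below are illustrative.

```r
## Sketch: gaptimes with a seasonal confounder, adjusted for via a Cox model.
library(survival)

set.seed(2)
n <- 500
prev_month <- sample(1:12, n, replace = TRUE)       # month of the previous failure
summer <- as.integer(prev_month %in% 6:8)           # high-demand season indicator
gaptime <- rexp(n, rate = 0.1 * exp(0.7 * summer))  # shorter gaps in summer
status  <- rbinom(n, 1, 0.9)                        # some gaps are right-censored

fit <- coxph(Surv(gaptime, status) ~ summer)
summary(fit)$coefficients                           # log hazard ratio for summer
```

Marginally, this mixture of exponential rates has a decreasing hazard that mimics "infant mortality"; conditioning on the season, as the Cox fit does, removes the artifact.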

9. Session Clustering Using Mixtures of Proportional Hazards Models

Mair, Patrick, Hudec, Marcus. January 2008
Starting from classical Weibull mixture models, we propose a framework for clustering survival data under various proportionality restrictions. Introducing mixtures of Weibull proportional hazards models on a multivariate data set, we carry out a parametric clustering approach based on the EM algorithm. The problem of non-response in the data is also considered. The application example is a real-life data set from the analysis of a world-wide operating eCommerce application: sessions are clustered according to the dwell times a user spends on certain page areas, and the solution allows the navigation behavior to be interpreted in terms of survival and hazard functions. A software implementation is provided as an R package. / Series: Research Report Series / Department of Statistics and Mathematics
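A bare-bones version of the EM iteration for a two-component Weibull mixture of dwell times (without the proportionality restrictions or the non-response handling the paper adds) might look as follows; this is an illustrative sketch, not the authors' R package.

```r
## EM for a two-component Weibull mixture of session dwell times.
em_weibull_mix <- function(t, iters = 50) {
  pi1 <- 0.5                                             # mixing weight, component 1
  par <- list(c(1, median(t) / 2), c(1, median(t) * 2))  # (shape, scale) pairs
  for (i in seq_len(iters)) {
    ## E-step: posterior probability that each session belongs to component 1
    d1 <- pi1 * dweibull(t, par[[1]][1], par[[1]][2])
    d2 <- (1 - pi1) * dweibull(t, par[[2]][1], par[[2]][2])
    w <- d1 / (d1 + d2)
    ## M-step: weighted Weibull MLE per component (log-parameterized)
    for (k in 1:2) {
      wk <- if (k == 1) w else 1 - w
      nll <- function(p) -sum(wk * dweibull(t, exp(p[1]), exp(p[2]), log = TRUE))
      par[[k]] <- exp(optim(log(par[[k]]), nll)$par)
    }
    pi1 <- mean(w)
  }
  list(weight = pi1, comp1 = par[[1]], comp2 = par[[2]])
}

set.seed(3)
t <- c(rweibull(300, 0.8, 5), rweibull(300, 3, 60))  # two session types
em_weibull_mix(t)
```

Each session is then assigned to the component with the larger posterior weight w, which is the clustering step.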

10. Hypothesis testing based on pool screening with unequal pool sizes

Gao, Hongjiang. January 2010
Thesis (Ph.D.)--University of Alabama at Birmingham, 2010. / Title from PDF title page (viewed on June 28, 2010). Includes bibliographical references.
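The record carries no abstract, but the standard pool-screening (group-testing) model it builds on is easy to state: a pool of size s tests positive with probability 1 - (1 - p)^s, where p is the prevalence. Below is a hedged sketch of the prevalence MLE and a likelihood-ratio test under unequal pool sizes; the thesis's actual test statistic may well differ.

```r
## Log-likelihood of prevalence p given pool sizes and pool-level results.
loglik <- function(p, sizes, positive) {
  q <- (1 - p)^sizes                         # P(pool of size s is negative)
  sum(positive * log(1 - q) + (1 - positive) * log(q))
}

## MLE of p and a likelihood-ratio test of H0: p = p0.
pool_lrt <- function(sizes, positive, p0) {
  mle <- optimize(loglik, c(1e-6, 1 - 1e-6), sizes = sizes,
                  positive = positive, maximum = TRUE)
  lr <- 2 * (mle$objective - loglik(p0, sizes, positive))
  list(p_hat = mle$maximum,
       p_value = pchisq(lr, df = 1, lower.tail = FALSE))
}

set.seed(4)
sizes <- sample(5:50, 40, replace = TRUE)          # unequal pool sizes
positive <- rbinom(40, 1, 1 - (1 - 0.01)^sizes)    # true prevalence 1%
pool_lrt(sizes, positive, p0 = 0.05)
```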
