About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Effect of selection of censoring times on survival analysis estimation of disease incidence and association with risk factors

Himali, Jayandra Jung, 24 September 2015
In longitudinal cohort studies, potential risk factors are measured at baseline, subjects are followed over time, and disease endpoints are ascertained via extensive surveillance. Individual follow-up time runs from baseline to the event, if one is observed during the study period. Follow-up time is censored for subjects who are not observed to have the event: at the end of the study period for subjects who remain event-free, or earlier for subjects who leave the study by choice or through mortality, or whose last evaluation occurred before the end of the study. Survival analytic techniques are unique in that the unit of analysis is not the individual but the person-time contributed by the individual. Surveillance in longitudinal studies is generally quite rigorous. Subjects are examined in waves and their event status is ascertained. Surveillance continues between waves, and events come to the attention of the investigator. If the interval between waves is long, analyses can be conducted on all available data, with non-events censored early at the last examination and events followed beyond the general examination to the incident event. Motivated by analyses of cardiovascular endpoints in the Framingham Heart Study (FHS), we consider four censoring methods for non-events and evaluate their impact on estimates of incidence, and on tests of association between risk factors and incidence. We further investigate the impact of early censoring of non-events (as compared to events) under various scenarios with respect to incidence estimation, robustness, and power, using a simulation study of Weibull survival models over a range of sample sizes and distribution parameters. Our FHS and simulation investigations show that early censoring of non-events causes overestimation of incidence, particularly when the baseline incidence is low. Early censoring of non-events did not affect the robustness of the Wald test [H0: Hazard Ratio (HR) = 1]. However, in both the FHS and over the range of simulation scenarios, under early censoring of non-events, estimates of the HR were closer to the null (1.0), and the power to detect associations with risk factors was markedly reduced.
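As a hedged illustration of the censoring contrast studied in this abstract, the sketch below simulates Weibull event times and compares crude incidence (events per person-year) when non-events are censored at the end of the study versus at a hypothetical last examination. All parameter values (Weibull shape and scale, follow-up length, exam gap) are illustrative assumptions, not values from the FHS analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
study_end = 10.0   # years of follow-up (assumed)
exam_gap = 4.0     # last routine exam 4 years before study end (assumed)

# Hypothetical Weibull event times; scale chosen so baseline incidence is low
shape, scale = 1.2, 60.0
t_event = scale * rng.weibull(shape, size=n)

event = t_event <= study_end
# Events are followed to the incident event in both schemes; only the
# censoring time of non-events differs.
t_end   = np.where(event, t_event, study_end)             # censor non-events at study end
t_early = np.where(event, t_event, study_end - exam_gap)  # censor non-events at last exam

for label, t in [("censored at study end", t_end), ("censored early", t_early)]:
    rate = event.sum() / t.sum()  # crude incidence per person-year
    print(f"{label}: {1000 * rate:.2f} events per 1000 person-years")
```

Early censoring shrinks the person-time denominator while the event count is unchanged, so the crude incidence is inflated, consistent with the overestimation reported above.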
12

Informative censoring with an imprecise anchor event: estimation of change over time and implications for longitudinal data analysis

Collins, Jamie Elizabeth, 22 January 2016
A number of methods have been developed to analyze longitudinal data with dropout. However, there is no uniformly accepted approach. Model performance, in terms of the bias and accuracy of the estimator, depends on the underlying missing data mechanism, and it is unclear how existing methods will perform when little is known about that mechanism. Here we evaluate methods for estimating change over time in longitudinal studies with informative dropout in three settings: using a linear mixed-effects (LME) estimator in the presence of multiple types of dropout; proposing an update to the pattern mixture modeling (PMM) approach in the presence of imprecision in identifying informative dropouts; and utilizing this new approach in the presence of a prognostic factor by dropout interaction. We demonstrate that the amount of dropout, the proportion of dropout that is informative, and the variability in the outcome all affect the performance of an LME estimator in data with a mixture of informative and non-informative dropout. When the amount of dropout is moderate to large (>20% overall), the potential for relative bias greater than 10% increases, especially with large variability in the outcome measure, even under scenarios where only a portion of the dropouts is informative. Under conditions where LME models do not perform well, it is necessary to take the missing data mechanism into account. We develop a method that extends the PMM approach to account for uncertainty in identifying informative dropouts. In scenarios with this uncertainty, the proposed method outperformed the traditional method in terms of bias and coverage. In the presence of interaction between dropout and a prognostic factor, the LME model performed poorly, in terms of bias and coverage, in estimating prognostic factor-specific slopes and the interaction between the prognostic factor and time. The update to the PMM approach, proposed here, outperformed both the LME and traditional PMM. Our work suggests that investigators must be cautious with any analysis of data with informative dropout. We found that particular attention must be paid to the model assumptions when the missing data mechanism is not well understood.
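A minimal sketch of the kind of scenario examined here: longitudinal data with random intercepts and slopes, where dropout depends on the latent slope (and is therefore informative), fitted with a standard LME via statsmodels. Every simulation setting below is an assumption for illustration, not a scenario from this dissertation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n, waves = 300, 5
true_slope = -1.0

ids = np.repeat(np.arange(n), waves)
time = np.tile(np.arange(waves, dtype=float), n)
b0 = rng.normal(0.0, 2.0, n)    # random intercepts
b1 = rng.normal(0.0, 0.8, n)    # random slope deviations
y = 50 + b0[ids] + (true_slope + b1[ids]) * time + rng.normal(0.0, 1.5, ids.size)

# Informative dropout: the 30% of subjects with the steepest latent decline
# are observed only at the first two waves.
last_wave = np.where(b1 < np.quantile(b1, 0.3), 2, waves)
keep = time < last_wave[ids]

df = pd.DataFrame({"id": ids[keep], "time": time[keep], "y": y[keep]})
fit = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time").fit()
# Expect the estimate to be attenuated toward the completers' shallower decline
print(f"estimated mean slope: {fit.params['time']:.3f} (true: {true_slope})")
```

Because dropout depends on the latent slope rather than on observed responses, the mechanism is non-ignorable, and a likelihood-based LME fit is typically biased toward the null in the mean slope, which is the failure mode the abstract describes.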
13

Modeling longitudinal data with interval censored anchoring events

Chu, Chenghao, 01 March 2018
In many longitudinal studies, the time scales on which the primary outcomes are assessed are anchored by pre-specified events. These anchoring events, however, are often unobservable, and they are randomly distributed with an unknown distribution. Without direct observations of the anchoring events, the time scale needed for analysis is unavailable, and analysts cannot use traditional longitudinal models to describe temporal changes as desired. Existing methods often make either ad hoc or strong assumptions about the anchoring events, which are unverifiable and prone to biased estimation and invalid inference. Although the anchoring events cannot be observed directly, researchers can often ascertain an interval that contains them, i.e., the anchoring events are interval censored. In this research, we propose a two-stage method to fit commonly used longitudinal models with interval-censored anchoring events. In the first stage, we obtain an estimate of the anchoring events' distribution by a nonparametric method using the interval-censored data; in the second stage, we obtain the parameter estimates as stochastic functionals of the estimated distribution. The construction of the stochastic functional depends on the model setting. In this research, we considered two types of models. The first is a distribution-free model, in which no parametric assumption is made on the distribution of the error term. The second is likelihood based, extending the classic mixed-effects models to the situation in which the origin of the time scale for analysis is interval censored. For the purpose of large-sample statistical inference in both models, we studied the asymptotic properties of the proposed functional estimator using empirical process theory. Theoretically, our method provides a general approach to studying semiparametric maximum pseudo-likelihood estimators in similar data situations. Finite-sample performance of the proposed method was examined through a simulation study. Efficient algorithms for computing the parameter estimates are provided. We applied the proposed method to a real data analysis and obtained new findings that could not be reached using traditional mixed-effects models.
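The first stage of the two-stage approach calls for a nonparametric estimate of the anchoring-event distribution from interval-censored data. Below is a hedged, from-scratch sketch of a Turnbull-style self-consistency (EM) estimator; the exam schedule, attendance pattern, event-time distribution, and function names are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def npmle_interval_censored(L, R, tol=1e-8, max_iter=5000):
    """Turnbull-style EM for the NPMLE of an event-time distribution
    when each event is only known to lie in the interval (L[i], R[i]]."""
    s = np.unique(np.concatenate([L, R]))                       # candidate support points
    A = (s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])  # n x k membership
    p = np.full(s.size, 1.0 / s.size)                           # initial uniform mass
    for _ in range(max_iter):
        num = A * p                                   # mass each interval can see
        cond = num / num.sum(axis=1, keepdims=True)   # E-step: posterior mass per subject
        p_new = cond.mean(axis=0)                     # M-step: average over subjects
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return s, p

rng = np.random.default_rng(7)
exams = np.arange(0.0, 21.0, 2.0)                     # assumed biennial exam schedule
n = 300
t = np.clip(rng.gamma(4.0, 1.0, size=n), 1e-3, exams[-1] - 1e-3)  # hypothetical anchor times

# Subjects miss exams at random, so the censoring intervals overlap irregularly.
L, R = np.empty(n), np.empty(n)
for i in range(n):
    show = rng.uniform(size=exams.size) < 0.7         # each exam attended w.p. 0.7
    show[0] = show[-1] = True                         # baseline and final exam always attended
    att = exams[show]
    L[i] = att[att < t[i]].max()                      # last attended exam before the event
    R[i] = att[att >= t[i]].min()                     # first attended exam after the event

s, p = npmle_interval_censored(L, R)
print("estimated P(T <= 6):", p[s <= 6.0].sum())      # true value is about 0.849 for Gamma(4, 1)
```

In the dissertation's second stage, parameter estimates are built as stochastic functionals of this estimated distribution; the sketch above covers only the first stage.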
14

Food demand in urban China: An empirical analysis using micro household data

Liu, Kang Ernest, 12 February 2003
No description available.
15

The general linear model for censored data

Zhao, Yonggang, 05 September 2003
No description available.
16

Optimal Progressive Type-II Censoring Schemes for Non-Parametric Confidence Intervals of Quantiles

Han, Donghoon, 09 1900
In this work, optimal censoring schemes are investigated for the non-parametric confidence intervals of population quantiles under progressive Type-II right censoring. The proposed inference can be universally applied to any probability distribution for continuous random variables. By using the interval mass as an optimality criterion, the optimization process is also independent of the actual observed values from a sample as long as the initial sample size n and the number of observations m are predetermined. This study is based on the fact that each (uncensored) order statistic observed from progressive Type-II censoring can be represented as a mixture of underlying ordinary order statistics with exactly known weights [11, 12]. Using several sample sizes combined with various degrees of censoring, the results of the optimization are tabulated here for a wide range of quantiles with selected levels of significance (i.e., α = 0.01, 0.05, 0.10). With the optimality criterion under consideration, the efficiencies of the worst progressive Type-II censoring scheme and the ordinary Type-II censoring scheme are also examined in comparison with the best censoring scheme obtained for a given quantile with fixed n and m.
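The distribution-free character that this abstract relies on can be checked by simulation. The hedged sketch below generates progressively Type-II censored uniform samples with the Balakrishnan-Sandhu (1995) algorithm and Monte-Carlo-estimates the coverage of a nonparametric quantile interval formed by two observed order statistics; the scheme R, the quantile, and the chosen indices are arbitrary assumptions, not the tabulated optima.

```python
import numpy as np

def progressive_type2_uniform(R, rng):
    """One progressively Type-II censored sample from U(0, 1) under scheme R
    (Balakrishnan-Sandhu algorithm); returns the m observed order statistics."""
    m = len(R)
    a = np.arange(1, m + 1) + np.cumsum(R[::-1])   # a_i = i + R_m + ... + R_(m-i+1)
    V = rng.uniform(size=m) ** (1.0 / a)
    return 1.0 - np.cumprod(V[::-1])               # increasing in i

rng = np.random.default_rng(1)
R = np.array([2, 0, 0, 2, 0, 0, 0, 1])             # assumed scheme: m = 8, n = 13
p = 0.5                                            # population median
i, j = 1, 6                                        # interval [X_(2), X_(7)] (0-based indices)

reps = 20_000
hits = sum(u[i] <= p <= u[j]
           for u in (progressive_type2_uniform(R, rng) for _ in range(reps)))
print(f"estimated coverage of [X_({i + 1}), X_({j + 1})] for the median: {hits / reps:.3f}")
```

Because the joint law of the observed order statistics from U(0, 1) does not depend on the parent distribution, a coverage probability estimated this way applies to any continuous population, which is what makes the tabulated schemes universal.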
17

An Asymptotic Approach to Progressive Censoring

Hofmann, Glenn; Cramer, Erhard; Balakrishnan, N.; Kunert, Gerd, 10 December 2002
Progressive Type-II censoring was introduced by Cohen (1963) and has since been the topic of much research. The question remains whether it is sensible to use this sampling plan by design, instead of regular Type-II right censoring. We introduce an asymptotic progressive censoring model and find optimal censoring schemes for location-scale families. Our optimality criterion is the determinant of the 2x2 covariance matrix of the asymptotic best linear unbiased estimators. We present an explicit expression for this criterion, and conditions for its boundedness. By means of numerical optimization, we determine optimal censoring schemes for the extreme value, the Weibull and the normal distributions. In many situations, it is shown that these progressive schemes significantly improve upon regular Type-II right censoring.
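The determinant criterion can be mimicked numerically. The sketch below Monte-Carlo-estimates the 2x2 covariance matrix of location-scale estimates under two progressive schemes for the extreme value (Gumbel minimum) distribution and compares determinants. As assumptions to note: it substitutes maximum likelihood estimates for the paper's asymptotic BLUEs, uses small arbitrary n, m, and schemes, and so illustrates only the idea of the criterion, not the paper's results.

```python
import numpy as np
from scipy import stats, optimize

def progressive_sample(R, mu, sigma, rng):
    """Progressively Type-II censored Gumbel(min) sample (Balakrishnan-Sandhu)."""
    m = len(R)
    a = np.arange(1, m + 1) + np.cumsum(R[::-1])
    V = rng.uniform(size=m) ** (1.0 / a)
    U = 1.0 - np.cumprod(V[::-1])
    return stats.gumbel_l.ppf(U, loc=mu, scale=sigma)

def neg_loglik(theta, x, R):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # observed failures contribute log f; the R_i removed units contribute log S
    return -(stats.gumbel_l.logpdf(x, mu, sigma).sum()
             + (R * stats.gumbel_l.logsf(x, mu, sigma)).sum())

def mc_det(R, rng, reps=1000):
    est = np.empty((reps, 2))
    for k in range(reps):
        x = progressive_sample(R, 0.0, 1.0, rng)
        res = optimize.minimize(neg_loglik, x0=[x.mean(), 0.0],
                                args=(x, R), method="Nelder-Mead")
        est[k] = res.x[0], np.exp(res.x[1])
    return np.linalg.det(np.cov(est.T))   # determinant of the 2x2 covariance matrix

rng = np.random.default_rng(5)
m, n = 10, 20
scheme_end   = np.r_[np.zeros(m - 1, dtype=int), n - m]   # ordinary Type-II censoring
scheme_front = np.r_[n - m, np.zeros(m - 1, dtype=int)]   # all removals after X_(1)
print("det, removals at end:  ", mc_det(scheme_end, rng))
print("det, removals up front:", mc_det(scheme_front, rng))
```

A smaller determinant indicates a jointly more precise (location, scale) estimator, which is the sense in which the paper ranks censoring schemes.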
18

Precedence-type test based on the Nelson-Aalen estimator of the cumulative hazard function

Galloway, Katherine Anne Forsyth, 03 July 2013
In reliability studies, the goal is to gain knowledge about a product's failure times or life expectancy. Precedence tests do not require large sample sizes and are used in reliability studies to compare the lifetime distributions of two samples. Precedence tests are useful since they provide reliable results early in a life-test and the surviving units can be used in other tests. Ng and Balakrishnan (2010) proposed a precedence-type test based on the Kaplan-Meier estimator of the cumulative distribution function. Here, a precedence-type test based on the Nelson-Aalen estimator of the cumulative hazard function is proposed. This test was developed for both Type-II right censoring and progressive Type-II right censoring. Numerical results are provided, including illustrative examples, critical values, and a power study. The results from this test were compared with those from the test based on the Kaplan-Meier estimator.
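The building block of the proposed test is the Nelson-Aalen estimator itself. Below is a hedged, from-scratch sketch of the estimator on simulated right-censored data (the exponential rates are arbitrary assumptions); the precedence-test construction on top of it is not reproduced here.

```python
import numpy as np

def nelson_aalen(times, events):
    """Nelson-Aalen estimate of the cumulative hazard H(t) from
    right-censored data (events == 1 marks an observed failure)."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    at_risk = len(times) - np.arange(len(times))    # risk-set size at each ordered time
    dH = np.where(events == 1, 1.0 / at_risk, 0.0)  # increment d_i / n_i at failures
    return times, np.cumsum(dH)

rng = np.random.default_rng(3)
t_fail = rng.exponential(10.0, size=200)   # assumed Exp(mean 10) lifetimes
t_cens = rng.exponential(25.0, size=200)   # assumed independent censoring
obs = np.minimum(t_fail, t_cens)
evt = (t_fail <= t_cens).astype(int)

times, H = nelson_aalen(obs, evt)
# For an exponential lifetime with mean 10 the true cumulative hazard is t / 10,
# so the estimate at t = 10 should sit near 1.0.
print("H(10) estimate:", H[times <= 10.0][-1])
```

The Nelson-Aalen increments d_i/n_i are the same ingredients as the Kaplan-Meier factors (1 - d_i/n_i), which is why tests built on the two estimators behave similarly and invite the comparison reported here.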
