51

Longitudinal Analysis of Renal Function using ZIP GEE on OLT Transplant Patients Undergoing NAC Prophylaxis

Mehta, Shekhar Harshad 25 September 2006 (has links)
Organ dysfunction is associated with oxidative stress following Orthotopic Liver Transplant (OLT) surgery. N-Acetyl Cysteine (NAC) is an acetylated form of the amino acid cysteine. NAC is known to replenish glutathione in the bloodstream, which helps relieve cell damage caused by oxidative stress. NAC was used in a placebo-controlled study to assess its effects on organ dysfunction caused by oxidative stress following OLT surgery. A standard NAC regimen, as used to treat acetaminophen toxicity, was administered during surgery. Measures of hepatic and renal dysfunction were recorded at unequally spaced time points over a follow-up period of one year. The Generalized Estimating Equation (GEE) approach was used to model continuous hepatic responses. Discrete renal dysfunction responses are shown to follow a zero-inflated Poisson (ZIP) distribution, which was accommodated by the GEE procedure proposed by Liang and Zeger to produce consistent and efficient estimates of the treatment effect of NAC. The estimates produced contradictory results for a hypothesized protective effect of NAC against hepatic dysfunction. The public health relevance of this work is that NAC treatment, if shown to be efficacious with respect to renal function, could benefit over six thousand OLT patients each year.
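The zero-inflated Poisson (ZIP) distribution named in the title can be sketched in a few lines. The following simulation is purely illustrative (not taken from the thesis); the mixing probability `pi` and rate `lam` are hypothetical values, and the comments state the ZIP mean and variance formulas that the simulation checks empirically.

```python
import numpy as np

def sample_zip(pi, lam, size, rng):
    """Draw from a zero-inflated Poisson (ZIP): with probability `pi`
    the count is a structural zero, otherwise it is Poisson(lam)."""
    structural_zero = rng.random(size) < pi
    counts = rng.poisson(lam, size)
    counts[structural_zero] = 0
    return counts

rng = np.random.default_rng(0)
pi, lam = 0.3, 2.0           # hypothetical mixing probability and rate
y = sample_zip(pi, lam, 100_000, rng)

# ZIP moments, checked empirically:
#   E[Y]   = (1 - pi) * lam                    -> 1.4 here
#   Var[Y] = (1 - pi) * lam * (1 + pi * lam)   -> 2.24 here (overdispersed)
zip_mean = y.mean()
zip_var = y.var()
```

The variance exceeding the mean is what makes an ordinary Poisson GEE inadequate for such counts.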
52

ACCOUNTING FOR MONOTONE ATTRITION IN A POSTPARTUM DEPRESSION CLINICAL TRIAL

Roumani, Yazan 25 September 2006 (has links)
Longitudinal studies in public health, medicine, and the social sciences are often complicated by monotone attrition, where a participant drops out before the end of the study and all of his or her subsequent measurements are missing. To obtain accurate, unbiased results, it is of public health importance to use appropriate missing-data analytic methods to address monotone attrition. The defining feature of longitudinal studies is that several measurements are taken for each participant over time. The commonly used methods for analyzing incomplete longitudinal data, complete case analysis and last observation carried forward, are not recommended because they produce biased estimators. Simple imputation and multiple imputation procedures provide alternative approaches for addressing monotone attrition. However, simple imputation is difficult in a multivariate setting and produces biased estimators. Multiple imputation addresses those shortcomings and allows a straightforward assessment of the sensitivity of inferences to various models for non-response. This thesis reviews the literature on missing data mechanisms and missing data analysis methods for monotone attrition. Data from a postpartum depression clinical trial comparing the effects of two drugs (nortriptyline and sertraline) on remission status at 8 weeks were re-analyzed using these methods. The original analysis, which only used available data, was replicated first. Then patterns and predictors of attrition were identified. Last observation carried forward, mean imputation, and multiple imputation were used to account for both monotone attrition and a small number of intermittent missing measurements. In the multiple imputation, every missing measurement was imputed 6 times by predictive mean matching. Each of the 6 completed data sets was analyzed separately, and the results of all the analyses were combined to obtain the overall estimates and standard errors.
In each analysis, continuous remission levels were imputed but the probability of remission was analyzed. The original conclusion of no significant difference in the probability of remission at week 8 between the two drug groups was sustained under last observation carried forward, mean imputation, and multiple imputation. Most dropouts occurred during the first three weeks, and participants taking sertraline who lived alone were more likely to drop out.
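The pooling step of multiple imputation ("combined to obtain the overall estimates and standard errors") follows Rubin's rules, which can be illustrated with a minimal sketch. All data here are synthetic, and the hot-deck draw is a simple stand-in for the predictive mean matching used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic outcome with ~30% missingness; the true mean is 5.
n, m = 200, 6                       # m = 6 imputations, as in the abstract
y = rng.normal(5.0, 2.0, n)
observed = rng.random(n) > 0.3

estimates, within_vars = [], []
for _ in range(m):
    y_imp = y.copy()
    # hot-deck draw: fill each missing value with a random observed value
    y_imp[~observed] = rng.choice(y[observed], size=(~observed).sum())
    estimates.append(y_imp.mean())
    within_vars.append(y_imp.var(ddof=1) / n)   # variance of the mean

# Rubin's rules: total variance = within + (1 + 1/m) * between
q_bar = float(np.mean(estimates))
u_bar = float(np.mean(within_vars))
b = float(np.var(estimates, ddof=1))
total_var = u_bar + (1 + 1 / m) * b
```

The between-imputation term `b` inflates the standard error, which is how multiple imputation propagates the uncertainty that single imputation ignores.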
53

A TREE-STRUCTURED SURVIVAL MODEL WITH INCOMPLETE AND TIME-DEPENDENT COVARIATES: ILLUSTRATIONS USING TYPE 1 DIABETES DATA

Yu, Shui 15 February 2007 (has links)
A tree-structured recursive partitioning algorithm is adapted for censored survival analysis with incomplete and time-dependent covariates. The only assumptions required for this method are those that guarantee identifiability of the conditional distribution of the survival time given the covariates, providing broad applicability. The method also provides personalized prognosis. A conditional incremental imputation procedure, which does not depend on any model assumptions, is implemented to impute missing covariate values. These novel algorithms are applied to assess the role of islet-cell antibodies (ICAs) as predictive markers for Type 1 diabetes mellitus (T1DM) progression in a longitudinal study of 300 first-degree relatives (FDRs) who were consecutively enrolled between 1977 and 2001 from the Children's Hospital of Pittsburgh Registry. Results provide evidence that ICAs predict a more rapid progression to insulin-requiring diabetes in GAD65-positive relatives. A cross-validation study confirms the findings. ICAs are important markers of Type 1 diabetes, and whether their measurement should be completely replaced by biochemical markers detecting islet autoantibodies (AAs) for the prediction of T1DM has been the subject of ongoing debate. Our conclusion that ICAs should remain part of the assessment of T1DM risk is of great public health significance.
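The core tree-building step in survival trees of this kind, choosing the covariate cutpoint that maximizes a two-sample log-rank statistic between candidate children, can be sketched generically. This is not the thesis's actual algorithm; the simulated data, variable names, and lack of censoring are all illustrative simplifications.

```python
import numpy as np

def logrank_stat(time, event, group):
    """Two-sample log-rank chi-square statistic."""
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var if var > 0 else 0.0

def best_split(x, time, event):
    """Exhaustive cutpoint search: split where the log-rank statistic
    between the two candidate children is largest."""
    best_cut, best_stat = None, 0.0
    for c in np.unique(x)[:-1]:
        s = logrank_stat(time, event, (x > c).astype(int))
        if s > best_stat:
            best_cut, best_stat = c, s
    return best_cut, best_stat

# Toy data: survival is markedly shorter when x > 0 (no censoring here).
rng = np.random.default_rng(2)
x = rng.normal(size=200)
time = rng.exponential(np.where(x > 0.0, 0.3, 1.0))
event = np.ones(200, dtype=int)
cut, stat = best_split(x, time, event)
```

Recursive application of `best_split` to each child node, with a stopping rule, yields the tree; handling time-dependent and missing covariates (the thesis's contribution) requires machinery beyond this sketch.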
54

Experimental Design for Unbalanced Data Involving a Two-Level Logistic Model

Chen, Huanyu 21 June 2007 (has links)
The multilevel logistic model is used to analyze hierarchical data with binary outcomes, to detect variation both between and within clusters. I extended explicit variance formulae for a fixed effect in a two-level model for balanced binary data to account for imbalance both between and within clusters. The derivation of the variance is based on a linearization of the two-level logistic model using first-order marginal quasilikelihood (MQL1) estimation. In a simulation study, I used second-order penalized quasilikelihood (PQL2) estimation to corroborate the accuracy of the analytic variance formula, based on the observed racial distribution in a multi-center study of racial disparities. Using the site-specific racial distributions, I simulated the log odds ratio for black race that could be detected with 80% power. These methods are illustrated in the context of a multi-center study of racial disparities in 30-day mortality in the Veterans Affairs (VA) Healthcare System, where the racial distributions are dramatically unbalanced across the 149 sites. We also consider a subset of 42 sites that include a majority of the black hospitalizations. The same analytic variance is obtained when one has either equal numbers of observations per site and/or a constant proportion of black veterans across sites. The observed racial imbalance both within and across sites increases the variance of the race coefficient more in the random coefficient (RC) model than in the random intercept (RI) model. Compared to PQL2, the analytic variances using MQL1 are severely downwardly biased, with smaller variance components. The simulation variances are virtually identical to the analytic variances for these data. For a given power, somewhat smaller log odds ratios can be detected in the RI model than in the RC model. The derived formulas provide a basis for planning multi-center studies when a predictor of primary importance is highly imbalanced both between and within sites.
In studies of racial disparities in health care, the site-specific population distributions are often known from administrative data. The public health relevance of this work is that these methods for unbalanced data may facilitate more effective planning of multi-center studies of racial disparities.
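The planning use of such a variance formula can be sketched briefly: given an analytic variance for the race coefficient, the smallest log odds ratio detectable with 80% power by a two-sided Wald test follows from standard normal quantiles. The variance value 0.01 below is an arbitrary illustration, not a number from the study.

```python
from scipy.stats import norm

def detectable_log_or(var_beta, alpha=0.05, power=0.80):
    """Smallest log odds ratio detectable with the given power by a
    two-sided Wald test, given the variance of the coefficient that an
    analytic variance formula would supply."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * var_beta ** 0.5

# Illustrative only: Var(beta_hat) = 0.01 is an arbitrary value.
log_or = detectable_log_or(0.01)   # about 0.28
```

Because the detectable effect scales with the square root of the variance, the imbalance-driven variance inflation reported above translates directly into larger minimum detectable odds ratios.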
55

DEVELOPMENT AND COMPARISON OF DIFFERENT METHODS OF EVALUATING FREE-RESPONSE ROC SYSTEMS

Song, Tao 29 January 2009 (has links)
Receiver operating characteristic (ROC) analysis has been widely used to evaluate diagnostic systems since the 1970s. In diagnostic imaging, the decision task often requires the radiologist to locate the specific region of a subject that actually contains the abnormality. The free-response ROC (FROC) paradigm has become increasingly accepted for evaluating this type of diagnostic task. It entails detecting and marking the locations of all suspected abnormalities, as well as indicating a level of suspicion for the specific abnormality at each marked location. Several existing approaches to analyzing FROC data use the maximum rating to represent the multiple responses of a subject and then apply ROC-type analysis to summarize the diagnostic system's discriminative ability for a randomly selected pair of actually negative and actually positive subjects. This dissertation proposes and evaluates new measures of subject-based discriminative ability, considering approaches based on the average of multiple ratings and approaches based on stochastic ordering. Indices are also formulated by improving on the JAFROC-type indices in the literature, in order to summarize diagnostic performance with correct location information. We also propose new indices that reward the number of correct marks and penalize the number of incorrect marks on the subjects. Asymptotic procedures are developed to compare the discriminative ability of two FROC systems. These asymptotic approaches are then extended to the multi-reader setting, taking into account the correlation and heterogeneity between readers. We also apply three different approaches to fit a smooth FROC curve, namely the Box-Cox transformation, kernel smoothing, and kernel regression approaches.
The public health significance of this work lies in our efforts to improve the statistical tools for evaluating medical diagnostic devices, which can help in the development of more specific and affordable diagnostic methods. Our contribution to early diagnosis could improve the timely recognition of reportable diseases.
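The maximum-rating reduction described above can be sketched in a few lines: each subject's FROC marks collapse to the highest suspicion score (unmarked subjects get negative infinity), and the subject-level scores feed a Mann-Whitney AUC. The per-mark ratings below are invented for illustration.

```python
import numpy as np

def max_rating(marks):
    """Collapse a subject's FROC marks to its highest suspicion score;
    a subject with no marks gets -inf (lowest possible rating)."""
    return max(marks) if marks else float("-inf")

def auc(neg, pos):
    """Mann-Whitney estimate of P(pos > neg) + 0.5 * P(tie)."""
    neg, pos = np.asarray(neg, float), np.asarray(pos, float)
    return float((pos[:, None] > neg[None, :]).mean()
                 + 0.5 * (pos[:, None] == neg[None, :]).mean())

# Invented per-mark suspicion scores for 4 negative and 4 positive subjects.
negatives = [[1.0], [], [2.0, 1.5], [0.5]]
positives = [[3.0, 1.0], [2.5], [4.0], []]
a = auc([max_rating(m) for m in negatives],
        [max_rating(m) for m in positives])
```

Note how the reduction discards location information and all but one mark per subject, which is exactly the loss the dissertation's average-rating, stochastic-order, and location-aware indices aim to address.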
56

GEE Models for the Longitudinal Analysis of the Effects of Occupational Radiation Exposure on Lymphocyte Counts in Russian Nuclear Workers

Soaita, Adina Iulia 20 February 2007 (has links)
The health effects of occupational radiation exposure have long been a source of scientific and administrative debate related to setting exposure standards. Of particular relevance are the effects of long-term occupational radiation exposure on lymphocyte counts, which are especially sensitive to radiation. The trend of lymphocyte counts in radiation workers is of major importance, since decreases in lymphocyte counts may be precursors of immune disorders, cancer susceptibility, or other chronic conditions. Another important question is whether occupational radiation affects lymphocyte counts similarly in males and females, given the relative lack of information on the effects and health implications of long-term occupational radiation exposure in female subjects. This dissertation presents a comprehensive statistical analysis of the relationship between dosimetric (yearly gamma exposure) and hematological (lymphocyte count) data collected from a historical cohort (1948-1956) of highly exposed radiation workers at the Mayak Production Association in Russia. The analysis controls for important covariates, such as baseline lymphocyte counts, sex, work location related to plutonium exposure, lifestyle variables, and the number of years since first exposure. The analysis contrasts the most relevant radiation dose-response models using marginal models and the GEE technique. STATA programming tools were developed to check the assumptions required by the GEE technique, with special attention to the missing data mechanisms and patterns in the framework of a longitudinal study with repeated measurements and unbalanced numbers of observations. The issue of non-linearity between the outcome variable and the explanatory covariates is addressed by implementing linear splines within the GEE models.
Statistical analyses indicate: (a) that a linear radiation dose-response model is appropriate for the data, (b) a statistically significant negative relationship between the log-transformed lymphocyte counts and the log-transformed external gamma dose, and (c) no statistically significant differences between males and females in the effect of occupational radiation exposure on lymphocyte counts. The public health significance of this research is that: (a) the linear radiation dose-response model is reasonable for regulatory purposes, and (b) males and females do not require differential regulatory standards for low-dose occupational radiation exposure.
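The linear-spline device used to handle non-linearity can be illustrated with a small sketch: each knot contributes a hinge term max(x - k, 0), letting the fitted slope change at the knot while the curve stays continuous. The knot location below is arbitrary; in practice the resulting design matrix would be passed to a GEE fitting routine alongside the other covariates.

```python
import numpy as np

def linear_spline_basis(x, knots):
    """Design columns for a piecewise-linear dose term: x itself plus one
    hinge max(x - k, 0) per knot, so the fitted slope may change at each
    knot while the fitted curve remains continuous."""
    x = np.asarray(x, float)
    return np.column_stack([x] + [np.maximum(x - k, 0.0) for k in knots])

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
basis = linear_spline_basis(x, knots=[1.0])   # arbitrary illustrative knot
```

The coefficient on a hinge column is then interpreted as the change in slope after the corresponding knot, which keeps the dose-response fit readable for regulatory audiences.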
57

Detecting Outliers and Influential Data Points in Receiver Operating Characteristic (ROC) Analysis

Klym, Amy H 28 June 2007 (has links)
Receiver operating characteristic (ROC) studies and analyses are often used to evaluate medical tests and are very useful in the field of radiology to evaluate a single diagnostic imaging system, to compare the accuracy of two or more diagnostic imaging systems, or to assess observer performance. There have been many refinements in the development of different ROC-type study designs and the corresponding statistical analysis. These methods have become increasingly important, and ROC methods are the principal approach for evaluating imaging technologies and/or observer performances. The systems that are often evaluated using ROC methodology include digital and radiographic images of the chest and breast. An improved method of evaluating diagnostic imaging systems contributes to the development of better diagnostic methods; hence, improving imaging systems for diagnoses of breast and lung cancer would have major public health significance. In our work with observer performance studies, in which ROC analysis is used, we have noted that some contributions of readers and cases can substantially alter the conclusions of the analysis. To the best of our knowledge, to date there is no statistical test cited in the statistical literature that addresses the detection and influence of outliers on the estimate of the area under the ROC curve. Evaluating outliers may be especially important for the ROC model, since subtle (difficult) cases have the potential to be missed by a reader (e.g., a difficult positive case is rated as an unquestionably negative case) and can have a considerable influence on the estimated area under the ROC curve, especially if the study has a small set of cases. Therefore, we believe it is important to develop a method for detecting and measuring the influence of outliers for ROC models. The development of this method will involve deriving a test statistic for outliers based on the jackknife influence values and conducting a preliminary validation of the test.
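A jackknife-influence computation of the kind proposed can be sketched as follows: leave each case out in turn, recompute the empirical AUC, and scale the change. The ratings are invented, and for brevity only positive cases are jackknifed; the deliberately low-rated positive case emerges with the largest influence.

```python
import numpy as np

def auc(neg, pos):
    """Empirical AUC: P(pos > neg) + 0.5 * P(tie)."""
    return float((pos[:, None] > neg[None, :]).mean()
                 + 0.5 * (pos[:, None] == neg[None, :]).mean())

def jackknife_influence(neg, pos):
    """Leave-one-out jackknife influence of each positive case on the
    empirical AUC (negative cases could be treated symmetrically)."""
    full = auc(neg, pos)
    infl = np.array([(len(pos) - 1) * (full - auc(neg, np.delete(pos, i)))
                     for i in range(len(pos))])
    return full, infl

# Invented ratings; the positive case rated 0.5 mimics a difficult
# positive scored as unquestionably negative.
neg = np.array([1.0, 2.0, 2.5, 3.0])
pos = np.array([2.0, 3.5, 4.0, 0.5])
full, infl = jackknife_influence(neg, pos)
outlier = int(np.argmax(np.abs(infl)))
```

Turning such influence values into a formal outlier test statistic is precisely the development the abstract describes.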
58

INFERENCE ON SURVIVAL DATA UNDER NONPROPORTIONAL HAZARDS

Xu, Qing 21 June 2007 (has links)
The objective of this research is to develop optimal (efficient) test methods for the analysis of survival data under random censorship with nonproportional hazards. In the first part we revisit the weighted log-rank test, where the weight function was derived by assuming an inverse Gaussian distribution for an omitted exponentiated covariate that induces nonproportionality under the proportional hazards model. We perform a simulation study to compare the new procedure with ones using other popular weight functions, including members of the Harrington-Fleming G-rho family. The nonproportional hazards data are generated by changing the hazard ratios over time under the proportional hazards model. The results indicate that the inverse Gaussian-based test tends to have higher power than some members of the G-rho family in detecting a difference between two survival distributions when the populations become homogeneous as time progresses. The second part of the research develops a parametric method for testing the validity of the proportional odds assumption between two groups of survival data. The research is based on the premise that a test procedure that takes advantage of distributional information about the data will improve on the sensitivity of a nonparametric test method. We evaluate the type I error and power of the new parametric test using simulated survival data following the log-logistic distribution, and compare the error probabilities with those in the literature. The results indicate that the extended test is more sensitive than the existing nonparametric method. The proposed study provides statistical test methods that are more sensitive than existing ones in certain situations and that can be used in public health applications such as clinical trials.
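The G-rho weighting idea can be sketched generically: the Fleming-Harrington weighted log-rank statistic weights each event time by the pooled Kaplan-Meier survival S(t-)**rho, so rho > 0 emphasizes early differences, and rho = 0 recovers the ordinary log-rank test. The inverse Gaussian-derived weight from the thesis is not reproduced here; the two-group data are simulated for illustration only.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=1.0):
    """Fleming-Harrington G-rho weighted log-rank chi-square: each event
    time is weighted by S(t-)**rho, with S the pooled Kaplan-Meier
    estimate (rho = 0 gives the ordinary log-rank test)."""
    s_left = 1.0                            # pooled KM just before t
    num, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):   # ascending event times
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = s_left ** rho
        num += w * (d1 - d * n1 / n)
        if n > 1:
            var += w ** 2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s_left *= 1 - d / n
    return num ** 2 / var if var > 0 else 0.0

# Toy two-group data with a genuine survival difference.
rng = np.random.default_rng(3)
time = np.concatenate([rng.exponential(1.0, 100), rng.exponential(0.5, 100)])
event = np.ones(200, dtype=int)
group = np.repeat([0, 1], 100)
plain = fh_weighted_logrank(time, event, group, rho=0.0)
early = fh_weighted_logrank(time, event, group, rho=1.0)
```

Choosing the weight function to match the suspected pattern of nonproportionality is the design question the thesis's inverse Gaussian derivation addresses.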
59

A COMPARISON OF PRINCIPAL COMPONENT ANALYSIS AND FACTOR ANALYSIS FOR QUANTITATIVE PHENOTYPES ON FAMILY DATA

Wang, Xiaojing 28 June 2007 (has links)
Background: Multivariate analysis methods, especially principal component analysis (PCA) and factor analysis (FA), are effective means of uncovering the common factors (both genetic and environmental) that contribute to complex disease phenotypes, such as bone mineral density for osteoporosis. Although PCA and FA are widely used for this purpose, a formal evaluation of the performance of these two multivariate methodologies is lacking. Method: We conducted a comparison analysis using simulated data on 500 individuals from 250 nuclear families. We first simulated 7 underlying (unobserved) genetically and environmentally determined traits. Then we derived two sets of 50 complex (observed) traits using algebraic combinations of the underlying components plus an error term. We next performed PCA and FA on these complex traits and extracted the first factor/principal component. We studied three aspects of the performance of the methods: 1) the ability to detect the underlying genetic/environmental components; 2) whether the methods worked better when applied to raw traits or to residuals (that is, after regressing out potentially significant environmental covariates); and 3) whether heritabilities of composite PCA and FA phenotypes were higher than those of the original complex traits and/or underlying components. Results: Our results indicated that both multivariate analysis methods behave similarly in most cases, although FA is better able to detect predominant signals from the underlying traits, which may improve downstream QTL analysis. Using residuals (after regressing out potentially significant environmental covariates) in the PCA or FA analyses greatly increases the probability that PCs or factors detect common genetic components instead of common environmental factors, except when there is statistical interaction between genetic and environmental factors.
Finally, although there is no predictable relationship between heritabilities obtained from composite phenotypes versus original complex traits, our results indicate that composite trait heritability generally reflects the genetic characteristics of the detectable underlying components. Public health significance: Understanding the strengths and weaknesses of multivariate analysis methods for detecting underlying genetic and environmental factors for complex diseases will improve our identification of such factors, and this information may lead to better methods of treatment and prevention.
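The first-principal-component extraction at the heart of the comparison can be sketched with a toy version of the simulation: one hypothetical underlying factor drives many observed traits, and PC1 of the centered trait matrix largely recovers it. The sample sizes, loadings, and noise level below are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy setup: one hypothetical underlying factor drives 20 observed
# traits, each with its own loading plus independent noise.
n, p = 500, 20
factor = rng.normal(size=n)                    # unobserved component
loadings = rng.uniform(0.5, 1.0, size=p)
traits = factor[:, None] * loadings + rng.normal(scale=0.5, size=(n, p))

# First principal component via SVD of the centered trait matrix.
centered = traits - traits.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# The composite phenotype should correlate strongly with the factor.
r = abs(float(np.corrcoef(pc1, factor)[0, 1]))
```

With several underlying components rather than one, PC1 mixes them according to their shared variance, which is where FA's rotation toward interpretable factors can give it the edge reported above.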
60

NEW TEST STATISTIC FOR COMPARING MEDIANS WITH INCOMPLETE PAIRED DATA

Tang, Xinyu 28 June 2007 (has links)
This paper is concerned with nonparametric methods for comparing medians of paired data with unpaired values on both responses. A new nonparametric test statistic based on the Mann-Whitney U test is proposed, making comparisons across both complete and incomplete pairs. A method of finding the null distribution of this statistic is presented using a permutation approach. A Monte Carlo simulation study is described to compare power among four existing nonparametric test statistics and the new test statistic. It is concluded that the new test statistic is fairly powerful for this kind of data compared to the other four test statistics. Finally, all five test statistics are applied to a real dataset to compare the proportions of certain T cell receptor gene families in a cancer study. The introduction of this new nonparametric test statistic is of public health importance because it provides a powerful statistical method for dealing with a pattern of missing data that may be encountered in clinical and public health research.
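One way such a combined statistic could work is sketched below: a Mann-Whitney U over the complete pairs plus a U over the unpaired values, with a permutation null that swaps members within pairs and shuffles the unpaired pool. This is a hypothetical combination written for illustration, not the statistic actually proposed in the paper; the data are synthetic.

```python
import numpy as np

def u_stat(x, y):
    """Mann-Whitney U: count of x > y pairs, ties counted one half."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(((x[:, None] > y[None, :])
                  + 0.5 * (x[:, None] == y[None, :])).sum())

def combined_perm_test(pairs, x_only, y_only, n_perm=2000, seed=0):
    """One-sided permutation p-value for a hypothetical combined
    statistic: U over the complete pairs plus U over the unpaired
    samples.  The null swaps within pairs and shuffles the pool."""
    rng = np.random.default_rng(seed)
    pairs = np.asarray(pairs, float)
    pool = np.concatenate([x_only, y_only])
    n_x = len(x_only)

    def stat(p, xo, yo):
        return u_stat(p[:, 0], p[:, 1]) + u_stat(xo, yo)

    observed = stat(pairs, x_only, y_only)
    hits = 0
    for _ in range(n_perm):
        swap = rng.random(len(pairs)) < 0.5
        perm = pairs.copy()
        perm[swap] = perm[swap][:, ::-1]     # swap within selected pairs
        mixed = rng.permutation(pool)        # shuffle the unpaired pool
        if stat(perm, mixed[:n_x], mixed[n_x:]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Synthetic data with a genuine upward shift in the first response.
rng = np.random.default_rng(5)
pairs = np.column_stack([rng.normal(1.5, 1, 15), rng.normal(0, 1, 15)])
p_value = combined_perm_test(pairs, rng.normal(1.5, 1, 10),
                             rng.normal(0, 1, 10))
```

Permuting each piece under its own exchangeability structure is what lets one null distribution cover both the paired and unpaired parts of the data.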
