About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Informative Random Censoring in Parametric Survival Models

Li, Weihong
Survival data with informative random censoring are often seen in clinical trials. However, the methodology for dealing with this kind of data is not well developed, owing to the difficulty of identifying the censoring information. Several methods have been proposed, for example by \citet{Sia1}. We use simulation studies to investigate the sensitivity of these methods and show that the maximum likelihood estimation (MLE) method provides narrower confidence intervals than the method of \citet{Sia1}; this is expected under the same assumptions as in \citet{Sia1}. In addition, we give practical guidelines on how to approximate the missing information about random censoring, and we state conditions for obtaining more precise estimators in survival data analyses, providing a user-friendly R program. Two real-life data sets are used to illustrate the application of this methodology. / Biostatistics
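As a minimal illustration of the problem studied here (an invented toy setup, not the thesis's R program), the sketch below simulates exponential survival times with censoring that depends on each subject's own event time, and shows that the standard exponential MLE, which assumes independent censoring, is biased; the rate, sample size, and dependence mechanism are arbitrary choices.

```r
## Illustrative sketch, not the thesis's program: bias of the standard
## exponential MLE when censoring is informative.
set.seed(1)
lambda <- 0.5      # true event rate (true mean survival = 2)
n      <- 200      # subjects per simulated trial
nsim   <- 2000     # simulated trials

naive_mle <- replicate(nsim, {
  t_event <- rexp(n, rate = lambda)
  ## Informative censoring: each censoring time is proportional to the
  ## subject's own (unobserved) event time, violating independence.
  c_cens  <- 0.5 * t_event / runif(n)
  x       <- pmin(t_event, c_cens)
  delta   <- as.numeric(t_event <= c_cens)
  sum(delta) / sum(x)  # standard MLE, valid only under independent censoring
})

mean(naive_mle)  # around 0.30 here, well below the true rate of 0.5
```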
2

Impact of Informative Censoring on Statistics Used in the Validation of Surrogate Endpoints in Oncology

Liu, Yumeng January 2015
In the past few years, biomarkers such as progression-free survival (PFS) and time to progression (TTP) have been increasingly used as surrogate endpoints for overall survival (OS) in oncology clinical trials. An issue arises when a clinical trial that demonstrated a statistically significant treatment effect on the surrogate marker shows no significant effect on the true outcome of interest, OS. It is possible that this lack of concordant results is due to informative censoring. Although it is known that informative censoring may bias the observed results, it is not clear what impact informative censoring has on the surrogacy of one marker in relation to a true outcome. In this thesis, we investigated how informative censoring could affect the results for a surrogate endpoint, and how that would affect the surrogacy of the endpoint. A simulation study was conducted to evaluate the impact of informative censoring on the treatment effect on TTP and on the outcomes of the surrogate-validation measures: the relative effect (RE), the surrogate threshold effect (STE), and the difference between the treatment effect on TTP and on OS (IRE). The results of the simulation showed that informative censoring for TTP will indeed bias the treatment effect on TTP as well as the results of the validation measures RE, STE, and IRE. Hence, we conclude that informative censoring can greatly influence the ability to validate a surrogate marker, and can additionally bias the ability to determine the efficacy of a new therapy from a clinical trial using a surrogate marker as the primary outcome. / Thesis / Master of Science (MSc)
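The direction of such bias is easy to demonstrate in a stripped-down, single-arm toy example (invented numbers, not the thesis's simulation design): when patients drop out shortly before progression, the Kaplan-Meier estimate of median TTP becomes too optimistic.

```r
## Sketch only: informative dropout just before progression makes the
## Kaplan-Meier curve for TTP too optimistic. All rates are arbitrary.
library(survival)
set.seed(2)
n      <- 2000
ttp    <- rexp(n, rate = 0.1)        # true median TTP = log(2)/0.1, about 6.9
drop   <- rbinom(n, 1, 0.4) == 1     # 40% drop out informatively
cens   <- ifelse(drop, ttp * runif(n, 0.6, 0.95), Inf)  # dropout shortly before progression
time   <- pmin(ttp, cens, 24)        # 24-month administrative cutoff
status <- as.numeric(ttp <= pmin(cens, 24))

fit <- survfit(Surv(time, status) ~ 1)
## The KM median exceeds the true median of ~6.9 because subjects censored
## by informative dropout were, in truth, about to progress.
summary(fit)$table["median"]
```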
3

Addressing censoring issues in estimating the serial interval for tuberculosis

Ma, Yicheng 13 November 2019
The serial interval (SI), defined as the time between symptom onset in an infector and symptom onset in the corresponding infectee, is widely used to better understand the transmission patterns of an infectious disease. Estimating the SI for tuberculosis (TB) is complicated by the slow progression from asymptomatic infection to active, symptomatic disease, and by the fact that there is only a 5-10% lifetime risk of developing active TB disease. Furthermore, the time of symptom onset for infectors and infectees is rarely observed accurately. In this dissertation, we first conduct a systematic literature review to demonstrate the limited methods currently available for estimating the serial interval for TB, as well as the few estimates that have been published. Secondly, under an ideal scenario in which all SIs are observed with precision, we evaluate the effect of prior information on estimating the SI in a Bayesian framework. Thirdly, we apply cure models, proposed by Boag in 1949, to estimate the SI for TB in a Bayesian framework. We show that the cure models perform better in the presence of credible prior information on the proportion of the study population that develops active TB disease, and should be chosen over traditional survival models, which assume that all of the study population will eventually have the event of interest, active TB disease. Next, we modify the method of Reich et al. (2009) by using a Riemann sum to approximate the likelihood function, which involves a double integral. In doing so, we are able to reduce the computing time of the approximation method by around 50%. We are also able to relax the assumption of uniformity on the censoring intervals: when using weights consistent with the underlying skewness of the intervals, the proposed approaches consistently produce more accurate estimates than the existing approaches. We provide SI estimates for TB using empirical datasets from Brazil and the USA/Canada.
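The Riemann-sum device can be sketched generically (placeholder notation and an invented gamma serial-interval density, not the dissertation's implementation): each transmission pair contributes a double integral of the SI density over the two onset windows, approximated by a weighted sum on a grid; uniform weights correspond to uniform onset-time distributions, and skewed weights relax that assumption.

```r
## Sketch: one pair's likelihood contribution when the infector's onset S1
## lies in [a1, b1] and the infectee's onset S2 lies in [a2, b2], with SI
## density f. With uniform weights the Riemann sum approximates
## E[f(S2 - S1)] under uniform onset-time distributions.
pair_lik <- function(a1, b1, a2, b2, f, m = 50,
                     w1 = rep(1 / m, m), w2 = rep(1 / m, m)) {
  s1 <- seq(a1, b1, length.out = m)   # grid over the infector's onset window
  s2 <- seq(a2, b2, length.out = m)   # grid over the infectee's onset window
  ## sum_{i,j} w1[i] * w2[j] * f(s2[j] - s1[i]), built on an m-by-m grid
  sum(outer(w1, w2) * f(outer(s1, s2, function(x, y) y - x)))
}

## Example with an invented gamma serial-interval density (dgamma returns 0
## for negative differences, i.e. pairs whose onset order is uncertain).
f_si <- function(d) dgamma(d, shape = 2, rate = 0.05)
pair_lik(0, 30, 20, 80, f_si)
```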
4

Gore Classification and Censoring in Images

Larocque, William 30 November 2021
With the large amount of content posted on the Internet every day, moderators, investigators, and analysts can be exposed to hateful, pornographic, or graphic content as part of their work. Exposure to this kind of content can have a severe impact on the mental health of these individuals, so measures must be taken to lessen their mental health burden. Significant effort has been made to find and censor pornographic content; gore has not been researched to the same extent, and research in this domain has focused on protecting the public from seeing graphic content in images, movies, or online videos. However, these solutions do little to flag such content for employees who must review it as part of their work. In this thesis, we aim to address this problem by creating a full image processing pipeline to find and censor gore in images. This involves creating a dataset, as none is publicly available, and training and testing different machine learning solutions to automatically censor gore content. We propose an image processing pipeline consisting of two models: a classification model that determines whether the image contains gore, and a segmentation model that censors the gore in the image. The classification results can be used to reduce accidental exposure to gore, for example by blurring the image in search results. They can also be used to reduce processing time and storage space by ensuring the segmentation model does not generate a censored image for every image submitted to the pipeline. Both models use pretrained Convolutional Neural Network (CNN) architectures and weights as part of their design and are fine-tuned to maximize performance on the small datasets we gathered for these two tasks: the segmentation dataset contains 737 training images, while the classification dataset contains 3830 images. We explored variations on the proposed models inspired by existing solutions in similar domains, such as pornographic content detection and censoring and medical wound segmentation. These variations include Multiple Instance Learning (MIL), Generative Adversarial Networks (GANs), and Mask R-CNN. The best classification model we trained is a voting ensemble that combines the results of 4 classification models. This model achieved a 91.92% Double F1-Score, 87.30% precision, and 90.66% recall on the testing set. Our highest-performing segmentation model achieved a testing Intersection over Union (IoU) of 56.75%; however, when we employed the proposed pipeline (classification followed by segmentation), we achieved a testing IoU of 69.95%.
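The gating idea can be sketched as plain control flow (the stub functions below are hypothetical stand-ins for the thesis's fine-tuned CNN models):

```r
## Conceptual sketch of the two-stage pipeline; both stubs are hypothetical.
classify_gore <- function(img) runif(1)   # stub: returns P(image contains gore)
segment_gore  <- function(img) img        # stub: would return the censored image

process_image <- function(img, threshold = 0.5) {
  p <- classify_gore(img)
  if (p < threshold) {
    ## No gore detected: skip segmentation, saving compute and storage.
    list(flagged = FALSE, output = img)
  } else {
    ## Only flagged images pay the cost of pixel-level censoring.
    list(flagged = TRUE, output = segment_gore(img))
  }
}

result <- process_image(img = "example.jpg")  # placeholder input
```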
5

Building Prediction Models for Dementia: The Need to Account for Interval Censoring and the Competing Risk of Death

Marchetti, Arika L.
Indiana University-Purdue University Indianapolis (IUPUI) / Context. Prediction models for dementia are crucial for informing clinical decision making in older adults. Previous models have used genotype and age to obtain risk scores for Alzheimer's Disease, one of the most common forms of dementia (Desikan et al., 2017). However, previous prediction models do not account for the fact that the time of dementia onset is unknown, lying between the last negative and the first positive dementia diagnosis (interval censoring). Instead, these models use time to diagnosis, which is greater than or equal to the true dementia onset time. Furthermore, these models do not account for the competing risk of death, which is quite frequent among older adults. Objectives. To develop a prediction model for dementia that accounts for interval censoring and the competing risk of death, and to compare the predictions from this model with those from a naïve analysis that ignores both. Methods. We apply the semiparametric sieve maximum likelihood (SML) approach to simultaneously model the cumulative incidence function (CIF) of dementia and death while accounting for interval censoring (Bakoyannis, Yu, & Yiannoutsos, 2017). The SML is implemented using the R package intccr. The CIF curves of dementia from the SML and the naïve approach are compared using a dataset from the Indianapolis-Ibadan Dementia Project. Results. For healthier individuals at baseline, the naïve approach underestimated the incidence of dementia compared to the SML, as a result of interval censoring. Individuals in poorer health at baseline have a CIF that is overestimated by the naïve approach, because older individuals in poor health have an elevated risk of death. Conclusions. The SML method, which accounts for the competing risk of death along with interval censoring, should be used for fitting prediction/prognostic models of dementia to inform clinical decision making in older adults. Without controlling for the competing risk of death and interval censoring, current models can provide invalid predictions of the CIF of dementia.
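The competing-risk half of this argument is reproducible with a small simulation (invented rates, not the study data): treating deaths as independent censoring and taking 1 minus the Kaplan-Meier estimate overstates the cumulative incidence of dementia.

```r
## Sketch with simulated data: with death as a competing risk, the naive
## 1 - Kaplan-Meier curve that censors deaths overstates the cumulative
## incidence of dementia. Rates below are arbitrary.
library(survival)
set.seed(3)
n          <- 5000
t_dementia <- rexp(n, rate = 0.04)
t_death    <- rexp(n, rate = 0.08)                 # frail population: death is common
time       <- pmin(t_dementia, t_death)
cause      <- ifelse(t_dementia <= t_death, 1, 2)  # 1 = dementia, 2 = death

## "True" CIF of dementia at t = 10 (empirical, using the latent times):
mean(time <= 10 & cause == 1)

## Naive approach: treat deaths as independent censoring.
km <- survfit(Surv(time, cause == 1) ~ 1)
1 - summary(km, times = 10)$surv                   # exceeds the true CIF
```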
6

Bayesian Cox Models for Interval-Censored Survival Data

Zhang, Yue January 2016
No description available.
7

Exact likelihood inference for multiple exponential populations under joint censoring

Su, Feng
The joint censoring scheme is of practical significance when conducting comparative life-tests of products from different units within the same facility. In this thesis, we derive the exact distributions of the maximum likelihood estimators (MLEs) of the unknown parameters when joint censoring of some form is present among the multiple samples, and then discuss the construction of exact confidence intervals for the parameters. We develop inferential methods based on four different joint censoring schemes. The first is when a jointly Type-II censored sample arising from k independent exponential populations is available. The second is when a jointly progressively Type-II censored sample is available, while the last two cases correspond to jointly Type-I hybrid censored and jointly Type-II hybrid censored samples. For each of these cases, we derive the conditional MLEs of the k exponential mean parameters, derive their conditional moment generating functions and exact densities, and use these to develop exact confidence intervals for the k population parameters. Furthermore, approximate confidence intervals based on the asymptotic normality of the MLEs, parametric bootstrap intervals, and credible confidence regions from a Bayesian viewpoint are all discussed. An empirical evaluation of all these methods of confidence intervals is also made in terms of coverage probabilities and average widths. Finally, we present examples to illustrate all the methods of inference developed here for different joint censoring scenarios. / Doctor of Science (PhD)
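For intuition, the classical single-sample analogue is easy to state (one exponential population under ordinary Type-II censoring, not the joint schemes derived in the thesis): the MLE of the mean is the total time on test divided by the number of failures r, and the pivot 2r·(MLE)/theta follows a chi-square distribution with 2r degrees of freedom, which yields an exact confidence interval.

```r
## Sketch of the simpler single-sample analogue, not the joint schemes:
## exact CI for an exponential mean under Type-II censoring, from the
## pivot 2*r*theta_hat/theta ~ chi-square with 2r degrees of freedom.
exact_ci_exp_type2 <- function(t_sorted, n, r, level = 0.95) {
  ## t_sorted: the r smallest observed failure times, sorted increasingly.
  theta_hat <- (sum(t_sorted) + (n - r) * t_sorted[r]) / r  # MLE of the mean
  a <- (1 - level) / 2
  c(lower = 2 * r * theta_hat / qchisq(1 - a, df = 2 * r),
    upper = 2 * r * theta_hat / qchisq(a,     df = 2 * r),
    mle   = theta_hat)
}

## Example: n = 20 units on test, stopped at the r = 8th failure.
set.seed(4)
t_fail <- sort(rexp(20, rate = 1 / 10))[1:8]   # true mean is 10
exact_ci_exp_type2(t_fail, n = 20, r = 8)
```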
8

Censoring and Fusion in Non-linear Distributed Tracking Systems with Application to 2D Radar

Conte, Armond S, II 01 January 2015
The objective of this research is to study various methods for censoring state-estimate updates generated from radar measurements. State estimates generated from 2-D radar data are sent to a fusion center, with the J-Divergence metric used to assess the quality of the data. Three distributed sensor network architectures, incorporating different levels of feedback, are considered. The Extended Kalman Filter (EKF) and the Gaussian Particle Filter (GPF) were used to test the censoring methods in scenarios that vary in their degree of non-linearity. A derivation for the direct calculation of the J-Divergence using a particle filter is provided. Results show that state-estimate updates can be censored using the J-Divergence as a metric controlled via feedback, with higher J-Divergence thresholds leading to a larger covariance at the fusion center.
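A scalar Gaussian stand-in illustrates the censoring rule (a sketch under simplifying assumptions, not the thesis's EKF/GPF implementation): compute the J-Divergence, i.e. the symmetrized Kullback-Leibler divergence, between the predicted and updated densities, and transmit the update only when it exceeds a threshold that the fusion center could tune via feedback.

```r
## Sketch with scalar Gaussian densities. j_div is the symmetrized KL
## divergence between two univariate normals, in closed form (the log
## terms of the two KL directions cancel).
j_div <- function(mu_p, var_p, mu_q, var_q) {
  d2 <- (mu_p - mu_q)^2
  (var_p + d2) / (2 * var_q) + (var_q + d2) / (2 * var_p) - 1
}

## Transmit a local update only if it is sufficiently informative relative
## to the prediction; the threshold would be set via fusion-center feedback.
censor_update <- function(prior, posterior, threshold = 0.5) {
  j <- j_div(prior$mu, prior$var, posterior$mu, posterior$var)
  list(send = j >= threshold, j_divergence = j)
}

censor_update(prior = list(mu = 0, var = 4), posterior = list(mu = 1.5, var = 1))
```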
9

Estimation of an upper tolerance limit for small-samples containing observations below the limit of quantitation

Yan, Donglin
Master of Science / Department of Statistics / Christopher I. Vahl / Chemicals and drugs applied to animals used in meat production often have the potential to cause adverse effects in human consumers. To ensure safety, a withdrawal period, i.e. the minimum time allowed between application of the drug and entry of the animal into the food supply, must be determined for each drug used on food-producing animals. The withdrawal period is based on an upper tolerance limit at a given time point. It is not uncommon for the concentration of the drug in some tissue samples to be measured at a level below the limit of quantitation (LOQ). Because the tissue concentration cannot then be determined with enough precision, such observations are often treated as left censored, with the censoring value equal to the LOQ. Several methods are commonly used in practice to deal with this situation. The simplest are either to exclude observations below the LOQ or to replace them with zero, the LOQ, or ½ LOQ. Previous studies have shown that these methods result in biased estimation of the population mean and variance. Alternatively, one can incorporate the censoring into the likelihood and compute maximum likelihood estimates (MLEs) of the population mean and variance assuming a normal or lognormal distribution. These estimates are also biased, but they have been shown to be asymptotically unbiased. However, it is not yet clear how these various methods affect estimation of the upper tolerance limit, especially when the sample size is small, e.g. less than 35. In this report, we examine through simulation the effects of substituting the LOQ or ½ LOQ for censored values, as well as of using the MLEs of the mean and variance, in the construction of an upper tolerance limit for a normal population. Additionally, we propose a modified substitution method in which observations below the LOQ are replaced by functions of the order statistics of the non-censored observations under an assumption of symmetry; its performance relative to the above methods is also evaluated in the simulation study. Finally, the results of this study are applied to an environmental study.
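A compact sketch of the censored-likelihood approach (assumed normal model with an arbitrary LOQ and sample size; plugging censored-data MLEs into the complete-sample tolerance factor is itself an approximation of the kind the report evaluates by simulation):

```r
## Sketch: censored MLE of the mean and SD with values below the LOQ, plus
## the usual one-sided upper tolerance limit from the noncentral t.
set.seed(5)
loq  <- 2
x    <- rnorm(30, mean = 3, sd = 1)
obs  <- pmax(x, loq)                 # values below LOQ are only known to be < LOQ
cens <- x < loq

## Log-likelihood: normal density for quantified values, normal CDF mass
## for the left-censored ones.
negll <- function(par) {
  mu <- par[1]; sigma <- exp(par[2])   # log scale keeps sigma positive
  -sum(ifelse(cens,
              pnorm(loq, mu, sigma, log.p = TRUE),
              dnorm(obs, mu, sigma, log = TRUE)))
}
fit   <- optim(c(mean(obs), log(sd(obs))), negll)
mu    <- fit$par[1]; sigma <- exp(fit$par[2])

## Upper limit covering 95% of the population with 95% confidence; the k
## factor below is exact for the complete-sample mean and SD, so its use
## with censored-data MLEs is an approximation.
n <- length(x)
k <- qt(0.95, df = n - 1, ncp = qnorm(0.95) * sqrt(n)) / sqrt(n)
mu + k * sigma
```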
