1

Optimal sample size allocation for multi-level stress testing with extreme value regression under type-I censoring.

January 2012 (has links)
In multi-group life-testing experiments, it is essential to optimize the allocation of the test items to the different stress levels in order to estimate the model parameters accurately. Recently, Ng, Chan and Balakrishnan (2006) developed the optimal allocation for the complete-sample case under the extreme value regression model, and Ka, Chan, Ng and Balakrishnan (2011) studied the optimal allocation for Type-II censoring under the same model. The optimal allocation under a Type-I censoring scheme has not been established, so this thesis investigates the optimal allocation when Type-I censoring is adopted in the life-testing experiment. Maximum likelihood estimation is used to estimate the model parameters, and the inverted Fisher information matrix (the asymptotic variance-covariance matrix) I⁻¹ is derived and used to measure the accuracy of the estimated parameters. The optimal allocation is determined under three criteria: (1) maximizing the determinant of the expected Fisher information matrix; (2) minimizing the variance of the estimator of ν₁, var(ν̂₁) (V-optimality); (3) minimizing the trace of the asymptotic variance-covariance matrix, tr(I⁻¹) (A-optimality). Optimal allocation under the exponential regression model, a special case of the extreme value regression model, is also discussed.

So, Hon Yiu. / Thesis (M.Phil.)--Chinese University of Hong Kong, 2012. / Includes bibliographical references (leaves 46-48). / Abstracts also in Chinese.
Contents:
Abstract --- p.i
Acknowledgement --- p.i
Chapter 1: Introduction --- p.1
1.1 Accelerated Life Test --- p.1
1.2 Life-Stress Relationship --- p.1
1.3 Type I Censoring --- p.3
1.4 Optimal Allocation --- p.3
1.5 The Scope of the Thesis --- p.4
Chapter 2: Extreme Value Regression Model --- p.5
2.1 Introduction --- p.5
2.2 Model and Maximum Likelihood Estimation --- p.5
2.3 Expected Fisher Information --- p.8
Chapter 3: Criteria for Optimization and the Optimal Allocation --- p.12
3.1 Introduction --- p.12
3.2 Criteria for Optimization --- p.12
3.3 Numerical Illustrations and the Optimal Allocation --- p.14
Chapter 4: Sensitivity Analysis --- p.17
4.1 Introduction --- p.17
4.2 Sensitivity Analysis --- p.17
4.3 Numerical Illustrations --- p.19
4.3.1 Illustration with McCool (1980) Data --- p.19
4.3.2 Further Study --- p.21
Chapter 5: Exponential Regression Estimation --- p.26
5.1 Introduction --- p.26
5.2 The Model and the Likelihood Inference --- p.27
5.3 Optimal Sample Size Allocation for Estimation of Model Parameters --- p.30
5.4 Numerical Illustration --- p.33
5.5 Sensitivity Analysis --- p.35
5.5.1 Parameter Misspecification --- p.35
5.5.2 Censoring Time --- p.38
5.5.3 Further Study --- p.40
Chapter 6: Conclusion and Further Research --- p.44
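The three optimality criteria in the abstract above can be made concrete with a small numerical sketch. The Python fragment below assumes a hypothetical two-parameter model with made-up per-item expected Fisher information matrices at two stress levels (the thesis derives these matrices from the extreme value regression model under Type-I censoring) and searches over allocations of N = 20 items for the determinant-, V- and A-optimal splits; it is an illustration of the criteria, not the thesis's computation.

```python
import numpy as np

# Hypothetical per-item expected Fisher information at each of two stress
# levels (two parameters).  In the thesis these matrices follow from the
# extreme value regression model under Type-I censoring; the numbers below
# are made up for illustration only.
I_per_item = [
    np.array([[0.80, 0.30], [0.30, 0.50]]),   # stress level 1
    np.array([[0.60, 0.10], [0.10, 0.90]]),   # stress level 2
]

def criteria(allocation):
    """Return (det I, var of the slope estimator, trace of I^{-1})."""
    info = sum(n * I for n, I in zip(allocation, I_per_item))
    cov = np.linalg.inv(info)                 # asymptotic variance-covariance matrix
    return np.linalg.det(info), cov[1, 1], np.trace(cov)

N = 20                                        # total number of test items
best = {"det": None, "V": None, "A": None}
for n1 in range(1, N):
    alloc = (n1, N - n1)
    d, v, a = criteria(alloc)
    if best["det"] is None or d > best["det"][1]:
        best["det"] = (alloc, d)              # maximize the determinant
    if best["V"] is None or v < best["V"][1]:
        best["V"] = (alloc, v)                # V-optimality
    if best["A"] is None or a < best["A"][1]:
        best["A"] = (alloc, a)                # A-optimality

print(best)                                   # optimal allocations under the three criteria
```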
2

Estimation For The Cox Model With Various Types Of Censored Data

Riddlesworth, Tonya 01 January 2011 (has links)
In survival analysis, the Cox model is one of the most widely used tools. However, there has so far been no published work on the Cox model with more complicated types of censored data, such as doubly censored data and partly interval-censored data, even though these types of censoring arise in important medical studies of cancer, heart disease, diabetes and other conditions. In this dissertation, we first derive the bivariate nonparametric maximum likelihood estimator (BNPMLE) Fₙ(t,z) for the joint distribution function F₀(t,z) of the survival time T and the covariate Z, where T is subject to right censoring; such a BNPMLE Fₙ has not previously been studied in the statistical literature. Then, based on this BNPMLE Fₙ, we derive an empirical likelihood-based (Owen, 1988) confidence interval for the conditional survival probabilities, which is an important and difficult problem in statistical analysis that also has not been studied in the literature. Finally, with this BNPMLE Fₙ as a starting point, we extend the weighted empirical likelihood method (Ren, 2001 and 2008a) to the multivariate case, and obtain a weighted empirical likelihood-based estimation method for the Cox model. This estimation method is given in a unified form and is applicable to the various types of censored data mentioned above.
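As background for the estimation problem the dissertation addresses, the sketch below fits a standard Cox model to simulated right-censored data by maximizing the partial likelihood. It is a minimal illustration with a single hypothetical covariate; it does not implement the BNPMLE or the weighted empirical likelihood method developed in the dissertation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
z = rng.normal(size=n)                        # covariate Z
t = rng.exponential(scale=np.exp(-0.7 * z))   # hazard proportional to exp(0.7 * z)
c = rng.exponential(scale=1.5, size=n)        # right-censoring times
time = np.minimum(t, c)
event = (t <= c).astype(float)                # 1 = observed failure, 0 = censored

def neg_log_partial_likelihood(beta):
    # Cox partial likelihood for one covariate, no tie correction (ties are
    # essentially impossible with continuous simulated times).
    order = np.argsort(time)
    z_o, e_o = z[order], event[order]
    eta = beta[0] * z_o
    # risk set of the i-th smallest time = subjects i..n-1 after sorting
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum(e_o * (eta - log_risk))

fit = minimize(neg_log_partial_likelihood, x0=[0.0])
print("estimated beta:", fit.x[0])            # should be close to 0.7
```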
3

Survival Analysis using Bivariate Archimedean Copulas

Chandra, Krishnendu January 2015 (has links)
In this dissertation we solve the nonidentifiability problem of Archimedean copula models based on dependent censored data (see [Wang, 2012]). We give a set of identifiability conditions for a special class of bivariate frailty models, and our simulation results show that the proposed model is identifiable under these conditions. We use the EM algorithm to estimate the unknown parameters, and the proposed estimation approach can be applied to fit dependent censored data when the dependence itself is of research interest. The marginal survival functions can be estimated using the copula-graphic estimator (see [Zheng and Klein, 1995] and [Rivest and Wells, 2001]) or the estimator proposed by [Wang, 2014]. We also propose two model selection procedures for Archimedean copula models, one for uncensored data and the other for right-censored bivariate data. Our simulation results are similar to those of [Wang and Wells, 2000] and suggest that both procedures work quite well. The idea of the proposed model selection procedures originates from the procedure proposed by [Wang and Wells, 2000] for right-censored bivariate data, which uses the L2 norm corresponding to the Kendall distribution function; a suitable bootstrap procedure is yet to be suggested for our method. We further propose a new parameter estimator and a simple goodness-of-fit test for Archimedean copula models when the bivariate data are subject to fixed left truncation. Our simulation results suggest that this procedure needs to be improved to become more powerful, reliable and efficient. To obtain estimates of the unknown parameters, we heavily exploit the concept of the truncated tau, a measure of association introduced by [Manatunga and Oakes, 1996] for left-truncated data. The idea of our goodness-of-fit test originates from the goodness-of-fit test for Archimedean copula models proposed by [Wang, 2010] for right-censored bivariate data.
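The model selection idea based on the Kendall distribution function can be illustrated for uncensored data. The sketch below simulates from a Clayton copula (a member of the Archimedean family), estimates its parameter from Kendall's tau, and computes a discrete L2-type distance between the empirical Kendall distribution function and the parametric one. The data, parameter values and the form of the distance are illustrative assumptions; the censored-data and bootstrap versions discussed in the dissertation are not implemented here.

```python
import numpy as np
from scipy.stats import kendalltau

def empirical_kendall_function(x, y, t_grid):
    """Empirical Kendall distribution K_n(t): CDF of the pseudo-observations V_i."""
    n = len(x)
    v = np.array([np.sum((x < x[i]) & (y < y[i])) / (n - 1) for i in range(n)])
    return np.array([np.mean(v <= t) for t in t_grid])

def clayton_kendall_function(t, theta):
    """K_theta(t) = t - phi(t)/phi'(t) for the Clayton generator phi(t) = (t^-theta - 1)/theta."""
    return t + (t - t ** (theta + 1)) / theta

# Hypothetical uncensored bivariate sample drawn from a Clayton copula
# via conditional sampling (replace with real data in practice).
rng = np.random.default_rng(1)
theta_true = 2.0
u = rng.uniform(size=500)
w = rng.uniform(size=500)
x = u
y = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** (-theta_true) + 1) ** (-1 / theta_true)

tau, _ = kendalltau(x, y)
theta_hat = 2 * tau / (1 - tau)               # moment estimate for the Clayton family

t_grid = np.linspace(0.01, 0.99, 99)
diff = empirical_kendall_function(x, y, t_grid) - clayton_kendall_function(t_grid, theta_hat)
l2_distance = np.sqrt(np.mean(diff ** 2))     # discrete L2-type distance over the grid
print(theta_hat, l2_distance)                 # a small distance supports the Clayton family
```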
4

The Joint Modeling of Longitudinal Covariates and Censored Quantile Regression for Health Applications

Hu, Bo January 2022 (has links)
The overall theme of this thesis is the joint modeling of longitudinal covariates and a censored survival outcome, where the survival outcome is modeled with a conditional quantile regression. In traditional joint modeling approaches, the survival outcome is usually modeled with a Cox regression. Censored quantile regression can model a survival outcome without pre-specifying a parametric likelihood function or assuming proportional hazards. Existing censored quantile methods are mostly limited to fixed cross-sectional covariates, while in many longitudinal studies researchers wish to investigate the associations between longitudinal covariates and a survival outcome. The first part considers joint modeling with a survival outcome under a mixture of censoring types: left censoring, interval censoring or right censoring. We pose a linear mixed effects model for the longitudinal covariate and a conditional quantile regression for the censored survival outcome, assuming that the longitudinal covariate and the survival outcome are conditionally independent given individual-level random effects. We propose a Gibbs sampling approach that extends a censored-quantile data augmentation algorithm to allow for a longitudinal covariate process. We also propose an iterative algorithm that alternately updates the individual-level random effects and the model parameters, in which the censored survival outcome is handled by re-weighting. Both methods are illustrated with an application to the LEGACY Girls cohort study, to understand the influence of individual genetic profiles on pubertal development (the onset of breast development) while adjusting for BMI growth trajectories. The second part considers joint modeling with a randomly right-censored survival outcome. We again pose a linear mixed effects model for the longitudinal covariate and a conditional quantile regression for the censored survival outcome, under the same conditional independence assumption, and propose a Gibbs sampling approach that extends a censored-quantile data augmentation algorithm to allow for a longitudinal covariate process; theoretical properties of the resulting parameter estimates are established. We also propose an iterative algorithm that alternately updates the individual-level random effects and the model parameters, with the censored survival outcome again handled by re-weighting. Both methods are illustrated with an application to the Mayo Clinic Primary Biliary Cholangitis data, to assess the effect of D-penicillamine on the risk of liver transplantation or death while controlling for age at registration and the serBilir marker.
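A rough sketch of the check-loss objective underlying censored quantile regression is given below. It uses simulated data with a single hypothetical covariate and plugs in placeholder weights of one, whereas the thesis updates the weights (and the random effects) iteratively; it only shows the form of a weighted objective restricted to observed events, not the proposed Gibbs sampling or re-weighting algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def check_loss(u, tau):
    """Quantile (pinball) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0).astype(float))

def weighted_quantile_fit(y, X, delta, weights, tau):
    """
    Minimize a weighted check-loss objective over the observed events (delta = 1).
    The weights are a stand-in for the redistribution used in censored quantile
    regression; constant weights of one ignore censoring and bias the estimate.
    """
    def objective(beta):
        resid = y - X @ beta
        return np.sum(weights * delta * check_loss(resid, tau))
    return minimize(objective, x0=np.zeros(X.shape[1]), method="Nelder-Mead").x

# Hypothetical data: log survival time with one covariate and random right censoring.
rng = np.random.default_rng(2)
n = 300
x = rng.normal(size=n)
log_t = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)
log_c = rng.normal(loc=2.0, scale=1.0, size=n)
y = np.minimum(log_t, log_c)
delta = (log_t <= log_c).astype(float)
X = np.column_stack([np.ones(n), x])

beta_hat = weighted_quantile_fit(y, X, delta, np.ones(n), tau=0.5)
print(beta_hat)   # crude: with constant weights the censoring is not corrected for
```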
5

Caractérisation de la diversité d'une population à partir de mesures quantifiées d'un modèle non-linéaire. Application à la plongée hyperbare / Characterisation of population diversity from quantified measures of a nonlinear model. Application to hyperbaric diving

Bennani, Youssef 10 December 2015 (has links)
This thesis proposes a new method for nonparametric density estimation from censored data, where the censoring regions can have arbitrary shapes and are elements of partitions of the parameter domain. The work was motivated by the need to estimate the distribution of the parameters of a biophysical model of decompression, in order to predict the risk of decompression sickness. In this context, the observations (dive grades) are quantified counts of the number of bubbles circulating in the blood of a set of divers having explored a variety of diving profiles (depth, duration); the biophysical model predicts the volume of gas produced along a given diving profile for a diver with known biophysical parameters. In a first step, we point out the limitations of the classical nonparametric maximum-likelihood estimator. We propose several methods for its calculation and show that it suffers from several problems: in particular, it concentrates the probability mass in a few regions only, which makes it inappropriate for describing a natural population. We then propose a new approach relying both on the maximum-entropy principle, to ensure a suitable regularity of the solution, and on the maximum-likelihood criterion, to guarantee a good fit to the data. It consists in searching for the probability law with maximum entropy whose maximum deviation from the empirical averages is set by maximizing the data likelihood. Several examples illustrate the superiority of this solution over the classical nonparametric maximum-likelihood estimator, in particular concerning generalisation performance.
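A toy version of the maximum-entropy idea can be written down for a one-dimensional parameter domain split into cells, with the data censored into a few regions. In the sketch below the regions, observed frequencies and the deviation bound ε are all made up; the thesis chooses the bound by maximizing the data likelihood and works with arbitrary regions of a multidimensional biophysical-parameter domain.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D illustration: the domain is split into 20 cells grouped into
# 4 censoring regions (5 cells each) with observed region frequencies freq_obs.
n_cells, n_regions = 20, 4
region_of_cell = np.repeat(np.arange(n_regions), n_cells // n_regions)
freq_obs = np.array([0.10, 0.35, 0.40, 0.15])   # observed region frequencies (made up)
epsilon = 0.05                                  # allowed deviation (made up)

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))                # minimizing this maximizes entropy

constraints = [{"type": "eq", "fun": lambda p: np.sum(p) - 1.0}]
for r in range(n_regions):
    mask = (region_of_cell == r)
    f_r = freq_obs[r]
    # |sum of cell masses in region r - observed frequency| <= epsilon,
    # written as two smooth inequality constraints.
    constraints.append({"type": "ineq",
                        "fun": lambda p, m=mask, f=f_r: epsilon - (np.sum(p[m]) - f)})
    constraints.append({"type": "ineq",
                        "fun": lambda p, m=mask, f=f_r: epsilon + (np.sum(p[m]) - f)})

p0 = np.full(n_cells, 1.0 / n_cells)
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * n_cells, constraints=constraints)
print(res.x.reshape(n_regions, -1).sum(axis=1))  # region masses stay within epsilon of freq_obs
```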
6

Quantile based estimation of treatment effects in censored data

Crotty, Nicholas Paul 27 May 2013 (has links)
M.Sc. (Mathematical Statistics) / Two distributions are compared through the quantile comparison function, with particular attention to possibly censored data. A semi-parametric method that assumes linearity of the quantile comparison function is examined thoroughly for non-censored data and then extended to incorporate censored data. A fully nonparametric method for constructing confidence bands for the quantile comparison function is also set out. The performance of all the methods examined is assessed by Monte Carlo simulation.
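For uncensored data the quantile comparison function can be illustrated very simply: empirical quantiles of the treatment sample are regressed on the matching quantiles of the control sample, and linearity corresponds to a location-scale treatment effect. The sketch below uses simulated data and ignores censoring, which is the part the dissertation extends.

```python
import numpy as np

# Hypothetical uncensored samples: the treatment is a shift-and-scale of the control.
rng = np.random.default_rng(3)
control = rng.normal(loc=10.0, scale=2.0, size=400)
treatment = 1.5 + 1.2 * rng.normal(loc=10.0, scale=2.0, size=400)

p = np.linspace(0.05, 0.95, 19)
q_control = np.quantile(control, p)
q_treatment = np.quantile(treatment, p)

# Least-squares fit of q_treatment(p) = a + b * q_control(p); linearity of the
# quantile comparison function corresponds to this location-scale relationship.
b, a = np.polyfit(q_control, q_treatment, deg=1)
print(f"estimated shift a = {a:.2f}, scale b = {b:.2f}")  # roughly a = 1.5, b = 1.2
```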
7

Methodology for Handling Missing Data in Nonlinear Mixed Effects Modelling

Johansson, Åsa M. January 2014 (has links)
To obtain a better understanding of the pharmacokinetic and/or pharmacodynamic characteristics of an investigated treatment, clinical data are often analysed with nonlinear mixed effects modelling. The developed models can be used to design future clinical trials or to guide individualised drug treatment. Missing data are a frequently encountered problem in analyses of clinical data, and so as not to compromise the predictive ability of the developed model, it is of great importance that the method chosen to handle the missing data is adequate for its purpose. The overall aim of this thesis was to develop methods for handling missing data in the context of nonlinear mixed effects models and to compare strategies for handling missing data, in order to provide guidance on efficient handling, and on the consequences of inappropriate handling, of missing data. In accordance with missing data theory, all missing data can be divided into three categories: missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). When data are MCAR, the underlying missing data mechanism does not depend on any observed or unobserved data; when data are MAR, the mechanism depends on observed data but not on unobserved data; when data are MNAR, the mechanism depends on the unobserved data itself. Strategies and methods for handling missing observation data and missing covariate data were evaluated. These evaluations showed that the most frequently used estimation algorithm in nonlinear mixed effects modelling (first-order conditional estimation) resulted in biased parameter estimates regardless of the missing data mechanism. However, expectation-maximization (EM) algorithms (e.g. importance sampling) resulted in unbiased and precise parameter estimates as long as data were MCAR or MAR. When the observation data are MNAR, a proper method for handling the missing data has to be applied to obtain unbiased and precise parameter estimates, regardless of the estimation algorithm. The evaluation of different methods for handling missing covariate data showed that a correctly implemented multiple imputation method and full maximum likelihood modelling methods resulted in unbiased and precise parameter estimates when covariate data were MCAR or MAR. When the covariate data were MNAR, the only method resulting in unbiased and precise parameter estimates was a full maximum likelihood modelling method in which an extra parameter was estimated, correcting for the unknown missing data mechanism's dependence on the missing data. This thesis presents new insight into the dynamics of missing data in nonlinear mixed effects modelling. Strategies for handling different types of missing data have been developed and compared in order to provide guidance on efficient handling, and on the consequences of inappropriate handling, of missing data.
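The three missing data mechanisms can be illustrated with a small simulation in which a covariate is made missing completely at random, at random given an observed covariate, or not at random given its own value. The variables and probabilities below are made up, and the sketch only shows how complete-case summaries behave under each mechanism; it does not implement the nonlinear mixed effects methods evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1000
weight = rng.normal(70, 10, size=n)                          # always-observed covariate
biomarker = 5 + 0.05 * weight + rng.normal(0, 1, size=n)     # covariate subject to missingness

def missing_indicator(mechanism):
    if mechanism == "MCAR":          # independent of observed and unobserved data
        prob = np.full(n, 0.3)
    elif mechanism == "MAR":         # depends only on the observed weight
        prob = 1 / (1 + np.exp(-(weight - 70) / 5))
    elif mechanism == "MNAR":        # depends on the unobserved biomarker itself
        prob = 1 / (1 + np.exp(-(biomarker - 8.5)))
    else:
        raise ValueError(mechanism)
    return rng.uniform(size=n) < prob

for mech in ["MCAR", "MAR", "MNAR"]:
    miss = missing_indicator(mech)
    # Complete-case mean of the biomarker: unbiased only under MCAR, and clearly off under MNAR.
    print(mech, round(biomarker.mean(), 2), round(biomarker[~miss].mean(), 2))
```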
