51

Modelo de regressão para dados com censura intervalar e dados de sobrevivência grupados / Regression model for interval-censored data and grouped survival data

Hashimoto, Elizabeth Mie 04 February 2009 (has links)
Neste trabalho foi proposto um modelo de regressão para dados com censura intervalar utilizando a distribuição Weibull-exponenciada, que possui como característica principal a função de taxa de falha que assume diferentes formas (unimodal, forma de banheira, crescente e decrescente). O atrativo desse modelo de regressão é a sua utilização para discriminar modelos, uma vez que o mesmo possui como casos particulares os modelos de regressão Exponencial, Weibull, Exponencial-exponenciada, entre outros. Também foi estudado um modelo de regressão para dados de sobrevivência grupados no qual a abordagem é fundamentada em modelos de tempo discreto e em tabelas de vida. A estrutura de regressão representada por uma probabilidade é modelada adotando-se diferentes funções de ligação, tais como, logito, complemento log-log, log-log e probito. Em ambas as pesquisas, métodos de validação dos modelos estatísticos propostos são descritos e fundamentados na análise de sensibilidade. Para detectar observações influentes nos modelos propostos, foram utilizadas medidas de diagnóstico baseadas na deleção de casos, denominadas de influência global, e medidas baseadas em pequenas perturbações nos dados ou no modelo proposto, denominadas de influência local. Para verificar a qualidade de ajuste do modelo e detectar pontos discrepantes foi realizada uma análise de resíduos nos modelos propostos. Os resultados desenvolvidos foram aplicados a dois conjuntos de dados reais. / In this study, a regression model for interval-censored data was developed using the exponentiated-Weibull distribution, whose main characteristic is a hazard function that can assume different shapes (unimodal, bathtub-shaped, increasing, decreasing). An attractive feature of this regression model is its use for model discrimination, since it has as particular cases the exponential, Weibull and exponentiated-exponential regression models, among others.
A regression model for grouped survival data was also studied, in which the approach is based on discrete-time models and life tables; the regression structure, represented by a probability, is modeled through different link functions: logit, complementary log-log, log-log and probit. In both studies, validation methods for the proposed statistical models are described, based on sensitivity analysis. To find influential observations in the studied models, diagnostic measures were used based on case deletion, called global influence, and measures based on small perturbations of the data or of the proposed model, called local influence. To assess goodness of fit and detect outliers, residual analysis was performed for the proposed models. The developed results were applied to two real data sets.
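The shape flexibility of the exponentiated-Weibull hazard described in the abstract above can be checked numerically. A minimal pure-Python sketch, assuming the standard parameterization F(t) = [1 - exp(-(t/sigma)^gamma)]^alpha; all symbol names are illustrative, not the author's notation:

```python
import math

def ew_cdf(t, alpha, gamma, sigma):
    # Exponentiated-Weibull CDF: F(t) = [1 - exp(-(t/sigma)**gamma)]**alpha
    return (1.0 - math.exp(-((t / sigma) ** gamma))) ** alpha

def ew_pdf(t, alpha, gamma, sigma):
    # Density obtained by differentiating the CDF in t
    u = (t / sigma) ** gamma
    base = 1.0 - math.exp(-u)
    return alpha * base ** (alpha - 1.0) * math.exp(-u) * gamma * u / t

def ew_hazard(t, alpha, gamma, sigma):
    # Hazard rate h(t) = f(t) / S(t); its shape (increasing, decreasing,
    # unimodal, bathtub) depends on the pair (alpha, gamma)
    return ew_pdf(t, alpha, gamma, sigma) / (1.0 - ew_cdf(t, alpha, gamma, sigma))
```

With alpha = 1 this reduces to the ordinary Weibull hazard, which is the model-discrimination property the abstract exploits: nested submodels are recovered at particular parameter values.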
52

Regressão quantílica para dados censurados / Censored quantile regression

Rasteiro, Louise Rossi 18 May 2017 (has links)
A regressão quantílica para dados censurados é uma extensão dos modelos de regressão quantílica que, por levar em consideração a informação das observações censuradas na modelagem, e por apresentar propriedades bastante satisfatórias, pode ser vista como uma abordagem complementar às metodologias tradicionais em Análise de Sobrevivência, com a vantagem de permitir que as conclusões inferenciais sejam tomadas facilmente em relação aos tempos de sobrevivência propriamente ditos, e não em relação à taxa de riscos ou a uma função desse tempo. Além disso, em alguns casos, pode ser vista também como metodologia alternativa aos modelos clássicos quando as suposições destes são violadas ou quando os dados são heterogêneos. Apresentam-se nesta dissertação três técnicas para modelagem com regressão quantílica para dados censurados, que se diferenciam em relação às suas suposições e forma de estimação dos parâmetros. Um estudo de simulação para comparação das três técnicas para dados com distribuição normal, Weibull e log-logística é apresentado, em que são avaliados viés, erro padrão e erro quadrático médio. São discutidas as vantagens e desvantagens de cada uma das técnicas e uma delas é aplicada a um conjunto de dados reais do Instituto do Coração do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo. / Censored quantile regression is an extension of quantile regression, and because it incorporates information from censored data in the modelling, and presents quite satisfactory properties, this class of models can be seen as a complementary approach to the traditional methods in Survival Analysis, with the advantage of allowing inferential conclusions to be made easily in terms of survival times rather than in terms of risk rates or as functions of survival time. Moreover, in some cases, it can also be seen as an alternative methodology to the classical models when their assumptions are violated or when modelling heterogeneity of the data. 
This dissertation presents three techniques for modelling censored quantile regression, which differ in their assumptions and parameter estimation methods. A simulation study designed with normal, Weibull and log-logistic distributions is presented to evaluate bias, standard error and mean square error. The advantages and disadvantages of each of the three techniques are then discussed and one of them is applied to a real data set from the Heart Institute of Hospital das Clínicas, University of São Paulo.
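For a concrete flavor of estimation in this setting, the objective function of Powell's classical estimator for responses right-censored at known points can be sketched as below. This is one well-known technique in the censored quantile regression literature, not necessarily one of the three studied in the dissertation; the crude grid search stands in for the specialized algorithms used in practice:

```python
def rho(tau, u):
    # Quantile-regression check function: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (1.0 if u < 0.0 else 0.0))

def powell_objective(tau, beta0, beta1, xs, ys, cs):
    # Powell-type objective for responses right-censored at known points c_i:
    # the fitted conditional quantile is capped at the censoring point.
    return sum(rho(tau, y - min(c, beta0 + beta1 * x))
               for x, y, c in zip(xs, ys, cs))

def grid_fit(tau, xs, ys, cs, grid):
    # Toy estimator: pick the candidate (beta0, beta1) pair minimizing the objective
    return min(grid, key=lambda b: powell_objective(tau, b[0], b[1], xs, ys, cs))
```

The capping by min(c_i, ·) is what lets the censored observations contribute information without biasing the fitted quantile, in contrast to simply discarding them.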
54

Modelo de regressão gama-G em análise de sobrevivência / Gamma-G regression model in survival analysis

Hashimoto, Elizabeth Mie 15 March 2013 (has links)
Dados de tempo de falha são caracterizados pela presença de censuras, que são observações que não foram acompanhadas até a ocorrência de um evento de interesse. Para estudar o comportamento de dados com essa natureza, distribuições de probabilidade são utilizadas. Além disso, é comum se ter uma ou mais variáveis explicativas associadas aos tempos de falha. Dessa forma, o objetivo geral do presente trabalho é propor duas novas distribuições utilizando a função geradora de distribuições gama, no contexto de modelos de regressão em análise de sobrevivência. Essa função possui um parâmetro de forma que permite criar famílias paramétricas de distribuições que sejam flexíveis para capturar uma ampla variedade de comportamentos simétricos e assimétricos. Assim, a distribuição Weibull e a distribuição log-logística foram modificadas, dando origem a duas novas distribuições de probabilidade, denominadas de gama-Weibull e gama-log-logística, respectivamente. Consequentemente, os modelos de regressão locação-escala, de longa duração e com efeito aleatório foram estudados, considerando as novas distribuições de probabilidade. Para cada um dos modelos propostos, foi utilizado o método da máxima verossimilhança para estimar os parâmetros e algumas medidas de diagnóstico de influência global e local foram calculadas para encontrar possíveis pontos influentes. No entanto, os resíduos foram propostos apenas para os modelos locação-escala para dados com censura à direita e para dados com censura intervalar, bem como um estudo de simulação para verificar a distribuição empírica dos resíduos. Outra questão explorada é a introdução dos modelos gama-Weibull inflacionado de zeros e gama-log-logística inflacionado de zeros, para analisar dados de produção de óleo de copaíba. Por fim, diferentes conjuntos de dados foram utilizados para ilustrar a aplicação de cada um dos modelos propostos.
/ Failure time data are characterized by the presence of censoring: observations that were not followed up until the occurrence of the event of interest. Probability distributions are used to study the behavior of data of this nature. Furthermore, it is common to have one or more explanatory variables associated with the failure times. Thus, the goal of this work is to propose two new distributions, using the gamma generator of distributions, in the context of regression models in survival analysis. This generator has a shape parameter that allows creating parametric families of distributions flexible enough to capture a wide variety of symmetric and asymmetric behaviors. Through it, the Weibull and log-logistic distributions were modified to give two new probability distributions: gamma-Weibull and gamma-log-logistic. Additionally, location-scale regression models, long-term models and models with random effects were studied under the new distributions. For each of the proposed models, the maximum likelihood method was used to estimate the parameters, and diagnostic measures of global and local influence were calculated to identify possible influential points. Residuals, however, were proposed only for the location-scale models, for right-censored and interval-censored data, together with a simulation study to verify the empirical distribution of the residuals. Another issue explored is the introduction of the zero-inflated gamma-Weibull and zero-inflated gamma-log-logistic models, to analyze copaiba oil production data. Finally, different data sets are used to illustrate the application of each of the models.
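The gamma generator mentioned in the abstract above can be sketched concretely. A minimal pure-Python version, assuming the Zografos-Balakrishnan-style construction F(t) = P(a, -log(1 - G(t))), with P the regularized lower incomplete gamma function and a Weibull baseline G; with a = 1 the family reduces to the baseline, which is the nesting property used for model comparison:

```python
import math

def reg_lower_gamma(a, x, terms=200):
    # Series expansion of the regularized lower incomplete gamma P(a, x):
    # P(a, x) = x**a * exp(-x) / Gamma(a) * sum_n x**n / (a (a+1) ... (a+n))
    if x <= 0.0:
        return 0.0
    s, term = 0.0, 1.0 / a
    for n in range(terms):
        s += term
        term *= x / (a + n + 1)
    return s * math.exp(-x + a * math.log(x) - math.lgamma(a))

def gamma_weibull_cdf(t, a, gamma_, sigma):
    # gamma-G family with Weibull baseline G(t) = 1 - exp(-(t/sigma)**gamma_):
    # here -log(1 - G(t)) simplifies to (t/sigma)**gamma_
    z = (t / sigma) ** gamma_
    return reg_lower_gamma(a, z)
```

The extra shape parameter a is what buys the additional flexibility relative to the plain Weibull; a gamma-log-logistic variant would follow by swapping in the log-logistic CDF for G.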
55

Uma sistemática para utilização de dados censurados de garantia para obtenção da confiabilidade automotiva /

Zappa, Eugênio January 2019 (has links)
Orientador: Messias Borges Silva / Resumo: Com um mercado cada vez mais veloz, competitivo e com consumidores mais exigentes que não toleram falhas de produtos, que são amparados por legislações de proteção e defesa do consumidor, as empresas necessitam se esforçar no aprimoramento da qualidade de seus produtos. Entretanto, mesmo com a aplicação de tecnologias no desenvolvimento e fabricação de produtos, as falhas ainda acontecem. Para que um produto possa desempenhar sua função sem falhas num determinado tempo desejável, nas mais diversas condições reais as quais são submetidos, deve-se conhecer e aumentar a sua confiabilidade. Embora os dados de garantia que as empresas possuam dos seus produtos sejam fontes de informações valiosas para a obtenção da confiabilidade de um produto, estes dados ainda são insuficientes, imprecisos ou incompletos para uso direto, sendo necessário o uso de métodos apropriados ainda não muito disseminados. Este trabalho visa aplicar o método de censura por taxa de uso que viabiliza o uso de dados de garantia em análises mais precisas de confiabilidade para que as empresas possam aprimorar os seus produtos. Por meio de uma revisão da literatura e com o uso de dados de garantia, verificou-se a viabilidade da aplicação do método proposto. Com comprovação estatística, o método proposto de modelagem dos dados de garantia atingiu os resultados do estudo de referência adotado. Conclui-se que o método proposto com o objetivo de conhecer com precisão a confiabilidade do produto é aplicável e não ex... (Resumo completo, clicar acesso eletrônico abaixo) / Abstract: In an ever faster and more competitive market, with more demanding consumers who do not tolerate product failures and who are backed by consumer protection and defense laws, companies need to strive to improve the quality of their products. However, even with the application of technologies in product development and manufacturing, failures still occur.
For a product to perform its function without failure over a desired period of time, under the diverse real conditions to which it is subjected, its reliability must be known and improved. Although the warranty data that companies hold on their products are a valuable source of information for obtaining a product's reliability, these data are often insufficient, inaccurate or incomplete for direct use, requiring appropriate methods that are not yet widely disseminated. This work applies the usage-rate censoring method, which enables the use of warranty data in more accurate reliability analyses so that companies can improve their products. Through a literature review and the use of warranty data, the feasibility of the proposed method was verified. With statistical support, the proposed method for modeling warranty data reproduced the results of the adopted reference study. It is concluded that the proposed method, whose objective is to know the product's reliability precisely, is applicable and does not require specialized reliability software for its execution. Therefore, its application can contribute to the developm... (Complete abstract click electronic access below) / Mestre
56

Modelling dependence in actuarial science, with emphasis on credibility theory and copulas

Purcaru, Oana 19 August 2005 (has links)
One basic problem in statistical sciences is to understand the relationships among multivariate outcomes. Although it remains an important and widely applicable tool, regression analysis is limited by its basic setup, which requires identifying one dimension of the outcomes as the primary measure of interest (the "dependent" variable) and the other dimensions as supporting it (the "explanatory" variables). There are situations where this relationship is not of primary interest. For example, in actuarial science, one might be interested in the dependence between the annual claim numbers of a policyholder and its impact on the premium, or in the dependence between claim amounts and the expenses related to them. In such cases the normality hypothesis fails, so Pearson's correlation and other concepts based on linearity are no longer the best tools to use. Therefore, in order to quantify the dependence between non-normal outcomes, one needs different statistical tools, such as dependence concepts and copulas. This thesis is devoted to modelling dependence with applications in actuarial science and is divided in two parts: the first concerns dependence in frequency credibility models, the second dependence between continuous outcomes. In each part of the thesis we resort to different tools: stochastic orderings (which arise from the dependence concepts) and copulas, respectively. During the last decade of the 20th century, the world of insurance was confronted with important developments in a posteriori tarification, especially in the field of credibility. This was due to the easing of insurance markets in the European Union, which gave rise to an advanced segmentation. The first important contribution is due to Dionne & Vanasse (1989), who proposed a credibility model that integrates a priori and a posteriori information on an individual basis.
These authors introduced a regression component in the Poisson counting model in order to use all available information in the estimation of accident frequency. The unexplained heterogeneity was then modeled by the introduction of a latent variable representing the influence of hidden policy characteristics. The vast majority of the papers that appeared in the actuarial literature considered time-independent (or static) heterogeneous models. Noticeable exceptions include the pioneering papers by Gerber & Jones (1975), Sundt (1988) and Pinquet, Guillén & Bolancé (2001, 2003). The allowance for an unknown underlying random parameter that develops over time is justified since unobservable factors influencing driving abilities are not constant. One might consider either shocks (induced by events like divorces or nervous breakdowns, for instance) or continuous modifications (e.g. due to a learning effect). In the first part we study the recently introduced models in frequency credibility theory, which can be seen as time-series models for count data, adapted to actuarial problems. More precisely, we examine the kind of dependence induced among annual claim numbers by the introduction of random effects accounting for unexplained heterogeneity, when these random effects are static and when they are time-dependent. We also make precise the effect of reporting claims on the a posteriori distribution of the random effect. This is done by establishing a stochastic monotonicity property of the a posteriori distribution with respect to the claims history. We end this part by considering different models for the random effects and computing the a posteriori corrections of the premiums on the basis of a real data set from a Spanish insurance company.
Whereas dependence concepts are very useful to describe the relationship between multivariate outcomes, in practice (think for instance of the computation of reinsurance premiums) one needs a statistical tool that is easy to implement and incorporates the structure of the data. Such a tool is the copula, which allows the construction of multivariate distributions for given marginals. Because copulas characterize the dependence structure of random vectors once the effect of the marginals has been factored out, identifying and fitting a copula to data is not an easy task. In practice, it is often preferable to restrict the search for an appropriate copula to some reasonable family, like the Archimedean one. It is then extremely useful to have simple graphical procedures to select the best-fitting model among competing alternatives for the data at hand. In the second part of the thesis we propose a new nonparametric estimator for the generator that takes into account the particularities of the data, namely censoring and truncation. This nonparametric estimate then serves as a benchmark to select an appropriate parametric Archimedean copula. The selection procedure is illustrated on a real data set.
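As a concrete instance of the Archimedean family discussed above, the Clayton copula has generator phi(t) = (t**(-theta) - 1) / theta and a closed-form bivariate copula. A minimal sketch, with illustrative names; the thesis's nonparametric generator estimator is not reproduced here:

```python
def clayton_copula(u, v, theta):
    # Archimedean copula C(u, v) = phi_inverse(phi(u) + phi(v)) with the
    # Clayton generator phi(t) = (t**-theta - 1) / theta, theta > 0:
    # this collapses to C(u, v) = (u**-theta + v**-theta - 1)**(-1/theta)
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)
```

Two sanity properties make it easy to test: the uniform-margin boundary condition C(u, 1) = u, and convergence to the independence copula C(u, v) = u*v as theta approaches 0.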
57

Regression models with an interval-censored covariate

Langohr, Klaus 16 June 2004 (has links)
El análisis de supervivencia trata de la evaluación estadística de variables que miden el tiempo transcurrido hasta un evento de interés. Una particularidad que ha de considerar el análisis de supervivencia son los datos censurados. Éstos aparecen cuando el tiempo de interés no puede ser observado exactamente y la información al respecto es parcial. Se distinguen diferentes tipos de censura: un tiempo censurado por la derecha está presente si se sabe que el tiempo de supervivencia es mayor a un tiempo observado; la censura por la izquierda está dada si la supervivencia es menor que un tiempo observado. En el caso de censura en un intervalo, el tiempo está en un intervalo de tiempo observado, y el caso de doble censura aparece cuando, también, el origen del tiempo de supervivencia está censurado. La primera parte del Capítulo 1 contiene un resumen de la metodología estadística para datos censurados en un intervalo, incluyendo tanto métodos paramétricos como no paramétricos. En la Sección 1.2 abordamos el tema de la censura no informativa, que se supone cumplida para todos los métodos presentados. Dada la importancia de los métodos de optimización en los demás capítulos, la Sección 1.3 trata de la teoría de optimización. Esto incluye varios algoritmos de optimización y la presentación de herramientas de optimización. Se ha utilizado el lenguaje de programación matemática AMPL para resolver los problemas de maximización que han surgido. Una de las características más importantes de AMPL es la posibilidad de enviar problemas de optimización al servidor 'NEOS: Server for Optimization' en Internet para que sean solucionados por ese servidor. En el Capítulo 2, se presentan los conjuntos de datos que han sido analizados.
El primer estudio es sobre la supervivencia de pacientes de tuberculosis co-infectados por el VIH en Barcelona, mientras que el siguiente, también del área de VIH/SIDA, trata de usuarios de drogas intravenosas de Badalona y alrededores que fueron admitidos a la unidad de desintoxicación del Hospital Trias i Pujol. Un área completamente diferente son los estudios sobre la vida útil de alimentos. Se presenta la aplicación de la metodología para datos censurados en un intervalo en esta área. El Capítulo 3 trata del marco teórico de un modelo de vida acelerada con una covariante censurada en un intervalo. Puntos importantes a tratar son el desarrollo de la función de verosimilitud y el procedimiento de estimación de parámetros con métodos del área de optimización. Su uso puede ser una herramienta importante en la estadística. Estos métodos se aplican también a otros modelos con una covariante censurada en un intervalo, como se demuestra en el Capítulo 4. Otros métodos que se podrían aplicar son descritos en el Capítulo 5. Se trata sobre todo de métodos basados en técnicas de imputación para datos censurados en un intervalo. Consisten en dos pasos: primero, se imputa el valor desconocido de la covariante; después, se pueden estimar los parámetros con procedimientos estadísticos estándares disponibles en cualquier paquete de software estadístico. El método de maximización simultánea ha sido implementado por el autor con el código de AMPL y ha sido aplicado al conjunto de datos de Badalona. Presentamos los resultados de diferentes modelos y sus respectivas interpretaciones en el Capítulo 6. Se ha llevado a cabo un estudio de simulación cuyos resultados se dan en el Capítulo 7. Ha sido el objetivo comparar la maximización simultánea con dos procedimientos basados en la imputación para el modelo de vida acelerada.
Finalmente, en el último capítulo se resumen los resultados y se abordan diferentes aspectos que aún permanecen sin ser resueltos o podrían ser aproximados de manera diferente. / Survival analysis deals with the evaluation of variables which measure the elapsed time until an event of interest. One particularity survival analysis has to account for is censored data, which arise whenever the time of interest cannot be measured exactly but partial information is available. Four types of censoring are distinguished: right-censoring occurs when the unobserved survival time is bigger than an observed time, left-censoring when it is smaller, and in the case of interval-censoring the survival time is only known to lie within an observed time interval. We speak of doubly-censored data if the time origin is censored as well. In Chapter 1 of the thesis, we first give a survey of statistical methods for interval-censored data, including both parametric and nonparametric approaches. In the second part of Chapter 1, we address the important issue of noninformative censoring, which is assumed in all the methods presented. Given the importance of optimization procedures in the further chapters of the thesis, the final section of Chapter 1 is about optimization theory. This includes some optimization algorithms, as well as the presentation of optimization tools, which have played an important role in the elaboration of this work. We have used the mathematical programming language AMPL to solve the maximization problems that arose. One of its main features is that optimization problems written in AMPL code can be sent to the internet facility 'NEOS: Server for Optimization' and be solved by its available solvers. In Chapter 2, we present the three data sets analyzed for the elaboration of this dissertation.
Two correspond to studies on HIV/AIDS: one is on the survival of tuberculosis patients co-infected with HIV in Barcelona, the other on injecting drug users from Badalona and surroundings, most of whom became infected with HIV as a result of their drug addiction. The complex censoring patterns in the variables of interest of the latter study have motivated the development of estimation procedures for regression models with interval-censored covariates. The third data set comes from a study on the shelf life of yogurt. We present a new approach to estimate the shelf lives of food products, taking advantage of the existing methodology for interval-censored data. Chapter 3 deals with the theoretical background of an accelerated failure time model with an interval-censored covariate, putting emphasis on the development of the likelihood functions and the estimation procedure by means of optimization techniques and tools. Their use in statistics can be an attractive alternative to established methods such as the EM algorithm. In Chapter 4 we present further regression models, such as linear and logistic regression with the same type of covariate, whose parameters are estimated with the same techniques as in Chapter 3. Other possible estimation procedures are described in Chapter 5. These comprise mainly imputation methods, which consist of two steps: first, the observed intervals of the covariate are replaced by an imputed value, for example, the interval midpoint; then, standard procedures are applied to estimate the parameters. The application of the proposed estimation procedure for the accelerated failure time model with an interval-censored covariate to the data set on injecting drug users is addressed in Chapter 6. Different distributions and covariates are considered and the corresponding results are presented and discussed.
To compare the estimation procedure with the imputation based methods of Chapter 5, a simulation study is carried out, whose design and results are the contents of Chapter 7. Finally, in the closing Chapter 8, the main results are summarized and several aspects which remain unsolved or might be approximated in another way are addressed.
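The two-step imputation approach described in this abstract (replace each observed interval of the covariate by a point value, then apply a standard estimator) can be sketched with midpoint imputation followed by ordinary least squares. A toy illustration of the idea, not the author's exact procedure:

```python
def midpoint_impute(intervals):
    # Step 1: replace each observed covariate interval [l, r] by its midpoint
    return [(l + r) / 2.0 for l, r in intervals]

def ols_fit(xs, ys):
    # Step 2: ordinary least squares on the imputed covariate;
    # returns (intercept, slope) for the simple linear model y = b0 + b1*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sxy / sxx
    return my - b1 * mx, b1
```

The appeal of the two-step scheme is that step 2 runs in any statistical package; its drawback, which motivates the simultaneous-maximization approach of the thesis, is that ignoring the within-interval uncertainty can bias the estimates.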
58

Treatment Comparison in Biomedical Studies Using Survival Function

Zhao, Meng 03 May 2011 (has links)
In the dissertation, we study the statistical evaluation of treatment comparisons by evaluating the relative comparison of survival experiences between two treatment groups. We construct confidence intervals and simultaneous confidence bands for the ratio and odds ratio of two survival functions through both parametric and nonparametric approaches. We first construct empirical likelihood confidence intervals and simultaneous confidence bands for the odds ratio of two survival functions to address small sample efficacy and sufficiency. The empirical log-likelihood ratio is developed, and the corresponding asymptotic distribution is derived. Simulation studies show that the proposed empirical likelihood band outperforms the normal approximation band in small sample size cases, in the sense that it yields coverage probabilities closer to the chosen nominal levels. Furthermore, in order to incorporate prognostic factors in the adjustment of survival functions in the comparison, we construct simultaneous confidence bands for the ratio and odds ratio of survival functions based on both the Cox model and the additive risk model. We develop simultaneous confidence bands by approximating the limiting distribution of cumulative hazard functions by zero-mean Gaussian processes whose distributions can be generated through Monte Carlo simulations. Simulation studies are conducted to evaluate the performance of the proposed models. Real applications on published clinical trial data sets are also studied for further illustration purposes. In the end, the population attributable fraction function is studied to measure the impact of risk factors on disease incidence in the population. We develop semiparametric estimation of attributable fraction functions for cohort studies with potentially censored event time under the additive risk model.
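The survival functions compared in studies like this one are typically estimated with the Kaplan-Meier product-limit estimator, from which a pointwise ratio of two curves follows directly. A minimal sketch of those two building blocks; the empirical likelihood and Gaussian-process band machinery of the dissertation is not reproduced here:

```python
def kaplan_meier(times, events):
    # Product-limit estimator: S(t) = prod over event times t_i <= t of
    # (1 - d_i / n_i), with d_i events and n_i subjects at risk just before t_i.
    # events[i] = 1 marks an observed event, 0 a right-censored time.
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for tt, e in pairs if tt == t and e == 1)
        removed = sum(1 for tt, _ in pairs if tt == t)
        if d > 0:
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed
    return curve

def survival_ratio(curve_a, curve_b, t):
    # Pointwise ratio S_A(t) / S_B(t) of two Kaplan-Meier step functions
    # (undefined where S_B(t) = 0)
    def step(curve, t):
        s = 1.0
        for ti, si in curve:
            if ti <= t:
                s = si
        return s
    return step(curve_a, t) / step(curve_b, t)
```

The odds ratio studied in the dissertation would replace each S(t) by S(t) / (1 - S(t)) before taking the ratio.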
59

Bootstrap bandwidth selection in kernel hazard rate estimation / S. Jansen van Vuuren

Van Vuuren, Stefan Jansen January 2011 (has links)
The purpose of this study is to thoroughly discuss kernel hazard function estimation, both in the complete sample case as well as in the presence of random right censoring. Most of the focus is on the very important task of automatic bandwidth selection. Two existing selectors, least-squares cross validation as described by Patil (1993a) and Patil (1993b), as well as the bootstrap bandwidth selector of Gonzalez-Manteiga, Cao and Marron (1996), will be discussed. The bandwidth selector of Hall and Robinson (2009), which uses bootstrap aggregation (or 'bagging'), will be extended to and evaluated in the setting of kernel hazard rate estimation. We will also make a simple proposal for a bootstrap bandwidth selector. The performance of these bandwidth selectors will be compared empirically in a simulation study. The findings and conclusions of this study are reported. / Thesis (M.Sc. (Statistics))--North-West University, Potchefstroom Campus, 2011.
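The kernel hazard rate estimator at the heart of this study can be sketched, for a fixed bandwidth, as smoothed Nelson-Aalen increments. A minimal pure-Python version with an Epanechnikov kernel; bandwidth selection itself, the topic of the thesis, is not shown, and the names are illustrative:

```python
def epanechnikov(u):
    # Epanechnikov kernel with support [-1, 1]
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_hazard(t, times, events, b):
    # Ramlau-Hansen-type estimator: smoothed Nelson-Aalen increments
    # h_hat(t) = sum over uncensored T_i of K((t - T_i) / b) / (b * Y(T_i)),
    # where Y(T_i) counts subjects still at risk at T_i
    total = 0.0
    for ti, di in zip(times, events):
        if di == 1:
            at_risk = sum(1 for tj in times if tj >= ti)
            total += epanechnikov((t - ti) / b) / (b * at_risk)
    return total
```

Every bandwidth selector discussed in the thesis amounts to a data-driven rule for choosing b in this formula, trading the bias of large bandwidths against the variance of small ones.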