11

LIKELIHOOD-BASED INFERENTIAL METHODS FOR SOME FLEXIBLE CURE RATE MODELS

Pal, Suvra 04 1900 (has links)
Recently, the Conway-Maxwell Poisson (COM-Poisson) cure rate model has been proposed, which includes as special cases some of the well-known cure rate models discussed in the literature. Data obtained from cancer clinical trials are often right censored, and the expectation-maximization (EM) algorithm can be used efficiently to determine the maximum likelihood estimates (MLEs) of the model parameters from right-censored data.

By assuming the lifetime distribution to be exponential, lognormal, Weibull, or gamma, the necessary steps of the EM algorithm are developed for the COM-Poisson cure rate model and some of its special cases. The inferential method is examined by means of an extensive simulation study. Model discrimination within the COM-Poisson family is carried out by the likelihood ratio test as well as by information-based criteria. The proposed method is then illustrated with cutaneous melanoma data on cancer recurrence. As the lifetime distributions considered are not nested, a formal statistical test cannot be used to determine which among them provides an adequate fit to the data. For this reason, the wider class of generalized gamma distributions, which contains all of the above-mentioned lifetime distributions as special cases, is considered. The steps of the EM algorithm are then developed for this general class of distributions, and a simulation study is carried out to evaluate the performance of the proposed estimation method. Model discrimination within the generalized gamma family is carried out by the likelihood ratio test and information-based criteria.
Finally, for the considered cutaneous melanoma data, the two-way flexibility of the COM-Poisson family and the generalized gamma family is utilized to carry out a two-way model discrimination, selecting a parsimonious competing cause distribution along with a lifetime distribution that provides the best fit to the data. / Doctor of Philosophy (PhD)
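The competing-cause structure behind the COM-Poisson cure rate family can be sketched numerically. The pmf below is the standard COM-Poisson form; the parameter names `eta` and `phi` and the truncation length are illustrative choices, not the thesis's notation.

```python
import math

def com_poisson_logpmf(m, eta, phi, terms=100):
    """Log-pmf of the Conway-Maxwell Poisson: P(M = m) is proportional to
    eta^m / (m!)^phi.  phi = 1 recovers the Poisson and phi -> infinity the
    Bernoulli, which is why the family nests well-known cure rate models.
    The normalizing constant is a truncated series summed in log space."""
    logs = [j * math.log(eta) - phi * math.lgamma(j + 1) for j in range(terms)]
    mx = max(logs)
    log_z = mx + math.log(sum(math.exp(l - mx) for l in logs))
    return m * math.log(eta) - phi * math.lgamma(m + 1) - log_z

def cure_probability(eta, phi):
    """In a COM-Poisson cure rate model the cured fraction is
    P(no competing causes) = P(M = 0) = 1 / Z(eta, phi)."""
    return math.exp(com_poisson_logpmf(0, eta, phi))
```

At `phi = 1` the cure probability collapses to the Poisson value `exp(-eta)`, matching the special-case structure the abstract refers to.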
12

Likelihood inference for multiple step-stress models from a generalized Birnbaum-Saunders distribution under time constraint

Alam, Farouq 11 1900 (has links)
Researchers conduct life testing on objects of interest in an attempt to determine their life distribution as a means of studying their reliability (or survivability). Determining the life distribution of the objects under study helps manufacturers identify potential faults and improve quality. Researchers sometimes conduct accelerated life tests (ALTs) to induce earlier failures among the tested units than would occur under normal operating (or environmental) conditions. Moreover, such experiments allow the experimenters to examine the effects of high levels of one or more stress factors on the lifetimes of experimental units. Examples of stress factors include, but are not limited to, cycling rate, dosage, humidity, load, pressure, temperature, vibration and voltage. A special class of ALT is step-stress accelerated life testing. In this type of experiment, the study sample is tested at initial stress levels for a given period of time. Afterwards, the levels of the stress factors are increased at prefixed points of time called stress-change times. In practice, time and resources are limited; thus, any experiment is expected to be constrained by a deadline called a termination time. Hence, the observed information may be subject to Type-I censoring. This study discusses maximum likelihood inferential methods for the parameters of multiple step-stress models from a generalized Birnbaum-Saunders distribution under time constraint, alongside other inference-related problems. A couple of general inference frameworks are studied: the observed likelihood (OL) framework and the expectation-maximization (EM) framework. The latter is considered because Type-I censored data may be obtained. In the first framework, the scoring algorithm is used to obtain the maximum likelihood estimators (MLEs) of the model parameters.
In the second framework, EM-based algorithms are utilized to determine the required MLEs. Obtaining observed information matrices under both frameworks is also discussed. Accordingly, asymptotic and bootstrap-based interval estimators for the model parameters are derived. Model discrimination within the considered generalized Birnbaum-Saunders distribution is carried out by the likelihood ratio test as well as by information-based criteria. The discussed step-stress models are illustrated by analyzing three real-life datasets. Furthermore, establishing optimal multiple step-stress test plans based on cost considerations and three optimality criteria is discussed. Since the maximum likelihood estimators are obtained by numerical optimization, the optimization methods used and their software implementations in R are discussed. Because computational aspects are in focus in this study, the benefits of parallel computing in R, as a high-performance computational approach, are briefly addressed. Numerical examples and Monte Carlo simulations are used to illustrate and evaluate the methods presented in this thesis. / Thesis / Doctor of Science (PhD)
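The step-stress setting with Type-I censoring can be illustrated with a minimal sketch. It uses the exponential lifetime under a simple two-level cumulative exposure model rather than the thesis's generalized Birnbaum-Saunders distribution, because the exponential case admits closed-form MLEs; all rates, times and names here are hypothetical.

```python
import math
import random

def simulate_step_stress(n, lam1, lam2, tau, term, seed=1):
    """Simple step-stress under the cumulative exposure model (CEM):
    exponential hazard rate lam1 on [0, tau), lam2 afterwards; observation
    is Type-I censored at the termination time `term`."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        u = rng.expovariate(1.0)                 # unit-exponential cumulative exposure
        if u < lam1 * tau:
            t = u / lam1                         # failure under the first stress level
        else:
            t = tau + (u - lam1 * tau) / lam2    # failure under the second stress level
        data.append((min(t, term), t <= term))   # (observed time, failure indicator)
    return data

def exp_step_mles(data, tau):
    """Closed-form MLEs for the exponential CEM:
    lam_k = failures at stress k / total time on test at stress k."""
    n1 = sum(1 for t, d in data if d and t <= tau)
    n2 = sum(1 for t, d in data if d and t > tau)
    tt1 = sum(min(t, tau) for t, _ in data)      # exposure accumulated at stress 1
    tt2 = sum(max(t - tau, 0.0) for t, _ in data)  # exposure accumulated at stress 2
    return n1 / tt1, n2 / tt2
```

With the generalized Birnbaum-Saunders lifetime of the thesis there is no such closed form, which is exactly why the scoring and EM-based frameworks are needed.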
13

Theoretical advances in the modelling and interrogation of biochemical reaction systems : alternative formulations of the chemical Langevin equation and optimal experiment design for model discrimination

Mélykúti, Bence January 2010 (has links)
This thesis is concerned with methodologies for the accurate quantitative modelling of molecular biological systems. The first part is devoted to the chemical Langevin equation (CLE), a stochastic differential equation driven by a multidimensional Wiener process. The CLE is an approximation to the standard discrete Markov jump process model of chemical reaction kinetics. It is valid in the regime where molecular populations are abundant enough to assume that their concentrations change continuously, but stochastic fluctuations still play a major role. We observe that the CLE is not a single equation, but a family of equations with shared finite-dimensional distributions. On the theoretical side, we prove that as many Wiener processes suffice to formulate the CLE as there are independent variables in the equation, which is just the rank of the stoichiometric matrix. On the practical side, we show that in the case where there are m_1 pairs of reversible reactions and m_2 irreversible reactions, there is another, simple formulation of the CLE with only m_1 + m_2 Wiener processes, whereas the standard approach uses 2m_1 + m_2. Considerable computational savings are achieved with this latter formulation. A flaw of the CLE model is identified: trajectories may leave the nonnegative orthant with positive probability. The second part addresses the challenge that arises when alternative, structurally different ordinary differential equation models of similar complexity fit the available experimental data equally well. We review optimal experiment design methods for choosing the initial state and structural changes to the biological system so as to maximally discriminate between the outputs of rival models in terms of L_2-distance. We determine the optimal stimulus (input) profile for externally excitable systems. The numerical implementation relies on sum-of-squares decompositions and is demonstrated on two rival models of signal processing in starving Dictyostelium amoebae.
Such experiments accelerate the refinement of our understanding of biochemical mechanisms.
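The Wiener-process reduction for a reversible pair can be made concrete with a minimal Euler-Maruyama sketch for the single reaction A &lt;-&gt; B (so m_1 = 1, m_2 = 0): the standard CLE uses two Wiener increments, the reduced formulation only one, and the one-step increments agree in mean and variance. The function names and mass-action rates are illustrative assumptions.

```python
import math
import random

def cle_step_standard(x, n_total, k1, k2, dt, rng):
    """Euler-Maruyama step of the CLE for A <-> B (A-count x, A + B = n_total
    fixed): the standard formulation drives the two reaction channels with
    two independent Wiener increments (2*m_1 = 2 here)."""
    a1, a2 = k1 * x, k2 * (n_total - x)        # forward / backward propensities
    dw1 = rng.gauss(0.0, math.sqrt(dt))
    dw2 = rng.gauss(0.0, math.sqrt(dt))
    return x + (a2 - a1) * dt - math.sqrt(a1) * dw1 + math.sqrt(a2) * dw2

def cle_step_reduced(x, n_total, k1, k2, dt, rng):
    """Reduced formulation: the reversible pair shares a single Wiener
    increment with diffusion sqrt(a1 + a2) (m_1 = 1 process), so the
    one-step increment has the same mean and variance as above."""
    a1, a2 = k1 * x, k2 * (n_total - x)
    dw = rng.gauss(0.0, math.sqrt(dt))
    return x + (a2 - a1) * dt + math.sqrt(a1 + a2) * dw
```

Halving the number of Gaussian draws per reversible pair is the source of the computational savings mentioned in the abstract.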
14

Univariate and Bivariate ACD Models for High-Frequency Data Based on Birnbaum-Saunders and Related Distributions

Tan, Tao 22 November 2018 (has links)
This thesis proposes a new class of bivariate autoregressive conditional median duration models for matched high-frequency data and develops inferential methods for an existing univariate model, as well as for the bivariate models introduced here, to facilitate model fitting and forecasting. During the last two decades, the autoregressive conditional mean duration (ACD) model has played a dominant role in analyzing irregularly spaced high-frequency financial data. Univariate ACD models have been extensively discussed in the literature. However, some major challenges remain. The existing ACD models do not provide a good distributional fit to financial durations, which are right-skewed and often exhibit unimodal hazard rates. The Birnbaum-Saunders (BS) distribution is capable of modeling a wide variety of positively skewed data. The median is not only a robust measure of central tendency, but also a natural scale parameter of the BS distribution. A class of conditional median duration models, the BS-ACD and the scale-mixture BS-ACD models based on the BS, BS power-exponential and Student-t BS (BSt) distributions, has been suggested in the literature to improve the quality of the model fit. The BSt-ACD model is more flexible than the BS-ACD model in terms of kurtosis and skewness. In Chapter 2, we develop the maximum likelihood estimation method for the BSt-ACD model. The estimation is performed by utilizing a hybrid of optimization algorithms. The performance of the estimates is then examined through an extensive Monte Carlo simulation study. We also carry out model discrimination using both likelihood-based methods and information-based criteria. Applications to real trade durations and comparisons with existing alternatives are then made. The bivariate version of the ACD model has not received much attention due to non-synchronicity.
Although some bivariate generalizations of the ACD model have been introduced, they do not possess enough flexibility in modeling durations, since they are conditional mean-based and do not account for non-monotonic hazard rates. Recently, the bivariate BS (BVBS) distribution has been developed with many desirable properties and characteristics. It allows for unimodal shapes of the marginal hazard functions. In Chapter 3, using this bivariate BS distribution, we propose the BVBS-ACD model as a natural bivariate extension of the BS-ACD model. It enables us to jointly analyze matched duration series and also to capture the dependence between the two series. The maximum likelihood estimation of the model parameters and associated inferential methods are developed. A Monte Carlo simulation study is then carried out to examine the performance of the proposed inferential methods. The goodness-of-fit and predictive performance of the model are also discussed. A real bivariate duration data analysis is provided to illustrate the developed methodology. The bivariate Student-t BS (BVBSt) distribution has been introduced in the literature as a robust extension of the BVBS distribution. It provides greater flexibility in terms of kurtosis and skewness through the inclusion of an additional shape parameter. In Chapter 4, we propose the BVBSt-ACD model as a natural extension of the BSt-ACD model to the bivariate case. We then discuss the maximum likelihood estimation of the model parameters. A simulation study is carried out to investigate the performance of these estimators. Model discrimination is then done by using information-based criteria. Methods for evaluating the goodness-of-fit and predictive ability of the model are also discussed. A simulated data example is used to illustrate the proposed model in comparison to the BVBS-ACD model. Finally, in Chapter 5, some concluding comments are made and some problems for future research are mentioned.
/ Thesis / Master of Science (MSc)
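The claim that the median is a natural scale parameter of the BS distribution is easy to see in simulation: in the standard-normal representation of BS(alpha, beta), the scale beta is exactly the median, which is what makes a conditional-median recursion natural for duration data. The sampler below is a standard construction; the function name is an illustrative choice.

```python
import math
import random

def rbs(alpha, beta, rng):
    """Draw from a Birnbaum-Saunders(alpha, beta) distribution via its
    standard-normal representation: T = beta * (w + sqrt(w^2 + 1))^2 with
    w = alpha * Z / 2, Z ~ N(0, 1).  Since Z = 0 maps to T = beta, the
    scale parameter beta is also the median of the distribution."""
    w = alpha * rng.gauss(0.0, 1.0) / 2.0
    return beta * (w + math.sqrt(w * w + 1.0)) ** 2
```

Modelling the conditional median of durations therefore amounts to modelling the BS scale directly, with no link between the modelled quantity and a separate distributional parameter.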
15

MARGINAL LIKELIHOOD INFERENCE FOR FRAILTY AND MIXTURE CURE FRAILTY MODELS UNDER BIRNBAUM-SAUNDERS AND GENERALIZED BIRNBAUM-SAUNDERS DISTRIBUTIONS

Liu, Kai January 2018 (has links)
Survival analytic methods help to analyze lifetime data arising from medical and reliability experiments. The popular proportional hazards model, proposed by Cox (1972), is widely used in survival analysis to study the effect of risk factors on lifetimes. An important assumption in regression-type analysis is that all relevant risk factors should be included in the model. However, not all relevant risk factors are observed, due to measurement difficulty, inaccessibility, cost considerations, and so on. These unobservable risk factors can be modelled by the so-called frailty model, originally introduced by Vaupel et al. (1979). Furthermore, the frailty model is also applicable to clustered data. Clustered data possess the feature that observations within the same cluster share similar conditions and environment, which are sometimes difficult to observe. For example, patients from the same family share similar genetics, and patients treated in the same hospital share the same group of professionals and the same environmental conditions. These factors are indeed hard to quantify or measure. In addition, this type of similarity introduces correlation among subjects within clusters. In this thesis, a semi-parametric frailty model is proposed to address the aforementioned issues. The baseline hazard function is approximated by a piecewise constant function and the frailty distribution is assumed to be a Birnbaum-Saunders distribution. Due to advancements in modern medical science, many diseases are curable, which in turn leads to the need to incorporate a cure proportion in the survival model. The frailty model discussed here is further extended to a mixture cure rate frailty model by integrating the frailty model into the mixture cure rate model proposed originally by Boag (1949) and Berkson and Gage (1952).
By linking the covariates to the cure proportion through a logistic/logit link function, and linking the observable and unobservable covariates to the lifetimes of the uncured population through the frailty model, we obtain a flexible model to study the effect of risk factors on lifetimes. The mixture cure frailty model reduces to a mixture cure model if the effect of the frailty term is negligible (i.e., the variance of the frailty distribution is close to 0). On the other hand, it also reduces to the usual frailty model if the cure proportion is 0. Therefore, we can use a likelihood ratio test to assess whether a reduced model is adequate for the given data. We assume the baseline hazard to be that of a Weibull distribution, since the Weibull distribution possesses an increasing, constant or decreasing hazard rate, and the frailty distribution to be a Birnbaum-Saunders distribution. Díaz-García and Leiva-Sánchez (2005) proposed a new family of life distributions, called the generalized Birnbaum-Saunders distribution, which includes the Birnbaum-Saunders distribution as a special case. It allows for various degrees of kurtosis and skewness, and also permits unimodality as well as bimodality. Therefore, integrating a generalized Birnbaum-Saunders distribution as the frailty distribution in the mixture cure frailty model results in a very flexible model. For this general model, parameter estimation is carried out using a marginal likelihood approach. One of the difficulties in parameter estimation is that the likelihood function is intractable. Current computational technology enables us to develop a numerical method based on Monte Carlo simulation: the likelihood function is approximated by the Monte Carlo method, and the maximum likelihood estimates and standard errors of the model parameters are then obtained numerically by maximizing this approximate likelihood function.
An EM algorithm is also developed for the Birnbaum-Saunders mixture cure frailty model. The performance of these estimation methods is then assessed through simulation studies for each proposed model. Model discrimination is also performed between the Birnbaum-Saunders frailty model and the generalized Birnbaum-Saunders mixture cure frailty model. Some real-life examples are presented to illustrate the models and inferential methods developed here. / Thesis / Doctor of Science (PhD)
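The Monte Carlo approximation of the intractable marginal likelihood can be sketched as follows. For clarity the conditional model here is an exponential hazard Z * lam, an illustrative stand-in for the Weibull baseline of the thesis, and the frailty draws may come from any positive frailty distribution; names and parameters are assumptions.

```python
import math

def mc_marginal_loglik(times, events, lam, frailty_draws):
    """Monte Carlo marginal log-likelihood sketch: each subject's marginal
    contribution E_Z[f(t | Z)] (or E_Z[S(t | Z)] if right-censored) is
    replaced by an average over simulated frailty values Z.  The
    conditional model is an exponential hazard Z * lam."""
    total = 0.0
    for t, d in zip(times, events):
        contribs = []
        for z in frailty_draws:
            h = z * lam
            # density f(t|z) if the event was observed, survival S(t|z) if censored
            contribs.append(h * math.exp(-h * t) if d else math.exp(-h * t))
        total += math.log(sum(contribs) / len(contribs))
    return total
```

Maximizing this approximate log-likelihood over the model parameters (with the frailty draws held fixed across evaluations) yields the simulated maximum likelihood estimates described in the abstract.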
16

CURE RATE AND DESTRUCTIVE CURE RATE MODELS UNDER PROPORTIONAL ODDS LIFETIME DISTRIBUTIONS

FENG, TIAN January 2019 (has links)
Cure rate models, introduced by Boag (1949), are very commonly used for modelling lifetime data involving long-term survivors. Applications of cure rate models can be seen in biomedical science, industrial reliability, finance, manufacturing, demography and criminology. In this thesis, cure rate models are discussed under a competing cause scenario, with the assumption of proportional odds (PO) lifetime distributions for the susceptibles, and statistical inferential methods are then developed based on right-censored data. In Chapter 2, a flexible cure rate model is discussed by assuming that the number of competing causes for the event of interest follows the Conway-Maxwell (COM) Poisson distribution, and that the lifetimes of the non-cured or susceptible individuals are described by a PO model. This provides a natural extension of the work of Gu et al. (2011), who had considered a geometric number of competing causes. Under right censoring, maximum likelihood estimators (MLEs) are obtained by the use of the expectation-maximization (EM) algorithm. An extensive Monte Carlo simulation study is carried out for various scenarios, and model discrimination among some well-known cure models, such as the geometric, Poisson and Bernoulli, is also examined. Goodness-of-fit and model diagnostics are also discussed. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods. Next, in Chapter 3, the destructive cure rate models, introduced by Rodrigues et al. (2011), are discussed under the PO assumption. Here, the initial number of competing causes is modelled by a weighted Poisson distribution, with special focus on the exponentially weighted Poisson, length-biased Poisson and negative binomial distributions. Then, a damage distribution is introduced for the number of initial causes which do not get destroyed. An EM-type algorithm for computing the MLEs is developed.
An extensive simulation study is carried out for various scenarios to evaluate all the models and methods of estimation, and model discrimination among the three weighted Poisson distributions is also examined. A cutaneous melanoma dataset is used to illustrate the models as well as the inferential methods. In Chapter 4, frailty cure rate models are discussed under a gamma frailty, wherein the initial number of competing causes is described by a Conway-Maxwell (COM) Poisson distribution and the lifetimes of the non-cured individuals are described by a PO model. The detailed steps of the EM algorithm are then developed for this model, and an extensive simulation study is carried out to evaluate the performance of the proposed model and the estimation method. A cutaneous melanoma dataset as well as simulated data are used for illustrative purposes. Finally, Chapter 5 summarizes the work carried out in the thesis and suggests some problems of further research interest. / Thesis / Doctor of Philosophy (PhD)
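The destructive mechanism, in which a damage distribution thins the initial competing causes, can be sketched in a few lines. A plain Poisson stands in for the weighted-Poisson families of the thesis, and independent Bernoulli destruction gives the binomial damage distribution; all names and parameter values are illustrative.

```python
import math
import random

def rpois(rng, lam):
    """Poisson draw by Knuth's multiplication method (fine for small lam)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def surviving_causes(rng, lam, p):
    """Destructive cure rate sketch: M initial competing causes are drawn
    (Poisson(lam) here), and each survives the destructive process
    independently with probability p, so D | M ~ Binomial(M, p).
    An individual is cured exactly when D = 0."""
    m = rpois(rng, lam)
    return sum(1 for _ in range(m) if rng.random() < p)
```

For the Poisson case the thinned count D is again Poisson with mean lam * p, so the cured fraction is exp(-lam * p); the weighted Poisson variants of the thesis change this cured fraction, which is what the model discrimination study exploits.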
