71 |
Robustness of Semi-Parametric Survival Model: Simulation Studies and Application to Clinical Data
Nwi-Mozu, Isaac, 01 August 2019
Efficient analysis of survival clinical data, such as cancer data, is a major concern for health experts. In this study, we investigate and propose an efficient way of handling survival clinical data. Simulation studies were conducted to compare the performance of various survival model techniques using the R package "survsim". Model performance was evaluated over small, mild, and large ($n > 5000$) sample sizes. For small and mild samples, the semi-parametric model outperforms or approximates the performance of the parametric model; for large samples, however, the parametric model outperforms the semi-parametric model. We compared the effectiveness and reliability of the proposed techniques using real clinical data of mild sample size. Finally, systematic steps on how to apply and interpret the proposed techniques on real survival clinical data were provided.
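As a rough illustration of the kind of comparison described above: the study itself used the R package "survsim", but a minimal Python sketch (assuming the lifelines package is available; all data-generating parameters are invented for the example) of simulating right-censored Weibull data and fitting a semi-parametric Cox model alongside a parametric Weibull model could look like this:

```python
# Minimal sketch only -- the thesis used R's "survsim"; this uses Python's
# lifelines package (an assumption) on simulated right-censored Weibull data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, WeibullAFTFitter

rng = np.random.default_rng(2019)

def simulate(n, beta=0.5, shape=1.5, scale=10.0, cens_mean=20.0):
    """Weibull proportional-hazards event times with one binary covariate."""
    x = rng.binomial(1, 0.5, size=n)
    u = rng.uniform(size=n)
    t_event = scale * (-np.log(u) / np.exp(beta * x)) ** (1.0 / shape)
    t_cens = rng.exponential(cens_mean, size=n)
    return pd.DataFrame({"time": np.minimum(t_event, t_cens),
                         "event": (t_event <= t_cens).astype(int),
                         "x": x})

for n in (50, 500, 5000):  # loosely: "small", "mild", and "large" samples
    df = simulate(n)
    cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    aft = WeibullAFTFitter().fit(df, duration_col="time", event_col="event")
    # note: the AFT coefficient is on the log-time scale, not the hazard scale
    print(n, round(cox.params_["x"], 3), round(aft.params_[("lambda_", "x")], 3))
```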
72 |
Survival analysis of the timing of goals in soccer games
Lam, Chung-sang (林仲生), January 2005
Master of Philosophy, Economics and Finance.
73 |
On testing for the Cox model using resampling methods
Fang, Jing (方婧), January 2007
Master of Philosophy, Statistics and Actuarial Science.
74 |
Modelling multivariate survival data using semiparametric models
Lee, Yau-wing (李友榮), January 2000
Master of Philosophy, Statistics and Actuarial Science.
75 |
Marginal Models for Modeling Clustered Failure Time Data
Niu, Yi, 01 February 2013
Clustered failure time data often arise in biomedical and clinical studies, where potential correlation among survival times is induced within a cluster. In this thesis, we develop a class of marginal models for right-censored clustered failure time data and propose a novel generalized estimating equation approach in a likelihood-based context. We first investigate a semiparametric proportional hazards model for clustered survival data and derive the large-sample properties of the regression estimators. Finite-sample studies demonstrate the good applicability of the proposed method as well as its substantial efficiency improvement over the existing marginal model for clustered survival data.

Another important feature of failure time data considered in this thesis is a possible fraction of cured subjects. To accommodate the potential cure fraction, we consider a proportional hazards mixture cure model for clustered survival data with long-term survivors and develop a set of estimating equations by incorporating working correlation matrices into an EM algorithm. The dependence among the cure statuses and among the survival times of uncured patients within clusters is modeled by the working correlation matrices in the estimating equations. For the parametric proportional hazards mixture cure model, we show that the estimators of the regression parameters and of the parameter in the baseline hazard function are consistent and asymptotically normal, with a sandwich covariance matrix that can be consistently estimated. A numerical study shows that the proposed estimation method is comparable with the existing parametric marginal method. We also extend the proposed generalized estimating equation approach to a semiparametric proportional hazards mixture cure model in which the baseline survival function is nonparametrically specified. A bootstrap method is used to obtain the variances of the estimates. The proposed method is evaluated by a simulation study, from which we observe a noticeable efficiency gain over the existing semiparametric marginal method for clustered failure time data with a cure fraction. / Thesis (Ph.D., Mathematics & Statistics), Queen's University, 2013.
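For reference, the proportional hazards mixture cure model described above has the standard population survival function (notation ours, not taken from the thesis)

$$ S_{\mathrm{pop}}(t \mid x, z) \;=\; 1 - \pi(z) + \pi(z)\, S_u(t \mid x), \qquad \pi(z) \;=\; \frac{\exp(z^{\top}\gamma)}{1 + \exp(z^{\top}\gamma)}, \qquad S_u(t \mid x) \;=\; S_0(t)^{\exp(x^{\top}\beta)}, $$

where $\pi(z)$ is the probability of being uncured, $1 - \pi(z)$ is the cure fraction, and $S_u$ is the proportional hazards survival function of uncured subjects with baseline $S_0$; the working correlation matrices enter through the estimating equations for $\beta$ and $\gamma$ rather than through this marginal specification.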
76 |
Duration Data Analysis in Longitudinal Survey
Boudreau, Christian, January 2003
Considerable amounts of event history data are collected through longitudinal surveys. These surveys have many particularities or features that result from the dynamic nature of the population under study and from the fact that data collected through longitudinal surveys involve complex survey designs, with clustering and stratification. These particularities include attrition, the seam effect, censoring, left truncation and complications in variance estimation due to the use of complex survey designs. This thesis focuses on the last two points.
Statistical methods based on the stratified Cox proportional hazards model that account for intra-cluster dependence, when the sampling design is uninformative, are proposed. This is achieved using the theory of estimating equations in conjunction with empirical process theory. Issues concerning analytic inference from survey data and the use of weighted versus unweighted procedures are also discussed. The proposed methodology is applied to data from the U. S. Survey of Income and Program Participation (SIPP) and data from the Canadian Survey of Labour and Income Dynamics (SLID).
Finally, different statistical methods for handling left-truncated sojourns are explored and compared. These include the conditional partial likelihood and other methods, based on the Exponential or the Weibull distributions.
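As a sketch of the conditional-likelihood idea for left-truncated sojourns mentioned above (here for a fully parametric Weibull model rather than the conditional partial likelihood, and with fabricated data), each likelihood contribution is conditioned on survival beyond the entry time:

```python
# Sketch: Weibull likelihood for right-censored, left-truncated durations,
# each contribution conditioned on surviving past the entry (truncation) time.
# Data below are fabricated for illustration.
import numpy as np
from scipy.optimize import minimize

entry = np.array([0.0, 2.0, 1.0, 0.0, 3.0])   # left-truncation (entry) times
time  = np.array([5.0, 7.5, 4.0, 2.5, 9.0])   # observed exit times (> entry)
event = np.array([1,   0,   1,   1,   0  ])   # 1 = failure, 0 = censored

def neg_loglik(params):
    log_shape, log_scale = params
    k, lam = np.exp(log_shape), np.exp(log_scale)
    # Weibull log-hazard and cumulative hazards at exit and at entry
    log_h = np.log(k) - np.log(lam) + (k - 1) * (np.log(time) - np.log(lam))
    H_exit  = (time  / lam) ** k
    H_entry = (entry / lam) ** k
    # delta * log h(t) - [H(t) - H(entry)]: conditions on survival to entry
    return -np.sum(event * log_h - (H_exit - H_entry))

res = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
shape_hat, scale_hat = np.exp(res.x)
print("Weibull shape:", round(shape_hat, 3), "scale:", round(scale_hat, 3))
```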
77 |
Alternative profit scorecards for revolving credit
Sanchez Barrios, Luis Javier, January 2013
The aim of this PhD project is to design profit scorecards for revolving credit using alternative measures of profit that have not been considered in previous research. The data set consists of customers of a lending institution that grants credit to people who are usually financially excluded because they lack previous credit records. The study presents for the first time a relative profit measure (i.e., returns) for scoring purposes and compares the results with those obtained from the usual monetary profit scores, both in cumulative and average terms. The relative measure can be interpreted as the productivity per customer in generating cash flows per monetary unit invested in receivables; alternatively, it is the coverage against default if the lender discontinues operations at time t.

At an exploratory level, results show that granting credit to financially excluded customers is a profitable business. Moreover, defaulters are not necessarily unprofitable; on average, the profits generated by profitable defaulters exceed the losses generated by certain non-defaulters. Therefore, it makes sense to design profit (return) scorecards. It is shown through different methods that using alternative profit measures for scoring purposes makes a difference. At a customer level, using either profits or returns alters the chances of being accepted for credit. At a portfolio level, in the long term, productivity (coverage against default) is traded off if profits are used instead of returns. Additionally, using cumulative or average measures implies a trade-off between the scope of the credit programme and customer productivity (coverage against default).

The study also contributes to the ongoing debate on using direct and indirect prediction methods to produce not only profit but also return scorecards. Direct scores were obtained from borrower attributes, whilst indirect scores were predicted using the estimated probabilities of default and repurchase; OLS was used in both cases. Direct models outperformed indirect models. Results show that it is possible to identify customers who are profitable both in monetary and in relative terms. The best performing indirect model used the probabilities of default at t = 12 months and of repurchase at t = 12, 30 months as predictors. This agrees with banking practice and confirms the significance of the long-term perspective for revolving credit. Return scores would be preferred under more conservative standpoints towards default, because of unstable conditions, and if the aim is to penetrate relatively unknown segments. Further ethical considerations justify their use in an inclusive lending context. Qualitative data were used to contextualise results from the quantitative models where appropriate. This is particularly important in the microlending industry, where analysts' market knowledge is needed to complement scorecard results for credit-granting purposes.

Finally, this is the first study that formally defines time-to-profit and uses it for scoring purposes. Such an event occurs when the cumulative return exceeds one: it is the point in time at which customers become exceedingly productive or, alternatively, are completely covered against default regardless of future payments. A generic time-to-profit application scorecard was obtained by applying the discrete version of the Cox model to borrowers' attributes. Compared with the OLS results, portfolio coverage against default was improved.
A set of segmented models predicted time-to-profit for different loan durations. Results show that loan duration has a major effect on time-to-profit. Furthermore, inclusive lending programmes can generate internal funds to foster their growth. This provides useful insight for investment planning objectives in inclusive lending programmes such as the one under analysis.
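A toy numeric illustration of the time-to-profit definition given above (the month in which the cumulative return first exceeds one); the amount invested and the repayment stream are invented for the example:

```python
# Toy illustration of "time-to-profit": the first month in which the
# cumulative return (cumulative cash flow / amount invested in receivables)
# exceeds one. All figures are invented for the example.
import numpy as np

invested = 1000.0                                    # amount lent to the customer
cashflows = np.array([90, 110, 95, 120, 130, 115,    # monthly repayments received
                      140, 125, 105, 100, 95, 90.0])

cum_return = np.cumsum(cashflows) / invested
profitable = np.nonzero(cum_return > 1.0)[0]
if profitable.size:
    print("time-to-profit: month", profitable[0] + 1)   # 1-indexed month
else:
    print("customer not yet profitable over the observed horizon")
```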
78 |
Finding the Cutpoint of a Continuous Covariate in a Parametric Survival Analysis Model
Joshi, Kabita, 01 January 2016
In many clinical studies, continuous variables such as age, blood pressure and cholesterol are measured and analyzed. Clinicians often prefer to categorize these continuous variables into groups, such as low- and high-risk groups. The goal of this work is to find the cutpoint of a continuous variable at which the transition from the low- to the high-risk group occurs. Different methods for finding such a cutpoint have been published in the literature. We extended the method of Contal and O'Quigley (1999), which is based on the log-rank test, and the method of Klein and Wu (2004), which is based on the score test, to find the cutpoint of a continuous covariate. Since the log-rank test is a nonparametric method and the score test is a parametric method, we are interested in whether an extension of the parametric procedure performs better when the distribution of the population is known. We have developed a method that uses the parametric score residuals to find the cutpoint. The performance of the proposed method is compared with the existing methods of Contal and O'Quigley and of Klein and Wu by estimating the bias and mean square error of the estimated cutpoints for different scenarios in simulated data.
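A minimal sketch of the cutpoint-scanning idea described above (in Python with lifelines, which is an assumption; the study itself works with the Contal-O'Quigley statistic and parametric score residuals, and accounts for maximally selected statistics, which this naive scan does not):

```python
# Sketch: scan candidate cutpoints of a continuous covariate and record the
# two-group log-rank statistic for each induced split. Illustration only --
# it does not adjust the maximally selected statistic for the multiple looks.
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

rng = np.random.default_rng(2016)
n = 300
age = rng.uniform(30, 80, size=n)
rate = np.where(age > 60, 0.10, 0.05)   # simulated: hazard doubles above 60
t_event = rng.exponential(1.0 / rate)
t_cens = rng.exponential(25.0, size=n)
df = pd.DataFrame({"time": np.minimum(t_event, t_cens),
                   "event": (t_event <= t_cens).astype(int),
                   "age": age})

candidates = np.quantile(age, np.linspace(0.1, 0.9, 33))
stats = []
for c in candidates:
    hi = df["age"] > c
    res = logrank_test(df.loc[hi, "time"], df.loc[~hi, "time"],
                       event_observed_A=df.loc[hi, "event"],
                       event_observed_B=df.loc[~hi, "event"])
    stats.append(res.test_statistic)

best = candidates[int(np.argmax(stats))]
print("cutpoint with largest log-rank statistic:", round(best, 1))
```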
79 |
Contributions à l'analyse de survie / Contributions to survival analysis
Bousquet, Damien, 04 October 2012
In this work, we propose new models for survival analysis. We generalise the approach of Marshall and Olkin (A New Method for Adding a Parameter to a Family of Distributions with Application to the Exponential and Weibull Families, Biometrika, 1997). Starting from an arbitrary baseline probability distribution, we add external parameters, in the sense that they are not directly scale or shape parameters. In this way we obtain richer hazard rate curves, which guarantee the flexibility of these new probability distributions. Methods for estimating these parameters are presented, and we show the goodness of fit of our model on a real data set. In a second part, we are also interested in unbiased estimation from censored samples, for which we give new results. In particular, we generalise a result of Rubin and van der Laan (A Doubly Robust Censoring Unbiased Transformation, The International Journal of Biostatistics, 2007).
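For context (our notation, stated from the cited paper rather than from the thesis): the Marshall-Olkin construction being generalised adds a parameter $\alpha > 0$ to a baseline survival function $\bar{F}$ through

$$ \bar{G}(x) \;=\; \frac{\alpha\, \bar{F}(x)}{1 - (1-\alpha)\, \bar{F}(x)}, \qquad \alpha > 0, $$

which reduces to $\bar{F}$ when $\alpha = 1$ and modifies the hazard curve without acting as a scale or shape parameter of the baseline family.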
80 |
On the Advancement of Probabilistic Models of Decompression Sickness
Hada, Ethan Alexander, January 2016
The work presented in this dissertation is focused on applying engineering methods to develop and explore probabilistic survival models for the prediction of decompression sickness in US Navy divers. Mathematical modeling, computational model development, and numerical optimization techniques were employed to formulate and evaluate the predictive quality of models fitted to empirical data. Chapters 1 and 2 present general background information relevant to the development of probabilistic models applied to predicting the incidence of decompression sickness. The remainder of the dissertation introduces techniques developed in an effort to improve the predictive quality of probabilistic decompression models and to reduce the difficulty of model parameter optimization.

The first project explored seventeen variations of the hazard function using a well-perfused parallel compartment model. Models were parametrically optimized using the maximum likelihood technique, and model performance was evaluated using both classical statistical methods and model selection techniques based on information theory. Optimized model parameters were overall similar to those previously published. Results favored a novel hazard function definition that included both ambient pressure scaling and individually fitted compartment exponent scaling terms.

We developed ten pharmacokinetic compartmental models that included explicit delay mechanics to determine whether predictive quality could be improved through the inclusion of material transfer lags. A fitted discrete delay parameter augmented the inflow to the compartment systems from the environment. Based on the observation that, for many of our models, symptoms are often reported after risk accumulation begins, we hypothesized that the inclusion of delays might improve the correlation between model predictions and observed data. Model selection techniques identified two models as having the best overall performance, but comparison with the best-performing model without delay, and model selection using our best no-delay pharmacokinetic model, both indicated that the delay mechanism was not statistically justified and did not substantially improve model predictions.

Our final investigation explored parameter bounding techniques to identify parameter regions in which statistical model failure will not occur. Statistical model failure occurs when a model predicts no probability of a diver experiencing decompression sickness for an exposure that is known to produce symptoms. Using a metric related to the instantaneous risk, we successfully identify regions where model failure will not occur and locate the boundaries of those regions using a root bounding technique. Several models are used to demonstrate the techniques, which may be employed to reduce the difficulty of model optimization in future investigations. / Dissertation
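The probabilistic models referred to above are survival models in which, in the standard formulation from this literature (not quoted from the dissertation), the probability of decompression sickness for a dive exposure is driven by an accumulated instantaneous risk

$$ P(\mathrm{DCS}) \;=\; 1 - \exp\!\Big( -\int_{0}^{\infty} r(t)\, dt \Big), \qquad r(t) \ge 0, $$

where $r(t)$ is the modeled instantaneous risk (hazard) at time $t$; the hazard-function variations and delay mechanics described above are alternative ways of computing $r(t)$, and the "statistical model failure" discussed in the final investigation corresponds to this integral being zero for an exposure that actually produced symptoms.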