Subject dropout is a common problem in repeated measurements health studies. Where dropout is related to the response, the results obtained can be substantially biased. The research in this thesis is motivated by a repeated measurements asthma clinical trial with substantial patient dropout. In practice the extent to which missing observations affect parameter estimates and their efficiency is not clear. Through extensive simulation studies under various scenarios and missing data mechanisms, the effect of missing observations on parameter estimates is explored and compared. Bias in the model estimates is found to be sensitive to the missing data mechanism, the type of model used, the estimation method, and the type of response variable, amongst other factors. Findings from the simulation study highlight the importance of considering the likely dropout mechanism when choosing a model for the analysis of incomplete repeated measurements. For example, generalised estimating equations (GEE) in general require a missing completely at random (MCAR) assumption, as does the summary statistics method. Several formal tests of MCAR have been published, and these tests are compared both quantitatively and in terms of their various merits and limitations. Other than sensitivity analysis, there are no widely accepted methods for analysing data with observations missing not at random (MNAR), as strong assumptions are required about the missing data mechanism. A method for incorporating cause of dropout into the analysis is proposed for MNAR data. A Bayesian hierarchical model is developed with informative priors for the bias of dropouts compared to completers for each cause of dropout. The feasibility of the proposed prior elicitation is investigated by consultation with clinicians, and the model is assessed through simulation studies in which the sensitivity of the approach to misspecification of the parameters of the dropout mechanism is examined.
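The distinction between MCAR and response-dependent dropout described above can be illustrated with a small simulation. The sketch below is hypothetical: the dropout probabilities, the logistic missing-at-random (MAR) mechanism, and all parameter values are illustrative choices, not those of the asthma trial; it shows only why a completers-only analysis is biased when dropout depends on the response.

```python
import math
import random
from statistics import mean

random.seed(1)

def simulate_subject(n_visits=6, dropout="mcar", p_mcar=0.15):
    """One subject's repeated measurements with monotone dropout.

    'mcar': each later visit is the dropout point with a constant
    probability, independent of the responses.
    'mar': the dropout probability depends on the previous observed
    response (a higher response makes dropout more likely).
    All parameter values are illustrative, not taken from the thesis.
    """
    level = random.gauss(0.0, 1.0)          # subject-specific true level
    y = []
    for t in range(n_visits):
        if t > 0:
            if dropout == "mcar":
                p_drop = p_mcar
            else:                           # MAR: depends on observed data
                p_drop = 1.0 / (1.0 + math.exp(-(y[-1] - 1.0)))
            if random.random() < p_drop:
                break                       # monotone dropout: no later visits
        y.append(level + random.gauss(0.0, 0.5))
    return y

mcar = [simulate_subject(dropout="mcar") for _ in range(2000)]
mar = [simulate_subject(dropout="mar") for _ in range(2000)]

# A completers-only mean is unbiased under MCAR but biased under MAR,
# because completers under MAR systematically have lower responses.
mcar_completer_mean = mean(y for s in mcar if len(s) == 6 for y in s)
mar_completer_mean = mean(y for s in mar if len(s) == 6 for y in s)
```

Under the MAR mechanism the completer mean falls well below the true population mean of zero, while under MCAR it does not, mirroring the bias pattern the simulation studies in the thesis explore on a much larger scale.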
This thesis considers alternative statistical methods for cost-effectiveness analysis (CEA) that use cluster randomised trials (CRTs). The thesis has four objectives: firstly, to develop criteria for identifying appropriate methods for CEA that use CRTs; secondly, to critically appraise the methods used in applied CEAs that use CRTs; thirdly, to assess the performance of alternative methods for CEA that use CRTs in settings where baseline covariates are balanced; and fourthly, to compare statistical methods that adjust for systematic covariate imbalance in CEA that use CRTs. The thesis developed a checklist to assess the methodological quality of published CEAs that use CRTs. This checklist was informed by a conceptual review of statistical methods, and applied in a systematic literature review of published CEAs that use CRTs. The review found that most studies adopted statistical methods that ignored clustering or correlation between costs and health outcomes. A simulation study was conducted to assess the performance of alternative methods for CEA that use CRTs across different circumstances where baseline covariates are balanced. This study considered: seemingly unrelated regression (SUR) and generalised estimating equations (GEEs), both with a robust standard error; multilevel models (MLMs); and a non-parametric 'two-stage' bootstrap (TSB). Performance was reported as, for example, bias and confidence interval (CI) coverage of the incremental net benefit. The MLMs and the TSB performed well across all settings; SUR and GEEs reported poor CI coverage in CRTs with few clusters. The thesis compared methods for CEA that use CRTs when there are systematic differences in baseline covariates between the treatment groups. In a case study and further simulations, the thesis considered SUR, MLMs, and TSB combined with SUR to adjust for covariate imbalance. The case study showed that cost-effectiveness results can differ according to adjustment method.
The simulations reported that MLMs performed well across all settings and, unlike the other methods, provided CI coverage close to nominal levels, even with few clusters and unequal cluster sizes. The thesis concludes that MLMs are the most appropriate method across the circumstances considered. This thesis presents methods for improving the quality of CEA that use CRTs, to help future studies provide a sound basis for policy making.
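The incremental net benefit assessed in the simulation study above is a standard CEA quantity: INB(λ) = λ·Δe − Δc, where Δe and Δc are the incremental effect and cost and λ is the willingness to pay per unit of health gain. A minimal sketch, with purely illustrative numbers (not drawn from the thesis):

```python
def incremental_net_benefit(delta_effect, delta_cost, wtp):
    """Incremental net benefit: INB = wtp * delta_effect - delta_cost.

    wtp is the willingness-to-pay threshold per unit of health gain;
    the intervention is cost-effective at that threshold when INB > 0.
    """
    return wtp * delta_effect - delta_cost

# Illustrative values: 0.05 QALYs gained at an extra cost of 800,
# evaluated at a willingness to pay of 20000 per QALY.
inb = incremental_net_benefit(0.05, 800.0, 20000.0)  # 20000*0.05 - 800
```

Bias and CI coverage in the simulation study are then properties of each method's estimate of this quantity across replications.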
Alqahtani, Khaled Mubarek A.
Accurate survival prediction is critical in the management of cancer patients' care and well-being. Previous studies have shown that copy number alterations (CNA) in some key genes are individually associated with disease phenotypes and patients' prognosis. However, in many complex diseases like cancer, it is expected that a large number of genes with such an association span the genome. Furthermore, genome-wide CNA profiles are person-specific. Each patient has their own profile, and any differences in the profile between patients may help to explain the differences in the patients' survival. Hence, extracting the relevant information in the genome-wide CNA profile is critical in the prediction of cancer patients' survival. It is currently a modelling challenge to incorporate the genome-wide CNA profiles, in addition to the patients' clinical information, to predict cancer patients' survival. Therefore, the focus of this thesis is to establish or develop statistical methods that are able to include CNA (ultra-high dimensional data) in survival analysis. In order to address this objective, we go through two main parts. The first part of the thesis concentrates on CNA estimation. CNA can be estimated using the ratio of a tumour sample to a normal sample. Therefore, we investigate approximations of the distribution of the ratio of two Poisson random variables. In the second part of the thesis, we extend the Cox proportional hazards (PH) model for prediction of patients' survival probability by incorporating the genome-wide CNA profiles as random predictors. The patients' clinical information remains as fixed predictors in the model. In this part, three types of distribution of the random effects are investigated. First, the random effects are assumed to be normally distributed with mean zero and a diagonal covariance matrix with equal variances and covariances of zero.
The diagonal structure is the simplest possible structure for a variance-covariance matrix. This structure implies independence between neighbouring genomic windows. However, CNAs have dependencies between neighbouring genomic windows, and spatial characteristics, which are ignored with such a covariance structure. We address the spatial dependence structure of CNAs. In order to achieve this, we start by discussing other structures for the variance-covariance matrix of the random effects (the compound symmetry covariance matrix, and the inverse covariance matrix). Then, we impose smoothness using first and second differences of the random effects. Specifically, the random effects are assumed to be correlated random effects that follow a mixture of two distributions, normal and Cauchy, for the first or second differences (SCox). Our approach in these two scenarios was genome-wide, in the sense that we took into account all of the CNA information in the genome. In this regard, the model does not include a variable selection mechanism. Third, as the previous methods employ all predictors regardless of their relevance, which makes it difficult to interpret the results, we introduce a novel algorithm based on a sparse-smoothed Cox model (SSCox) within a random effects framework, modelling the survival time using the patients' clinical characteristics as fixed effects and CNA profiles as random effects. We assume the CNA coefficients to be correlated random effects that follow a mixture of three distributions: normal (to achieve shrinkage around the mean values), Cauchy for the second-order differences (to gain smoothness), and Laplace (to achieve sparsity). We illustrate each method with a real dataset from a lung cancer cohort as well as simulated data. For the simulation studies, we find that our SSCox method generally performed better than the sparse partial least squares methods in prediction performance.
Our estimator had smaller mean square error and mean absolute error than its main competitors. For the real data set, we find that the SSCox model is suitable and has enabled survival probability prediction based on the patients' clinical information and CNA profiles. The results indicate that cancer T- and N-staging are significant factors affecting the patients' survival, and the estimates of the random effects allow us to examine the contribution to survival of some genomic regions across the genome.
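The ratio-of-Poissons problem from the first part of the thesis can be explored by Monte Carlo. The sketch below compares the simulated ratio of two independent Poisson counts with a delta-method normal approximation; the means 50 and 40 are arbitrary stand-ins for tumour and normal read counts, not values from the thesis.

```python
import math
import random
from statistics import mean, stdev

random.seed(7)

def poisson(lam):
    """Poisson draw via Knuth's multiplication method (fine for moderate means)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p < threshold:
            return k
        k += 1

# Tumour and normal counts as independent Poissons (illustrative means)
lam_t, lam_n = 50.0, 40.0
ratios = []
for _ in range(20000):
    x, y = poisson(lam_t), poisson(lam_n)
    if y > 0:                 # the ratio is undefined when the denominator is 0
        ratios.append(x / y)

# Delta-method normal approximation to the ratio's mean and sd:
# E[X/Y] ~ lam_t/lam_n,  Var(X/Y) ~ (lam_t/lam_n)^2 (1/lam_t + 1/lam_n)
approx_mean = lam_t / lam_n
approx_sd = approx_mean * math.sqrt(1.0 / lam_t + 1.0 / lam_n)
mc_mean, mc_sd = mean(ratios), stdev(ratios)
```

The Monte Carlo mean sits slightly above λ_t/λ_n (because E[1/Y] > 1/E[Y]), which is exactly the kind of discrepancy the approximations studied in the thesis must handle.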
Analytic autoethnography: a tool to inform the lecturer's use of self when teaching mental health nursing? Struthers, John, January 2012
This research explores the value of analytic autoethnography to develop the lecturer's use of self when teaching mental health nursing. Sharing the lecturer's self-understanding, developed through analytic reflexivity focused on their autoethnographic narrative, offers a pedagogical approach that contributes to the nursing profession's policy drive to increase the use of reflective practices. The research design required me to develop my own analytic autoethnography. Four themes emerged from the data: 'Being in between', 'Perceived vulnerability of self', 'Knowing and doing', and 'Uniting selves'. A methodological analysis of the processes involved in undertaking my analytic autoethnography raised issues pertaining to the timing, and health warnings, of exploring memory as data. Actor-Network Theory was used as an evaluative framework to reposition the research findings back into the relationships which support educational practices. The conclusion supports the use of analytic autoethnography to enable lecturers to share hidden practices which underpin the use of self within professional identities. Recommendations call for methodological literature which makes explicit the possible emotional reactions to the reconstruction of self through analysis of memories. Being able to share narratives offers a pedagogical approach based on the dilemmas and tensions of being human, bridging the humanity between service user, student and lecturer.
Early phase clinical trials are conducted with limited time and patient resources. Despite design restrictions, patient safety must be prioritised and trial conclusions must be accurate, maximising a promising treatment's chance of success in later large-scale, long-term trials. Increasing the efficiency of early phase clinical trials, through utilising available data more effectively, can lead to improved decision making during, and as a result of, the trial. This thesis contains three distinct pieces of research, each of which proposes a novel early phase clinical trial design with this overall objective. The initial focus of the thesis is on dose-escalation. In the single-agent setting, subgroups of the population, between which the reaction to treatment may differ, are accounted for in dose-escalation. This is achieved using a Bayesian model-based approach to dose-escalation with spike and slab priors in order to identify a recommended dose of the treatment (for use in later trials) in each subgroup. Accounting for a potential subgroup effect in a dose-escalation trial can yield safety benefits for patients within, and after, the trial due to subgroup-specific dosing, which should improve the benefit-risk ratio of the treatment. Dual-agent dose-escalation is considered next. In the dual-agent setting, single-agent data, including toxicity and pharmacokinetic exposure information, are available. This information is used to define escalation rules that combine the outputs of independent dose-toxicity and dose-exposure models which are fitted to emerging trial data. This solution is practical to implement and reduces the subjectivity that currently surrounds the use of exposure data in dose-escalation. In addition, escalation decisions and consistency of the final recommended dose-pair are improved. The focus of the third piece of research changes.
In this work, Bayesian sample size calculations for single-arm and randomised phase II trials with time-to-event endpoints are considered. Calculation of the sample size required for a trial is based on a proportional hazards assumption and utilises historical data on the control (and experimental) treatments. The sample sizes obtained are consistent with those currently used in practice while better accounting for available information and for uncertainty in the parameter estimates of the time-to-event distribution. Investigating allocation ratios in the randomised setting provides a basis for deciding whether a control arm is indeed necessary; that is, whether it is necessary for any patients in a randomised trial to be randomised to the control treatment arm.
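The proportional-hazards sample sizes discussed above are usually benchmarked against the standard frequentist result, Schoenfeld's formula for the required number of events. The sketch below is that frequentist baseline, not the thesis's Bayesian calculation; the design values (two-sided α = 0.05, 90% power, target hazard ratio 0.7) are illustrative, and the z-quantiles are hard-coded to avoid external dependencies.

```python
import math

def required_events(hr, alpha_z=1.959964, power_z=1.281552, alloc_ratio=1.0):
    """Schoenfeld's formula: number of EVENTS needed to detect hazard ratio `hr`.

    D = (z_{1-alpha/2} + z_{1-beta})^2 / (p (1 - p) (log hr)^2),
    where p = alloc_ratio / (1 + alloc_ratio) is the proportion allocated
    to the experimental arm. Defaults: two-sided alpha = 0.05, 90% power.
    """
    p = alloc_ratio / (1.0 + alloc_ratio)
    d = (alpha_z + power_z) ** 2 / (p * (1.0 - p) * math.log(hr) ** 2)
    return math.ceil(d)

events_1to1 = required_events(hr=0.7)                   # equal allocation
events_2to1 = required_events(hr=0.7, alloc_ratio=2.0)  # 2:1 favours experimental
```

Unequal allocation reduces p(1 − p) below its maximum at 1:1, so a 2:1 ratio needs more events for the same power, which is one reason the choice of allocation ratio matters when deciding whether a control arm is worthwhile.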
Application of Bayesian hierarchical models for the analysis of complex clinical trials: an analysis strategy based on two case studies in dental research. Gonzalez-Maffe, Juan Guillermo, January 2012
Aim: To develop a strategy for the analysis of complex experiments using Bayesian hierarchical models, and to demonstrate the advantage of the Bayesian formulation when analysing complex experiments. Methods: Designing complex experiments has become increasingly popular; such experiments help amplify the efficiency of clinical research. The Bayesian approach is a natural candidate to tackle complex problems in a straightforward manner, as it handles large amounts of missing data and multivariate response data efficiently. Joint models are formulated in order to deal with missing data and multivariate data. A strategy is developed for the analysis of complex experiments based on two clinical experiments in dentistry. Data: Two clinical experiments in dental research are selected for analysis. In dentistry, we encounter complex experiments as the individual units are the teeth, which are clustered within subjects. Results and Conclusion: The results indicate that using Bayesian joint models improves parameter estimation while taking into account the entire complexity of the study design. The Bayesian formulation gives us the advantage of estimating complex joint models in a straightforward manner. Bayesian joint models can deal with missing data and multivariate data efficiently, given the flexibility of MCMC analysis. The joint model propagates the entire uncertainty in the model into the posterior distribution. We can easily extend the model to account for different types of missing data, and/or account for different correlation structures when dealing with multivariate data.
Lambert, Paul Christopher
This thesis describes and develops the use of hierarchical models in medical research from both a classical and Bayesian perspective. Hierarchical models are appropriate when observations are clustered into larger units within a data set, which is a common occurrence in medical research. The use and versatility of hierarchical models is shown through a number of examples, with the aim of developing improved and more appropriate methods of analysis. The examples are real data sets and present real problems in terms of statistical analysis. The data sets presented include two data sets involved with longitudinal data where repeated measurements are clustered within individuals. One data set has repeated blood pressure measurements taken on pregnant women and the other consists of repeated peak expiratory flow measurements taken on asthmatic children. Bayesian and classical analyses are compared. A number of issues are explored including the modelling of complex mean profiles, interpretation and quantification of variance components and the modelling of heterogeneous within-subject variances. Other data sets are concerned with meta-analysis, where individuals are clustered within studies. The classical and Bayesian frameworks are compared and one data set investigates the potential to combine estimates from different study types in order to estimate the attributable risk. One of the meta-analysis data sets included individual patient data, where there is a substantial amount of missing covariate data. For this data set, models that incorporate individuals with incomplete data when modelling survival times for children with neuroblastoma are developed. This thesis thus demonstrates that hierarchical models are of great importance in analysing data in medical research. In many situations a Bayesian analysis provides a number of advantages over classical models, especially when introducing realistic complexity that would be hard to incorporate using classical methodology.
With the molecular revolution in medicine, many new potential prognostic and predictive factors are becoming available. However, whether new factors will lead to substantial improvement in the accuracy of prognostic assessments requires the use of a suitable performance measure when considering different prognostic models. Several such measures have been proposed for use in survival analysis, with a particular emphasis on measures proposed for the Cox proportional hazards model. However, there is no consensus of opinion on this issue. The proposed measures make use of a wide spectrum of techniques, from information theory to statistical imputation. No comprehensive systematic summary of these measures has been done, and no adequate comparison of measures, theoretically or in practice, has been reported. This PhD studies the proposed measures systematically. It defines a set of criteria that a measure should possess in the context of survival analysis. Essential aspects of a measure are that it should be consistent under different degrees of censoring and different sample sizes; it should also possess properties such as variable and parameter monotonicity. Desirable properties of a measure are robustness and extendability. This thesis compares the existing measures using these criteria, discussing their strengths and shortcomings. From a practical point of view, a discussion of why these measures are important and what information they can provide in medical research, practical data analysis, and, perhaps most importantly, prognostic modelling is presented. Data have been taken from completed randomised controlled trials in several diseases carried out by the MRC Clinical Trials Unit and other research organisations. The measures that have the best properties will be applied to models fitted to these datasets. This allows us to quantify and assess the prognostic ability of the available prognostic factors in several diseases.
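One widely used measure of the kind surveyed here is Harrell's concordance index, which handles right-censoring by counting only "usable" pairs. The sketch below is a minimal illustration on toy data, not an implementation from the thesis and not the MRC trial data.

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: among usable pairs, the proportion in which the subject
    with the higher predicted risk fails earlier.

    events[i] is 1 if times[i] is an observed failure, 0 if censored.
    A pair (i, j) is usable only if subject i is observed to fail
    strictly before time j; tied risk scores count as half-concordant.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy censored data (illustrative only)
times = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]            # two subjects are censored
risk = [0.9, 0.7, 0.6, 0.3, 0.1]    # risks perfectly anti-ordered with time
c = concordance_index(times, events, risk)  # → 1.0
```

C ranges from 0.5 (no discrimination) to 1 (perfect ranking); its sensitivity to the censoring distribution is precisely the kind of consistency criterion the thesis uses to compare candidate measures.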
Desai, Amit Y.
Hydroxyapatite [Ca10(PO4)6(OH)2, HA] is used in many biomedical applications including bone grafts and joint replacements. Due to its structural and chemical similarities to human bone mineral, HA promotes growth of bone tissue directly on its surface. Substitution of other elements has shown the potential to improve the bioactivity of HA. Magnetron co-sputtering is a physical vapour deposition technique which can be used to create thin coatings with controlled levels of a substituting element. Thin films of titanium-doped hydroxyapatite (HA-Ti) have been deposited onto silicon substrates at three different compositions. With direct current (dc) power to the Ti target of 5, 10, and 15 W, films with compositions of 0.7, 1.7 and 2.0 at.% titanium were achieved. As-deposited films, 1.2 μm thick, were amorphous but transformed into crystalline films after heat treatment at 700 °C. Raman spectra of the PO4 band suggest that titanium does not substitute for phosphorus. X-ray diffraction revealed that the c lattice parameter increases with additional titanium content. XRD traces also showed that titanium may be phase-separating into TiO2, a result which is supported by analysis of the oxygen 1s XPS spectrum. In-vitro observations show good adhesion and proliferation of human osteoblast (HOB) cells on the surface of HA-Ti coatings. Electron microscopy shows many processes (i.e. filopodia) extended from cells after day one in vitro and a confluent multi-layer of HOB cells after day three. These findings indicate that there may be potential for HA-Ti films as a novel implant coating to improve upon the bioactivity of existing coatings.
Governments, funding agencies, academic institutions, and health care policy makers are increasingly investing in the design, development, and dissemination of systematic reviews (SRs) to inform clinical practice guidelines, ethical guidance of clinical research, and health care practice and policy. SRs need to be sensitive to the dynamic nature of new evidence, such as published papers. The emergence of new evidence over time may undermine the validity of conclusions and recommendations in any given SR and subsequent practice guideline. This issue has only started to be more seriously considered during the last decade or so. Now it is clear that the use of out-dated evidence can lead to a waste of resources, provision of redundant, ineffective or even harmful health care. The author of this dissertation and his colleagues conducted and published three empirical studies and two conceptual articles (in six peer-reviewed journal publications), which addressed the methodologic aspects of when and how to update SRs. This PhD project provides a summary of these publications. The work described herein has had a significant impact on raising awareness and initiating new research efforts for keeping SRs up-to-date. Publication 1 proposed the first formal definition of what constitutes an update of a SR. The article presented distinguishing features of an updated vs. not updated or a new review. Publication 2 (or Publication 3) systematically reviewed methods, techniques, and strategies describing when and how to update SRs (Study #1). Publication 4, an international survey (Study #2), identified and described updating practices and policies of organisations involved in the production and commission of SRs. 
Publication 5 reviewed the knowledge and efforts in updating SRs and provided guidance for authors and SR groups as to when and how to update comparative effectiveness reviews produced by the Agency for Healthcare Research and Quality’s (AHRQ) Evidence-based Practice Centres (EPCs) throughout North America. Publication 6 (Study #3) described the development, piloting, and feasibility of a surveillance system to assess the need for updating comparative effectiveness reviews produced by the AHRQ’s EPC Program. This surveillance method has proved to be an efficient approach for prioritising SRs with respect to updating need.