  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
41

Analysis of longitudinal binary data : an application to a disease process.

Ramroop, Shaun. January 2008 (has links)
The analysis of longitudinal binary data can be undertaken using any of three families of models, namely marginal, random effects and conditional models. Each family of models has its own merits and demerits. These models are applied to longitudinal binary data on a childhood disease, namely the Respiratory Syncytial Virus (RSV) data collected from a study in Kilifi, coastal Kenya. The marginal model was fitted using generalized estimating equations (GEE). The random effects models were fitted using ‘Proc GLIMMIX’ and ‘NLMIXED’ in SAS and then again in Genstat. Because the data are state-transition data with the Markovian property, the conditional model was used to capture the dependence of the current response on the previous response, known as the history. The data set has two main complicating issues. Firstly, there is the question of developing a stochastically based probability model for the disease process. In the current work we use direct likelihood and generalized linear modelling (GLM) approaches to estimate important disease parameters. The force of infection and the recovery rate are the key parameters of interest. The findings of the current work are consistent and in agreement with those in White et al. (2003). The time dependence of the RSV disease process is also highlighted in the thesis by fitting monthly piecewise models for both parameters. Secondly, there is the issue of incomplete data in the analysis of longitudinal data. Commonly used methods to analyze incomplete longitudinal data include the well-known available case (AC) analysis and last observation carried forward (LOCF). However, these methods rely on strong assumptions, such as missing completely at random (MCAR) for AC analysis and an unchanging profile after dropout for LOCF analysis. Such assumptions are generally too strong to hold.
In recent years, methods with weaker assumptions, such as missing at random (MAR), have become available for analyzing incomplete longitudinal data. We therefore make use of multiple imputation via chained equations, which requires the MAR assumption, and maximum likelihood methods, under which the missing data mechanism is ignorable provided it is MAR. We are thus faced with the problem of incomplete, repeated, non-normal data, suggesting the use of at least a generalized linear mixed model (GLMM) to account for natural individual heterogeneity. Parameter estimates obtained using the different methods of handling dropout are compared in order to evaluate the advantages of each approach. A survival analysis approach was also used to model the data, given the presence of multiple events per subject and the times between these events. / Thesis (Ph.D.)-University of KwaZulu-Natal, Pietermaritzburg, 2008.
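The conditional (transition) model described above can be illustrated with a toy discrete-time two-state Markov sketch. This is not the thesis code and the binary sequences below are invented: the per-interval transition probabilities are estimated by maximum likelihood as transition counts divided by row totals, with P(0→1) standing in for the force of infection and P(1→0) for the recovery rate.

```python
# Hypothetical sketch: discrete-time two-state (uninfected/infected) Markov model.
# MLEs of the transition probabilities are transition counts over row totals;
# P(0->1) approximates the force of infection, P(1->0) the recovery rate.
def transition_mle(sequences):
    counts = {(a, b): 0 for a in (0, 1) for b in (0, 1)}
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            counts[(prev, cur)] += 1
    p01 = counts[(0, 1)] / (counts[(0, 0)] + counts[(0, 1)])
    p10 = counts[(1, 0)] / (counts[(1, 0)] + counts[(1, 1)])
    return p01, p10  # (force of infection, recovery rate) per interval

# Invented monthly infection-status sequences for three children:
sequences = [
    [0, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
]
foi, recovery = transition_mle(sequences)
```

A full analysis would add covariates (e.g. age, season) to these transition probabilities via a GLM with the previous state as a predictor, as the abstract describes.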
42

Natural History and Determinants of Changes in Physiological Variables after Ischaemic Stroke.

Andrew Wong Unknown Date (has links)
Background: The prognosis after an ischaemic stroke is determined largely by how much damage is done to the brain. Currently, physicians possess only a few therapies that can improve outcome. Early changes in common physiological variables, such as blood pressure, temperature and blood glucose levels, represent a potential therapeutic target, and manipulation of these variables may eventually yield an effective and potentially widely applicable range of therapies for optimising stroke recovery. However, the natural history and determinants of physiological change require clarification before the effects of manipulating physiology can be assessed. Previous research suggests that blood pressure and glucose fall over the first few days and that temperature rises over this time. Some of the determinants of this change have been identified, for example stroke severity, but their influence has not been accurately quantified. The lack of detail in previous attempts to characterise these relationships is partly due to a reliance on traditional cross-sectional statistical techniques. My aims were to use the most apposite statistical technique, namely mixed-effects modelling, to accurately characterise the temporal patterns of post-stroke blood pressure, temperature and glucose, and to identify baseline factors that represent determinants of change in these three physiological variables.

Methods: A cohort of ischaemic stroke patients was recruited within 48 hours of stroke onset, and their blood pressure, temperature and glucose were recorded at least every 4 hours until 48 hours post-stroke. Factors representing potential determinants of change in these physiological variables were also recorded, including stroke severity and the presence of infection. There were no protocols dictating the treatment of these physiological variables, but any treatments given were also recorded.
In each analysis, mixed-effects models were generated with serial measures of physiology as the outcome factors and the potential determinants of physiological change as the explanatory factors. These determinants included time, representing the temporal patterns of change. Patients with diabetes were excluded from the analysis of glucose, for several reasons, including the excessive impact of dietary intake on glycaemia in patients with diabetes.

Results: There were 157 eligible patients overall. The analysis of blood pressure (n=157) revealed a linear systolic blood pressure fall of 14.9 mmHg (95% confidence interval (CI) 6.2, 22.6 mmHg) and a diastolic blood pressure fall of 6.1 mmHg (95% CI 1.6, 10.5 mmHg) over the first 48 hours after stroke. Patients with post-stroke infection exhibited a slight rise in systolic blood pressure of about 4 mmHg. Higher systolic blood pressures were seen in older patients and in those with pre-existing or previously treated hypertension, previous strokes or transient ischaemic attacks, in regular alcohol users and in those with mild to moderately severe stroke. Systolic blood pressures were 4.6 mmHg (95% CI 2.35, 6.85 mmHg) lower in current smokers than in non-smokers. Among the 156 patients eligible for the temperature analysis, temperature rose by 0.17 °C in patients with mild stroke (National Institutes of Health Stroke Score (NIHSS) ≤6) and by 0.35 °C in patients with moderate to severe stroke (NIHSS ≥6) over the first 48 hours after stroke. Temperatures were higher in those who required paracetamol. Temperatures were 0.33 °C (95% CI 0.07, 0.58) higher in patients with infection, and this effect was fixed during the 48-hour observation period. Blood glucose remained static in the 124 patients without diabetes during the first 48 hours after stroke. Glucose levels were higher in those requiring glucose-lowering therapy and in those with more severe stroke.
Conclusions: I have quantified the amount by which blood pressure falls and temperature rises over the first 48 hours after stroke. In addition, I have shown that mean glucose levels remain static during this time, suggesting that previous reports of acutely resolving post-stroke hyperglycaemia may have represented misinterpretation of regression to the mean. Several determinants of change in post-stroke physiological variables were identified, with unexpected findings in several cases. Higher systolic blood pressures were seen with stroke of moderate severity but not with mild or severe stroke. This relationship was fixed during the first 48 hours after stroke; while more severe stroke was also associated with higher temperatures, the latter effect became more marked as time passed. Conversely, infection was associated with a fixed elevation in temperature but with systolic blood pressures that rose slightly during the observation period. These apparent inconsistencies require clarification in future work, for example in studies of whether markers of the inflammatory or neuroendocrine stress responses evolve in parallel with the changes in physiological variables. This work provides fundamental information regarding the natural history and determinants of changes in physiological variables post-stroke, and will improve the design of future studies investigating the prognostic significance of untreated and treated physiological variables after stroke. This will ultimately lead to the refinement of clinical guidelines for the management of physiological variables post-stroke and to better outcomes for stroke patients.
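The temporal-trend estimation above relies on mixed-effects models; a crude two-stage approximation conveys the idea. This sketch is not the thesis analysis and the readings are invented: stage 1 fits an ordinary least-squares line to each patient's serial measurements, stage 2 averages the per-patient slopes to estimate the population trend (a full mixed model would additionally weight patients and model within-patient covariance).

```python
# Hedged sketch: two-stage approximation to a random-intercept/random-slope
# mixed-effects analysis of serial blood-pressure readings.
def ols_slope(times, values):
    # Closed-form OLS slope for one patient's readings.
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def mean_slope(patients):
    # Stage 2: average the per-patient slopes.
    slopes = [ols_slope(t, v) for t, v in patients]
    return sum(slopes) / len(slopes)

# Invented data: hours post-stroke vs systolic BP (mmHg) for three patients.
patients = [
    ([0, 12, 24, 36, 48], [170, 166, 162, 158, 154]),
    ([0, 12, 24, 36, 48], [150, 149, 148, 147, 146]),
    ([0, 12, 24, 36, 48], [160, 157, 154, 151, 148]),
]
trend = mean_slope(patients)  # mmHg per hour; negative = falling pressure
```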
43

Long-term progression of structural joint damage in early rheumatoid arthritis

Carpenter, Lewis January 2017 (has links)
Rheumatoid Arthritis (RA) is a chronic auto-immune disease that causes inflammation in the joints. Left uncontrolled, this prolonged inflammation can lead to pain and structural damage, resulting in erosions of the bones and total breakdown of the surrounding cartilage. Structural joint damage, measured by plain radiographs, is an important outcome measure in RA. It provides an objective marker of disease activity for assessing any improvement or failure of treatments in controlling the disease. Increased long-term joint damage has been linked with increased functional disability and decreased quality of life for RA patients. While a range of studies have looked at radiographic outcomes from observational data, they tend to be restricted to historical cohorts, with little long-term data on how radiographic progression may have changed in line with changes in clinical management. Additionally, these studies have not used appropriate statistical methods to account for non-normal data distributions and within-patient variation over time. As a result, the main aim of this thesis is to investigate the long-term progression of structural joint damage in patients with early RA. The specific objectives were to: (1) investigate the current evidence base to identify common methods of measuring and analysing radiographic outcomes, (2) assess which statistical methods are most appropriate for modelling long-term radiographic data, (3) use these models to understand the natural progression of radiographic damage using data from two UK inception cohorts, and finally, (4) expand these models to investigate the long-term relationship of radiographic damage with two important clinical outcomes: disease activity and functional disability. The analysis is based on longitudinal data from two UK prospective, multi-centre, early RA observational cohorts.
These cohorts represent two distinct eras in the management and treatment of RA, making them invaluable for investigating how key RA outcomes have progressed in clinical practice over time. Using multi-level count models, precise rates of radiographic progression for both cohorts are presented. The models examine how seropositive RA and increased disease activity are related to increased radiographic progression, and what impact this has on functional disability. The results show that rates of radiological damage have declined dramatically in recent years. Possible factors contributing to these declines include both milder disease and more effective treatment strategies. Analysis of the earlier cohort (1986-2001) shows how seropositive RA and increased disease activity led to clinically meaningful increases in radiological damage. Conversely, their impact on patients in the more recent cohort (2002-2011) suggests that their effect on radiographic progression is reduced, with increases in radiological damage no larger than clinically meaningful thresholds. This has large implications for the debate around the use of biologic therapies in patients with less severe RA. However, more data are sorely needed, particularly long-term radiographic data from patients on biologic treatments, before any definitive conclusions can be drawn. The possible impact of these declines on functional disability appears to be relatively small. The analysis shows that radiographic damage is more strongly associated with functional disability in later disease, but there is little evidence to indicate that declines in radiographic damage have led to large improvements in long-term functional disability. These findings are explored within the framework of a dual-pathway model, which suggests that functional disability is caused by two distinct mechanisms: either structural joint damage, or increased pain.
Research so far has predominantly focused on pharmacological treatments for reducing inflammation. More research is needed to explore the role of psychosocial factors and pain perception in order to create a more holistic treatment programme for RA patients.
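The "rates of radiographic progression" reported from the multi-level count models can be illustrated with the simplest building block of such an analysis: a Poisson rate per patient-year with a large-sample, log-scale confidence interval. This is an illustrative sketch only; the counts are invented and the thesis models additionally adjust for covariates and within-patient correlation.

```python
import math

# Illustrative sketch: crude Poisson progression rate (units of radiographic
# damage accrued per patient-year of follow-up) with a Wald CI on the log scale.
def poisson_rate_ci(events, person_years, z=1.96):
    rate = events / person_years
    se_log = 1.0 / math.sqrt(events)  # SE of log(rate) for a Poisson count
    return rate, rate * math.exp(-z * se_log), rate * math.exp(z * se_log)

# Invented cohort totals: 120 units of damage over 400 patient-years.
rate, lo, hi = poisson_rate_ci(events=120, person_years=400.0)
```

Comparing two eras of treatment would then reduce to a ratio of two such rates, which is what a multi-level count model estimates after adjustment.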
44

Early screening and diagnosis of diabetic retinopathy

Leontidis, Georgios January 2016 (has links)
Diabetic retinopathy (DR) is a chronic, progressive and potentially vision-threatening eye disease. Early detection and diagnosis of DR, prior to the development of any lesions, is paramount for dealing with it more efficiently and managing its consequences. This thesis investigates and proposes a number of candidate geometric and haemodynamic biomarkers, derived from fundus images of the retinal vasculature, which can be reliably utilised for identifying the progression from diabetes to DR. Numerous studies in the literature investigate some of these biomarkers in independent normal, diabetic and DR cohorts. To the best of my knowledge, however, none investigates more than 100 biomarkers altogether, both geometric and haemodynamic, for identifying the progression to DR, while also using a novel experimental design in which exactly the same matched junctions and subjects are evaluated over a four-year period covering the last three years pre-DR (still diabetic eye) and the onset of DR (progressors’ group). Multiple additional conventional experimental designs, such as non-matched junctions, a non-progressors’ group, and combinations of them, are also adopted in order to demonstrate the superiority of this type of analysis for retinal features. This thesis therefore aims to present a complete framework and some novel knowledge, based on statistical analysis, feature selection processes and classification models, so as to provide robust, rigorous and meaningful statistical inferences, alongside efficient feature subsets that can identify the stages of the progression. In addition, a new and improved method for more accurately summarising the calibres of the retinal vessel trunks is also presented.
The first original contribution of this thesis is that a series of haemodynamic features (blood flow rate, blood flow velocity, etc.), estimated from the retinal vascular geometry under certain boundary conditions, are applied to studying the progression from diabetes to DR. These features are found to contribute unambiguously to the inferences and to the understanding of the progression, yielding significant results, mainly for the venular network. The second major contribution is the proposed framework and experimental design for more accurately and efficiently studying and quantifying the vascular alterations that occur during the progression to DR and that can be safely attributed to this progression alone. The combination of the framework and the experimental design leads to sounder and more concrete inferences, providing a set of features, such as the central retinal artery and vein equivalents, fractal dimension and blood flow rate, that are indeed biomarkers of progression to DR. The third major contribution of this work is the new and improved method for more accurately summarising the calibre of an arterial or venular trunk, with a direct application to estimating the central retinal artery equivalent (CRAE), the central retinal vein equivalent (CRVE) and their quotient, the arteriovenous ratio (AVR). Finally, the improved method is shown to make a notable difference in the estimates when compared with the established alternative method in the literature, with an improvement of between 0.24% and 0.49% in mean absolute percentage error and of 0.013 in area under the curve.
I have demonstrated that thoroughly planned experimental studies based on a comprehensive framework, combining image processing algorithms, statistical and classification models, feature selection processes, and robust haemodynamic and geometric features extracted from the retinal vasculature (as a whole and from specific areas of interest), together provide succinct evidence that early detection of the progression from diabetes to DR can indeed be achieved. The performance achieved by the eight different classification combinations, in terms of the area under the curve, varied from 0.745 to 0.968.
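The CRAE/CRVE/AVR summaries mentioned above are conventionally computed by iteratively pairing and combining vessel calibres. The sketch below follows one common scheme from the literature (the revised Knudtson formulas, with branch coefficients 0.88 for arterioles and 0.95 for venules); it is not the thesis's improved method, and the widths are invented.

```python
import math

# Assumed convention (not the thesis's improved method): combine the measured
# calibres by repeatedly pairing the widest with the narrowest remaining vessel,
# W = coeff * sqrt(w1^2 + w2^2), carrying the middle value when the count is odd.
def knudtson_combine(widths, coeff):
    widths = sorted(widths)
    while len(widths) > 1:
        carry = [widths.pop(len(widths) // 2)] if len(widths) % 2 == 1 else []
        half = len(widths) // 2
        combined = [coeff * math.hypot(widths[i], widths[-1 - i]) for i in range(half)]
        widths = sorted(combined + carry)
    return widths[0]

def avr(arteriole_widths, venule_widths):
    crae = knudtson_combine(arteriole_widths, 0.88)  # arteriolar coefficient
    crve = knudtson_combine(venule_widths, 0.95)     # venular coefficient
    return crae / crve

ratio = avr(arteriole_widths=[10.0, 10.0], venule_widths=[10.0, 10.0])
```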
45

Event History Analysis in Multivariate Longitudinal Data

Yuan, Chaoyu January 2021 (has links)
This thesis studies event history analysis in multivariate longitudinal observational databases (LODs) and its application in postmarketing surveillance to identify and measure the relationship between health outcome events and drug exposures. The LODs contain repeated measurements on each individual whose healthcare information is recorded electronically. Novel statistical methods are being developed to handle challenging issues arising from the scale and complexity of postmarketing surveillance LODs. In particular, the self-controlled case series (SCCS) method has been developed with two major features: (1) it uses only individuals with at least one event for analysis and inference, and (2) each individual serves as his/her own control, effectively requiring a person to switch treatments during the observation period. Although this method handles heterogeneity and bias, it does not take full advantage of the observational databases. In this connection, the SCCS method may lead to a substantial loss of efficiency. We propose a multivariate proportional intensity modeling approach with random effects for multivariate LODs. The proposed method can explain the heterogeneity and eliminate bias in LODs. It also handles multiple types of event cases and makes full use of the observational databases. In the first part of this thesis, we present the multivariate proportional intensity model with correlated frailty. We explore the correlation structure between multiple types of clinical events and drug exposures. We introduce a multivariate Gaussian frailty to incorporate the within-subject heterogeneity, i.e. hidden confounding factors. For parameter estimation, we adopt a Bayesian approach, using the Markov chain Monte Carlo method to draw samples from the full likelihood. We compare the new method with the SCCS method and with some frailty models through simulation studies.
We apply the proposed model to an electronic health record (EHR) dataset and identify event types as defined in the Observational Medical Outcomes Partnership (OMOP) project. We show that the proposed method outperforms existing methods in terms of common metrics, such as receiver operating characteristic (ROC) metrics. Finally, we extend the proposed correlated frailty model to include a dynamic random effect. We establish a general asymptotic theory for the nonparametric maximum likelihood estimators in terms of identifiability, consistency, asymptotic normality and asymptotic efficiency. A detailed illustration of the proposed method is given using the clinical event myocardial infarction (MI) and treatment with angiotensin-converting-enzyme (ACE) inhibitors, showing the dynamic effect of unobserved heterogeneity.
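The SCCS baseline against which the thesis compares its model can be illustrated in a heavily simplified form. Under the assumption (made only for this sketch) that every individual has a single exposure window covering the same fraction τ of their observation period, the conditional likelihood is binomial and the relative incidence has a closed-form MLE; the event counts below are invented.

```python
# Toy SCCS sketch (simplified, not the thesis model): given a person's total
# event count, the number of events falling inside the exposure window is
# Binomial with p = rho*tau / (rho*tau + 1 - tau), where rho is the relative
# incidence and tau the fraction of observed time spent exposed. Inverting
# p_hat = exposed/total gives the MLE of rho.
def sccs_relative_incidence(exposed_events, total_events, tau):
    p_hat = exposed_events / total_events
    return p_hat * (1 - tau) / ((1 - p_hat) * tau)

# Invented aggregate: 30 of 60 events occurred during exposure covering 25% of time.
rho = sccs_relative_incidence(exposed_events=30, total_events=60, tau=0.25)
```

With half the events packed into a quarter of the time, the estimated relative incidence is 3, i.e. the event rate triples during exposure.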
46

Mind the developmental gap: Identifying adverse drug effects across childhood to evaluate biological mechanisms from growth and development

Giangreco, Nicholas Paul January 2022 (has links)
Adverse drug reactions are a leading cause of morbidity and mortality and cost the healthcare system billions of dollars. In children, there is increased risk of adverse drug reactions, with potentially lasting adverse effects into adulthood. The current pediatric drug safety landscape, including clinical trials, is limited in that it rarely includes children and relies on extrapolation from adults. Children are not small adults but go through an evolutionarily conserved and physiologically dynamic process of growth and maturation. We hypothesize that adverse drug reactions manifest from the interaction between drug exposure and dynamic biological processes during child growth and development. While pediatric pharmacologists have studied and recognized this interaction, the evidence from these studies has focused on a few well-known drug toxicities, largely within animal models that have limited translation to children and their clinical care. Moreover, preclinical studies during drug development do not consider the growth and maturation of children, which severely limits our knowledge of drug safety in this population. Post-marketing pediatric drug safety studies, on the other hand, leverage large amounts of observations to identify and characterize adverse drug events in the pediatric population after drugs enter the market. However, these observational studies have been limited to event surveillance and have not focused on evaluating why adverse drug events may manifest in children. We hypothesize that by developing statistical methodologies with prior knowledge of dynamic, shared information during development, we can improve the detection of adverse drug events in children. We further hypothesize that detecting adverse drug events in this way also improves the evaluation of dynamic biological and physiological processes during child growth and development.
In chapter 1, we described the pediatric drug safety landscape, dynamic processes from pediatric developmental biology, and the motivation for a large-scale, data-driven approach to studying the interaction between drug treatment and child development. In chapter 2, using drug event reports collected by the Food and Drug Administration (FDA), we evaluated statistical models for identifying temporal trends of adverse effects across childhood. We found that the generalized additive model (GAM) showed improved detection performance compared with a popular disproportionality method, especially for rare pediatric adverse drug events. In chapter 3, we applied covariate-adjusted drug-event GAMs in a systematic way to develop a resource of nearly half a million adverse drug event (ADE) risk estimates across child development stages. We showed not only that significant ADEs through childhood recapitulate dynamic organ and system maturation, but also that the resource provides granular, development-specific risk estimates, previously unavailable, for known pediatric drug effects. Importantly, this approach facilitated the evaluation of dynamic biological processes, such as drug-metabolizer gene expression levels across childhood, which we observed to coincide with dynamic risk of adverse drug effects. In chapter 4, we performed several case studies showing population-level evidence for well-known pediatric adverse drug reactions using our generated resource. In addition, we developed an accessible web portal, the Pediatric Drug Safety portal (PDSportal), to retrieve from our resource population-level evidence of user-specified adverse drug events across child development stages. In conclusion, we summarize three key directions for data-driven pediatric drug safety research: quantifying child versus adult drug safety profiles, predicting pre-clinical drug toxicity across childhood, and detecting genetic susceptibility to pediatric adverse drug events.
Our results demonstrate that developing pediatric drug safety methods directly for children using data-driven approaches improves both identification and evaluation of adverse drug events during the period of child growth and development.
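The "popular disproportionality method" that the GAM is compared against in chapter 2 is typically computed from a 2×2 table of spontaneous reports. As a point of reference (the counts below are invented, and the abstract does not specify which disproportionality statistic was used), here is the standard proportional reporting ratio (PRR):

```python
# Minimal disproportionality sketch: proportional reporting ratio from a 2x2
# table of spontaneous reports. Counts are invented.
#                      event of interest   all other events
#   drug of interest:        a                    b
#   all other drugs:         c                    d
def prr(a, b, c, d):
    # Proportion of the drug's reports mentioning the event, relative to the
    # same proportion among reports for all other drugs.
    return (a / (a + b)) / (c / (c + d))

value = prr(a=10, b=90, c=100, d=9900)
```

A PRR of 10 here means the event is reported ten times as often, proportionally, for the drug of interest as for the background; signal-detection pipelines usually add thresholds on counts and confidence bounds before flagging a drug-event pair.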
47

Data Quality Assessment for the Secondary Use of Person-Generated Wearable Device Data: Assessing Self-Tracking Data for Research Purposes

Cho, Sylvia January 2021 (has links)
The Quantified Self movement has led to increased routine use of consumer wearables, generating large amounts of person-generated wearable device data. This has become an opportunity for researchers to conduct research with large-scale person-generated wearable device data without having to collect data in a costly and time-consuming way. However, there are known challenges with wearable device data, such as missing or inaccurate data, which raises the need to assess the quality of the data before conducting research. Currently, there is a lack of in-depth understanding of the data quality challenges involved in using person-generated wearable device data for research purposes, and of how data quality assessment should be conducted. Data quality assessment can be a particular burden for those without domain knowledge of a specific data type, which may be the case for emerging biomedical data sources. The goal of this dissertation is to advance knowledge on the data quality challenges and assessment of person-generated wearable device data, and to facilitate data quality assessment for those without domain knowledge of the emerging data type. The dissertation consists of two aims: (1) identifying the data quality dimensions important for assessing the quality of person-generated wearable device data for research purposes, and (2) designing and evaluating an interactive data quality characterization tool that supports researchers in assessing the fitness-for-use of fitness tracker data. In the first aim, a multi-method approach was taken, comprising a literature review, a survey, and focus group discussion sessions. We found that intrinsic data quality dimensions applicable to electronic health record data, such as conformance, completeness, and plausibility, are also applicable to person-generated wearable device data.
In addition, contextual/fitness-for-use dimensions such as breadth and density completeness, and temporal data granularity, were identified, given that our focus was on assessing data quality for research purposes. In the second aim, we followed an iterative design process, from understanding informational needs to designing a prototype and evaluating the usability of the final version of the tool. The tool allows users to customize the definition of data completeness (fitness-for-use measures) and provides a data summary of the cohort that meets that definition. We found that an interactive tool that incorporates fitness-for-use measures and allows customization of the data completeness definition can support fitness-for-use assessment more accurately and in less time than a tool that presents only intrinsic data quality measures.
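The "breadth" and "density" completeness dimensions above, and the idea of a user-customized fitness-for-use definition, can be sketched as simple per-user checks. The function names, thresholds, and minute-level sampling schedule below are invented for illustration and are not taken from the dissertation or its tool:

```python
# Hedged illustration of customizable completeness checks for wearable data.
# breadth  = fraction of study days with any data at all;
# density  = average fraction of the expected samples present on observed days.
def breadth(days_with_data, study_days):
    return len(days_with_data) / study_days

def density(samples_per_day, expected_per_day):
    kept = [min(n, expected_per_day) / expected_per_day for n in samples_per_day]
    return sum(kept) / len(kept)

def meets_fitness_for_use(days_with_data, samples_per_day, study_days,
                          expected_per_day, min_breadth, min_density):
    # A researcher-chosen definition of "complete enough" for their question.
    return (breadth(days_with_data, study_days) >= min_breadth
            and density(samples_per_day, expected_per_day) >= min_density)

# Invented user: 8 of 10 study days observed, minute-level sampling expected.
ok = meets_fitness_for_use(
    days_with_data={1, 2, 3, 5, 6, 8, 9, 10},
    samples_per_day=[1440, 1200, 960, 1440, 1440, 720, 1440, 1200],
    study_days=10, expected_per_day=1440,
    min_breadth=0.7, min_density=0.7)
```

Different research questions would plug in different thresholds, which is the customization the dissertation's tool supports interactively.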
48

Statistical Methods for Learning Patients Heterogeneity and Treatment Effects to Achieve Precision Medicine

Xu, Tianchen January 2022 (has links)
The burgeoning adoption of modern technologies provides a great opportunity for gathering multiple modalities of comprehensive personalized data on individuals. This thesis aims to address statistical challenges in analyzing these data, including patient-specific biomarkers, digital phenotypes and clinical data available from electronic health records (EHRs) linked with other data sources, to achieve precision medicine. The first part of the thesis introduces a dimension reduction method for microbiome data to facilitate subsequent analyses such as regression and clustering. We apply the proposed zero-inflated Poisson factor analysis (ZIPFA) model to the Oral Infections, Glucose Intolerance and Insulin Resistance Study (ORIGINS) and provide valuable insights into the relation between the subgingival microbiome and periodontal disease. The second part focuses on modeling the intensive longitudinal digital phenotypes collected by mobile devices. We develop a method based on a generalized state-space model to estimate the latent process of a patient's health status. The application to the Mobile Parkinson's Observatory for Worldwide Evidence-based Research (mPower) data reveals the low-rank structure of digital phenotypes and infers the short-term and long-term Levodopa treatment effects. The third part proposes a self-matched learning method to learn an individualized treatment rule (ITR) from longitudinal EHR data. The medical history data in EHRs provide an opportunity to alleviate unmeasured time-invariant confounding by matching different periods of treatment within the same patient (self-controlled matching). We estimate the ITR for type 2 diabetes patients to reduce the risk of diabetes-related complications, using EHR data from New York Presbyterian (NYP) Hospital. Furthermore, we include an additional example: a self-controlled case series (SCCS) study of the side effects of stimulants.
Significant associations between the use of stimulants and mortality are found in both the FDA Adverse Event Reporting System and the SCCS study, but the latter uses a much smaller sample, which suggests the high efficiency of the SCCS design.
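The zero-inflated Poisson distribution underlying the ZIPFA model above has a simple two-component form: with probability π a count is a structural zero, otherwise it is Poisson(λ). The sketch below shows only this building block (the factor-analysis layer of ZIPFA is not reproduced here), with invented parameter values:

```python
import math

# Zero-inflated Poisson pmf: structural zeros with probability pi, otherwise
# an ordinary Poisson(lam) count. Microbiome count tables are dominated by
# zeros, which is what motivates the zero-inflation component.
def zip_pmf(k, pi, lam):
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# The pmf over a generous range of counts should sum to ~1.
total = sum(zip_pmf(k, pi=0.3, lam=2.0) for k in range(50))
```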
49

A practical introduction to medical statistics.

Scally, Andy J. 16 October 2013 (has links)
Medical statistics is a vast and ever-growing field of academic endeavour, with direct application to developing the robustness of the evidence base in all areas of medicine. Although the complexity of available statistical techniques has continued to increase, fuelled by the rapid data processing capabilities of even desktop/laptop computers, medical practitioners can go a long way towards creating, critically evaluating and assimilating this evidence with an understanding of just a few key statistical concepts. While the concepts of statistics and ethics are not common bedfellows, it should be emphasised that a statistically flawed study is also an unethical study.[1] This review will outline some of these key concepts and explain how to interpret the output of some commonly used statistical analyses. Examples will be confined to two-group tests on independent samples, using both a continuous and a dichotomous/binary outcome measure.
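The two-group setting described in the review can be sketched directly: a Welch t statistic for a continuous outcome and a two-proportion z test (with a normal-approximation p-value) for a binary outcome. The review does not specify these exact tests, so this is an assumed, minimal pairing with invented data:

```python
import math

# Welch t statistic for two independent samples of a continuous outcome
# (unequal variances allowed; degrees of freedom omitted in this sketch).
def welch_t(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Two-proportion z test for a binary outcome, with a two-sided p-value from
# the normal approximation via the error function.
def two_proportion_z(successes1, n1, successes2, n2):
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

t = welch_t([5.1, 4.9, 5.6, 5.8, 5.2], [4.2, 4.5, 4.1, 4.8, 4.4])
z, p = two_proportion_z(30, 100, 45, 100)
```

In practice the continuous comparison would report the Welch-Satterthwaite degrees of freedom and a confidence interval for the mean difference, which is the output a reader of such a review learns to interpret.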
50

Considerações sobre a estatística médica: uma análise crítica do movimento "Medicina baseada em evidências" / Thoughts on medical statistics: a critical analysis of "Evidence-based medicine"

Hadad Filho, Alvaro 12 December 2018 (has links)
Evidence-based medicine (EBM) is a medical movement whose first appearance dates back to the 1990s. Since then, it has received wide acceptance from the medical community and international health systems. Among its most important characteristics are the demand that clinical practice be based on the best current evidence, the hierarchization of evidence, the valorisation of randomized controlled trials, and, especially, the extensive recourse to procedures of statistical analysis. This Master's dissertation presents the EBM movement, describes its central concepts and procedures, and identifies some of its historical antecedents. Special consideration is given to the concepts of randomization, statistical significance, scientific evidence, and therapeutic efficacy. Finally, we develop a critique of the conceptions of scientificity and progress defended by EBM proponents, and use it as a starting point for general considerations about the epistemological status of medicine, medical progress, and the functions that statistics performs in contemporary medicine.
