291

Supporting the information systems requirements of distributed healthcare teams

Skilton, Alysia January 2011 (has links)
The adoption of a patient-centric approach to healthcare delivery in the National Health Service (NHS) in the UK has led to changing requirements for information systems supporting the work of health and care practitioners. In particular, the patient-centric approach emphasises teamwork and cross-boundary coordination and collaboration. Although a great deal of both time and money has been invested in modernising healthcare information systems, they do not yet meet the requirements of patient-centric work. Current proposals for meeting these needs focus on providing cross-boundary information access in the form of an integrated Electronic Patient Record (EPR). This research considers the requirements that are likely to remain unmet after an integrated EPR is in place, and how they might be met. Because the patient-centric approach emphasises teamwork, a conceptual model which uses care-team meta-data to track and manage team members and professional roles is proposed as a means to meet this broader range of requirements. The model is supported by a proof-of-concept prototype which leverages team information to provide tailored information access, targeted notifications and alerts, and patient and team management functionality. Although some concerns were raised regarding implementation, the proposal was met with enthusiasm by both clinicians and developers during evaluation. However, the area of need is broad and a great deal of work remains if this work is to be taken forward.
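The abstract gives no implementation detail, but a minimal sketch of how care-team meta-data might drive targeted notifications can clarify the idea; every name here (CareTeam, TeamMember, notify_role) is hypothetical, not taken from the thesis or its prototype.

```python
from dataclasses import dataclass, field

@dataclass
class TeamMember:
    name: str
    role: str       # e.g. "GP", "oncologist", "community nurse"
    contact: str

@dataclass
class CareTeam:
    patient_id: str
    members: list = field(default_factory=list)

    def members_in_role(self, role: str) -> list:
        """Resolve a professional role to the current team members holding it."""
        return [m for m in self.members if m.role == role]

def notify_role(team: CareTeam, role: str, message: str) -> None:
    """Send a targeted alert to every team member currently holding the role."""
    for member in team.members_in_role(role):
        print(f"To {member.contact}: [{team.patient_id}] {message}")

# Usage: alert whichever clinician currently holds the GP role for this patient.
team = CareTeam("patient-001", [TeamMember("Dr Jones", "GP", "jones@example.nhs.uk")])
notify_role(team, "GP", "New discharge summary available")
```

Tracking roles rather than named individuals is what would let notifications and access rules survive changes in team membership.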
292

Supporting integrated care pathways with workflow technology

Alsalamah, Hessah January 2012 (has links)
Modern healthcare has moved its focus to providing patient-centric rather than disease-centred care. This new approach is delivered by a unique care team formed to treat each patient. At the start of treatment, the care team decide on the treatment pathway for the patient: a series of treatment stages where, at the end of each stage, the care team use the patient’s current condition to decide whether treatment moves to the next stage, continues in the current stage, or moves to an unanticipated stage. The initial treatment pathway for each patient is based on the clinical guidelines in an Integrated Care Pathway (ICP) [1], modified to suit the patient’s state. This research mapped a patient’s ICP, as decided by the healthcare providers, into a Workflow Management System (WFMS) [2], so that the clinical guidelines drive a patient-centric IT system supporting the care team. In the initial stage of the research, the IT development team at Velindre Hospital identified team communication and care coordination as obstacles hindering the implementation of a patient-centric delivery model. Investigation of the causes identified the difficulty of accessing medical information held in dispersed legacy systems. Moreover, a major constraint in the domain is the need to keep legacy systems in operation, so approaches that enhance their functionality without replacing them must be found: these information systems cannot be changed across all healthcare organisations, and their complete autonomy must be retained as they are in constant use at the sites. Using workflow technology, an independent application representing an ICP was implemented. This formed an independent layer in the software architecture that interacts with legacy Clinical Information Systems (CISs) and so evolves their offered functionality to support the teams. It was used to build a Virtual Organisation (VO) [3, 4] around a patient, which facilitates patient-centric care. Moreover, the VO virtually integrates the data from legacy systems and ensures its availability (as needed) at the different treatment stages along the care pathway. Implications of the proposal include: formalising the treatment process, filtering and gathering the patient’s information, ensuring care continuity, and pro-acting to change. Evaluation of the proposal involved three stages: first, usefulness evaluation by the healthcare providers representing the users; second, setup evaluation by developers of CISs; and finally, technical evaluation by the technology community. The evaluation confirmed the healthcare providers’ need for an adaptive and proactive system, the feasibility of adopting the proposed system, and the novelty and innovation of the proposed approach. The research proposes a patient-centric system achieved by creating a version of an ICP in the system for each patient. It also provides focused support for team communication and care coordination by identifying the treatment stages and providing the care team’s requirements at each stage. It utilises the data within the legacy systems to act proactively, and makes the data required for those actions available from the running legacy systems, as patient-centred care requires. In the future, this work could be extended by mapping other ICPs into the system. This work has been published in four full papers. It found acceptance in the health informatics community [5, 6, 7] as well as the BPM community [8, 9], and won the 2011 “Global Award of Excellence in Adaptive Case Management (ACM)” in “Medical and Healthcare” [10] of the Workflow Management Coalition (WFMC) [11].
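The published papers hold the implementation detail; purely as an illustration of the idea, an ICP can be modelled as a state machine in which each stage decides the next step from the patient’s current condition, fetched through an adapter over a legacy CIS. All names below (Stage, fetch_condition, the stage functions) are hypothetical.

```python
from typing import Callable, Optional

# A stage maps the patient's current condition to the name of the next stage
# (or None when the pathway completes).
Stage = Callable[[dict], Optional[str]]

def fetch_condition(patient_id: str) -> dict:
    """Stand-in for a virtual-integration adapter that queries a legacy CIS
    in place, rather than migrating or modifying it."""
    return {"tumour_response": "partial"}   # dummy data for the sketch

def surgery_stage(condition: dict) -> Optional[str]:
    return "chemotherapy" if condition["tumour_response"] != "complete" else None

def chemotherapy_stage(condition: dict) -> Optional[str]:
    # An unanticipated finding can divert the pathway, not just advance it.
    return "review" if condition["tumour_response"] == "partial" else None

def review_stage(condition: dict) -> Optional[str]:
    return None

PATHWAY = {
    "surgery": surgery_stage,
    "chemotherapy": chemotherapy_stage,
    "review": review_stage,
}

def run_pathway(patient_id: str, start: str = "surgery") -> None:
    stage = start
    while stage is not None:
        print(f"Patient {patient_id}: entering stage '{stage}'")
        condition = fetch_condition(patient_id)   # data pulled per stage, as needed
        stage = PATHWAY[stage](condition)

run_pathway("patient-042")
```

Keeping the pathway logic in an independent layer, as the thesis describes, is what allows the legacy CISs to stay autonomous and in constant use.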
293

Investigation of over-fitting and optimism in prognostic models

Richardson, Matthew January 2010 (has links)
This work seeks to develop a high-quality prognostic model for the CARE-HF data (see Richardson et al. 2007). The CARE-HF trial was a major study into the effects of cardiac resynchronization, which has been shown to reduce mortality in patients suffering heart failure due to electrical problems in the heart. The prognostic model presented in this work was motivated by the question of which patient characteristics may modify the effect of cardiac resynchronization, a question of great importance to clinicians. Efforts are made to produce a high-quality prognostic model, in part through the application of methods that reduce the risk of over-fitting. One method discussed in this work is the strategy proposed by Frank Harrell Jr. The various aspects of Harrell’s approach are discussed, and an attempt is made to extend Harrell’s strategy to frailty models. Key issues such as missing data and imputation, specification of the functional form of the model, and validation are examined in relation to the prognostic model for the CARE-HF data. Material is presented covering survival analysis, maximum likelihood methods, model selection criteria (AIC, BIC), specification of functional form (cubic splines and fractional polynomials) and validation methods (cross-validation, bootstrap methods). The concepts of over-fitting and optimism are examined. The author concludes that, whilst Harrell’s strategy is valuable, it is still quite possible to produce models that are over-fitted. MDL (Minimum Description Length) is suggested as a potentially useful method by which statistical models with an in-built resistance to over-fitting can be obtained. The author also recommends that concepts such as over-fitting, optimism and model validation be introduced earlier in more elementary courses on statistical modelling.
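For readers unfamiliar with optimism correction, a minimal sketch of Harrell’s bootstrap approach follows: refit the model on resamples and subtract the average excess of resample performance over performance on the original data. The thesis works with survival models; for brevity this sketch uses a logistic model and AUC, but the same correction applies to the c-index of a Cox model. All data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_auc(X, y, n_boot=200, seed=0):
    """Harrell-style bootstrap: corrected = apparent - mean(boot - test-on-original)."""
    rng = np.random.default_rng(seed)
    apparent = roc_auc_score(
        y, LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))       # resample with replacement
        model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], model.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, model.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)        # over-fitting shows up here
    return apparent - np.mean(optimism)

# Noise-only data: the apparent AUC exceeds 0.5, but the corrected
# estimate falls back towards it, exposing the optimism.
rng = np.random.default_rng(1)
X, y = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)
print(optimism_corrected_auc(X, y))
```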
294

Determinants of prostate cancer : the Birmingham Prostatic Neoplasms Association study

Khan, Humera January 2011 (has links)
The Birmingham Prostatic Neoplasms Association Study (BiPAS) was initiated to investigate determinants of prostate cancer. The study recruited 314 prostate cancer patients, 381 active surveillance patients, 201 hospital controls and 175 population controls. By comparing groups of varying risk, the aetiology of the disease was investigated. Within the BiPAS dataset, sun exposure, physical activity and obesity were analysed. The association with occupation was assessed by performing a meta-analysis of 7,762 cases and 20,634 controls. Finally, a replication study of genetic polymorphisms on 8q24, using 277 cases and 282 controls from the Netherlands Cohort Study (NLCS), is presented. A protective effect was observed for high sun exposure in early adulthood and for high-intensity exercise. An increased risk was observed for low-intensity exercise and for men classed as obese at age 20. The meta-analysis suggested moderately increased and decreased risks associated with a number of job titles; however, none was statistically significant. The results for allele A of the single nucleotide polymorphism rs1447295 were replicated; however, a decreased risk was detected for allele -8 of the microsatellite DG8S737. No significant difference was detected in analyses comparing prostate cancer and high-PSA cases.
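For illustration only (the abstract does not state which pooling method the meta-analysis used), a standard inverse-variance fixed-effect pooling of study-level log odds ratios works as follows; the study values below are made up.

```python
import numpy as np

def fixed_effect_pool(log_ors, ses):
    """Inverse-variance weighted pooled log OR with a 95% CI."""
    log_ors, ses = np.asarray(log_ors), np.asarray(ses)
    w = 1.0 / ses**2                        # weight = 1 / variance
    pooled = np.sum(w * log_ors) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return np.exp([pooled, lo, hi])         # back to the odds-ratio scale

# Three hypothetical studies of one occupational exposure.
or_ci = fixed_effect_pool(np.log([1.3, 0.9, 1.1]), [0.20, 0.15, 0.25])
print(f"pooled OR {or_ci[0]:.2f} (95% CI {or_ci[1]:.2f}-{or_ci[2]:.2f})")
```

A pooled confidence interval that straddles 1.0, as in this toy example, is what “moderately increased and decreased risks, none statistically significant” looks like numerically.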
295

Adverse health outcomes in survivors of childhood cancer

Reulen, Raoul January 2009 (has links)
This thesis concerns investigations into adverse health outcomes among survivors of childhood cancer using the British Childhood Cancer Survivor Study (BCCSS). The BCCSS is a large-scale population-based cohort of 17,981 individuals diagnosed with childhood cancer (age 0-14 years) between 1940 and 1991 in Britain who had survived for at least five years. The specific aims were to investigate, within the BCCSS cohort: (1) the psychometric properties of the SF-36 health-status questionnaire, (2) self-reported health status measured with the SF-36, (3) the effect of therapeutic radiation on the offspring sex ratio, (4) the risks of adverse pregnancy outcomes, and (5) the risks of second primary breast cancer. This thesis demonstrates that the SF-36 questionnaire exhibits good validity and reliability when used in long-term survivors of childhood cancer. Survivors rate their physical and mental health similarly to the general population, apart from bone and central nervous system tumour survivors, who rate their physical health below population norms. Therapeutic irradiation does not alter the sex ratio of offspring. Female survivors exposed to abdominal irradiation are at a three-fold risk of delivering prematurely and a two-fold risk of producing low birth-weight offspring. Lastly, the risk of breast cancer among female survivors is two-fold that of the general population, but this excess is not sustained into the ages at which the risk of breast cancer in the general population becomes substantial.
296

Exploring nature of the structured data in GP electronic patient records

Ranandeh Kalankesh, Leila January 2011 (has links)
No description available.
297

Evaluating a virtual learning environment in medical education

Ellaway, Rachel Helen January 2006 (has links)
The use of technology-supported teaching and learning in higher education has moved from a position of peripheral interest a few years ago to become a fundamental ingredient in the experience of many, if not most, students today. A major part of that change has been wrought by the widespread introduction and use of ‘virtual learning environments’ (VLEs). A defining characteristic of VLEs is that they combine a variety of tools and resources into a single integrated system. To use a VLE is not just to employ a single intervention but to change the very fabric of the students’ experience of study and the university. Despite this, much of the literature on VLEs has concentrated on producing typologies by listing and comparing system functions, describing small-scale and short-duration applications, or providing speculative theories and predictions. Little attention has so far been paid to analysing what effects a VLE’s use has on the participants and the context of use, particularly across a large group of users and over a substantial period of time. This work presents the evaluation of a VLE developed and used to support undergraduate medical education at the University of Edinburgh since 1999. This system is called ‘EEMeC’ and was developed specifically within, and in support of, its context of use. EEMeC provides a large number of features and functions to many different kinds of user; it has evolved continuously since it was introduced, and it has had a significant impact on teaching and learning in the undergraduate medical degree programme (MBChB). In such circumstances, evaluation methodologies that depend on controls and single variables are neither applicable nor practical. In order to approach the task of evaluating such a complex entity, a multi-modal evaluation framework has been developed, based on taking a series of metaphor-informed perspectives derived from the organisational theories of Gareth Morgan (Morgan 1997). The framework takes seven approaches to the evaluation of EEMeC, covering a range of quantitative and qualitative methodologies; these are combined in a dialectical analysis of EEMeC from the different evaluation perspectives. This work provides a detailed and multi-faceted account of a VLE-in-use and the ways in which it interacts with its user community in its context of use. Furthermore, the method of taking different metaphor-based evaluation perspectives on a complex problem space is presented as a viable approach for studying and evaluating similar learning support systems. The evaluation framework that has been developed would be particularly useful to practitioners who have a pressing and practical need for meaningful evaluation techniques to inform and shape how complex systems such as VLEs are deployed and used. As such, this work can provide insights not just into EEMeC, but into the way VLEs are changing the environments and contexts in which they are used across the tertiary sector as a whole.
298

The development and use of environmental health indicators for epidemiology and policy applications : a geographical analysis

Wills, John Trevelyan January 1998 (has links)
This thesis examines the development and use of environmental health indicators for epidemiology, risk assessment and policy applications from a geographical perspective. Although indicators have traditionally been used to examine temporal trends, the development of environmental health indicators (EHIs) may enable comparisons to be made between areas with contrasting environmental health conditions, support efforts to highlight ‘hot spots’ and facilitate the analysis of spatial patterns in environmental health conditions and health risk. The use of environmental health indicators is relatively new and little research has been conducted in this area. In the light of this, this thesis examines EHIs in the context of contemporary developments in environmental indicators, health-related and quality of life indicators, and indicators of sustainable development. Essential characteristics and requirements for EHIs are identified and the main areas of application are discussed. In the second part of the thesis, the development and use of EHIs for evaluating exposure to traffic-related air pollution is examined, using GIS techniques. Potential indicators of exposure are identified and applied at a range of spatial scales, along with a number of additional measures. The results of this exercise show that although exposure to traffic-related air pollution is both difficult and costly to evaluate, proxy measures may be used. Pollutant concentrations, for example, are frequently used to assess exposure, yet the lack of suitable data may frequently preclude their use. Whilst other, cruder measures may be used, the relationship between these indicators, measured concentrations and exposure is often uncertain. Consequently, EHIs for exposure to traffic-related air pollution may not provide a reliable indication of exposure and health risk. Their use in this area should therefore be undertaken with great caution, and attempts should be made to validate specific measures prior to their use. At the same time, however, coarser ‘upstream’ indicators may provide relevant information in a policy context. For use in highlighting areas of concern, raising awareness about environmental health issues and encouraging policies which aim to improve environmental health conditions, ease of data collection and relation to policy may be more important than relation to specific health effects.
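As an illustration of the kind of crude proxy measure discussed (not the thesis’s own GIS workflow), a traffic-exposure indicator for a residence might weight the traffic volume on nearby road segments by inverse distance within a buffer; all data and names below are hypothetical.

```python
import math

def traffic_exposure_index(residence, segments, buffer_m=200.0):
    """Crude EHI proxy: sum of traffic volumes on road segments within a
    buffer of the residence, weighted by inverse distance. `segments` is a
    list of (x, y, vehicles_per_day) tuples; coordinates in metres."""
    rx, ry = residence
    index = 0.0
    for x, y, volume in segments:
        d = math.hypot(x - rx, y - ry)
        if d <= buffer_m:
            index += volume / max(d, 1.0)   # cap the weight very close to source
    return index

# Two roads within the buffer of a residence, one beyond it.
roads = [(50.0, 0.0, 12000), (150.0, 80.0, 4000), (900.0, 0.0, 30000)]
print(traffic_exposure_index((0.0, 0.0), roads))
```

Such an index is easy to compute from routinely available data, which is exactly why, as the thesis cautions, its relationship to measured concentrations and actual exposure needs validating before epidemiological use.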
299

Multilevel regression modelling of melanoma incidence

Brown, Antony Clark January 2007 (has links)
This thesis is concerned with developing and implementing a method for modelling and projecting cancer incidence data. The burden of cancer is an increasing problem for society, and the ability to analyse and predict trends in large-scale populations is therefore vital. These predictions, based on incidence and mortality data collected by cancer registries, can be used to estimate current and future rates, which is helpful for public health planning. A large body of work already exists on the use of various modelling strategies, methods and fitting techniques. A multilevel method of preparing the data is proposed, fitted to historical data using regression modelling, to predict future rates of incidence for a given population. The proposed model starts with a model for the total incidence of the population, with each successive level stratifying the data into progressively more specific groupings based on age. Each grouping is partitioned into subgroups, and each subgroup is expressed as a proportion of its parent group. Models are fitted to each of the proportional age groups, and a combination of these models produces a model that predicts incidence for a specific age. A simple, efficient implementation of the modelling procedure is described, including key algorithms and measures of performance. The method is applied to data from populations with very different melanoma incidence (the USA and Australia). The proportional structure reveals that the proportional age trends in the two populations are remarkably similar, indicating links between causative factors in both populations. The method is applied fully to data from a variety of populations and compared with results from existing models. The method is shown to produce results that are reliable and stable, and generally significantly more accurate than those of other models.
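The thesis describes the multilevel structure in full; purely as a compressed sketch of the core idea (fit the total, fit each age group as a proportion of its parent, then multiply the projections back together), with made-up data and a simple polynomial trend standing in for the thesis’s regression models:

```python
import numpy as np

def project(years, values, target_year, degree=1):
    """Fit a simple polynomial trend and extrapolate to the target year."""
    coeffs = np.polyfit(years, values, degree)
    return float(np.polyval(coeffs, target_year))

years = np.arange(1995, 2005)
total = np.array([100, 104, 109, 115, 118, 124, 131, 135, 142, 147], float)
# Incidence in one age group, re-expressed as a proportion of the total.
group = np.array([20, 21, 23, 25, 25, 27, 29, 31, 33, 35], float)
prop = group / total

# Level 1: project the population total. Level 2: project the group's proportion.
total_2010 = project(years, total, 2010)
prop_2010 = project(years, prop, 2010)

# Combining the levels yields the age-specific projection.
print(f"projected group incidence in 2010: {total_2010 * prop_2010:.1f}")
```

Working with proportions rather than raw counts is what makes the age trends of different populations directly comparable, which is how the USA-Australia similarity is revealed.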
300

Multilevel modelling of event history data : comparing methods appropriate for large datasets

Stewart, Catherine Helen January 2010 (has links)
When analysing medical or public health datasets, it may often be of interest to measure the time until a particular pre-defined event occurs, such as death from some disease. As the health status of individuals living within the same area tends to be more similar than that of individuals from different areas, event times of individuals from the same area may be correlated. As a result, multilevel models must be used to account for the clustering of individuals within the same geographical location. When the outcome is time until some event, multilevel event history models must be used. Although software does exist for fitting multilevel event history models, such as MLwiN, computational requirements mean that the use of these models is limited for large datasets. For example, to fit the proportional hazards model (PHM), the most commonly used event history model for modelling the effect of risk factors on event times, MLwiN fits a Poisson model to a person-period dataset. The person-period dataset is created by rearranging the original dataset so that each individual has a line of data corresponding to every risk set they survive until either censoring or the event of interest occurs. When time is treated as a continuous variable, so that each risk set corresponds to a distinct event time, as is the case for the PHM, the person-period dataset can be very large. This presents a problem for those working in public health, as datasets used for measuring and monitoring public health are typically large. Furthermore, individuals may be followed up for a long period of time, which also contributes to a large person-period dataset. A further complication is that interest may be in modelling a rare event, resulting in a high proportion of censored observations; this, too, can be problematic when estimating multilevel event history models. Since multilevel event history models are important in public health, the aim of this thesis is to develop these models so they can be fitted to large datasets, considering in particular datasets with long periods of follow-up and rare events. Two datasets are used throughout the thesis to investigate three possible alternatives to fitting the multilevel proportional hazards model in MLwiN in order to overcome the problems discussed. The first is a moderately-sized Scottish dataset, the main focus of the thesis, which is used as a ‘training dataset’ to explore the limitations of existing software packages for fitting multilevel event history models and to investigate alternative methods. The second dataset, from Sweden, is used to test the effectiveness of each alternative method when fitted to a much larger dataset. The adequacy of the alternative methods is assessed on the following criteria: how effective they are at reducing the size of the person-period dataset, how similar the parameter estimates they produce are to those from the PHM, and how easy they are to implement. The first alternative method involves defining discrete-time risk sets and then estimating discrete-time hazard models via multilevel logistic regression models fitted to a person-period dataset. The second alternative method involves aggregating the data of individuals within the same higher-level units who have the same values for the covariates in a particular model.
Aggregating the data in this way means that one line of data represents all such individuals, since they are at risk of experiencing the event of interest at the same time. This method is termed ‘grouping according to covariates’. Both continuous-time and discrete-time event history models can be fitted to the aggregated person-period dataset. The ‘grouping according to covariates’ method and the first method, which involves defining discrete-time risk sets, are both implemented in MLwiN, using pseudo-likelihood methods of estimation. The third and final method involves fitting Bayesian event history (frailty) models using Markov chain Monte Carlo (MCMC) methods of estimation. These models are fitted in WinBUGS, a software package specially designed to make practical MCMC methods available to applied statisticians. In WinBUGS, an additive frailty model is adopted and a Weibull distribution is assumed for the survivor function. Methodological findings were that the discrete-time method led to a successful reduction in the continuous-time person-period dataset; however, it was necessary to experiment with the length of time intervals in order to find the widest interval that did not influence parameter estimates. The grouping according to covariates method worked best when there was, on average, a larger number of individuals per higher-level unit, there were few risk factors in the model, and few or none of the risk factors were continuous. The Bayesian method could be favourable, as no data expansion is required to fit the Weibull model in WinBUGS and time is treated as a continuous variable. However, models took much longer to run using MCMC methods of estimation than likelihood methods. This thesis showed that it is possible to use a re-parameterised version of the Weibull model, as well as a variance expansion technique, to overcome slow convergence by reducing correlation in the Markov chains; this may be a more efficient way to reduce computing time than running further iterations.
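To make the size problem and the discrete-time alternative concrete, here is a minimal sketch (not the thesis’s MLwiN code) that expands individual records into person-period rows and fits a discrete-time hazard model by logistic regression; widening the interval shrinks the expanded dataset, which is exactly the trade-off the thesis investigates. All data are made up, and a single-level model stands in for the multilevel one.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

def person_period(df, interval=1.0):
    """Expand (id, time, event, x) records into one row per discrete interval
    survived; `event` is 1 only in the final interval of a case."""
    rows = []
    for r in df.itertuples():
        n = max(1, int(-(-r.time // interval)))        # ceil(time / interval)
        for k in range(n):
            rows.append({"id": r.id, "period": k, "x": r.x,
                         "event": int(bool(r.event) and k == n - 1)})
    return pd.DataFrame(rows)

data = pd.DataFrame({"id": [1, 2, 3], "time": [2.5, 4.0, 1.2],
                     "event": [1, 0, 1], "x": [0.3, -1.1, 0.8]})

pp = person_period(data, interval=1.0)   # wider intervals => fewer rows
print(len(pp), "person-period rows")

# Discrete-time hazard: logistic regression of the event indicator on the
# period (dummy-coded, giving a piecewise-constant baseline hazard) plus
# the covariate.
X = pd.get_dummies(pp[["period", "x"]], columns=["period"])
model = LogisticRegression(max_iter=1000).fit(X, pp["event"])
```

With continuous time each distinct event time is its own risk set, so for a registry-scale cohort followed over decades this expansion quickly runs to millions of rows; coarsening the intervals is the first lever the thesis pulls against that.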
