21

Detecting Asthma Exacerbations in a Pediatric Emergency Department

Sanders, David L 17 April 2006 (has links)
This thesis describes the development and evaluation of a computerized algorithm for detecting patients with acute asthma exacerbations who present to a pediatric emergency department (ED). A rule-based algorithm was designed to collect patient information from the computerized patient record at the time of ED triage. We confirmed the feasibility of this approach through a retrospective analysis. The algorithm was then implemented in the pediatric ED as a real-time asthma detection system. Its performance was evaluated prospectively during a two-month study period on over 3,500 ED patients, of whom 342 had an asthma exacerbation. The system was able to detect patients presenting with acute asthma with high accuracy: sensitivity was 71.6%, specificity was 97.8%, positive predictive value was 77.0%, and negative predictive value was 97.1%. This research could be applied to detect eligible patients and automatically initiate guidelines for the management of asthma, and could serve as a model for detecting other conditions that are managed by standardized guidelines in the ED.
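As a brief illustration of how the accuracy figures above relate to a confusion matrix, the sketch below computes the four standard screening metrics. The cell counts are illustrative reconstructions roughly consistent with the reported rates, not the study's actual data.

```python
# Minimal sketch of the screening-performance arithmetic; the counts below
# are illustrative reconstructions consistent with the reported rates, not
# the study's actual confusion matrix.

def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard test-performance measures from confusion-matrix cells."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# ~3,500 patients in total, 342 with an asthma exacerbation, as reported above.
print(screening_metrics(tp=245, fp=73, tn=3085, fn=97))
```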
22

Developing Computer-generated PubMed Queries for Identifying Drug-Drug Interaction Content in MEDLINE

Duda, Stephany Norah 13 December 2005 (has links)
Unwanted drug-drug interactions endanger millions of patients each year and burden families and the hospital system with escalating costs. Computer-based alerting systems are designed to prevent these interactions, yet the knowledge bases that support these systems often contain incomplete, clinically insignificant, or inaccurate drug information that can contribute to false alerts and wasted time. It may be possible to improve the content of these drug interaction databases by facilitating access to new or underused sources of drug-drug interaction information. The National Library of Medicine's MEDLINE database is a respected collection of peer-reviewed biomedical citations that could serve as a valuable source of such information if the relevant articles could be pinpointed effectively and efficiently. This research compared the classification capabilities of human-generated and computer-generated Boolean queries as methods for locating articles about drug interactions. Two manual queries were assembled by medical librarians specializing in MEDLINE searches, and three computer-based queries were developed using a decision tree modeled on Support Vector Machine output. All five queries were tested on a corpus of manually labeled positive and negative drug-drug interaction citations. Overall, the study showed that computer-generated queries derived from automated classification techniques have the potential to perform at least as well as manual queries in identifying drug-drug interaction articles in MEDLINE.
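For readers unfamiliar with how such Boolean queries are executed programmatically, the hedged sketch below runs an illustrative drug-drug interaction query against PubMed through NCBI's public E-utilities API. The query string is a hypothetical example, not one of the five queries evaluated in the thesis.

```python
# Hedged sketch: run a Boolean PubMed query via NCBI E-utilities (esearch).
# The query below is a hypothetical illustration, not a thesis query.
import json
import urllib.parse
import urllib.request

query = '"drug interactions"[MeSH Terms] OR "drug-drug interaction"[Title/Abstract]'
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
       + urllib.parse.urlencode({"db": "pubmed", "term": query,
                                 "retmode": "json", "retmax": 20}))
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)

print(result["esearchresult"]["count"])   # total matching citations
print(result["esearchresult"]["idlist"])  # PMIDs of the first 20 hits
```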
23

Models to Predict Survival after Liver Transplantation

Hoot, Nathan Rollins 16 December 2005 (has links)
In light of the growing scarcity of livers available for transplantation, careful decisions must be made in organ allocation. The current standard of care for transplant decision making is clinical judgment, although a good model for predicting survival after liver transplantation could help support these difficult decisions. This thesis explores the use of informatics techniques to improve upon past research in modeling liver transplant survival. A systematic literature review revealed that machine learning techniques have not been thoroughly explored in the field. Several experiments examined different modeling techniques using a database from the United Network for Organ Sharing. A Bayesian network was created to predict survival after liver transplantation, and it exceeded the performance of other models published in the literature. Fully automated feature selection techniques were used to identify the key predictors of liver transplant survival in a large database. A support vector machine was used to show that a relatively simple model, consisting of main effects and two-way interactions, may be adequate for predicting liver transplant survival. A pilot study assessed the ability of expert clinicians to predict survival; they tended to perform similarly to the mathematical models. These results lay a foundation for future refinements in survival modeling and for a clinical trial of decision support in liver transplantation.
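The sketch below illustrates, on synthetic data, the "main effects and two-way interactions" model space mentioned above: interaction terms are generated explicitly and passed to a support vector machine. The feature set and the scikit-learn pipeline are illustrative assumptions, not the thesis's actual implementation.

```python
# Hedged sketch: SVM over explicit main effects + two-way interactions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # stand-ins for donor/recipient covariates
y = (X[:, 0] + X[:, 1] * X[:, 2] + rng.normal(size=500) > 0).astype(int)

model = make_pipeline(
    # degree-2, interaction_only: main effects plus all two-way products
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    SVC(kernel="linear"),
)
print(cross_val_score(model, X, y, cv=5).mean())  # cross-validated accuracy
```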
24

A Comparison of State-of-the-Art Algorithms for Learning Bayesian Network Structure from Continuous Data

Fu, Lawrence Dachen 19 December 2005 (has links)
In biomedical and biological domains, researchers typically study continuous data sets. In these domains, an increasingly popular tool for understanding the relationships between variables is Bayesian network structure learning. There are three methods for learning Bayesian network structure from continuous data: the most popular approach is discretizing the data prior to structure learning; the alternatives are integrating discretization with structure learning and learning directly from continuous data. It is not known which method is best, since there has not been a unified study of the three approaches. The purpose of this work was to perform an extensive experimental evaluation of all three. For large data sets consisting of originally discrete variables, discretization-based approaches learned the most accurate structures. With smaller sample sizes or data without an underlying discrete mechanism, a method that learns directly from continuous data performed best. Also, for some quality metrics, the integrated methods did not improve on simple discretization. In terms of time efficiency, the integrated approaches were the most computationally intensive, while methods from the other two categories were considerably less so.
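The most common of the three strategies, discretize first and then learn structure, can be sketched as below. The use of scikit-learn for quantile binning and of pgmpy's hill-climbing search with a BIC score is an illustrative assumption; the thesis does not tie its comparison to these particular implementations.

```python
# Hedged sketch: discretize continuous data, then learn a Bayesian network
# structure from the discretized variables. Library choices are assumptions.
import numpy as np
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer
from pgmpy.estimators import BicScore, HillClimbSearch

rng = np.random.default_rng(0)
a = rng.normal(size=1000)
b = 2 * a + rng.normal(size=1000)              # b depends on a
df = pd.DataFrame({"a": a, "b": b, "c": rng.normal(size=1000)})

disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
df_disc = pd.DataFrame(disc.fit_transform(df), columns=df.columns).astype(int)

dag = HillClimbSearch(df_disc).estimate(scoring_method=BicScore(df_disc))
print(dag.edges())   # learned directed edges, ideally recovering a -> b
```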
25

A Computerized Pneumococcal Vaccination Reminder System in the Adult Emergency Department

Dexheimer, Judith W 07 June 2006 (has links)
Preventive care measures, including vaccinations, are underutilized. The Emergency Department (ED) has been recommended as a feasible setting for offering pneumococcal vaccination; however, the ED is a challenging environment in which to implement a sustainable vaccination program. We designed, implemented, and evaluated a closed-loop, computerized reminder system in the ED that integrated four different computer systems. The computerized triage application screened patients for eligibility using information from the electronic patient record. The computerized provider order entry system reminded clinicians to order the vaccination for eligible patients, and the order was passed to the order tracker application. Documentation of vaccine administration was then added to the electronic patient record. During a two-month prospective study, 433 (51.9%) patients 65 years and older were up to date with pneumococcal vaccination, and 271 (32.5%) declined to receive the vaccine during their ED visit. In response to the physician prompts, clinicians declined to order the vaccine for 94 (11.2%) patients; 37 (4.4%) patients received the vaccine. The computerized reminder system increased the vaccination rate from 51.9% to 56.4% (p < 0.01). The closed-loop informatics solution appears to be a feasible and sustainable model for increasing vaccination rates in a challenging ED environment.
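A minimal sketch of the kind of triage-time eligibility rule such a system applies is shown below. The field names and exact criteria are illustrative assumptions, not the study's actual screening logic, beyond the stated focus on patients 65 years and older.

```python
# Hedged sketch of a triage-time eligibility rule; fields are hypothetical.
from dataclasses import dataclass

@dataclass
class TriagePatient:
    age_years: int
    pneumococcal_vaccine_on_record: bool
    declined_this_visit: bool = False

def needs_vaccination_reminder(p: TriagePatient) -> bool:
    """Prompt the provider only for unvaccinated, non-declining patients 65+."""
    return (p.age_years >= 65
            and not p.pneumococcal_vaccine_on_record
            and not p.declined_this_visit)

print(needs_vaccination_reminder(
    TriagePatient(age_years=72, pneumococcal_vaccine_on_record=False)))  # True
```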
26

Understanding workflow and information flow in chronic disease care

Unertl, Kim Marie 28 November 2006 (has links)
Chronic disease care is a significant and growing problem in healthcare today. Current healthcare processes focus more on acute episodes of care than on the longitudinal requirements of chronic disease. Information technology is one tool that may help improve chronic disease care. The goals of the study were to examine workflow and information flow in three ambulatory chronic disease clinics and to develop general models of workflow and information flow in chronic disease care. Over 150 hours of direct observation in the three clinics identified elements of the workflow and information flow, the features of the existing informatics tools in use, and gaps between user needs and existing functionality. Clinic-specific models of workflow, information flow, and temporal flow were developed, and semi-structured interviews were conducted to gather additional data and verify the models. Generalized models were then developed to capture the aspects of workflow and information flow common to all three clinics, and aspects of chronic disease care workflow that are important to address in the design of informatics tools were identified. The study confirmed that there are core similarities between different chronic disease domains, along with some crucial differences, and it suggested approaches to addressing the unique needs of the chronic disease care environment.
27

In silico evaluation of DNA-pooled allelotyping versus individual genotyping for genome-wide association analysis of complex disease

Pratap, Siddharth 24 July 2007 (has links)
Recent advances in single nucleotide polymorphism (SNP) genotyping techniques, public databases, and genomic knowledge from the Human Genome Project and the Haplotype Mapping Project (HapMap) allow true genome-wide association (GWA) analysis for common complex diseases such as heart disease, diabetes, and Alzheimer's disease. A major obstacle in genome-wide association analysis is the prohibitively high cost of projects that require genotyping hundreds, even thousands, of individuals in order to achieve appropriate statistical significance. One potential solution is to combine, or pool, the DNA of case and control individuals and to use pooled genotyping (allelotyping) for association analysis by determining the allele frequency differences between the case and control populations. While pooling can dramatically increase efficiency by lowering cost and time, it also introduces additional sources of error and noise. In this study, we comparatively examine DNA-pooled genotyping versus individual genotyping for genome-wide association analysis of complex disease. Our work created a system and process for the direct evaluation and comparison of pooled versus individual genotyping by using and modifying existing bioinformatics tools. Our results show that pooled GWA studies are limited to resolving complex disease with medium to high relative risk ratios. Pooling errors have a very large effect on the overall statistical significance of a pooled GWA study, while genotyping errors have a modest effect on both pooled and individual GWA studies, one much smaller in magnitude than that of pooling errors.
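The core pooled-allelotyping comparison, estimating each pool's allele frequency and testing the case/control difference, can be sketched as below with synthetic counts. Real pooled estimates additionally carry the pooling and measurement error that the abstract identifies as the dominant noise source.

```python
# Hedged sketch: allele-frequency difference test at one SNP. Counts are synthetic.
from scipy.stats import chi2_contingency

# Allele counts (two alleles per individual in each pool).
case_A, case_a = 260, 140   # 200 cases    -> freq(A) = 0.65
ctrl_A, ctrl_a = 220, 180   # 200 controls -> freq(A) = 0.55

freq_diff = case_A / (case_A + case_a) - ctrl_A / (ctrl_A + ctrl_a)
chi2, p, _, _ = chi2_contingency([[case_A, case_a], [ctrl_A, ctrl_a]])
print(f"allele frequency difference = {freq_diff:.3f}, p = {p:.4f}")
```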
28

DEVELOPMENT AND EVALUATION OF A PROTOTYPE SYSTEM FOR AUTOMATED ANALYSIS OF CLINICAL MASS SPECTROMETRY DATA

Fananapazir, Nafeh 31 July 2007 (has links)
Mass Spectrometry (MS) is emerging as a breakthrough high-throughput technology believed to have powerful potential for producing clinical diagnostic and prognostic models and for identifying relevant disease biomarkers. A major barrier to making mass spectrometry clinically useful, and to exploring its potential in an efficient and reliable manner, is the challenge of analyzing proteomic spectra to produce reliable predictive models of disease and clinical outcomes. This thesis describes the development and evaluation of a fully automated software system (FAST-AIMS) capable of analyzing mass spectra to produce high-quality diagnostic and outcome prediction models.
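The sketch below shows, on synthetic data, the general shape of the pipeline such a system automates: each spectrum is treated as a vector of intensities over m/z bins, and a classifier is evaluated by cross-validation. The binning, classifier choice, and data are illustrative assumptions, not FAST-AIMS internals.

```python
# Hedged sketch of a spectra-to-diagnosis pipeline; data and model are synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
spectra = rng.random((120, 2000))   # 120 spectra x 2000 m/z intensity bins
labels = (spectra[:, 100] + spectra[:, 500] > 1.0).astype(int)  # planted signal

pipeline = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print(cross_val_score(pipeline, spectra, labels, cv=5).mean())
```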
29

Improving Provider-to-Provider Communication: Evaluation of a Computerized Inpatient Sign-out Tool

Campion, Thomas Richmond 13 September 2007 (has links)
Physicians' use of a computerized inpatient sign-out tool has been shown to reduce the risk of preventable adverse events. The researcher evaluated sign-out software usage at Vanderbilt University Medical Center in order to understand user behavior and identify software enhancements. To accomplish these goals, the researcher created software to record sign-out data, determined descriptive statistics of software utilization, collected feedback from users regarding new software enhancements, and analyzed the content produced by sign-out tool users. Results included the identification of unanticipated software usage by non-providers, different use patterns across hospital units/services, and a variety of discipline-specific sign-out note styles. These results, combined with a comparison to the literature's recommendations, guided the design specification for new sign-out software. Further study is required to determine relevant outcome measures related to both the sign-out process and the impact of sign-out software.
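The descriptive-utilization analysis mentioned above amounts to grouping event-level usage records by unit/service and user role, as in the hypothetical sketch below; the column names and values are not the actual log schema.

```python
# Hedged sketch: descriptive statistics over hypothetical sign-out usage logs.
import pandas as pd

events = pd.DataFrame({
    "user_role": ["physician", "nurse", "physician", "case_manager"],
    "unit":      ["medicine", "medicine", "surgery", "medicine"],
    "action":    ["edit_note", "view", "edit_note", "view"],
})

# Grouping by unit and role surfaces use patterns, including non-provider use.
print(events.groupby(["unit", "user_role"]).size())
```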
30

KNOWLEDGE-BASED ENVIRONMENT POTENTIALS FOR PROTEIN STRUCTURE PREDICTION

Durham, Elizabeth Ashley 06 June 2008 (has links)
This master's thesis project had two objectives: (1) to optimize algorithms for solvent-accessible surface area (SASA) approximation in order to develop an environment free-energy knowledge-based potential, and (2) to assess the knowledge-based environment free-energy potentials for de novo protein structure prediction. The project achieved these goals by developing, implementing, optimizing, and evaluating four different algorithms for approximating the SASA of a given protein model and generating knowledge-based potentials for de novo protein structure prediction. The algorithms are named Neighbor Count, Neighbor Vector, Artificial Neural Network, and Overlapping Spheres.
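Of the four algorithms named above, Neighbor Count is the simplest to illustrate: a residue's burial is approximated by how many other residues fall within a distance cutoff, so fewer neighbors implies greater solvent exposure. The cutoff value and the hard (unsmoothed) count in the sketch below are illustrative assumptions; the thesis optimizes and evaluates the actual variants.

```python
# Hedged sketch of the Neighbor Count idea; the cutoff and hard counting are
# illustrative simplifications of the optimized algorithm.
import numpy as np

def neighbor_counts(coords: np.ndarray, cutoff: float = 11.4) -> np.ndarray:
    """coords: (n_residues, 3) positions; returns each residue's neighbor count."""
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    within = (dists < cutoff) & (dists > 0.0)   # exclude self-distance of zero
    return within.sum(axis=1)

rng = np.random.default_rng(0)
toy_coords = rng.uniform(0.0, 30.0, size=(50, 3))   # synthetic residue positions
print(neighbor_counts(toy_coords)[:10])             # low counts ~ exposed residues
```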
