61

Comparison of Three Clustering Methods for Dissecting Trait Heterogeneity in Genotypic Data

Thornton-Wells, Tricia Ann 23 July 2005
Trait heterogeneity, which exists when a trait has been defined with insufficient specificity such that it is actually two or more distinct traits, has been implicated as a confounding factor in traditional statistical genetics of complex human disease. In the absence of detailed phenotypic data collected consistently in combination with genetic data, unsupervised computational methodologies offer the potential for discovering underlying trait heterogeneity. The performance of three such methods appropriate for categorical data (Bayesian Classification, Hypergraph-Based Clustering, and Fuzzy k-Modes Clustering) was compared. Also tested was the ability of these methods to detect trait heterogeneity in the presence of locus heterogeneity and gene-gene interaction, two other complicating factors in discovering genetic models of complex human disease. Bayesian Classification performed well under the simplest of the simulated genetic models and outperformed the other two methods, with the exception that Fuzzy k-Modes Clustering performed best on the most complex genetic model. Permutation testing showed that Bayesian Classification controlled Type I error very well but produced less desirable Type II error rates. Methodological limitations and future directions are discussed.
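The abstract above mentions permutation testing to assess Type I error for cluster-trait associations. A minimal, self-contained sketch of such a test is shown below; the data, the number of clusters, and the chi-square statistic are hypothetical illustrations, not the thesis's actual procedure.

```python
# Illustrative sketch (not the thesis's code): a permutation test for the
# association between discovered clusters and affection status, the kind of
# procedure used to control Type I error when claiming trait heterogeneity.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical data: cluster labels from an unsupervised method and a binary trait.
clusters = rng.integers(0, 3, size=200)   # e.g. output of Bayesian Classification
trait = rng.integers(0, 2, size=200)      # affected / unaffected

def assoc_stat(c, t):
    """Chi-square statistic for the cluster-by-trait contingency table."""
    table = np.zeros((c.max() + 1, 2))
    for ci, ti in zip(c, t):
        table[ci, ti] += 1
    return chi2_contingency(table)[0]

observed = assoc_stat(clusters, trait)

# Null distribution: shuffle the trait labels, breaking any cluster-trait link.
n_perm = 1000
null = np.array([assoc_stat(clusters, rng.permutation(trait)) for _ in range(n_perm)])
p_value = (np.sum(null >= observed) + 1) / (n_perm + 1)
print(f"observed stat = {observed:.2f}, permutation p = {p_value:.3f}")
```

Because the trait labels are shuffled independently of the cluster assignments, the empirical p-value reflects how often an association at least as strong as the observed one arises by chance alone.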
62

Automatic Cancer Diagnostic Decision Support System for Gene Expression Domain

Statnikov, Alexander R 29 July 2005
The success of treatment of patients with cancer depends on establishing an accurate diagnosis. To this end, we have built a system called GEMS (Gene Expression Model Selector) for the automated development and evaluation of high-quality cancer diagnostic models and biomarker discovery from microarray gene expression data. In order to determine the best-performing diagnostic methodologies in this domain and equip the system with them, we first conducted a comprehensive evaluation of classification algorithms using 11 cancer microarray datasets. After the system was built, we performed a preliminary evaluation of the system with 5 new datasets. The performance of the models produced automatically by GEMS is comparable to or better than the results obtained by human analysts. Additionally, we performed a cross-dataset evaluation of the system. This involved using one dataset to build a diagnostic model and estimate its future performance, then applying this model and evaluating its performance on a different dataset. We found that models produced by GEMS indeed perform well in independent samples and, furthermore, that the cross-validation performance estimates output by the system approximate well the error obtained by independent validation. GEMS is freely available for download for non-commercial use from http://www.gems-system.org.
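As a hedged illustration of the evaluation protocol a system like GEMS automates, the sketch below cross-validates a linear SVM on synthetic expression-like data, keeping univariate gene selection inside the cross-validation pipeline to avoid optimistic estimates; the dataset, the choice of 50 genes, and the SVM parameters are assumptions made for this example only.

```python
# A minimal sketch of cross-validated classifier evaluation on high-dimensional
# expression-like data, with feature selection inside the CV loop.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Stand-in for a microarray dataset: 100 samples, 2000 "genes".
X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                           random_state=0)

model = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),   # univariate gene filtering
    ("clf", SVC(kernel="linear", C=1.0)),       # linear SVM classifier
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```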
63

Detecting Asthma Exacerbations in a Pediatric Emergency Department

Sanders, David L 17 April 2006
This thesis describes the development and evaluation of a computerized algorithm for detecting patients with acute asthma exacerbations who present to a pediatric emergency department (ED). A rule-based algorithm was designed to collect patient information from the computerized patient record at the time of ED triage. We confirmed the feasibility of this approach through a retrospective analysis. The algorithm was then implemented in the pediatric ED as a real-time asthma detection system. Its performance was evaluated prospectively during a two-month study period on over 3,500 ED patients, of whom 342 had an asthma exacerbation. The system was able to detect patients presenting with acute asthma with high accuracy. Sensitivity was 71.6%, specificity was 97.8%, positive predictive value was 77.0%, and negative predictive value was 97.1%. This research could be applied to detect eligible patients and automatically initiate guidelines for the management of asthma, and it could serve as a model for detecting other conditions that are managed by standardized guidelines in the ED.
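For readers unfamiliar with the reported metrics, the short sketch below shows how sensitivity, specificity, positive predictive value, and negative predictive value are computed from a detector's confusion matrix; the counts used are invented round numbers, not the study's data.

```python
# How a binary detector's performance metrics are defined (hypothetical counts).
def detection_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, PPV and NPV for a binary detector."""
    sensitivity = tp / (tp + fn)   # fraction of true asthma cases flagged
    specificity = tn / (tn + fp)   # fraction of non-asthma visits not flagged
    ppv = tp / (tp + fp)           # probability a flagged patient truly has asthma
    npv = tn / (tn + fn)           # probability an unflagged patient truly does not
    return sensitivity, specificity, ppv, npv

# Example with made-up counts from a triage-time detector.
sens, spec, ppv, npv = detection_metrics(tp=70, fp=20, tn=900, fn=30)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```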
64

Developing Computer-generated PubMed Queries for Identifying Drug-Drug Interaction Content in MEDLINE

Duda, Stephany Norah 13 December 2005
Unwanted drug-drug interactions endanger millions of patients each year and burden families and the hospital system with escalating costs. Computer-based alerting systems are designed to prevent these interactions, yet the knowledge bases that support these systems often contain incomplete, clinically insignificant, and inaccurate drug information that can contribute to false alerts and wasted time. It may be possible to improve the content of these drug interaction databases by facilitating access to new or underused sources of drug-drug interaction information. The National Library of Medicine's MEDLINE database represents a respected source of peer-reviewed biomedical citations that would serve as a valuable source of information if the relevant articles could be pinpointed effectively and efficiently. This research compared the classification capabilities of human-generated and computer-generated Boolean queries as methods for locating articles about drug interactions. Two manual queries were assembled by medical librarians specializing in MEDLINE searches, and three computer-based queries were developed using a decision tree modeled on Support Vector Machine output. All five queries were tested on a corpus of manually-labeled positive and negative drug-drug interaction citations. Overall, the study showed that computer-generated queries derived from automated classification techniques have the potential to perform at least as well as manual queries in identifying drug-drug interaction articles in MEDLINE.
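A hedged sketch of the general idea of deriving Boolean queries from automated classification follows: a shallow decision tree is fit over binary term features, and each root-to-leaf rule reads directly as an AND clause over terms, with multiple positive leaves OR-ed together. The toy corpus, the tree depth, and the use of a plain decision tree (rather than the thesis's SVM-guided procedure on MEDLINE data) are assumptions for illustration.

```python
# Turning automated classification into a Boolean query: fit a shallow decision
# tree over binary term features, then read its term-presence rules as clauses.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

docs = [
    "pharmacokinetic interaction between warfarin and fluconazole",
    "drug interaction study of ketoconazole and midazolam",
    "randomized trial of aspirin for cardiovascular prevention",
    "case report of hepatotoxicity after acetaminophen overdose",
]
labels = [1, 1, 0, 0]   # 1 = relevant to drug-drug interactions

vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, labels)

# Each root-to-leaf path ending in class 1 reads as a conjunction of terms;
# multiple such paths would be OR-ed together to form the final query.
print(export_text(tree, feature_names=vec.get_feature_names_out().tolist()))
```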
65

Models to Predict Survival after Liver Transplantation

Hoot, Nathan Rollins 16 December 2005
In light of the growing scarcity of livers available for transplantation, careful decisions must be made in organ allocation. The current standard of care for transplant decision making is clinical judgment, although a good model to predict survival after liver transplantation could help support these difficult decisions. This thesis explores the use of informatics techniques to improve upon past research in modeling liver transplant survival. A systematic literature review revealed that the use of machine learning techniques has not been thoroughly explored in the field. Several experiments examined different modeling techniques using a database from the United Network for Organ Sharing. A Bayesian network was created to predict survival after liver transplantation, and it exceeded the performance of other models published in the literature. Fully automated feature selection techniques were used to identify the key predictors of liver transplant survival in a large database. A support vector machine was used to show that a relatively simple model, consisting of main effects and two-way interactions, may be adequate for predicting liver transplant survival. A pilot study was conducted to assess the ability of expert clinicians to predict survival, and they tended to perform similarly to the mathematical models. The results lay a foundation for future refinements in survival modeling and for a clinical trial of decision support in liver transplantation.
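One modeling idea mentioned above, a model restricted to main effects and two-way interactions, can be sketched as follows; the synthetic covariates, the linear SVM, and all parameter choices are illustrative assumptions rather than the thesis's actual analysis of the UNOS data.

```python
# Main effects plus pairwise interaction terms ahead of a linear classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.svm import SVC

# Stand-in for recipient/donor covariates and hypothetical survival labels.
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("interact", PolynomialFeatures(degree=2, interaction_only=True,
                                    include_bias=False)),  # main effects + two-way terms
    ("svm", SVC(kernel="linear")),
])

print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```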
66

A Comparison of State-of-the-Art Algorithms for Learning Bayesian Network Structure from Continuous Data

Fu, Lawrence Dachen 19 December 2005
In biomedical and biological domains, researchers typically study continuous data sets. In these domains, an increasingly popular tool for understanding the relationship between variables is Bayesian network structure learning. There are three methods for learning Bayesian network structure from continuous data. The most popular approach is discretizing the data prior to structure learning. Alternative approaches are integrating discretization with structure learning as well as learning directly with continuous data. It is not known which method is best since there has not been a unified study of the three approaches. The purpose of this work was to perform an extensive experimental evaluation of them. For large data sets consisting of originally discrete variables, discretization-based approaches learned the most accurate structures. With smaller sample sizes or data without an underlying discrete mechanism, a method learning directly with continuous data performed best. Also, for some quality metrics, the integrated methods did not provide improvements over simple discretization methods. In terms of time-efficiency, the integrated approaches were the most computationally intensive, while methods from the other categories were the least intensive.
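A small sketch of the first approach described above (discretize, then learn a discrete structure) is given below; the synthetic variables and the choice of three equal-frequency bins are arbitrary assumptions for illustration.

```python
# Discretize continuous variables before handing them to a discrete
# Bayesian network structure learner.
import numpy as np
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer

rng = np.random.default_rng(0)
n = 500
a = rng.normal(size=n)
b = 0.8 * a + rng.normal(scale=0.5, size=n)   # B depends on A
c = rng.normal(size=n)                        # C independent
data = pd.DataFrame({"A": a, "B": b, "C": c})

disc = KBinsDiscretizer(n_bins=3, encode="ordinal", strategy="quantile")
discrete = pd.DataFrame(disc.fit_transform(data), columns=data.columns).astype(int)

# `discrete` now holds categorical levels 0..2 per variable and can be passed to
# any discrete structure-learning routine (e.g. a score-based hill climber); the
# learned edges are then interpreted on the original continuous variables.
print(discrete.head())
```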
67

A Computerized Pneumococcal Vaccination Reminder System in the Adult Emergency Department

Dexheimer, Judith W 07 June 2006
Preventive care measures including vaccinations are underutilized. The Emergency Department (ED) has been recommended as a feasible setting for offering pneumococcal vaccination; however, it is a challenging environment in which to implement a sustainable vaccination program. We designed, implemented, and evaluated a closed-loop, computerized reminder system in the ED that integrated four different computer systems. The computerized triage application screened patients for eligibility with information from the electronic patient record. The computerized provider order entry system reminded clinicians to order the vaccination for eligible patients, and the resulting order was passed to the order tracker application. Documentation of vaccine administration was then added to the electronic patient record. During a two-month prospective study, 433 (51.9%) of patients aged 65 years and older were already up to date with pneumococcal vaccination, and 271 (32.5%) declined to receive the vaccine during their ED visit. In response to the physician prompts, the vaccine order was declined for 94 patients (11.2%), and 37 patients (4.4%) received the vaccine. The computerized reminder system increased the vaccination rate from 51.9% to 56.4% (p < 0.01). The closed-loop informatics solution appears to be a feasible and sustainable model for increasing vaccination rates in a challenging ED environment.
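The kind of triage-time eligibility rule such a reminder system evaluates can be sketched as follows; the record fields and criteria shown are simplified assumptions, not the study's actual decision logic or data model.

```python
# A simplified, hypothetical screening rule for a pneumococcal vaccination reminder.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    age: int
    pneumococcal_vaccine_on_file: bool
    vaccine_declined_this_visit: bool
    contraindication: bool

def needs_vaccination_reminder(p: PatientRecord) -> bool:
    """Return True if the provider should be prompted to order the vaccine."""
    if p.age < 65:                          # hypothetical age-based criterion
        return False
    if p.pneumococcal_vaccine_on_file:      # already up to date per the record
        return False
    if p.vaccine_declined_this_visit or p.contraindication:
        return False
    return True

# Example: an unvaccinated 72-year-old with no contraindication triggers a prompt.
print(needs_vaccination_reminder(PatientRecord(72, False, False, False)))  # True
```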
68

Understanding workflow and information flow in chronic disease care

Unertl, Kim Marie 28 November 2006
Chronic disease care is a significant and growing problem in healthcare today. Current healthcare processes are more focused on dealing with acute episodes of care than on the longitudinal care requirements of chronic disease. Information technology is one tool that may assist in improving chronic disease care. The goal of this study was to examine workflow and information flow in three ambulatory chronic disease clinics and to develop general models of workflow and information flow in chronic disease care. Over 150 hours of direct observation in the three clinics identified elements of the workflow and information flow, the features of existing informatics tools used, and gaps between user needs and existing functionality. Clinic-specific models of workflow, information flow, and temporal flow were developed. Semi-structured interviews were conducted to gather additional data and verify the models. Generalized models were then developed that identified the common aspects of workflow and information flow across all three clinics. Aspects of chronic disease care workflow that are important to address in the design of informatics tools were identified. The study confirmed that there are core similarities across chronic disease domains despite some crucial differences, and it suggested approaches to addressing the unique needs of the chronic disease care environment.
69

In silico evaluation of DNA-pooled allelotyping versus individual genotyping for genome-wide association analysis of complex disease.

Pratap, Siddharth 24 July 2007
Recent advances in single nucleotide polymorphism (SNP) genotyping techniques, public databases, and genomic knowledge via the Human Genome Project and the Haplotype Mapping project (HapMap) allow for true genome-wide association (GWA) analysis for common complex diseases such as heart disease, diabetes, and Alzheimer's disease. A major obstacle in genome-wide association analysis is the prohibitively high cost of projects that require genotyping hundreds, even thousands, of individuals in order to achieve appropriate statistical significance. One potential solution to the prohibitive cost is to combine or pool the DNA of case and control individuals and to use pooled genotyping, or allelotyping, for association analysis by determining the allele frequency differences between case and control populations. While pooling can dramatically increase efficiency by lowering cost and time, it also introduces additional sources of error and noise. In this study, we comparatively examine pooled DNA genotyping versus individual genotyping for genome-wide association analysis of complex disease. Our work has created a system and process that allow for the direct evaluation and comparison of pooled genotyping versus individual genotyping by using and modifying existing bioinformatics tools. Our results show that pooled GWA studies are limited to resolving complex disease with medium to high relative risk ratios. Pooling errors have a very large effect on the overall statistical significance of a pooled GWA. Genotyping errors have a modest effect on both pooled and individual GWA, much smaller in magnitude than the effect of pooling errors.
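A back-of-the-envelope sketch of the pooled-allelotyping idea follows: case and control allele frequencies are estimated from pooled measurements with added pooling error, and their difference is tested. Pool sizes, allele frequencies, and error magnitudes are invented for illustration.

```python
# Pooled allele-frequency comparison with simulated pooling error.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_case, n_ctrl = 500, 500          # chromosomes per pool (hypothetical)
p_case, p_ctrl = 0.30, 0.25        # true risk-allele frequencies

# Sampling variation from forming the pools, plus instrument "pooling error".
f_case = rng.binomial(n_case, p_case) / n_case + rng.normal(0, 0.02)
f_ctrl = rng.binomial(n_ctrl, p_ctrl) / n_ctrl + rng.normal(0, 0.02)

# Two-proportion z-test on the estimated pooled frequencies. Note that this
# naive variance ignores the pooling error itself; a realistic pooled analysis
# must inflate the variance accordingly, which is why pooling errors weigh so
# heavily on the significance of pooled GWA results.
p_bar = (f_case + f_ctrl) / 2
se = np.sqrt(p_bar * (1 - p_bar) * (1 / n_case + 1 / n_ctrl))
z = (f_case - f_ctrl) / se
p_value = 2 * norm.sf(abs(z))
print(f"estimated difference = {f_case - f_ctrl:.3f}, p = {p_value:.3f}")
```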
70

Development and Evaluation of a Prototype System for Automated Analysis of Clinical Mass Spectrometry Data

Fananapazir, Nafeh 31 July 2007
Mass Spectrometry (MS) is emerging as a breakthrough high-throughput technology believed to have powerful potential for producing clinical diagnostic and prognostic models and for identifying relevant disease biomarkers. A major barrier to making mass spectrometry clinically useful, and to exploring its potential in an efficient and reliable manner, is the challenge posed by the analysis of proteomic spectra in order to produce reliable predictive models of disease and clinical outcomes. This thesis describes the development and evaluation of a fully-automated software system (FAST-AIMS), capable of analyzing mass spectra to produce high-quality diagnostic and outcome prediction models.
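As a rough illustration of the kind of pipeline such a system automates, the sketch below reduces synthetic spectra to binned intensity features and cross-validates a linear classifier; the spectra, bin width, and classifier choice are assumptions for this example, not FAST-AIMS internals.

```python
# Reduce raw spectra to fixed-length features (simple m/z binning) and
# cross-validate a classifier on synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_points, n_bins = 80, 5000, 100

# Synthetic intensity profiles; class 1 gets a slightly elevated peak region.
spectra = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_points))
labels = rng.integers(0, 2, size=n_samples)
spectra[labels == 1, 2000:2050] += 1.5

# Bin adjacent m/z points into coarse features (mean intensity per bin).
features = spectra.reshape(n_samples, n_bins, -1).mean(axis=2)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(model, features, labels, cv=5).mean().round(2))
```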
