81

A PRELIMINARY STUDY OF AUTOMATED TOOLS TO MONITOR MICROBIOLOGICAL DATA AND IMPROVE COMPLIANCE WITH HOSPITAL INFECTION CONTROL POLICIES

Carnevale, Randy Joseph 03 August 2007 (has links)
This Master's Thesis project had three objectives: (1) to provide Vanderbilt University Hospital (VUH) with computerized tools for monitoring microbiological data; (2) to provide the VUH Infection Control Service with tools to help monitor and track infection-relevant patient data such as culture results, hospital location, current orders, and contact precautions status; and (3) to initiate studies to improve compliance with VUH contact precautions policies, specifically those for antibiotic-resistant organisms such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococcus (VRE). The project achieved its goals by developing and formatively evaluating the MicroTools suite of programs: MicroParse processes VUH microbiology laboratory reports, MicroDash provides infection control staff with aggregated information on patients with a history of antibiotic-resistant infections, and MicroGram generates antibiograms for VUH clinicians.
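
As a rough illustration of the kind of aggregation step an antibiogram generator such as MicroGram performs, the Python sketch below computes percent susceptibility per organism/antibiotic pair from already-parsed culture results. The record fields and example data are assumptions made for this sketch, not MicroParse's actual output format.

```python
from collections import defaultdict

# Illustrative culture records; field names are assumed, not MicroParse's schema.
culture_results = [
    {"organism": "S. aureus", "antibiotic": "oxacillin", "susceptible": False},
    {"organism": "S. aureus", "antibiotic": "oxacillin", "susceptible": True},
    {"organism": "S. aureus", "antibiotic": "vancomycin", "susceptible": True},
    {"organism": "E. coli", "antibiotic": "ciprofloxacin", "susceptible": True},
]

def antibiogram(results):
    """Percent susceptibility per (organism, antibiotic) pair."""
    counts = defaultdict(lambda: [0, 0])  # key -> [susceptible, total]
    for r in results:
        key = (r["organism"], r["antibiotic"])
        counts[key][1] += 1
        if r["susceptible"]:
            counts[key][0] += 1
    return {key: round(100.0 * s / n, 1) for key, (s, n) in counts.items()}

for (org, abx), pct in antibiogram(culture_results).items():
    print(f"{org} vs {abx}: {pct}% susceptible")
```
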
82

EVALUATION OF A NOVEL TERMINOLOGY TO CATEGORIZE CLINICAL DOCUMENT SECTION HEADERS AND A RELATED CLINICAL NOTE SECTION TAGGER

Denny, Joshua C 03 August 2007 (has links)
The aims of this project are to 1) build and evaluate a terminology that provides categorization labels, or tags, for common segments within clinical documents, and 2) evaluate a tool that parses and labels natural-language clinical documents using the terminology. Clinical documents generally contain many sections and subsections, such as history of present illness, physical examination, and cardiovascular exam. The author developed a section header terminology that models common section names, subsection names, and their relationships. This terminology was built using existing standardized terminologies, textbooks, and a review of over 9,000 clinical notes. The section tagging tool, named SecTag, identifies terminology matches in clinical documents using a combination of linguistic, natural language processing, and machine learning techniques. The evaluation study focused on recognizing sections in 319 randomly chosen history and physical examination notes generated during hospitalizations and outpatient visits. The overall recall and precision were 99% and 96%, respectively, over 16,036 possible sections. Recall and precision for sections not labeled in the document were 97% and 87%, respectively. The system correctly tagged 93% of the section start and end boundaries. SecTag failed to label 160 sections (1%); only 11 of these were headings absent from the terminology that should be added to it. SecTag and its terminology are important first steps toward understanding clinical notes. Future studies are needed to extend the terminology to other clinical note types and to link SecTag to a more in-depth natural language processing system.
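
As a minimal, hypothetical illustration of the terminology-lookup piece of section tagging (SecTag's actual implementation additionally uses NLP and machine learning techniques and is not reproduced here), the sketch below matches a few assumed header strings against a toy terminology:

```python
import re

# Toy excerpt of a section-header terminology: surface forms -> canonical tag.
# These entries are illustrative, not SecTag's actual terminology.
SECTION_TERMS = {
    "history of present illness": "history_present_illness",
    "hpi": "history_present_illness",
    "physical examination": "physical_exam",
    "physical exam": "physical_exam",
    "cardiovascular": "cardiovascular_exam",
    "assessment and plan": "assessment_plan",
}

HEADER_RE = re.compile(r"^\s*([A-Za-z ]{2,60})\s*:")  # e.g. "Physical Exam:"

def tag_sections(note_text):
    """Return (line_number, canonical_tag, header_text) for recognized headers."""
    tags = []
    for i, line in enumerate(note_text.splitlines()):
        m = HEADER_RE.match(line)
        if not m:
            continue
        surface = m.group(1).strip().lower()
        if surface in SECTION_TERMS:
            tags.append((i, SECTION_TERMS[surface], m.group(1).strip()))
    return tags

note = """HPI: 54-year-old man with chest pain.
Physical Exam: Afebrile, normotensive.
Cardiovascular: Regular rate and rhythm.
Assessment and Plan: Admit for observation."""

for line_no, tag, header in tag_sections(note):
    print(line_no, tag, header)
```
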
83

IDENTIFYING HIGH QUALITY MEDLINE ARTICLES AND WEB SITES USING MACHINE LEARNING

Aphinyanaphongs, Yindalon 28 December 2007 (has links)
In this dissertation, I explore the applicability of text categorization machine learning methods to identify clinically pertinent and evidence-based articles in the literature and web pages on the internet. In the first series of experiments, I found that text categorization techniques identify high-quality articles in internal medicine in the content categories of prognosis, diagnosis, etiology, and treatment better than PubMed's Clinical Query Filters. In a second set of experiments, I established that the text categorization models generalized both to time periods outside the training set and to areas outside of internal medicine, including pediatrics, oncology, and surgery. My third set of experiments revealed that text categorization models built for a specific purpose identified articles better than both bibliometric measures (number of citations and impact factor) and web-based measures (Google PageRank, Yahoo WebRanks, and total web page hit count). In the fourth set of experiments, I built models for purpose, format, and additional content categories from a labeled gold standard, and these models have high discriminatory power. Furthermore, we built a system called EBMSearch that applies these models across all of MEDLINE. Finally, I extended these methods to the web and built the first validated models that identify websites making false cancer treatment claims, outperforming previous unvalidated models and PageRank by 30% in area under the receiver operating characteristic curve. In conclusion, machine learning-based text categorization methods provide a powerful framework for identifying clinically applicable articles in the medical literature and on the Internet.
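
For readers unfamiliar with text categorization pipelines of this kind, the sketch below shows the general shape of a bag-of-words SVM classifier in scikit-learn. The toy documents, labels, and parameters are placeholders for illustration, not the gold standards or models used in the dissertation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Placeholder corpus: article title/abstract text, 1 = high-quality treatment
# article, 0 = not. The real experiments used expert-derived gold standards;
# these labels are purely illustrative.
texts = [
    "Randomized controlled trial of drug X versus placebo in heart failure ...",
    "Case report: unusual presentation of drug X toxicity ...",
    "Double-blind multicenter trial of statin therapy after stroke ...",
    "Narrative review of historical treatments for hypertension ...",
]
labels = [1, 0, 1, 0]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LinearSVC(C=1.0),
)

# With a realistically sized corpus, the cross-validated score is the kind of
# figure that would be compared against PubMed's Clinical Query Filters.
scores = cross_val_score(model, texts, labels, cv=2)
print("cross-validated accuracy:", scores.mean())
```
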
84

A System to Monitor and Improve Medication Safety in the Setting of Acute Kidney Injury

McCoy, Allison Beck 25 April 2008 (has links)
Clinical decision support systems can decrease common errors related to inadequate dosing of nephrotoxic or renally cleared drugs. Within the computerized provider order entry (CPOE) system, we developed, implemented, and evaluated a set of interventions with varying levels of workflow intrusiveness to continuously monitor for and alert providers about acute kidney injury. Passive alerts appeared as persistent text within the CPOE system and on rounding reports, requiring no provider response. Exit check alerts interrupted the provider at the end of the CPOE session, requiring the provider to modify or discontinue the drug order, assert the current dose as correct, or defer the alert. In the intervention period, the proportion of drugs modified or discontinued within 24 hours increased from 35.7% to 50.9%, and the median time to modification or discontinuation decreased from 27.1 hours to 12.9 hours. Providers delayed decisions by repeatedly deferring the alerts. Future enhancements will address frequent deferrals by involving other team members in making mid-regimen prescription decisions.
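
A minimal sketch of the kind of monitoring logic involved is shown below; the creatinine threshold, drug list, and field names are assumptions made for illustration, not the rules used in the deployed VUH system.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: thresholds and drug list are assumed for this sketch.
NEPHROTOXIC_OR_RENALLY_CLEARED = {"gentamicin", "vancomycin", "enoxaparin"}

@dataclass
class Patient:
    active_drugs: List[str]
    creatinine_history: List[float]  # mg/dL, oldest first

def has_acute_kidney_injury(creatinine_history, rise_threshold=0.5):
    """Flag AKI when the latest creatinine rose >= threshold over baseline."""
    if len(creatinine_history) < 2:
        return False
    baseline = min(creatinine_history[:-1])
    return creatinine_history[-1] - baseline >= rise_threshold

def aki_drug_alerts(patient):
    """Return active drugs that should trigger a passive or exit-check alert."""
    if not has_acute_kidney_injury(patient.creatinine_history):
        return []
    return [d for d in patient.active_drugs
            if d.lower() in NEPHROTOXIC_OR_RENALLY_CLEARED]

p = Patient(active_drugs=["Vancomycin", "Lisinopril"],
            creatinine_history=[0.9, 1.0, 1.6])
print(aki_drug_alerts(p))  # ['Vancomycin'] -> modify, discontinue, assert, or defer
```
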
85

The Tarsqi Toolkit's Recognition of Temporal Expressions within Medical Documents

Ong, Ferdo Renardi 25 May 2010 (has links)
To diagnose and treat patients, clinicians concern themselves with a complex set of events. They need to know when, for how long, and in what sequence certain events occur. Automated extraction of temporal meaning from electronic medical records has been a particularly challenging natural language processing task. Currently, the Tarsqi Toolkit (TTK) is the only complete, freely available open-source software package for the temporal ordering of events within narrative free-text documents. This project focused on the TTK's ability to recognize temporal expressions within Veterans Affairs (VA) electronic medical documents. A baseline evaluation of the TTK's performance on the TimeBank v1.2 corpus of 183 news articles and on a set of 100 VA hospital admission and discharge notes showed F-measures of 0.53 and 0.15, respectively. Project development included correcting missed and partially recognized temporal expressions and expanding the toolkit's coverage of time expressions found in medical documents. Post-modification, the TTK achieved an F-measure of 0.71 on a different set of 100 VA hospital admission and discharge notes. Future work will evaluate the TTK's recognition of temporal expressions within additional sets of medical documents.
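
To make the evaluation concrete, here is a toy temporal-expression recognizer together with the standard F-measure computation over character spans. The patterns are a drastic simplification (the TTK uses a far richer rule set), and the note and gold spans are hypothetical.

```python
import re

# A handful of illustrative patterns; not the TTK's actual rules.
TIMEX_RE = re.compile(
    r"\b(\d{1,2}/\d{1,2}/\d{2,4}"          # 03/21/2009
    r"|(?:January|February|March|April|May|June|July|August|September|"
    r"October|November|December)\s+\d{1,2}(?:,\s*\d{4})?"
    r"|yesterday|today|tomorrow"
    r"|\d+\s+(?:days?|weeks?|months?|years?)\s+ago)\b",
    re.IGNORECASE,
)

def find_timex_spans(text):
    """Return the set of (start, end) character spans of recognized expressions."""
    return {(m.start(), m.end()) for m in TIMEX_RE.finditer(text)}

def f_measure(predicted, gold):
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

note = "Admitted on 03/21/2009; symptoms began 2 days ago and worsened overnight."
gold = {(12, 22), (39, 49), (63, 72)}  # hypothetical gold spans; "overnight" is missed
pred = find_timex_spans(note)
print(pred, "F =", round(f_measure(pred, gold), 2))
```
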
86

Improving Biomedical Information Retrieval Citation Metrics Using Machine Learning

Fu, Lawrence Dachen 15 December 2008 (has links)
The evaluation of the literature is an increasingly integral part of biomedical research. Clinicians, researchers, librarians, and others routinely use the literature to answer questions for clinical care and research. The size of the literature prevents manual review of all documents, so automated methods for identifying high-quality articles are a necessary filtering step. This work aimed to improve the performance and usability of existing tools with machine learning methods. First, evaluation methods for journals, articles, and websites were studied to determine whether their performance varied widely across topics. Second, the feasibility of predicting article citation count was examined by training Support Vector Machine (SVM) models on content and bibliometric features. Third, SVM models were used to automatically classify instrumental and non-instrumental citations.
87

ALGORITHMS FOR DISCOVERY OF MULTIPLE MARKOV BOUNDARIES: APPLICATION TO THE MOLECULAR SIGNATURE MULTIPLICITY PROBLEM

Statnikov, Alexander Romanovich 06 December 2008 (has links)
Algorithms for discovery of a Markov boundary from data constitute one of the most important recent developments in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight into local causal structure. Even though there is always a single Markov boundary of the response variable in faithful distributions, distributions that violate the intersection property may have multiple Markov boundaries. Such distributions are abundant in practical data-analytic applications, and there are several reasons why it is important to induce all Markov boundaries from such data. However, there are currently no practical algorithms that can provably accomplish this task. To this end, I propose a novel generative algorithm (termed TIE*) that can discover all Markov boundaries from data. The generative algorithm can be instantiated to discover Markov boundaries independently of the data distribution. I prove the correctness of the generative algorithm and provide several admissible instantiations. The new algorithm is then applied to identify the set of maximally predictive and non-redundant molecular signatures. TIE* identifies exactly the set of true signatures in simulated distributions and yields signatures with significantly better predictivity and reproducibility than prior algorithms in human microarray gene expression datasets. The results of this thesis also shed light on the causes of the molecular signature multiplicity phenomenon.
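
The following is a hedged sketch of the generative idea only: find one highly predictive, non-redundant subset, then re-run the selector with parts of known subsets withheld to surface alternative, equally predictive subsets. The actual TIE* relies on sound Markov boundary inducers and conditional-independence criteria; the L1-based selector and accuracy tolerance below are stand-ins for illustration.

```python
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def select_subset(X, y, allowed):
    """Stand-in 'Markov boundary' inducer: L1 logistic regression on allowed columns."""
    allowed = sorted(allowed)
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X[:, allowed], y)
    return frozenset(allowed[i] for i in np.flatnonzero(clf.coef_[0]))

def predictivity(X, y, subset):
    """Cross-validated accuracy of a classifier restricted to the subset."""
    if not subset:
        return 0.0
    cols = sorted(subset)
    return cross_val_score(LogisticRegression(solver="liblinear"), X[:, cols], y, cv=3).mean()

def tie_star_sketch(X, y, tol=0.02):
    reference = select_subset(X, y, range(X.shape[1]))
    target = predictivity(X, y, reference)
    boundaries = {reference}
    # Withhold non-empty subsets of the reference boundary and re-select.
    for r in range(1, len(reference) + 1):
        for removed in itertools.combinations(reference, r):
            allowed = set(range(X.shape[1])) - set(removed)
            candidate = select_subset(X, y, allowed)
            if candidate and predictivity(X, y, candidate) >= target - tol:
                boundaries.add(candidate)
    return boundaries

# Toy data with two nearly interchangeable copies of the informative variable,
# so at least two "signatures" predict y about equally well.
rng = np.random.default_rng(0)
signal = rng.normal(size=300)
X = np.column_stack([signal, signal + 0.01 * rng.normal(size=300), rng.normal(size=300)])
y = (signal > 0).astype(int)
print(tie_star_sketch(X, y))
```
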
88

BUILDING AN ONLINE COMMUNITY TO SUPPORT LOCAL CANCER SURVIVORSHIP: COMBINING INFORMATICS AND PARTICIPATORY ACTION RESEARCH FOR COLLABORATIVE DESIGN

Weiss, Jacob Berner 14 April 2009 (has links)
The purpose of this research was to evaluate the collaborative design of an online community for cancer survivorship in Middle Tennessee. The four primary aims of this qualitative study were to define the local cancer survivorship community, identify its strengths and opportunities for improvement, build an online community to address these opportunities, and evaluate the collaborative design and development of this online community. A total of 43 cancer survivors, family members, health-care professionals, and community professionals participated in key informant interviews, sense-of-community surveys, and the collaborative design of the online community over a one-year period. The results of this study include a formal definition of the local cancer survivorship community and illustrate how support for cancer survivors extends throughout the local community. Six opportunities were identified to improve the sense of community in the local cancer survivorship community, and an online community was successfully developed to address them. The evaluation of the collaborative process resulted in a seven-element framework for the discovery and development of community partnerships for informatics design. These results demonstrate the potential for an informatics-based approach to bring local communities together to improve supportive care for cancer survivors. The findings call for a new initiative for cancer survivorship that uses emerging web-based technologies to improve collaborative cancer care and quality of life in local communities.
89

DESIGN AND IMPLEMENTATION OF A COMPUTERIZED ASTHMA MANAGEMENT SYSTEM IN THE PEDIATRIC EMERGENCY DEPARTMENT

Dexheimer, Judith Wehling 15 April 2011 (has links)
Pediatric asthma exacerbations account for more than 1.8 million emergency department (ED) visits annually. Guidelines can decrease variability in asthma treatment and improve clinical outcomes; however, guideline adherence is inadequate. In a two-phase study, we evaluated a computerized asthma detection system, based upon the NHLBI guidelines and evidence-based practice, to improve care in a pediatric ED. Phase I used an automatic disease detection system to identify eligible patients and then printed the paper-based guideline. Phase II implemented a fully computerized asthma management system. Although the difference in time to disposition decision was not statistically significant, we believe this computerized management system is sustainable and can help standardize asthma care. The system represents a workflow-oriented, sustainable approach in a challenging environment.
90

A Machine Learning-Based Information Retrieval Framework for Molecular Medicine Predictive Models

Wehbe, Firas Hazem 16 April 2011 (has links)
Molecular medicine encompasses the application of molecular biology techniques and knowledge to the prevention, diagnosis, and treatment of diseases and disorders. Statistical and computational models can predict clinical outcomes, such as prognosis or response to treatment, based on the results of molecular assays. For advances in molecular medicine to translate into clinical results, clinicians and translational researchers need up-to-date access to high-quality predictive models. The large number of such models reported in the literature is growing at a pace that overwhelms the human ability to manually assimilate this information. Therefore, the important problem of retrieving and organizing the vast amount of published information within this domain needs to be addressed. The inherent complexity of this domain and the fast pace of scientific discovery make this problem particularly challenging.

This dissertation describes a framework for the retrieval and organization of clinical bioinformatics predictive models. A semantic analysis of this domain was performed, and it informed the design of the framework. Specifically, it allowed the development of a specialized annotation scheme for published articles that can be used for meaningful organization and for indexing and efficient retrieval. This annotation scheme was codified in an annotation form and accompanying guidelines document that were used by multiple human experts to annotate over 1,000 articles. These datasets were then used to train and test support vector machine (SVM) classifiers. The classifiers were designed to provide a scalable mechanism to replicate the human experts' ability (1) to retrieve relevant MEDLINE articles and (2) to annotate these articles using the specialized annotation scheme. The machine learning classifiers showed very good predictive ability that was also shown to generalize to different disease domains and to datasets annotated by independent experts. The experiments highlighted the need to provide unambiguous operational definitions of the complex concepts used for semantic annotations. The impact of the semantic definitions on the quality of manual annotations and on the performance of the machine learning classifiers is also discussed.
