41

IDENTIFYING HIGH QUALITY MEDLINE ARTICLES AND WEB SITES USING MACHINE LEARNING

Aphinyanaphongs, Yindalon 28 December 2007 (has links)
In this dissertation, I explore the applicability of text categorization machine learning methods to identifying clinically pertinent and evidence-based articles in the literature and web pages on the Internet. In the first series of experiments, I found that text categorization techniques identify high-quality articles in internal medicine in the content categories of prognosis, diagnosis, etiology, and treatment better than the Clinical Query Filters of PubMed. In a second set of experiments, I established that the text categorization models generalized both to time periods outside the training set and to areas outside internal medicine, including pediatrics, oncology, and surgery. My third set of experiments revealed that text categorization models built for a specific purpose identified articles better than both bibliometric measures (number of citations and impact factor) and web-based measures (Google PageRank, Yahoo WebRanks, and total web page hit count). In the fourth set of experiments, I built models with high discriminatory power for purpose, format, and additional content categories from a labeled gold standard. Furthermore, I built a system called EBMSearch that applies these models to all of MEDLINE. Finally, I extended these methods to the web and built the first validated models that identify websites making false cancer treatment claims, outperforming previous unvalidated models and PageRank by 30% in area under the receiver operating characteristic (ROC) curve. In conclusion, machine learning-based text categorization methods provide a powerful framework for identifying clinically applicable articles in the medical literature and on the Internet.
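The comparisons above are reported as area under the ROC curve. As a minimal sketch (not the dissertation's evaluation code), AUC can be computed directly from classifier scores via the Mann-Whitney formulation — the probability that a randomly chosen positive article is scored above a randomly chosen negative one:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.
    Counts score pairs where a positive outranks a negative; ties count 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# e.g. auc([0.9, 0.5], [0.5, 0.1]) -> 0.875
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why differences of 30% in this measure are substantial.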
42

A System to Monitor and Improve Medication Safety in the Setting of Acute Kidney Injury

McCoy, Allison Beck 25 April 2008 (has links)
Clinical decision support systems can decrease common errors related to inadequate dosing of nephrotoxic or renally cleared drugs. Within the computerized provider order entry (CPOE) system, we developed, implemented, and evaluated a set of interventions with varying levels of workflow intrusiveness to continuously monitor for acute kidney injury and alert providers. Passive alerts appeared as persistent text within the CPOE system and on rounding reports, requiring no provider response. Exit-check alerts interrupted the provider at the end of the CPOE session, requiring the provider to modify or discontinue the drug order, assert the current dose as correct, or defer the alert. In the intervention period, the proportion of drug orders modified or discontinued within 24 hours increased from 35.7% to 50.9%, and the median time to modification or discontinuation decreased from 27.1 hours to 12.9 hours. However, providers delayed decisions by repeatedly deferring the alerts. Future enhancements will address frequent deferrals by involving other team members in mid-regimen prescription decisions.
43

The Tarsqi Toolkit's Recognition of Temporal Expressions within Medical Documents

Ong, Ferdo Renardi 25 May 2010 (has links)
To diagnose and treat patients, clinicians concern themselves with a complex set of events. They need to know when, for how long, and in what sequence certain events occur. Automated extraction of temporal meaning from electronic medical records has been particularly challenging in natural language processing. Currently, the Tarsqi Toolkit (TTK) is the only complete, freely available open-source software package for the temporal ordering of events within narrative free-text documents. This project focused on the TTK's ability to recognize temporal expressions within Veterans Affairs (VA) electronic medical documents. A baseline evaluation of the TTK's performance on the TimeBank v1.2 corpus of 183 news articles and on a set of 100 VA hospital admission and discharge notes showed F-measures of 0.53 and 0.15, respectively. Project development included correcting missed and partially recognized temporal expressions and expanding the TTK's coverage of time expressions in medical documents. Post-modification, the TTK achieved an F-measure of 0.71 on a different set of 100 VA hospital admission and discharge notes. Future work will evaluate the TTK's recognition of temporal expressions within additional sets of medical documents.
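The F-measures above combine precision and recall over recognized temporal expressions. A minimal sketch of the metric, assuming exact-match scoring of predicted spans against gold spans (the evaluation's actual matching criteria are not shown here):

```python
def f_measure(gold, predicted):
    """Precision, recall, and F1 for exact-match span recognition.
    `gold` and `predicted` are sets of (start, end) character offsets."""
    tp = len(gold & predicted)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

For example, 3 correct predictions out of 5 against 4 gold spans gives precision 0.6, recall 0.75, and F1 of about 0.67.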
44

Improving Biomedical Information Retrieval Citation Metrics Using Machine Learning

Fu, Lawrence Dachen 15 December 2008 (has links)
The evaluation of the literature is an increasingly integral part of biomedical research. Clinicians, researchers, librarians, and others routinely use the literature to answer questions for clinical care and research. The size of the literature prevents manual review of all documents, so automated methods for identifying high-quality articles are necessary as a major filtering step. This work aimed to improve the performance and usability of existing tools with machine learning methods. First, evaluation methods for journals, articles, and websites were studied to determine whether their performance varied widely across topics. Second, the feasibility of predicting article citation count was examined by training Support Vector Machine (SVM) models on content and bibliometric features. Third, SVM models were used to automatically classify citations as instrumental or non-instrumental.
45

ALGORITHMS FOR DISCOVERY OF MULTIPLE MARKOV BOUNDARIES: APPLICATION TO THE MOLECULAR SIGNATURE MULTIPLICITY PROBLEM

Statnikov, Alexander Romanovich 06 December 2008 (has links)
Algorithms for discovery of a Markov boundary from data constitute one of the most important recent developments in machine learning, primarily because they offer a principled solution to the variable/feature selection problem and give insight into local causal structure. Even though there is always a single Markov boundary of the response variable in faithful distributions, distributions with violations of the intersection property may have multiple Markov boundaries. Such distributions are abundant in practical data-analytic applications, and there are several reasons why it is important to induce all Markov boundaries from such data. However, there are currently no practical algorithms that can provably accomplish this task. To this end, I propose a novel generative algorithm (termed TIE*) that can discover all Markov boundaries from data. The generative algorithm can be instantiated to discover Markov boundaries independently of the data distribution. I prove correctness of the generative algorithm and provide several admissible instantiations. The new algorithm is then applied to identify the set of maximally predictive and non-redundant molecular signatures. TIE* identifies exactly the set of true signatures in simulated distributions and yields signatures with significantly better predictivity and reproducibility than prior algorithms in human microarray gene expression datasets. The results of this thesis also shed light on the causes of the molecular signature multiplicity phenomenon.
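The generative idea behind TIE* can be illustrated with a toy sketch: a base inducer finds one minimal predictive variable subset, and the algorithm then re-runs the inducer with members of already-found boundaries hidden, keeping any alternative subset that is equally predictive. The brute-force inducer and equivalence check below are illustrative stand-ins, not the admissible instantiations the dissertation proves correct:

```python
from itertools import combinations

def is_function(rows, cols, y):
    """True if y is fully determined by the values of `cols` (noise-free toy data)."""
    seen = {}
    for r, label in zip(rows, y):
        key = tuple(r[c] for c in cols)
        if seen.setdefault(key, label) != label:
            return False
    return True

def markov_boundary(rows, y, allowed):
    """Toy base inducer: smallest subset of `allowed` that determines y."""
    for k in range(len(allowed) + 1):
        for cols in combinations(sorted(allowed), k):
            if is_function(rows, cols, y):
                return set(cols)
    return set(allowed)

def tie_star(rows, y, variables):
    """TIE*-style generative loop (simplified): re-run the base inducer with
    members of already-found boundaries hidden; keep equally predictive results."""
    boundaries = [markov_boundary(rows, y, variables)]
    for b in boundaries:                 # grows as new boundaries are found
        for v in sorted(b):
            cand = markov_boundary(rows, y, variables - {v})
            if is_function(rows, tuple(sorted(cand)), y) and cand not in boundaries:
                boundaries.append(cand)
    return boundaries
```

With a duplicated variable (X2 a copy of X1) and response Y = X1 XOR X3, the sketch recovers both boundaries {X1, X3} and {X2, X3} — a minimal instance of signature multiplicity from an intersection-property violation.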
46

BUILDING AN ONLINE COMMUNITY TO SUPPORT LOCAL CANCER SURVIVORSHIP: COMBINING INFORMATICS AND PARTICIPATORY ACTION RESEARCH FOR COLLABORATIVE DESIGN

Weiss, Jacob Berner 14 April 2009 (has links)
The purpose of this research was to evaluate the collaborative design of an online community for cancer survivorship in Middle Tennessee. The four primary aims of this qualitative study were to define the local cancer survivorship community, identify its strengths and opportunities for improvement, build an online community to address these opportunities, and evaluate the collaborative design and development of this online community. A total of 43 cancer survivors, family members, health-care professionals, and community professionals participated in key informant interviews, sense-of-community surveys, and the collaborative design of the online community over a one-year period. The results of this study include a formal definition of the local cancer survivorship community and illustrate how support for cancer survivors extends throughout the local community. Six opportunities were identified to improve the sense of community in the local cancer survivorship community, and an online community was successfully developed to address them. The evaluation of the collaborative process resulted in a seven-element framework for the discovery and development of community partnerships for informatics design. These results demonstrate the potential for an informatics-based approach to bring local communities together to improve supportive care for cancer survivors. Implications of the findings call for a new initiative for cancer survivorship that uses emerging web-based technologies to improve collaborative cancer care and quality of life in local communities.
47

DESIGN AND IMPLEMENTATION OF A COMPUTERIZED ASTHMA MANAGEMENT SYSTEM IN THE PEDIATRIC EMERGENCY DEPARTMENT

Dexheimer, Judith Wehling 15 April 2011 (has links)
Pediatric asthma exacerbations account for >1.8 million ED visits annually. Guidelines can decrease variability in asthma treatment and improve clinical outcomes; however, guideline adherence is inadequate. We evaluated a computerized asthma detection system based upon the NHLBI guidelines and evidence-based practice to improve care in a pediatric ED in a two-phase study. In Phase I, an automatic disease-detection system identified eligible patients and printed the paper-based guideline; in Phase II, we implemented a fully computerized asthma management system. Although the reduction in time to disposition decision was not statistically significant, we believe the system offers a sustainable way to help standardize asthma care. The computerized asthma management system represents a workflow-oriented, sustainable approach in a challenging environment.
48

A Machine Learning-Based Information Retrieval Framework for Molecular Medicine Predictive Models

Wehbe, Firas Hazem 16 April 2011 (has links)
Molecular medicine encompasses the application of molecular biology techniques and knowledge to the prevention, diagnosis, and treatment of diseases and disorders. Statistical and computational models can predict clinical outcomes, such as prognosis or response to treatment, based on the results of molecular assays. For advances in molecular medicine to translate into clinical results, clinicians and translational researchers need up-to-date access to high-quality predictive models. The number of such models reported in the literature is growing at a pace that overwhelms the human ability to manually assimilate this information. Therefore, the important problem of retrieving and organizing the vast amount of published information within this domain needs to be addressed. The inherent complexity of the domain and the fast pace of scientific discovery make this problem particularly challenging.

This dissertation describes a framework for the retrieval and organization of clinical bioinformatics predictive models. A semantic analysis of the domain was performed and informed the design of the framework. Specifically, it allowed the development of a specialized scheme for annotating published articles that can be used for meaningful organization and for indexing and efficient retrieval. This annotation scheme was codified in an annotation form and accompanying guidelines document that multiple human experts used to annotate over 1,000 articles. These datasets were then used to train and test support vector machine (SVM) classifiers. The classifiers were designed to provide a scalable mechanism to replicate human experts' ability (1) to retrieve relevant MEDLINE articles and (2) to annotate these articles using the specialized annotation scheme.
The machine learning classifiers showed very good predictive ability, which was also shown to generalize to different disease domains and to datasets annotated by independent experts. The experiments highlighted the need to provide unambiguous operational definitions of the complex concepts used for semantic annotations. The impact of the semantic definitions on the quality of manual annotations and on the performance of the classifiers is discussed.
49

Exploring Adverse Drug Effect Discovery from Data Mining of Clinical Notes

Smith, Joshua Carl 05 July 2012 (has links)
Many medications have potentially serious adverse effects that are detected only after FDA approval. After 80 million people worldwide had received prescriptions for the drug rofecoxib (Vioxx), its manufacturer withdrew it from the marketplace in 2004 when epidemiological data showed that it increased the risk of heart attack and stroke. Recently, the FDA warned that the commonly prescribed statin drug class (e.g., Lipitor, Zocor, Crestor) may increase the risk of memory loss and Type 2 diabetes. These incidents illustrate the difficulty of identifying adverse effects of prescription medications during premarketing trials; some types of adverse effects (e.g., those requiring years of exposure) can be detected only through post-marketing surveillance. We explored the use of data mining on clinical notes to detect novel adverse drug effects. We constructed a knowledge base from the UMLS and other data sources that could classify drug-finding pairs as known adverse effects (drug causes finding), known indications (drug treats or prevents finding), or unknown relationships. We used natural language processing (NLP) to extract current medications and clinical findings (including diseases) from 360,000 de-identified history and physical examination (H&P) notes. We identified 35,000 interesting co-occurrences of medication and finding concepts that exceeded threshold probabilities of appearance, involving ~600 drugs and ~2,000 findings. Among the identified pairs are several that the FDA recognized as harmful through postmarketing surveillance, including rofecoxib and heart attack, rofecoxib and stroke, statins and diabetes, and statins and memory loss. Our preliminary results illustrate both the problems and the potential of data mining of clinical notes for adverse drug effect discovery.
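The abstract does not state how its co-occurrence thresholds were defined; one common way to operationalize "exceeded threshold probabilities of appearance" is a lift-style comparison of observed against expected co-occurrence. The counts and thresholds below are illustrative, not the dissertation's actual criteria:

```python
from collections import Counter

def flag_pairs(notes, min_lift=2.0, min_count=3):
    """Flag drug-finding pairs that co-occur across documents more often
    than expected under independence. `notes` is a list of
    (set_of_drugs, set_of_findings) per document."""
    n = len(notes)
    drug_ct, find_ct, pair_ct = Counter(), Counter(), Counter()
    for drugs, findings in notes:
        drug_ct.update(drugs)
        find_ct.update(findings)
        pair_ct.update((d, f) for d in drugs for f in findings)
    flagged = []
    for (d, f), c in pair_ct.items():
        expected = drug_ct[d] * find_ct[f] / n   # count expected if independent
        if c >= min_count and c / expected >= min_lift:
            flagged.append((d, f))
    return flagged
```

Pairs that survive such a filter would still mix known indications, known adverse effects, and unknown relationships, which is why the knowledge base classification step above is needed downstream.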
50

EVALUATING THE PATIENT-CENTERED AUTOMATED SMS TAGGING ENGINE (PASTE): NATURAL LANGUAGE PROCESSING APPLIED TO PATIENT-GENERATED SMS TEXT MESSAGES

Stenner, Shane P. 27 July 2011 (has links)
Pilot studies have demonstrated the feasibility of using mobile technologies as a platform for electronic patient-centered medication management. Such tools may be used to intercept drug interactions, stop unintentional medication overdoses, prevent improper scheduling of medications, and gather real-time data about symptoms, outcomes, and activities of daily living. Unprompted text-message communication with patients using natural language could engage patients in their healthcare but presents unique natural language processing (NLP) challenges. A major technical challenge is to process text messages into an unambiguous, computable format that a subsequent medication management system can use. NLP challenges unique to text-message communication include common use of ad hoc abbreviations, acronyms, phonetic lingo, improper auto-spell correction, and lack of formal punctuation. While models exist for text-message normalization, including dictionary substitution and statistical machine translation approaches, we are not aware of any publications that describe an approach specific to patient text messages or to text messages in the domain of medicine. To allow two-way interaction with patients using mobile phone-based short message service (SMS) technology, we developed the Patient-centered Automated SMS Tagging Engine (PASTE). The PASTE web service uses NLP methods, custom lexicons, and existing knowledge sources to extract and tag medication concepts and action concepts from patient-generated text messages. A pilot evaluation of PASTE using 130 medication messages anonymously submitted by 16 volunteers established the feasibility of extracting medication information from patient-generated medication messages and suggested improvements.
A subsequent evaluation study using 700 patient-generated text messages from 14 teens and 5 adults demonstrated improved performance over the pilot version of PASTE, with F-measures over 90% for medication concepts and medication action concepts when compared to manually tagged messages. We report recall and precision of PASTE for extracting and tagging medication information from patient messages.
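PASTE's actual lexicons and pipeline are not shown in the abstract; the sketch below illustrates only the dictionary-substitution normalization step it mentions, followed by simple lexicon lookup. The abbreviation, drug, and action lists are hypothetical placeholders:

```python
import re

# Hypothetical lexicons for illustration -- not PASTE's actual resources.
ABBREV = {"2day": "today", "b4": "before", "tabs": "tablets"}
DRUGS = {"lisinopril", "metformin"}
ACTIONS = {"took", "missed", "stopped"}

def tag_message(msg):
    """Normalize a patient SMS via dictionary substitution, then tag
    medication and action concepts by lexicon lookup."""
    tokens = [ABBREV.get(t, t) for t in re.findall(r"[a-z0-9]+", msg.lower())]
    tags = []
    for t in tokens:
        if t in DRUGS:
            tags.append((t, "MED"))
        elif t in ACTIONS:
            tags.append((t, "ACTION"))
    return tokens, tags
```

For instance, "Took 2 tabs of metformin 2day" normalizes to "took 2 tablets of metformin today" and yields a MED tag for the drug and an ACTION tag for the verb; a real system would also need to handle the misspellings and auto-correct artifacts described above.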
