About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
71

Engineering an EMR System in the Developing World: Necessity is the Mother of Invention

Douglas, Gerald Paul 14 May 2009 (has links)
While Electronic Medical Record (EMR) systems continue to improve the efficacy of healthcare delivery in the West, they have yet to be widely deployed in the developing world, where more than 90% of the global disease burden exists. The benefits afforded by an EMR notwithstanding, there is some skepticism regarding the feasibility of operationalizing an EMR system in a low-resource setting. This dissertation challenges these preconceptions and advances the understanding of the problems faced when implementing EMR systems to support healthcare delivery in a developing-world setting. Our methodology relies primarily on eight years of in-field experimentation and study. To facilitate a better understanding of the needs and challenges, we created a pilot system in a large government central hospital in Malawi, Africa. Learning from the pilot, we developed and operationalized a point-of-care EMR system for managing the care and treatment of patients receiving antiretroviral therapy, which we put forth as a demonstration of feasibility in a developing-world setting. The pilot identified many unique challenges of healthcare delivery in the developing world, and reinforced the need to engineer solutions from scratch rather than blindly transplant systems developed in and for the West. Three novel technologies were developed over the course of our study, the most significant of which is the touchscreen clinical workstation appliance. Each novel technology and its contribution toward successful implementation is described in the context of both an engineering and a risk-management framework. A small comparative study to address data quality concerns associated with a point-of-care approach concluded that there was no significant difference in the accuracy of data collected through the use of a prototype point-of-care system compared to that of data entered retrospectively from paper records.
We conclude by noting that while feasibility has been demonstrated, the greatest challenge to sustainability is the lack of financial resources to monitor and support EMR systems once they are in place.
72

SPEECH TO CHART: SPEECH RECOGNITION AND NATURAL LANGUAGE PROCESSING FOR DENTAL CHARTING

Irwin, Regina (Jeannie) Yuhaniak 11 January 2010 (has links)
Typically, when using practice management systems (PMS), dentists perform data entry by utilizing an assistant as a transcriptionist. This prevents dentists from interacting directly with the PMSs. Speech recognition interfaces can provide the solution to this problem. Existing speech interfaces of PMSs are cumbersome and poorly designed. In dentistry, there is a desire and need for a usable natural language interface for clinical data entry. Objectives: (1) evaluate the efficiency, effectiveness, and user satisfaction of the speech interfaces of four dental PMSs; (2) develop and evaluate a speech-to-chart prototype for charting naturally spoken dental exams. Methods: We evaluated the speech interfaces of four leading PMSs. We manually reviewed the capabilities of each system and then had 18 dental students chart 18 findings via speech in each of the systems. We measured time, errors, and user satisfaction. Next, we developed and evaluated a speech-to-chart prototype containing the following components: a speech recognizer; a post-processor for error correction; an NLP application (ONYX); and a graphical chart generator. We evaluated the accuracy of the speech recognizer and the post-processor. We then performed a summative evaluation of the entire system. Our prototype charted 12 hard tissue exams. We compared the charted exams to reference standard exams charted by two dentists. Results: Of the four systems, only two allowed both hard tissue and periodontal charting via speech. All interfaces required using specific commands directly comparable to using a mouse. The average time to chart the nine hard tissue findings was 2:48, and the nine periodontal findings was 2:06. There was an average of 7.5 errors per exam. We created a speech-to-chart prototype that supports natural dictation with no structured commands. On manually transcribed exams, the system performed with an average 80% accuracy.
The average time to chart a single hard tissue finding with the prototype was 7.3 seconds. An improved discourse processor will greatly enhance the prototype's accuracy. Conclusions: The speech interfaces of existing PMSs are cumbersome, require specific speech commands, and produce several errors per exam. We successfully created a speech-to-chart prototype that charts hard tissue findings from naturally spoken dental exams.
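Scoring a charted exam against a reference standard, as in the summative evaluation above, can be framed as set overlap between finding tuples. A minimal sketch, under the assumption of a hypothetical (tooth, surface, material) finding representation that is not the encoding used in the study:

```python
def chart_accuracy(charted, reference):
    """Fraction of reference-standard findings reproduced in the charted exam."""
    return len(set(charted) & set(reference)) / len(set(reference))

# Hypothetical hard tissue findings as (tooth, surface/condition, material) tuples
reference = [("tooth 3", "MO", "amalgam"), ("tooth 14", "O", "caries"),
             ("tooth 19", "missing", None), ("tooth 30", "crown", "porcelain"),
             ("tooth 8", "F", "composite")]
charted = reference[:4] + [("tooth 9", "F", "composite")]  # one finding mis-charted
acc = chart_accuracy(charted, reference)  # 4 of 5 reference findings matched -> 0.8
```

Averaging this per-exam fraction over the 12 test exams would yield the kind of aggregate accuracy figure reported above.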
73

Foundational studies for measuring the impact, prevalence, and patterns of publicly sharing biomedical research data

Piwowar, Heather Alyce 18 May 2010 (has links)
Many initiatives encourage research investigators to share their raw research datasets in hopes of increasing research efficiency and quality. Despite these investments of time and money, we do not have a firm grasp on the prevalence or patterns of data sharing and reuse. Previous survey methods for understanding data sharing patterns provide insight into investigator attitudes, but do not facilitate direct measurement of data sharing behavior or its correlates. In this study, we evaluate and use bibliometric methods to understand the impact, prevalence, and patterns with which investigators publicly share their raw gene expression microarray datasets after study publication. To begin, we analyzed the citation history of 85 clinical trials published between 1999 and 2003. Almost half of the trials had shared their microarray data publicly on the internet. Publicly available data was significantly (p=0.006) associated with a 69% increase in citations, independently of journal impact factor, date of publication, and author country of origin. Digging deeper into data sharing patterns required methods for automatically identifying data creation and data sharing. We derived a full-text query to identify studies that generated gene expression microarray data. Issuing the query in PubMed Central, Highwire Press, and Google Scholar found 56% of the data-creation studies in our gold standard, with 90% precision. Next, we established that searching ArrayExpress and the Gene Expression Omnibus databases for PubMed article identifiers retrieved 77% of associated publicly accessible datasets. We used these methods to identify 11,603 publications that created gene expression microarray data. Authors of at least 25% of these publications deposited their data in the predominant public databases. We collected a wide set of variables about these studies and derived 15 factors that describe their authorship, funding, institution, publication, and domain environments.
In second-order analysis, authors with a history of sharing and reusing shared gene expression microarray data were most likely to share their data, and those studying human subjects and cancer were least likely to share. We hope these methods and results will contribute to a deeper understanding of data sharing behavior and eventually more effective data sharing initiatives.
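The retrieval figures quoted above (56% of gold-standard studies found, at 90% precision) are ordinary precision and recall of a query result against a gold-standard set. A minimal sketch of that computation, using hypothetical article-ID sets rather than the study's actual PubMed identifiers:

```python
def precision_recall(retrieved, gold):
    """Precision and recall of a retrieved ID set against a gold standard."""
    retrieved, gold = set(retrieved), set(gold)
    true_pos = retrieved & gold
    precision = len(true_pos) / len(retrieved) if retrieved else 0.0
    recall = len(true_pos) / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical IDs: 9 of 10 retrieved articles are correct; the gold set has 16
p, r = precision_recall(range(1, 11), list(range(2, 11)) + list(range(20, 27)))
# p = 0.9, r = 0.5625
```

The same overlap logic applies to measuring what fraction of publicly deposited datasets a database search recovers.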
74

DETECTING ADVERSE DRUG REACTIONS IN THE NURSING HOME SETTING USING A CLINICAL EVENT MONITOR

Handler, Steven Mark 18 May 2010 (has links)
Adverse drug reactions (ADRs) are the most clinically significant and costly medication-related problems in nursing homes (NHs), and are associated with an estimated 93,000 deaths a year and as much as $4 billion of excess healthcare expenditures. Current ADR detection and management strategies that rely on pharmacist retrospective chart reviews (i.e., usual care) are inadequate. Active medication monitoring systems, such as clinical event monitors, are recommended by many safety organizations as an alternative to detect and manage ADRs. These systems have been shown to be less expensive and faster, and to identify ADRs not normally detected by clinicians in the hospital setting. The main research goal of this dissertation is to review the rationale for the development and subsequent evaluation of an active medication monitoring system to automate the detection of ADRs in the NH setting. This dissertation comprises three parts, each with its own emphasis and methodology, centered on better understanding how to detect ADRs in the NH setting. The first paper describes a systematic review of pharmacy and laboratory signals used by clinical event monitors to detect ADRs in hospitalized adult patients. The second paper describes the development of a consensus list of agreed-upon laboratory, pharmacy, and Minimum Data Set signals that can be used by a clinical event monitor to detect potential ADRs. The third paper describes the implementation and pharmacist evaluation of a clinical event monitor using the signals developed by consensus. The findings in these papers will help us to better understand, design, and evaluate active medication monitoring systems to automate the detection of ADRs in the NH setting.
Future research is needed to determine whether NH patients managed by physicians who receive active medication monitoring alerts have more ADRs detected, have their ADRs managed more quickly, and generate greater cost savings from a societal perspective, compared to usual care.
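A clinical event monitor of the kind described pairs medications with laboratory triggers and fires an alert when a rule's condition is met. A minimal sketch, where the drug names, lab thresholds, and alert text are illustrative stand-ins, not the consensus signals developed in the study:

```python
# Each signal pairs a medication with a laboratory trigger suggesting a possible ADR.
SIGNALS = [
    {"drug": "warfarin", "lab": "INR", "op": ">", "threshold": 4.5},
    {"drug": "digoxin", "lab": "potassium", "op": "<", "threshold": 3.0},
]

def check_signals(active_drugs, lab_results):
    """Return an alert for every (drug, lab) signal whose trigger condition is met."""
    alerts = []
    for s in SIGNALS:
        if s["drug"] not in active_drugs or s["lab"] not in lab_results:
            continue
        value = lab_results[s["lab"]]
        fired = value > s["threshold"] if s["op"] == ">" else value < s["threshold"]
        if fired:
            alerts.append(f"possible ADR: {s['drug']} with {s['lab']}={value}")
    return alerts

alerts = check_signals({"warfarin"}, {"INR": 5.1})
```

In a deployed monitor these rules would run against incoming pharmacy and laboratory feeds, routing alerts to a pharmacist or physician for evaluation.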
75

Effect of a metacognitive intervention on cognitive heuristic use during diagnostic reasoning

Payne, Velma Lucille 16 May 2011 (has links)
Medical judgment and decision-making frequently occur under conditions of uncertainty. To reduce the complexity of diagnosis, physicians often rely on cognitive heuristics. Use of heuristics during clinical reasoning can be effective; however, when used inappropriately, the result can be flawed reasoning, medical errors, and patient harm. Many researchers have attempted to debias individuals from inappropriate heuristic use by designing interventions based on normative theories of decision-making. There have been few attempts to debias individuals using interventions based on descriptive decision-making theories. Objectives: (1) assess use of Anchoring and Adjustment and Confirmation Bias during diagnostic reasoning; (2) investigate the impact of heuristic use on diagnostic accuracy; (3) determine the impact of a metacognitive intervention based on Mental Model Theory, designed to reduce biased judgment by inducing physicians to 'think about how they think'; and (4) test a novel technique using eye-tracking to determine heuristic use and diagnostic accuracy within mode of thinking as defined by Dual Process Theory. Methods: Medical students and residents assessed clinical scenarios using a computer system, specified a diagnosis, and designated the data used to arrive at the diagnosis. During case analysis, subjects either verbalized their thoughts or wore eye-tracking equipment to capture eye movements and pupil size as they diagnosed cases. Diagnostic data specified by the subjects were used to measure heuristic use and assess its impact on diagnostic accuracy. Eye-tracking data were used to determine the frequency of heuristic use (Confirmation Bias only) and mode of thinking. Statistical models were fitted to determine the effect of the metacognitive intervention. Results: Use of cognitive heuristics during diagnostic reasoning was common for this subject population.
Logistic regression showed case difficulty to be an important factor contributing to diagnostic error. The metacognitive intervention had no effect on heuristic use or diagnostic accuracy. Eye-tracking data revealed that this subject population infrequently assessed cases in the Intuitive mode of thinking, spent more time in the Analytical mode, and switched between the two modes frequently while reasoning through a case to arrive at a diagnosis.
76

Active Learning for Named Entity Recognition in Clinical Text

Chen, Yukun 25 June 2015 (has links)
Named entity recognition (NER) is one of the fundamental tasks in building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance; however, they often require large numbers of annotated samples, which are expensive to produce because annotation requires domain experts. Active learning (AL), a sample selection approach that can be integrated with supervised ML, has shown promise in minimizing annotation cost while maximizing the performance of ML-based models in various NLP tasks. However, very few studies have investigated AL for clinical NER in a real-life setting. In this dissertation research, I systematically studied AL in a clinical NER task to identify medical problems, treatments, and lab tests in clinical notes. Novel AL algorithms were developed to query the most informative and least costly sentences based on three properties: uncertainty, representativeness, and annotation time. I also developed the first AL-enabled annotation system for clinical NER. Using this system, I further conducted user studies to assess the performance of AL in real-world annotation processes for building clinical NER systems. The initial user study showed that conventional AL methods, which do not consider annotation time, did not always perform better than random sampling for different users. However, our newly developed AL algorithms with cost models for estimating annotation time were more promising in practice. To reach an NER model with an F-measure of 0.70, simulated results show that the new AL method saved ~33.3% of estimated annotation time compared to random sampling. In the user study, the new AL algorithm achieved better performance than random sampling and saved up to ~26.5% of real annotation time for one of the users. To the best of our knowledge, this is the first study examining practical AL systems for clinical NER.
Our study demonstrates that AL has the potential to save annotation time and improve model quality for building ML-based NER systems, when novel querying algorithms are implemented. Our future work includes developing better querying algorithms and evaluating the system with a larger number of users.
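One way to read the cost-sensitive querying idea above: rank unlabeled sentences by model uncertainty per unit of estimated annotation time, rather than by uncertainty alone. A minimal sketch, in which the uncertainty scores and time estimates are illustrative model outputs, not the algorithms developed in the dissertation:

```python
def select_query(sentences, uncertainty, est_seconds):
    """Pick the sentence with the best uncertainty-per-annotation-second ratio."""
    best = max(range(len(sentences)),
               key=lambda i: uncertainty[i] / est_seconds[i])
    return sentences[best]

sents = ["short but certain", "long and uncertain", "short and uncertain"]
# Hypothetical values: entropy-style uncertainty and a length-based time model
picked = select_query(sents, uncertainty=[0.1, 0.9, 0.8],
                      est_seconds=[5.0, 30.0, 6.0])
# picks "short and uncertain": nearly as informative as the long sentence,
# but far cheaper to annotate
```

A pure uncertainty sampler would pick the long sentence here; dividing by estimated annotation time is what lets the cost-sensitive variant come out ahead in wall-clock annotation budget.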
77

Measuring the health of medication process using an EHR data repository

Bhatia, Haresh Lokumal 06 August 2015 (has links)
Dissertation under the direction of Dr. Christoph U. Lehmann.
Health care providers prescribe medications to treat disease or maintain the health of their patients. In an inpatient environment, once a medication order has been issued, a complex medication process must be navigated successfully and without error to assure delivery and administration of the medication. This process involves providers, pharmacists, pharmacy technicians, nurses, and nursing aides, and can result in errors in dispensing, delivery, administration, and documentation. This project analyzed the medication process in a children's hospital to develop new approaches and methodologies to determine the effectiveness of the medication process and to develop measures to monitor its health. The approach chosen to determine the health of the medication process for the pediatric patients at the Vanderbilt University Medical Center (VUMC) was threefold:
1. Assessment of the proportion of medications not administered to patients, including truly missed doses (no administration record) and non-administered medications
2. Assessment of intervals from ordering to dispensing, scheduling, and administration of medications
3. Analysis of factors influencing non-administration and delays in administration
Analysis of orders issued between July 1, 2010 and December 31, 2013 revealed a very small proportion (< 3%) of doses that were not administered. However, we observed that non-administration of the ordered medication is associated with the corresponding Anatomical Therapeutic Chemical (ATC) class. The analysis of the schedule and administration times of the ordered medications with respect to the order type, required verification, unit, ATC class, and scheduled hour provided greater insights, including understanding of delays associated with peak demands (pharmacy) and shift changes (nurses).
Incidentally, while analyzing the schedule and administration times, we observed noticeably better times for specific antimicrobials in one of the Neonatal ICUs (NICUs). Further inquiry revealed that the effect was due to a quality improvement effort initiated at this NICU. Conclusion: Ordering, dispensing, scheduling, and administration data are useful in determining the health of the medication process in a university children's hospital. Further, this analysis can independently detect ongoing improvement efforts.
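The missed-dose proportion and the order-to-administration intervals described above can be computed directly from timestamped order records. A minimal sketch with the standard library, where the record layout and timestamps are hypothetical, not the VUMC data model:

```python
from datetime import datetime

# Hypothetical order records: ordered timestamp, administered timestamp (or None)
orders = [
    {"ordered": datetime(2013, 1, 1, 8, 0), "administered": datetime(2013, 1, 1, 8, 45)},
    {"ordered": datetime(2013, 1, 1, 9, 0), "administered": None},
    {"ordered": datetime(2013, 1, 1, 10, 0), "administered": datetime(2013, 1, 1, 11, 30)},
]

# Proportion of orders with no administration record
missed_rate = sum(o["administered"] is None for o in orders) / len(orders)

# Order-to-administration interval in minutes for administered doses
intervals = [(o["administered"] - o["ordered"]).total_seconds() / 60
             for o in orders if o["administered"] is not None]
# intervals == [45.0, 90.0]
```

Grouping these intervals by unit, ATC class, or scheduled hour is then what surfaces patterns like the peak-demand and shift-change delays noted above.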
78

Random Forest Classification of Acute Coronary Syndrome

VanHouten, Jacob Paul 16 December 2013 (has links)
Coronary artery disease (CAD) is the leading cause of death worldwide. Acute coronary syndromes (ACS), a subset of CAD, account for 1.4 million hospitalizations and $165 billion in costs in the United States alone. A major challenge to the physician when diagnosing and treating patients with suspected ACS is that there is significant overlap between patients with and without ACS. There is a high cost to missing a diagnosis of ACS, but also a high cost to inappropriate treatment of patients without ACS. American College of Cardiology/American Heart Association guidelines recommend early risk stratification of patients to determine their likelihood of major adverse events, but many individual tests and prognostic indices lack sufficient performance characteristics for use in clinical practice. Prognostic indices in particular are often not representative of the population on which they are used, and rely on complete and accurate data. We explored the use of the state-of-the-art machine learning techniques random forest and elastic net on 23,576 records from the Synthetic Derivative to develop models with better performance characteristics than previously established prognostic indices in determining the risk of ACS for patients presenting with suspicious symptoms. We bootstrapped the process of model creation, and found that the random forest significantly outperformed elastic net, L2-regularized regression, and the previously developed TIMI and GRACE scores. We also assessed the calibration of the random forest model and explored methods of correction. Our preliminary findings suggest that machine learning applied to noisy and largely incomplete data can still perform as well as or better than previously developed scoring metrics.
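Bootstrapping the model comparison amounts to resampling the records with replacement, re-evaluating each model's performance statistic (here AUC) on every replicate, and comparing the resulting distributions. A minimal sketch of the AUC-bootstrap piece, with illustrative labels and scores rather than the Synthetic Derivative models:

```python
import random

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc(labels, scores, n_boot=1000, seed=0):
    """Bootstrap the AUC to get a sampling distribution for model comparison."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_boot):
        sample = [rng.randrange(len(labels)) for _ in range(len(labels))]
        ys = [labels[i] for i in sample]
        if len(set(ys)) < 2:  # need both classes for the AUC to be defined
            continue
        out.append(auc(ys, [scores[i] for i in sample]))
    return out

perfect = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])  # 1.0 for a perfect ranking
```

Running `bootstrap_auc` on the predictions of two classifiers over the same resamples gives paired AUC distributions whose difference can be tested for significance.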
79

Greazy: Open-Source Software for Automated Phospholipid MS/MS Identification

Kochen, Michael Allen 06 July 2015 (has links)
Lipid identification from data sets produced with high-throughput technologies is essential to discovery in lipidomics experiments. A number of software tools for making lipid identifications from tandem spectra have been developed in recent years, but they lack the robustness and sophistication of their proteomics counterparts. We have developed Greazy, a tool for the automated identification of phospholipids from tandem mass spectra, which utilizes methods developed for proteomics. Greazy builds a user-defined search space of phospholipids and associated theoretical tandem spectra. Experimental spectra are scored against search-space lipids with similar precursor masses using two probability-based scores: a peak score that employs the hypergeometric distribution, and an intensity score that utilizes the percentage of total ion intensity residing in matching peaks. These results are filtered with LipidLama, which uses a mixture-modeling approach with a density estimation algorithm. We assess the performance of Greazy against the NIST 2014 metabolomics library, observing high accuracy in a search of multiple lipid classes. We compare Greazy/LipidLama against the commercial lipid identification software LipidSearch and show that the two platforms differ considerably in the sets of identified spectra, while showing good agreement on those spectra identified by both. Lastly, we demonstrate the utility of Greazy/LipidLama by searching data sets from four biological replicates. These findings substantiate the application of methods developed for proteomics to the identification of lipids.
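A hypergeometric peak score of the kind described treats peak matching as drawing without replacement: with B possible m/z bins, T of them occupied by theoretical fragment peaks, and P observed peaks, the chance of matching at least k peaks at random follows the hypergeometric tail. A minimal sketch of that tail probability (the bin and peak counts are illustrative, and this is not Greazy's exact scoring code):

```python
from math import comb

def hypergeom_pvalue(bins, theoretical, observed, matches):
    """P(X >= matches) when `observed` peaks fall uniformly into `bins` m/z slots,
    `theoretical` of which are occupied by predicted fragment peaks."""
    total = comb(bins, observed)
    return sum(comb(theoretical, k) * comb(bins - theoretical, observed - k)
               for k in range(matches, min(theoretical, observed) + 1)) / total

# With 1000 bins, 10 predicted peaks, and 20 observed peaks,
# matching 3 or more peaks by chance alone is rare
p = hypergeom_pvalue(bins=1000, theoretical=10, observed=20, matches=3)
```

A smaller tail probability means the observed peak matches are less plausibly due to chance, so a score such as -log(p) rewards spectra that match many predicted fragments.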
80

Defining Phenotypes, Predicting Drug Response, and Discovering Genetic Associations in the Electronic Health Record with Applications in Rheumatoid Arthritis

Carroll, Robert James 26 November 2014 (has links)
Electronic Health Records (EHRs) allow for the digital capture of patient information and have proven to be a valuable tool for patient treatment. In this dissertation, I explore reuse of EHR data for clinical and genomic research with a focus on rheumatoid arthritis (RA). RA is a chronic autoimmune disorder that primarily affects joints with swelling, stiffness, and pain, and if left untreated can lead to permanent joint damage. Phenome-wide association studies (PheWAS) leverage the breadth of codified diagnostic information about patients in the EHR to find disease associations. A package for the R statistical language is presented here that includes the tools needed to perform EHR-based or observational-trial PheWAS, from ICD-9 code translation to association testing and meta-analysis. It includes a versatile plotting system for phenotype-related information following the Manhattan plot paradigm. This methodology is applied in conjunction with genetic risk scores (GRS) to assess pleiotropy and shared genetic risk among phenotypes. Investigations of 99 known risk variants for RA and three formulations of GRS show that the GRS is more specific to RA than the individual single-nucleotide polymorphisms, but the GRSs had clinically interesting associations with hypothyroidism. Presented next is the development of an algorithm to retrospectively identify drug response to etanercept in the EHR. Using chart reviews and a variety of input data including billing codes, processed free text, and medication entries, a support vector machine and a random forest classifier were created that can discriminate between drug responders and non-responders with areas under the receiver operating characteristic curve of 0.939 and 0.923, respectively. The drug response algorithm was applied to create a case-control cohort.
Using these records, the final study identifies phenotypes associated with etanercept response, including fibromyalgia and several axial skeleton disease phenotypes: intervertebral disc disorders, degeneration of intervertebral disc, and spinal stenosis. Taken together, these studies demonstrate that EHR data can be an important tool for clinical and genomic research, and offer particular promise for the study of RA.
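A genetic risk score of the kind used above is, in its simplest weighted form, a sum of risk-allele counts multiplied by per-variant weights. A minimal sketch, where the SNP identifiers and weights are hypothetical placeholders, not the 99 RA risk variants:

```python
def genetic_risk_score(genotypes, weights):
    """Weighted GRS: risk-allele dosage (0/1/2) times per-SNP weight
    (e.g. the published log odds ratio), summed over SNPs present in both inputs."""
    return sum(genotypes[snp] * w for snp, w in weights.items() if snp in genotypes)

weights = {"rs0001": 0.40, "rs0002": 0.15, "rs0003": 0.25}  # hypothetical log-OR weights
patient = {"rs0001": 2, "rs0002": 1, "rs0003": 0}           # risk-allele counts
grs = genetic_risk_score(patient, weights)                   # 2*0.40 + 1*0.15 + 0*0.25
```

An unweighted formulation simply sets every weight to 1, counting risk alleles; the dissertation's comparison of three GRS formulations varies exactly this kind of choice.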
