  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
11

Comprehensive Analysis of the Spatial Distribution of Missense Variants in Protein Structures Reveals Patterns Predictive of Pathogenicity

Sivley, Robert Michael 16 March 2017 (has links)
The spatial distribution of genetic variation within proteins is shaped by evolutionary constraint and thus can provide insights into the functional importance of protein regions and the potential pathogenicity of protein alterations. To facilitate the spatial analysis of coding variation in protein structure, we develop PDBMap, an automated pipeline for mapping genetic variants into all solved and predicted protein structures. We then comprehensively evaluate the 3D spatial patterns of constraint on human germline and somatic variation in 4,568 solved protein structures. Different classes of coding variants have significantly different spatial distributions. Neutral missense variants exhibit a range of 3D constraint patterns, with a general trend of spatial dispersion driven by constraint on core residues. In contrast, disease-associated germline variants and somatic variants are significantly more likely to be clustered in protein structure space. Finally, we demonstrate that this difference in the spatial distributions of disease-associated and benign germline variants provides a signature for accurately classifying variants of unknown significance (VUS) that is complementary to current approaches for VUS classification.
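The abstract does not detail how spatial clustering versus dispersion is quantified. As a rough illustration only (the function names, the permutation null, and the z-score summary below are assumptions, not the thesis's method), one generic way to test for clustering is to compare the mean pairwise distance among variant residues against randomly sampled residue sets from the same structure:

```python
import math
import random

def mean_pairwise_distance(coords):
    """Mean Euclidean distance over all pairs of 3D coordinates."""
    pairs = [(a, b) for i, a in enumerate(coords) for b in coords[i + 1:]]
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)

def clustering_z_score(variant_coords, all_coords, n_perm=1000, seed=0):
    """Compare observed variant dispersion to a permutation null drawn
    from all residue positions; a strongly negative z suggests the
    variants are spatially clustered."""
    rng = random.Random(seed)
    k = len(variant_coords)
    observed = mean_pairwise_distance(variant_coords)
    null = [mean_pairwise_distance(rng.sample(all_coords, k))
            for _ in range(n_perm)]
    mu = sum(null) / n_perm
    sd = (sum((x - mu) ** 2 for x in null) / n_perm) ** 0.5
    return (observed - mu) / sd
```

On a toy structure, variants packed into one corner of a residue grid yield a clearly negative z-score, while a random subset hovers near zero.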
12

Learning the State of Patient Care and Opportunities for Improvement from Electronic Health Record Data with Applications in Breast Cancer Patients

Harrell, Morgan Rachel 17 April 2017 (has links)
Patient care is complex and imperfect. Understanding and improving patient care requires clinical datasets and scientific methodology. We designed a set of methods to characterize the state of patient care and identify opportunities for improvement from electronic health record (EHR) data. The state of patient care is the distribution of patients throughout a clinical workflow. An opportunity for improvement is a means to shift patient distribution away from suboptimal states. We tested our methods within Vanderbilt University Medical Center's (VUMC) EHR system and the adjuvant endocrine therapy domain. Our methods divide into three aims: 1) determine sufficiency of the data, 2) characterize the state of care, and 3) identify opportunities for improvement. Data sufficiency is the rise and persistence of data in an EHR system. We built metrics for data sufficiency that can be used in cohort and data selection. We find that despite inconsistent and missing data, we can leverage EHR data for studies on patient care. To characterize the state of patient care, we built a state diagram for adjuvant endocrine therapy at VUMC and used EHR data to determine the distribution of patients across states. We measured drug choice frequencies, rates of adverse events, and recurrence rates. We also determined the extent to which EHR data can characterize complete patient care. To identify an opportunity for patient care improvement, we identified a suboptimal state (failure to follow up) among VUMC adjuvant endocrine therapy patients and framed a classification problem using EHR data. We used supervised machine learning to predict follow-up and identify significant predictors that may inform improvement. Patients who fail to follow up may receive the majority of their care outside of VUMC. Follow-up could be improved by 1) referral to a VUMC primary care provider or 2) documenting where patients follow up to reduce ambiguity of care.
These methods characterized the state of patient care and opportunities for improvement among an adjuvant endocrine therapy patient population using VUMC's EHR data. We believe these methods are extensible to other EHR systems and other healthcare domains. These methods are valuable for drawing new clinical knowledge from clinical datasets.
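As an illustration of what "the state of patient care as a distribution of patients across workflow states" could look like computationally (the state labels and data layout below are hypothetical, not drawn from VUMC's EHR):

```python
from collections import Counter

def current_state(events):
    """A patient's current state = the label of their latest event.
    `events` is a list of (timestamp, state_label) tuples."""
    return max(events)[1]

def state_distribution(patients):
    """Distribution of a cohort across workflow states.
    `patients` maps patient id -> event list."""
    counts = Counter(current_state(ev) for ev in patients.values())
    total = sum(counts.values())
    return {state: n / total for state, n in counts.items()}
```

A suboptimal state such as "failure to follow up" would then show up as an unexpectedly large mass in this distribution.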
13

The Effect of Risk Attitude and Uncertainty Comfort on Primary Care Physicians' Use of Electronic Information Resources

McKibbon, Kathleen Ann 30 September 2005 (has links)
Background: Clinicians use information regularly in clinical care. New electronic information resources provided in push, pull, and prompting formats have the potential to improve information support but have not been designed for individualization. Physicians with differing risk status use healthcare resources differently, often without an improvement in outcomes. Questions: Do physicians who are risk seeking or risk avoiding, and comfortable or uncomfortable with uncertainty, use or prefer electronic information resources differently when answering simulated clinical questions, and can the processes be modeled with existing theoretical models? Design: Cohort study. Methods: Primary care physicians in Canada and the United States were screened for risk status. Those with high and low scores on 2 validated scales answered 23 multiple-choice questions and searched for information using their own electronic resources for 2 of these questions. They also answered 2 other questions using information from 2 electronic information sources: PIER© and Clinical Evidence©. Results: The physicians did not differ in the number of correct answers according to risk status, although the number of correct answers was low and not substantially higher than chance. Their searching process was consistent with 2 information-seeking models from information science (the modified Wilson Problem Solving and Card/Pirolli Information Foraging/Information Scent models). Few differences were seen for any electronic searching or information use outcome based on risk status, although physicians who were comfortable with uncertainty used more searching heuristics and spent less effort on direct searching. More than 20% of answers were changed after searching, with almost the same number going from incorrect to correct as from correct to incorrect.
These changes from a correct to incorrect answer indicate that some electronic information resources may not be ideal for direct clinical care or integration into electronic medical record systems. Conclusions: Risk status may not be a major factor in the design of electronic information resources for primary care physicians. More research needs to be done to determine which computerized information resources and which features of these resources are associated with obtaining and maintaining correct answers to clinical questions.
14

An OLAP-GIS System for Numerical-Spatial Problem Solving in Community Health Assessment Analysis

Scotch, Matthew 19 April 2006 (has links)
Community health assessment (CHA) professionals who use information technology need a complete system capable of supporting numerical-spatial problem solving. On-Line Analytical Processing (OLAP) is a multidimensional data warehouse technique commonly used as a decision support system in industry. Coupling OLAP with a Geographic Information System (GIS) offers the potential for a very powerful system. For this work, OLAP and GIS were combined to develop the Spatial OLAP Visualization and Analysis Tool (SOVAT) for numerical-spatial problem solving. In addition to the development of this system, this dissertation describes three studies in relation to this work: a usability study, a CHA survey, and a summative evaluation. The purpose of the usability study was to identify human-computer interaction issues. Fifteen participants took part in the study. Three participants per round used the system to complete typical numerical-spatial tasks. Objective and subjective results were analyzed after each round, and system modifications were implemented. The result of this study was a novel OLAP-GIS system streamlined for the purposes of numerical-spatial problem solving. The online CHA survey aimed to identify the information technology currently used for numerical-spatial problem solving. The survey was sent to CHA professionals and allowed them to record the individual technologies they used during specific steps of a numerical-spatial routine. In total, 27 participants completed the survey. Results favored SPSS for numerical-related steps and GIS for spatial-related steps. Next, a summative within-subjects crossover design compared SOVAT to the combined use of SPSS and GIS (termed SPSS-GIS) for numerical-spatial problem solving. Twelve individuals from the health sciences at the University of Pittsburgh participated. Half were randomly selected to use SOVAT first, while the other half used SPSS-GIS first.
In the second session, they used the alternate application. Objective and subjective results favored SOVAT over SPSS-GIS. Inferential statistics were analyzed using linear mixed model analysis. At the .01 level, SOVAT differed significantly from SPSS-GIS in satisfaction and time (p < .002). The results demonstrate the potential for OLAP-GIS in CHA analysis. Future work will explore the impact of an OLAP-GIS system in other areas of public health.
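SOVAT itself is not shown in the abstract. As a rough sketch of the OLAP side only, a roll-up aggregates a fact table along chosen dimensions, producing the per-region summaries a GIS layer could then map (the fact table, dimension names, and measure below are invented for illustration):

```python
from collections import defaultdict

def roll_up(facts, dims, measure):
    """OLAP-style roll-up: aggregate a fact table along the given
    dimensions. `facts` is a list of dicts; returns a mapping from
    dimension-value tuples to the summed measure."""
    cube = defaultdict(float)
    for row in facts:
        key = tuple(row[d] for d in dims)
        cube[key] += row[measure]
    return dict(cube)

facts = [
    {"county": "A", "year": 2004, "cases": 10},
    {"county": "A", "year": 2005, "cases": 12},
    {"county": "B", "year": 2004, "cases": 7},
]
by_county = roll_up(facts, ["county"], "cases")  # {('A',): 22.0, ('B',): 7.0}
```

Drilling down is the same call with more dimensions, e.g. `roll_up(facts, ["county", "year"], "cases")`.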
15

DESIGN AND IMPLEMENTATION OF A COMPUTERIZED INFORMATICS TOOL TO FACILITATE CLINICIAN ACCESS TO A STATE'S PRESCRIPTION DRUG MONITORING PROGRAM DATABASE

White, Steven John 08 April 2013 (has links)
Thesis under the direction of Professor Dario Giuse. Within the past decade, prescription drug abuse has emerged as a nationwide epidemic, with opioid-related poisoning deaths more than tripling since 1999. In an effort to bring this public health crisis under control, 43 states, including Tennessee, have enacted prescription drug monitoring programs (PDMPs), computerized databases of DEA-controlled substance prescriptions filled at pharmacies within the given state. Such programs have been found to be effective in curbing prescription opioid abuse by alerting prescribers to aberrant prescription-filling activity. However, they are commonly underutilized and have workflow barriers that impede clinical use. Ideally, PDMP queries could be generated seamlessly from within a medical enterprise's electronic health record (EHR) system, using an application programming interface (API) supplied by the state's PDMP vendor. However, the enabling legislative language currently prohibits such access. Therefore, we developed and evaluated a Perl software program, activated from within Vanderbilt University Medical Center's EHR patient chart, that sends properly coded and formatted user and patient-demographic information packets to the Tennessee PDMP website without the use of an API. The program parses the returned data file for important prescription information and displays the filtered information to the user. By allowing the query to occur in the background, the user's tether time to the computer is decreased from 3 minutes to 10 seconds per query. During the evaluation phase, we used a quasi-experimental intervention design with two alternating 2-week control and intervention periods.
Twenty-eight ED attending physicians participated in the study and queried the PDMP at their clinical discretion. While the integrated PDMP query tool was available, 5.9% (169/2844) of emergency department patients were screened, compared with 2.2% (62/2786) during periods when the tool was not available (p < 0.001, Pearson's chi-square). Data were not viewed in 20% of integrated tool-assisted queries. The EHR-integrated PDMP query tool was well regarded by study physicians as an enhancement to workflow.
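The reported screening-rate comparison can be checked against the 2x2 table implied by the abstract's counts. The helper below is the generic textbook Pearson chi-square formula, not code from the thesis:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]
    (no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Rows: tool available / not available; columns: screened / not screened.
# Counts taken from the abstract: 169 of 2844 vs. 62 of 2786.
chi2 = chi_square_2x2(169, 2844 - 169, 62, 2786 - 62)
# The statistic lands far above 10.83, the df = 1 critical value at
# p = 0.001, consistent with the reported p < 0.001.
```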
16

BCL::SAXS - Small Angle X-Ray Scattering Profiles to Assist Protein Structure Prediction

Putnam, Daniel Kent 08 April 2013 (has links)
The Biochemical Library (BCL) is a protein structure prediction algorithm developed in the Meiler Lab at Vanderbilt University based on the placement of secondary structure elements. This algorithm can use experimental data such as nuclear magnetic resonance (NMR), cryo-electron microscopy (cryo-EM), and electron paramagnetic resonance (EPR) to assist in protein structure prediction, but it lacks the ability to use small-angle X-ray scattering (SAXS) data. The first phase of my project was to add this capability to the BCL and create a SAXS compatibility score. GPU acceleration was used to parallelize the computations. The second phase of the project was to compute SAXS scores for protein models without side chain and loop region coordinate information, while preserving atom type information. Finally, the BCL::SAXS score was added to the minimization process in BCL::FOLD. The SAXS score can be used to filter erroneous initial protein models from further refinement, saving time and computational resources.
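The GPU-accelerated BCL::SAXS implementation is not reproduced here; the quantity such scores compare is typically a theoretical scattering profile based on the Debye formula, sketched below as a minimal CPU-only illustration (uniform form factors and the function names are assumptions for this sketch):

```python
import math

def debye_intensity(q, coords, form_factors):
    """Scattering intensity I(q) via the Debye formula:
    I(q) = sum_i sum_j f_i * f_j * sin(q * r_ij) / (q * r_ij),
    with the sinc term taken as 1 when q * r_ij -> 0."""
    total = 0.0
    for ri, fi in zip(coords, form_factors):
        for rj, fj in zip(coords, form_factors):
            r = math.dist(ri, rj)
            x = q * r
            sinc = math.sin(x) / x if x > 1e-12 else 1.0
            total += fi * fj * sinc
    return total
```

Evaluating this over a grid of q values gives a profile that can be compared against an experimental SAXS curve to score a candidate model.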
17

Applying Active Learning to Biomedical Text Processing

Chen, Yukun 29 July 2013 (has links)
Objective: Supervised machine learning methods have shown good performance in text classification tasks in the biomedical domain, but they often require large annotated corpora, which are costly to develop. Our goal is to assess whether active learning strategies can be integrated with supervised machine learning methods, thus reducing the annotation cost while keeping or improving the quality of classification models for biomedical text. Methods: We applied active learning to two biomedical natural language processing (NLP) tasks: 1) the assertion classification task in the 2010 i2b2/VA Clinical NLP Challenge, which was to determine the assertion status of clinical concepts; and 2) a supervised word sense disambiguation (WSD) task, which was to disambiguate 197 ambiguous words and abbreviations in MEDLINE abstracts. We developed Support Vector Machine (SVM)-based classifiers for both tasks. We then implemented several existing and newly developed active learning algorithms to integrate with the SVM classifiers and evaluated their performance on both tasks. Results: In the assertion classification task, our results showed that to achieve the same classification performance, active learning strategies required far fewer samples than the random sampling method. For example, to achieve an AUC of 0.79, the random sampling method used 32 samples, while our best active learning algorithm required only 12 samples, a reduction of 62.5% in manual annotation effort. In the WSD task, our results also demonstrated that active learners significantly outperformed the passive learner, showing better performance for 177 out of 197 (89.8%) ambiguous terms. Further analysis showed that to achieve an average accuracy of 90%, the passive learner needed 38 samples, while the active learners needed only 24 annotated samples, a 37% reduction in annotation effort.
Moreover, we also analyzed cases where active learning algorithms did not achieve superior performance and summarized three causes: (1) poor model in early learning stage; (2) easy WSD cases; and (3) difficult WSD cases, which provide useful insight for future improvements. Conclusion: Both studies demonstrated that integrating active learning strategies with supervised learning methods could effectively reduce annotation cost and improve the classification models in biomedical text processing.
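The thesis's SVM-based active learners are not reproduced here. As a minimal illustration of one common query strategy, least-confidence sampling picks the unlabeled sample whose top predicted class probability is lowest, i.e. the one the current model is least sure about (the toy probability model below is invented for this sketch):

```python
def least_confident(unlabeled, predict_proba):
    """Return the unlabeled sample with the lowest maximum class
    probability under the current model (least-confidence sampling)."""
    def confidence(x):
        return max(predict_proba(x))
    return min(unlabeled, key=confidence)

# Toy model: probability of class 1 rises with x; class 0 is the complement.
def toy_proba(x):
    p1 = min(max(x, 0.0), 1.0)
    return [1.0 - p1, p1]

pool = [0.1, 0.9, 0.48]
query = least_confident(pool, toy_proba)  # 0.48 sits closest to the boundary
```

In an active learning loop, the selected sample is sent for annotation, added to the training set, and the model is retrained before the next query.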
18

Web-Based Concept Indexing Tool For Online Content Management Of Medical School Curriculum - Dissecting An Anatomy Course Experience.

Wehbe, Firas Hazem 12 August 2004 (has links)
A traditional medical school curriculum consists of a large amount of information presented by a large number of faculty. Faced with a growing and evolving flood of information, medical educators require and seek assistance to manage this knowledge base. Between the fall of 2001 and the summer of 2002, researchers and educators at the Vanderbilt University Department of Biomedical Informatics constructed KnowledgeMap (KM), a web-based knowledge management tool to support medical instruction at the Vanderbilt University School of Medicine. The KM knowledge store contains all curricular documents in a searchable database. The KM interface makes this information available online to students, faculty members, and administrators. This thesis analyzes the use of KM during its pilot implementation in the first-year medical school anatomy course during the fall of 2002.
Data were collected from first-year medical students and the anatomy course faculty through server log files, a survey, and interviews. The data revealed that students used the system nearly unanimously and that the majority expressed satisfaction with it. Computer proficiency and test anxiety were identified as factors affecting adoption of the system. The study addressed issues relating to both students and faculty that arise from the use of KM, including student time constraints and learning styles, class attendance, intellectual property concerns, communication among faculty, and the properties of hypermedia as a medium for instruction at the Vanderbilt University School of Medicine.
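KM's concept indexing maps curricular documents to medical concepts, and the abstract gives no internals. The general mechanism behind any searchable document store, however, is an inverted index, sketched here at the plain-term level (document ids and texts are invented; KM's actual concept mapping is more sophisticated):

```python
from collections import defaultdict

def build_index(documents):
    """Inverted index: term -> set of document ids containing it.
    `documents` maps doc id -> text."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Documents containing every query term (boolean AND)."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()
```

A concept-level index would replace raw terms with normalized concept identifiers before insertion, so that synonymous phrasings retrieve the same lecture documents.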
19

Comparison of Three Clustering Methods for Dissecting Trait Heterogeneity in Genotypic Data

Thornton-Wells, Tricia Ann 23 July 2005 (has links)
Trait heterogeneity, which exists when a trait has been defined with insufficient specificity such that it is actually two or more distinct traits, has been implicated as a confounding factor in traditional statistical genetics of complex human disease. In the absence of detailed phenotypic data collected consistently in combination with genetic data, unsupervised computational methodologies offer the potential for discovering underlying trait heterogeneity. The performance of three such methods appropriate for categorical data (Bayesian Classification, Hypergraph-Based Clustering, and Fuzzy k-Modes Clustering) was compared. Also tested was the ability of these methods to detect trait heterogeneity in the presence of locus heterogeneity and gene-gene interaction, which are two other complicating factors in discovering genetic models of complex human disease. Bayesian Classification performed well under the simplest of the genetic models simulated, and it outperformed the other two methods, with the exception that Fuzzy k-Modes Clustering performed best on the most complex genetic model. Permutation testing showed that Bayesian Classification controlled Type I error very well but produced less desirable Type II error rates. Methodological limitations and future directions are discussed.
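For readers unfamiliar with k-modes, a minimal crisp (non-fuzzy) version is sketched below. This is a generic illustration, not the implementations compared in the thesis: the fuzzy variant assigns graded memberships rather than hard cluster labels, and the simple first-k initialization here is an assumption (real implementations use random or density-based seeding):

```python
from collections import Counter

def hamming(a, b):
    """Number of mismatched categorical attributes between two records."""
    return sum(x != y for x, y in zip(a, b))

def k_modes(records, k, iters=10):
    """Crisp k-modes: k-means-style clustering for categorical data,
    using per-attribute modes as cluster centers and Hamming distance."""
    modes = list(records[:k])  # naive initialization: first k records
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in records:
            nearest = min(range(k), key=lambda j: hamming(r, modes[j]))
            clusters[nearest].append(r)
        # Recompute each mode attribute-wise; keep the old mode if a
        # cluster emptied out.
        modes = [
            tuple(Counter(col).most_common(1)[0][0] for col in zip(*c)) if c else m
            for c, m in zip(clusters, modes)
        ]
    return modes, clusters
```

On genotypic data, each record would be a tuple of genotype categories, and the recovered clusters would be candidate subgroups of a heterogeneous trait.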
20

Automatic Cancer Diagnostic Decision Support System for Gene Expression Domain

Statnikov, Alexander R 29 July 2005 (has links)
The success of treatment of patients with cancer depends on establishing an accurate diagnosis. To this end, we have built a system called GEMS (Gene Expression Model Selector) for the automated development and evaluation of high-quality cancer diagnostic models and biomarker discovery from microarray gene expression data. In order to determine and equip the system with the best-performing diagnostic methodologies in this domain, we first conducted a comprehensive evaluation of classification algorithms using 11 cancer microarray datasets. After the system was built, we performed a preliminary evaluation of the system with 5 new datasets. The performance of the models produced automatically by GEMS is comparable to or better than the results obtained by human analysts. Additionally, we performed a cross-dataset evaluation of the system. This involved using one dataset to build a diagnostic model and estimate its future performance, then applying this model and evaluating its performance on a different dataset. We found that models produced by GEMS indeed perform well in independent samples and, furthermore, that the cross-validation performance estimates output by the system closely approximate the error obtained by independent validation. GEMS is freely available for download for non-commercial use from http://www.gems-system.org.
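The cross-dataset evaluation described above, estimating future accuracy by cross-validation on one dataset and then checking that estimate on an independent dataset, can be sketched generically. The nearest-centroid classifier below is a stand-in for illustration, not one of the algorithms GEMS actually evaluates:

```python
import statistics

def nearest_centroid_fit(X, y):
    """Per-class mean vectors (a stand-in for a real classifier)."""
    labels = sorted(set(y))
    return {
        c: [statistics.mean(col)
            for col in zip(*(x for x, yy in zip(X, y) if yy == c))]
        for c in labels
    }

def predict(model, x):
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(model, key=lambda c: dist2(model[c], x))

def accuracy(model, X, y):
    return sum(predict(model, x) == yy for x, yy in zip(X, y)) / len(y)

def cross_val_accuracy(X, y, folds=5):
    """k-fold cross-validation estimate of future accuracy."""
    scores = []
    for f in range(folds):
        tr = [(x, yy) for i, (x, yy) in enumerate(zip(X, y)) if i % folds != f]
        te = [(x, yy) for i, (x, yy) in enumerate(zip(X, y)) if i % folds == f]
        model = nearest_centroid_fit([x for x, _ in tr], [yy for _, yy in tr])
        scores.append(accuracy(model, [x for x, _ in te], [yy for _, yy in te]))
    return sum(scores) / len(scores)
```

The cross-dataset check is then: compute `cross_val_accuracy` on dataset A, fit a final model on all of A, and compare the estimate against `accuracy` on an independent dataset B.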
