About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

BCL::SAS: Small-Angle X-Ray/Neutron Scattering Profiles to Assist Protein Structure Prediction

Putnam, Daniel Kent 28 March 2016 (has links)
The Biochemical Library (BCL) is a protein structure prediction algorithm developed in the Meiler Lab at Vanderbilt University based on the placement of secondary structure elements (SSEs). This algorithm incorporates sparse experimental data constraints from nuclear magnetic resonance (NMR), cryo-electron microscopy (CryoEM), and electron paramagnetic resonance (EPR) to restrict the conformational sampling space, but it lacks the capability to use small-angle X-ray/neutron scattering (SAXS/SANS) data. This dissertation delineates my work to add this capability to BCL::Fold. Specifically, I show for which types of structures SAXS/SANS experimental data improve the accuracy of BCL::Fold and, importantly, where they do not. Furthermore, in collaboration with Oak Ridge National Laboratory, I present my work on the structural determination of the cellulose synthase complex in Arabidopsis thaliana.
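As a rough illustration of the kind of computation a SAXS scoring method performs, the sketch below evaluates a scattering profile from atomic coordinates with the Debye formula, I(q) = sum_i sum_j f_i f_j sin(q r_ij)/(q r_ij). The uniform form factors and random coordinates are illustrative assumptions; this is not BCL::SAS's actual implementation.

```python
import numpy as np

def debye_profile(coords, q_values, form_factor=1.0):
    """Scattering intensity I(q) via the Debye formula for point scatterers.

    coords: (N, 3) array of positions in Angstroms.
    q_values: 1-D array of momentum transfer values (1/Angstrom).
    """
    # Pairwise distances r_ij between all scatterers.
    diffs = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diffs ** 2).sum(axis=-1))
    profile = []
    for q in q_values:
        # np.sinc(x) = sin(pi x)/(pi x), so sinc(q r / pi) = sin(q r)/(q r);
        # it also handles the r_ij = 0 diagonal, where the limit is 1.
        profile.append((form_factor ** 2) * np.sinc(q * r / np.pi).sum())
    return np.array(profile)

# Toy input: 50 random "atoms" in a 30 Angstrom box (illustrative only).
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 30.0, size=(50, 3))
q = np.linspace(0.01, 0.5, 100)
intensity = debye_profile(coords, q)
```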
2

Omicron: a Galaxy for reproducible proteogenomics

Chambers, Matthew Chase 05 August 2016 (has links)
Proteomics allows us to see post-translational modifications and expression patterns that we cannot see with genomics and transcriptomics alone. By itself, proteomics has limited sensitivity to detect genetic variation (e.g. single-nucleotide polymorphisms and insertion/deletion mutations), but we can improve that with access to genomic data: an approach known as proteogenomics. As in many of the -omics fields, reproducibility of proteogenomic results is a problem. Since 2005, the web application "Galaxy" has been available to improve the transparency and reproducibility of -omic analyses. However, a Galaxy server is not easy to set up, and to work around that, investigators have sometimes distributed their customizations as virtual machines (VMs). In recent years, a more efficient approach for software isolation, "containers," has become popular. A proteogenomics "flavor" of Galaxy, Omicron, was created to simplify reproduction of proteogenomic workflows. An easy way for anyone to launch Omicron on Amazon Web Services, paired with a scalable compute cluster, was also created. Using Omicron, results from a 2014 Nature paper were partially reproduced. Due to changes in online reference data, and possibly due to different tool versions, it was not possible to reproduce the previous results perfectly. However, other investigators could easily reproduce the Omicron results without digging through methods and supplemental data. They could then easily apply the same workflow to their own data.
3

Predicting Colorectal Cancer Recurrence by Utilizing Multiple-View Multiple-Learner Supervised Learning

Castellanos, Jason Alfred 15 June 2016 (has links)
Colorectal cancer (CRC) remains a leading cause of cancer-related mortality in the United States. A key therapeutic dilemma in the treatment of CRC is whether patients with stage II and stage III disease require adjuvant chemotherapy after surgical resection. Attempts to improve identification of patients at increased risk of recurrence have yielded many predictive models based on gene expression data, but none are FDA approved and none are used in standard clinical practice. To improve recurrence prediction, we utilize an ensemble learning approach to predict recurrence status at 3 years after diagnosis. Multiple views of a microarray dataset were generated and then used to train a diverse pool of base learners using 10x 10-fold cross-validation. Stacked generalization was used to train an ensemble model. Our results demonstrate that molecular data predict recurrence significantly better than basic clinical data. We also demonstrate that the performance of the multiple-view multiple-learner (MVML) supervised learning framework exceeds or matches that of the best base learners across all performance metrics.
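A minimal sketch of stacked generalization in scikit-learn, assuming synthetic data and an illustrative pool of base learners; the thesis's actual views, learner pool, and 10x 10-fold protocol are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a microarray matrix (samples x genes).
X, y = make_classification(n_samples=200, n_features=50, random_state=0)

# A diverse pool of base learners; their out-of-fold predictions become
# the inputs to a meta-learner (stacked generalization).
base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]
ensemble = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=10,  # out-of-fold predictions from 10-fold CV train the meta-learner
)
print(cross_val_score(ensemble, X, y, cv=10, scoring="roc_auc").mean())
```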
4

Operationalizing Tumor Molecular Profile Reporting in Clinical Workflows and for Translational Discovery

Rioth, Matthew John 11 April 2016 (has links)
Thesis under the direction of Dr. Jeremy L. Warner. The practice of oncology increasingly relies on genetic information from tumors to determine diagnosis, prognosis, and therapy. These tumor molecular profiling reports are generated by clinical laboratories specializing in molecular diagnostics; however, there is no consensus on what the reports should contain, how they should be structured, or how best to transmit and present them to oncologists. This thesis outlines a framework for the data elements, file structure, transmission requirements, clinical information technology requirements, and secondary use cases for molecular profiling. This framework is used as a guide to describe the implementation of tumor molecular profile reports into clinical workflows. The experience of implementing automated structured molecular profile reports from a third-party laboratory into an electronic health record is described. Using file structures and data elements from the framework, molecular profile genetic data and sample metadata can be accurately parsed, restructured, and aggregated for secondary uses. A system for parsing these data into a database, and the use cases that database satisfies, is described. This framework helps to inform clinical and translational uses of tumor molecular profiling.
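As a hedged illustration of the parse-and-restructure step described above, the sketch below flattens a hypothetical JSON molecular profile report into one row per variant; the field names are invented for the example and do not reflect any particular laboratory's format or the thesis's framework.

```python
import json

# A hypothetical report; real laboratory formats vary, and the thesis's
# framework defines its own data elements and file structure.
raw_report = """
{
  "sample": {"id": "S-001", "tumor_type": "colorectal"},
  "variants": [
    {"gene": "KRAS", "hgvs": "p.G12D", "vaf": 0.34},
    {"gene": "TP53", "hgvs": "p.R175H", "vaf": 0.41}
  ]
}
"""

def flatten_report(raw):
    """Restructure a nested report into one row per variant for aggregation."""
    report = json.loads(raw)
    sample = report["sample"]
    return [
        {"sample_id": sample["id"], "tumor_type": sample["tumor_type"], **variant}
        for variant in report["variants"]
    ]

for row in flatten_report(raw_report):
    print(row)
```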
5

Design of an Interactive Crowdsourcing Platform to Facilitate User-Centered Information Needs Evaluation

Dufendach, Kevin Reid 27 July 2016 (has links)
Background: Effective medical software is designed to fit the needs of end users, translating their work into action. User-centered design seeks to involve users at all stages of the design process, but the process itself can be tedious, leading to variable degrees of implementation among vendors. This research seeks to create a new method of involving multiple end users remotely in the user-centered design process in order to establish the features and design clinicians need to perform effectively. Objectives: The objectives of this research are to summarize the pediatric-specific EHR functionalities currently identified as necessary and to create an online software platform that delineates further needs and functionalities, contributing to remote user-centered design of electronic medical record software. Methods: We created Vanderbilt Active Interface Design (VandAID), a novel web-based software platform for crowdsourcing user interface design. The platform provides immediate real-time feedback on user interface design and layout decisions using example patient scenarios. The scenarios can pull information from a variety of sources using standards such as Fast Healthcare Interoperability Resources (FHIR). The design platform allows the selected options to be sent to a REDCap project for statistical analysis or viewed directly in the VandAID platform. We performed a randomized controlled trial to test the usability and utility of this software platform for the design of a neonatal handoff tool. Conclusions: This research advances scientific approaches to user-centered design of health information technology by creating a means of collecting remote feedback from multiple users. Results from the randomized controlled trial in the first use case demonstrate this software platform to be a highly usable and effective means of performing cooperative user-centered design.
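A minimal sketch of how a patient scenario might pull data over FHIR, using the public HAPI FHIR test server; the server URL and resource id are illustrative assumptions, not VandAID's actual configuration.

```python
import requests

# Public HAPI FHIR test server; an assumption for the example, not
# necessarily a source VandAID uses.
FHIR_BASE = "https://hapi.fhir.org/baseR4"

def get_patient(patient_id):
    """Fetch a FHIR Patient resource as JSON."""
    response = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

patient = get_patient("example")  # "example" is a placeholder id
print(patient.get("resourceType"), patient.get("id"))
```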
6

Using Abstraction to Overcome Problems of Sparsity, Irregularity, and Asynchrony in Structured Medical Data

VanHouten, Jacob Paul 29 July 2016 (has links)
Electronic health records (EHRs) are rich data sources that can be analyzed to discover new, clinically relevant patterns of disease manifestations. However, sparsity, irregularity, and asynchrony in health records pose challenges for their use in such discovery tasks, as standard statistical and machine learning techniques possess limited ability to handle these complications. Abstracting the clinical data into models and then using elements of those models as input to statistical and machine learning algorithms is one approach to overcoming these challenges. This dissertation provides insight into the use of different models for this purpose. First, I examine the effect of model complexity on algorithm performance. Specifically, I examine how well different models capture the low-specificity information distributed throughout electronic health data. For several predictive algorithms, low-complexity models turn out to be nearly as powerful as, and much less costly than, high-complexity models. I then explore the use of continuous longitudinal models of laboratory results and diagnosis billing codes to discover clinically relevant patterns between and among these data. I look for associations between clusters of specific laboratory values and single billing codes, and identify known associations as well as others that are consistent with current medical knowledge but not expected a priori. Finally, I use the same longitudinal abstraction models as inputs into more complex probabilistic models that adjust for indirect associations, and find that diagnosis codes can be used to predict the laboratory status of a patient.
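One simple form of longitudinal abstraction, sketched below under invented column names, fits a linear trend to each patient's irregularly sampled laboratory series and uses the fitted coefficients as fixed-length features for downstream algorithms; the dissertation's actual models are more elaborate.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format lab data: one row per (patient, measurement).
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 2],
    "day": [0, 30, 90, 0, 10, 40, 75],  # days from first visit
    "creatinine": [1.0, 1.1, 1.4, 0.9, 0.9, 1.0, 0.9],
})

def linear_abstraction(series):
    """Abstract an irregularly sampled series into (intercept, slope, mean)."""
    slope, intercept = np.polyfit(series["day"], series["creatinine"], deg=1)
    return pd.Series({
        "intercept": intercept,
        "slope": slope,
        "mean": series["creatinine"].mean(),
    })

features = labs.groupby("patient_id")[["day", "creatinine"]].apply(linear_abstraction)
print(features)
```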
7

Use and Effects of Health Information Technologies in Surgical Practice

Robinson, Jamie Rene 25 May 2017 (has links)
Increasing health information technology (HIT) adoption has led to growth in research on its implementation and use, the majority of which has been conducted in primary care and medical specialty settings. This thesis comprises three research projects that expand the knowledge base about HIT in surgery. A systematic review summarized the evidence about the effects of major categories of HIT (e.g., electronic health records, computerized order entry) on surgical outcomes and demonstrated improvement in the quality of surgical documentation, increased adherence to guidelines for perioperative prophylactic medication administration, and improvements in patient care with provider alerts. The review identified gaps in the literature about consumer HIT use by surgical patients and providers. A second study demonstrated modest use of a patient portal by surgical patients during hospitalizations and found increased inpatient use among patients who were white, male, and had longer lengths of stay. This study showed that a patient portal designed for the outpatient setting could be employed by surgical patients during hospitalizations. A third study analyzed the nature of the communication in patient portal message threads between surgeons and their patients. Two-thirds of message threads involved medical care with predominantly straightforward, low-complexity decision-making. This study highlighted the need for expanded models for compensation of online care. This thesis provides insights into the use and effects of HIT in surgical practice. As HIT continues to evolve, the unique perspectives of surgical providers and patients should be represented in the design, implementation, evaluation, and regulation of its use.
8

Quantifying Burden of Treatment for Breast Cancer Patients from Clinical Encounter Data

Cheng, Alex Chih-Ray 21 November 2016 (has links)
Breast cancer patients suffer from the symptoms of their illness as well as from the burden of treatment imposed by their care. Patients with high levels of burden tend to be less compliant with treatment plans, resulting in worse outcomes. To address the problem of overburden, some providers have proposed practicing minimally disruptive medicine, where treatment plans are tailored to the patients' capacity to handle the care. While some researchers have developed surveys that identify and quantify factors contributing to treatment burden, no studies have used the electronic health record to assess patient burden. We developed measures derived from outpatient and inpatient encounter data that included time spent in appointments, waiting time, unique appointment days, and total inpatient length of stay. We used these measures to differentiate burden of treatment in early-stage breast cancer patients in the first eighteen months after diagnosis. This method allowed us to identify outliers and to characterize the pattern of treatment over time. Our measures could also be used to evaluate new therapeutic and operational interventions for their effect on treatment burden. In patients receiving chemotherapy at Vanderbilt, a non-inferior change in protocol successfully reduced treatment burden, while a therapeutically superior treatment may have imposed an increased burden on patients. As the complexity of healthcare increases and patients take on more responsibility for managing their care, understanding treatment burden is critical to helping providers prescribe care right-sized for the patient to improve compliance and clinical outcomes.
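A hedged sketch of deriving such burden measures from encounter-level data with pandas; the column names and values are hypothetical, not the structure of the Vanderbilt data.

```python
import pandas as pd

# Hypothetical encounter-level rows; real EHR column names will differ.
encounters = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "appt_date": pd.to_datetime(
        ["2016-01-04", "2016-01-04", "2016-02-10", "2016-01-07", "2016-03-01"]
    ),
    "minutes_in_appointment": [45, 30, 60, 90, 20],
    "minutes_waiting": [15, 5, 40, 10, 25],
})

# Aggregate per patient into simple burden measures.
burden = encounters.groupby("patient_id").agg(
    total_appointment_minutes=("minutes_in_appointment", "sum"),
    total_waiting_minutes=("minutes_waiting", "sum"),
    unique_appointment_days=("appt_date", "nunique"),
)
print(burden)
```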
9

Data-Driven System for Perioperative Acuity Prediction

Zhang, Linda 21 November 2016 (has links)
The widely used American Society of Anesthesiologists (ASA) Physical Status classification is subjective and requires time-consuming clinician assessment. Machine learning can be used to develop a system that predicts the ASA score a patient should be given based on routinely available preoperative data. The problem of ASA prediction is reframed as a binary classification problem: predicting ASA 1/2 versus ASA 3/4/5. Retrospective ASA scores from the Vanderbilt Perioperative Data Warehouse are used as labels, allowing the use of supervised machine learning techniques. Routinely available preoperative data are used to select features and train four different models: logistic regression, k-nearest neighbors, random forests, and neural networks. Among the selected features, ICD-9 codes were tested by incorporating temporality and hierarchy. The area under the curve (AUC) of the receiver operating characteristic (ROC) of each model on a holdout set is compared. Cohen's kappa is calculated for the model versus the raw data and for the model versus our anesthesiologist. Results: The best performing model was the random forest, achieving an AUC of 0.884. This model yields a Cohen's kappa of 0.63 versus the raw data and a kappa of 0.54 against our anesthesiologist, which is comparable to unweighted kappa values found in the literature. The results suggest that a machine learning model can predict ASA score with high AUC and achieve agreement similar to an anesthesiologist's. This demonstrates the feasibility of using such a model as a standardized ASA scorer.
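A minimal sketch of the binary-classification framing with a random forest, using synthetic data in place of the Vanderbilt Perioperative Data Warehouse; feature selection, the ICD-9 handling, and the other three models are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for preoperative features and a binary ASA label
# (ASA 1/2 vs. ASA 3/4/5).
X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Discrimination on the holdout set.
probabilities = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probabilities))

# Agreement between model predictions and the recorded labels.
predictions = model.predict(X_test)
print("Cohen's kappa:", cohen_kappa_score(y_test, predictions))
```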
10

Performance Drift of Clinical Prediction Models: Impact of modeling methods on prospective model performance

Davis, Sharon Elizabeth 05 April 2017 (has links)
Integrating personalized risk predictions into clinical decision support requires well-calibrated models, yet model accuracy deteriorates as patient populations shift. Understanding the influence of modeling methods on performance drift is essential for designing updating protocols. Using national cohorts of Department of Veterans Affairs hospital admissions, we compared the temporal performance of seven regression and machine learning models for hospital-acquired acute kidney injury and 30-day mortality after admission. All modeling methods were robust in terms of discrimination and experienced deteriorating calibration. Random forest and neural network models experienced lower levels of calibration drift than regressions. The L2-penalized logistic regression for mortality demonstrated drift similar to the random forest. Increasing overprediction by all models correlated with declining event rates. Diverging patterns of calibration drift among acute kidney injury models coincided with changes in predictor-outcome associations. The mortality models revealed reduced susceptibility of random forest, neural network, and L2-penalized logistic regression models to case-mix-driven calibration drift. These findings support the advancement of clinical predictive analytics and lay a foundation for systems to maintain model accuracy. As calibration drift impacted each method, all clinical prediction models should be routinely reassessed and updated as needed. Regression models have a greater need for frequent evaluation and updating than machine learning models, highlighting the importance of tailoring updating protocols to variations in the susceptibility of models to patient population shifts. While the suite of best practices remains to be developed, modeling methods will be an essential component in determining when and how models are updated.
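A common way to quantify calibration drift of this kind, sketched below on synthetic data, is to fit a logistic recalibration model of observed outcomes on the logit of predicted risk within successive time windows: an intercept drifting from 0 or a slope from 1 signals miscalibration. This is a generic illustration, not the thesis's exact analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def calibration_intercept_slope(y_true, y_pred):
    """Fit outcome ~ logit(predicted risk); perfect calibration gives
    intercept 0 and slope 1."""
    logit = np.log(y_pred / (1.0 - y_pred)).reshape(-1, 1)
    # Large C effectively disables regularization for the recalibration fit.
    fit = LogisticRegression(C=1e6).fit(logit, y_true)
    return fit.intercept_[0], fit.coef_[0][0]

# Synthetic drift: predictions stay fixed while the true event rate falls,
# so the model increasingly overpredicts in later windows.
for year, event_rate in [(2015, 0.30), (2016, 0.25), (2017, 0.20)]:
    p = np.clip(rng.beta(2, 5, size=5000), 0.01, 0.99)  # predicted risks
    y = rng.binomial(1, np.clip(p * event_rate / 0.30, 0.0, 1.0))
    a, b = calibration_intercept_slope(y, p)
    print(f"{year}: intercept={a:+.2f}, slope={b:.2f}")
```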
