1

<strong>MODELING ACUTE CARE UTILIZATION FOR INSOMNIA PATIENTS </strong>

Zitong Zhu (16629747) 30 August 2023 (has links)
<p>Machine learning (ML) models can help improve health care services. However, they need to be practical to gain wide adoption. A methodology is proposed in this study to evaluate the utility of different data modalities and cohort segmentation strategies when designing these models. The methodology is used to compare models that predict emergency department (ED) and inpatient hospital (IH) visits. The data modalities include socio-demographics, diagnoses, and medications; cohort segmentation is based on age group and disease severity. The proposed methodology is applied to models developed using a cohort of insomnia patients and a cohort of general non-insomnia patients under different data modalities and segmentation strategies. All models are evaluated using traditional intra-cohort testing. In addition, to establish the need for disease-specific segmentation, transfer testing is recommended, where the same insomnia test patients used for intra-cohort testing are submitted to the general-patient model. The results indicate that using both diagnoses and medications as data sources does not generally improve model performance and may increase its overhead. For insomnia patients, the best ED and IH models, using both data modalities or either one alone, achieved an area under the receiver operating characteristic curve (AUC) of 0.71 and 0.78, respectively. Our results also show that an insomnia-specific model is not necessary when predicting future ED visits but may have merit when predicting IH visits. As such, we recommend the evaluation of disease-specific models using transfer testing. Based on these initial findings, a language model was pretrained using diagnosis codes. This model can be used to predict future ED and IH visits for insomnia and non-insomnia patients.</p>
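The transfer-testing protocol described in this abstract — scoring the same disease-specific test patients with both the disease-specific and the general-patient model, then comparing AUCs — can be sketched in a few lines. The labels and model scores below are synthetic placeholders, not results from the thesis:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ranked pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# one shared insomnia test set, scored by two hypothetical models
y_test = np.array([0, 0, 1, 1, 0, 1])
scores_specific = np.array([0.2, 0.3, 0.8, 0.7, 0.4, 0.6])  # insomnia-specific model
scores_general = np.array([0.3, 0.5, 0.6, 0.4, 0.2, 0.7])   # general-patient model

auc_intra = auc(y_test, scores_specific)     # intra-cohort testing
auc_transfer = auc(y_test, scores_general)   # transfer testing
```

A large gap between `auc_intra` and `auc_transfer` would argue for the disease-specific model; comparable values would suggest the general-patient model suffices.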
2

NON-INTRUSIVE WIRELESS SENSING WITH MACHINE LEARNING

YUCHENG XIE (16558152) 30 August 2023 (has links)
<p>This dissertation explores the world of non-intrusive wireless sensing for diet and fitness activity monitoring, in addition to assessing security risks in human activity recognition (HAR). It delves into the use of WiFi and millimeter wave (mmWave) signals for monitoring eating behaviors, discerning intricate eating activities, and observing fitness movements. The proposed systems harness variations in wireless signal propagation to record human behavior while providing exhaustive details on dietary and exercise habits. Significant contributions encompass unsupervised learning methodologies for detecting dietary and fitness activities, implementing soft-decision and deep neural networks for assorted activity recognition, constructing tiny motion mechanisms for subtle mouth muscle movement recovery, employing space-time-velocity features for multi-person tracking, as well as utilizing generative adversarial networks and domain adaptation structures to enable less cumbersome training efforts and cross-domain deployments. A series of comprehensive tests validate the efficacy and precision of the proposed non-intrusive wireless sensing systems. Additionally, the dissertation probes the security vulnerabilities in mmWave-based HAR systems and puts forth various sophisticated adversarial attacks - targeted, untargeted, universal, and black-box. It designs adversarial perturbations aiming to deceive the HAR models whilst striving to minimize detectability. The research offers powerful insights into issues and efficient solutions relative to non-intrusive sensing tasks and security challenges linked with wireless sensing technologies.</p>
3

EFFICIENT INTELLIGENCE TOWARDS REAL-TIME PRECISION MEDICINE WITH SYSTEMATIC PRUNING AND QUANTIZATION

Maneesh Karunakaran (18823297) 03 September 2024 (has links)
<p dir="ltr"> The widespread adoption of Convolutional Neural Networks (CNNs) in real-world applications, particularly on resource-constrained devices, is hindered by their computational complexity and memory requirements. This research investigates the application of pruning and quantization techniques to optimize CNNs for arrhythmia classification using the MIT-BIH Arrhythmia Database. By combining magnitude-based pruning, regularization-based pruning, filter map-based pruning, and quantization at different bit-widths (4-bit, 8-bit, 2-bit, and 1-bit), the study aims to develop a more compact and efficient CNN model while maintaining high accuracy. The experimental results demonstrate that these techniques effectively reduce model size, improve inference speed, and maintain accuracy, making the models suitable for deployment on devices with limited resources. The findings highlight the potential of these optimization techniques for real-time applications in mobile health monitoring and edge computing, paving the way for broader adoption of deep learning in resource-limited environments.</p>
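Two of the techniques this abstract combines — magnitude-based pruning and uniform quantization at a chosen bit-width — can be illustrated with a minimal NumPy sketch on toy weights (not the study's actual CNN or settings):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    w = weights.copy()
    k = int(sparsity * w.size)
    if k > 0:
        threshold = np.sort(np.abs(w), axis=None)[k - 1]
        w[np.abs(w) <= threshold] = 0.0
    return w

def quantize(weights, bits):
    """Uniform symmetric quantization to integer levels at the given bit-width."""
    levels = 2 ** (bits - 1) - 1             # e.g. 127 levels for 8-bit
    scale = np.abs(weights).max() / levels
    q = np.round(weights / scale).astype(np.int32)
    return q, scale

w = np.array([0.01, -0.5, 0.02, 0.9, -0.03, 0.4])
pruned = magnitude_prune(w, sparsity=0.5)    # 3 of the 6 weights are zeroed
q, scale = quantize(pruned, bits=8)          # integers in [-127, 127] plus a scale
dequant = q * scale                          # approximate reconstruction
```

Lower bit-widths shrink storage further at the cost of coarser reconstruction, which is the accuracy/size trade-off the study evaluates.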
4

Clinical Analytics and Personalized Medicine

Chih-Hao Fang (13978917) 19 October 2022 (has links)
<p>The increasing volume and availability of Electronic Health Records (EHRs) open up opportunities for computational models to improve patient care. Key factors in improving patient outcomes include identifying patient sub-groups with distinct characteristics and providing personalized treatment actions with expected improved outcomes. This thesis investigates how well-formulated matrix decomposition and causal inference techniques can be leveraged to tackle the problems of disease sub-typing and inferring treatment recommendations in healthcare. In particular, the research resulted in computational techniques based on archetypal analysis to identify and analyze disease sub-types and a causal reinforcement learning method for learning treatment recommendations. Our work on these techniques is divided into four parts in this thesis:</p> <p><br></p> <p>In the first part of the thesis, we present a retrospective study of Sepsis patients in intensive care environments using patient data. Sepsis accounts for more than 50% of hospital deaths, and the associated cost ranks the highest among hospital admissions in the US. Sepsis may be misdiagnosed because the patient is not thoroughly assessed or the symptoms are misinterpreted, which can lead to serious health complications or even death. An improved understanding of disease states, progression, severity, and clinical markers can significantly improve patient outcomes and reduce costs. We have developed a computational framework based on archetypal analysis that identifies disease states in sepsis using clinical variables and samples in the MIMIC-III database. Each identified state is associated with different manifestations of organ dysfunction. Patients in different states are observed to form statistically significantly distinct populations with disparate demographic and comorbidity profiles. We furthermore model disease progression using a Markov chain. 
Our progression model accurately characterizes the severity level of each pathological trajectory and identifies significant changes in clinical variables and treatment actions during sepsis state transitions. Collectively, our framework provides a holistic view of sepsis, and our findings provide the basis for the future development of clinical trials and therapeutic strategies for sepsis. These results have significant implications for a large number of hospitalizations.</p> <p><br></p> <p>In the second part, we focus on the problem of recommending optimal personalized treatment policies from observational data. Treatment policies are typically based on randomized controlled trials (RCTs); these policies are often sub-optimal and inconsistent, and may carry biases. Using observational data, we formulate suitable objective functions that encode causal reasoning in a reinforcement learning (RL) framework and present efficient algorithms for learning optimal treatment policies using interventional and counterfactual reasoning. We demonstrate the efficacy of our method on two observational datasets: (i) observational data to study the effectiveness of right heart catheterization (RHC) in the initial care of 5735 critically ill patients, and (ii) data from the Infant Health and Development Program (IHDP), aimed at estimating the effect of the intervention on neonatal health for 985 low-birth-weight, premature infants. For the RHC dataset, our method's policy prescribes RHC for 11.5% of the patients, compared to the best current method, which prescribes RHC for 38% of the patients. Even with this significantly reduced intervention, our policy yields a 1.5% improvement in the 180-day survival rate and a 2.2% improvement in the 30-day survival rate. 
For the IHDP dataset, we observe a 3.16% improvement in the rate of improvement of neonatal health using our method's policy.</p> <p><br></p> <p>In the third part, we consider the Supervised Archetypal Analysis (SAA) problem, which incorporates label information to compute archetypes. We formulate a new constrained optimization problem incorporating Laplacian regularization to guide archetypes towards groupings of similar data points, resulting in label-coherent archetypes and label-consistent soft assignments. We first use the MNIST dataset to show that SAA can yield better cluster quality than baselines for any chosen number of archetypes. We then use the CelebFaces Attributes dataset to demonstrate the superiority of SAA in terms of cluster quality and interpretability over competing supervised and unsupervised methods. We also demonstrate the interpretability of SAA decompositions in the context of a movie rating application. We show that the archetypes from SAA can be directly interpreted as user ratings and encode class-specific movie preferences. Finally, we demonstrate how the SAA archetypes can be used for personalized movie recommendations. </p> <p><br></p> <p>In the last part of this thesis, we apply our SAA technique to clinical settings. We study the problem of developing methods for ventilation recommendations for Sepsis patients. Mechanical ventilation is an essential and commonly prescribed intervention for Sepsis patients. However, studies have shown that mechanical ventilation is associated with higher mortality rates on average; it is generally believed that this is a consequence of the broad use of ventilation, and that more targeted use can significantly improve the average treatment effect and, consequently, survival rates. We develop a computational framework using Supervised Archetypal Analysis to stratify our cohort and identify groups that benefit from ventilators. 
We use SAA to group patients based on pre-treatment variables as well as treatment outcomes by constructing a Laplacian regularizer from treatment response (label) information and incorporating it into the objective function of AA. Using our Sepsis cohort, we demonstrate that our method can effectively stratify our cohort into sub-cohorts that have positive and negative ATEs, corresponding to groups of patients that should and should not receive mechanical ventilation, respectively. </p> <p>We then train a classifier to identify patient sub-cohorts with positive and negative treatment effects. We show that our treatment recommender, on average, has a high positive ATE for patients that are recommended ventilator support and a slightly negative ATE for those not recommended ventilator support. We use SHAP (Shapley Additive exPlanations) techniques for generating clinical explanations for our classifier and demonstrate their use in the generation of patient-specific classification and explanation. Our framework provides a powerful new tool to assist in the clinical assessment of Sepsis patients for ventilator use.</p>
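The motivation for stratifying the cohort can be seen in a toy average treatment effect (ATE) calculation: a pooled cohort can show a near-zero ATE while its sub-cohorts have strongly positive and negative effects. The outcomes below are synthetic, not the study's Sepsis data:

```python
import numpy as np

def ate(treated, control):
    """Average treatment effect: mean outcome difference, treated minus control."""
    return np.mean(treated) - np.mean(control)

# toy sub-cohorts: group A benefits from the intervention, group B is harmed
a_treated, a_control = np.array([0.8, 0.9, 0.7]), np.array([0.4, 0.5, 0.3])
b_treated, b_control = np.array([0.3, 0.2, 0.4]), np.array([0.6, 0.7, 0.5])

ate_a = ate(a_treated, a_control)   # positive ATE: recommend treatment
ate_b = ate(b_treated, b_control)   # negative ATE: withhold treatment
ate_pooled = ate(np.concatenate([a_treated, b_treated]),
                 np.concatenate([a_control, b_control]))  # nearly washes out
```

Stratification recovers the two opposite-signed effects that the pooled estimate hides, which is exactly the role SAA plays in the framework above.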
5

<b>WEARABLE BIG DATA HARNESSING WITH DEEP LEARNING, EDGE COMPUTING AND EFFICIENCY OPTIMIZATION</b>

Jiadao Zou (16920153) 03 January 2024 (has links)
<p dir="ltr">In this dissertation, efforts and innovations are made to advance subtle pattern mining, edge computing, and system efficiency optimization for biomedical applications, thereby advancing big data for precision medicine.</p><p dir="ltr">Brain visual dynamics encode rich functional and biological patterns of the neural system and are promising for applications like intention decoding, cognitive load quantification, and neural disorder measurement. Here we focus on understanding brain visual dynamics in the amyotrophic lateral sclerosis (ALS) population. We leverage a deep learning framework for automatic feature learning and classification, which can translate the eye electrooculography (EOG) signal into meaningful words. We then build an edge computing platform on the smartphone for learning, visualization, and decoded-word demonstration, all in real time. In a further study, we leveraged deep transfer learning to boost EOG decoding effectiveness. More specifically, the model trained on basic eye movements is treated as an additional feature extractor when classifying the signal into meaningful words, resulting in higher accuracy.</p><p dir="ltr">Further efforts are made to decode the functional near-infrared spectroscopy (fNIRS) signal, which encodes rich brain dynamics such as cognitive load. We have proposed a novel Multi-view Multi-channel Graph Neural Network (mmGNN). More specifically, we propose to mine the multi-channel fNIRS dynamics with a multi-stage GNN that can effectively extract channel-specific patterns, propagate patterns among channels, and fuse patterns for high-level abstraction. Further, we boost the learning capability with multi-view learning to mine pertinent patterns in the temporal, spectral, time-frequency, and statistical domains.</p><p dir="ltr">Massive-device systems, like wearable massive-sensor computers and the Internet of Things (IoT), are promising in the era of big data. 
The crucial challenge is how to maximize efficiency under coupled constraints such as energy budget, computing, and communication. We propose a deep reinforcement learning framework with a pattern booster and a learning adaptor. This framework has been demonstrated to maximize energy utilization and computing efficiency on the local devices in a one-center, fifteen-device setting.</p><p dir="ltr">Our research and findings are expected to greatly advance intelligent, real-time, and efficient big data harnessing, leveraging deep learning, edge computing, and efficiency optimization.</p>
6

ENHANCING POLICY OPTIMIZATION FOR IMPROVED SAMPLE EFFICIENCY AND GENERALIZATION IN DEEP REINFORCEMENT LEARNING

Md Masudur Rahman (19818171) 08 October 2024 (has links)
<p dir="ltr">The field of reinforcement learning has made significant progress in recent years, with deep reinforcement learning (RL) being a major contributor. However, there are still challenges associated with the effective training of RL algorithms, particularly with respect to sample efficiency and generalization. This thesis aims to address these challenges by developing RL algorithms capable of generalizing to unseen environments and adapting to dynamic conditions, thereby expanding the practical applicability of RL in real-world tasks. The first contribution of this thesis is the development of novel policy optimization techniques that enhance the generalization capabilities of RL agents. These techniques include the Thinker method, which employs style transfer to diversify observation trajectories, and Bootstrap Advantage Estimation, which improves policy and value function learning through augmented data. These methods have demonstrated superior performance in standard benchmarks, outperforming existing data augmentation and policy optimization techniques. Additionally, this thesis introduces Robust Policy Optimization, a method that enhances exploration in policy gradient-based RL by perturbing action distributions. This method addresses the limitations of traditional methods, such as entropy collapse and primacy bias, resulting in improved sample efficiency and adaptability in continuous action spaces. The thesis further explores the potential of natural language descriptions as an alternative to image-based state representations in RL. This approach enhances interpretability and generalization in tasks involving complex visual observations by leveraging large language models. Furthermore, this work contributes to the field of semi-autonomous teleoperated robotic surgery by developing systems capable of performing complex surgical tasks remotely, even under challenging conditions such as communication delays and data scarcity. 
The creation of the DESK dataset supports knowledge transfer across different robotic platforms, further enhancing the capabilities of these systems. Overall, the advancements presented in this thesis represent significant steps toward developing more robust, adaptable, and efficient autonomous agents. These contributions have broad implications for various real-world applications, including autonomous systems, robotics, and safety-critical tasks such as medical surgery.</p>
7

Myson Burch Thesis

Myson C Burch (16637289) 08 August 2023 (has links)
<p>With the completion of the Human Genome Project and many additional efforts since, there is an abundance of genetic data that can be leveraged to revolutionize healthcare. Now, there are significant efforts to develop state-of-the-art techniques that reveal insights about connections between genetics and complex diseases such as diabetes, heart disease, or common psychiatric conditions that depend on multiple genes interacting with environmental factors. These methods help pave the way towards diagnosis, cure, and ultimately prediction and prevention of complex disorders. As a part of this effort, we address high-dimensional genomics-related questions through mathematical modeling, statistical methodologies, combinatorics, and scalable algorithms. More specifically, we develop innovative techniques at the intersection of technology and life sciences, using biobank-scale data from genome-wide association studies (GWAS) and machine learning, in an effort to better understand human health and disease. <br> <br> The underlying principle behind GWAS is a test for association between genotyped variants for each individual and the trait of interest. GWAS have been extensively used to estimate the signed effects of trait-associated alleles and to map genes to disorders; over the past decade, about 10,000 strong associations between genetic variants and one (or more) complex traits have been reported. One of the key challenges in GWAS is population stratification, which can lead to spurious genotype-trait associations. Our work proposes a simple clustering-based approach that corrects for stratification better than existing methods. This method takes linkage disequilibrium (LD) into account while computing the distance between individuals in a sample. 
Our approach, called CluStrat, performs Agglomerative Hierarchical Clustering (AHC) using a regularized Mahalanobis distance-based GRM, which captures the population-level covariance (LD) matrix for the available genotype data.<br> <br> Linear mixed models (LMMs) have been a popular and powerful method for conducting GWAS in the presence of population structure. However, LMMs are computationally expensive relative to simpler techniques. We implement matrix sketching in LMMs (MaSk-LMM) to mitigate the more expensive computations. Matrix sketching is an approximation technique in which random projections are applied to compress the original dataset into one that is significantly smaller and still preserves some of the properties of the original dataset up to some guaranteed approximation ratio. This technique naturally applies to problems in genetics, where we can treat large biobanks as a matrix whose rows represent samples and whose columns represent SNPs. Because biobanks contain so many individuals and markers, these matrices are very large and benefit substantially from sketching. Our approach tackles the bottleneck of LMMs directly by sketching on the samples of the genotype matrix, as well as sketching on the markers during the computation of the relatedness or kinship matrix (GRM). <br> <br> Predictive analytics have been used to improve healthcare by reinforcing decision-making, enhancing patient outcomes, and providing relief for the healthcare system. The prevalence of complex diseases varies greatly around the world. Understanding the basis of this prevalence difference can help disentangle the interactions among the different factors causing complex disorders and identify groups of people who may be at greater risk of developing certain disorders. 
This could become the basis of the implementation of early intervention strategies for populations at higher risk with significant benefits for public health.<br> <br> This dissertation broadens our understanding of empirical population genetics. It proposes a data-driven perspective to a variety of problems in genetics such as confounding factors in genetic structure. This dissertation highlights current computational barriers in open problems in genetics and provides robust, scalable and efficient methods to ease the analysis of genotype data.</p>
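The marker-sketching idea behind MaSk-LMM — replacing the m genotype markers with s ≪ m random projections before forming the relatedness matrix (GRM) — can be sketched with NumPy on synthetic genotypes. All dimensions and data here are illustrative, not from the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, s = 50, 2000, 400        # n samples, m markers, sketch dimension s << m
X = rng.integers(0, 3, size=(n, m)).astype(float)  # toy 0/1/2 minor-allele counts
X -= X.mean(axis=0)            # center each marker, as in standard GRM construction

grm_exact = X @ X.T / m        # n x n relatedness matrix from all markers

# sketch the markers: project onto s random Gaussian directions
S = rng.normal(0.0, 1.0 / np.sqrt(s), size=(m, s))
Xs = X @ S                     # compressed n x s genotype representation
grm_sketch = Xs @ Xs.T / m     # unbiased approximation; error shrinks as s grows

rel_err = np.linalg.norm(grm_sketch - grm_exact) / np.linalg.norm(grm_exact)
```

Sketching the samples (rows) works the same way, with the projection applied on the left; either way, downstream computations run on the much smaller matrix.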
8

<b>Digital Health And Improvement Of Healthcare Access</b>

Mateus Schmitt (18445557) 26 April 2024 (has links)
<p dir="ltr">Digital Health technologies have revolutionized healthcare delivery, offering innovative solutions that enhance access, improve patient outcomes, and optimize the use of resources. Despite this advancement, health outcomes remain disparate across different social groups, with underprivileged populations at an increased risk of poor health outcomes due to inadequate access to care. Digital Health technologies serve as a critical intervention in mitigating these disparities, particularly for groups affected by geographical, economic, and infrastructural barriers.<br><br>The purpose of this study was to conduct a review of the current state of Digital Health technologies, including Software as a Medical Device (SaMD), Wearable Health, Portable Diagnostic Devices, and remote care platforms, and their impact on healthcare accessibility. Employing a qualitative methodology, this metasynthesis emphasized an important discovery: the need for a paradigm shift among stakeholders in healthcare towards integrated and digitally driven patient care. This shift requires more than just an understanding of new technologies; it demands a fundamental re-evaluation of patient care methods and the orchestration of the entire healthcare system towards integrated digital practices. Importantly, this study found that the pace of digitalization must be carefully managed and that cultural factors must be considered, signaling the urgency of a balanced approach to digital integration in healthcare.</p>
9

EXPLORING GRAPH NEURAL NETWORKS FOR CLUSTERING AND CLASSIFICATION

Fattah Muhammad Tahabi (14160375) 03 February 2023 (has links)
<p><strong>Graph Neural Networks</strong> (GNNs) have become immensely popular and prominent deep learning techniques for analyzing structural graph data, owing to their ability to solve complex real-world problems. Because graphs provide an efficient way to represent abstract concepts, modern research seeks to overcome a limitation of classical graph theory: traditional algorithms require prior knowledge of the graph structure before they can be employed. GNNs, an impressive framework for representation learning on graphs, have already produced many state-of-the-art techniques for node classification, link prediction, and graph classification tasks. GNNs can learn meaningful representations of graphs by incorporating topological structure, node attributes, and neighborhood aggregation to solve supervised, semi-supervised, and unsupervised graph-based problems. In this study, the usefulness of GNNs has been analyzed primarily from two aspects - <strong>clustering and classification</strong>. We focus on these two techniques, as they are among the most popular data mining strategies for making sense of collected data and performing predictive analysis.</p>
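The neighborhood aggregation at the core of GNNs can be illustrated with one message-passing layer in NumPy: each node averages features over its neighborhood (including itself), then applies a shared linear transform and nonlinearity. The graph, features, and weights below are toy values, not any architecture from the study:

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer: mean-aggregate each node's neighborhood
    (with a self-loop), then apply a shared linear transform and ReLU."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    deg_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(0.0, (deg_inv * A_hat) @ H @ W)

# toy graph: nodes 0-1 connected, nodes 2-3 connected
A = np.array([[0., 1., 0., 0.],
              [1., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 1., 0.]])
H = np.array([[1., 0.],        # initial node features
              [0., 1.],
              [1., 1.],
              [2., 0.]])
W = np.eye(2)                  # identity weights, for illustration

H1 = gnn_layer(A, H, W)        # connected nodes now share information
```

Stacking such layers lets information propagate over longer paths, which is what makes the node classification, link prediction, and clustering tasks above tractable.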
