
Adversarial Attacks On Graph Convolutional Transformer With EHR Data

This research explores adversarial attacks on Graph Convolutional Transformer (GCT) models that use Electronic Health Record (EHR) data. As deep learning models become increasingly integral to healthcare, ensuring their robustness against adversarial threats is critical. The thesis assesses the susceptibility of GCT models to two specific attacks, the Fast Gradient Sign Method (FGSM) and the Jacobian-based Saliency Map Attack (JSMA), and examines their effect on the model's predictions of mortality and readmission. In experiments on the MIMIC-III and eICU datasets, the study finds that although the GCT model performs well on EHR data under normal conditions, its performance degrades sharply under adversarial conditions: accuracy falls from 86% on clean test data to about 57%, and the area under the curve (AUC) drops from 0.86 to 0.51. These findings, averaged across both datasets and attack methods, underscore the urgent need for effective adversarial defense mechanisms in AI systems used in healthcare. The thesis contributes to the field by identifying these vulnerabilities and suggesting strategies to enhance the resilience of GCT models against adversarial manipulation.
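For orientation, FGSM builds an adversarial example in a single gradient step: it perturbs each input feature in the direction of the sign of the loss gradient, with the step size bounded by a budget epsilon. The sketch below is a minimal, generic PyTorch illustration of that idea under assumed inputs; the function name `fgsm_perturb`, the toy classifier, and the synthetic features are placeholders and do not reproduce the thesis's GCT pipeline, which operates on EHR graph structures.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, loss_fn, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float) -> torch.Tensor:
    """One-step FGSM: move each feature along the sign of the loss gradient,
    keeping the perturbation within epsilon in the L-infinity norm."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Toy usage with a stand-in classifier over dense synthetic features
# (hypothetical shapes, not the MIMIC-III / eICU preprocessing).
if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2))
    x = torch.randn(8, 32)            # 8 synthetic records, 32 features
    y = torch.randint(0, 2, (8,))     # binary labels (e.g., mortality)
    x_adv = fgsm_perturb(model, nn.CrossEntropyLoss(), x, y, epsilon=0.05)
    print((x_adv - x).abs().max())    # max perturbation is at most epsilon
```

JSMA differs in that it uses the model's Jacobian to rank feature saliency and perturbs only the most influential features, which is omitted here for brevity.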

DOI: 10.25394/pgs.25685052.v1
Identifier: oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/25685052
Date: 28 April 2024
Creators: Siddhartha Pothukuchi (18437181)
Source Sets: Purdue University
Detected Language: English
Type: Text, Thesis
Rights: CC BY 4.0
Relation: https://figshare.com/articles/thesis/Adversarial_Attacks_On_Graph_Convolutional_Transformer_With_EHR_Data/25685052
