
Robust, Interpretable, and Portable Deep Learning Systems for Detection of Ophthalmic Diseases

The World Health Organization estimates that 285 million people worldwide suffer from visual impairment. The top two causes of uncorrectable vision loss are glaucoma and age-related macular degeneration (AMD): 112 million people are anticipated to be affected by glaucoma by 2040, and nearly 15% of U.S. adults aged 43-86 are predicted to be diagnosed with AMD over the next 15 years. The most valuable preventive action to slow the progression of these ophthalmic diseases is timely detection and treatment by an ophthalmologist; however, over 50% of glaucoma cases go undetected due to lack of timely assessment by a medical expert. This thesis seeks to transform artificial intelligence (AI) into a trustworthy partner to clinicians, expediting diagnostic screening in obvious cases and serving as corroboration (a ‘second opinion’) in ambiguous cases. For AI algorithms to be trusted as teammates in the clinic, they must be robust to data collected at different sites and from different patient populations, their decision-making mechanisms must be explainable, and, to benefit the broadest population (for whom expensive imaging equipment and/or specialist time may not be available), they must be portable.

This thesis addresses these three challenges by (1) developing and evaluating robust deep learning (DL) algorithms for detection of glaucoma and AMD from data collected at multiple sites or with multiple imaging modalities; (2) making AI interpretable, through (a) comparison of the image concepts used by DL systems for decision-making with the image regions fixated upon by human experts during glaucoma diagnosis, and (b) odds ratio ranking of the clinical biomarkers most indicative of AMD risk used by both experts and AI; and (3) enhancing the image quality of data collected with a portable optical coherence tomography (OCT) device using deep-learning-based super-resolution generative adversarial network (GAN) approaches. The resulting robust deep learning algorithms achieve accuracy as high as 95% at detecting glaucoma and AMD from OCT and OCT angiography images and volumes. The comparison of interpretable AI concepts with expert eye movements showed the importance of three OCT-report sub-regions used by both AI and human experts for glaucoma detection.
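As a rough illustration of the cross-site robustness evaluation described in contribution (1), the sketch below fine-tunes an ImageNet-pretrained CNN on OCT images from one site and measures its accuracy on a held-out site. The backbone, directory layout, and hyperparameters are illustrative assumptions, not the exact models or protocol used in the thesis.

    # Minimal sketch (assumptions: torchvision ResNet-18 backbone, ImageFolder layout
    # with per-site directories such as data/site_A/{glaucoma,healthy}); not the
    # thesis's exact architecture or training protocol.
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])

    train_set = datasets.ImageFolder("data/site_A", transform=tfm)  # training site
    test_set = datasets.ImageFolder("data/site_B", transform=tfm)   # unseen site

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)                   # glaucoma vs. healthy

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        model.train()
        for x, y in DataLoader(train_set, batch_size=32, shuffle=True):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Cross-site accuracy: how well the model generalizes to data from another site.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in DataLoader(test_set, batch_size=32):
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    print(f"held-out-site accuracy: {correct / total:.3f}")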

The pipeline described here for evaluating AI robustness and for validating the interpretable image concepts used by deep learning systems against expert eye movements has the potential to help standardize the acceptance of new AI tools for clinical use. Furthermore, the eye-movement collection protocols introduced in this thesis may also help train current medical residents and fellows on the key features employed by expert specialists for accurate and efficient eye disease diagnosis. The odds ratio ranking of AMD biomarkers identified the two clinical features most indicative of AMD risk, choroidal neovascularization and geographic atrophy, on which both AI and experts agree.
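To make the odds ratio ranking concrete, the sketch below computes per-biomarker odds ratios with 95% confidence intervals from binary biomarker and diagnosis columns and sorts biomarkers by estimated risk. The file name, column names, and biomarker list are hypothetical placeholders, not the thesis's dataset.

    # Minimal sketch of odds ratio ranking for binary biomarkers (hypothetical
    # column names; a Haldane-Anscombe 0.5 correction guards against zero cells).
    import numpy as np
    import pandas as pd

    def odds_ratio(present, disease):
        a = np.sum(present & disease) + 0.5        # biomarker present, AMD
        b = np.sum(present & ~disease) + 0.5       # biomarker present, no AMD
        c = np.sum(~present & disease) + 0.5       # biomarker absent, AMD
        d = np.sum(~present & ~disease) + 0.5      # biomarker absent, no AMD
        or_ = (a * d) / (b * c)
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)        # standard error of log(OR)
        lo = np.exp(np.log(or_) - 1.96 * se)
        hi = np.exp(np.log(or_) + 1.96 * se)
        return or_, lo, hi

    df = pd.read_csv("amd_biomarkers.csv")         # hypothetical file
    disease = df["amd"].astype(bool).to_numpy()
    biomarkers = ["choroidal_neovascularization", "geographic_atrophy", "drusen"]

    ranked = sorted(
        ((name, *odds_ratio(df[name].astype(bool).to_numpy(), disease))
         for name in biomarkers),
        key=lambda r: r[1], reverse=True)

    for name, or_, lo, hi in ranked:
        print(f"{name}: OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")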

Lastly, GAN-based super-resolution of portable OCT images boosted the performance of downstream deep learning systems for AMD detection, facilitating future work toward embedding AI algorithms within portable OCT devices so that a larger population can gain access to potentially sight-saving technology. By enhancing AI robustness, interpretability, and portability, this work paves the way for ophthalmologist-AI teams to outperform either human experts or AI alone, leading to expedited eye disease detection and treatment and thus better patient outcomes.
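The sketch below illustrates the general SRGAN-style inference pattern of super-resolving a low-resolution portable OCT B-scan with a trained generator before handing it to a downstream classifier. The toy generator architecture, the checkpoint path, and the 4x upscaling factor are assumptions for illustration, not the thesis's actual network.

    # Minimal SRGAN-style inference sketch (generator architecture, checkpoint path,
    # and downstream classifier are illustrative assumptions, not the thesis's code).
    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):
        """Toy residual generator with 4x sub-pixel upsampling."""
        def __init__(self, channels=1):
            super().__init__()
            self.head = nn.Sequential(nn.Conv2d(channels, 64, 9, padding=4), nn.PReLU())
            self.body = nn.Sequential(*[
                nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.PReLU())
                for _ in range(4)])
            self.up = nn.Sequential(
                nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.PReLU(),
                nn.Conv2d(64, 256, 3, padding=1), nn.PixelShuffle(2), nn.PReLU())
            self.tail = nn.Conv2d(64, channels, 9, padding=4)

        def forward(self, x):
            f = self.head(x)
            return self.tail(self.up(f + self.body(f)))

    generator = TinyGenerator()
    # generator.load_state_dict(torch.load("srgan_generator.pt"))  # hypothetical weights
    generator.eval()

    low_res = torch.rand(1, 1, 64, 64)        # stand-in for a portable OCT B-scan
    with torch.no_grad():
        high_res = generator(low_res)         # 256 x 256 enhanced B-scan
    # high_res would then be fed to the downstream AMD classifier.
    print(high_res.shape)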

Identifier oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/kr3y-bh86
Date January 2022
Creators Thakoor, Kaveri Anil
Source Sets Columbia University
Language English
Detected Language English
Type Theses
