<p dir="ltr">Generative Image-to-Image Translation (I2IT) involves transforming an input image from one domain to another. Typically, this transformation retains the content in the input image while adjusting the domain-dependent style elements. Generative I2IT finds utility in a wide range of applications, yet its effectiveness hinges on adaptations to the unique characteristics of the data at hand. This dissertation pushes the boundaries of I2IT by applying it to stain-related problems in computational pathology. Particularly, the main contributions span two major applications of stain translation: H&E-to-H&E and H&E-to-IHC, each with its unique requirements and challenges. More specifically, the first contribution addresses the generalization challenge posed by the high variability in H&E stain appearances to any task-specific machine learning models. To this end, the Generative Stain Augmentation Network (G-SAN) is introduced to augment the training images in any downstream task with random and diverse H&E stain appearances. Experimental results demonstrate G-SAN’s ability to enhance model generalization across stain variations in downstream tasks. The second key contribution in this dissertation focuses on H&E-to-IHC stain translation. The major challenge in learning accurate H&E-to-IHC stain translation is the frequent and sometimes severe inconsistencies in the groundtruth H&E-IHC image pairs. To make training more robust to these inconsistencies, a novel contrastive learning based loss, named the Adaptive Supervised PatchNCE (ASP) loss is presented. Experimental results suggest that the proposed ASP-based framework outperforms the state-of-the-art in H&E-to-IHC stain translation by significant margins. Additionally, a new dataset for H&E-to-IHC translation – the Multi-IHC Stain Translation (MIST) dataset, is released to the public, featuring paired images from H&E to four different IHC stains. For future directions of generative I2IT in stain translation problems, a proof-of-concept study of applying the latest diffusion model based I2IT methods to the problem of virtual H&E staining is presented.</p>
Identifier | oai:union.ndltd.org:purdue.edu/oai:figshare.com:article/24425383
Date | 24 October 2023
Creators | Fangda Li (17272816) |
Source Sets | Purdue University |
Detected Language | English |
Type | Text, Thesis |
Rights | CC BY 4.0 |
Relation | https://figshare.com/articles/thesis/Generative_Image-to-Image_Translation_with_Applications_in_Computational_Pathology/24425383 |