TwinLossGAN: Domain Adaptation Learning for Semantic Segmentation

Most semantic segmentation methods based on Convolutional Neural Networks (CNNs) rely on supervised pixel-level labelling. Because pixel-level labelling is time-consuming and laborious, synthetic images generated by software are an attractive alternative: their label information is already embedded in the data, so labelling can be done automatically. This advantage has made synthetic datasets widely used for training deep learning models intended for real-world cases. Still, compared with supervised learning on real-world labelled images, models trained on synthetic datasets achieve noticeably lower accuracy when applied to real-world data.
Researchers have therefore turned to Unsupervised Domain Adaptation (UDA), which transfers knowledge learned in one domain to another: a model can be trained on synthetic data and then apply what it learned to real-world problems. UDA is an essential part of transfer learning. It aims to bring the feature distributions of the two domains as close as possible, so that the knowledge and distribution learned in the source-domain feature space carry over to the target space and improve prediction accuracy in the target domain.
However, compared with traditional supervised learning models, UDA models achieve low accuracy when used for scene segmentation of real images. The cause is that the domain gap between the source and target domains is too large: the image distribution information the model learns from the source domain cannot be applied to the target domain, which limits the development of UDA.
We therefore propose a new UDA model called TwinLossGAN, which reduces the domain gap in two steps. The first step mixes images from the source and target domains so that the model learns the image features of both. A synthetic image is selected from the source domain and a real-world image from the target domain; both are fed to the segmenter to obtain separate semantic segmentation results, which are then passed to the mixing module. The mixing module uses the ClassMix method to copy and paste segmented objects from one image into the other using the segmentation masks, producing inter-domain composite images and the corresponding pseudo-labels (a sketch of this step follows below).
In the second step, we modify a Generative Adversarial Network (GAN) to further reduce the gap between domains. A standard GAN has two main parts: a generator and a discriminator. In TwinLossGAN, the generator performs semantic segmentation on the source-domain and target-domain images, and the two segmentations are trained in parallel. The source-domain synthetic images are segmented and their loss is computed against the synthetic labels; at the same time, the inter-domain composite images are fed to the segmentation module, which compares its results with the pseudo-labels and computes a second loss. These twin losses serve as the generator loss across the GAN training iterations, while the discriminator judges whether a semantic segmentation result originates from the source or the target domain (a sketch of one generator update also follows below).
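
To make the first step concrete, here is a minimal PyTorch-style sketch of the cross-domain mixing. The function and argument names are ours, not code from the thesis; the rule of transferring half of the classes present in the source prediction is the one used by the ClassMix method.

    import torch

    def classmix(img_src, img_tgt, pred_src, pseudo_tgt):
        # img_src, img_tgt : (3, H, W) source / target images
        # pred_src         : (H, W) argmax segmentation of the source image
        #                    (for synthetic data the ground-truth label works too)
        # pseudo_tgt       : (H, W) pseudo-label predicted for the target image
        classes = torch.unique(pred_src)
        # ClassMix rule: randomly keep half of the classes found in the source map.
        chosen = classes[torch.randperm(len(classes))[: max(1, len(classes) // 2)]]
        # Binary mask: 1 where the source pixel belongs to a chosen class.
        mask = torch.isin(pred_src, chosen).float()
        # Copy-paste the masked source objects onto the target image.
        mixed_img = mask.unsqueeze(0) * img_src + (1 - mask.unsqueeze(0)) * img_tgt
        # The pseudo-label of the composite follows the same mask.
        mixed_lbl = (mask * pred_src + (1 - mask) * pseudo_tgt).long()
        return mixed_img, mixed_lbl
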
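The second step, one generator update with the twin losses, can be sketched in the same spirit. The two cross-entropy terms follow the description above; the adversarial term and its weight adv_weight are our assumptions about how the segmenter (generator) is played against the discriminator, in the usual output-space adversarial style.

    import torch
    import torch.nn.functional as F

    def generator_step(segmenter, discriminator, src_img, src_lbl,
                       mixed_img, mixed_lbl, adv_weight=0.01):
        # Twin loss 1: supervised loss on the synthetic source image,
        # computed against its automatically generated label.
        src_logits = segmenter(src_img)                      # (N, C, H, W)
        loss_src = F.cross_entropy(src_logits, src_lbl)

        # Twin loss 2: loss on the inter-domain composite image,
        # computed against the pseudo-label from the mixing module.
        mix_logits = segmenter(mixed_img)
        loss_mix = F.cross_entropy(mix_logits, mixed_lbl)

        # Adversarial term (assumed form): push the discriminator to read
        # the composite's segmentation as source-like (label 1 = "source").
        d_out = discriminator(F.softmax(mix_logits, dim=1))
        loss_adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

        # The twin losses plus the adversarial term drive the GAN cycle.
        return loss_src + loss_mix + adv_weight * loss_adv

In each iteration the discriminator would be updated separately with the opposite labels, so that it learns to tell source-domain segmentations from target-domain ones, as the abstract describes.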
We evaluated TwinLossGAN using GTA5 and SYNTHIA as the source-domain data and Cityscapes as the target-domain data. The results show that the accuracy of the proposed TwinLossGAN is much higher than that of the baseline UDA models.

Identifier: oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/43939
Date: 19 August 2022
Creators: Song, Yuehua
Contributors: Lee, Wonsook
Publisher: Université d'Ottawa / University of Ottawa
Source Sets: Université d'Ottawa
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
