
Learning to Adapt Neural Networks Across Visual Domains

In the field of machine learning (ML), a commonly encountered problem is the poor generalization of learned classification functions to new samples that are not representative of the training distribution. The discrepancy between the training (a.k.a. source) and test (a.k.a. target) distributions is caused by several latent factors, such as changes in appearance, illumination and viewpoint, and is popularly known as domain shift. To make a classifier cope with such domain shifts, a sub-field of machine learning called domain adaptation (DA) has emerged that jointly uses the annotated data from the source domain together with the unlabelled data from the target domain of interest. Adapting a classifier to an unlabelled target data set is of great practical significance because it incurs no labelling cost and allows for more accurate predictions in the environment of interest. The majority of DA methods address the single-source, single-target scenario and are not easily extendable to many practical DA settings. With the increasing focus on making ML models deployable, improved methods are needed that can handle the inherently complex DA scenarios found in the real world.

In this work we move towards this goal by addressing more practical DA settings and devising novel methods for real-world applications: (i) We begin by analyzing and addressing the single-source, single-target setting, proposing whitening-based embedded normalization layers to align the marginal feature distributions of the two domains. To better exploit the unlabelled target data, we propose an unsupervised regularization loss that encourages predictions that are both confident and consistent. (ii) Next, we build on the proposed normalization layers and use them in a generative framework to address multi-source DA by posing it as an image translation problem. The proposed framework, TriGAN, allows a single generator to be learned from the data of all source domains, leading to better generation of target-like source images. (iii) We address multi-target DA by learning a single classifier for all target domains. Our proposed framework exploits feature aggregation with a graph convolutional network to align the feature representations of similar samples across domains. Moreover, to counteract noisy pseudo-labels, we propose a co-teaching strategy with a dual classifier head. To enable smoother adaptation when domain labels are available, we propose domain curriculum learning, which adapts to one target domain at a time in order of increasing domain gap. (iv) Finally, we address the challenging source-free DA setting, where the only source of supervision is a source-trained model. We propose to use the Laplace approximation to build a probabilistic source model that quantifies the uncertainty of the source model's predictions on the target data. This uncertainty is then used as an importance weight during target adaptation, down-weighting target samples that do not lie on the source manifold.
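To make contribution (i) more concrete, the following is a minimal sketch of an unsupervised regularization term that rewards confident and consistent predictions on unlabelled target data. It is an illustrative assumption rather than the exact loss used in the thesis: the function name, the use of two augmented views of the same target batch, and the entropy-plus-symmetric-KL formulation are all assumed here for the sake of a runnable PyTorch example.

```python
import torch
import torch.nn.functional as F

def confident_consistent_loss(logits_a: torch.Tensor,
                              logits_b: torch.Tensor) -> torch.Tensor:
    """Hypothetical unsupervised target loss.

    Encourages predictions that are (i) confident (low entropy) and
    (ii) consistent across two perturbed views of the same unlabelled
    target images.

    logits_a, logits_b: [batch, num_classes] classifier outputs for two
    augmentations of the same target batch.
    """
    p_a = F.softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)

    # Confidence term: mean prediction entropy over both views.
    entropy = -(p_a * torch.log(p_a + 1e-8)).sum(dim=1).mean()
    entropy += -(p_b * torch.log(p_b + 1e-8)).sum(dim=1).mean()

    # Consistency term: symmetric KL divergence between the two views.
    consistency = F.kl_div((p_a + 1e-8).log(), p_b, reduction="batchmean") \
                + F.kl_div((p_b + 1e-8).log(), p_a, reduction="batchmean")

    return 0.5 * entropy + consistency
```

In practice, a term of this kind would be added to the supervised source classification loss with a weighting hyperparameter, so that the classifier is pulled towards decision boundaries that pass through low-density regions of the target feature space.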

Identifier: oai:union.ndltd.org:unitn.it/oai:iris.unitn.it:11572/354343
Date: 29 September 2022
Creators: Roy, Subhankar
Contributors: Roy, Subhankar, Ricci, Elisa, Sebe, Niculae
Publisher: Università degli studi di Trento, place:Trento
Source Sets: Università di Trento
Language: English
Detected Language: English
Type: info:eu-repo/semantics/doctoralThesis
Rights: info:eu-repo/semantics/openAccess
Relation: firstpage:1, lastpage:110, numberofpages:110
