
Continual Learning for Deep Dense Prediction

Transferring a deep learning model from old tasks to a new one is known to suffer from catastrophic forgetting. Such forgetting is problematic because it prevents us from accumulating knowledge sequentially and requires retaining and retraining on all the training data. Existing techniques for mitigating the abrupt performance degradation on previously trained tasks have mainly been studied in the context of image classification. In this work, we present a simple method to alleviate catastrophic forgetting for pixel-wise dense labeling problems. We build upon the regularization technique of knowledge distillation to minimize the discrepancy between the posterior distributions of pixel class labels for old tasks predicted by 1) the original and 2) the updated network. This technique, however, might fail when the source and target distributions differ significantly. To handle this scenario, we further improve the distillation-based approach by adding an adaptive L2 regularization whose strength depends on each parameter's importance to the older tasks. We train our model on FCN-8s, but our training procedure generalizes to stronger models such as DeepLab and PSPNet. Through extensive evaluation and comparisons, we show that our technique can incrementally train dense prediction models for novel object classes, different visual domains, and different visual tasks.

Master of Science

Modern deep networks have been successful on many important problems in computer vision, e.g., object classification, object detection, and pixel-wise dense labeling. However, learning multiple tasks incrementally remains a fundamental problem in computer vision. When trained incrementally on multiple tasks, deep networks are known to suffer from catastrophic forgetting of the older tasks, which leads to a significant decrease in accuracy. Such forgetting is problematic and fundamentally constrains the knowledge that can be accumulated within these networks.
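As a concrete illustration, the distillation term described above can be sketched as a pixel-wise cross-entropy between the class posteriors of the original (frozen) network and the updated network. The function names, array shapes, and temperature below are our own illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis (last axis).
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def pixelwise_distillation_loss(old_logits, new_logits, T=2.0):
    """Cross-entropy between the (temperature-softened) posteriors of the
    frozen old network, used as soft targets, and the posteriors of the
    updated network, averaged over all pixels.

    old_logits, new_logits: arrays of shape (H, W, C) holding per-pixel
    class logits for an image of the new task.
    """
    p_old = softmax(old_logits / T)                      # soft targets
    log_p_new = np.log(softmax(new_logits / T) + 1e-12)  # updated posteriors
    return float(-(p_old * log_p_new).sum(axis=-1).mean())
```

The loss attains its minimum (the entropy of the old posteriors) exactly when the updated network reproduces the old network's predictions at every pixel, which is what keeps the old tasks from being forgotten while the new-task loss pulls the weights elsewhere.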

In this work, we present a simple algorithm to alleviate catastrophic forgetting for pixel-wise dense labeling problems. To prevent the network from forgetting connections important to the older tasks, we record the predicted labels/outputs of the older tasks on the images of the new task. While training on the images of the new task, we enforce a constraint that the network "remember" these recorded predictions. Additionally, we identify which connections in the deep network are important to the older tasks and prevent those connections from changing significantly. We show that our proposed algorithm can incrementally learn dense prediction models for novel object classes, different visual domains, and different visual tasks.
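The "prevent important connections from changing" idea can be sketched as an importance-weighted L2 penalty on the parameters, in the spirit of elastic weight consolidation. The names `importance_weighted_l2` and `squared_grad_importance`, the scaling `lam`, and the squared-gradient (Fisher-style) importance estimate are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def importance_weighted_l2(params, old_params, importance, lam=100.0):
    """Penalty (lam / 2) * sum_i Omega_i * (theta_i - theta_old_i)^2.

    Parameters with large importance Omega_i (critical to the older tasks)
    are anchored near their old values; unimportant parameters remain free
    to adapt to the new task.
    """
    d = params - old_params
    return float(0.5 * lam * np.sum(importance * d * d))

def squared_grad_importance(grads):
    """Fisher-style per-parameter importance: the average squared gradient
    of the old-task loss, one row of `grads` per sampled batch."""
    return np.mean(np.asarray(grads) ** 2, axis=0)
```

In training, this penalty would simply be added to the new-task loss (and to the distillation term), so a single gradient step trades off new-task accuracy against drift in the connections that mattered to the older tasks.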

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/83513
Date: 11 June 2018
Creators: Lokegaonkar, Sanket Avinash
Contributors: Computer Science, Ramakrishnan, Naren, Huang, Jia-Bin, Huang, Bert
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertations
Detected Language: English
Type: Thesis
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
