
Application of prior information to discriminative feature learning

Learning discriminative feature representations has attracted a great deal of attention, since it is a critical step that facilitates subsequent classification, retrieval and recommendation tasks. In this dissertation, besides incorporating prior knowledge about image labels into image classification, as most prevalent feature learning methods currently do, we also explore several other general-purpose priors and verify their effectiveness in discriminative feature learning. Because a more powerful representation can be learned by exploiting such general priors, our approaches achieve state-of-the-art results on challenging benchmarks. We elaborate on these general-purpose priors and highlight where we have made novel contributions.

We apply sparsity and hierarchical priors to the explanatory factors that describe the data, in order to better discover the underlying data structure. More specifically, in our first approach we incorporate only sparse priors into the feature learning. To this end, we present a support discrimination dictionary learning method, which finds a dictionary under which the feature representations of images from the same class share a common sparse structure, while the overlap between the signal supports of different classes is minimised. We then incorporate sparse priors and hierarchical priors into a unified framework that is capable of controlling the sparsity of the neuron activations in deep neural networks. Our proposed approach automatically selects the most useful low-level features and effectively combines them into more powerful and discriminative features for the image classification problem at hand.

We also explore priors on the relationships between multiple factors. When multiple independent factors exist in the image generation process and only some of them are of interest, we propose a novel multi-task adversarial network that learns a disentangled feature which is optimised with respect to the factor of interest while remaining agnostic to the distracting factors. When common factors exist across multiple tasks, leveraging them can not only make the learned feature representation more robust, but also enable the model to generalise from very few labelled samples. More specifically, we address the domain adaptation problem and propose a re-weighted adversarial adaptation network that reduces the feature distribution divergence and adapts the classifier from the source to the target domain.
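
As an illustration of the support-discrimination idea above, the following is a minimal numpy sketch, not the method from the dissertation: it codes the samples of one class over a fixed dictionary with an L2,1 (row-sparsity) penalty, so that all samples of the class are pushed toward one common sparse support. The function name, step size and penalty weight are illustrative assumptions.

    # Minimal sketch (not the thesis implementation): ISTA-style sparse
    # coding of one class with a row-sparsity (L2,1) penalty, so whole
    # dictionary atoms are switched on/off jointly across the class.
    import numpy as np

    def ista_class_coding(D, X, lam=0.1, n_iter=200):
        """Row-sparse coding of class samples X (d x n) over D (d x k).

        Minimises (1/2)||X - D A||_F^2 + lam * sum_j ||A[j, :]||_2.
        """
        k, n = D.shape[1], X.shape[1]
        A = np.zeros((k, n))
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = D.T @ (D @ A - X)         # gradient of the data-fit term
            B = A - grad / L
            # proximal step of the L2,1 norm: group-shrink each row of B
            row_norms = np.linalg.norm(B, axis=1, keepdims=True)
            scale = np.maximum(0.0, 1.0 - (lam / L) / np.maximum(row_norms, 1e-12))
            A = scale * B
        return A

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))
    D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
    X = rng.standard_normal((64, 20))        # 20 toy samples of one class
    A = ista_class_coding(D, X)
    support = np.flatnonzero(np.linalg.norm(A, axis=1) > 1e-6)
    print("shared support size:", support.size)

Because the penalty acts on entire rows of the code matrix, samples of one class end up reusing the same few atoms; discouraging overlap between the supports found for different classes is the discriminative part of the actual method.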
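The activation-sparsity idea of the second approach can be sketched, assuming a PyTorch setup, as an L1 penalty on hidden activations. This is a simple stand-in for the unified sparse/hierarchical framework, not its actual formulation; the architecture and the penalty weight 1e-3 are illustrative assumptions.

    # Minimal sketch: an L1 activity penalty drives most hidden units of a
    # small network to exactly zero per input, so only a sparse subset of
    # low-level features feeds the classifier.
    import torch
    import torch.nn as nn

    class SparseActivationMLP(nn.Module):
        def __init__(self, d_in=784, d_hidden=256, n_classes=10):
            super().__init__()
            self.feature = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
            self.classifier = nn.Linear(d_hidden, n_classes)

        def forward(self, x):
            h = self.feature(x)              # hidden activations (post-ReLU)
            return self.classifier(h), h

    model = SparseActivationMLP()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(32, 784)                 # toy batch
    y = torch.randint(0, 10, (32,))
    logits, h = model(x)
    # label loss plus a sparsity term on the activations (weight assumed)
    loss = nn.functional.cross_entropy(logits, y) + 1e-3 * h.abs().mean()
    loss.backward()
    opt.step()
    print("fraction of active units:", (h > 0).float().mean().item())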
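Both adversarial components, the multi-task disentangling network and the re-weighted adaptation network, rest on the same adversarial principle. The sketch below (assuming PyTorch, and omitting the thesis's re-weighting scheme) shows the generic version via a gradient-reversal layer: a domain discriminator separates source from target features, while reversed gradients train the encoder to make the two distributions indistinguishable.

    # Minimal sketch of adversarial feature adaptation with gradient
    # reversal; all layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            return x.clone()
        @staticmethod
        def backward(ctx, g):
            return -g                        # flip the gradient sign

    encoder = nn.Sequential(nn.Linear(100, 64), nn.ReLU())
    classifier = nn.Linear(64, 10)           # label predictor (source only)
    discriminator = nn.Linear(64, 2)         # source-vs-target domain head
    params = (list(encoder.parameters()) + list(classifier.parameters())
              + list(discriminator.parameters()))
    opt = torch.optim.Adam(params, lr=1e-3)

    xs, ys = torch.randn(32, 100), torch.randint(0, 10, (32,))  # source batch
    xt = torch.randn(32, 100)                                   # target batch
    fs, ft = encoder(xs), encoder(xt)
    cls_loss = nn.functional.cross_entropy(classifier(fs), ys)
    feats = torch.cat([fs, ft])
    dom = torch.cat([torch.zeros(32, dtype=torch.long),
                     torch.ones(32, dtype=torch.long)])
    # reversed gradients make the encoder maximise the discriminator's loss
    dom_loss = nn.functional.cross_entropy(
        discriminator(GradReverse.apply(feats)), dom)
    (cls_loss + dom_loss).backward()
    opt.step()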

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:763715
Date: January 2018
Creators: Liu, Yang
Contributors: Wassell, Ian
Publisher: University of Cambridge
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: https://www.repository.cam.ac.uk/handle/1810/285558
