
Structured Disentangling Networks for Learning Deformation Invariant Latent Spaces

abstract: Disentangling latent spaces is an important research direction for the interpretability of unsupervised machine learning. Several recent works using deep learning are effective at producing disentangled representations. However, in the unsupervised setting, there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem because analytical expressions do not exist for most such variations, certain factors, like geometric transforms, can be expressed analytically. Furthermore, in existing frameworks, the disentangled values are not interpretable. The focus of this work is to disentangle these geometric factors of variation (which turn out to be nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments are designed to show the modularity of the approach with other disentangling strategies, as well as on multiple one-dimensional (1D) and two-dimensional (2D) datasets, clearly indicating the efficacy of the proposed approach.

Dissertation/Thesis: Masters Thesis, Electrical Engineering, 2019
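The abstract's central idea — that some geometric factors admit closed-form expressions and can therefore be split off from content analytically — can be illustrated with a toy example. This is not the thesis's architecture; it is a minimal sketch using a circular shift of a 1D signal as the geometric factor, estimated from the phase of the first Fourier coefficient:

```python
import numpy as np

def disentangle_shift(y):
    """Split a 1D signal into (shift, content).

    The circular shift is a geometric factor with a closed-form
    (Fourier-phase) expression; 'content' is the shift-invariant
    remainder, so shifted copies of a signal share the same content.
    """
    n = len(y)
    Y = np.fft.fft(y)
    # Roll by k so the phase of the first Fourier coefficient becomes
    # zero -- a canonical pose shared by all shifted copies of y.
    k = int(round(np.angle(Y[1]) * n / (2 * np.pi)))
    content = np.roll(y, k)
    shift = (-k) % n  # y == np.roll(content, shift)
    return shift, content
```

Two shifted copies of the same signal map to identical content codes, and the recovered shifts differ by exactly the relative shift between them — the sense in which the disentangled value is interpretable.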

Identifier: oai:union.ndltd.org:asu.edu/item:54893
Date: January 2019
Contributors: Koneripalli, Kaushik (Author), Turaga, Pavan (Advisor), Papandreou-Suppappola, Antonia (Committee member), Jayasuriya, Suren (Committee member), Arizona State University (Publisher)
Source Sets: Arizona State University
Language: English
Detected Language: English
Type: Masters Thesis
Format: 53 pages
Rights: http://rightsstatements.org/vocab/InC/1.0/
