1. Structured Disentangling Networks for Learning Deformation Invariant Latent Spaces (January 2019)
Abstract: Disentangling latent spaces is an important research direction in the interpretability of unsupervised machine learning. Several recent deep learning approaches are effective at producing disentangled representations. However, in the unsupervised setting there is no way to pre-specify which part of the latent space captures specific factors of variation. While this is generally a hard problem, because analytical expressions for these variations typically do not exist, certain factors, such as geometric transforms, can be expressed analytically. Furthermore, in existing frameworks the disentangled values themselves are not interpretable. The focus of this work is to disentangle these geometric factors of variation (which are nuisance factors for many applications) from the semantic content of the signal in an interpretable manner, which in turn makes the features more discriminative. Experiments demonstrate the modularity of the approach when combined with other disentangling strategies, as well as its efficacy on multiple one-dimensional (1D) and two-dimensional (2D) datasets.

Master's Thesis, Electrical Engineering, 2019
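The abstract's core idea, splitting the latent space into analytically expressible geometric factors and a free-form semantic code, can be illustrated with a minimal sketch. The module below is hypothetical (the thesis code is not shown here; the class and parameter names are invented for illustration): a localization network predicts interpretable affine-transform parameters, a spatial-transformer warp undoes the geometric deformation, and a content encoder operates only on the canonicalized input, so the two latent parts are disentangled by construction.

```python
# Hypothetical sketch, not the thesis implementation: split the latent space
# into (a) interpretable affine-transform parameters and (b) a content code
# that only sees the geometrically normalized input.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeometricDisentangler(nn.Module):
    def __init__(self, content_dim=16):
        super().__init__()
        # Localization net: predicts 6 affine parameters (rotation, scale,
        # shear, translation) -- the analytically interpretable latents.
        self.loc = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
            nn.Linear(64, 6),
        )
        # Initialize to the identity transform so training starts stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))
        # Content encoder: sees only the canonicalized (de-warped) image.
        self.content = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
            nn.Linear(64, content_dim),
        )

    def forward(self, x):                      # x: (N, 1, 28, 28)
        theta = self.loc(x).view(-1, 2, 3)     # geometric (nuisance) latents
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        canonical = F.grid_sample(x, grid, align_corners=False)
        z_content = self.content(canonical)    # deformation-invariant code
        return theta, z_content, canonical

# Usage: theta holds the geometric factors, z_content the semantic content.
model = GeometricDisentangler()
theta, z, canon = model(torch.randn(8, 1, 28, 28))
```

Because the geometric branch is a closed-form affine warp rather than a learned black box, its latents carry direct physical meaning (angles, scales, shifts), which is one way the interpretability claim in the abstract can be realized.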
2. Optimizing Threads of Computation in Constraint Logic Programs. Pippin, William E., Jr. (29 January 2003)
No description available.