
Multi-Label Latent Spaces with Semi-Supervised Deep Generative Models

Expert labeling, tagging, and assessment are far more costly than collecting raw data. Generative modeling is a powerful tool for tackling this real-world problem, and this work shows how such models enable semi-supervised learning that performs well in label-deficient conditions.
The work in this dissertation is founded on visualizing generative models' latent spaces to gain a deeper understanding of the data, analyze failure modes, and propose solutions. Several novel ideas and approaches are presented to improve single-label classification. The main focus, however, is on extending semi-supervised deep generative models to the multi-label problem through novel mathematical and programming concepts and organization.
In all naive mixtures, using multiple labels is detrimental: each label's predictions become worse than those of models that use only a single label. Examining the latent spaces reveals that, in many cases, large regions of these models generate meaningless results. Enforcing a priori independence proves essential; only when it is applied can multi-label models outperform the best single-label models. Finally, a novel learning technique called open-book learning is described that surpasses the state-of-the-art classification performance of generative models on multi-labeled, semi-supervised data sets.
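As a rough illustration of what a priori independence over multiple labels can mean, the sketch below (Python with NumPy; not drawn from the dissertation itself, and all names and prior values are hypothetical) computes the log-prior of a binary multi-label vector under a factorized Bernoulli prior. A term of this form could replace a joint distribution over all label combinations in a semi-supervised generative model's objective.

```python
# A minimal, illustrative sketch (not the dissertation's actual formulation),
# assuming a K-dimensional binary multi-label vector y and known per-label
# prior activation probabilities pi. Under a priori independence, the label
# prior factorizes across labels, so log p(y) is a sum of K Bernoulli terms
# rather than a lookup in a joint table over all 2**K label combinations.
import numpy as np

def log_prior_independent(y, pi):
    """log p(y) = sum_k [ y_k * log(pi_k) + (1 - y_k) * log(1 - pi_k) ]."""
    y = np.asarray(y, dtype=float)
    pi = np.asarray(pi, dtype=float)
    return float(np.sum(y * np.log(pi) + (1.0 - y) * np.log1p(-pi)))

if __name__ == "__main__":
    pi = np.array([0.5, 0.1, 0.3])        # hypothetical per-label priors
    y = np.array([1, 0, 1])               # an example multi-label assignment
    print(log_prior_independent(y, pi))   # log(0.5) + log(0.9) + log(0.3)
```

The cost of the factorized term grows linearly in the number of labels, whereas a naive joint table over label combinations grows exponentially.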

Identifier: oai:union.ndltd.org:uno.edu/oai:scholarworks.uno.edu:td-3616
Date: 18 May 2018
Creators: Rastgoufard, Rastin
Publisher: ScholarWorks@UNO
Source Sets: University of New Orleans
Detected Language: English
Type: text
Format: application/pdf
Source: University of New Orleans Theses and Dissertations
