Deep learning networks in the literature have traditionally used a single input modality (or data stream). Integrating multiple modalities into a deep learning network so that features extracted from each can be correlated has been a major challenge. Traditional methods treated each modality separately and relied on custom code to combine the extracted features. Existing solutions for small numbers of modalities (three or fewer) show that multiple architectures are possible for modality integration. As the number of modalities grows, the "curse of dimensionality" degrades system performance. The research showed that current methods for larger-scale integration require separate, custom-built modules joined by an additional integration layer outside the deep learning network. These solutions neither scale well nor generalize well. This research report studied architectures that use multiple modalities and the creation of a scalable and efficient architecture.
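The "treat each modality separately, then combine extracted features" pattern described above can be illustrated with a minimal late-fusion sketch. This is an illustrative assumption, not the architecture studied in the dissertation; the encoder widths, modality dimensions, and class count are hypothetical.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Minimal late-fusion sketch: one encoder per modality, features
    concatenated before a shared classifier head (hypothetical example)."""

    def __init__(self, modality_dims, feature_dim=32, num_classes=10):
        super().__init__()
        # One independent encoder per modality -- the "treat each modality
        # separately" pattern described in the abstract.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, feature_dim), nn.ReLU())
            for d in modality_dims
        )
        # The fused feature dimension grows linearly with the number of
        # modalities, which is where scaling problems begin to appear.
        self.classifier = nn.Linear(feature_dim * len(modality_dims), num_classes)

    def forward(self, inputs):
        features = [enc(x) for enc, x in zip(self.encoders, inputs)]
        return self.classifier(torch.cat(features, dim=1))

# Example: three modalities with different (hypothetical) input sizes.
net = LateFusionNet(modality_dims=[64, 128, 16])
batch = [torch.randn(8, 64), torch.randn(8, 128), torch.randn(8, 16)]
print(net(batch).shape)  # torch.Size([8, 10])
```

With more modalities, the concatenated feature vector widens and each added stream requires its own hand-built encoder, which is the scaling and generalization limitation the abstract points to.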
Identifier | oai:union.ndltd.org:nova.edu/oai:nsuworks.nova.edu:gscis_etd-2003 |
Date | 01 January 2017 |
Creators | McNeil, Patrick |
Publisher | NSUWorks |
Source Sets | Nova Southeastern University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | CEC Theses and Dissertations |