In machine learning, a sub-field of computer science, a two-tower architecture model is a specialised type of neural network that encodes paired data from different modalities (such as text and images, sound and video, or proteomics and gene expression profiles) into a shared latent representation space. However, training these models with a specific contrastive loss function, the multimodal InfoNCE loss, often gives rise to a distinctive geometric phenomenon known as the modality gap: a clear geometric separation between the embeddings of the two modalities in the joint contrastive latent space. This thesis investigates the modality gap in multimodal machine learning, specifically in two-tower neural networks trained with the multimodal InfoNCE loss. We examine the adequacy of the current definition of the modality gap, the conditions under which the phenomenon manifests, and its impact on representation quality and downstream task performance.

To address these questions, we adopt a two-phase experimental strategy. Phase I involves a series of experiments, ranging from toy synthetic simulations to true multimodal machine learning on complex datasets, to explore and characterise the modality gap under varying conditions. Phase II focuses on modifying the modality gap and analysing representation quality, evaluating different loss functions and their impact on the gap. This methodical exploration allows us to systematically dissect the emergence and implications of the modality gap and to assess its impact on downstream tasks, measured with proxy metrics based on semantic clustering in the shared latent representation space and modality-specific linear probe evaluation.

Our findings reveal that the modality gap definition proposed by Liang et al. (2022) is insufficient. We demonstrate that similar modality gap magnitudes can coincide with varying linear separability between modality embeddings in the contrastive latent space and with varying embedding topologies, indicating the need for additional metrics to capture the true nature of the gap. Furthermore, our experiments show that the temperature hyperparameter of the multimodal InfoNCE loss plays a crucial role in the emergence of the modality gap, and that this effect varies across datasets, suggesting that individual dataset characteristics significantly influence how the gap manifests. A key finding is that modality gaps consistently emerge at small temperature values when the temperature is fixed, and almost invariably when the temperature is learned, regardless of its initial value. Additionally, we observe that the magnitude of the modality gap is influenced by distribution shift, with the gap increasing progressively from the training set to the validation set, then to the test set, and finally to more distributionally shifted datasets.

We find that the choice of contrastive learning method, temperature mode (fixed or learned), and temperature value is crucial in shaping the modality gap. However, reducing the gap does not consistently improve downstream task performance, suggesting that its role is more nuanced than previously understood and that the modality gap may be a geometric by-product of the learning method rather than a critical determinant of representation quality.
Our results motivate a reevaluation of the modality gap's significance in multimodal contrastive learning, emphasising the importance of dataset characteristics and contrastive learning methodology.
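As an illustrative aside (not taken from the thesis), the following minimal NumPy sketch shows the quantities at the heart of the abstract: a symmetric multimodal InfoNCE loss with a temperature hyperparameter, a centroid-distance modality-gap measure in the spirit of Liang et al. (2022), and one possible linear-separability probe. Function and variable names (multimodal_infonce, modality_gap, emb_a, emb_b, temperature) are assumptions of the sketch, not identifiers from the thesis code.

# Minimal sketch (assumed, not the thesis implementation) of a CLIP-style
# symmetric multimodal InfoNCE loss and a centroid-distance modality-gap
# measure in the spirit of Liang et al. (2022).
import numpy as np

def l2_normalise(x):
    # Project embeddings onto the unit hypersphere, as in CLIP-style models.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def multimodal_infonce(emb_a, emb_b, temperature=0.07):
    # emb_a, emb_b: (n, d) paired embeddings from the two towers.
    za, zb = l2_normalise(emb_a), l2_normalise(emb_b)
    logits = za @ zb.T / temperature      # (n, n) similarity matrix
    labels = np.arange(len(za))           # matching pairs lie on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        log_prob = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_prob[labels, labels].mean()   # negative log-prob of the diagonal

    # Symmetric loss: rows score modality A against B, columns score B against A.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

def modality_gap(emb_a, emb_b):
    # Gap magnitude as the Euclidean distance between the centroids of the
    # normalised embeddings of the two modalities.
    za, zb = l2_normalise(emb_a), l2_normalise(emb_b)
    return np.linalg.norm(za.mean(axis=0) - zb.mean(axis=0))

def modality_separability(emb_a, emb_b):
    # One possible linear-separability measure (an assumption of this sketch):
    # training accuracy of a logistic-regression probe that predicts which
    # modality an embedding came from.
    from sklearn.linear_model import LogisticRegression
    X = np.vstack([l2_normalise(emb_a), l2_normalise(emb_b)])
    y = np.array([0] * len(emb_a) + [1] * len(emb_b))
    return LogisticRegression(max_iter=1000).fit(X, y).score(X, y)

Two embedding sets with the same centroid distance can differ sharply in how well such a probe separates them, which is one way to see why the centroid-distance definition alone can be insufficient.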
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-517811 |
Date | January 2023 |
Creators | Al-Jaff, Mohammad |
Publisher | Uppsala universitet, Industriell teknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |
Relation | UPTEC X ; 23037 |