This work investigates the robustness of representations learned by self-supervised learning approaches, focusing on distribution shifts in computer vision. Self-supervised learning approaches based on joint embedding architectures and methods have advanced label-free representation learning and efficient knowledge transfer, reducing the need for human annotation. However, empirical analysis is largely limited to downstream-task performance on in-distribution natural scenes. This constrained evaluation does not reflect the detailed comparative performance of learning methods and fails to highlight their limitations, hindering systematic improvement. This work quantitatively and qualitatively evaluates the robustness of self-supervised learning methods on the distribution-shifted, corrupted dataset ImageNet-C. For comprehensiveness, several self-supervised learning approaches are considered, including contrastive learning, knowledge distillation, mutual information maximization, and clustering. A detailed comparative analysis examines how robustness is retained across varying severities of the induced corruptions and noise present in the data. This work provides insights into appropriate method selection under different conditions and highlights limitations to guide future method development.
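A minimal sketch of the kind of robustness evaluation the abstract describes: measure top-1 accuracy of a frozen model as corruption severity increases. This is an illustrative stand-in, not the thesis's actual pipeline; the `predict` function, Gaussian-noise corruption, and severity-to-noise mapping are all assumptions (ImageNet-C itself uses fifteen calibrated corruption types at five severities).

```python
import numpy as np

def corrupt(images, severity, rng):
    """Gaussian-noise stand-in for an ImageNet-C corruption.

    severity 0 means clean; real ImageNet-C severities map to
    calibrated parameters per corruption type.
    """
    sigma = 0.04 * severity  # assumed linear severity-to-noise mapping
    noisy = images + rng.normal(0.0, sigma, images.shape)
    return np.clip(noisy, 0.0, 1.0)

def accuracy(predict, images, labels):
    """Top-1 accuracy of a prediction function on a labeled batch."""
    return float(np.mean(predict(images) == labels))

def robustness_curve(predict, images, labels,
                     severities=(0, 1, 2, 3, 4, 5), seed=0):
    """Accuracy at each corruption severity (0 = clean images)."""
    rng = np.random.default_rng(seed)
    return {s: accuracy(predict, corrupt(images, s, rng), labels)
            for s in severities}
```

In a real evaluation, `predict` would wrap a self-supervised encoder plus linear probe, and the curve would be computed per corruption type and averaged, as in the ImageNet-C protocol.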
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-97950 |
Date | January 2023 |
Creators | Rodahl Holmgren, Johan |
Publisher | Luleå tekniska universitet, Institutionen för system- och rymdteknik |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |