
More efficient training using equivariant neural networks

Convolutional neural networks are equivariant to translations; they are not, however, equivariant to other symmetries, so the class output may vary with the input's orientation. One mitigation is to augment the training data, at the cost of increased redundancy in the model. Another is to build an equivariant neural network, thereby extending equivariance to a larger symmetry group. In this study, two convolutional neural networks and their respective equivariant counterparts, built for the symmetry groups D4 and C8, are constructed to explore the impact on performance of removing and adding batch normalisation and data augmentation. The results suggest that data augmentation is irrelevant to an equivariant model and that equivariance to more symmetries can slightly improve accuracy. The convolutional neural networks rely heavily on batch normalisation, whereas the equivariant models achieve high accuracy without it, although lower than with batch normalisation present.
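To illustrate the idea behind building symmetry into the network rather than into the data, here is a minimal NumPy sketch (not code from the thesis; the function names are hypothetical) of a rotation-aware feature for the cyclic group C4: correlating the input with all four 90-degree rotations of a filter and pooling over them. Rotating the input merely permutes and rotates the four response maps, so the pooled value is unchanged — no rotated copies of the training data are needed.

```python
import numpy as np

def correlate2d_valid(image, kernel):
    """Plain 'valid'-mode cross-correlation written with NumPy loops."""
    H, W = image.shape
    h, w = kernel.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return out

def c4_invariant_feature(image, kernel):
    """Global max response over the four 90-degree rotations of the kernel.

    Rotating the image by 90 degrees rotates each response map and
    cyclically permutes the four of them, so the global maximum over
    all maps is a C4-invariant descriptor.
    """
    responses = [correlate2d_valid(image, np.rot90(kernel, k)) for k in range(4)]
    return max(r.max() for r in responses)

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))
ker = rng.standard_normal((3, 3))

f0 = c4_invariant_feature(img, ker)
f1 = c4_invariant_feature(np.rot90(img), ker)
print(abs(f0 - f1) < 1e-10)  # True: the feature is unchanged by a 90-degree rotation
```

A full group-equivariant network, as studied in the thesis, keeps all rotated response maps as separate channels through the layers instead of pooling immediately, which preserves equivariance rather than just invariance; the pooling here is only the simplest way to make the symmetry visible.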

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:uu-507213
Date January 2023
Creators Bylander, Karl
Publisher Uppsala universitet, Avdelningen Vi3
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess
Relation UPTEC X ; 23004
