
Recurrent Spatial Attention for Facial Emotion Recognition

Automatic processing of emotion information through deep neural networks (DNN) can have great benefits (e.g., for human-machine interaction). Vice versa, machine learning can profit from concepts known from human information processing (e.g., visual attention). We employed a recurrent DNN incorporating a spatial attention mechanism for facial emotion recognition (FER) and compared the output of the network with results from human experiments. The attention mechanism enabled the network to select relevant face regions and thereby achieve state-of-the-art performance on a FER database containing images from realistic settings. When the model's perceptive capabilities were restricted, a visual search strategy emerged that shows some similarities with human saccadic behavior; however, the model then failed to form a useful scene representation.
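The architecture described in the abstract belongs to the family of glimpse-based recurrent attention models: at each step the network extracts a small patch from the image, updates a recurrent state, and predicts the next fixation location, producing a scanpath that can be compared with human eye movements. A minimal NumPy sketch of this loop is shown below; all shapes, names, and the untrained random weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_glimpse(image, center, size):
    """Crop a size x size patch around `center` (row, col), clipped to bounds."""
    h, w = image.shape
    half = size // 2
    r = int(np.clip(center[0], half, h - half - 1))
    c = int(np.clip(center[1], half, w - half - 1))
    return image[r - half:r + half, c - half:c + half]

class RecurrentAttention:
    """Toy glimpse network: encode patch -> RNN step -> next fixation + logits."""

    def __init__(self, glimpse_size=8, hidden=32, n_classes=7):
        self.g = glimpse_size
        d = glimpse_size * glimpse_size
        # Random, untrained weights -- for illustrating the data flow only.
        self.W_in = rng.normal(0, 0.1, (hidden, d))
        self.W_h = rng.normal(0, 0.1, (hidden, hidden))
        self.W_loc = rng.normal(0, 0.1, (2, hidden))
        self.W_out = rng.normal(0, 0.1, (n_classes, hidden))

    def forward(self, image, n_glimpses=4):
        h = np.zeros(self.W_h.shape[0])
        loc = np.array(image.shape) / 2.0          # start at the image centre
        fixations = []
        for _ in range(n_glimpses):
            patch = extract_glimpse(image, loc, self.g).ravel()
            h = np.tanh(self.W_in @ patch + self.W_h @ h)
            # Next fixation: a bounded offset from the current location.
            loc = np.clip(loc + 4.0 * np.tanh(self.W_loc @ h),
                          0, np.array(image.shape) - 1)
            fixations.append(loc.copy())
        return self.W_out @ h, fixations           # emotion logits, scanpath

model = RecurrentAttention()
image = rng.random((48, 48))                       # stand-in for a face crop
logits, scanpath = model.forward(image)
```

In a model of this kind, the recorded `scanpath` is what would be compared against human fixation sequences, and shrinking `glimpse_size` corresponds to restricting the model's perceptive capabilities as described above.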

Identifier oai:union.ndltd.org:DRESDEN/oai:qucosa:de:qucosa:72453
Date 15 October 2020
Creators Forch, Valentin, Vitay, Julien, Hamker, Fred H.
Publisher Technische Universität Chemnitz
Source Sets Hochschulschriftenserver (HSSS) der SLUB Dresden
Language English
Detected Language English
Type info:eu-repo/semantics/publishedVersion, doc-type:conferenceObject, info:eu-repo/semantics/conferenceObject, doc-type:Text
Rights info:eu-repo/semantics/openAccess
Relation urn:nbn:de:bsz:ch1-qucosa-228139, 0947-5125, qucosa:20770
