
The Role of Task and Environment in Biologically Inspired Artificial Intelligence: Learning as an Active, Sensorimotor Process

The fields of biologically inspired artificial intelligence, neuroscience, and psychology have influenced each other in exciting ways over the past decades. Especially recently, with their increased popularity and success, artificial neural networks (ANNs) have enjoyed frequent use as models of brain function. However, there are still many disparities between the implementation, algorithms, and learning environment used in deep learning and those employed by the brain, which is reflected in their differing abilities. I first briefly introduce ANNs and survey the differences and similarities between them and the brain. I then make a case for designing the learning environment of ANNs to be more similar to that in which brains learn, namely by allowing them to actively interact with the world and by decreasing the amount of external supervision. To implement this sensorimotor learning in an artificial agent, I use deep reinforcement learning, which I also briefly introduce and compare to learning in the brain.

In the research presented in this dissertation, I focus on testing the hypothesis that the learning environment matters and that learning in an embodied way leads to acquiring different representations of the world. We first tested this on human subjects, comparing spatial knowledge acquisition in virtual reality to learning from an interactive map. The corresponding two publications are complemented by a methods paper describing eye tracking in virtual reality as a helpful tool in this type of research. After demonstrating that subjects do indeed acquire different spatial knowledge in the two conditions, we tested whether this finding transfers to artificial agents. Two further publications show that an ANN learning through interaction acquires significantly different representations of the sensory input than ANNs that learn without interaction. We also demonstrate that, through end-to-end sensorimotor learning, an ANN can learn visually guided motor control and navigation behavior in a complex 3D maze environment without any external supervision, using curiosity as an intrinsic reward signal. The learned representations are sparse, encode meaningful, action-oriented information about the environment, and support few-shot object recognition despite the agent never having seen labeled data beforehand. Overall, I make a case for increasing the realism of the computational tasks ANNs need to solve (largely self-supervised, sensorimotor learning) to improve some of their shortcomings and to make them better models of the brain.
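The curiosity signal mentioned above can be illustrated with a minimal sketch: intrinsic reward is the prediction error of a forward model that tries to predict the next sensory state from the current state and action, so the agent is rewarded for visiting transitions it cannot yet predict. The linear `forward_model` and its weight matrix `W` below are illustrative assumptions standing in for a learned deep network; this is not the dissertation's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def curiosity_reward(forward_model, state, action, next_state):
    """Intrinsic reward = squared prediction error of the forward model.

    High error means the transition is novel or poorly understood,
    so the agent is intrinsically rewarded for exploring it.
    """
    predicted = forward_model(state, action)
    return float(np.sum((predicted - next_state) ** 2))

# Toy stand-in for a learned forward model: a fixed linear map from
# the concatenated [state; action] vector to a predicted next state.
STATE_DIM, ACTION_DIM = 4, 2
W = rng.normal(size=(STATE_DIM, STATE_DIM + ACTION_DIM))

def forward_model(state, action):
    return W @ np.concatenate([state, action])

# One hypothetical transition observed by the agent.
state = rng.normal(size=STATE_DIM)
action = rng.normal(size=ACTION_DIM)
next_state = rng.normal(size=STATE_DIM)

r_int = curiosity_reward(forward_model, state, action, next_state)
```

In a full agent, `r_int` would be added to (or replace) the extrinsic reward in the reinforcement learning update, and the forward model itself would be trained on observed transitions, so the curiosity reward shrinks for already-familiar parts of the environment.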

Identifier: oai:union.ndltd.org:uni-osnabrueck.de/oai:osnadocs.ub.uni-osnabrueck.de:ds-202204226768
Date: 22 April 2022
Creators: Clay, Viviane
Contributors: Prof. Dr. Gordon Pipa, Prof. Dr. Kai-Uwe Kühnberger, Prof. Dr. Peter König, Assistant Prof. Dr. Tim Kietzmann
Source Sets: Universität Osnabrück
Language: English
Detected Language: English
Type: doc-type:doctoralThesis
Format: application/zip, application/pdf
Rights: Attribution 3.0 Germany, http://creativecommons.org/licenses/by/3.0/de/
