1.
Towards biologically plausible mechanisms of predictive learning. Chapman IV, G. William. 26 March 2024.
Animals perform a myriad of behaviors, such as object tracking and spatial navigation, largely in the absence of explicit target signals. Without such targets, neural circuits must implement a different objective function. One leading theory of self-supervised learning is predictive learning, in which a system predicts its feedforward signals over time and internal representations emerge to capture longer-term structural information. While such theories are inspired by neural properties, they often lack direct links to low-level neural mechanisms.
In the first study, I present a model of the formation of internal representations. I introduce the canonical microcircuit of cortical structures, including its general connectivity and the distinctive physiological properties of neural subpopulations. I then introduce a learning rule based on contrasting the feedforward potentials of pyramidal neurons with their feedback-controlled burst rates. Using these two signals, the learning rule implements a feedback-gated minimization of temporal error. Combined with a set of feedforward-only units and organized hierarchically, the model learns to track the dynamics of external stimuli with high accuracy, and successive regions are shown to code temporal derivatives of their feedforward inputs.
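To make the idea concrete, the following is a minimal sketch of a feedback-gated plasticity rule of this general kind, assuming a rate-based pyramidal unit whose somatic (feedforward) drive is compared against an apical, feedback-controlled burst rate. The variable names, update form, and toy training loop are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 20, 5
W = 0.1 * rng.standard_normal((n_out, n_in))   # feedforward weights onto pyramidal somata

def step(x_ff, burst_rate, lr=1e-2):
    """One plasticity step of a hypothetical feedback-gated rule.

    x_ff       : presynaptic feedforward activity, shape (n_in,)
    burst_rate : apical, feedback-controlled burst rate per unit, shape (n_out,)
    The update moves the somatic (feedforward) potential toward the burst rate,
    so feedback bursting effectively gates an error signal on the feedforward weights.
    """
    global W
    v_soma = W @ x_ff                      # feedforward (somatic) potential
    error = burst_rate - v_soma            # mismatch signalled by bursting
    W += lr * np.outer(error, x_ff)        # Hebbian-style, error-gated weight change
    return v_soma, error

# toy usage: feedback bursts carry the "target" the somatic drive should come to predict
x = rng.random(n_in)
target_bursts = rng.random(n_out)
for _ in range(200):
    v, e = step(x, target_bursts)
print(np.abs(e).max())   # the error shrinks as somatic drive matches the burst rate
```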
The second study presents an electrophysiological experiment that revealed a novel functional cell type in the retrosplenial cortex of behaving Long-Evans rats. Through rigorous statistical analysis, we show that these neurons carry an egocentric representation of boundary locations. Combined with their position in the cortical hierarchy, this suggests that retrosplenial neurons provide a mechanism for translating self-centered sensory information into the map-like representations present in subcortical structures.
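To illustrate the kind of tuning being characterized, below is a minimal sketch of an egocentric boundary rate map: spikes are binned by the distance and self-centered bearing of the nearest wall, then normalized by occupancy. The binning choices, square-arena assumption, and variable names are illustrative and are not the statistical procedure used in the study.

```python
import numpy as np

def egocentric_boundary_ratemap(pos, heading, spike_counts, box_size=1.0,
                                n_dist=10, n_bearing=24, dt=0.02):
    """Occupancy-normalized firing rate binned by egocentric wall distance and bearing.

    pos          : (T, 2) animal position in a square box [0, box_size]^2
    heading      : (T,) allocentric head direction in radians
    spike_counts : (T,) spikes per sample for one neuron
    dt           : sample interval in seconds
    """
    # distance and allocentric bearing to the nearest of the four walls
    d_walls = np.stack([pos[:, 0], box_size - pos[:, 0],
                        pos[:, 1], box_size - pos[:, 1]], axis=1)
    wall_bearings = np.array([np.pi, 0.0, -np.pi / 2, np.pi / 2])  # direction toward each wall
    nearest = d_walls.argmin(axis=1)
    dist = d_walls[np.arange(len(pos)), nearest]
    # egocentric bearing = allocentric bearing of the wall minus the animal's heading
    ego = (wall_bearings[nearest] - heading + np.pi) % (2 * np.pi) - np.pi

    d_edges = np.linspace(0, box_size / 2, n_dist + 1)
    b_edges = np.linspace(-np.pi, np.pi, n_bearing + 1)
    occ, _, _ = np.histogram2d(dist, ego, bins=[d_edges, b_edges])
    spk, _, _ = np.histogram2d(dist, ego, bins=[d_edges, b_edges],
                               weights=spike_counts)
    with np.errstate(invalid="ignore", divide="ignore"):
        return spk / (occ * dt)            # Hz; NaN where a bin was never visited
```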
In the final study, I integrate the basic modular architecture of the first study with the specific afferent stimuli and macroscale connectivity patterns involved in spatial navigation. I simulate an agent in a simple virtual environment and compare the learned representations to tuning curves from experiments such as the second study. I find the expected development of neural responses corresponding to egocentric sensory representations (retrosplenial cortex), self-oriented allocentric coding (postrhinal cortex), and allocentric spatial representations (hippocampus).
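One simple way to make such a comparison is to compute occupancy-normalized spatial tuning curves for model units, analogous to the rate maps used for recorded neurons. The sketch below assumes a square virtual arena and arbitrary unit activations; it is an illustration of the comparison step, not the thesis code.

```python
import numpy as np

def spatial_ratemap(pos, activity, box_size=1.0, n_bins=20):
    """Occupancy-normalized 2D tuning curve for one simulated unit.

    pos      : (T, 2) agent position in the virtual arena
    activity : (T,) activation of a model unit at each timestep
    Bins the arena, averages activity per bin, and leaves unvisited bins NaN,
    so the map can be compared directly with experimental place or boundary fields.
    """
    edges = np.linspace(0, box_size, n_bins + 1)
    occ, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges])
    act, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=[edges, edges],
                               weights=activity)
    with np.errstate(invalid="ignore", divide="ignore"):
        return act / occ
```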
Together, these modeling results show how self-gated and guided learning in pyramidal ensembles can form useful and stable internal representations depending on the task at hand.
2.
The emergence of cognitive patterns in learning: Implementation of an ecodynamic approach. Castillo Guevara, Ramon Daniel. 17 October 2014.
No description available.
3.
Apprentissage autosupervisé de modèles prédictifs de segmentation à partir de vidéos / Self-supervised learning of predictive segmentation models from video. Luc, Pauline. 25 June 2019.
Predictive models of the environment hold promise for allowing the transfer of recent reinforcement learning successes to many real-world contexts, by decreasing the number of interactions needed with the real world. Video prediction has been studied in recent years as a particular case of such predictive models, with broad applications in robotics and navigation systems. While RGB frames are easy to acquire and hold a lot of information, they are extremely challenging to predict, and cannot be directly interpreted by downstream applications. Here we introduce the novel tasks of predicting semantic and instance segmentation of future frames. The abstract feature spaces we consider are better suited for recursive prediction and allow us to develop models which convincingly predict segmentations up to half a second into the future. Predictions are more easily interpretable by downstream algorithms and remain rich, spatially detailed and easy to obtain, relying on state-of-the-art segmentation methods. We first focus on the task of semantic segmentation, for which we propose a discriminative approach based on adversarial training. Then, we introduce the novel task of predicting future semantic segmentation, and develop an autoregressive convolutional neural network to address it. Finally, we extend our method to the more challenging problem of predicting future instance segmentation, which additionally segments out individual objects. To deal with a varying number of output labels per image, we develop a predictive model in the space of high-level convolutional image features of the Mask R-CNN instance segmentation model. We are able to produce visually pleasing segmentations at a high resolution for complex scenes involving a large number of instances, and with convincing accuracy up to half a second ahead.
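The core idea of predicting in a convolutional feature space can be sketched as follows: a small convolutional network maps the last k feature maps to the next one and is applied autoregressively, feeding its own outputs back in. The channel count, layer sizes, and random tensors standing in for Mask R-CNN activations are assumptions for illustration, not the model from the thesis.

```python
import torch
import torch.nn as nn

class FeaturePredictor(nn.Module):
    """Hypothetical convolutional predictor of the next feature map from the last k.

    The real models operate on Mask R-CNN / FPN features; here `c` is just an
    assumed channel count and the architecture is kept deliberately small.
    """
    def __init__(self, c=256, k=4):
        super().__init__()
        self.k = k
        self.net = nn.Sequential(
            nn.Conv2d(k * c, 512, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(512, 512, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(512, c, kernel_size=3, padding=1),
        )

    def forward(self, past):                 # past: list of k tensors, each (B, c, H, W)
        return self.net(torch.cat(past, dim=1))

    @torch.no_grad()
    def roll_out(self, past, n_steps):
        """Autoregressive prediction: each output is fed back in as the newest input."""
        past = list(past)
        preds = []
        for _ in range(n_steps):
            nxt = self.forward(past[-self.k:])
            preds.append(nxt)
            past.append(nxt)
        return preds

# toy usage with random tensors standing in for high-level segmentation features
model = FeaturePredictor(c=32, k=4)
past = [torch.randn(1, 32, 16, 16) for _ in range(4)]
future = model.roll_out(past, n_steps=3)     # several frames ahead, depending on frame rate
print(future[-1].shape)                      # torch.Size([1, 32, 16, 16])
```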