Understanding Generalization, Credit Assignment and the Regulation of Learning Rate in Human Motor Learning
Gonzalez Castro, Luis Nicolas, January 2011
Understanding the neural processes underlying motor learning in humans is important both to facilitate the acquisition of new motor skills and to aid the relearning of skills lost after neurological injury. Although it is known that the learning of a new movement is guided by the error feedback received after each repeated attempt to produce the movement, how the central nervous system (CNS) processes individual errors, and how it modulates its learning rate in response to the history of errors experienced, remain to be elucidated. To address these issues we studied the generalization of learning and of learning decay: the transfer of what has been learned, or unlearned, in a particular movement condition to new movement conditions. Generalization offers a window into the process of error credit assignment during motor learning, since it allows us to measure which actions benefit the most, in terms of learning, after an error is experienced. We found that the distributions that describe generalization after learning are unimodal and biased towards the motion directions experienced during training, a finding that suggests that the credit for the learning experienced after a particular trial is assigned to the actual motion (motion-referenced learning) and not to the planned motion (plan-referenced learning), as had previously been assumed in the motor learning literature. In addition, after training the same action along multiple directions, we found that the pattern of learning decay has two distinct components: one that is time-dependent and affects all trained directions, and one that is trial-dependent and affects mostly the direction in which decay was induced, generalizing narrowly with a unimodal pattern similar to the one observed for learning generalization. Finally, we studied the effect that the consistency of the error perturbations in the training environment has on the learning rate adopted by the CNS.
We found that learning rate increases when the perturbations experienced in training are consistent, and decreases when these perturbations are inconsistent. Besides increasing our understanding of the mechanisms underlying motor learning, the findings described in the present dissertation will enable the principled design of skill training and rehabilitation protocols that accelerate learning. / Engineering and Applied Sciences
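The consistency-dependent regulation of learning rate described above can be illustrated with a toy trial-by-trial error-driven learner. The specific update rule, gain, and learning-rate bounds below are illustrative assumptions for the sketch, not the dissertation's actual model.

```python
import numpy as np

def simulate_adaptation(perturbations, eta0=0.1, gain=0.05):
    """Toy error-driven learner whose learning rate is raised when
    successive errors are consistent (same sign) and lowered when
    they are inconsistent. All constants are illustrative."""
    x = 0.0                  # internal estimate of the perturbation
    eta = eta0               # current learning rate
    prev_error = None
    errors, rates = [], []
    for p in perturbations:
        error = p - x        # error experienced on this trial
        x += eta * error     # error-driven correction
        if prev_error is not None:
            if error * prev_error > 0:            # consistent errors
                eta = min(1.0, eta * (1 + gain))  # raise learning rate
            else:                                 # inconsistent errors
                eta = max(0.01, eta * (1 - gain)) # lower learning rate
        prev_error = error
        errors.append(error)
        rates.append(eta)
    return np.array(errors), np.array(rates)

# A consistent perturbation sequence drives the learning rate up,
# while an alternating (inconsistent) one drives it down.
_, rates_consistent = simulate_adaptation([1.0] * 50)
_, rates_inconsistent = simulate_adaptation([1.0, -1.0] * 25)
```

Under this toy rule, the learner adapts quickly in stable training environments and conservatively in unpredictable ones, mirroring the qualitative finding reported above.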
Look-ahead meta-learning for continual learning
Gupta, Gunshi, 07 1900
The continual learning problem involves training models with limited capacity to perform
well on a set of an unknown number of sequentially arriving tasks. This setup can often see a learning system undergo catastrophic forgetting, when learning a newly seen task causes interference with the learning progress of old tasks. While recent work has shown that meta-learning has the potential to reduce interference between old and new tasks, the current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online continual learning, aided by a small episodic memory. This is achieved by realising the equivalence of a multi-step MAML objective to a time-aware continual learning objective adopted in prior work. The equivalence leads to the formulation of an intuitive algorithm that we call Continual-MAML (C-MAML), employing continual meta-learning to optimise a model to perform well across a series of changing data distributions. By additionally incorporating the modulation of per-parameter learning rates in La-MAML, our approach provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. This modulation also has connections to prior work on meta-descent, which we identify as an important direction of research to develop better optimizers for continual learning. In experiments conducted on real-world visual classification benchmarks, La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning. We also demonstrate that it is robust, and more scalable than many recent state-of-the-art approaches.
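The key ingredient described above, per-parameter learning rates that are themselves meta-learned alongside the weights, can be sketched in a minimal first-order form. The quadratic loss, the first-order approximation of the derivative of the adapted weights with respect to the learning rates, and all function names here are illustrative assumptions, not the actual La-MAML implementation.

```python
import numpy as np

def grad(w, x, y):
    """Gradient of mean squared error for a linear model y ~ x @ w."""
    return 2 * x.T @ (x @ w - y) / len(y)

def la_maml_step(w, alpha, inner_batches, meta_batch, eta_meta=0.01):
    """One first-order sketch of a La-MAML-style update: take a few inner
    SGD steps on incoming data, then adjust both the weights and the
    per-parameter learning rates `alpha` so the adapted weights do well
    on a meta-batch drawn from old and new data."""
    w_fast = w.copy()
    inner_grads = []
    for x, y in inner_batches:
        g = grad(w_fast, x, y)
        inner_grads.append(g)
        w_fast = w_fast - alpha * g        # per-parameter learning rates
    g_meta = grad(w_fast, *meta_batch)     # gradient on old + new data
    # First-order approximation: d(w_fast)/d(alpha) ~ -(sum of inner grads)
    alpha = alpha - eta_meta * g_meta * (-sum(inner_grads))
    alpha = np.clip(alpha, 0.0, None)      # keep learning rates non-negative
    # Outer update applies the meta-learned rates to the meta-gradient
    w = w - alpha * g_meta
    return w, alpha

# Toy usage on a small deterministic linear-regression stream
x = np.vstack([np.eye(3), np.eye(3)])
w_true = np.array([1.0, -2.0, 0.5])
y = x @ w_true
w, alpha = np.zeros(3), np.full(3, 0.05)
w, alpha = la_maml_step(w, alpha, inner_batches=[(x, y), (x, y)],
                        meta_batch=(x, y))
```

In this sketch, a parameter whose inner-loop gradients agree with the meta-gradient has its learning rate increased, while one whose update would hurt performance on older data has its rate driven towards zero, which is the flexible per-parameter alternative to uniform prior-based penalties described above.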