  • About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations, provided by the Networked Digital Library of Theses and Dissertations (NDLTD). Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1. A Crucial Role of the Frontal Operculum in Task-Set Dependent Visuomotor Performance Monitoring

Quirmbach, Felix, Limanowski, Jakub 04 June 2024 (has links)
For adaptive goal-directed action, the brain needs to monitor action performance and detect errors. The corresponding information may be conveyed via different sensory modalities; for instance, visual and proprioceptive body position cues may inform about current manual action performance. Moreover, contextual factors such as the current task set may determine the relative importance of each sensory modality for action guidance. Here, we analyzed human behavioral, functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG) data from two virtual reality-based hand–target phase-matching studies to identify the neuronal correlates of performance monitoring and error processing under instructed visual or proprioceptive task sets. Our main result was a general, modality-independent response of the bilateral frontal operculum (FO) to poor phase-matching accuracy, evident as increased BOLD signal and increased source-localized gamma power. Furthermore, functional connectivity of the bilateral FO with the right posterior parietal cortex (PPC) increased under a visual versus proprioceptive task set. These findings suggest that the bilateral FO generally monitors manual action performance and, moreover, that when visual action feedback is used to guide action, the FO may signal an increased need for control to visuomotor regions in the right PPC following errors.
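The phase-matching accuracy at the heart of this paradigm can be made concrete with a small computation: the hand and the target each trace an oscillatory phase, and performance is a circular distance between the two phase trajectories. Below is a minimal Python sketch of such a metric; the function name, the mean-absolute-error scoring, and the toy trial data are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

def phase_matching_error(hand_phase, target_phase):
    """Mean absolute circular distance (radians) between two phase
    time series; lower values mean better hand-target matching."""
    # Wrap the raw phase difference into [-pi, pi] via the complex plane.
    diff = np.angle(np.exp(1j * (hand_phase - target_phase)))
    return np.mean(np.abs(diff))

# Hypothetical trial: the hand lags a 0.5 Hz target by a noisy quarter cycle.
t = np.linspace(0.0, 10.0, 1000)
target_phase = 2 * np.pi * 0.5 * t
hand_phase = target_phase - np.pi / 2 + np.random.normal(0.0, 0.2, t.size)
print(f"mean phase error: {phase_matching_error(hand_phase, target_phase):.3f} rad")
```

On this toy trial the error sits near pi/2, i.e. poor matching; the FO response reported above scaled with this kind of accuracy signal, however it was actually scored in the studies.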
2. Robotic self-exploration and acquisition of sensorimotor skills

Berthold, Oswald 26 June 2020 (has links)
The interaction of machines with their environment should be reliable, safe, and ecologically adequate. To ensure this over the long term in complex scenarios, a theory of adaptive behavior is needed. In developmental robotics and embodied artificial intelligence, behavior is regarded as a phenomenon that emerges from the ongoing dynamic interaction between agent, body, and environment. The thesis investigates robots that are able to learn simple motions rapidly and on their own, using sensorimotor information. The long-term goal is to reuse acquired skills when learning other motions in the future, and thereby grow a complex repertoire of interactions with the world that is fully grounded in, and continually adapted to, sensorimotor experience through developmental processes. Using methods from machine learning, neuroscience, statistics, and physics, the question is decomposed into the components representation, exploration, and learning. A framework is provided for the systematic variation and evaluation of models. The proposed framework treats the procedural generation of hypotheses as dataflow graphs over a fixed set of functional building blocks, allowing models to be searched for by seamless evaluation across simulated and physical systems. Additional contributions of the thesis concern the agent's causal footprint in sensorimotor time: a probabilistic graphical model is provided, along with an information-theoretic learning algorithm, to discover networks of information flow in sensorimotor data. Finally, a generic developmental model based on real-time prediction learning is presented and discussed in three algorithmic variations.
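The abstract's central ingredient, real-time prediction learning, can be sketched compactly: the agent maintains a forward model that predicts the next sensor reading from the current sensors and motor command, and updates it online from the prediction error. The linear model, delta-rule update, and toy one-dimensional plant below are our simplifying assumptions, not the thesis's actual architecture.

```python
import numpy as np

class OnlinePredictor:
    """Minimal real-time prediction learner: a linear forward model that
    predicts the next sensor reading from current sensors and motor
    command, updated by a delta rule after every time step."""

    def __init__(self, n_sensors, n_motors, lr=0.05):
        self.W = np.zeros((n_sensors, n_sensors + n_motors))
        self.lr = lr

    def step(self, sensors, motors, next_sensors):
        x = np.concatenate([sensors, motors])
        prediction = self.W @ x
        error = next_sensors - prediction        # prediction error drives learning
        self.W += self.lr * np.outer(error, x)   # online delta-rule update
        return prediction, np.linalg.norm(error)

# Toy sensorimotor loop: the sensor leakily integrates the motor command.
rng = np.random.default_rng(0)
predictor, s = OnlinePredictor(n_sensors=1, n_motors=1), np.zeros(1)
for t in range(200):
    m = np.array([np.sin(0.1 * t)])                      # motor babbling
    s_next = 0.9 * s + 0.5 * m + rng.normal(0.0, 0.01, 1)
    _, err = predictor.step(s, m, s_next)
    s = s_next
print(f"one-step prediction error after 200 steps: {err:.4f}")
```

The prediction error that drives the update is also the natural raw material for exploration and for the information-flow analyses the abstract mentions.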
3. Neurodynamische Module zur Bewegungssteuerung autonomer mobiler Roboter [Neurodynamic Modules for Motion Control of Autonomous Mobile Robots]

Hild, Manfred 07 January 2008 (has links)
This thesis investigates recurrent neural networks with regard to their suitability for motion control of autonomous robots. It successively discusses oscillators for four-legged robots, homeostatic ring modules for segmented robots, and monostable neural modules for robots with many degrees of freedom and complex motion sequences. The mathematical theory of the neural modules is treated on an equal footing with their practical implementation on real robot systems. This includes their functional embedding into the overall system as well as concrete aspects of the underlying hardware: computational accuracy, temporal resolution, the influence of the materials used, and the like. Interesting electronic circuit principles are discussed in detail. Altogether, the thesis contains all the theoretical and practical information needed to equip individual robot systems with an appropriate motion controller. A further aim of the thesis is to open a new approach to the theory of recurrent neural networks from the perspective of classical engineering science. Targeted comparisons of the neural modules with analog electronic circuits, physical models, and algorithms from digital signal processing can ease the understanding of neural dynamics.
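To give a flavor of the oscillator modules discussed here, the following is a minimal sketch of a discrete-time two-neuron recurrent oscillator whose weight matrix is a scaled rotation; with a gain slightly above one, the tanh saturation stabilizes the activity on a quasi-periodic orbit that can serve as a rhythmic motor signal. The concrete parameter values are assumptions for illustration, not taken from the thesis.

```python
import numpy as np

# Minimal two-neuron recurrent oscillator: the weight matrix is a scaled
# rotation, and with gain alpha slightly above 1 the tanh saturation
# stabilizes the activity on a quasi-periodic orbit.
phi, alpha = 0.2, 1.1                       # rotation angle and gain (assumed)
W = alpha * np.array([[np.cos(phi), -np.sin(phi)],
                      [np.sin(phi),  np.cos(phi)]])

x = np.array([0.1, 0.0])                    # small initial activity
outputs = []
for _ in range(300):
    x = np.tanh(W @ x)                      # discrete-time recurrent update
    outputs.append(x[0])                    # neuron 1 as rhythmic motor signal

print(np.round(outputs[-10:], 3))           # sustained oscillation, not decay
```

The rotation angle phi roughly sets the oscillation frequency, which is one reason such small recurrent modules are attractive for tuning rhythmic gaits.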
4. Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: Stochastic Aspects and Gradient-Based Optimal Control

Bernigau, Holger 27 April 2015 (has links) (PDF)
Motivation and background. The enormous range of capabilities that every human learns throughout life is probably among the most remarkable and fascinating aspects of life. Learning has therefore drawn much interest from scientists working in very different fields such as philosophy, biology, sociology, educational science, computer science, and mathematics. This thesis focuses on the information-theoretic and mathematical aspects of learning. We are interested in the learning process of an agent (for example a human, an animal, a robot, an economic institution, or a state) that interacts with its environment. Common models for this interaction are Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs). Learning is then considered to be the maximization of the expectation of a predefined reward function. In order to formulate general principles (like a formal definition of curiosity-driven learning or the avoidance of unpleasant situations) in a rigorous way, it is desirable to have a theoretical framework for the optimization of more complex functionals of the underlying process law. These might include the entropy of certain sensor values or their mutual information. An optimization of the latter quantity (also known as predictive information) has been investigated intensively, both theoretically and experimentally using computer simulations, by N. Ay, R. Der, K. Zahedi, and G. Martius. In this thesis, we develop a mathematical theory for learning in the sensorimotor loop beyond expected reward maximization.

Approaches and results. This thesis covers four topics related to the theory of learning in the sensorimotor loop. First, we specify the model of an agent interacting with its environment, with or without learning. This interaction naturally results in complex causal dependencies. Since we are interested in asymptotic properties of learning algorithms, it is necessary to consider infinite time horizons, and it turns out that the well-understood theory of causal networks known from the machine learning literature is not powerful enough for our purpose. We therefore extend important theorems on causal networks to infinite graphs and general state spaces, using analytical methods from measure-theoretic probability theory and the theory of discrete-time stochastic processes. Furthermore, we prove a generalization of the strong Markov property from Markov processes to infinite causal networks. Secondly, we develop a new idea for a projected stochastic constrained optimization algorithm. Generally, a discrete gradient ascent algorithm can be used to generate an iterative sequence that converges to the stationary points of a given optimization problem. Whenever the optimization takes place over a compact subset of a vector space, the iterative sequence may leave the constraint set. One way to cope with this is to project all points back to the constraint set using Euclidean best approximation, which is sometimes difficult to calculate; a concrete example is optimization over the unit ball in a matrix space equipped with the operator norm. Our idea is a back-projection using quasi-projectors different from the Euclidean best approximation. In the matrix example, there is another canonical way to force the iterative sequence to stay in the constraint set: whenever a point leaves the unit ball, it is divided by its norm. For a given target function, this procedure might introduce spurious stationary points on the boundary. We show that this problem can be circumvented by using a gradient that is tailored to the quasi-projector used for back-projection. We state a general technical compatibility condition between a quasi-projector and the metric used for gradient ascent, prove convergence of the stochastic iterative sequences, and provide an appropriate metric for the unit-ball example. Thirdly, a class of learning problems in the sensorimotor loop is defined and motivated. This class is more general than the usual expected reward maximization and is illustrated by numerous examples (expected reward maximization, maximization of the predictive information, maximization of the entropy, and minimization of the variance of a given reward function). We also provide stationarity conditions together with appropriate gradient formulas. Last but not least, we prove convergence of a stochastic optimization algorithm (as considered in the second topic) applied to a general learning problem (as considered in the third topic): the learning algorithm converges to the set of stationary points. Among other things, the proof covers the convergence of an improved version of an algorithm for the maximization of the predictive information proposed by N. Ay, R. Der, and K. Zahedi. We also investigate an application to linear Gaussian dynamics, where the policies are encoded by the unit ball in a space of matrices equipped with the operator norm.
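The norm-division quasi-projector described above is easy to sketch in a simple vector-valued setting. The Euclidean toy objective below (maximize the inner product with a fixed vector c over the unit ball, whose optimum is c/||c||) and the noisy gradient oracle are our simplifications; the thesis works in matrix spaces with the operator norm and proves convergence under a compatibility condition between the quasi-projector and the gradient metric.

```python
import numpy as np

def projected_sga(grad, x0, lr=0.01, steps=2000):
    """Stochastic gradient ascent over the unit ball, using division by
    the norm as a quasi-projector: iterates leaving the constraint set
    are pulled back onto the unit sphere."""
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * grad(x)          # noisy ascent step
        n = np.linalg.norm(x)
        if n > 1.0:                   # back-projection by norm division
            x = x / n
    return x

# Toy objective: maximize <c, x> over the unit ball; the optimum is c/||c||.
rng = np.random.default_rng(1)
c = np.array([3.0, 4.0])
noisy_grad = lambda x: c + rng.normal(0.0, 0.5, size=2)   # stochastic oracle
x_star = projected_sga(noisy_grad, np.zeros(2))
print(np.round(x_star, 2), np.round(c / np.linalg.norm(c), 2))  # both ~ [0.6 0.8]
```

In this Euclidean toy case the norm division coincides with the Euclidean best approximation, so no spurious boundary stationary points arise; the subtlety the thesis addresses appears precisely when the two differ, as for the operator norm.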
5. Causal Models over Infinite Graphs and their Application to the Sensorimotor Loop: General Stochastic Aspects and Gradient Methods for Optimal Control

Bernigau, Holger 04 July 2015 (has links)
