1

Interactions between emotional and cognitive processes: a study in neuromimetic mobile and social robotics

Belkaid, Marwen 06 December 2016 (has links)
The purpose of my thesis is to study interactions between cognitive and emotional processes through the lens of neuromimetic robotics. The proposed models are implemented on artificial neural networks and embodied in robotic platforms, that is, embodied and situated systems.
In general, the interest is twofold: 1) taking inspiration from biological solutions to design systems that interact better with their physical and social environments, and 2) providing computational models as a means to better understand biological cognition and emotion. The first part of the dissertation addresses spatial navigation as a framework for studying biological and artificial cognition. In Chapter 1, I present a brief overview of the literature on biologically inspired navigation. Two issues are then tackled more specifically. In Chapter 2, visual place recognition is addressed in the case of outdoor navigation. To that end, I propose a model based on the notions of visual context and global precedence that combines local and holistic visual information. Then, in Chapter 3, I consider the interactive learning of navigation tasks through non-verbal human-robot communication based on low-level visuomotor signals. The second part of the dissertation addresses the central question of emotion-cognition interactions. In Chapter 4, I give an overview of emotion research as a cross-disciplinary endeavor, including psychological theories, neuroscientific findings, and computational models. In Chapter 5, I propose a conceptual model of emotion-cognition interactions; various instantiations of this model are then presented. In Chapter 6, I model the perception of the peripersonal space as modulated by emotionally valenced sensory and physiological signals. Last, in Chapter 7, I introduce the concept of Emotional Metacontrol as an example of emotion-cognition interaction: emotional signals elicited by self-assessment are used to modulate computational processes, such as attention and action selection, for the purpose of behavior regulation. A key idea in this thesis is that, in autonomous systems, emotion and cognition cannot be separated.
Indeed, it is now widely accepted that emotion is closely related to cognition, in particular through the modulation of various computational processes. This raises the question of whether such modulation occurs at the level of sensory processing or at the level of action selection. In this thesis, I advocate integrating artificial emotion into robotic architectures through bidirectional influences with sensory, attentional, decisional, and motor processes. This work attempts to highlight how this approach to internal emotional processes can foster efficient interaction with the system's physical and social environment.
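The Emotional Metacontrol idea described above, a self-assessment signal modulating a computational process such as action selection, can be sketched in a few lines. This is a minimal illustration only, not the thesis's neural implementation: the frustration signal, the softmax-temperature coupling, and all names are assumptions.

```python
import math
import random

def self_assessment(expected, observed):
    """Hypothetical self-assessment signal: frustration grows with
    the gap between expected and observed outcomes."""
    return abs(expected - observed)

def select_action(q_values, frustration, base_temp=0.1, gain=1.0):
    """Softmax action selection whose temperature is modulated by the
    emotional signal: higher frustration -> flatter distribution,
    i.e. more exploration (one possible emotion-cognition coupling)."""
    temp = base_temp + gain * frustration
    exps = [math.exp(q / temp) for q in q_values]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1
```

With low frustration the agent almost always exploits the best-valued action; as frustration rises, the temperature grows and selection approaches uniform exploration.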
2

Multi-Scale Spatial Cognition Models and Bio-Inspired Robot Navigation

Llofriu Alonso, Martin I. 15 June 2017 (has links)
The rodent navigation system has been studied for over a century, and recent discoveries have provided insight into its inner workings. Computational approaches have since been used to test hypotheses, as well as to improve robot navigation and learning by taking inspiration from the rodent navigation system. This dissertation focuses on the multi-scale representation of the rat's current location found in the rat hippocampus. It first introduces a model that uses these different scales in the Morris maze task to show their advantages: the generalization power of larger scales of representation is shown to allow more coherent and complete policies to be learned faster. Based on this model, a robot navigation learning system is presented and compared to an existing algorithm on the taxi driver problem. The algorithm outperforms a canonical Q-learning algorithm, learning the task faster. It is also shown to work in a continuous environment, making it suitable for a real robotics application. A novel task is also introduced and modeled, with the aim of providing further insight into an ongoing discussion over the involvement of the temporal portion of the hippocampus in navigation. The model reproduces the results obtained with real rats and generates a set of empirically verifiable predictions. Finally, a novel multi-query path-planning system is introduced, inspired by the way rodents represent location, store a topological model of the environment, and use it to plan future routes. The algorithm improves the routes on the second run without disrupting the robustness of the underlying navigation system.
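The generalization advantage the abstract attributes to larger scales of representation can be illustrated with a toy value function read out jointly from a fine and a coarse table, so that an update at one state spills over to its neighbours through the coarse scale. The table sizes and the additive readout are assumptions for illustration, not the dissertation's actual model.

```python
# Toy multi-scale value table: a fine scale with one entry per state
# and a coarse scale that pools COARSE neighbouring states together.
N_STATES, N_ACTIONS, COARSE = 10, 2, 5  # illustrative sizes

q_fine = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
q_coarse = [[0.0] * N_ACTIONS for _ in range(N_STATES // COARSE)]

def q_value(s, a):
    # The value estimate is read out jointly from both scales.
    return q_fine[s][a] + q_coarse[s // COARSE][a]

def update(s, a, target, lr=0.5):
    # A TD-style error on the combined estimate trains both scales;
    # the coarse table generalizes the update to nearby states.
    err = target - q_value(s, a)
    q_fine[s][a] += lr * err
    q_coarse[s // COARSE][a] += lr * err
```

After a single `update(0, 0, 1.0)`, the never-visited neighbouring state 1 already has a positive value through the shared coarse entry, while states in other coarse buckets are untouched; this is the faster, more coherent policy learning the abstract describes, in miniature.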
3

Information representation on a universal neural chip

Galluppi, Francesco January 2013 (has links)
How can science possibly understand the organ through which the Universe knows itself? The scientific method can be used to study how electro-chemical signals represent information in the brain. However, modelling it by simulating its structures and functions is a computation- and communication-intensive task. Whilst supercomputers offer great computational power, brain-scale models are challenging in terms of communication overheads and power consumption. Dedicated neural hardware can be used to enhance simulation performance, but it is often optimised for specific models. While performance and flexibility are desirable simulation features, there is no perfect modelling platform, and the choice is subordinate to the specific research question being investigated. In this context SpiNNaker constitutes a novel parallel architecture, with communication and memory accesses optimised for spike-based computation, permitting simulation of large spiking neural networks in real time. To exploit SpiNNaker's performance and reconfigurability fully, a neural network model must be translated from its conceptual form into data structures for a parallel system. This thesis presents a flexible approach to distributing and mapping neural models onto SpiNNaker, within the constraints introduced by its specialised architecture. The conceptual map underlying this approach characterizes the interaction between the model and the system: during the build phase the model is placed on SpiNNaker; at runtime, placement information mediates communication with devices and instrumentation for data analysis. Integration within the computational neuroscience community is achieved by interfaces to two domain-specific languages: PyNN and Nengo. 
The real-time, event-driven nature of the SpiNNaker platform is explored using address-event representation sensors and robots, performing visual processing using a silicon retina, and navigation on a robotic platform based on a cortical, basal ganglia and hippocampal place cells model. The approach has been successfully exploited to run models on all iterations of SpiNNaker chips and development boards to date, and demonstrated live in workshops and conferences.
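The build-phase placement described above, translating a neural model into per-core data structures whose placement information then mediates runtime communication, can be sketched in outline. The core capacity and the routing-key layout below are illustrative assumptions, not the actual SpiNNaker figures.

```python
def place(n_neurons, core_capacity=256):
    """Partition a neuron population into per-core slices and return a
    lookup from global neuron id to (core, local index). The capacity
    is an illustrative figure, not the real SpiNNaker per-core limit."""
    return {nid: (nid // core_capacity, nid % core_capacity)
            for nid in range(n_neurons)}

def route_key(core, local):
    # At runtime a spike carries a key from which its source can be
    # decoded, in the spirit of address-event representation.
    return (core << 16) | local

def decode_key(key):
    return key >> 16, key & 0xFFFF
```

A population of 600 neurons with this capacity would span three cores, and any spike key can be decoded back to its source neuron, which is the kind of placement information a host-side tool needs for data analysis.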
4

Biologically inspired action representation on humanoids with a perspective for soft wearable robots

Nassour, John 10 September 2021 (has links)
Although many tasks in robotics chiefly demand accuracy, precision, flexibility, adaptivity, and the like, wearable robotics raises further aspects that distinguish a reliable and promising approach. Three key elements are addressed: control, actuation, and sensing; for each, the goal is to find a solution or design compatible with humans. A possible way to understand human motor behaviour is to generate it on human-like robots. Biologically inspired action generation is promising for the control of wearable robots, as it provides more natural movements. Wearable robotics also shows exciting progress in its design: soft exosuits use soft materials to build both sensors and actuators. This work investigates an adaptive representation model for actions in robotics. The concrete action model is composed of four modules: pattern selection, spatial coordination, temporal coordination, and sensory-motor adaptation. Modularity in motor control might provide more insight into action learning and generalisation, not only for humanoid robots but also for their biological counterparts. We successfully tested the model on a humanoid robot that learned to perform a variety of tasks (push recovery, walking, drawing, grasping, etc.). Next, we propose several soft actuation mechanisms that overcome the problem of holding heavy loads as well as the issue of on-line programming of robot motion. The soft actuators use textile materials hosting thermoplastic polyurethane formed into inflatable tubes. The tubes were folded inside housing channels with one strain-limited side to create a flexor actuator. We propose a new design that controls the strained side of the actuator by adding four textile cords along its longitudinal axis. As a result, the actuator's behaviour can be programmed on-line to bend and twist in several directions.
In the last part of this thesis, we organised piezoresistive elements in a superimposed structure. This sensory structure is used on a gripper to sense and distinguish between pressure and curvature stimuli. We then extended the gripper with proximity sensing through conductive textile parts that act as capacitive sensors. Finally, we developed a versatile soft strain sensor that uses silicone tubes with an embedded solution whose electrical resistance is proportional to the strain applied to the tubes; an entirely soft sensing glove thereby enables hand gesture recognition. The proposed combination of soft actuators, soft sensors, and biologically inspired action representation might open a new perspective on smart wearable robots.
5

Learning of Central Pattern Generator Coordination in Robot Drawing

Atoofi, Payam, Hamker, Fred H., Nassour, John 06 September 2018 (has links)
How do robots learn to perform motor tasks in a specific condition and apply what they have learned in a new condition? This paper proposes a framework for motor coordination acquisition by a robot drawing straight lines within one part of its workspace, and then addresses transferring the acquired coordination to another area of the workspace for the same task. Motor patterns are generated by a Central Pattern Generator (CPG) model. The motor coordination for a given task is acquired using a multi-objective optimization method that adjusts the CPG parameters involved in the coordination. To transfer the acquired motor coordination to the whole workspace, we employed (1) a Self-Organizing Map (SOM) that represents the end-effector coordination in Cartesian space, and (2) an estimation method based on Inverse Distance Weighting that estimates the motor program parameters for each SOM neuron. After learning, the robot generalizes the acquired motor program along the SOM network and is therefore able to draw lines from any point in the 2D workspace and with different orientations. Aside from its obvious distinctiveness from frameworks based on inverse kinematics, which typically lead to point-to-point drawing, our approach also permits transferring the motor program throughout the workspace.
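The Inverse Distance Weighting step in the abstract, estimating motor program parameters at a SOM neuron from nearby acquired samples, follows the standard IDW formula: each sample contributes with weight 1/d^p. A minimal sketch, with the data layout, power parameter, and all names assumed for illustration rather than taken from the paper:

```python
def idw_estimate(query, nodes, power=2.0, eps=1e-9):
    """Inverse Distance Weighting: estimate a parameter vector at
    position `query` from (position, params) samples, e.g. CPG
    parameters acquired at a few workspace locations (illustrative)."""
    weighted = []
    for pos, params in nodes:
        d2 = sum((q - p) ** 2 for q, p in zip(query, pos))
        # Weight decays with distance^power; eps avoids division by zero
        # when the query coincides with a sample position.
        w = 1.0 / (d2 ** (power / 2.0) + eps)
        weighted.append((w, params))
    total = sum(w for w, _ in weighted)
    dim = len(nodes[0][1])
    return [sum(w * prm[i] for w, prm in weighted) / total
            for i in range(dim)]
```

Midway between two samples the estimate is their average, and at a sample position it reproduces that sample's parameters, which is the behaviour needed to fill in parameters for every SOM neuron from a sparse set of acquired coordinations.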
