1

The role of eye movements in high-acuity monocular and binocular vision

Intoy, Janis 02 February 2022 (has links)
The human eyes are always moving. Even during periods of fixation when visual information is acquired, a persistent jittering of the eyes (ocular drift) is occasionally interrupted by small rapid gaze shifts (microsaccades). Though much has been learned in the last 20 years about the perceptual roles of fixational eye movements, little is known about the consequences of their active control for fine pattern vision and depth perception. Using custom techniques for high-resolution eye-tracking and precise control of retinal stimulation, this dissertation describes three studies that investigated the consequences of controlled fixational eye movements for visual perception of fine patterns in two and three dimensions. The first study addresses whether fixational eye movements are controlled to meet the needs of a demanding visual task and their contributions to visual acuity. We show that in a standard acuity test, humans actively tune their drifts to enhance relevant spatial information and control their microsaccades to precisely place stimuli within the foveola. Together these eye movements contribute 0.15 logMAR to visual acuity, approximately two lines of an eye chart. The second study addresses the perceptual and computational impact of tuning ocular drift. We show that humans are sensitive to changes in visual flow generated by drifts of different sizes. Changes in sensitivity are fully predicted by changes in effective power of luminance modulations delivered by drift, suggesting that drift acts as a mechanism for controlling the effective contrast of the retinal stimulus. The third study addresses the impact of binocular fixational eye movements on fine depth perception. We show that these movements, specifically the opposing movements of the eyes (vergence), are beneficial for stereovision. In the absence of disparity modulations from fixational vergence, fine depth perception is significantly impaired. The research described in this dissertation advances the field in several fundamental ways by showing that (a) contrary to traditional assumptions, ocular drift is tuned to the demands of the visual task; (b) the precise spatiotemporal structure of the luminance changes from ocular drift predictably impacts visual sensitivity; and (c) stereoscopic vision is a dynamic process that uses temporal disparity modulations generated by fixational vergence.
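As a quick back-of-envelope illustration (not from the dissertation itself) of what the quoted 0.15 logMAR contribution means, the snippet below assumes the standard chart convention of 0.1 logMAR per line; logMAR is the base-10 logarithm of the minimum angle of resolution (MAR) in arcminutes:

```python
# Rough check of the 0.15 logMAR figure quoted above (assumed chart
# convention: 0.1 logMAR per line on an ETDRS-style acuity chart).
delta_logmar = 0.15
chart_lines = delta_logmar / 0.1    # about 1.5-2 lines, as the abstract notes
mar_ratio = 10 ** delta_logmar      # ~1.41x change in minimum resolvable angle
print(f"{chart_lines:.1f} chart lines, {mar_ratio:.2f}x change in MAR")
```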
2

Learning in a state of confusion: employing active perception and reinforcement learning in partially observable worlds

Crook, Paul A. January 2007 (has links)
In applying reinforcement learning to agents acting in the real world we are often faced with tasks that are non-Markovian in nature. Much work has been done using state estimation algorithms to try to uncover Markovian models of tasks in order to allow the learning of optimal solutions using reinforcement learning. Unfortunately, these algorithms, which attempt to simultaneously learn a Markov model of the world and how to act, have proved very brittle. Our focus differs. In considering embodied, embedded and situated agents, we have a preference for simple learning algorithms which reliably learn satisficing policies. The learning algorithms we consider do not try to uncover the underlying Markovian states; instead, they aim to learn successful deterministic reactive policies such that agents' actions are based directly upon the observations provided by their sensors. Existing results have shown that such reactive policies can be arbitrarily worse than a policy that has access to the underlying Markov process, and in some cases no satisficing reactive policy can exist. Our first contribution is to show that providing agents with alternative actions and viewpoints on the task through the addition of active perception can provide a practical solution in such circumstances. We demonstrate empirically that: (i) adding arbitrary active perception actions to agents which can only learn deterministic reactive policies can allow the learning of satisficing policies where none were originally possible; (ii) active perception actions allow the learning of better satisficing policies than those that existed previously; and (iii) our approach converges more reliably to satisficing solutions than existing state estimation algorithms such as U-Tree and the Lion Algorithm. Our other contributions focus on issues which affect the reliability with which deterministic reactive satisficing policies can be learnt in non-Markovian environments. We show that greedy action selection may be a necessary condition for the existence of stable deterministic reactive policies on partially observable Markov decision processes (POMDPs). We also set out the concept of Consistent Exploration: the idea of estimating state-action values by acting as though the policy has been changed to incorporate the action being explored. We demonstrate that this concept can be used to develop better algorithms for learning reactive policies for POMDPs by presenting a new reinforcement learning algorithm, the Consistent Exploration Q(λ) algorithm (CEQ(λ)). We demonstrate on a significant number of problems that CEQ(λ) is more reliable at learning satisficing solutions than the algorithm currently regarded as the best for learning deterministic reactive policies, SARSA(λ).
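As an informal illustration of the Consistent Exploration idea, the sketch below treats an exploratory action as a temporary policy change: the TD target always follows the action actually taken, so eligibility traces are not cut by exploration. This is a hypothetical toy, not the published CEQ(λ); the environment interface (reset, step, action_space) is assumed.

```python
import random
from collections import defaultdict

def consistent_exploration_q_lambda(env, episodes=500, alpha=0.1,
                                    gamma=0.95, lam=0.9, epsilon=0.1):
    """Toy sketch of the consistent-exploration idea over observations.
    Assumes env.reset() -> obs, env.step(a) -> (obs, reward, done),
    and env.action_space as a list of discrete actions."""
    Q = defaultdict(float)  # Q[(observation, action)]
    acts = env.action_space

    def select(obs):
        # Epsilon-greedy; an exploratory choice is treated as the
        # policy's own action from here on (the "policy change").
        if random.random() < epsilon:
            return random.choice(acts)
        return max(acts, key=lambda a: Q[(obs, a)])

    for _ in range(episodes):
        traces = defaultdict(float)
        obs = env.reset()
        act = select(obs)
        done = False
        while not done:
            nobs, reward, done = env.step(act)
            nact = select(nobs)
            # The TD target uses the action actually taken next, so the
            # trace is never zeroed on exploration (unlike Watkins' Q(lambda)).
            target = reward + (0.0 if done else gamma * Q[(nobs, nact)])
            delta = target - Q[(obs, act)]
            traces[(obs, act)] += 1.0           # accumulating trace
            for key in list(traces):
                Q[key] += alpha * delta * traces[key]
                traces[key] *= gamma * lam
            obs, act = nobs, nact
    return Q
```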
3

Active audition and sensorimotor integration for a bioinspired autonomous robot

Bernard, Mathieu 15 May 2014 (has links)
The vast majority of perceptual systems proposed in robotics inherit a passive conception of perception, in which the generation of a motor command is the final stage of a succession of purely passive processing steps. In the field of sound source localization, which is a fundamental task of the auditory system, this passive approach provides good results when the environmental conditions are well known and easily modeled. However, difficulties arise when the environment becomes more complex, unknown or changing. These difficulties are a major issue in the field of machine hearing. This thesis considers a radically different approach, inspired by the psychology of perception and the theory of sensorimotor contingencies. This approach places action at the heart of the perceptual process, which is then seen as an interaction that a biological or robotic agent maintains with its environment. While the passive approach requires knowledge of the environment, implicitly built into the processing by the roboticist, the sensorimotor approach suggests that this knowledge is acquired autonomously by the agent, through its sensorimotor experience. This thesis therefore applies the theory of sensorimotor contingencies to sound source localization for autonomous robotics. On the basis of a bioinspired model of the auditory system adapted to the robotic context, it proposes a redefinition of the localization problem in sensorimotor terms, followed by a sensorimotor localization model. The model relies on low-level active perception capabilities to build a representation of auditory space, which is then used for passive localization of new sound sources. By exploiting the robot's capacity for action, the model frees itself from the environmental dependencies that hinder the passive approach, thereby offering a degree of autonomy higher than that of current models.
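As a purely illustrative sketch of the sensorimotor approach described above (the robot interfaces are invented here, not taken from the thesis): the agent can calibrate a map from an interaural cue to source direction using nothing but its own known head rotations, then reuse that map for passive localization of new sources.

```python
import numpy as np

def build_auditory_map(robot, steps=36):
    """Sweep the head through known rotations while a source plays,
    recording the interaural cue at each relative direction."""
    angles = np.linspace(-np.pi / 2, np.pi / 2, steps)
    cues = []
    for a in angles:
        robot.set_head_angle(a)           # assumed actuator interface
        cues.append(robot.measure_cue())  # assumed sensor interface (e.g. ITD)
    return np.array(cues), angles

def localize(cue, cues, angles):
    """Passive localization: invert the self-learned cue->angle map
    by interpolation (np.interp needs the cue table sorted)."""
    order = np.argsort(cues)
    return np.interp(cue, cues[order], angles[order])
```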
4

On the Behavioral Dynamics of Human Sound Localization: Two Experiments Concerning Active Localization

Riehm, Christopher D., M.A. 22 October 2020 (has links)
No description available.
5

Contributions to active visual estimation and control of robotic systems

Spica, Riccardo 11 December 2015 (has links)
As every scientist and engineer knows, running an experiment requires a careful and thorough planning phase. The goal of such a phase is to ensure that the experiment will give the scientist as much information as possible about the process that she/he is observing, so as to minimize the experimental effort (in terms of, e.g., number of trials, duration of each experiment and so on) needed to reach a trustworthy conclusion.
Similarly, perception is an active process in which the perceiving agent (be it a human, an animal or a robot) tries its best to maximize the amount of information acquired about the environment using its limited sensor capabilities and resources. In many sensor-based robot applications, the state of a robot can only be partially retrieved from its on-board sensors. State estimation schemes can be exploited for recovering online the "missing information", which is then fed to the planner/motion controller in place of the actual unmeasurable states. In non-trivial cases, however, state estimation must often cope with nonlinear mappings from the observed environment to the sensor space, which make the estimation convergence and accuracy strongly dependent on the particular trajectory followed by the robot/sensor. For instance, vision-based control techniques such as Image-Based Visual Servoing (IBVS) need some knowledge of the 3-D structure of the scene for a correct execution of the task, but this 3-D information cannot, in general, be extracted from a single camera image without additional assumptions about the scene. One can exploit a Structure from Motion (SfM) estimation process to reconstruct this missing 3-D information. However, the performance of any SfM estimator is highly affected by the trajectory followed by the camera during the estimation, thus creating a tight coupling between the camera motion (needed to, e.g., realize a visual task) and the performance/accuracy of the estimated 3-D structure. In this context, a main contribution of this thesis is the development of an online trajectory optimization strategy that maximizes the convergence rate of an SfM estimator by (actively) shaping the camera motion. The optimization is based on the classical persistence of excitation condition used in the adaptive control literature to characterize the well-posedness of an estimation problem; this metric is also strongly related to the Fisher information matrix employed for similar purposes in probabilistic estimation frameworks. We also show how this technique can be coupled with the concurrent execution of an IBVS task using appropriate redundancy resolution and maximization techniques. All of the theoretical results presented in this thesis are validated by an extensive experimental campaign run on a real robotic manipulator equipped with a camera in-hand.
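The motion-scoring idea can be sketched as follows; this is a hedged illustration in which a generic Jacobian-based information matrix stands in for the thesis's actual SfM/persistence-of-excitation formulation:

```python
import numpy as np

def information_metric(jacobians):
    """Accumulate a Gramian-like matrix sum(J^T J) over a short horizon
    and return its smallest eigenvalue: larger values mean the motion
    excites the estimator better (persistence-of-excitation flavour)."""
    G = sum(J.T @ J for J in jacobians)
    return np.linalg.eigvalsh(G)[0]   # eigvalsh returns ascending eigenvalues

def pick_camera_motion(candidate_velocities, predict_jacobians):
    """Choose the candidate velocity whose predicted measurement
    Jacobians along the horizon maximize the information metric.
    predict_jacobians(v) is an assumed model returning a list of J's."""
    return max(candidate_velocities,
               key=lambda v: information_metric(predict_jacobians(v)))
```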
6

Social Agent: Facial Expression Driver for an e-Nose

Widmark, Jörgen January 2003 (has links)
This thesis shows that it is possible to drive the synthetic emotions of an interface agent with an electronic nose system developed at AASS. The e-Nose can be used for quality control: the detected distortion from a known smell sensation prototype is interpreted as a 3-D representation of emotional states, which in turn points to a set of pre-defined muscle contractions. This extension of a rule-based motivation system, which we call the Facial Expression Driver, is incorporated into a model for sensor fusion with active perception, to provide a general design for a more complex system with additional senses. To be consistent with the biologically inspired sensor fusion model, a muscle-based animated facial model was chosen as a test bed for the expression of the current emotion. The social agent's facial expressions demonstrate its tolerance to the detected distortion in order to manipulate the user into restoring the system to functional balance. Only a few known projects use chemically based sensing to drive a face in real time, whether virtual characters or animatronics. This work may inspire a future android implementation of a head with electroactive polymers as synthetic facial muscles.
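A toy sketch of the pipeline the abstract describes (prototype distortion, then a 3-D emotional state, then muscle contractions); every constant, shape and mapping below is invented for illustration:

```python
import numpy as np

PROTOTYPE = np.array([0.8, 0.1, 0.3])   # hypothetical "known smell" signature

def emotion_state(e_nose_reading, prototype=PROTOTYPE):
    """Map the distortion from the prototype to an invented 3-D
    (valence, arousal, stance) emotional state."""
    d = np.linalg.norm(e_nose_reading - prototype)
    valence = max(0.0, 1.0 - d)   # pleasant when the smell matches
    arousal = min(1.0, d)         # agitation grows with distortion
    stance = 0.5                  # held fixed in this toy version
    return np.array([valence, arousal, stance])

def muscle_contractions(state, n_muscles=10):
    """Linear map from the 3-D state to facial-muscle activations in
    [0, 1]; a fixed random matrix stands in for the real mapping."""
    W = np.random.default_rng(0).uniform(0.0, 1.0, (n_muscles, 3))
    return np.clip(W @ state, 0.0, 1.0)
```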
7

Context-aware anchoring, semantic mapping and active perception for mobile robots

Günther, Martin 30 November 2021 (has links)
An autonomous robot that acts in a goal-directed fashion requires a world model of the elements that are relevant to the robot's task. In real-world, dynamic environments, the world model has to be created and continually updated from uncertain sensor data. The symbols used in plan-based robot control have to be anchored to detected objects. Furthermore, robot perception is not a purely bottom-up, passive process: knowledge about the composition of compound objects can be used to recognize larger-scale structures from their parts. Knowledge about the spatial context of an object and about common relations to other objects can be exploited to improve the quality of the world model and can inform an active search for objects that are missing from the world model. This thesis makes several contributions to address these challenges: First, a model-based semantic mapping system is presented that recognizes larger-scale structures like furniture based on semantic descriptions in an ontology. Second, a context-aware anchoring process is presented that creates and maintains the links between object symbols and the sensor data corresponding to those objects while exploiting the geometric context of objects. Third, an active perception system is presented that actively searches for a required object while being guided by the robot's knowledge about the environment.
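The anchoring step can be illustrated by a minimal, hypothetical data-association loop; the data model below is invented and far simpler than the ontology-backed system the thesis describes:

```python
import math

def anchor_percepts(anchors, percepts, t, max_dist=0.5):
    """anchors: {symbol: (position, last_seen)}; percepts: list of
    (position, class_label) pairs from the current sensor frame.
    Greedy nearest-neighbour association using position as the
    geometric context; unmatched percepts get fresh symbols."""
    fresh = len(anchors)
    for pos, label in percepts:
        best, best_d = None, max_dist
        for sym, (apos, _) in anchors.items():
            d = math.dist(pos, apos)          # Euclidean distance, Python 3.8+
            if sym.startswith(label) and d < best_d:
                best, best_d = sym, d
        if best is None:                      # no anchor close enough: new object
            best = f"{label}-{fresh}"
            fresh += 1
        anchors[best] = (pos, t)              # create or re-acquire the link
    return anchors
```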
