61

Multirate and Perceptual Techniques for Haptic Rendering in Virtual Environments

Ruffaldi, Emanuele January 2006 (has links)
Haptics is a field of robotics that involves many aspects of engineering, requiring the collaboration of disciplines such as mechanics, electronics, control theory and computer science. Although multidisciplinarity is an element in common with other robotic applications, haptic systems have the additional requirement of high performance, because human perception demands a feedback rate of about 1 kHz. This high computing requirement affects the design of the whole haptic system, but it has particular impact on the design and implementation of haptic rendering algorithms. In the chain of software and hardware components that make up a haptic system, haptic rendering is the element whose objective is to compute the force feedback resulting from the user's interaction with the device. A variety of haptic rendering algorithms have been proposed in the past, both for three degree-of-freedom (3DoF) interactions, in which a single point touches a complex object, and for 6DoF interactions, in which two complex objects interact at multiple points. The choice between 3DoF and 6DoF algorithms depends mostly on the type of application and consequently on the type of device. For example, applications like virtual prototyping require 6DoF interaction, while many simulation applications have less stringent requirements. Apart from the number of degrees of freedom, haptic rendering algorithms are characterized by the geometrical representation of the objects, by the use of rigid or deformable objects, and by the introduction of physical surface properties such as friction and texture. Given this variety of possibilities, and the presence of the human factor in the computation of haptic feedback, it is hard to compare different algorithms and assess whether one specific solution performs better than another previously proposed. The goal of the proposed work is two-fold.
First, this thesis proposes a framework allowing a more objective comparison of haptic rendering algorithms. Such a comparison takes the perceptual aspect of haptic interaction into account but tries to factor it out, with the aim of obtaining an objective comparison between algorithms. Second, this thesis proposes two new haptic rendering algorithms: one for 3DoF interaction and one for 6DoF interaction. The 3DoF algorithm provides interaction with rotational friction based on a simulation of the soft-finger contact model. The new 6DoF algorithm computes the haptic feedback for interaction between voxel models.
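The force-computation step of 3DoF haptic rendering mentioned in this abstract can be illustrated with a minimal penalty-based sketch, run once per 1 kHz cycle. The sphere obstacle, stiffness value and function name below are illustrative assumptions, not the thesis's algorithms.

```python
import math

def render_force(pos, center, radius, k=500.0):
    """Penalty-based 3DoF force for a point probe against a sphere.

    pos, center: (x, y, z) tuples; k: contact stiffness in N/m
    (illustrative value). Returns a (fx, fy, fz) restoring force,
    zero when the probe is outside the surface.
    """
    d = [p - c for p, c in zip(pos, center)]
    dist = math.sqrt(sum(x * x for x in d))
    depth = radius - dist            # penetration depth
    if depth <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)       # no contact
    n = [x / dist for x in d]        # outward surface normal
    return tuple(k * depth * ni for ni in n)
```

In a real haptic loop this function would be called at the 1 kHz rate noted above, with `pos` read from the device each cycle.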
62

Considerations for the Development of Non-Visual Interfaces for Driving Applications

Colby, Ryan Stephen 22 April 2012 (has links)
While haptics, tactile displays, and other topics relating to non-visual user interfaces have been the subject of a variety of research initiatives, little has been done specifically related to those for blind driving. Many automation technologies have been developed for the purpose of assisting and improving the safety of sighted drivers, but to enable a true driving experience without any sense of sight has been an essentially overlooked area of study. Since 2005, the Robotics & Mechanisms Laboratory at Virginia Tech has assumed the task of developing non-visual interfaces for driving through the Blind Driver Challenge®, a project funded by the National Federation of the Blind. The objective here is not to develop a vehicle that will autonomously mobilize blind people, but to develop a vehicle that a blind person can actively and independently operate based on information communicated by non-visual interfaces. This thesis proposes some generalized considerations for the development of non-visual interfaces for driving, using the instructional interfaces developed for the Blind Driver Challenge® as a case study. A model is suggested for the function of blind driving as an open-loop control system, wherein the human is an input/output device. Further, a discussion is presented on the relationship between the bandwidth of information communicated to the driver, the amount of human decision-making involved in blind driving, and the cultivation of driver independence. The considerations proposed here are intended to apply generally to the process of non-visual interface development for driving, enabling efficient concept generation and evaluation. / Master of Science
63

Development of Multipoint Haptic Device for Spatial Palpation

Muralidharan, Vineeth January 2017 (has links) (PDF)
This thesis deals with the development of a novel haptic array system that can render a distributed pressure pattern. Haptic devices are force-feedback interfaces, widely seen in products ranging from consumer electronics to tele-surgical systems: vibration feedback in game consoles, mobile phones and virtual reality applications, and the daVinci robot in minimally invasive surgery. Telemedicine and computer-enabled medical training systems are modern medical infrastructures: the former provides health care services to people, especially in rural and remote places, while the latter trains the next generation of doctors and medical students. In telemedicine, a patient at a remote location consults a physician at a distant place through telecommunication media, whereas in a computer-enabled medical training system, physicians and medical students interact with a virtual patient. The experience of physical presence of the remote patient in telemedicine, and immersive interaction with the virtual patient in computer-enabled training, can be attained through haptic devices. In this work we focus on palpation simulation in telemedicine and medical training systems. Palpation is a primary diagnostic method that involves multi-finger, multi-contact interaction between the patient and physician. During palpation, a distributed pressure pattern, rather than a point load, is perceived by the physician. The commercially available haptic devices are single- and five-point devices, which lack face validity in rendering a distributed pressure pattern; only a few works reported in the literature deal with palpation simulation. There is a strong need for a haptic device that provides a distributed force pattern with multipoint feedback and can be applied to palpation simulation in telemedicine and medical training.
The haptic device should be a multipoint device to simulate the palpation process, an array device to render a distributed force pattern, light enough to move from one place to another, and finally it has to cover the hand of the physician. We propose a novel under-actuated haptic array device, called the taut cable haptic array system (TCHAS), which in general is an m x n system consisting of m + n actuators to obtain m·n haptels, i.e. multiple end effectors. A prototype of a 3 x 3 TCHAS was developed during this work, and a detailed study of its characterisation is presented. The performance of the device is validated with an elaborate user study, which establishes that the device has promising capability in rendering distributed spatio-temporal pressure patterns.
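The under-actuation idea above (m + n actuators driving m·n haptels) can be sketched with a toy coupling model in which the pressure at haptel (i, j) is the sum of the tensions in row cable i and column cable j. This additive rule is an illustrative assumption, not the characterised TCHAS model.

```python
def haptel_pressures(row_tensions, col_tensions):
    """Hypothetical coupling model for an m x n taut-cable array.

    With m row actuators and n column actuators (m + n in total),
    the pressure at haptel (i, j) is taken here as the sum of the
    tension in row cable i and column cable j -- an illustrative
    assumption, not the device's measured characterisation.
    """
    return [[r + c for c in col_tensions] for r in row_tensions]

# 3 x 3 example: 6 actuator tensions drive 9 haptels
pattern = haptel_pressures([1.0, 0.0, 0.0], [0.5, 0.0, 0.0])
```

Even this toy model shows the limitation of under-actuation: not every 3 x 3 pressure pattern is reachable from only 6 inputs, which is why a careful characterisation study matters.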
64

Toward Novel Remote-Center-of-Motion Manipulators and Wearable Hand-Grounded Kinesthetic Haptics for Robot-Assisted Surgery / A Study on Robot Manipulators and Haptics for Surgical Assistance

Sajid, Nisar 25 March 2019 (has links)
Affiliated degree program: Collaborative Graduate Program in Design / Kyoto University / 0048 / New system, course doctorate / Doctor of Engineering / 甲第21759号 / 工博第4576号 / 新制||工||1713 (University Library) / Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University / (Examiners) Prof. Fumitoshi Matsuno, Prof. Tetsuo Sawaragi, Prof. Masaharu Komori / Eligible under Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
65

A STUDY TOWARDS DEVELOPMENT OF AN AUTOMATED HAPTIC USER INTERFACE (AHUI) FOR INDIVIDUALS WHO ARE BLIND OR VISUALLY IMPAIRED

Rastogi, Ravi 08 August 2012 (has links)
An increasing amount of the information content used in schools, work and everyday living is being presented in graphical form, creating accessibility challenges for individuals who are blind or visually impaired, especially in dynamic environments such as the internet. Refreshable haptic displays that can interact with computers can be used to access such information tactually. The main focus of this study was the development of specialized computer applications that allow users to actively compensate for the inherent limitations of haptics, compared to vision, when exploring visual diagrams, which we hypothesized would improve the usability of such devices. To compensate for the lower spatial resolution of haptics, we developed an intuitive zooming algorithm capable of automatically detecting significantly different zoom levels, providing auditory feedback, preventing the cropping of information, and preventing zooming in on areas where no features are present; it was found to significantly improve participant performance. Another application, which allows users to perform dynamic simplifications of the diagram to compensate for the serial nature of processing 2D geometric information haptically, was tested and also found to significantly improve participant performance. For both applications, participants liked the user interface and found it more usable, as expected. In addition, we investigated methods to effectively present different visual features, as well as overlapping features, in visual diagrams. Three methods using several combinations of tactile and auditory modalities were tested. We found that performance improves significantly with the overlapping method using different modalities. Among the tactile-only methods developed for deaf-blind individuals, the toggle method was surprisingly preferred over the overlapping method.
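The feature-aware zooming idea in this abstract (refuse to zoom into empty regions; stop at a level that still contains content) can be sketched as follows. The halving-window scheme, function name and parameters are illustrative assumptions, not the thesis's algorithm.

```python
def deepest_zoom(features, cx, cy, width, max_levels=5):
    """Return the deepest zoom level around (cx, cy), halving the
    window at each level, such that the window still contains at
    least one feature -- preventing zooms onto featureless areas.

    features: list of (x, y) feature coordinates on the diagram.
    width: side length of the level-0 view window.
    """
    level = 0
    half = width / 2
    for _ in range(max_levels):
        nxt = half / 2               # candidate half-width one level deeper
        if any(abs(x - cx) <= nxt and abs(y - cy) <= nxt
               for x, y in features):
            half = nxt               # deeper window still has content
            level += 1
        else:
            break                    # deeper zoom would be empty: stop
    return level
```

A full implementation would also clamp the window to the diagram bounds (to avoid cropping) and trigger the auditory feedback described above when a zoom request is refused.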
66

POSITION CONCORDANT - HAPTIC MOUSE

Rastogi, Ravi 19 February 2009 (has links)
Haptic mice, computer mice modified to have a tactile display, have been developed to enable access to computer graphics by individuals who are blind or visually impaired. Although these haptic mice are potentially very helpful and have been frequently used by the research community, there are some fundamental problems with the mouse that limit its acceptance. In this paper we identify these problems and suggest solutions using one haptic mouse, the VT Player. We found that our modified VT Player showed significant improvement both in the odds of obtaining a correct response and in the time to perform the tasks.
67

Haptic rendering for 6/3-DOF haptic devices

Kadleček, Petr January 2013 (has links)
The application of haptic devices has expanded to fields like virtual manufacturing, virtual assembly and medical simulation. Advances in the development of haptic devices have resulted in a wide distribution of asymmetric 6/3-DOF haptic devices. However, current haptic rendering algorithms work correctly only for symmetric devices. This thesis analyzes 3-DOF and 6-DOF haptic rendering algorithms and proposes an algorithm for 6/3-DOF haptic rendering involving pseudo-haptics. The 6/3-DOF haptic rendering algorithm is implemented based on this analysis and tested in a user study.
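A 6/3-DOF device senses a full 6-DOF pose but can actuate only 3 translational force axes, so torque must be conveyed some other way; pseudo-haptics does this through the visual channel. The sketch below splits a commanded wrench accordingly. The gain law and names are illustrative assumptions, not the thesis's control scheme.

```python
def split_wrench(force, torque, k_visual=0.02):
    """For a 6/3-DOF device (6-DOF input, 3-DOF force output), pass
    the translational force to the actuators and express the torque
    as a pseudo-haptic visual rotation gain instead: the stronger the
    simulated torque, the slower the displayed rotation.

    force, torque: 3-component tuples. Returns (force, gain) with
    gain in (0, 1]. The gain model is an assumption for illustration.
    """
    mag = sum(t * t for t in torque) ** 0.5
    gain = 1.0 / (1.0 + k_visual * mag)   # larger torque -> lower gain
    return force, gain
```

The renderer would then rotate the on-screen proxy by `gain` times the user's hand rotation, creating an illusion of rotational resistance without rotational actuators.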
68

A Tactful Conceptualization of Joint Attention: Joint Haptic Attention and Language Development

Driggers-Jones, Lauren P 01 August 2019 (has links)
Research investigating associations between joint attention and language development has thus far only investigated joint attention by way of visual perception, neglecting the potential effects of joint attention engaged through other sensory modalities. In the present study, I aimed to investigate the joint attention-language development relationship by investigating the possible links between joint haptic attention and language development, while also exploring the likely contributions of joint visual attention through a mediation analysis. Using video recordings from an archival dataset, measures of joint haptic attention and joint visual attention were derived from behavioral tasks, and measures of vocabulary development were obtained from a caregiver-reported measure. Analyses revealed that joint haptic attention was associated with joint visual attention, and that joint visual attention was related to language development; however, there were no significant associations between joint haptic attention and language development. Study limitations, future directions, and conclusions are discussed.
69

Approche cognitive pour la représentation de l’interaction proximale haptique entre un homme et un humanoïde / Cognitive approach for representing the haptic physical human-humanoid interaction

Bussy, Antoine 10 October 2013 (has links)
Les robots sont tout près d'arriver chez nous. Mais avant cela, ils doivent acquérir la capacité d'interagir physiquement avec les humains, de manière sûre et efficace. De telles capacités sont indispensables pour qu'ils puissent vivre parmi nous, et nous assister dans diverses tâches quotidiennes, comme porter un meuble. Dans cette thèse, nous avons pour but de doter le robot humanoïde bipède HRP-2 de la capacité à effectuer des actions haptiques en commun avec l'homme. Dans un premier temps, nous étudions comment des dyades humaines collaborent pour transporter un objet encombrant. De cette étude, nous extrayons un modèle global de primitives de mouvement que nous utilisons pour implémenter un comportement proactif sur le robot HRP-2, afin qu'il puisse effectuer la même tâche avec un humain. Puis nous évaluons les performances de ce schéma de contrôle proactif au cours de tests utilisateurs. Finalement, nous exposons diverses pistes d'évolution de notre travail : la stabilisation d'un humanoïde à travers l'interaction physique, la généralisation du modèle de primitives de mouvement à d'autres tâches collaboratives et l'inclusion de la vision dans des tâches collaboratives haptiques. / Robots are very close to arriving in our homes. But before doing so, they must master physical interaction with humans, in a safe and efficient way. Such capacities are essential for them to live among us, and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global motion-primitive model that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies.
Finally, we expose several potential extensions to our work: self-stabilization of a humanoid through physical interaction, generalization of the motion-primitive model to other collaborative tasks, and the addition of vision to haptic joint actions.
70

Contribution à l’interaction physique homme-robot : application à la comanipulation d’objets de grandes dimensions / Contribution to the physical human-robot interaction : application to comanipulation of large objects

Dumora, Julie 12 March 2014 (has links)
La robotique collaborative a pour vocation d'assister physiquement l'opérateur dans ses tâches quotidiennes. Les deux partenaires qui composent un tel système possèdent des atouts complémentaires : physique pour le robot versus cognitif pour l'opérateur. Cette combinaison offre ainsi de nouvelles perspectives d'applications, notamment pour la réalisation de tâches non automatisables. Dans cette thèse, nous nous intéressons à une application particulière qui est l'assistance à la manipulation de pièces de grande taille lorsque la tâche à réaliser et l'environnement sont inconnus du robot. La manutention de telles pièces est une activité quotidienne dans de nombreux domaines et dont les caractéristiques en font une problématique à la fois complexe et critique. Nous proposons une stratégie d'assistance pour répondre à la problématique de contrôle simultané des points de saisie du robot et de l'opérateur liée à la manipulation de pièces de grandes dimensions, lorsque la tâche n'est pas connue du robot. Les rôles du robot et de l'opérateur dans la réalisation de la tâche sont distribués en fonction de leurs compétences relatives. Alors que l'opérateur décide du plan d'action et applique la force motrice qui permet de déplacer la pièce, le robot détecte l'intention de mouvement de l'opérateur et bloque les degrés de liberté qui ne correspondent pas au mouvement désiré. De cette façon, l'opérateur n'a pas à contrôler simultanément tous les degrés de liberté de la pièce. Les problématiques scientifiques relatives à l'interaction physique homme-robot abordées dans cette thèse se décomposent en trois grandes parties : la commande pour l'assistance, l'analyse du canal haptique et l'apprentissage lors de l'interaction. La stratégie développée s'appuie sur un formalisme unifié entre la spécification des assistances, la commande du robot et la détection d'intention. 
Il s'agit d'une approche modulaire qui peut être utilisée quelle que soit la commande bas niveau imposée dans le contrôleur du robot. Nous avons mis en avant son intérêt au travers de tâches différentes réalisées sur deux plateformes robotiques : un bras manipulateur et un robot humanoïde bipède. / Collaborative robotics aims at physically assisting humans in their daily tasks. The system comprises two partners with complementary strengths: physical for the robot versus cognitive for the operator. This combination provides new scenarios of application, such as the accomplishment of difficult-to-automate tasks. In this thesis, we are interested in assisting the human operator to manipulate bulky parts while the robot has no prior knowledge of the environment and the task. Handling such parts is a daily activity in many areas, and it is a complex and critical issue. We propose a new assistance strategy to tackle the problem of simultaneously controlling both the grasping point of the operator and that of the robot. The task responsibilities of the robot and the operator are allocated according to their relative strengths. While the operator decides the plan and applies the driving force, the robot detects the operator's intention of motion and constrains the degrees of freedom that are not needed to perform the intended motion. This way, the operator does not have to control all the degrees of freedom simultaneously. The scientific issues we deal with are split into three main parts: assistive control, haptic channel analysis, and learning during the interaction. The strategy is based on a unified framework for assistance specification, robot control and intention detection. This is a modular approach that can be applied with any low-level robot control architecture. We highlight its interest through a variety of tasks completed on two robotic platforms: an industrial arm manipulator and a biped humanoid robot.
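The division of roles described in this abstract (the operator supplies the driving force on one axis; the robot blocks the degrees of freedom not matching the detected intention) can be sketched with a toy intention detector. The dominant-component rule and threshold below are illustrative assumptions, not the thesis's detection scheme.

```python
def blocked_axes(wrench, threshold=2.0):
    """Pick the dominant operator-applied wrench component as the
    intended motion and mark the remaining degrees of freedom to be
    constrained by the robot.

    wrench: 6 components (fx, fy, fz, tx, ty, tz) measured at the
    operator's grasping point. Returns 6 booleans, True meaning the
    robot blocks that DoF. Threshold and selection rule are toy
    assumptions for illustration.
    """
    mags = [abs(w) for w in wrench]
    best = mags.index(max(mags))
    if mags[best] < threshold:
        return [True] * 6            # no clear intention: hold everything
    return [i != best for i in range(6)]
```

In use, the robot's low-level controller would render the `True` axes stiff and leave the single `False` axis compliant, so the operator drives the part along the intended direction only.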
