61.
Our Third Ear: A Multi-Sensory Experience of Sound. Mills, David Robert, 06 July 2016 (has links)
Our Third Ear aims to create a multi-sensory experience by fusing sight, touch, and sound. By creating a means of physically feeling music, listeners can connect with songs, bands, and individual musicians on a profoundly personal level. Unintended applications, such as learning to play an instrument, broadening the understanding of music for people with hearing impairments, or providing a means of therapy, are also exciting prospects.
The purpose of this paper is to illustrate the process of creating a multi-sensory experience of music from concept to prototype. It is the culmination of interdisciplinary research and a broad range of creative technologies, resulting in a working system. The multi-sensory experience consists of primarily tactile, but also visual, responses triggered by music and executed in conjunction with aural music. The tactile investigation explored varied sensations such as vibration, temperature, pressure, proprioception, and touch. Further research questioned the practicality, feasibility, and psychological impacts of using such sensations, as well as where on the body they would optimally be received. The visual research examined the visual representation of notes, chords, and sounds, as well as how music could directly drive visuals in a real-time environment. Additional research explored active and passive interaction with visual cues through human-computer interfaces. / Master of Fine Arts
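As a hedged illustration of one way such a system might convert music into tactile output (the mapping below is an assumption for illustration; the abstract does not detail the actual pipeline), short-window RMS energy can be scaled to a vibration-motor duty cycle:

```python
import math

def rms(samples):
    """Root-mean-square energy of one audio window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def vibration_duty_cycle(samples, full_scale=1.0):
    """Map window energy to a 0-100% motor duty cycle (illustrative mapping,
    not the thesis's actual audio-to-tactile transfer function)."""
    level = min(rms(samples) / full_scale, 1.0)
    return round(100 * level)

# A loud window drives the motor harder than a quiet one.
loud = [0.8, -0.8] * 128
quiet = [0.1, -0.1] * 128
print(vibration_duty_cycle(loud), vibration_duty_cycle(quiet))  # 80 10
```

In practice such a mapping would run per audio frame, possibly per frequency band, with one motor per band.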
62.
Soaking Sensual Nakedness: Haptic Bathhouse Explorations. Forsell, Mari Jonel, 20 April 2016 (has links)
How can architecture stimulate an increased haptic experience?
People with sight lack the everyday immediacy of sensory awareness found in people with significant sight impairment. When sight is lost, the mind compensates by heightening the other senses for receiving information. In particular, people who are sight impaired depend on their "somesthesis," or skin sense, for information.
In contrast, people who are sighted do not depend on somesthesis to accomplish everyday tasks. Many may go through an entire day without considering their sense of touch. If awareness exists, it is likely through discomfort, such as that first barefooted encounter with ice-cold tile in the morning, or grabbing a burning steering wheel after it has baked all day in the hot summer sun.
Heschong writes, "If sight allows for a three-dimensional world, then each other sense contributes at least one, if not more, additional dimensions" (Heschong, pp. 28-29). Relying so heavily on the visual sense for information, the sighted miss many simple tactile encounters, along with all their contiguous sensational experiences, constricting the development of these additional dimensions and thus significantly diminishing the depth and complexity of their existence.
This thesis is an exploration of touch: a bathhouse just south of Dupont Circle in the urban fabric of Washington, DC. In a place where the entire body can intimately converge with a building saturated with tactile opportunities, the surprise of stimulating skin-to-surface encounters will remind us of our wonderful somatosensation. How we feel during these sensual unions will add vividness to our lives and a desire to search again for more tactile stimuli, feeding our rejuvenated mindfulness. / Master of Architecture
63.
Development of a Multipoint Haptic Device for Spatial Palpation. Muralidharan, Vineeth, January 2017 (has links) (PDF)
This thesis deals with the development of a novel haptic array system that can render a distributed pressure pattern. Haptic devices are force-feedback interfaces found everywhere from consumer products to tele-surgical systems: vibration feedback in game consoles and mobile phones, virtual reality applications, and the da Vinci robot for minimally invasive surgery. Telemedicine and computer-enabled medical training systems are modern medical infrastructures: the former provides health-care services to people, especially in rural and remote places, while the latter trains the next generation of doctors and medical students. In telemedicine, a patient at a remote location consults a physician at a distant place through telecommunication media, whereas in a computer-enabled medical training system, physicians and medical students interact with a virtual patient. The experience of the remote patient's physical presence in telemedicine, and immersive interaction with the virtual patient in the training system, can be attained through haptic devices. In this work we focus on palpation simulation in telemedicine and medical training systems. Palpation is a primary diagnostic method involving multi-finger, multi-contact interaction between the patient and the physician. During palpation, the physician perceives a distributed pressure pattern rather than a point load. Commercially available haptic devices are single- and five-point devices, which lack face validity in rendering a distributed pressure pattern, and only a few works reported in the literature deal with palpation simulation. There is therefore a strong need for a haptic device that provides a distributed force pattern with multipoint feedback, applicable to palpation simulation in telemedicine and medical training.
The haptic device should be a multipoint device to simulate the palpation process, an array device to render a distributed force pattern, light enough to move from one place to another, and large enough to cover the hand of the physician. We propose a novel under-actuated haptic array device, called the taut cable haptic array system (TCHAS), which in general is an m × n system consisting of m + n actuators driving m·n haptels, the multiple end effectors. A 3 × 3 TCHAS prototype was developed during this work and its characterisation studied in detail. The performance of the device is validated through an elaborate user study, which establishes that the device has promising capability in rendering distributed spatio-temporal pressure patterns.
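The under-actuation idea can be sketched in miniature: with one actuator per row and one per column, the stimulus at haptel (i, j) is some combination of row command i and column command j, so m + n commands address m·n points. The additive coupling below is an illustrative assumption, not the actual taut-cable mechanics:

```python
def haptel_pressures(row_cmds, col_cmds):
    """Combine m row and n column actuator commands into an m x n pressure
    map (toy additive coupling; the real cable mechanics are more involved)."""
    return [[r + c for c in col_cmds] for r in row_cmds]

# 3 + 3 = 6 actuator commands address 3 x 3 = 9 haptels.
grid = haptel_pressures([1.0, 0.0, 0.0], [0.5, 0.0, 0.0])
print(grid[0][0])  # 1.5 — strongest where the active row crosses the active column
```

The trade-off of under-actuation is visible even in this toy model: not every m × n pressure pattern is reachable from only m + n commands.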
64.
Toward Novel Remote-Center-of-Motion Manipulators and Wearable Hand-Grounded Kinesthetic Haptics for Robot-Assisted Surgery / 外科手術支援のためのロボットマニピュレータとハプティクスに関する研究. Sajid, Nisar, 25 March 2019 (has links)
Associated degree program: Collaborative Graduate Program in Design / Kyoto University / 0048 / New doctoral course / Doctor of Engineering / Degree No. 21759 (Engineering Doctorate No. 4576) / 新制||工||1713 (University Library) / Department of Mechanical Engineering and Science, Graduate School of Engineering, Kyoto University / Examiners: Prof. Fumitoshi Matsuno (chief), Prof. Tetsuo Sawaragi, Prof. Masaharu Komori / Meets Article 4, Paragraph 1 of the Degree Regulations / Doctor of Philosophy (Engineering) / Kyoto University / DFAM
65.
A STUDY TOWARDS DEVELOPMENT OF AN AUTOMATED HAPTIC USER INTERFACE (AHUI) FOR INDIVIDUALS WHO ARE BLIND OR VISUALLY IMPAIRED. Rastogi, Ravi, 08 August 2012 (has links)
An increasing amount of information content used in schools, work, and everyday living is being presented in graphical form, creating accessibility challenges for individuals who are blind or visually impaired, especially in dynamic environments such as the internet. Refreshable haptic displays that interact with computers can be used to access such information tactually. The main focus of this study was the development of specialized computer applications allowing users to actively compensate for the inherent limitations of haptics, relative to vision, when exploring visual diagrams, which we hypothesized would improve the usability of such devices. To compensate for the lower spatial resolution of haptics, an intuitive zooming algorithm was developed that automatically detects significantly different zoom levels, provides auditory feedback, prevents cropping of information, and prevents zooming in on areas where no features are present; it was found to significantly improve participants' performance. Another application, allowing users to perform dynamic simplifications of the diagram to compensate for the serial nature of processing 2D geometric information haptically, was also tested and found to significantly improve participants' performance. For both applications, participants liked the user interface and found it more usable, as expected. In addition, we investigated methods for effectively presenting different visual features, as well as overlapping features, in visual diagrams. Three methods using several combinations of tactile and auditory modalities were tested. We found that performance improves significantly with the overlapping method using different modalities. Among the tactile-only methods developed for deaf-blind individuals, the toggle method was surprisingly preferred over the overlapping method.
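A minimal sketch of the kind of zoom safeguards described above (the function below is a hypothetical reduction, not the thesis's algorithm): clamp the viewport to the image bounds so nothing is cropped, and refuse to zoom into regions that contain no features.

```python
def zoom_viewport(features, cx, cy, half):
    """Return a square viewport (x0, y0, x1, y1) of side 2*half centred near
    (cx, cy), clamped to the image bounds so no content is cropped, or None
    if the region holds no features (zooming there would be pointless)."""
    h, w = len(features), len(features[0])
    x0 = max(0, min(cx - half, w - 2 * half))   # clamp: never crop past an edge
    y0 = max(0, min(cy - half, h - 2 * half))
    x1, y1 = x0 + 2 * half, y0 + 2 * half
    if not any(features[y][x] for y in range(y0, y1) for x in range(x0, x1)):
        return None                              # empty region: reject the zoom
    return (x0, y0, x1, y1)

# 4x4 image with one feature at (0, 0); zooming near the empty far corner is refused.
img = [[True, False, False, False],
       [False] * 4, [False] * 4, [False] * 4]
print(zoom_viewport(img, 0, 0, 1))   # (0, 0, 2, 2)
print(zoom_viewport(img, 3, 3, 1))   # None
```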
66.
POSITION-CONCORDANT HAPTIC MOUSE. Rastogi, Ravi, 19 February 2009 (has links)
Haptic mice, computer mice modified to carry a tactile display, have been developed to give individuals who are blind or visually impaired access to computer graphics. Although these haptic mice are potentially very helpful and have been frequently used by the research community, some fundamental problems with the mouse limit its acceptance. In this paper we identify those problems and suggest solutions using one haptic mouse, the VT Player. We found that our modified VT Player showed significant improvement both in the odds of obtaining a correct response and in the time to perform the tasks.
67.
Haptic rendering for 6/3-DOF haptic devices. Kadleček, Petr, January 2013 (has links)
The application of haptic devices has expanded to fields such as virtual manufacturing, virtual assembly, and medical simulation. Advances in device development have resulted in a wide distribution of asymmetric 6/3-DOF haptic devices. However, current haptic rendering algorithms work correctly only for symmetric devices. This thesis analyzes 3-DOF and 6-DOF haptic rendering algorithms and proposes an algorithm for 6/3-DOF haptic rendering involving pseudo-haptics. The 6/3-DOF haptic rendering algorithm is implemented based on this analysis and tested in a user study.
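For context, the basic 3-DOF rendering idea that such algorithms build on can be sketched with a textbook penalty force against a half-space obstacle (an illustration only, not the thesis's 6/3-DOF algorithm): the rendered force is proportional to the device tip's penetration depth.

```python
def penalty_force(tip, plane_height=0.0, stiffness=500.0):
    """3-DOF penalty rendering against the horizontal plane z = plane_height:
    push the device tip back out along +z, proportionally to how deep it
    has penetrated (units assumed: metres in, newtons out)."""
    x, y, z = tip
    depth = plane_height - z
    if depth <= 0.0:          # tip is in free space: render no force
        return (0.0, 0.0, 0.0)
    return (0.0, 0.0, stiffness * depth)

print(penalty_force((0.0, 0.0, 0.01)))    # above the plane → (0.0, 0.0, 0.0)
print(penalty_force((0.0, 0.0, -0.002)))  # 2 mm inside → (0.0, 0.0, 1.0)
```

A 6-DOF renderer extends this with torques on the grasped tool, which is exactly where an asymmetric 6/3-DOF device (6-DOF input, 3-DOF force output) runs into trouble.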
68.
A Tactful Conceptualization of Joint Attention: Joint Haptic Attention and Language Development. Driggers-Jones, Lauren P., 01 August 2019 (has links)
Research investigating associations between joint attention and language development has thus far examined joint attention only by way of visual perception, neglecting the potential effects of joint attention engaged through other sensory modalities. In the present study, I investigated the joint attention-language development relationship by examining possible links between joint haptic attention and language development, while also exploring the likely contributions of joint visual attention through a mediation analysis. Using video recordings from an archival dataset, measures of joint haptic attention and joint visual attention were derived from behavioral tasks, and measures of vocabulary development were obtained from a caregiver-reported measure. Analyses revealed that joint haptic attention was associated with joint visual attention, and that joint visual attention was related to language development; however, there were no significant associations between joint haptic attention and language development. Study limitations, future directions, and conclusions are discussed.
69.
Approche cognitive pour la représentation de l'interaction proximale haptique entre un homme et un humanoïde / Cognitive approach for representing the haptic physical human-humanoid interaction. Bussy, Antoine, 10 October 2013 (has links)
Robots are very close to arriving in our homes. But before doing so, they must master physical interaction with humans in a safe and efficient way. Such capacities are essential if they are to live among us and assist us in various everyday tasks, such as carrying a piece of furniture. In this thesis, we focus on endowing the biped humanoid robot HRP-2 with the capacity to perform haptic joint actions with humans. First, we study how human dyads collaborate to transport a cumbersome object. From this study, we define a global model of motion primitives that we use to implement a proactive behavior on the HRP-2 robot, so that it can perform the same task with a human. Then, we assess the performance of our proactive control scheme through user studies. Finally, we outline several potential extensions to our work: self-stabilization of a humanoid through physical interaction, generalization of the motion-primitive model to other collaborative tasks, and the addition of vision to haptic joint actions.
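One standard model of the smooth point-to-point motions observed in such dyad studies is the minimum-jerk profile; the sketch below uses it purely for illustration (the thesis derives its own primitive model from the experiments), showing how a proactive follower could anticipate the partner's trajectory:

```python
def minimum_jerk(t, T, start, goal):
    """Minimum-jerk position at time t for a point-to-point motion of
    duration T: a classic model of smooth human reaching movements."""
    s = min(max(t / T, 0.0), 1.0)                 # normalised time in [0, 1]
    shape = 10 * s**3 - 15 * s**4 + 6 * s**5      # smooth 0 -> 1 profile
    return start + (goal - start) * shape

# The profile starts and ends at rest, passing the midpoint at t = T/2.
print(minimum_jerk(0.0, 2.0, 0.0, 1.0))  # 0.0
print(minimum_jerk(1.0, 2.0, 0.0, 1.0))  # 0.5
print(minimum_jerk(2.0, 2.0, 0.0, 1.0))  # 1.0
```

A proactive follower that expects such a velocity profile can move with the human rather than lag behind, instead of merely reacting to measured interaction forces.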
70.
Contribution à l'interaction physique homme-robot : application à la comanipulation d'objets de grandes dimensions / Contribution to physical human-robot interaction: application to the comanipulation of large objects. Dumora, Julie, 12 March 2014 (links)
Collaborative robotics aims at physically assisting humans in their daily tasks. The system comprises two partners with complementary strengths: physical for the robot versus cognitive for the operator. This combination opens new scenarios of application, such as the accomplishment of difficult-to-automate tasks. In this thesis, we are interested in assisting a human operator in manipulating bulky parts when the robot has no prior knowledge of the environment or the task. Handling such parts is a daily activity in many areas, and a complex and critical problem. We propose a new strategy of assistance to tackle the problem of simultaneously controlling both the operator's grasping point and the robot's when the task is unknown to the robot. Task responsibilities are allocated to the robot and the operator according to their relative strengths. While the operator decides the plan and applies the driving force, the robot detects the operator's intention of motion and constrains the degrees of freedom that are useless for the intended motion. This way, the operator does not have to control all the degrees of freedom of the part simultaneously. The scientific issues we address fall into three main parts: assistive control, haptic-channel analysis, and learning during the interaction. The strategy is based on a unified framework covering assistance specification, robot control, and intention detection. It is a modular approach that can be applied with any low-level robot control architecture. We highlight its interest through manifold tasks completed on two robotic platforms: an industrial arm manipulator and a biped humanoid robot.
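The DOF-blocking idea can be sketched with a toy admittance rule (an illustrative reduction, not the thesis's controller): once the operator's dominant motion axis is detected from the applied force, velocity is admitted along that axis only.

```python
def constrained_velocity(force, gain=0.01):
    """Admit motion only along the axis where the operator pushes hardest,
    blocking the remaining degrees of freedom (toy 3-axis translation case)."""
    dominant = max(range(3), key=lambda i: abs(force[i]))
    return tuple(gain * force[i] if i == dominant else 0.0
                 for i in range(3))

# A push mostly along x yields motion along x only; stray force
# components along y and z no longer have to be controlled by the human.
print(constrained_velocity((10.0, 2.0, -1.0)))  # (0.1, 0.0, 0.0)
```

The real problem also involves rotations and a richer intention detector, but the division of labour is the same: the human supplies the plan and the driving force, the robot removes the degrees of freedom the human would otherwise have to fight.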