101 |
Design and Development of a Framework to Bridge the Gap Between Real and Virtual. Hossain, SK Alamgir. January 2011 (has links)
Several researchers have successfully developed realistic models of real-world objects and phenomena and then simulated them in the virtual world. In this thesis, we propose the opposite: instantiating virtual-world events in the real world. The interactive 3D virtual environment provides a useful, realistic 3D world that resembles the objects and phenomena of the real world, but it has limited capability to communicate with the physical environment. We argue that new and intuitive 3D user interfaces, such as 3D virtual environment interfaces, may provide an alternative form of media for communicating with the real environment. We propose a 3D virtual-world-based add-on architecture that achieves synchronized virtual-real communication. In this framework, we explored the possibilities of integrating haptic and real-world object interactions with Linden Lab's multiuser online 3D virtual world, Second Life. We enhanced the open-source Second Life viewer client in order to facilitate communication between the real and virtual worlds, and analyzed the suitability of such an approach in terms of user perception, intuition and other common parameters. Our experiments suggest that the proposed approach not only provides a more intuitive mode of communication, but is also appealing and useful to the user. Some of the potential applications of the proposed approach include remote child care, communication between distant lovers, stress recovery, and home automation.
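As a rough illustration of the virtual-to-real bridging idea described above, the sketch below (not taken from the thesis) maps virtual-world events to real-world actuator commands; the event names, devices, and dispatch table are assumptions made purely for illustration.

```python
# Illustrative sketch (not from the thesis): mapping virtual-world events to
# real-world actuator commands, in the spirit of the proposed add-on architecture.
# Event names, device names, and the dispatch table are hypothetical.

from dataclasses import dataclass

@dataclass
class VirtualEvent:
    kind: str        # e.g. "avatar_touch", "light_switch"
    avatar: str      # avatar that triggered the event in the virtual world
    payload: dict    # event-specific data (intensity, target object, ...)

def dispatch_to_real_world(event: VirtualEvent) -> str:
    """Translate a virtual event into a command for a real-world device."""
    if event.kind == "avatar_touch":
        # Drive a haptic actuator worn by the remote user.
        level = min(1.0, event.payload.get("intensity", 0.5))
        return f"HAPTIC_VEST vibrate level={level:.2f}"
    if event.kind == "light_switch":
        # Forward to a home-automation controller.
        state = "on" if event.payload.get("on", True) else "off"
        return f"HOME_AUTOMATION lamp {state}"
    return "NOOP"

if __name__ == "__main__":
    ev = VirtualEvent("avatar_touch", "alice", {"intensity": 0.8})
    print(dispatch_to_real_world(ev))   # -> HAPTIC_VEST vibrate level=0.80
```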
|
102 |
Haptic Image Exploration. Lareau, David. January 2012 (has links)
The haptic exploration of 2-D images is a challenging problem in computer haptics. Research on the topic has primarily focused on the exploration of maps and curves. This thesis describes the design and implementation of a system for the haptic exploration of photographs. The system builds on various research directions related to assistive technology, computer haptics, and image segmentation. An object-level segmentation hierarchy is generated from the source photograph and rendered haptically as a contour image at multiple levels of detail. A tool for the authoring of object-level hierarchies was developed, as well as an innovative type of user interaction by region selection for accurate and efficient image segmentation. An objective benchmark measuring how the new method compares with other interactive image segmentation algorithms shows that our region-selection interaction is a viable alternative to marker-based interaction. The hierarchy authoring tool, combined with precise algorithms for image segmentation, can build contour images of the quality necessary for the images to be understood by touch with our system. The system was evaluated with a user study of 24 sighted participants divided into different groups. The first part of the study had participants explore images using haptics and answer questions about them. The second part of the study asked the participants to identify images visually after haptic exploration. Results show that using a segmentation hierarchy supporting multiple levels of detail of the same image is beneficial to haptic exploration. As the system gains maturity, our goal is to make it available to blind users.
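To illustrate how an object-level segmentation hierarchy can drive rendering at multiple levels of detail, the following sketch (not from the thesis) selects which contours to present at a requested level; the tree structure and node fields are hypothetical.

```python
# Illustrative sketch (not from the thesis): selecting which object contours to
# render haptically at a requested level-of-detail from an object-level
# segmentation hierarchy. The tree structure and node fields are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    contour: list            # polyline of (x, y) points outlining the region
    children: list = field(default_factory=list)

def contours_at_level(root: Region, level: int) -> list:
    """Collect the contours that make up the image at a given level-of-detail.

    Level 0 returns only the root outline; deeper levels replace a region by
    its children when children exist, exposing finer structure to the finger.
    """
    if level == 0 or not root.children:
        return [(root.name, root.contour)]
    result = []
    for child in root.children:
        result.extend(contours_at_level(child, level - 1))
    return result

if __name__ == "__main__":
    face = Region("face", [(0, 0), (10, 0), (10, 12), (0, 12)],
                  children=[Region("eye", [(2, 8), (4, 8), (4, 9), (2, 9)]),
                            Region("mouth", [(3, 2), (7, 2), (7, 3), (3, 3)])])
    for name, _ in contours_at_level(face, 1):
        print(name)          # eye, mouth
```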
|
103 |
Towards a Continuous User Authentication Using Haptic Information. Alsulaiman, Fawaz Abdulaziz A. January 2013 (has links)
With the advancement of multimedia systems and the increased interest in haptics for interpersonal communication systems, where users can see, show, hear, tell, touch and be touched, the mouse and keyboard are no longer the dominant input devices. Touch, speech and vision will soon be the main methods of human-computer interaction. Moreover, as interpersonal communication usage increases, the need for secure user authentication grows. In this research, we examine user identification and verification based on haptic information. We divide our research into three main steps. The first step examines a pre-defined task, namely a handwritten signature with haptic information, where the user's target is to mimic the legitimate signature in order to be verified. In the second step, we consider user identification and verification based on user drawings; the target is predefined, but no restrictions are imposed on the order or on the level of detail required for the drawing. Lastly, we examine the feasibility of distinguishing users based on their haptic interaction through an interpersonal communication system; in this third step there are no restrictions on user movements, only free movement to touch the remote party is expected. To achieve our goal, many classification and feature reduction techniques were investigated and some new ones were proposed. Moreover, we utilize evolutionary computing for user verification and identification, and analyze haptic features and their significance in distinguishing users.
The results show that Genetic Programming (GP) utilized visual features for identity verification with a probability equal to 50%, while the remaining haptic features were utilized with a probability of approximately 50%. Moreover, with a handwritten signature application, a verification success rate of 97.93%, with a False Acceptance Rate (FAR) of 1.28% and a False Rejection Rate (FRR) of 11.54%, is achieved when genetic programming is enhanced with a randomly over-sampled dataset. In addition, with totally free user movement in a haptic-enabled interpersonal communication system, an identification success rate of 83.3% is achieved when a random forest classifier is utilized.
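The sketch below is a minimal, hypothetical illustration of the verification setup reported above: a random forest is trained on synthetic "haptic" feature vectors and FAR/FRR are computed from its predictions. The data, features, and parameters are invented for illustration and do not reproduce the thesis experiments.

```python
# Illustrative sketch (not from the thesis): verifying a user from haptic feature
# vectors (e.g. position, velocity, force samples) with a random forest, and
# reporting FAR/FRR as in the reported experiments. All data here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic haptic feature vectors: label 1 = legitimate user, 0 = impostor.
genuine = rng.normal(loc=0.0, scale=1.0, size=(200, 12))
impostor = rng.normal(loc=0.8, scale=1.2, size=(200, 12))
X = np.vstack([genuine, impostor])
y = np.array([1] * 200 + [0] * 200)

# Simple split; the thesis additionally explores feature reduction and GP.
idx = rng.permutation(len(y))
train, test = idx[:300], idx[300:]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[train], y[train])
pred = clf.predict(X[test])

genuine_mask = y[test] == 1
frr = np.mean(pred[genuine_mask] == 0)        # genuine attempts rejected
far = np.mean(pred[~genuine_mask] == 1)       # impostor attempts accepted
print(f"FAR = {far:.2%}, FRR = {frr:.2%}")
```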
|
104 |
Human Head Stiffness Rendering. Minggao, Wei. January 2015 (has links)
The technology of haptic rendering has greatly enriched the development of multimedia applications such as teleoperation, gaming and medical applications, because it makes virtual objects touchable by human operators in the real world. Human head stiffness rendering is significant in haptic interactive applications, as it defines the degree of realism in physical interaction with a human avatar created in a virtual environment. Related haptic rendering approaches fall into two main types: 1) haptic information integration and 2) deformation simulation. However, the complexity of the anatomical and geometric structure of a human head makes the rendering procedure challenging in terms of accuracy and efficiency. In this work, we propose a hybrid method to render an appropriate stiffness property onto the 3D head polygon mesh of an individual user by first studying the human head's sophisticated deformation behaviour and then rendering that behaviour as the resultant stiffness property on the polygon mesh. The stiffness property is estimated using a semantically registered and shape-adapted skull template mesh as a reference and modeled from the soft tissue's deformation behaviour in a nonlinear Finite Element Method (FEM) framework. To render the stiffness property, our method consists of several procedures: 3D facial landmark detection, semantic model registration using the Iterative Closest Point (ICP) technique, adaptive shape modification with a modified weighted Free-Form Deformation (FFD), and FEM simulation. After the stiffness property is rendered on a head polygon mesh, we perform a user study in which participants experience the haptic feedback rendered from our results. According to the participants' feedback, the head polygon mesh's stiffness property is properly rendered, as it satisfies their expectations.
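As a simplified illustration of estimating stiffness from a registered skull template, the sketch below (not from the thesis) assigns a per-vertex stiffness from soft-tissue thickness using a linear interpolation; the thesis instead models the behaviour with a nonlinear FEM, so the mapping and gains here are assumptions.

```python
# Illustrative sketch (not from the thesis): assigning a per-vertex stiffness to a
# head mesh from soft-tissue thickness, estimated as the distance between each
# skin vertex and the registered skull template. The linear thickness-to-stiffness
# mapping is a simplification of the nonlinear FEM model used in the thesis.

import numpy as np

def per_vertex_stiffness(skin_vertices, skull_vertices, k_bone=3000.0, k_soft=300.0):
    """Stiffer where tissue is thin (forehead), softer where it is thick (cheeks)."""
    # Distance from each skin vertex to its nearest skull vertex (tissue thickness).
    d = np.linalg.norm(skin_vertices[:, None, :] - skull_vertices[None, :, :], axis=2)
    thickness = d.min(axis=1)
    t = (thickness - thickness.min()) / (np.ptp(thickness) + 1e-9)
    return k_bone * (1.0 - t) + k_soft * t    # N/m, interpolated per vertex

if __name__ == "__main__":
    skin = np.random.default_rng(1).uniform(-1, 1, size=(100, 3))
    skull = skin * 0.9                         # toy "skull" slightly inside the skin
    k = per_vertex_stiffness(skin, skull)
    print(k.min(), k.max())
```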
|
105 |
Modern Sensory Substitution for Vision in Dynamic Environments. January 2020 (has links)
abstract: Societal infrastructure is built with vision at the forefront of daily life. For those with severe visual impairments, this creates countless barriers to the participation in and enjoyment of life's opportunities. Technological progress has been both a blessing and a curse in this regard. Digital text together with screen readers and refreshable Braille displays has made whole libraries readily accessible, and rideshare technology has made independent mobility more attainable. Simultaneously, screen-based interactions and experiences have only grown in pervasiveness and importance, excluding many of those with visual impairments.

Sensory Substitution, the process of substituting an unavailable modality with another one, has shown promise as an alternative to accommodation, but in recent years meaningful strides in Sensory Substitution for vision have declined in frequency. Given recent advances in Computer Vision, this stagnation is especially disconcerting. Designing Sensory Substitution Devices (SSDs) for vision for use in interactive settings that leverage modern Computer Vision techniques presents a variety of challenges, including perceptual bandwidth, human-computer interaction, and person-centered machine learning considerations. To surmount these barriers, an approach called Personal Foveated Haptic Gaze (PFHG) is introduced. PFHG consists of two primary components: Foveated Haptic Gaze (FHG), a human-visual-system-inspired interaction paradigm that is intuitive and flexible enough to generalize to a variety of applications, and a person-centered learning component that addresses the expressivity limitations of most SSDs. This component is called One-Shot Object Detection by Data Augmentation (1SODDA), a one-shot object detection approach that allows a user to specify the objects they are interested in locating visually and, with minimal effort, realize an object detection model that does so effectively.

The Personal Foveated Haptic Gaze framework was realized in a virtual and a real-world application: playing a 3D, interactive, first-person video game (DOOM) and finding user-specified real-world objects. User study results found Foveated Haptic Gaze to be an effective and intuitive interface for interacting with a dynamic visual world using solely haptics. Additionally, 1SODDA achieves competitive performance among few-shot object detection methods and high-framerate many-shot object detectors, which together pave the way for modern Sensory Substitution Devices for vision. / Dissertation/Thesis / Doctoral Dissertation Computer Engineering 2020
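A minimal sketch of a foveated haptic mapping in the spirit of FHG is given below; it is not taken from the dissertation, and the falloff law and cue encoding are assumptions made for illustration.

```python
# Illustrative sketch (not from the dissertation): a foveated haptic mapping in the
# spirit of FHG. Detected objects near the user's gaze point produce a strong cue;
# intensity falls off with eccentricity, mimicking foveal acuity. The falloff
# constant and the cue encoding are hypothetical.

import math

def foveated_cue(gaze_xy, object_xy, falloff=0.004):
    """Return (intensity in [0, 1], bearing in degrees) for one detected object."""
    dx, dy = object_xy[0] - gaze_xy[0], object_xy[1] - gaze_xy[1]
    eccentricity = math.hypot(dx, dy)                   # pixels from gaze center
    intensity = math.exp(-falloff * eccentricity)       # foveal emphasis
    bearing = math.degrees(math.atan2(dy, dx)) % 360.0  # direction to steer gaze
    return intensity, bearing

if __name__ == "__main__":
    print(foveated_cue(gaze_xy=(320, 240), object_xy=(350, 250)))  # strong, ~18 deg
    print(foveated_cue(gaze_xy=(320, 240), object_xy=(40, 460)))   # weak, peripheral
```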
|
106 |
Haptic-Enhanced Presence in VR: Exploring the Importance of Haptic Feedback in Virtual Environments to Achieve Presence. Håkonsson, Jakob. January 2018 (has links)
Through qualitative and exploratory research, this thesis project investigates how body stimulation from haptic feedback affects users' feeling of presence in VR environments. It identifies that, at present, the development of haptic feedback in VR lags severely behind the advancements made in visual and auditory feedback, and that some companies disregard its importance. Simultaneously, new companies are emerging which focus entirely on haptics in VR. Since development is still at an early stage, this thesis highlights now as a unique opportunity to explore the thoughts professionals have on the topic, as well as to find out which specific haptic cues are important for feeling presence, to serve as a guide to those developing such systems. Finally, to tackle this issue, it is imperative to understand certain theoretical concepts such as affordances and embodiment, and how they change in the world of VR. This understanding can contribute to Interaction Design knowledge.
|
107 |
Haptic Vision: Augmenting Non-visual Travel Tools, Techniques, and Methods by Increasing Spatial Knowledge Through Dynamic Haptic Interactions. January 2020 (has links)
abstract: Access to real-time situational information, including the relative position and motion of surrounding objects, is critical for safe and independent travel. Object or obstacle (OO) detection at a distance is primarily a task of the visual system due to the high-resolution information the eyes are able to receive from afar. As a sensory organ in particular, the eyes have an unparalleled ability to adjust to varying degrees of light, color, and distance. Therefore, in the case of a non-visual traveler, someone who is blind or has low vision, access to visual information is unattainable if it is positioned beyond the reach of the preferred mobility device or outside the path of travel. Although the area of assistive technology, in terms of electronic travel aids (ETAs), has received considerable attention over the last two decades, the field has surprisingly seen little work focused on augmenting, rather than replacing, current non-visual travel techniques, methods, and tools. Consequently, this work describes the design of an intuitive tactile language and a series of wearable tactile interfaces (the Haptic Chair, HaptWrap, and HapBack) to deliver real-time spatiotemporal data. The overall intuitiveness of the haptic mappings conveyed through the tactile interfaces is evaluated using a combination of absolute identification accuracy for a series of patterns and subjective feedback through post-experiment surveys. Two types of spatiotemporal representations are considered: static patterns representing object location at a single time instance, and dynamic patterns, added in the HaptWrap, which represent object movement over a time interval. Results support the viability of multi-dimensional haptics applied to the body to yield an intuitive understanding of dynamic interactions occurring around the navigator during travel. Lastly, the guiding principle of this work was to provide the navigator with spatial knowledge otherwise unattainable through current mobility techniques, methods, and tools, thus providing the navigator with the information necessary to make informed navigation decisions independently, at a distance. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2020
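The sketch below illustrates, under stated assumptions, how a static and a dynamic spatiotemporal pattern could be encoded on a ring of vibrotactile motors; the motor count, intensity law, and update scheme are hypothetical and not taken from the dissertation.

```python
# Illustrative sketch (not from the dissertation): encoding an object's relative
# position as a vibrotactile pattern on a ring of motors worn around the torso,
# in the spirit of the HaptWrap mappings. Motor count, intensity law, and update
# rate are hypothetical.

import math

N_MOTORS = 8          # motors evenly spaced around the torso
MAX_RANGE = 5.0       # metres beyond which an object produces no cue

def static_pattern(bearing_deg, distance_m):
    """Object location at one instant -> (motor index, intensity in [0, 1])."""
    motor = round((bearing_deg % 360.0) / (360.0 / N_MOTORS)) % N_MOTORS
    intensity = max(0.0, 1.0 - distance_m / MAX_RANGE)   # closer => stronger
    return motor, intensity

def dynamic_pattern(track):
    """Object movement over time -> sequence of static patterns (one per sample)."""
    return [static_pattern(b, d) for (b, d) in track]

if __name__ == "__main__":
    # An object approaching from the right and crossing in front of the traveler.
    track = [(90, 4.0), (60, 3.0), (30, 2.0), (0, 1.0)]
    print(dynamic_pattern(track))
```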
|
108 |
Haptic control strategy development for upper-limb rehabilitation of hemiplegic children. Elsaeh, Mohammed. 09 October 2017 (has links)
Hemiplegia is a global health concern; it is a form of Cerebral Palsy (CP) in which a vertical half of the body is affected, and it can have a significant impact on the level of disability. Upper-extremity motor impairments are common in hemiplegic patients. These disabilities occur because of the loss of communication between the brain and the affected side of the body. In this thesis, visual, audio and tactile stimulation was provided to patients for rehabilitation of the affected limbs. The haptic control strategy was developed using low-cost haptic devices. Virtual Reality (VR) scenarios for haptic-VR therapy were developed to ensure coherence between the visual, audio and tactile flows. A 3-degree-of-freedom (DOF) haptic device (Novint Falcon), which we coupled with a 1-DOF rotation device that we built, was used to apply this active-mode control strategy.
The developed strategy provides free trajectories in VR scenes and force feedback in other directions, i.e., a 'resistant when needed' model. Two methods were developed to evaluate the performance of the targeted upper limbs. The first method uses a Kinect for Windows, chosen for its portability, workspace, ease of use and low price; its main purpose is to validate the relationship between the different VR scenarios and the type of movement performed by the children's affected upper limbs. The second evaluation method uses the data collected by the system itself to provide therapists with the quality and quantity of performance for each type of movement, and was built using a fuzzy logic approach. Finally, three experiments were carried out: two therapy experiments and one evaluation experiment. The results illustrate the feasibility of our approach. Prospects are given for validation with a larger group of hemiplegic children in the future; this work requires considerable preparation and long-term coordination with the medical profession and the families of affected children.
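A minimal sketch of a 'resistant when needed' force law is shown below; it is an illustration only, with a straight-line trajectory and hypothetical gains, not the control strategy implemented in the thesis.

```python
# Illustrative sketch (not from the thesis): a 'resistant when needed' force model.
# Motion along the exercise trajectory is left free; deviation from the nearest
# point on the trajectory is opposed by a spring-damper force. Gains and the
# straight-line trajectory are hypothetical.

import numpy as np

K = 200.0   # N/m   spring gain pulling back toward the trajectory
B = 5.0     # N.s/m damping on the deviation velocity

def resistance_force(pos, vel, p0, p1):
    """Force applied by the haptic device for a straight trajectory p0 -> p1."""
    length = np.linalg.norm(p1 - p0)
    axis = (p1 - p0) / length
    s = np.clip(np.dot(pos - p0, axis), 0.0, length)
    closest = p0 + s * axis                      # nearest point on the trajectory
    error = pos - closest                        # lateral deviation only
    error_vel = vel - np.dot(vel, axis) * axis   # deviation velocity
    return -K * error - B * error_vel            # zero when moving along the path

if __name__ == "__main__":
    p0, p1 = np.zeros(3), np.array([0.2, 0.0, 0.0])
    print(resistance_force(np.array([0.1, 0.01, 0.0]), np.zeros(3), p0, p1))
```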
|
109 |
Contribution to the study of haptic perception and rendering of deformable virtual objects. Le Gouis, Benoît. 21 November 2017 (links)
Haptics is a key part of interaction with physically-based environments, with many applications in virtual training, prototyping and teleoperation assistance. In particular, deformable objects are challenging to simulate due to the complexity of their behavior. Because of the specific performance requirements associated with haptic interaction, a trade-off between accuracy and efficiency is usually necessary, and making the best of this trade-off is a major challenge. The objectives of this PhD are to improve haptic rendering of physically-based deformable objects that exhibit complex behavior, and to study how perception can be used to achieve this goal. In this PhD, we first propose a model for the physically-based simulation of complex heterogeneous deformable objects.
More specifically, we address the issue of geometric multiresolution for deformable heterogeneous objects, with a major focus on the representation of heterogeneity at the coarse resolution of the simulated objects. The contribution consists of a method for elasticity attribution at the coarser resolution of the object, and an evaluation of the effect of geometric coarsening on haptic perception. We then focus on another class of complex object behavior, topology changes, by proposing a simulation pipeline for bimanual haptic tearing of thin deformable surfaces. This contribution focuses on two main aspects of an efficient tearing simulation, namely collision detection for thin objects and efficient physically-based simulation of tearing phenomena. The simulation is especially optimized for tear propagation. The last aspect covered by this PhD is the influence of the environment on the haptic perception of stiffness, and more specifically of Augmented Reality (AR) environments. How are objects perceived in AR compared to Virtual Reality (VR)? Do we interact the same way in these two environments? To assess these questions, we conducted an experiment comparing the haptic stiffness perception of a piston surrounded by everyday objects in AR with that of the same piston surrounded by a virtual replica of the real environment in VR. These contributions open new perspectives for haptic interaction with virtual environments, from the efficient yet faithful simulation of complex deformable object behavior to a better understanding of haptic perception and interaction strategies.
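As a baseline illustration of elasticity attribution at a coarse resolution, the sketch below performs a naive volume-weighted average of fine-element Young's moduli; the thesis proposes and perceptually evaluates its own attribution method, so this only shows the general data flow.

```python
# Illustrative sketch (not from the thesis): a naive baseline for attributing an
# elasticity value to each coarse element from the fine elements it covers, here a
# volume-weighted average of Young's moduli. The thesis develops its own attribution
# method; this only demonstrates the fine-to-coarse mapping.

import numpy as np

def coarse_young_modulus(fine_E, fine_vol, fine_to_coarse, n_coarse):
    """fine_to_coarse[i] = index of the coarse element containing fine element i."""
    E = np.zeros(n_coarse)
    V = np.zeros(n_coarse)
    np.add.at(V, fine_to_coarse, fine_vol)
    np.add.at(E, fine_to_coarse, fine_E * fine_vol)
    return E / V                                  # volume-weighted mean per element

if __name__ == "__main__":
    fine_E = np.array([1e4, 1e4, 5e5, 5e5])       # soft tissue vs. stiff inclusion
    fine_vol = np.array([1.0, 1.0, 0.5, 0.5])
    mapping = np.array([0, 0, 1, 1])              # two coarse elements
    print(coarse_young_modulus(fine_E, fine_vol, mapping, n_coarse=2))
```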
|
110 |
Contributions to shared control architectures for advanced telemanipulation. Abi-Farraj, Firas. 18 December 2018 (links)
While full autonomy in unknown environments is still far from reach, shared-control architectures, in which the human and an autonomous controller work together to achieve a common objective, may be a pragmatic "middle ground". In this thesis, we tackle the different issues of shared-control architectures for grasping and sorting applications. In particular, the work is framed within the H2020 RoMaNS project, whose goal is to automate the sorting and segregation of nuclear waste by developing shared-control architectures that allow a human operator to easily manipulate the objects of interest. The thesis proposes several shared-control architectures for dual-arm manipulation with different operator/autonomy balances depending on the task at hand. While most of the approaches provide an instantaneous interface, we also propose architectures that automatically account for the pre-grasp and post-grasp trajectories, allowing the operator to focus only on the task at hand (e.g., grasping). The thesis also proposes a shared-control architecture for controlling a force-controlled humanoid robot in which the user is informed about the stability of the humanoid through haptic feedback. A new balancing algorithm allowing for the optimal control of the humanoid under high interaction forces is also proposed.
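A minimal sketch of one possible operator/autonomy blending law is given below; the weighting scheme and gains are hypothetical and do not reproduce the architectures proposed in the thesis.

```python
# Illustrative sketch (not from the thesis): blending a human operator's velocity
# command with an autonomous controller's command, with the autonomy share
# increasing as the gripper approaches the pre-grasp pose. The blending law and
# parameters are hypothetical.

import numpy as np

def shared_command(v_operator, v_autonomy, dist_to_pregrasp, d_full=0.30):
    """Return the commanded end-effector velocity (6-vector: linear + angular)."""
    alpha = np.clip(dist_to_pregrasp / d_full, 0.0, 1.0)  # 1 far away, 0 at the pose
    return alpha * v_operator + (1.0 - alpha) * v_autonomy

if __name__ == "__main__":
    v_op = np.array([0.05, 0.00, -0.02, 0.0, 0.0, 0.1])   # operator input
    v_au = np.array([0.02, 0.01, -0.05, 0.0, 0.0, 0.0])   # autonomous pre-grasp servo
    print(shared_command(v_op, v_au, dist_to_pregrasp=0.10))
```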
|