171.
Internet Based Bilateral Teleoperation. Ching, Ho. 17 October 2006.
In conventional bilateral teleoperation, transmission delay over the Internet can potentially cause instability. The wave variable algorithm guarantees stability under varying transmission delay, at the cost of poor transient performance. Adding a predictor on the master side can reduce this undesirable side effect, but doing so requires a slave model. An inaccurate slave model in the predictor, as well as variations in transmission delay, both of which are likely in realistic situations, can result in steady-state errors. A direct drift control algorithm is used to drive this error to zero regardless of its source. A semi-adaptive predictor that can distinguish between free-space and rigid-contact environments is used to provide more accurate force feedback on the master side. A fully adaptive predictor is also used, which estimates the slave environment parameters using recursive least squares with a forgetting factor. This research presents experimental results and evaluations of the wave-variable-based methods in a realistic operating environment using a real master and slave. The effectiveness of these algorithms is fully evaluated using human subjects with no previous experience in haptics. Three algorithms are tested using PHANTOM brand haptic devices as master and slave: conventional bilateral teleoperation with no transmission delay as a control, wave variable teleoperation with approximately 200 ms one-way transmission delay, and wave variables with an adaptive predictor and direct drift control with approximately 200 ms one-way transmission delay.
For each algorithm, the human subjects are asked to perform three simple tasks: use the master to make the slave track a reference trajectory in free space with the least error, identify a contoured surface on the slave side as accurately as possible using only haptic information from the master, and navigate a simple maze on the slave side in the least time using haptic information from the master.
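The wave variable transform at the heart of these experiments has a standard closed form. As a minimal sketch (not the dissertation's exact implementation), the encoding and decoding steps look like this, where `b` is the wave impedance, a tuning parameter:

```python
import math

def wave_encode(b, x_dot, force):
    """Encode a velocity/force pair into forward and backward wave
    variables. The transform is passive for any constant transmission
    delay, which is what makes the scheme stable under latency."""
    u = (b * x_dot + force) / math.sqrt(2.0 * b)   # forward wave
    v = (b * x_dot - force) / math.sqrt(2.0 * b)   # backward wave
    return u, v

def wave_decode(b, u, v):
    """Recover velocity and force from a pair of wave variables."""
    x_dot = (u + v) / math.sqrt(2.0 * b)
    force = math.sqrt(b / 2.0) * (u - v)
    return x_dot, force
```

Because passivity holds for any constant delay, stability survives the 200 ms one-way latency; the predictor and drift controller described above compensate for the transient and steady-state degradation the transform introduces.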
172.
Assistive force feedback for path following in 3D space for upper limb rehabilitation applications. Swaminathan, Ramya. 01 June 2007.
The primary objective of this research was the design of an easy-to-use C++ graphical user interface (GUI) that helps the user choose the task he/she wants to perform. This C++ application provides a platform intended for upper arm rehabilitation. The user can choose from several tasks: an assistive function in 3D space to traverse a linear trajectory, user-controlled velocity-based scaling, and Fitts' task in the X, Y, and Z directions. According to a study published in the scientific journal of the American Academy of Neurology, stroke patients aided by robotic rehabilitation devices gain significant improvement in movement, and both initial and long-term recovery are greater for patients assisted by robots during rehabilitation. This research aims to provide a haptic-interface C++ platform for clinicians and therapists to study human arm motion and to provide assistance to the user.
The user can choose and perform repetitive tasks aimed at improving his/her muscle memory. Eight healthy volunteers performed a set of preliminary experiments on this haptic-integrated C++ platform. These experiments were performed to indicate the effectiveness of the assistance functions provided in the application. The eight volunteers performed Fitts' task in the X, Y, and Z directions. The subjects were divided into two groups: one received training without assistance, the other with assistance. The execution time for both groups was compared and analyzed. The experiments were preliminary, but some trends were observed: those who received training with assistive force feedback had shorter execution times than those trained without assistance. The path-following error was also analyzed.
These preliminary tests were performed to demonstrate the haptic platform's use as a therapeutic assessment application, a rehabilitation tool and a data collection system for clinicians and researchers.
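As background, Fitts' task performance is conventionally summarized by an index of difficulty and a throughput. A small sketch of the standard Shannon formulation, which an execution-time analysis like the one above could draw on (the exact metrics used in the thesis are not specified here):

```python
import math

def index_of_difficulty(amplitude, width):
    """Shannon formulation of Fitts' index of difficulty (bits):
    ID = log2(A/W + 1), where A is the movement distance to the
    target and W is the target width."""
    return math.log2(amplitude / width + 1.0)

def throughput(amplitude, width, movement_time):
    """Throughput in bits/s: task difficulty divided by the measured
    movement (execution) time."""
    return index_of_difficulty(amplitude, width) / movement_time
```

Comparing throughput rather than raw execution time controls for differences in task difficulty across the X, Y, and Z conditions.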
173.
Socially interactive robots as mediators in human-human remote communication. Papadopoulos, Fotios. January 2012.
This PhD work was partially supported by the European LIREC project (Living with Robots and Interactive Companions), a collaboration of 10 EU partners that aims to develop a new generation of interactive and emotionally intelligent companions capable of establishing and maintaining long-term relationships with humans. The project takes a multi-disciplinary approach to investigating methods that allow robotic companions to perceive, remember and react to people, enhancing the companion's social awareness in domestic environments (e.g. reminding a user and providing useful information, or carrying heavy objects). One of the project's scenarios concerns enhancing remote human-human communication by using autonomous robots as social mediators, which is the focus of this PhD thesis. This scenario involves remote communication between two distant users who wish to use their robot companions to enhance their communication and interaction experience with each other over the Internet. The scenario derives from the communication needs of people who are separated from relatives and friends by work commitments or other personal obligations. Even for people who live close by, communication mediated by modern technologies has become widespread. However, even with video communication, users still lack an important interaction modality that has received far less attention in recent years: touch. The purpose of this thesis was to develop autonomous robots as social mediators in a remote human-human communication scenario, allowing users to employ touch and other modalities on the robots. This thesis addressed the following research questions: Can an autonomous robot be a social mediator in human-human remote communication? How does an autonomous robotic mediator compare to a conventional computer interface in facilitating users' remote communication?
Which methodology should be used for qualitative and quantitative measurement of local user-robot and user-user social remote interactions? To answer these questions, three communication platforms were developed during this research, each addressing a number of research questions. The first platform (AIBOcom) allowed two distant users to collaborate in a virtual environment by using their autonomous robotic companions during their communication. Two pet-like robots, which interact individually with the two remotely communicating users, allowed the users to play an interactive game cooperatively. The study tested two experimental conditions, characterised by two different modes of synchronisation between the robots located locally with each user. In one mode the robots incrementally affected each other's behaviour, while in the other the robots mirrored each other's behaviour. This study aimed to identify users' preferences for robot-mediated human-human interaction in these two modes, and to investigate users' overall acceptance of such communication media. Findings indicated that users preferred the mirroring mode and that, in this pilot study, robot-assisted remote communication was considered desirable and acceptable to the users. The second platform (AiBone) explored the effects of an autonomous robot on human-human remote communication and studied participants' preferences in comparison with a communication system not involving robots. We developed a platform for remote human-human communication in the context of a collaborative computer game. The exploratory study involved twenty pairs of participants who communicated using video-conference software. Participants expressed more social cues and shared more of their game experiences with each other when using the robot.
However, analysis of the participants' interactions with each other and with the robot shows that it is difficult for participants to familiarise themselves quickly with the robot, while they can perform the same task more efficiently with conventional devices. Finally, our third platform (AIBOStory) was based on remote interactive storytelling software that allowed users to create and share common stories through an integrated, autonomous robot companion acting as a social mediator between two people. The robot's behaviour was inspired by dog behaviour and used a simple computational memory model. An initial pilot study evaluated the proposed system's use and acceptance by users. Five pairs of participants were exposed to the system, with the robot acting as a social mediator, and the results suggested an overall positive acceptance response. The main study involved long-term interactions of 20 participants in order to compare their preferences between two modes: the game enhanced with an autonomous robot, and a non-robot mode. The data were analysed using quantitative and qualitative techniques to measure user preference and human-robot interaction. The statistical analysis suggests user preferences towards the robot mode. Furthermore, results indicate that users made increasing use of the memory feature, an integral part of the robot's control architecture, as the sessions progressed. Results derived from the three main studies supported our argument that domestic robots can be used as social mediators in remote human-human communication, offering users an enhanced experience during their interactions with the robots and with each other. Additionally, it was found that the presence of intelligent robots in the communication can increase the number of social cues exhibited between the users, and that such robots are preferred over conventional interactive devices such as a computer keyboard and mouse.
174.
Boundary Notions: A Sonic Art Portfolio. Fure, Ashley Rose. 19 September 2013.
I offer this dissertation as a survey and a story: a survey of my work across the field of sonic art and a story of my progressive compulsion toward sound that conveys touch. This haptic sensibility sharpens from Susurrus (2006) through Soma (2012), manifesting in a fixation on the impact of sound on bodies and the impact of bodies on sound. Both the visceral sensation of hearing and the manner in which movement imprints onto acoustic phenomena concern me. My musical forms are conceived not as abstract arrangements of objects (or notes) but as complex physical confrontations that produce audible byproducts. I compose primarily with chaotic spectra, mixing raw noise from found objects with extended instrumental techniques. These timbres front an acoustic wildness intentionally abated in conventional instrumental practice. And yet, the precision of classical instruments opens avenues of transformation closed to unmediated noise. Virtuosity and crudeness face off in my work, circling an aesthetic region between embellishment and fact, between sound as a carrier of aesthetic intent and sound as a subsidiary effect of action. The ten works presented in this portfolio include eight compositions scored for a range of ensembles, from soloist to orchestra, with and without electronics, as well as two interactive multimedia installations. Dramatic links between physical movement and musical form arise across this output. In my installations, I posit causal relationships between visible stimuli (spinning strings, spatial structures, moving bodies) and resultant sounds. In my electroacoustic works, I attend to the implied weight of spatialized sound, as though a gesture's trajectory through arrayed speakers were informed by gravity. In my acoustic music, I bring the muscular strain behind instrumental technique to the perceptual fore. My professional activities shift regularly between concert music and installation art and between acoustic and electroacoustic contexts.
Passing between these genres stretches the boundaries of my creative practice and forces me to consistently reframe notions of ritual and form. Within each platform, I aim to stage visceral aesthetic encounters that, as Francis Bacon once hoped for his paint, bypass the brain and go directly to the nervous system. / Music
175.
Tactile display for mobile interaction. Pasquero, Jerome. January 2008.
Interaction with mobile devices suffers from a number of shortcomings, most of which are linked to the small size of their screens. Artificial tactile feedback promises to be particularly well suited to the mobile interaction context. To be practical, tactile transducers for mobile devices must be small and light, yet capable of displaying a rich set of expressive stimuli. This thesis introduces a tactile transducer for mobile interaction that is capable of distributed skin stimulation on the fingertip. The transducer works on a principle that was first investigated for its potential application to displaying Braille. A preliminary study conducted on an earlier version of the transducer concluded that subjects were able to identify simple Braille characters with a high rate of success. A complete redesign of the transducer then addressed the goal of integration into a handheld prototype for mobile interaction. The resulting device comprises a liquid crystal graphic display co-located with the miniature, low-power, distributed tactile transducer. Next, it was necessary to measure the perceptual differences between the stimuli the device could display. Our experience with one evaluation approach raised questions about the methodology for data collection, so an analysis of the process was carried out using a stimulus set obtained with the device. By means of multidimensional scaling analysis, both the perceptual parameters forming the stimulus space and the evaluation technique were validated. Finally, two experiments were carried out with the objective of developing new mobile interaction paradigms that combine visual and tactile feedback. Both experiments modeled a list-scrolling task on the device. The first experiment found a marginal improvement in performance when tactile feedback was employed, but at a higher attentional cost dedicated to operating the device.
For the second experiment, the scrolling paradigm and the tactile feedback were improved. This led to a decrease in reliance on vision when tactile feedback was enabled: results showed a 28% decrease in the number of key presses controlling the visibility state of the scroll list.
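The multidimensional scaling analysis mentioned above can be illustrated with classical (Torgerson) MDS, which recovers a low-dimensional configuration of stimuli from a matrix of pairwise perceptual dissimilarities. This is a generic sketch of the technique, not the thesis's specific analysis pipeline:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions so
    that inter-point distances approximate the entries of the
    symmetric dissimilarity matrix D (n x n)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                 # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]            # keep the top-k dimensions
    scale = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * scale                 # n x k stimulus configuration
```

Dimensions with large eigenvalues correspond to the dominant perceptual parameters of the stimulus set; inspecting how stimuli spread along them is what validates (or refutes) the intended design parameters.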
176.
Haptic emulation of hard surfaces with applications to orthopaedic surgery. Hungr, Nikolai Anthony. 05 1900.
A generally accepted goal in orthopaedic surgery today is to maximize tissue conservation and reduce tissue damage. Bone-conserving implants have bone-mating surfaces that reproduce the natural curvature of bone structures, requiring less bone removal. However, no small, reliable, inexpensive and universal bone-sculpting technique currently exists that can both create and accurately align such complex surfaces. The goal of this thesis was to develop a haptic hard-surface emulation mechanism that could be applied to curvilinear bone sculpting with a surgical robot. A novel dynamic physical constraint concept was developed that can emulate realistic hard constraints, smooth surface following, and realistic surface rigidity, while allowing complete freedom of motion away from the constraints. The concept was verified through the construction of a two-link manipulator prototype. Tests were run with nine users, each tracing out five different virtual surfaces on a drawing surface using the prototype. The primary purposes of prototype testing were to obtain subjective data on how effectively the dynamic physical constraint concept simulates simple surfaces, to assess how it reacts to typical user interactions, and to identify any unexpected behaviour. Users were 100% satisfied with the prototype's ability to emulate realistic, stiff hard surfaces and with its ease of manipulation. The amount of incursion into each virtual surface by each user was measured to assess the precision of the system, with the goal of deciding whether this new haptic concept should be developed further for precision applications such as surgery. For curvilinear surfaces, 90% of the cumulative distribution of the measured data was less than 2 mm; for linear surfaces it was less than 6 mm. Four behavioural effects were noticed: lateral deflection, reverse 'stickiness', hysteresis, and instability in certain areas.
These effects were studied in detail to determine how to either eliminate them or to minimize them through system design optimization. A computer simulation was also used to model the behaviour of the prototype and to gain further understanding of these effects. These analyses showed that the concept can be successfully used in curvilinear bone sculpting.
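For contrast with the dissertation's dynamic physical constraint, the conventional software alternative it improves upon, a penalty-based virtual wall, can be sketched in a few lines. The gains here are illustrative, not values from the thesis:

```python
def wall_force(position, wall_pos, stiffness, velocity=0.0, damping=0.0):
    """Penalty-based virtual hard surface along one axis: push back
    proportionally to penetration depth, with optional damping to
    tame the kind of instability noted in the study. Free motion
    outside the wall produces zero force."""
    penetration = wall_pos - position      # positive when inside the wall
    if penetration <= 0.0:
        return 0.0                         # complete freedom away from the surface
    return stiffness * penetration - damping * velocity
```

The weakness of this approach is visible in the formula: perceived stiffness is capped by the stable gain of the device, and some penetration always occurs, which is exactly what a physical constraint mechanism avoids.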
177.
Universal motion-based control and motion recognition. Chen, Mingyu. 13 January 2014.
In this dissertation, we propose a universal motion-based control framework that supports general functionality on 2D and 3D user interfaces with a single integrated design. We develop a hybrid framework of optical and inertial sensing technologies to track the 6-DOF (degrees of freedom) motion of a handheld device, including the explicit 6-DOF (position and orientation in global coordinates) and the implicit 6-DOF (acceleration and angular speed in device-wise coordinates). Motion recognition is another key function of universal motion-based control and contains two parts: motion gesture recognition and air-handwriting recognition. The interaction technique for each task is carefully designed to follow a consistent mental model and ensure usability. The universal motion-based control achieves seamless integration of 2D and 3D interactions, motion gestures, and air-handwriting.
Motion recognition by itself is a challenging problem. For motion gesture recognition, we propose a normalization procedure that effectively addresses the large in-class motion variations among users. The main contribution is an investigation of the relative effectiveness of various feature dimensions (of the tracking signals) for motion gesture recognition in both user-dependent and user-independent cases. For air-handwriting recognition, we first develop a strategy to model air-handwriting with basic elements of characters and ligatures. We then build word-based and letter-based decoding word networks for air-handwriting recognition. Moreover, we investigate the detection and recognition of air-fingerwriting as an extension of air-handwriting. To complete the evaluation of air-handwriting, we conducted a usability study supporting the claim that air-handwriting is suitable for text input on a motion-based user interface.
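The abstract does not spell out the normalization procedure; a common baseline for absorbing speed and amplitude variation among users is to resample each trajectory to a fixed length and z-score each feature dimension. The sketch below is an assumed illustration of that baseline, not the dissertation's exact method:

```python
import numpy as np

def normalize_gesture(traj, n_samples=64):
    """Normalize a motion gesture trajectory (T x d array of tracking
    signals) for recognition: linear-in-time resampling to a fixed
    length removes speed variation, then per-dimension zero-mean,
    unit-variance scaling removes amplitude and offset variation."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n_samples)
    resampled = np.stack(
        [np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])],
        axis=1)
    mean = resampled.mean(axis=0)
    std = resampled.std(axis=0)
    std[std == 0.0] = 1.0                  # guard against constant channels
    return (resampled - mean) / std
```

After normalization, gestures from different users occupy a comparable feature space, which is what makes the user-independent recognition case tractable.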
178.
An Adaptive Approach to Exergames with Support for Multimodal Interfaces. Silva Salmeron, Juan Manuel. 30 January 2013.
Technologies such as television, computers, and video games are often blamed for people's lack of physical activity and tendency to gain weight and become obese.
In the case of video games, a new breed of games has emerged with the advent of the so-called "serious games" initiative. These games, called "exergames", are intended to motivate the user to engage in physical activity. Although there is some evidence that some types of exergames are more physically demanding than traditional sedentary games, there is also evidence suggesting that such games do not provide exertion at the intensity recommended for daily exercise. Currently, most exergames take a passive approach: there is no real tracking of the player's progress, no assessment of his/her level of exertion, no contextual information, and no adaptability in the game itself to change its conditions and elicit the desired physiological response from the player.
In this thesis we present research work towards the design and development of an architecture and related systems that support a shift in the exertion-game paradigm. The contributions of this work are enablers in the design and development of exertion games with a strict serious-game approach. Such games should have "exercising" as their primary goal, and a game engine developed under this scheme should be aware of the player's exertion context. The game should monitor the player's level of exertion and adapt the gaming context (in-game variables and exertion-interface settings) so that the player reaches a predefined target exertion rate.
To support such a degree of adaptability in a multimedia, multimodal system, we propose a system architecture that lays down general guidelines for the design and development of such systems.
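The exertion-aware adaptation described above can be illustrated with a simple proportional controller that nudges game intensity toward a target heart rate. This is a hypothetical sketch of the idea, not the architecture's actual control law:

```python
def adapt_intensity(intensity, heart_rate, target_hr, gain=0.01,
                    lo=0.1, hi=1.0):
    """One step of a proportional adaptation loop: raise in-game
    intensity when the player is below the target exertion level
    (measured here by heart rate) and lower it when above, clamped
    to the playable range [lo, hi]."""
    error = target_hr - heart_rate
    return max(lo, min(hi, intensity + gain * error))
```

Run once per sensor update, such a loop closes the feedback path the abstract says passive exergames lack: the game's variables become a function of the player's measured physiological response.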
179.
Towards Template Security for Iris-based Biometric Systems. Fouad, Marwa. 18 April 2012.
Personal identity refers to a set of attributes (e.g., name, social insurance number) associated with a person. Identity management is the process of creating, maintaining and destroying the identities of individuals in a population. Biometric technologies use statistical analysis of an individual's biological or behavioural traits to determine his or her identity. Biometrics-based authentication systems offer a reliable solution for identity management because of, among other reasons, the uniqueness of biometric traits, their relative stability over time, and their security. Public acceptance of biometric systems will depend on their ability to ensure robustness, accuracy and security. Although the robustness and accuracy of such systems are improving rapidly, issues remain in security and in balancing it with privacy. While the uniqueness of biometric traits offers a convenient and reliable means of identification, it also poses the risk of unauthorized cross-referencing among databases that use the same biometric trait. The risk is also high if a biometric database is compromised, since it is not possible to revoke a biometric trait and re-issue a new one, as is the case with passwords and smart keys. This attribute of biometric authentication systems poses a challenge that might slow public acceptance and the use of biometrics for authentication in large-scale applications.
In this research we investigate the vulnerabilities of biometric systems, focusing on template security in iris-based biometric recognition systems. The iris has been well studied for authentication purposes and has proven accurate in large-scale applications at several airports and border crossings around the world. The most widely accepted iris recognition systems are based on Daugman's model, which creates a binary iris template. In this research we develop different systems using watermarking, bio-cryptography and feature transformation to achieve revocability and security of binary templates in iris-based biometric authentication systems, while maintaining the performance that enables widespread application of these systems. All algorithms developed in this research are applicable to existing biometric authentication systems and do not require a redesign of these established iris-based authentication systems that use binary templates.
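As context for the binary templates discussed above, Daugman-style matching compares two iris codes with a masked fractional Hamming distance, minimized over circular bit shifts to tolerate head tilt. A compact sketch (the 1-D array layout and shift range are illustrative assumptions, not the thesis's parameters):

```python
import numpy as np

def iris_distance(code1, mask1, code2, mask2, max_shift=8):
    """Masked fractional Hamming distance between two binary iris
    templates, in the spirit of Daugman's matcher. Codes and masks
    are 1-D 0/1 uint8 arrays; mask bits flag usable positions
    (unoccluded by eyelids, lashes, or reflections)."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        c2 = np.roll(code2, s)               # circular shift models rotation
        m2 = np.roll(mask2, s)
        valid = (mask1 & m2).astype(bool)    # compare only jointly valid bits
        if not valid.any():
            continue
        hd = np.count_nonzero(code1[valid] != c2[valid]) / valid.sum()
        best = min(best, hd)
    return best
```

Genuine comparisons cluster near zero while impostors cluster near 0.5, which is what makes a fixed decision threshold workable; it is exactly this exposed binary template that the watermarking and bio-cryptographic schemes above aim to protect.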
180.
Towards Diverse Media Augmented E-Book Reader Platform. Alam, Kazi Masudul. 06 June 2012.
To leverage various modalities such as audio, visuals, and touch in instilling learning behaviour, we present an intuitive approach to annotation-based hapto-audio-visual interaction with traditional digital learning materials such as eBooks. By integrating the traditional home entertainment system and its media into the user's reading experience, combined with haptic interfaces, we examine whether such augmentation of modalities influences the reading experience in terms of attention, entertainment, and retention. The proposed Haptic E-Book (HE-Book) system uses haptic jacket, haptic armband, and haptic sofa interfaces to receive haptic emotive signals wirelessly in the form of patterned vibrations of the actuators, and renders the learning material with audio-video augmentation to pave the way for an intimate reading experience on the popular eBook platform. We have designed and developed desktop and mobile/tablet-based HE-Book systems, as well as a semi-automated annotation authoring tool. Our system also supports diverse multimedia-based quiz augmentations, which can help in tracking learning. We conducted quantitative and qualitative tests using the developed prototype systems, adopting the indirect objective-based performance analysis methodology commonly used in multimedia-based learning research. The user study shows a positive tendency toward accepting multimodal interactions, including haptics, in the traditional eBook reading experience. Although our limited laboratory tests suggest that haptics can be an influential medium in the eBook reading experience, large-scale real-life tests are required before drawing firm conclusions.
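One way an annotation-to-actuator mapping like the one described could be organized, purely as a hypothetical illustration (the pattern table, actuator IDs, and triple format below are invented for this sketch, not taken from the HE-Book system):

```python
# Hypothetical emotion-to-vibration lookup: each pattern is a sequence
# of (actuator_id, intensity 0..1, duration_ms) triples played in order
# on the haptic jacket, armband, or sofa.
EMOTION_PATTERNS = {
    "joy":     [(0, 0.8, 200), (1, 0.8, 200), (2, 0.8, 200)],  # sweep across actuators
    "sadness": [(0, 0.3, 600)],                                # single slow, weak pulse
    "fear":    [(3, 1.0, 100), (3, 0.0, 100), (3, 1.0, 100)],  # sharp double burst
}

def render_annotation(emotion):
    """Return the vibration sequence for an annotated emotion tag,
    falling back to no haptic output for unknown tags."""
    return EMOTION_PATTERNS.get(emotion, [])
```

Keeping the mapping in a data table rather than in code is what lets a semi-automated authoring tool, like the one described, attach emotive haptic effects to text passages without changing the reader application.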