171 |
A holistic approach to the cyberspace metaphor / Finkelstein, Adam B. A. January 1998 (has links)
Virtual Reality (VR) has the potential to impact human minds and bodies more than any previous technology. Researchers have attempted three definitions of VR, none of them fully successful. The first, a definition by technological architecture, focuses on the different types of systems that have been developed. The second, a definition by human architecture, proposes that VR consists of psychological phenomena that users experience, such as immersion and presence. The third, a definition by language architecture, uses the popular metaphor of cyberspace as a location to describe the experience. Although each attempt falls short of a complete definition of VR, expanding the parameters of the VR metaphor holistically promises to extend research on this new technology. / The study of metaphorical language has progressed from earlier reductionistic proposals that metaphors are merely a side effect of language to current holistic approaches that hold metaphors to be central to human communication and understanding. Since changing a metaphor changes the concept, the choice of a VR metaphor is crucial. / Cyberspace has the potential to be both the greatest threat and the greatest achievement of human society. The current metaphor of cyberspace is incomplete, focusing only on the location. This thesis proposes to extend the cyberspatial metaphor to include launch, destination, and re-entry in order to better conceptualize VR. Researchers can then reexamine previously ignored human elements and work out how to send cybernauts safely from reality to cyberspace and back.
|
172 |
Augmenting Visual Feedback Using Sensory Substitution / Greene, Eugene Dominic. January 2011 (has links)
Direct interaction in virtual environments can be realized using relatively simple hardware, such as standard webcams and monitors. The result, however, is a large gap between the stimuli present in real-world interactions and those provided by the virtual environment, which reduces efficiency and effectiveness when performing tasks. Conceivably, these missing stimuli might be supplied through the visual modality using sensory substitution. This work proposes a display technique that employs sensory substitution to convey proximity, tactile, and force information usefully and without detracting from the existing visuals.
We address three problems with existing feedback mechanisms. In adding information to the existing visuals, we must balance three requirements: not occluding the existing visual output; not causing the user to look away from that output, or otherwise distracting the user; and displaying as much new information as possible. We assume the user interacts with a virtual environment consisting of a manually controlled probe and a set of surfaces.
Our solution is a pseudo-shadow: a shadow-like projection of the user's probe onto the surface being explored or manipulated. Instead of drawing the probe, we only draw the pseudo-shadow, and use it as a canvas on which to add other information. Static information is displayed by varying the parameters of a procedural texture rendered in the pseudo-shadow. The probe velocity and probe-surface distance modify this texture to convey dynamic information. Much of the computation occurs on the GPU, so the pseudo-shadow renders quickly enough for real-time interaction.
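As a rough, hedged sketch of this idea (not taken from the thesis, which implements it per-fragment on the GPU), the Python below maps probe-surface distance and probe speed to the parameters of a simple concentric-ring procedural texture; the ring pattern, parameter ranges, and function names are illustrative assumptions.

```python
import math

def texture_params(distance, speed, max_distance=0.5, max_speed=2.0):
    """Map probe-surface distance and probe speed to procedural-texture
    parameters (illustrative ranges, not the thesis's values)."""
    proximity = 1.0 - min(distance / max_distance, 1.0)   # 1.0 when touching
    agitation = min(speed / max_speed, 1.0)               # 1.0 at high speed
    return {
        "ring_frequency": 4.0 + 28.0 * proximity,  # rings tighten as the probe approaches
        "intensity": 0.3 + 0.7 * agitation,        # texture brightens with probe velocity
    }

def pseudo_shadow_texel(u, v, params):
    """Evaluate a simple concentric-ring pattern at texture coordinate (u, v)."""
    r = math.hypot(u - 0.5, v - 0.5)
    ring = 0.5 + 0.5 * math.cos(2.0 * math.pi * params["ring_frequency"] * r)
    return params["intensity"] * ring

# Example: probe hovering 5 mm above the surface, moving slowly.
p = texture_params(distance=0.005, speed=0.1)
print(pseudo_shadow_texel(0.5, 0.6, p))
```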
This work therefore makes three contributions: a simple collision detection and handling mechanism that generalizes to distance-based force fields; a way to display content during probe-surface interaction that reduces occlusion and spatial distraction; and a way to visually convey small-scale tactile texture.
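The first contribution can be illustrated with a distance-based force field between a point probe and a plane; the plane representation, influence radius, and spring-like force law in this sketch are assumptions for illustration rather than the mechanism described in the thesis.

```python
import numpy as np

def distance_force(probe_pos, plane_point, plane_normal, influence=0.02, stiffness=50.0):
    """Return a repulsive force on the probe that ramps up as it approaches
    (or penetrates) the plane; beyond the influence radius the force is zero."""
    n = plane_normal / np.linalg.norm(plane_normal)
    signed_dist = float(np.dot(probe_pos - plane_point, n))
    if signed_dist >= influence:
        return np.zeros(3)                  # too far away: no force
    depth = influence - signed_dist         # how far inside the influence shell
    return stiffness * depth * n            # push the probe back along the normal

# Example: probe 5 mm above the ground plane, inside a 20 mm influence shell.
print(distance_force(np.array([0.0, 0.005, 0.0]),
                     np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 1.0, 0.0])))
```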
|
173 |
Universal interaction and control in multiple display environments / Slay, Hannah. Unknown Date (has links)
This dissertation presents interaction techniques for ubiquitous computing environments equipped with multiple, heterogeneous display devices and with novel augmented reality visualisation. Ubiquitous computing work environments are typically enhanced with a range of display technologies, personal information appliances, speech and natural language interfaces, interaction devices and contextual sensors. Interaction in these environments introduces new challenges not previously encountered in shared display or single display environments. / This dissertation describes a number of novel contributions that improve the state of the art in human-computer interaction within ubiquitous computing environments. Firstly, an interaction model is provided that categorises interaction tasks performed in ubiquitous computing environments. When interacting across multiple displays, users typically require temporary storage of information to allow data to be copied between devices. The second contribution of this dissertation is a clipboard model for ubiquitous computing environments that allows users to perform this task. Thirdly, a number of infrastructure modules were created to support interaction within these environments. The modules developed include: an Interaction Manager that implements the interaction model to allow any device to be used to control displays in the environment; a Clipboard Manager to manage the creation of, and access to, ubiquitous computing clipboards as defined in the clipboard model; an Interaction Client, run on each controlled display, that implements the interaction tasks; and a rapidly adaptable tracking facility for ubiquitous computing environments. Fourthly, a Universal Interaction Controller was created to allow seamless interaction and control of displays in ubiquitous computing environments. With the Universal Interaction Controller, users select a display by pointing at it, and the interactions performed on the controller are then forwarded to the selected display via the Interaction Manager. The controller also provides access to a number of clipboards as defined by the clipboard model. Finally, this dissertation describes the results of a user study run to determine the intuitiveness of the Universal Interaction Controller in multiple display and single display environments, comparing users' performance with the device to their performance with the leading mobile pointing device and the traditional mouse. / Based on these contributions, two applications were developed to demonstrate how the infrastructure can be used in real-world situations. The first application demonstrates the use of a Universal Interaction Controller and a Clipboard Manager for information visualisation. The second application interfaces with the traditional system clipboard to allow ubiquitous computing clipboards to be defined and accessed through traditional desktop clipboard techniques. / Thesis (PhD Information Technology)--University of South Australia, 2005.
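A minimal sketch of the clipboard and interaction-routing ideas described above; the class and method names are illustrative assumptions, not the dissertation's APIs.

```python
class ClipboardManager:
    """Shared clipboard service: any device or display in the environment can
    write to or read from a named clipboard (names here are illustrative)."""
    def __init__(self):
        self._clipboards = {}

    def copy(self, clipboard_id, payload):
        self._clipboards[clipboard_id] = payload

    def paste(self, clipboard_id):
        return self._clipboards.get(clipboard_id)


class InteractionManager:
    """Forwards interaction events from a hand-held controller to whichever
    display the user has most recently selected, e.g. by pointing at it."""
    def __init__(self, displays):
        self.displays = displays            # display id -> event handler
        self.selected = None

    def select_display(self, display_id):
        self.selected = display_id

    def forward(self, event):
        if self.selected is not None:
            self.displays[self.selected](event)


# Example: copy on the wall display, paste on a tablet.
clipboard = ClipboardManager()
manager = InteractionManager({
    "wall": lambda e: print("wall display received", e),
    "tablet": lambda e: print("tablet received", e),
})
manager.select_display("wall")
clipboard.copy("shared-1", "sales chart Q3")
manager.select_display("tablet")
manager.forward({"type": "paste", "data": clipboard.paste("shared-1")})
```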
|
174 |
Implementation patterns for supporting learning and group interactions / Kutay, Cat, Computer Science & Engineering, Faculty of Engineering, UNSW. January 2006 (has links)
This thesis covers research on group learning using a computer as the medium. The software blends the students' contributions, augmented by effects that a system of agents generates for the specific learning domain in order to guide the students' learning. The research is based on the view that the computer as a medium is not the end point of the interaction. The development of the agents is based on Human-Computer-Human (HCH) interaction. HCH removes the idea that the computer plays the role of an intelligent agent and reduces its role to that of a mixer, able to insert adaptive software components that add extra effects and depth to the product of the human-human interactions. For the computer to provide this support, it must be able to analyse the input from individuals and from the group as a whole. Experiments were conducted on groups working face to face, and then on groups using software developed for the research. Patterns of interaction and learning were extracted from the logs and files of these group sessions, and a pattern language was developed to describe them, so that the agent support needed to analyse and respond appropriately to each pattern can be built. The research led to a structure that encompasses all the types of support required and provides the format for implementing each type of support. The main difficulty in this work is the limited ability of computers to infer human thoughts from actions; however, progress is made in analysing students' level of approach to a range of learning concepts. The research identified the separate patterns that contribute to the development of learning agents and form a language of learning processes, and the agents derived from these patterns could in future be linked into a multi-agent system to support learning.
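As a hedged illustration of how a detected interaction pattern might trigger agent support, the sketch below flags group members with a low share of contributions in a session log; the pattern, threshold, and log format are assumptions for illustration, not patterns taken from the thesis.

```python
from collections import Counter

def low_participation_pattern(log, threshold=0.2):
    """Illustrative pattern detector: flag members whose share of contributions
    falls below the threshold, so a support agent could prompt them."""
    counts = Counter(entry["author"] for entry in log)
    total = sum(counts.values())
    return [author for author, n in counts.items() if n / total < threshold]

session_log = [
    {"author": "ana", "text": "I think we should model the data first."},
    {"author": "ben", "text": "Agreed, start with the schema."},
    {"author": "ana", "text": "Here is a draft."},
    {"author": "ben", "text": "Looks good to me."},
    {"author": "ana", "text": "Chen, any thoughts?"},
    {"author": "chen", "text": "ok"},
]
for member in low_participation_pattern(session_log):
    print(f"agent: invite {member} to contribute")
```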
|
175 |
Visual recognition of hand motion / Holden, Eun-Jung. January 1997 (has links)
Hand gesture recognition has been an active area of research in recent years, with applications ranging from deaf sign recognition systems to human-machine interaction. The gesture recognition process may, in general, be divided into two stages: motion sensing, which extracts useful data from hand motion; and classification, which classifies the motion sensing data as gestures. Existing vision-based gesture recognition systems extract 2-D shape and trajectory descriptors from the visual input and classify them using techniques ranging from maximum likelihood estimation to neural networks, finite state machines, Fuzzy Associative Memory (FAM), and Hidden Markov Models (HMMs). This thesis presents the framework of the vision-based Hand Motion Understanding (HMU) system, which recognises static and dynamic Australian Sign Language (Auslan) signs by extracting and classifying 3-D hand configuration data from the visual input. The HMU system is a pioneering gesture recognition system that combines a 3-D hand tracker for motion sensing with an adaptive fuzzy expert system for classification. The HMU 3-D hand tracker extracts 3-D hand configuration data, consisting of the 21 degrees-of-freedom parameters of the hand, from the visual input of a single viewpoint, with the aid of a colour-coded glove. The tracker uses a model-based motion tracking algorithm that makes incremental corrections to the 3-D model parameters, re-configuring the model to fit the hand posture appearing in the images by means of a Newton-style optimisation technique. Finger occlusions are handled to a certain extent by recovering missing hand features in the images with a prediction algorithm. The HMU classifier then recognises the sequence of 3-D hand configuration data as a sign using an adaptive fuzzy expert system in which sign knowledge is encoded as inference rules. The classification is performed in two stages. First, for each image, the classifier recognises Auslan basic hand postures, which categorise the Auslan signs much as the alphabet does in English. Second, the sequence of Auslan basic hand postures appearing in the image sequence is analysed and recognised as a sign. Both posture and sign recognition are performed by the same adaptive fuzzy inference engine. The HMU rule base stores 22 Auslan basic hand postures and 22 signs. For evaluation, 44 motion sequences (2 for each of the 22 signs) were recorded. Of these, 22 randomly chosen sequences (1 for each of the 22 signs) were used for testing and the rest for training. Before training, the HMU system correctly recognised 20 of the 22 signs; after training, with the same test set, it recognised 21 correctly. In all failed cases the system produced no output. The evaluation demonstrates the feasibility of combining a 3-D hand tracker with an adaptive fuzzy expert system for vision-based sign language recognition.
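A toy sketch of the Newton-style correction at the heart of such a model-based tracker, with a two-parameter kinematic chain standing in for the 21 degree-of-freedom hand model; the residual function and finite-difference Jacobian are assumptions for illustration, not the HMU tracker's implementation.

```python
import numpy as np

def newton_style_step(params, residual_fn, eps=1e-5):
    """One incremental correction: linearise the image-fitting residuals around
    the current model parameters (finite-difference Jacobian) and solve the
    normal equations for the parameter update."""
    r = residual_fn(params)
    J = np.zeros((r.size, params.size))
    for j in range(params.size):
        p = params.copy()
        p[j] += eps
        J[:, j] = (residual_fn(p) - r) / eps
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return params + delta

# Toy stand-in for the hand model: two joint angles, observed fingertip at (0.9, 1.2).
observed = np.array([0.9, 1.2])

def residuals(angles):
    predicted = np.array([np.cos(angles[0]) + np.cos(angles[0] + angles[1]),
                          np.sin(angles[0]) + np.sin(angles[0] + angles[1])])
    return predicted - observed

angles = np.array([0.3, 0.3])
for _ in range(10):
    angles = newton_style_step(angles, residuals)
print(angles, residuals(angles))   # residuals approach zero as the model fits the observation
```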
|
176 |
Exploration of human-computer interaction challenges in designing software for mobile devices / Muhanna, Muhanna A. January 2007 (has links)
Thesis (M.S.)--University of Nevada, Reno, 2007. / "May, 2007." Includes bibliographical references (leaves 87-93). Online version available on the World Wide Web.
|
177 |
Large-scale display interaction techniques to support face-to-face collaboration : a thesis submitted in partial fulfilment of the requirements for the degree of Master of Science in Computer Science in the University of Canterbury / Thompson, David. January 2006 (has links)
Thesis (M. Sc.)--University of Canterbury, 2006. / Typescript (photocopy). Includes bibliographical references (leaves [120]-127). Also available via the World Wide Web.
|
178 |
Rendering optimizations guided by head-pose estimates and their uncertainty / Martínez, Javier E. January 2005 (has links)
Thesis (M.S.)--University of Nevada, Reno, 2005. / "August, 2005." Includes bibliographical references (leaves 54-55). Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2005]. 1 microfilm reel ; 35 mm.
|
179 |
Determination of the human perception threshold of phase difference between motion and visual cues in a moving-base flight simulator / Lee, Peter Tung Sing. January 2004 (has links)
Thesis (M.A. Sc.)--University of Toronto, 2004. / Advisers: P.R. Grant; L.D. Reid.
|
180 |
A framework for long-term human-robot interaction / Palathingal, Xavier P. January 2007 (has links)
Thesis (M.S.)--University of Nevada, Reno, 2007. / "May, 2007." Includes bibliographical references (leaves 44-46). Online version available on the World Wide Web. Library also has microfilm. Ann Arbor, Mich. : ProQuest Information and Learning Company, [2007]. 1 microfilm reel ; 35 mm.
|