
Human‐computer interaction in 3D object manipulation in virtual environments: A cognitive ergonomics contribution

It is proposed to investigate the cognitive processes involved in assembly/disassembly tasks, and then to apply the findings to the design of 3D virtual environments (VEs). Virtual environments are interactive systems that enable one or more users to interact with simulated objects and scenes, usually in three dimensions and in a realistic fashion, by means of computational techniques covering one or more sensory modalities (vision, touch/haptics, hearing, etc.). Often described as the ultimate direct manipulation interface, this technology seeks to make the interface eventually 'disappear' in order to provide users with a 'natural' mode of interaction. Virtual reality (VR) is the experience of being within a VE. One objective of VR technology is indeed to exploit natural human behaviour without requiring any learning from its users [Fuchs2003], [Bowman2005]. Moreover, VEs are a stimulating field of research because they involve perceptually and cognitively novel situations [Burkhardt2003]. VEs also offer considerable potential for innovative solutions to existing application problems. Among others, assembly tasks are a major focus for VEs [Boud2000], [Brooks1999], [Lok2003‐a], [Lok2003‐b], owing to their numerous potential applications, such as assembly/disassembly of objects and scientific research (e.g., molecular docking [Ferey2009]). The common feature in these VEs is the use of representations and devices that support users in handling and arranging several distinct elements in three-dimensional (3D) space under specific constraints. Most current devices and interaction techniques have focused on providing users with high-fidelity sensory stimulation, rather than targeting the real-life or task-centred functions associated with the corresponding interfaces. While many contributions have been made to the field of VR, only limited empirical data have been published.
We believe it is very unlikely that better-adapted VEs and better assistance for users' tasks - in the specific context of assembly tasks - will emerge merely by chance [Brooks1999], by repeated trials, by tuning what we already have at hand, or by more realistic sensory renderings, without any reference to the specific properties of the tasks, including their cognitive dimension. Consequently, a clear picture of the cognitive processes and constraints in real tasks involving spatial manipulation should lead to a significant enhancement of users' interactions with VEs. This enhancement can be achieved by creating better or new guidance mechanisms (e.g., video feedback, object collision detection, or avoidance mechanisms) adapted to users' goals and strategies. This project therefore involves work both on the cognitive side and on its implications for 3D interaction in industrial VEs. The objective of this doctoral work is to contribute to a better understanding of human factors (HF) - including performance and cognitive processes - related to assisting spatial 3D manipulation and problem-solving in assembly/disassembly tasks in VEs. For that purpose, we compared the performance and strategies of subjects as they solved a simplified spatial task requiring them to assemble pieces into a specified shape, under various interaction conditions in real and virtual environments. The assembly task chosen was neither very easy, such as a peg-in-a-hole task as in [Zhang2005], [Pettinaro1999], or [Unger2001], nor highly complex and specific, such as performing open-heart or liver surgery [Torkington2001] (whose results could be applied only to that specific kind of task). The chosen task was of intermediate complexity: users were required to construct a 3D cube from seven rectangular blocks of different sizes and shapes. The methodology had two tiers: real and virtual.
For the chosen assembly task, a study was first conducted in real settings to provide inspiration, input, and insight for the main experiment. The main experiment followed the same design but was conducted in virtual settings, using three interaction modalities: the classical keyboard-mouse, gestures, and voice.

Identifier: oai:union.ndltd.org:CCSD/oai:tel.archives-ouvertes.fr:tel-00603331
Date: 26 November 2010
Creators: Abbasi, Sarwan
Publisher: Université Paris Sud - Paris XI
Source Sets: CCSD theses-EN-ligne, France
Language: English
Type: PhD thesis