Precueing Manual Tasks in Augmented and Virtual Reality

Work on Virtual Reality (VR) and Augmented Reality (AR) task interaction and visualization paradigms has typically focused on providing information about the current task step (a cue) immediately before or during its performance. For sequential tasks that involve multiple steps, providing information about the next step (a precue) might also benefit the user. Some research has shown the advantages of simultaneously providing a cue and a precue in path-following tasks. We explore the use of precues in VR and AR for both path-following and object-manipulation tasks involving rotation. We address the effectiveness of different numbers and kinds of precues for different tasks. To achieve this, we conducted a series of user studies:

First, we investigate whether it would be possible to improve efficiency by precueing information about multiple upcoming steps before completing the current step in a planar path-following task. To accomplish this, we developed a VR user study comparing task completion time and subjective metrics for different levels and styles of precueing. Our task-guidance visualizations vary the precueing level (number of steps precued in advance) and style (whether the path to a target is communicated through a line to the target, and whether the place of a target is communicated through graphics at the target). Participants in our study performed best when given two to three precues for visualizations using lines to show the path to targets. However, performance degraded when four precues were used. On the other hand, participants performed best with only one precue for visualizations without lines, showing only the places of targets, and performance degraded when a second precue was given. In addition, participants performed better using visualizations with lines than ones without lines.

Second, we extend the idea of precueing information about multiple steps to a more complex task, whose subtasks involve moving to and picking up a physical object, moving that object to a designated place in the same plane while rotating it to a specific angle in the plane, and depositing it. We conducted two user studies to examine how people accomplish this task while wearing an AR headset, guided by different visualizations that cue and precue movement and rotation. Participants performed best when given movement information for two successive subtasks (one movement precue) and rotation information for a single subtask (no rotation precue). In addition, participants performed best when the visualization of how much to rotate was split across the manipulated object and its destination.

Third, we investigate whether and how much precued rotation information might improve user performance in AR. We consider two unimanual tasks: one requires a participant to make sequential rotations of a single physical object in a plane, and the other requires the participant to move their hand between multiple such objects to rotate them in the plane in sequence. We conducted a user study to explore these two tasks using circular arrows to communicate rotation. In the single-object task, we examined the impact of number of precues and visualization style on participant performance. Results show that precues could improve performance and that arrows with highlighted heads and tails, with each rotation destination aligned with the next origin, yielded the shortest completion time on average. In the multiple-object task, we explored whether rotation precues can be helpful in conjunction with movement precues. Here, using a rotation cue without rotation precues in conjunction with a movement cue and movement precues performed the best, implying that rotation precues were not helpful when movement was also required.

Fourth, we address sequential tasks involving 3DoF rotations and 3DoF translations in headset AR. In each step, a participant picks up a physical object, rotates it in 3D while translating it in 3D, and deposits it in a target 6DoF pose. We designed and compared two types of visualizations for cueing and precueing steps in such a task: action-based visualizations, which show the actions needed to carry out a step, and goal-based visualizations, which show the desired end state of a step. We conducted a user study to evaluate these visualizations and their efficacy for precueing. Participants performed better with goal-based visualizations than with action-based visualizations, and most effectively with goal-based visualizations aligned with the Euler axis. However, only a few of our participants benefited from precues, possibly because of the cognitive load of 3D rotations.

In summary, we showed that using precueing can improve the speed at which participants perform different types of tasks. In our VR path-following task, participants were able to benefit from two to three precues using lines to show the path to targets. In our object-manipulation task with 2DoF movement and 1DoF rotation, participants performed best when given movement information for two successive subtasks and rotation information for a single subtask. Further, in our later study focusing on rotation, we found that participants were able to use rotation precues in our single-object task, while in the multiple-object task, rotation precues were not beneficial to participants. Finally, in a study on a sequential 6DoF task, participants performed better with goal-based visualizations than with action-based visualizations.

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/83v2-xq22
Date: January 2024
Creators: Liu, Jen-Shuo
Source Sets: Columbia University
Language: English
Type: Theses