
Characterizing Mental Workload in Physical Human-Robot Interaction Using Eye-Tracking Measures

Recent technological developments have ushered in an exciting era for collaborative robots (cobots), which can operate in close proximity to humans, sharing and supporting task goals. While there is increasing research on the biomechanical and ergonomic consequences of using cobots, there is relatively little work on the potential motor-cognitive demand associated with these devices. These cognitive demands stem primarily from the need to form accurate internal (mental) models of robot behavior, while also dealing with the intrinsic motor-cognitive demands of physical co-manipulation tasks and visually monitoring the environment to ensure safe operation. The primary aim of this work was to investigate the viability of eye-tracking measures for characterizing mental workload during the use of cobots, while accounting for the potential effects of learning, task type, expertise, and age differences. While eye-tracking is gaining traction in the surgical and rehabilitation robotics domains, systematic investigations of eye-tracking for studying interactions with industrial cobots are currently lacking. We conducted three studies in which participants of different ages and expertise levels learned, over multiple trials, to perform upper- and lower-limb tasks using a dual-armed cobot and a whole-body powered exoskeleton, respectively. Robot-control difficulty was manipulated by changing the joint impedance on one of the arms of the dual-armed cobot.
The first study demonstrated that when individuals were learning to interact with a dual-armed cobot to perform an upper-limb co-manipulation task simulated in a virtual reality (VR) environment, pupil dilation (PD) and stationary gaze entropy (SGE) were the most sensitive and reliable measures of mental workload. A combination of eye-tracking measures predicted performance with greater accuracy than experimental task variables. Measures of visual attentional focus were more sensitive to task-difficulty manipulations than typical eye-tracking workload measures, and PD was most sensitive to changes in workload over learning. The second study showed that, compared to walking freely, walking while using a complex whole-body powered exoskeleton: a) increased the PD of novices but not experts, b) reduced SGE in both groups, and c) led to greater downward-focused gaze (on the walking path) in experts compared to novices. In the third study, using an upper-limb co-manipulation task similar to that of Study 1, we found that the PD of younger adults decreased at a faster rate over learning than that of older adults, and that older adults showed a significantly greater drop in gaze transition entropy with an increase in task difficulty, compared to younger adults. PD was also sensitive to learning and robot difficulty but not to environmental complexity (collisions with objects in the task environment), whereas gaze-behavior measures were generally more sensitive to environmental complexity.
This research is the first to conduct a comprehensive analysis of mental workload in physical human-robot interaction (pHRI) using eye-tracking measures. PD consistently showed larger effects over learning than over task difficulty. Gaze-behavior measures quantifying visual attention towards environmental areas of interest showed relatively large effects of task difficulty and should continue to be explored in future research. While walking in a powered exoskeleton, both novices and experts exhibited compensatory gaze strategies. This finding highlights potentially persistent effects of using cobots on visual attention, with potential implications for safety and situational awareness. Older adults applied greater mental effort (indicated by sustained PD) and followed more constrained gaze patterns in order to maintain levels of performance similar to those of younger adults. Perceived workload measures could not capture these age differences, highlighting the advantages of eye-tracking measures. Lastly, the differential sensitivity of pupillary and gaze-behavior metrics to different types of task demands highlights the need for future research to employ both kinds of measures for evaluating pHRI. Important questions for future research are the potential sensitivity of eye-tracking workload measures over long-term adaptation to cobots, and the potential generalizability of eye-tracking measures to real-world (non-VR) tasks.

Doctor of Philosophy

Collaborative robots (cobots) are an exciting and novel technology that may be used to assist human workers in manual industrial work, reduce physical demand, and potentially enable older adults to re-enter the workforce. However, relatively little is known about the potential cognitive demands that cobots may impose on the human user.
Although intended to assist humans, some cobots have been found to be difficult to use because of the time and effort needed to learn their control dynamics (i.e., how to physically control them to perform a complex manual task). Thus, it is important to better understand the mental demand/workload that a human operator may experience while using a cobot, and how this demand may vary over time as the operator learns to use the cobot. Eye-tracking is a promising technique for measuring a cobot operator's mental workload, since it can provide measures that correlate with the involuntary physiological response to mental workload (e.g., pupil dilation, PD), as well as capture the voluntary gaze strategies (e.g., the durations and patterns of where people look) used to perform a physical/motor task. Eye-tracking measures may be used to continuously and precisely evaluate whether a cobot imposes excessive workload on the human operator; if high workload is observed, the cobot may be programmed to adapt its behavior to reduce it. Although eye-tracking is gaining traction in the surgical and rehabilitation robotics domains, systematic investigations of eye-tracking for studying interactions with industrial cobots are currently lacking. We designed three studies in which we investigated 1) the ability of eye-tracking measures to capture changes in mental workload while participants learned to use a cobot at different difficulty levels, 2) the changes in pupil diameter and gaze behavior when participants walked while wearing a whole-body powered exoskeleton as opposed to walking freely, and potential differences between novice and expert exoskeleton users, and 3) the differences in mental workload and visual attention between younger and older adults while learning to use a cobot. The first and third studies used virtual reality (VR) to simulate the task environment, allowing precise control over the presentation of stimuli.
In Study 1, we found that at higher difficulty levels, participants' pupils were significantly more dilated (i.e., participants experienced higher mental workload) than at lower difficulty levels. PD also gradually decreased as participants learned to better perform the task. In difficult task conditions, participants gazed more frequently at the robot and showed higher randomness (entropy) in their gaze patterns. The proportion of gaze falling on certain objects was at least as sensitive an indicator of task difficulty as PD and gaze entropy. In Study 2, we found that walking in a whole-body exoskeleton was cognitively demanding, but only for novice participants. However, both novice and expert participants changed their gaze patterns while walking in the exoskeleton: both groups lowered their gaze and focused on the walking path to a greater extent than when walking freely. Lastly, in Study 3, we found that older adults applied greater mental effort to maintain levels of performance similar to those of younger adults. Older adults also exhibited more repetitive scanning patterns than younger adults when task difficulty increased, possibly due to an age-related reduction in the capacity to control attention. Our work demonstrates that eye-tracking measures are sensitive and reliable metrics of workload, and that different metrics are sensitive to different sources of workload: PD was sensitive to robot difficulty, whereas measures of visual attention were generally more sensitive to the complexity of the task environment. Important questions for future research are the potential changes in eye-tracking workload measures over longer periods of learning to use cobots, and how these results generalize to real-world tasks not performed in virtual reality.
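The abstract does not spell out how the gaze-entropy measures are computed. Under a common convention (not stated in this record, so treat it as an assumption): stationary gaze entropy is the Shannon entropy of the distribution of fixations over areas of interest (AOIs), and gaze transition entropy is the conditional entropy of AOI-to-AOI transitions, so that lower values indicate more constrained, repetitive scanning. A minimal sketch of both, using a hypothetical AOI fixation sequence:

```python
import math
from collections import Counter

def stationary_gaze_entropy(fixation_aois):
    """Shannon entropy (bits) of the fixation distribution over AOIs.
    Higher values = gaze spread more evenly across AOIs."""
    counts = Counter(fixation_aois)
    total = len(fixation_aois)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def gaze_transition_entropy(fixation_aois):
    """Conditional entropy (bits) of AOI-to-AOI transitions, weighted by
    the joint transition probabilities. 0 = fully predictable scanning."""
    transitions = Counter(zip(fixation_aois, fixation_aois[1:]))
    source_counts = Counter(fixation_aois[:-1])
    n = len(fixation_aois) - 1
    h = 0.0
    for (src, dst), c in transitions.items():
        p_joint = c / n                   # p(src, dst)
        p_cond = c / source_counts[src]   # p(dst | src)
        h -= p_joint * math.log2(p_cond)
    return h

# Hypothetical fixation sequence over three AOIs from a co-manipulation trial
seq = ["robot", "path", "robot", "object", "path", "robot"]
sge = stationary_gaze_entropy(seq)
gte = gaze_transition_entropy(seq)
```

A strictly alternating sequence such as `["a", "b", "a", "b"]` yields a transition entropy of 0 (each AOI fully predicts the next) even though its stationary entropy is maximal, which illustrates why the two measures can dissociate, as in the age-difference findings above.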

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/115664
Date: 06 July 2023
Creators: Upasani, Satyajit Abhay
Contributors: Industrial and Systems Engineering, Gabbard, Joseph L., Srinivasan, Divya, Leonessa, Alexander, Nussbaum, Maury A., Lau, Nathan Ka Ching
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Language: English
Detected Language: English
Type: Dissertation
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
