
Transparency, trust, and level of detail in user interface design for human autonomy teaming

Effective collaboration between humans and autonomous agents can improve productivity and reduce risks to human operators in safety-critical situations, with autonomous agents working as complementary teammates that lower physical and mental demands by providing assistance and recommendations in complicated scenarios. Ineffective collaboration, by contrast, carries drawbacks: the risk of being out-of-the-loop when control is handed over, increased time and workload from the additional communication and situation assessment required, unexpected outcomes due to overreliance, and disuse of autonomy due to uncertainty and low expectations. Disclosing information about the agents for communication and collaboration is one approach to calibrating trust for appropriate reliance and overcoming these drawbacks in human-autonomy teaming. When disclosing agent information, the level of detail (LOD) needs careful consideration because changing LOD alters not only the availability of information but also the demand for processing it, with unintended consequences for comprehension, workload, and task performance.
This dissertation investigates how visualization design at different LODs about autonomy influences transparency, trust, and, ultimately, the effectiveness of human-autonomy teaming (HAT) in search and rescue (SAR) missions. LOD denotes the amount of information aggregated or organized in communication for the human to perceive, comprehend, and respond to, and can be manipulated by changing the granularity of information in a user interface. A high LOD delivers less information, so that users can identify an overview and the key information about autonomy, while a low LOD delivers information in greater detail. The objectives of this research were (1) to build a simulation platform for a representative HAT task affected by visualizations at different LODs about autonomy, (2) to establish the empirical relationship between LOD and transparency, given the potential for information overload with indiscriminate exposure, and (3) to examine how to adapt LOD in visualization with respect to trust as users interact with autonomy over time. A web-based application was developed for wilderness SAR that supports different visualizations of the lost-person model, the unmanned aerial vehicle (UAV) path-planner, and task assignment. Two empirical studies were conducted in which human participants collaborated with autonomous agents, making decisions on search area assignment, UAV path planning, and object detection. The empirical data included objective measures of task performance and compliance, subjective ratings of transparency, trust, and workload, and qualitative interview data about the designs from students and SAR professionals.
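To make the LOD manipulation concrete, the minimal Python sketch below illustrates one way an interface might map LOD tiers to the agent information it displays; the tier names and fields are hypothetical illustrations, not the dissertation's actual design. Note the document's convention that a higher LOD shows less detail.

    # Hypothetical mapping from LOD tier to displayed agent information.
    # Convention follows the dissertation: high LOD = less detail shown.
    LOD_FIELDS = {
        "high":   ["recommended_search_area"],                  # overview only
        "middle": ["recommended_search_area",
                   "lost_person_probability_map"],              # adds rationale
        "low":    ["recommended_search_area",
                   "lost_person_probability_map",
                   "uav_path_waypoints",
                   "detection_confidence"],                     # full detail
    }

    def fields_to_render(lod: str) -> list[str]:
        """Return the agent-information fields to draw for a given LOD tier."""
        return LOD_FIELDS[lod]

    print(fields_to_render("middle"))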
The first study revealed that lowering LOD (i.e., providing more detail) does not lead to a proportional increase in transparency ratings, trust, workload, accuracy, or speed. Transparency increased as LOD decreased up to a point and then declined, providing empirical evidence for the transparency paradox phenomenon. Further, lowering the LOD about autonomy can promote trust, but with diminishing returns: trust plateaus even as LOD decreases further. This suggests that simply presenting some information about autonomy can build trust quickly, as users may perceive any reasonable form of disclosure as a sign of benevolence or good etiquette that promotes trust. Transparency appears more sensitive to LOD than trust, likely because trust is conceptually less connected to the understanding of autonomy than transparency is. In addition, the impacts of LOD were not uniform across the human performance measurements. The visualization with the lowest LOD yielded the highest decision accuracy but the slowest decision speed, with intermediate levels of workload, transparency, and trust. LOD can thus induce a speed-accuracy trade-off: as LOD decreases, more cognitive resources are needed to process the increased amount of information, and processing speed decreases accordingly.
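One hedged way to summarize this pattern is an inverted-U model of transparency as a function of displayed detail; the quadratic form below is an illustrative assumption, not a model fitted in the dissertation.

    % Illustrative inverted-U: transparency T as a function of displayed
    % detail d (detail rises as LOD falls); c < 0 yields a peak at d*.
    T(d) = a + b\,d + c\,d^{2}, \qquad c < 0, \qquad d^{*} = -\frac{b}{2c}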
The second study revealed patterns of overall and instantaneous trust with respect to visualization at different LODs. For static visualizations, the lowest LOD resulted in higher transparency ratings than the middle and high LODs. The lowest LOD also generated the highest overall trust among the static and adaptive LODs. For visualizations at all LODs, instantaneous trust increased and then stabilized after a series of interactions, but the rate of change and the plateau varied with LOD and between the static and adaptive modes. The lowest, middle, and adaptive LODs followed a sigmoid curve, while the high LOD followed a linear one. Among the static LODs, the lowest LOD exhibited the highest growth rate and plateau in trust; the middle LOD developed trust the slowest and reached the lowest plateau; the high LOD showed linear growth up to a level similar to that of the lowest LOD. The adaptive LOD earned participants' trust at a speed and plateau very similar to those of the lowest LOD. Taken together, these results indicate that more detail about autonomy is effective for expediting trust building, as long as the amount of information is managed carefully to avoid overloading participants' information processing. Further, varying the quantity of information in the adaptive mode yielded very similar trust growth and plateau without committing users to either the minimum or maximum amount of information. This adaptive approach could prevent situations where comprehension is hindered by insufficient information or where users are overloaded by details. Adapting LOD to instantaneous trust is therefore a promising technique for managing information exchange, promoting efficient communication for building trust.
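The abstract does not specify the adaptation algorithm, but a minimal Python sketch of one plausible trust-adaptive rule is shown below: present more detail while instantaneous trust lags, and back off to an overview once trust stabilizes. The logistic trust curve, thresholds, and function names are assumptions for illustration, not the dissertation's actual method.

    # Illustrative trust-adaptive LOD loop (not the dissertation's algorithm).
    import math

    def instantaneous_trust(n: int, t_max: float = 0.9,
                            k: float = 0.8, n0: float = 5.0) -> float:
        """Illustrative sigmoid trust growth over n interactions, echoing the
        sigmoid curves observed for the lowest and adaptive LODs."""
        return t_max / (1.0 + math.exp(-k * (n - n0)))

    def adapt_lod(trust: float, low: float = 0.4, high: float = 0.7) -> str:
        """Hypothetical rule: more detail while trust is low, an overview
        once trust has stabilized, to limit transmitted information."""
        if trust < low:
            return "low"     # most detail, to speed trust building
        if trust > high:
            return "high"    # overview only, to ease information load
        return "middle"

    for n in range(0, 11, 2):
        t = instantaneous_trust(n)
        print(n, round(t, 2), adapt_lod(t))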
The contribution of this research to the literature is twofold. The first study provides the first empirical evidence that the impact of LOD on transparency and trust is not linear, which had not been explicitly demonstrated in prior HAT studies. Transparency is more sensitive to LOD than trust, calling for a more precise and consistent use of the concept of "transparency" and a deeper investigation into the relationship between trust and transparency. The second study presents the first examination of how static and dynamic LODs influence the development of trust toward autonomy. The algorithm for adapting LOD in the adaptive visualization based on user trust is novel, and adaptive LODs in visualization can switch between detailed and abstract information to influence trust without always transmitting all the details about autonomy. Visualizations at different LODs, in both static and adaptive modes, present their own sets of benefits and drawbacks, resulting in trade-offs between the speed of promoting trust and the quantity of information transmitted during communication. These findings indicate that LOD is an important factor for designing and analyzing visualizations for transparency and trust in HAT.

Doctor of Philosophy

The collaboration between human and autonomous agents in search and rescue (SAR) missions aims to improve the success rate and speed of finding a lost person. In these missions, a human supervisor may coordinate with autonomous agents responsible for estimating lost-person behavior, planning paths, and operating unmanned aerial vehicles. The human SAR professional may rely on information from the autonomous agents to reinforce the search plan and make crucial decisions. Balancing the amount of information the autonomous agents provide to SAR professionals is critical: insufficient information can hinder trust, leading to manual intervention, while excessive information can cause information overload, reducing efficiency. Both cases can result in human distrust of autonomy. Effective visualization of information can help study and improve the transmission of information between humans and autonomous agents. This approach can reduce unnecessary information in communication, conserving communication resources without sacrificing trust.
This dissertation investigates how visualization design at the proper aggregation of detail about autonomy, referred to as level of detail (LOD), influences perceived understanding of the autonomous agents (i.e., transparency), trust, and, ultimately, the effectiveness of human-autonomy teaming (HAT) for wilderness SAR. A simulation platform was built as a proof of concept, and two studies were conducted in which human participants used the platform to complete simulated SAR tasks supported by visualizations at different LODs about autonomy. Study 1 showed that transparency ratings increased with more detail about autonomy up to a point and then declined with the most detail (i.e., the lowest LOD). Trust, workload, and performance also did not improve linearly with more detail about autonomy. These non-linear relationships of LOD with transparency, trust, workload, and performance confirmed the transparency paradox, the phenomenon whereby disclosing excessive information about autonomy may hinder transparency and subsequent performance. Study 2 illustrated that when the visualization's LOD adapted to instantaneous trust, the speed of building trust and the plateau of trust in autonomy reached the same levels as the visualization with the most detail, which performed best at building trust. This adaptive approach minimized the amount of information displayed relative to the visualization that constantly presented the most information, potentially easing the burden of communication. Taken together, this research highlights that the amount of information about autonomy to display must be considered carefully in both research and practice. Further, this dissertation advances visualization design by illustrating that adapting LOD based on trust is effective at building trust while minimizing the amount of information presented to the user.

Identifier: oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/116632
Date: 03 November 2023
Creators: Wang, Tianzi
Contributors: Industrial and Systems Engineering, Lau, Nathan Ka Ching, Jeon, Myounghoon, Williams, Ryan K., Gabbard, Joseph L.
Publisher: Virginia Tech
Source Sets: Virginia Tech Theses and Dissertation
Language: English
Detected Language: English
Type: Dissertation
Format: ETD, application/pdf
Rights: In Copyright, http://rightsstatements.org/vocab/InC/1.0/
