1. Transparency, trust, and level of detail in user interface design for human autonomy teaming
Wang, Tianzi, 03 November 2023
Effective collaboration between humans and autonomous agents can improve productivity and reduce risks to human operators in safety-critical situations, with autonomous agents working as complementary teammates that lower physical and mental demands by providing assistance and recommendations in complicated scenarios. Ineffective collaboration has drawbacks, such as the risk of being out of the loop when control is handed over, increased time and workload due to the additional need for communication and situation assessment, unexpected outcomes due to overreliance, and disuse of autonomy due to uncertainty and low expectations. Disclosing information about the agents for communication and collaboration is one approach to calibrating trust for appropriate reliance and overcoming these drawbacks in human-autonomy teaming. When disclosing agent information, the level of detail (LOD) needs careful consideration because not only the availability of information but also the demand for information processing changes, resulting in unintended consequences for comprehension, workload, and task performance.
This dissertation investigates how visualization design at different LODs about autonomy influences transparency, trust, and, ultimately, the effectiveness of human-autonomy teaming (HAT) in search and rescue (SAR) missions. LOD indicates the amount of information aggregated or organized in communication for the human to perceive, comprehend, and respond to, and can be manipulated by changing the granularity of information in a user interface. A high LOD delivers less information so that users can identify an overview and the key information about the autonomy, while a low LOD delivers information in a more detailed manner. The objectives of this research were (1) to build a simulation platform for a representative HAT task affected by visualizations at different LODs about autonomy, (2) to establish the empirical relationship between LOD and transparency, given potential information overload with indiscriminate exposure, and (3) to examine how to adapt LOD in visualization with respect to trust as users interact with autonomy over time. A web-based application was developed for wilderness SAR that supports different visualizations of the lost-person model, unmanned aerial vehicle (UAV) path planner, and task assignment. Two empirical studies were conducted in which human participants collaborated with autonomous agents, making decisions on search area assignment, UAV path planning, and object detection. The empirical data included objective measures of task performance and compliance, subjective ratings of transparency, trust, and workload, and qualitative interview data about the designs from students and SAR professionals.
The first study revealed that lowering LOD (i.e., providing more details) does not lead to proportional increases in transparency ratings, trust, workload, accuracy, or speed. Transparency increased as LOD decreased up to a point and then declined, providing empirical evidence for the transparency paradox phenomenon. Further, lowering the LOD about autonomy can promote trust, but with diminishing returns, plateauing even as LOD is lowered further. This suggests that simply presenting some information about autonomy can build trust quickly, as users may perceive any reasonable form of disclosure as a sign of benevolence or good etiquette that promotes trust. Transparency appears more sensitive to LOD than trust, likely because trust is conceptually less connected to the understanding of autonomy than transparency is. In addition, the impacts of LOD were not uniform across the human performance measurements. The visualization with the lowest LOD yielded the highest decision accuracy but the slowest decision speed, with intermediate levels of workload, transparency, and trust. LOD can thus induce a speed-accuracy trade-off: as LOD decreases, more cognitive resources are needed to process the increased amount of information, so processing speed decreases accordingly.
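To make the shape of these relationships concrete, a minimal sketch in Python follows, assuming an inverted-U (quadratic) form for transparency and a saturating form for trust; the functional forms, parameters, and detail scale are illustrative assumptions, not the models fitted in the dissertation.

```python
import numpy as np

# Hypothetical detail scale (higher = more detail, i.e., a lower LOD).
detail = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Inverted-U model of transparency: ratings rise with added detail,
# peak, and then decline (the transparency paradox). Coefficients
# are illustrative only; the peak here falls at d = -b / (2a) = 3.75.
def transparency(d, a=-0.4, b=3.0, c=1.0):
    return a * d**2 + b * d + c

# Saturating model of trust: diminishing returns that plateau as
# more detail is disclosed. Parameters are likewise assumptions.
def trust(d, ceiling=7.0, rate=0.8):
    return ceiling * (1.0 - np.exp(-rate * d))

print(transparency(detail))  # rises to ~6.6 near d = 3.75, then falls to 6.0
print(trust(detail))         # climbs toward the 7.0 ceiling ever more slowly
```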
The second study revealed patterns of overall and instantaneous trust with respect to visualizations at different LODs. For static visualizations, the lowest LOD resulted in higher transparency ratings than the middle and high LODs. The lowest LOD also generated the highest overall trust among the static and adaptive LODs. For visualizations at all LODs, instantaneous trust increased and then stabilized after a series of interactions. However, the rate of change and the plateau of trust varied with LOD and between static and adaptive modes. The lowest, middle, and adaptive LODs followed a sigmoid curve, while the high LOD followed a linear one. Among the static LODs, the lowest LOD exhibited the highest growth rate and plateau in trust. The middle LOD developed trust the slowest and reached the lowest plateau. The high LOD showed linear growth up to a level similar to that of the lowest LOD. The adaptive LOD earned participants' trust at a speed and plateau very similar to the lowest LOD. Taken together, these results indicate that more details about autonomy are effective for expediting trust building, as long as the amount of information is carefully managed to avoid overloading participants' information processing. Further, varying the quantity of information in adaptive mode could yield very similar growth and plateau in trust, sparing humans from having to deal with either the minimum or the maximum amount of information at all times. This adaptive approach could prevent situations where comprehension is hindered by insufficient information or where users are overloaded by details. Adapting LOD to instantaneous trust presents a promising technique for managing information exchange that can promote the efficiency of communication for building trust.
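A minimal sketch of these trust dynamics, assuming a logistic (sigmoid) growth curve for instantaneous trust and a simple threshold rule for adapting LOD to it; the parameters, thresholds, and update rule are hypothetical stand-ins, not the dissertation's adaptive algorithm.

```python
import math

def sigmoid_trust(t, plateau=6.5, rate=0.9, midpoint=5.0):
    """Logistic growth of instantaneous trust over interaction t
    (plateau, rate, and midpoint are illustrative parameters)."""
    return plateau / (1.0 + math.exp(-rate * (t - midpoint)))

def adapt_lod(instant_trust, low=3.0, high=5.5):
    """Toy adaptation rule: show full detail while trust is still
    forming, then back off to more abstract views as trust stabilizes."""
    if instant_trust < low:
        return "low LOD (full detail)"
    elif instant_trust < high:
        return "middle LOD"
    return "high LOD (overview only)"

for t in range(0, 12, 2):
    level = sigmoid_trust(t)
    print(f"interaction {t:2d}: trust={level:4.2f} -> {adapt_lod(level)}")
```

Under these assumed parameters, the simulated user sees full detail early, a middle LOD through the steep part of the curve, and an overview once trust plateaus, which is the qualitative behavior the study attributes to adaptive LOD.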
The contribution of this research to the literature is twofold. The first study provides the first empirical evidence that the impact of LOD on transparency and trust is nonlinear, which had not been explicitly demonstrated in prior HAT studies. The impact of LOD is stronger on transparency than on trust, calling for a more clearly defined and consistent use of the concept of "transparency" and a deeper investigation into the relationship between trust and transparency. The second study presents the first examination of how static and dynamic LODs influence the development of trust toward autonomy. The algorithm for adapting the LOD of the visualization based on user trust is novel, and adaptive LODs could switch between detailed and abstract information to influence trust without always transmitting all the details about autonomy. Visualizations at different LODs in both static and adaptive modes present their own benefits and drawbacks, resulting in trade-offs between the speed of promoting trust and the quantity of information transmitted during communication. These findings indicate that LOD is an important factor in designing and analyzing visualizations for transparency and trust in HAT.

/ Doctor of Philosophy /

The collaboration between humans and autonomous agents in search and rescue (SAR) missions aims to improve the success rate and speed of finding a lost person. In these missions, a human supervisor may coordinate with autonomous agents responsible for estimating lost-person behavior, planning paths, and operating unmanned aerial vehicles. The human SAR professional may rely on information from the autonomous agents to reinforce the search plan and make crucial decisions. Balancing the amount of information provided by the autonomous agents to SAR professionals is critical: insufficient information can hinder trust, leading to manual intervention, while excessive information can cause information overload, reducing efficiency. Both cases can result in human distrust of autonomy. Effective visualization of information can help study and improve the transmission of information between humans and autonomous agents. This approach can reduce unnecessary information in communication, conserving communication resources without sacrificing trust.
This dissertation investigates how visualization design at the proper aggregation of details about autonomy, referred to as level of detail (LOD), influences perceived understanding of the autonomous agents (i.e., transparency), trust, and, ultimately, the effectiveness of human-autonomy teaming (HAT) for wilderness SAR. A simulation platform was built as a proof of concept, and two studies were conducted in which human participants used the platform to complete simulated SAR tasks supported by visualizations at different LODs about autonomy. Study 1 results showed that transparency ratings increased with more details about autonomy up to a point and then declined with the most details (i.e., the lowest LOD). Trust, workload, and performance also did not improve linearly with more details about autonomy. The nonlinear relationships of LOD with transparency, trust, workload, and performance confirmed the transparency paradox: the phenomenon whereby disclosing excessive information about autonomy may hinder transparency and subsequent performance. Study 2 results further illustrated that when the visualization's LOD adapted to instantaneous trust, the speed of building trust and the plateau of trust in autonomy reached the same levels as the visualization with the most details, which performed best at building trust. This adaptive approach minimized the amount of information displayed relative to the visualization that constantly presented the most information, potentially easing the burden of communication. Taken together, this research highlights that the amount of information about autonomy to display must be considered carefully in both research and practice. Further, this dissertation advances visualization design by illustrating that adapting LOD based on trust is effective at building trust while minimizing the amount of information presented to the user.
2. Communication Networks and Team Workload in a Command and Control Synthetic Task Environment
January 2020
Despite the prevalence of teams in complex sociotechnical systems, current approaches to understanding workload tend to focus on the individual operator. However, research suggests that team workload has emergent properties and is not necessarily equivalent to the aggregate of individual workload. Assessment of communications provides a means of examining aspects of team workload in highly interdependent teams. This thesis set out to explore how communications are associated with team workload and performance under high task demand in all-human and human-autonomy teams in a command and control task. A social network analysis approach was used to analyze the communications of 30 different teams, each with three members, operating in a command and control task environment over a series of five missions. Teams were assigned to conditions differentiated by their composition, with either a naïve participant, a trained confederate, or a synthetic agent in the pilot role. Social network analysis measures of centralization and intensity were used to assess differences in communications between team types and under different levels of demand, and relationships between communication measures, performance, and workload distributions were also examined. Results indicated that indegree centralization was greater in the all-human control teams than in the other team types, but degree centrality standard deviation and intensity were greatest in teams with a highly trained experimenter pilot. In all three team types, the intensity of communications and degree centrality standard deviation appeared to decrease during the high-demand mission, but indegree and outdegree centralization did not. Higher communication intensity was associated with more efficient target processing and more successful target photos per mission, but a clear relationship between measures of performance and decentralization of communications was not found. / Masters Thesis, Human Systems Engineering, 2020
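As an illustration of the network measures named above, the following sketch computes Freeman-style indegree centralization and a mean-edge-weight notion of intensity for a hypothetical three-role communication network using networkx; the example data and the exact operationalization of intensity are assumptions, and the thesis's measures may be defined differently.

```python
import networkx as nx

# Invented weighted, directed communication network for one mission:
# edge weights are message counts between the three team roles.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("navigator", "pilot", 12), ("pilot", "navigator", 9),
    ("photographer", "pilot", 7), ("pilot", "photographer", 5),
])

def indegree_centralization(g):
    """Freeman centralization of indegree: 1.0 when all communication
    converges on a single node, 0.0 when perfectly even."""
    n = g.number_of_nodes()
    cent = nx.in_degree_centrality(g)  # indegree / (n - 1) per node
    c_max = max(cent.values())
    return sum(c_max - c for c in cent.values()) / (n - 1)

def intensity(g):
    """Mean edge weight: average message volume per communication link
    (one plausible operationalization of intensity)."""
    weights = [d["weight"] for _, _, d in g.edges(data=True)]
    return sum(weights) / len(weights)

print(indegree_centralization(G))  # 0.5: traffic converges on the pilot
print(intensity(G))                # 8.25 messages per link
```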
3. The Impact of Coordination Quality on Coordination Dynamics and Team Performance: When Humans Team with Autonomy
January 2017
The increasing role of highly automated and intelligent systems as team members has started a paradigm shift from human-human teaming to Human-Autonomy Teaming (HAT). However, moving from human-human teaming to HAT is challenging. Teamwork requires skills that are often missing in robots and synthetic agents. It is possible that adding a synthetic agent as a team member may lead teams to demonstrate different coordination patterns, resulting in differences in team cognition and, ultimately, team effectiveness. The theory of Interactive Team Cognition (ITC) emphasizes the importance of team interaction behaviors over the collection of individual knowledge. In this dissertation, Nonlinear Dynamical Methods (NDMs) were applied to capture characteristics of overall team coordination and communication behaviors. The findings supported the hypothesis that coordination stability is related to team performance in a nonlinear manner, with optimal performance associated with moderate stability coupled with flexibility. Thus, HATs need mechanisms for demonstrating moderately stable yet flexible coordination behavior to achieve team-level goals under both routine and novel task conditions. / Doctoral Dissertation, Engineering, 2017
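One simple nonlinear dynamical measure in this spirit is the recurrence rate of a team's communication time series, where very high recurrence suggests rigid (overly stable) coordination and very low recurrence suggests erratic coordination. A minimal sketch follows, with invented data; the dissertation's actual NDMs may differ.

```python
import numpy as np

def recurrence_rate(series, radius=0.1):
    """Fraction of time-point pairs whose (rescaled) values fall within
    `radius` of each other -- a simple proxy for coordination stability."""
    x = np.asarray(series, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())  # rescale to [0, 1]
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distance matrix
    return float(np.mean(dist < radius))

rng = np.random.default_rng(0)
rigid = np.sin(np.linspace(0, 8 * np.pi, 200))  # strictly periodic signal
flexible = rigid + rng.normal(0, 0.5, 200)      # same rhythm plus variation

print(recurrence_rate(rigid))     # higher: highly stable, repetitive coordination
print(recurrence_rate(flexible))  # lower: looser, more flexible coordination
```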
4. Creating a Well-Situated Human-Autonomy Team: The Effects of Team Structure
Frost, Elizabeth Marie, January 2019
No description available.