1 |
Supporting operator reliance on automation through continuous feedback. Seppelt, Bobbie Danielle, 01 December 2009.
In driving, multiple variables in automated systems such as adaptive cruise control (ACC) and active steering, and in the environment, dynamically change and interact. This complexity makes it difficult for operators to track the activities and responses of automation. The inability of operators to monitor and understand automation's behavior contributes to inappropriate reliance, i.e., an operator using automation that performs poorly or failing to use automation that is superior to manual control. The decision to use or not use automation is one of the most important decisions an operator can make, particularly in time-critical or emergency situations; it is therefore essential that operators are well calibrated in their automation use. An operator's decision to rely on automation depends on trust. System feedback provided to the operator is one means to calibrate trust in automation, in that the type of feedback may differentially affect trust. The goal of this research is to help operators manage imperfect automation in real time and to promote calibrated trust and reliance. A continuous information display that conveys system behavior relative to its operating context is one means to promote such calibration. Three specific aims are pursued to test the central hypothesis of this dissertation: that continuous feedback on the state and behavior of the automation informs operators of the evolving relationship between system performance and operating limits, thereby promoting accurate mental models and calibrated trust. The first aim applies a quantitative model to define the effect of understanding on driver-ACC interaction failures and to predict driver response to feedback. The second aim presents a systematic approach to defining the feedback needed to support appropriate reliance in a demanding multi-task domain such as driving. The third aim assesses the costs and benefits of presenting drivers with continuous visual and auditory feedback.
Together, these aims indicate that continuous feedback on automation's behavior is a viable means to promote calibrated trust and reliance. The contribution of this dissertation is in providing purpose, process, and performance information to operators through a continuous, concurrent information display that indicates how the given situation interacts with the characteristics of the automation to affect its capability.
|
2 |
Communication-aware planning aid for single-operator multi-UAV teams in urban environments. Christmann, Hans Claus, 21 September 2015.
Now that autonomous flight has been achieved for small unmanned aircraft, ongoing research is expanding the capabilities of systems that employ such vehicles for various tasks. This shifts the research focus from the individual systems to the task-execution benefits that result from the interaction and collaboration of several aircraft.
Given that some available high-fidelity simulations do not yet support multi-vehicle scenarios, the presented work introduces a framework that allows several individual single-vehicle simulations to be combined into a larger multi-vehicle scenario, placing little to no special requirements on the single-vehicle systems. The resulting multi-vehicle system offers real-time software-in-the-loop simulation of swarms of vehicles across multiple hosts and enables a single operator to command and control a swarm of unmanned aircraft beyond line-of-sight in geometrically correct, two-dimensional cluttered environments through a multi-hop network of data-relaying intermediaries.
This dissertation presents the main aspects of the developed system: the
underlying software framework and application programming interface, the
utilized inter- and intra-system communication architecture, the graphical user
interface, and implemented algorithms and operator aid heuristics to support the
management and placement of the vehicles. The effectiveness of the aid
heuristics is validated through a human subject study which showed that the
provided operator support systems significantly improve the operators'
performance in a simulated first responder scenario.
The presented software is released under the Apache License 2.0 and, where non-open-source parts are used, software packages with free academic licenses have been chosen, resulting in a framework that is completely free for academic research.
|
3 |
The manipulation of user expectancies: effects on reliance, compliance, and trust using an automated system. Mayer, Andrew K., 31 March 2008.
As automated technologies continue to advance, they will be perceived more as collaborative team members and less as simply helpful machines. Expectations of the likely performance of others play an important role in how their actual performance is judged (Stephan, 1985). Although user expectations have been described as important for human-automation interaction, this factor has not been systematically investigated. The purpose of the current study was to examine the effect that older and younger adults' expectations of likely automation performance have on human-automation interaction. In addition, this study investigated the effect of different automation errors (false alarms and misses) on dependence, reliance, compliance, and trust in an automated system. Findings suggest that expectancy effects are relatively short-lived, significantly affecting reliance and compliance only through the first experimental block. The effects of error type indicate that participants in a false-alarm condition increase reliance and decrease compliance, while participants in a miss condition do not change their behavior. These results are important because expectancies must be considered when designing training for human-automation interaction. In addition, understanding the effects of automation error type is crucial for the design of automated systems. For example, if the automation is designed for diverse and dynamic environments where automation performance may fluctuate, then users may need a deeper understanding of how the automation functions.
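The reliance/compliance distinction in this abstract can be made concrete with a simple scoring scheme. The sketch below is an illustrative assumption, not the study's actual analysis: compliance is scored as the rate at which the operator acts in agreement with the aid when it raises an alert, and reliance as the rate of agreement when the aid stays silent.

```python
# Hedged sketch of one common way to score reliance and compliance from
# trial data; the function name and data layout are illustrative assumptions.
def reliance_compliance(trials):
    """trials: list of (aid_alerted, operator_agreed) booleans per trial.

    Returns (reliance, compliance):
      compliance = agreement rate on trials where the aid alerted,
      reliance   = agreement rate on trials where the aid stayed silent.
    Either value is None if no trials of that kind occurred.
    """
    alerted = [agreed for alert, agreed in trials if alert]
    silent = [agreed for alert, agreed in trials if not alert]
    compliance = sum(alerted) / len(alerted) if alerted else None
    reliance = sum(silent) / len(silent) if silent else None
    return reliance, compliance
```

Under this scoring, the reported false-alarm result (reliance up, compliance down) would appear as a rising silent-trial agreement rate and a falling alert-trial agreement rate across blocks.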
|
4 |
Understanding human decision making with automation using Systems Factorial Technology. Kneeland, Cara M., 20 August 2021.
No description available.
|
5 |
Calibrating Driver Trust: How trust factors influence drivers' trust in Driver Assistance Systems in trucks. Chikumbi Zulu, Naomi, January 2023.
Vehicle automation has garnered increasing attention as a means of improving safety and efficiency, and Advanced Driver Assistance Systems (ADAS) have gained popularity in the transport industry. However, establishing an appropriate level of trust in these systems is crucial for their successful implementation. This research explores the factors influencing driver trust calibration at different levels of automation in driver assistance systems for commercial mobility trucks, so that drivers comprehend the limitations of these systems and road safety is upheld. A qualitative approach, involving eleven interviews and observations with drivers, explored their perceptions, experiences, and expectations regarding these systems. The study's findings extend the Hoff and Bashir trust model to include significant social factors in calibrating trust, and offer insights into the various trust factors that impact driver trust calibration at different levels of automation. These insights contribute to academia by helping to explain how trust in automation is formed and calibrated in real-world settings; in the automotive industry, they can guide the design and implementation of these systems to enhance future drivers' safety and overall experience.
|
6 |
Developing a model of driver performance, situation awareness, and cognitive load considering different levels of partial vehicle autonomy. Cossitt, Jessie E., 13 May 2022.
To fully utilize the abilities of current autonomous vehicles, it is necessary to understand the interactions between vehicles and their operators. Since the current state of the art of autonomous vehicles is partial autonomy that requires operators to perform parts of the driving task and be alert and ready to take over full control of the vehicle, it is necessary to know how operators' abilities are impacted by the amount of autonomy present in the system. Autonomous systems have known effects on performance, cognitive load, and situation awareness, but little is known about how these effects change in relation to distinct, increasing autonomy levels. It is also necessary to consider these abilities with the addition of secondary tasks due to the appeal of using autonomous systems for multitasking.
The goal of this research is to use a web-based virtual reality study to model operator situation awareness, cognitive load, driving performance, and secondary task performance as a function of five distinct, increasing levels of partial vehicle autonomy first with a constant, low rate of secondary tasks and then with an increasing rate of secondary tasks. The study had each participant operate a virtual military vehicle in one of five possible autonomy conditions while responding to questions on a communications terminal. After a practice phase for familiarization, participants took part in two drives where they would have to intervene to prevent crashes regardless of autonomy level. The first drive had a slow, steady rate of communication questions, and the second increased the rate of questions to an unmanageable point before slowing down again.
For both phases, the factors of scored driving performance, secondary task performance (accuracy and latency), subjective situation awareness from the Situation Awareness Rating Technique (SART), objective situation awareness from real-time probes, and cognitive load from the NASA Task Load Index (NASA-TLX) and the SOS Scale were analyzed in terms of how they related to the autonomy level and to each other.
Results are presented in the form of statistical analysis and modeled equations and show the potential for optimal multitasking within specific autonomy levels and task allocation requirements.
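The NASA-TLX measure cited above is conventionally computed as a weighted average of six subscale ratings, with weights derived from 15 pairwise comparisons between subscales. The sketch below shows that standard computation; the function name and data layout are illustrative assumptions, and this is not the dissertation's own analysis code.

```python
# Hedged sketch of the standard weighted NASA-TLX workload score:
# six subscales rated 0-100, weighted by how often each subscale was
# chosen across the 15 pairwise comparisons (weights sum to 15).
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def nasa_tlx_weighted(ratings, pair_wins):
    """ratings: subscale -> rating in 0..100.
    pair_wins: subscale -> number of pairwise comparisons won (sums to 15)."""
    assert sum(pair_wins.values()) == 15, "weights must come from 15 comparisons"
    return sum(ratings[s] * pair_wins[s] for s in SUBSCALES) / 15.0
```

For example, a participant who rated mental demand highest and also weighted it most heavily would receive an overall score pulled toward that subscale's rating.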
|
7 |
Using dynamic task allocation to evaluate driving performance, situation awareness, and cognitive load at different levels of partial autonomy. Patel, Viraj R., 08 August 2023.
The state of the art of autonomous vehicles requires operators to remain vigilant while performing secondary tasks. The goal of this research was to investigate how dynamically allocated secondary tasks affected driving performance, cognitive load, and situation awareness. Secondary tasks were presented at rates based on the autonomy level present and on whether the autonomous system was engaged; a rapid secondary-task rate was also presented for two short periods regardless of whether autonomy was engaged. A three-minute familiarization phase was followed by a data collection phase in which participants responded to secondary tasks while preventing the vehicle from colliding with random obstacles. After data collection, a brief survey gathered data on cognitive load, situation awareness, and relevant demographics. These data were compared to data gathered in a similar study by Cossitt [10], in which secondary tasks were presented at a controlled frequency and at a gradually increasing frequency.
|
8 |
Determining System Requirements for Human-Machine Integration in Cyber Security Incident Response. Nyre-Yu, Megan M., 30 October 2019.
In 2019, cyber security is considered one of the most significant threats to the global economy and national security. Top U.S. agencies have acknowledged this fact and provided direction regarding strategic priorities and future initiatives within the domain. However, there is still a lack of basic understanding of the factors that impact the complexity, scope, and effectiveness of cyber defense efforts. Computer security incident response is the short-term process of detecting, identifying, mitigating, and resolving a potential security threat to a network. These activities are typically conducted in computer security incident response teams (CSIRTs) composed of human analysts who are organized into hierarchical tiers and work closely with many different computational tools and programs. Despite the fact that CSIRTs often provide the first line of defense for a network, there is currently a substantial global skills shortage of analysts to fill open positions. Research and development efforts from educational and technological perspectives have independently been ineffective at addressing this shortage due to time lags in meeting demand and associated costs. This dissertation explored how to combine the two approaches by considering how human-centered research can inform the development of computational solutions that augment human analyst capabilities. The larger goal of combining these approaches is to complement human expertise with technological capability and thereby alleviate pressures from the skills shortage.

Insights and design recommendations for hybrid systems to advance the current state of security automation were developed through three studies. The first study was an ethnographic field study that collected and analyzed contextual data from three diverse CSIRTs in different sectors; its scope extended beyond individual incident response tasks to include aspects of organization and information sharing within teams. Analysis revealed larger design implications regarding collaboration and coordination in different team environments, as well as considerations about the usefulness and adoption of automation. The second study was a cognitive task analysis with CSIR experts from diverse backgrounds; the interviews focused on expertise requirements for information sharing tasks in CSIRTs. Its outputs used a dimensional expertise construct to identify and prioritize potential expertise areas for augmentation with automated tools and features. The third study included a market analysis of current automation platforms based on the expertise areas identified in the second study, and used systems engineering methodologies to develop concepts and functional architectures for future system (and feature) development.

The findings of all three studies support future directions for hybrid automation development in CSIR by identifying social and organizational factors, beyond traditional security tool design, that support human-systems integration. Additionally, this dissertation delivered functional considerations for automated technology that can augment human capabilities in incident response; these functions support better information sharing between humans, and between humans and technological systems. By pursuing human-systems integration in CSIR, research can help alleviate the skills shortage by identifying where automation can dynamically assist with information sharing and expertise development. Future research can expand upon the expertise framework developed for CSIR and extend the application of the proposed augmenting functions to other domains.
|
9 |
Model-based metrics of human-automation function allocation in complex work environments. Kim, So Young, 08 July 2011.
Function allocation is the design decision that assigns work functions to all agents in a team, both human and automated. Efforts to guide function allocation systematically have been made in many fields, including engineering, human factors, team and organization design, management science, and cognitive systems engineering. Each field focuses on certain aspects of function allocation, but not all; thus, an independent discussion of each does not address every issue with function allocation. Four distinct perspectives emerged from a review of these fields: technology-centered, human-centered, team-oriented, and work-oriented. Each perspective focuses on different aspects of function allocation: the capabilities and characteristics of agents (automation or human), team structure and processes, and work structure and the work environment.
Together, these perspectives identify the following eight issues with function allocation:
1)Workload,
2)Incoherency in function allocations,
3)Mismatches between responsibility and authority,
4)Interruptive automation,
5)Automation boundary conditions,
6)Function allocation preventing human adaptation to context,
7)Function allocation destabilizing the humans' work environment, and
8)Mission Performance.
Addressing these issues systematically requires formal models and simulations that include all necessary aspects of human-automation function allocation: the work environment, the dynamics inherent to the work, the agents, and the relationships among them. Addressing them also requires not only a (static) model but a (dynamic) simulation that captures temporal aspects of work, such as the timing of actions and their impact on an agent's work. This thesis therefore develops a modeling framework that combines static work models with dynamic simulation; with the work properly modeled in terms of the work environment, its inherent dynamics, the agents, and the relationships among them, the framework can capture the issues with function allocation.
Then, based on the eight issues, eight types of metrics are established. The purpose of these metrics is to assess the extent to which each issue exists with a given function allocation. Specifically, the eight types of metrics assess workload, coherency of a function allocation, mismatches between responsibility and authority, interruptive automation, automation boundary conditions, human adaptation to context, stability of the human's work environment, and mission performance.
Finally, to validate the modeling framework and the metrics, a case study was conducted that modeled four different function allocations between a pilot and flight deck automation during the arrival and approach phases of flight. A range of pilot cognitive control modes and maximum human taskload limits were also included in the model. The metrics were assessed for these four function allocations and analyzed to validate their ability to identify important issues in given function allocations. In addition, the design insights provided by the metrics are highlighted.
This thesis concludes with a discussion of mechanisms for further validating the modeling framework and function allocation metrics developed here, and highlights where these developments can be applied in research and in the design of function allocations in complex work environments such as aviation operations.
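As a toy illustration of the kind of workload metric such a simulation framework might compute over a timeline of agent actions (an assumption for illustration, far simpler than the dissertation's actual metrics), a utilization-style measure divides an agent's busy time by the length of the simulated window:

```python
# Hedged sketch: utilization-style workload over a simulated timeline.
# busy_intervals are (start, end) periods during which the agent is occupied;
# intervals are clipped to the analysis window [t_start, t_end].
def utilization(busy_intervals, t_start, t_end):
    busy = sum(min(e, t_end) - max(s, t_start)
               for s, e in busy_intervals
               if e > t_start and s < t_end)
    return busy / (t_end - t_start)
```

A dynamic simulation would feed such a metric with the timing of each agent's actions, so that workload peaks (e.g., during the approach phase) become visible for a given function allocation.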
|
10 |
The development of the human-automation behavioral interaction task (HABIT) analysis framework. Baird, Isabelle Catherine, 07 June 2019.
No description available.
|