
Mental models of hazards and the issue of trust in automation

Road hazards threaten drivers’ safety. Drivers must perceive road hazards in order to reduce or eliminate risks (Shinar, 2007a). Hazard perception is a cognitive ability that can be improved through practice (Horswill, Hill, & Wetton, 2015), and failures in hazard perception can lead to crashes (Spainhour, Brill, Sobanjo, Wekezer, & Mtenga, 2005). Hence, understanding the cognitive process of hazard perception is important both for improving drivers’ reactions and for creating human-like autonomous systems.
While there is no agreement among researchers about the concept of hazard perception (Baars, 1993; Endsley, 1995b; Flach, 1994), most accepted views describe hazard perception as a process (Banbury, 2017). One of the predominant cognitive theories that describes hazard perception as a process is Neisser’s action–perception model (Neisser, 1987). Neisser’s model relates hazard perception to a mental map of the world and accounts for the limitations of drivers’ memory. The action–perception model, alongside other dominant hazard perception theories, can only be valid if drivers hold mental models of hazards (Endsley, 2000). However, mental models that could facilitate the hazard perception process are not captured by dominant theories that describe mental models as cognitive structural and functional models (Preece et al., 1994). In fact, a mental model usable for hazard perception and risk anticipation should explain how a hazard can cause a traffic system to fail and produce a crash. Thus, the first experiment explores the existence of such a mental model using Schema World Action Research Method (SWARM) cognitive probes. The results confirm the existence of subjective mental models of hazards.
Mental models are subjective, so drivers can prefer different responses to the same hazard. Automation of driving requires drivers to monitor autonomous vehicles’ (AVs’) behavior and take over control when needed (Banks & Stanton, 2017). To monitor AVs successfully, drivers should build a level of trust in the systems that matches the systems’ reliability (Lee & See, 2004). AVs must produce acceptable results to be considered reliable, and drivers must develop accurate mental models of AVs’ actions and limitations (Walliser, 2011). Drivers will evaluate a system’s actions against their subjective mental models, including their mental models of hazards. However, AVs reflect their designers’ models and have limitations in replicating human-like reactions to hazards (Norman, 2016). The second experiment investigates how discrepancies between AV design models and drivers’ mental models, including mental models of hazards, can influence drivers’ trust in automation. In this part, a naturalistic method is used with a Tesla Model S: participants are interviewed after five days of driving with its advanced automated systems. Results show that drivers use their mental models of hazards to predict hazardous scenarios and take over control before hazards materialize. Additionally, the findings reveal how the complexity of the system can produce function confusion, mode confusion, and misinterpretation of AV capabilities, which can in turn lead to abuse of automated systems. The results indicate a need for adequate driver training on autonomous and advanced systems.

Identifier: oai:union.ndltd.org:uiowa.edu/oai:ir.uiowa.edu:etd-8382
Date: 01 May 2019
Creators: Salehi, Hugh Pierre
Contributors: Pennathur, Priyadarshini R.
Publisher: University of Iowa
Language: English
Type: dissertation
Format: application/pdf
Source: Theses and Dissertations
Rights: Copyright © 2019 Hugh Pierre Salehi
