abstract: A critical challenge in the design of AI systems that operate with humans in the loop is to model the intentions and capabilities of the humans, as well as their beliefs about and expectations of the AI system itself. This allows the AI system to be "human-aware" -- i.e., the human task model enables it to envisage desired roles of the human in joint action, while the human mental model allows it to anticipate how its own actions are perceived from the point of view of the human. In my research, I explore how these concepts of human-awareness manifest themselves in the scope of planning or sequential decision making with humans in the loop. To this end, I will show (1) how the AI agent can leverage the human task model to generate symbiotic behavior; and (2) how the introduction of the human mental model into the deliberative process of the AI agent allows it to generate explanations for a plan or to resort to explicable plans when explanations are not desired. The latter goes beyond traditional notions of human-aware planning, which typically use the human task model alone, and thus enables a new suite of capabilities for a human-aware AI agent. Finally, I will explore how the AI agent can leverage emerging mixed-reality interfaces to realize effective channels of communication with the human in the loop.
Dissertation/Thesis | Doctoral Dissertation, Computer Science, 2018
Identifier | oai:union.ndltd.org:asu.edu/item:51791
Date | January 2018
Contributors | Chakraborti, Tathagata (Author), Kambhampati, Subbarao (Advisor), Talamadupula, Kartik (Committee member), Scheutz, Matthias (Committee member), Ben Amor, Hani (Committee member), Zhang, Yu (Committee member), Arizona State University (Publisher) |
Source Sets | Arizona State University |
Language | English |
Detected Language | English |
Type | Doctoral Dissertation |
Format | 562 pages |
Rights | http://rightsstatements.org/vocab/InC/1.0/ |