Transparency in HRI refers to making the current state of a robot or intelligent agent understandable to a human user. Applying transparency mechanisms to robots improves the quality of interaction as well as the user experience. Explanations are an effective way to make a robot's decision making transparent. We introduce a framework that uses natural language labels attached to regions in the continuous state space of the robot to automatically generate local explanations of a robot's policy. We conducted a pilot study and investigated how the generated explanations helped users to understand and reproduce a robot policy in a debugging scenario.
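To make the labelling idea concrete, the following minimal Python sketch illustrates one way such a mechanism could look: regions of the continuous state space carry natural language labels, and a local explanation for an action is assembled from the labels of the regions that contain the current state. All names and structures here are illustrative assumptions, not the framework defined in the thesis.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class LabeledRegion:
    """A hypothetical axis-aligned region of the state space with a language label."""
    label: str                          # e.g. "close to an obstacle"
    bounds: List[Tuple[float, float]]   # (min, max) per state dimension

    def contains(self, state: List[float]) -> bool:
        return all(lo <= s <= hi for s, (lo, hi) in zip(state, self.bounds))


def explain_action(state: List[float], action: str,
                   regions: List[LabeledRegion]) -> str:
    """Build a local explanation from the labels of regions covering the state."""
    matched = [r.label for r in regions if r.contains(state)]
    if not matched:
        return f"I chose '{action}', but no labeled region covers the current state."
    return f"I chose '{action}' because the robot is {' and '.join(matched)}."


# Example with a 2D state: (distance to goal, distance to nearest obstacle)
regions = [
    LabeledRegion("far from the goal", [(5.0, 100.0), (0.0, 100.0)]),
    LabeledRegion("close to an obstacle", [(0.0, 100.0), (0.0, 1.0)]),
]
print(explain_action([12.0, 0.4], "slow down", regions))
# -> I chose 'slow down' because the robot is far from the goal and close to an obstacle.
```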
Identifier | oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:ltu-71154 |
Date | January 2018 |
Creators | Struckmeier, Oliver |
Publisher | Luleå tekniska universitet, Rymdteknik, Aalto University |
Source Sets | DiVA Archive at Uppsala University |
Language | English |
Detected Language | English |
Type | Student thesis, info:eu-repo/semantics/bachelorThesis, text |
Format | application/pdf |
Rights | info:eu-repo/semantics/openAccess |