Robots can use a collection of auditory, visual, or haptic interfaces to convey information to human collaborators. The way these interfaces select signals typically depends on the task that the human is trying to complete: for instance, a haptic wristband may vibrate when the human is moving quickly and stop when the user is stationary.
But people interpret the same signals in different ways, so what one user finds intuitive another user may not understand. In the absence of task knowledge, conveying signals is even more difficult: without knowing what the human wants to do, how should the robot select signals that help them accomplish their task? When paired with the seemingly infinite ways that humans can interpret signals, designing a single interface that is optimal for all users seems impossible.
This thesis presents an information-theoretic approach to communication in task-agnostic settings: a unified algorithmic formalism for learning co-adaptive interfaces from scratch without task knowledge. The resulting approach is user-specific and not tied to any interface modality.
This method is further improved by introducing priors on communication. Although we cannot anticipate how a given human will interpret signals, we can anticipate interface properties that humans tend to prefer. By integrating these functional priors into the aforementioned learning scheme, we achieve performance that exceeds baselines with access to task knowledge.
The results presented here indicate that users subjectively prefer interfaces generated by the presented learning scheme, which also enables better performance and more efficient interactions.

Master of Science

This thesis presents a novel interface for robot-to-human communication that personalizes to the current user without task knowledge or an interpretative model of the human. Suppose that you are trying to find the location of buried treasure in a sandbox. You don't know the location of the treasure, but a robotic assistant does. Unfortunately, the only way the assistant can communicate the position of the treasure to you is through two LEDs of varying intensity --- and neither you nor the robot has a mutually understood interpretation of those signals. Without knowing the robot's convention for communication, how should you interpret the robot's signals? There are infinitely many viable interpretations: perhaps a brighter signal means that the treasure is towards the center of the sandbox -- or something else entirely.
The robot has a similar problem: how should it interpret your behavior? Without knowing what you want to do with the hidden information (i.e., your task) or how you behave (i.e., your interpretative model), there are infinitely many task-and-model pairs that fit your behavior.
This work presents an interface optimizer that maximizes the correlation between the human's behavior and the hidden information. Testing with real humans indicates that this learning scheme can produce useful communicative mappings --- without knowing the users' tasks or their interpretative models.
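The core idea above --- preferring signal mappings whose outputs correlate strongly with the hidden information --- can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the candidate mappings, variable names, and scoring function below are illustrative assumptions, using plain Pearson correlation as a stand-in for the information-theoretic objective.

```python
import numpy as np

# Hypothetical sketch (not the thesis's code): among candidate mappings
# from hidden state to signal, prefer the one whose emitted signals
# correlate most strongly with the hidden information, so that a human
# observer could in principle decode the hidden state from the signal.

rng = np.random.default_rng(0)
hidden = rng.uniform(0, 1, size=200)  # e.g., treasure position in the sandbox

# Two assumed candidate mappings from hidden state to LED intensity:
candidates = {
    "monotone":  lambda x: x,                          # brighter = farther along
    "unrelated": lambda x: rng.uniform(0, 1, len(x)),  # ignores the hidden state
}

def score(signal, hidden):
    """Absolute Pearson correlation between emitted signal and hidden state."""
    return abs(np.corrcoef(signal, hidden)[0, 1])

best = max(candidates, key=lambda name: score(candidates[name](hidden), hidden))
print(best)  # the informative mapping scores highest
```

In the thesis's task-agnostic setting the robot cannot score mappings against a known task; the correlation objective plays that role, rewarding interfaces whose signals carry information about the hidden state regardless of how the human ultimately uses it.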
Furthermore, we recognize that humans share common biases in their interpretation of the world (leading to biases in their interpretations of robot communication). Although we cannot assume how a specific user will interpret an interface's signals, we can anticipate interface designs that most humans find intuitive. We leverage these biases to further improve the aforementioned learning scheme across several user studies. As such, the findings presented in this thesis have a direct impact on human-robot co-adaptation in task-agnostic settings.
Identifier | oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/118917 |
Date | 07 May 2024 |
Creators | Christie, Benjamin Alexander |
Contributors | Mechanical Engineering, Losey, Dylan Patrick, Akbari Hamed, Kaveh, Acar, Pinar |
Publisher | Virginia Tech |
Source Sets | Virginia Tech Theses and Dissertations |
Language | English |
Detected Language | English |
Type | Thesis |
Format | ETD, application/pdf |
Rights | In Copyright, http://rightsstatements.org/vocab/InC/1.0/ |