In strategically rich settings in which machines and people do not fully share the same preferences, machines must learn to cooperate and compromise with people to establish mutually successful relationships. However, designing machines that cooperate effectively with people in these settings is difficult due to a variety of technical and psychological challenges. To better understand these challenges, we conducted a series of user studies investigating human-human, robot-robot, and human-robot cooperation in a simple yet strategically rich resource-sharing scenario called the Block Dilemma, a game in which players must balance fairness, efficiency, and risk. While both human-human and robot-robot pairs typically learned fair and cooperative solutions over time, our results show that these solutions tended to differ depending on whether communication was permitted. The solutions also differed between pair types: people followed a less risky but less efficient solution, whereas pairs of robots followed a riskier but more efficient solution. This difference between human and machine behavior appears to negatively influence human-robot cooperation, as our studies show that human-robot pairs rarely produced either form of cooperation without communication. These results speak to the need for machine behavior to be better aligned with human behavior. While machines may behave more efficiently and produce better results than people when following their own calculations, they may often better facilitate human-machine cooperation by aligning their behavior with human behavior rather than expecting human behavior to become more efficient.
Identifier | oai:union.ndltd.org:BGMYU2/oai:scholarsarchive.byu.edu:etd-10386 |
Date | 21 March 2022 |
Creators | Whiting, Tim |
Publisher | BYU ScholarsArchive |
Source Sets | Brigham Young University |
Detected Language | English |
Type | text |
Format | application/pdf |
Source | Theses and Dissertations |
Rights | https://lib.byu.edu/about/copyright/ |