As people respond strongly to faces and facial features, both consciously and subconsciously, faces are an essential aspect of social robots. Until recently, robotic faces and heads belonged to one of three categories: virtual, mechatronic or animatronic. As an original contribution to the field of human-robot interaction, I present the R-PAF (Retro-Projected Animated Faces) technology: a novel robotic head displaying a real-time, computer-rendered face, retro-projected from within the head volume onto a mask, together with its driving software, designed with openness and portability to other hybrid robotic platforms in mind. The work constitutes the first implementation of a non-planar mask suitable for social human-robot interaction, comprising key elements of social interaction such as precise gaze-direction control, facial expressions and blushing, and the first demonstration of an interactive, video-animated facial mask mounted on a 5-axis robotic arm. The LightHead robot, an R-PAF demonstrator and experimental platform, has demonstrated robustness in both extended controlled and uncontrolled settings. The iterative hardware and facial design, the details of the three-layered software architecture and tools, the implementation of life-like facial behaviours, and improvements in social-emotional robotic communication are reported. Furthermore, a series of evaluations presents the first study on human performance in reading robotic gaze and another first on users' ethnic preferences towards a robot face.
Identifier | oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:694415 |
Date | January 2016 |
Creators | Delaunay, Frédéric C. |
Publisher | University of Plymouth |
Source Sets | Ethos UK |
Detected Language | English |
Type | Electronic Thesis or Dissertation |
Source | http://hdl.handle.net/10026.1/4871 |