
Crossmodal displays: coordinated crossmodal cues for information provision in public spaces

This thesis explores the design of crossmodal displays, a new kind of display-based interface that aims to prevent information overload and to support information presentation for multiple simultaneous users who share a physical space or situated interface but have different information needs and privacy concerns. By exploiting human multimodal perception and the synergy between existing public displays and personal displays, crossmodal displays avoid numerous drawbacks of previous approaches, including reliance on tracking technologies, weak protection of users' privacy, small user capacity and high cognitive load. The review of human multimodal perception in this thesis, especially multimodal integration and crossmodal interaction, yields many implications for the design of crossmodal displays and constitutes the foundation for our proposed conceptual model. Two types of crossmodal display prototype applications were developed: CROSSFLOW for indoor navigation and CROSSBOARD for information retrieval on high-density information displays; both use coordinated crossmodal cues to guide the attention of multiple simultaneous users, in a timely manner, to the publicly visible information relevant to each user. Most of the results of the single-user and multi-user lab studies on the prototype systems developed in this research demonstrate the effectiveness and efficiency of crossmodal displays and validate several significant advantages over previous solutions. However, the results also reveal that the detailed usability and user experience of crossmodal displays, as well as human perception of crossmodal cues, require further investigation and improvement. This thesis is the first exploration of the design of crossmodal displays.
A set of design suggestions and a lifecycle model of crossmodal display development have been produced; these can be used by designers and researchers who wish to develop crossmodal displays for their own applications or to integrate crossmodal cues into their interfaces.
Date January 2013
Creators Cao, Han
Publisher University of Newcastle Upon Tyne
Source Sets Ethos UK
Detected Language English
Type Electronic Thesis or Dissertation