With the rapid growth of online devices, the Internet of Things (IoT) is emerging, in which everyday devices are connected to the Internet. As the number of devices in the IoT increases, so does the complexity of the interactions between users and devices, creating a need for intelligent user interfaces that assist users in these interactions. Many studies have investigated interaction techniques such as proxemic and gesture interaction to provide intuitive, intelligent control of multimedia devices over the IoT, though most could not offer a universal solution. The present study proposes a proximity-based, gesture-enabled user interface for multimedia devices over the IoT. The proposed method employs a cloud-based decision engine that helps the user choose and interact with the most appropriate device, relieving the user of the burden of enumerating available devices manually. The decision engine observes the multimedia content and device properties, adaptively learns user preferences, and automatically recommends the most appropriate device to interact with. In addition, the proposed system uses proximity information to identify the user among surrounding people and provides him or her with gesture-control services. Furthermore, a new hand-gesture vocabulary for controlling multimedia devices is proposed through a multiphase elicitation study. The main advantage of this vocabulary is that it can be used with all multimedia devices. Both the device recommendation system and the gesture vocabulary are evaluated. The evaluation of the device recommendation system shows that users agree with the proposed interaction 70% of the time. Moreover, the average agreement score of the proposed gesture vocabulary (0.56) exceeds the scores reported in similar studies. An external user evaluation shows an average good-match score of 4.08 out of 5 and an average ease-of-performance score of 4.21 out of 5.
The memory test reveals that the proposed vocabulary is easy to remember: participants could recall and perform gestures in 3.13 seconds on average, and the average accuracy of remembering gestures is 91.54%.
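The abstract reports an average agreement score of 0.56 for the elicited gesture vocabulary. The abstract does not state which formula the thesis uses, but agreement in gesture elicitation studies is commonly computed with the agreement score of Wobbrock et al.: for each referent (command), participants' proposals are grouped by identical gestures, the squared fractions of each group's size are summed, and the per-referent scores are averaged. A minimal sketch, assuming that metric and using hypothetical data:

```python
from collections import Counter

def agreement_score(proposals_by_referent):
    """Average agreement score over referents (Wobbrock et al. style metric).

    proposals_by_referent: dict mapping each referent (command) to the list
    of gesture labels proposed by participants for it.
    """
    per_referent = []
    for proposals in proposals_by_referent.values():
        total = len(proposals)
        groups = Counter(proposals)  # identical proposals form a group
        # Sum of squared group-size fractions for this referent
        per_referent.append(sum((n / total) ** 2 for n in groups.values()))
    return sum(per_referent) / len(per_referent)

# Hypothetical data: 4 participants proposing gestures for 2 referents
data = {
    "play": ["tap", "tap", "tap", "swipe"],      # (3/4)^2 + (1/4)^2 = 0.625
    "mute": ["cover", "cover", "pinch", "tap"],  # (2/4)^2 + 2*(1/4)^2 = 0.375
}
print(agreement_score(data))  # 0.5
```

A score of 1.0 would mean every participant proposed the same gesture for every referent, so the reported 0.56 indicates relatively strong consensus across participants.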
Identifier | oai:union.ndltd.org:uottawa.ca/oai:ruor.uottawa.ca:10393/31865 |
Date | January 2014 |
Creators | Ahmadi Danesh Ashtiani, Ali |
Contributors | El Saddik, Abdulmotaleb |
Publisher | Université d'Ottawa / University of Ottawa |
Source Sets | Université d’Ottawa |
Language | English |
Detected Language | English |
Type | Thesis |