
Android-based customizable media crowdsourcing toolkit for machine vision research

Smart devices have become more complex and powerful, increasing in computational power, storage capacity, and battery longevity. Currently available online facial recognition databases do not offer training datasets with enough contextually descriptive metadata for novel scenarios, such as using machine vision to detect whether people in a video like each other based on their facial expressions. The aim of this research is to design and implement a software tool that enables researchers to collect videos from a large pool of people through crowdsourcing for machine vision analysis. We are particularly interested in tagging the videos with the demographic data of study participants as well as data from a custom post hoc survey. This study has demonstrated that smart devices and their embedded technologies can be used to collect videos as well as self-evaluated metadata through crowdsourcing. The application uses sensors embedded within smart devices, such as the camera and GPS, to collect videos, survey data, and geographical data. User engagement is encouraged through periodic push notifications. The videos and metadata collected with the application will be used in future machine vision analyses of various phenomena, such as investigating whether machine vision can detect people's fondness for each other based on their facial expressions and self-evaluated post-task survey data.

Identifier oai:union.ndltd.org:oulo.fi/oai:oulu.fi:nbnfioulu-201812063247
Date 10 December 2018
Creators Alorwu, A. (Andy)
Publisher University of Oulu
Source Sets University of Oulu
Language English
Detected Language English
Type info:eu-repo/semantics/masterThesis, info:eu-repo/semantics/publishedVersion
Format application/pdf
Rights info:eu-repo/semantics/openAccess, © Andy Alorwu, 2018
