
Automatic Eye-Gaze Following from 2-D Static Images: Application to Classroom Observation Video Analysis

In this work, we develop an end-to-end neural network-based computer vision system to automatically identify where each person within a 2-D image of a school classroom is looking (“gaze following”), as well as whom she/he is looking at. Automatic gaze following could help facilitate data-mining of large datasets of classroom observation videos, which are collected routinely in schools around the world, in order to understand social interactions between teachers and students. Our network is based on the architecture by Recasens et al. (2015) but is extended to (1) predict not only where, but also whom, the person is looking at; and (2) predict whether each person is looking at a target inside or outside the image. Since our focus is on classroom observation videos, we collect a gaze dataset (48,907 gaze annotations over 2,263 classroom images) of students and teachers in classrooms. Results of our experiments indicate that the proposed neural network can estimate the gaze target, either the spatial location or the face of a person, with substantially higher accuracy than several baselines.
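To make the architectural extension concrete, below is a minimal PyTorch sketch of a two-pathway gaze-following network in the spirit of Recasens et al. (2015), with an added in-frame/out-of-frame classification head corresponding to extension (2). All module names, layer sizes, and input resolutions are illustrative assumptions rather than the thesis's actual architecture, and the "whom" prediction of extension (1), which would match the predicted gaze location against detected faces, is omitted for brevity.

```python
import torch
import torch.nn as nn


class GazeFollowSketch(nn.Module):
    """Illustrative two-pathway gaze-following model (not the thesis architecture).

    - Saliency pathway: encodes the full scene image into a spatial feature grid.
    - Gaze pathway: encodes the person's head crop plus head position.
    - Outputs: a gaze-target heatmap over the grid, and a logit indicating
      whether the gaze target lies inside the image.
    """

    def __init__(self, grid_size: int = 13):
        super().__init__()
        self.grid_size = grid_size
        # Saliency pathway over the whole scene image.
        self.scene_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(grid_size),
        )
        # Gaze pathway over the cropped head of the person of interest.
        self.head_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head features are concatenated with the normalized head position (x, y).
        self.gaze_fc = nn.Linear(64 + 2, grid_size * grid_size)
        # Extension (2): binary head for "is the gaze target inside the image?"
        self.in_out_fc = nn.Linear(64 + 2, 1)

    def forward(self, scene, head_crop, head_pos):
        scene_feat = self.scene_encoder(scene)                     # (B, 64, G, G)
        head_feat = torch.cat([self.head_encoder(head_crop),
                               head_pos], dim=1)                   # (B, 66)
        # Gaze mask over the spatial grid, modulating scene saliency.
        gaze_mask = self.gaze_fc(head_feat).view(
            -1, 1, self.grid_size, self.grid_size)
        heatmap = (scene_feat * torch.sigmoid(gaze_mask)).mean(dim=1)  # (B, G, G)
        in_frame_logit = self.in_out_fc(head_feat).squeeze(1)          # (B,)
        return heatmap, in_frame_logit


if __name__ == "__main__":
    net = GazeFollowSketch()
    scene = torch.randn(2, 3, 224, 224)     # full classroom images
    head = torch.randn(2, 3, 64, 64)        # head crops of the target person
    pos = torch.rand(2, 2)                  # normalized (x, y) head positions
    heatmap, in_frame = net(scene, head, pos)
    print(heatmap.shape, in_frame.shape)    # torch.Size([2, 13, 13]) torch.Size([2])
```

In such a design, the heatmap would be trained against annotated gaze-target locations (and, for extension (1), its peak compared against detected faces to decide whom the person is looking at), while the in-frame logit would be trained with a binary loss against the inside/outside-image annotations.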

Identifier: oai:union.ndltd.org:wpi.edu/oai:digitalcommons.wpi.edu:etd-theses-1250
Date: 23 April 2018
Creators: Aung, Arkar Min
Contributors: Neil T. Heffernan (Reader), Jacob R. Whitehill (Advisor)
Publisher: Digital WPI
Source Sets: Worcester Polytechnic Institute
Detected Language: English
Type: text
Format: application/pdf
Source: Masters Theses (All Theses, All Years)
