Since the turn of the millennium, a large number of computational models of visual salience have been put forward. How best to evaluate a given model's ability to predict where human observers fixate in images of real-world scenes remains an open research question. Assessing the role of spatial biases is challenging, particularly given the tendency for high-salience items to appear near the image center, combined with observers' tendency to look straight ahead ("central bias"). This problem is further exacerbated in model comparisons, because some, but not all, models implicitly or explicitly incorporate a center preference to improve performance. To address this and other issues, we propose to combine a priori parcellation of scenes with generalized linear mixed models (GLMM), building upon previous work. With this method, we can explicitly model the central bias of fixation by including a central-bias predictor in the GLMM. A second predictor captures how well the saliency model predicts human fixations, above and beyond the central bias. By-subject and by-item random effects account for individual differences and differences across scene items, respectively. Moreover, we can directly assess whether a given saliency model performs significantly better than others. In this article, we describe the data-processing steps required by our analysis approach. In addition, we demonstrate the GLMM analyses by evaluating the performance of different saliency models on a new eye-tracking corpus. To facilitate the application of our method, we make the open-source Python toolbox "GridFix" available.
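The logistic GLMM described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' analysis code: the column names (fixated, central_bias, saliency, subject, item), the input filename, and the use of statsmodels' BinomialBayesMixedGLM as the mixed-model fitter are all assumptions made for the sketch; the actual pipeline prepares such a per-grid-cell table with the GridFix toolbox.

```python
# Hedged sketch of the GLMM structure described in the abstract:
# a binomial (logistic) mixed model with a central-bias predictor,
# a saliency predictor, and by-subject / by-item random intercepts.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical export: one row per (subject, scene item, grid cell),
# with a 0/1 outcome indicating whether the cell was fixated.
data = pd.read_csv("gridfix_output.csv")

model = BinomialBayesMixedGLM.from_formula(
    # Fixed effects: central bias plus the saliency model's prediction
    # for each grid cell, so saliency is evaluated beyond central bias.
    "fixated ~ central_bias + saliency",
    # Variance components: by-subject and by-item random intercepts.
    {"subject": "0 + C(subject)", "item": "0 + C(item)"},
    data,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

Comparing such fits across saliency models (each supplying its own saliency predictor for the same grid cells) is what allows one model's predictive performance to be tested against another's, as described above.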
Identifier | oai:union.ndltd.org:DRESDEN/oai:qucosa.de:bsz:ch1-qucosa-232614 |
Date | 22 January 2018 |
Creators | Nuthmann, Antje, Einhäuser, Wolfgang, Schütz, Immo |
Contributors | Technische Universität Chemnitz, Fakultät für Naturwissenschaften, Frontiers Research Foundation |
Publisher | Universitätsbibliothek Chemnitz |
Source Sets | Hochschulschriftenserver (HSSS) der SLUB Dresden |
Language | English |
Detected Language | English |
Type | doc-type:article |
Format | application/pdf, text/plain, application/zip |
Source | Front. Hum. Neurosci., 31 October 2017 | https://doi.org/10.3389/fnhum.2017.00491, ISSN 1662-5161 |