This article describes a model for incorporating scene/context priors into attention guidance. In the proposed scheme, visual context information is made available early in the visual processing chain, where it modulates the saliency of image regions and provides an efficient shortcut for object detection and recognition. The scene is represented by a low-dimensional global description computed from low-level features. These global scene features are then used to predict the probability that the target object is present in the scene, as well as its likely location and scale, before the image is explored.
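The following is a minimal illustrative sketch (not the paper's implementation) of the idea described above: a low-dimensional global scene descriptor predicts where the target is likely to appear, and that prior multiplicatively modulates a bottom-up saliency map before any detector is applied. All function names, the toy descriptor, and the Gaussian location prior are assumptions introduced here for illustration.

```python
# Illustrative sketch only: contextual modulation of a saliency map.
# The descriptor, prior, and saliency definitions are simplified stand-ins,
# not the authors' actual features or learned models.
import numpy as np

def gist_descriptor(image, grid=4):
    """Low-dimensional global description: mean intensity over a coarse
    grid of image blocks (a stand-in for pooled low-level filter features)."""
    h, w = image.shape
    bh, bw = h // grid, w // grid
    blocks = image[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    return blocks.mean(axis=(1, 3)).ravel()  # shape (grid * grid,)

def location_prior(gist, shape):
    """Predict P(target location | scene) from the global features.
    Here: a vertical Gaussian whose mean depends on the relative brightness
    of the upper vs. lower image half (purely illustrative of 'global scene
    features predict target location')."""
    h, w = shape
    upper = gist[: len(gist) // 2].mean()
    lower = gist[len(gist) // 2:].mean()
    # Brighter upper half -> expect the target lower in the image.
    mu = h * (0.7 if upper > lower else 0.3)
    rows = np.arange(h)
    prior_rows = np.exp(-0.5 * ((rows - mu) / (0.15 * h)) ** 2)
    prior = np.tile(prior_rows[:, None], (1, w))
    return prior / prior.sum()

def local_saliency(image):
    """Bottom-up saliency: absolute deviation from the global mean intensity."""
    sal = np.abs(image - image.mean())
    return sal / (sal.sum() + 1e-12)

def context_modulated_saliency(image):
    """Combine bottom-up saliency with the scene-driven prior before
    any object detector is run, as described in the abstract."""
    gist = gist_descriptor(image)
    return local_saliency(image) * location_prior(gist, image.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    sal = context_modulated_saliency(img)
    print("most salient location:", np.unravel_index(sal.argmax(), sal.shape))
```

In this sketch the scene prior acts as a multiplicative gain on the saliency map, so regions that are both locally salient and contextually plausible are examined first; the actual model in the memo learns these predictions from data rather than using a hand-set Gaussian.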
Identifier | oai:union.ndltd.org:MIT/oai:dspace.mit.edu:1721.1/6737 |
Date | 14 April 2004 |
Creators | Torralba, Antonio |
Source Sets | M.I.T. Theses and Dissertations |
Language | en_US |
Detected Language | English |
Format | 12 p., 2980182 bytes, 1698158 bytes, application/postscript, application/pdf |
Relation | AIM-2004-009 |