Current uses of tagged images typically exploit only
the most explicit information: the link between the nouns
named and the objects present somewhere in the image. We
propose to leverage “unspoken” cues that rest within an
ordered list of image tags so as to improve object localization.
We define three novel implicit features from an image’s
tags—the relative prominence of each object as signified
by its order of mention, the scale constraints implied
by unnamed objects, and the loose spatial links hinted by
the proximity of names on the list. By learning a conditional
density over the localization parameters (position
and scale) given these cues, we show how to improve both
accuracy and efficiency when detecting the tagged objects.
We validate our approach with 25 object categories from
the PASCAL VOC and LabelMe datasets, and demonstrate
its effectiveness relative to both traditional sliding windows
as well as a visual context baseline. / text
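The three implicit cues described above can be illustrated with a minimal sketch. All function names and the particular scoring formulas below are illustrative assumptions, not the thesis's actual method: the real approach learns a conditional density over position and scale, whereas this toy version only derives simple heuristic quantities from an ordered tag list.

```python
def implicit_tag_features(tags, target):
    """Hypothetical sketch of the three implicit cues for `target`,
    given an ordered list of image tags. The formulas are illustrative
    stand-ins, not the learned conditional density from the thesis.
    """
    n = len(tags)
    rank = tags.index(target)
    # 1) Prominence: an object mentioned earlier is presumed more
    #    prominent (larger / more central) in the image.
    prominence = 1.0 - rank / n
    # 2) Scale cue: more co-tagged objects suggest a more cluttered
    #    scene, so each object likely occupies less of the image.
    scale_prior = 1.0 / n
    # 3) Proximity: tags adjacent in the list are loosely assumed to be
    #    spatially near each other; collect immediate list neighbors.
    neighbors = [t for i, t in enumerate(tags)
                 if t != target and abs(i - rank) <= 1]
    return {"prominence": prominence,
            "scale_prior": scale_prior,
            "neighbors": neighbors}

feats = implicit_tag_features(["person", "dog", "car", "tree"], "dog")
```

In a full system, features like these would condition a learned density over localization parameters, letting a detector prioritize likely positions and scales for each tagged object.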
Identifier | oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/ETD-UT-2010-05-1514 |
Date | 10 November 2010 |
Creators | Hwang, Sung Ju |
Source Sets | University of Texas |
Language | English |
Detected Language | English |
Type | thesis |
Format | application/pdf |