
Leveraging Contextual Relationships Between Objects for Localization

Object localization is an active area of research in computer vision. The object localization task is to identify all locations of an object class within an image by drawing a bounding box around each instance of that class. Object locations are typically found by computing a classification score over a small window at multiple locations in the image, chosen according to some criterion, and selecting the highest-scoring windows as the object bounding boxes. Localization methods vary widely, but there is a growing trend toward methods that make localization more accurate and efficient through the use of context. In this thesis, I investigate whether contextual relationships between related objects can be leveraged to improve localization efficiency by reducing the number of windows considered in each localization task. I implement a context-driven localization model, which constrains the search spaces for the target object's location and window size, and evaluate it against two baseline models that do not use context between objects. I show that the context-driven model requires substantially fewer windows, on average, to localize a target object than either baseline. These results suggest that contextual relationships between objects in an image can be leveraged to significantly improve localization efficiency.
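As a rough illustration of the window-scoring process described in the abstract (not the author's implementation), the sketch below slides a fixed-size window over an image, scores each position with a placeholder classifier, and keeps the highest-scoring box; the `score_window` stand-in, window size, and stride are assumptions for illustration only. A context-driven model in the spirit of the thesis would restrict the ranges of positions and window sizes searched, rather than scanning the full grid.

```python
import numpy as np


def score_window(image, x, y, w, h):
    """Hypothetical classifier score for the window at (x, y) with size (w, h).

    A real localizer would run a trained object classifier here; this stand-in
    returns the mean pixel intensity inside the window so the sketch runs.
    """
    return float(image[y:y + h, x:x + w].mean())


def localize(image, window_size=(32, 32), stride=16):
    """Slide a fixed-size window over the image and return the best-scoring box.

    Returns (score, (x, y, w, h)) for the highest-scoring window.
    """
    h_img, w_img = image.shape[:2]
    w, h = window_size
    best_score, best_box = -np.inf, None
    for y in range(0, h_img - h + 1, stride):
        for x in range(0, w_img - w + 1, stride):
            s = score_window(image, x, y, w, h)
            if s > best_score:
                best_score, best_box = s, (x, y, w, h)
    return best_score, best_box


if __name__ == "__main__":
    # Synthetic image with a bright square standing in for the "object".
    img = np.zeros((128, 128))
    img[40:72, 60:92] = 1.0
    score, box = localize(img)
    print(f"best score {score:.3f} at box {box}")
```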

Identifier: oai:union.ndltd.org:pdx.edu/oai:pdxscholar.library.pdx.edu:open_access_etds-3205
Date: 03 March 2015
Creators: Olson, Clinton Leif
Publisher: PDXScholar
Source Sets: Portland State University
Detected Language: English
Type: text
Format: application/pdf
Source: Dissertations and Theses
