
Simultaneous Bottom-Up/Top-Down Processing in Early and Mid-Level Vision

The prevalent view in computer vision since Marr is that visual perception is a data-driven bottom-up process. In this view, image data is processed in a feed-forward fashion where a sequence of independent visual modules transforms simple low-level cues into more complex abstract perceptual units. Over the years, a variety of techniques has been developed using this paradigm. Yet an important realization is that low-level visual cues are generally so ambiguous that they can render purely bottom-up methods quite unsuccessful. These ambiguities cannot be resolved without taking high-level contextual information into account. In this thesis, we explore different ways of enriching early and mid-level computer vision modules with a capacity to extract and use contextual knowledge. Mainly, we integrate low-level image features with contextual information within unified formulations where bottom-up and top-down processing take place simultaneously.

Identifier: oai:union.ndltd.org:METU/oai:etd.lib.metu.edu.tr:http://etd.lib.metu.edu.tr/upload/12610167/index.pdf
Date: 01 November 2008
Creators: Erdem, Mehmet Erkut
Contributors: Tari, Sibel
Publisher: METU
Source Sets: Middle East Technical Univ.
Language: English
Detected Language: English
Type: Ph.D. Thesis
Format: text/pdf
Rights: To liberate the content for public access
