
Higher-level representations of natural images

The traditional view of vision is that neurons in early cortical areas process information about simple features (e.g. orientation and spatial frequency) within small, spatially localised regions of visual space (the neuron's receptive field). This piecemeal information is then fed forward to later stages of the visual system, where it is combined to form coherent and meaningful global (higher-level) representations. The overall aim of this thesis is to examine and quantify this higher-level processing: how we encode global features in natural images, and the extent to which our perception of these global representations is determined by the local features within images. Using the tilt after-effect as a tool, Chapter 1 examined the processing of a low-level, local feature and found that the orientation of a sinusoidal grating could be encoded in both a retinally and spatially non-specific manner. Chapter 2 then examined tilt after-effects to the global orientation of an image (i.e., its uprightness). We found that image uprightness was also encoded in a retinally/spatially non-specific manner, but that this global property could be processed largely independently of the image's local orientation content. Chapter 3 investigated whether our increased sensitivity to cardinal (vertical and horizontal) structures, compared to inter-cardinal (45° and 135° clockwise of vertical) structures, influences the classification of unambiguous natural images. Participants required relatively less contrast to classify images when those images retained near-cardinal rather than near-inter-cardinal structures. Finally, Chapter 4 examined category classification when images were ambiguous. Observers were biased to classify ambiguous images, created by combining structures from two distinct image categories, as carpentered (e.g., a house). This bias could not be explained by differences in sensitivity to local structures and is most likely the result of our long-term exposure to city views. Overall, these results show that higher-level representations are not fully dependent on the lower-level features within an image. Furthermore, our knowledge about the environment influences the extent to which we use local features to rapidly identify an image.

Identifier: oai:union.ndltd.org:bl.uk/oai:ethos.bl.uk:766166
Date: January 2018
Creators: Miflah, Hussain Ismail Ahamed
Publisher: Queen Mary, University of London
Source Sets: Ethos UK
Detected Language: English
Type: Electronic Thesis or Dissertation
Source: http://qmro.qmul.ac.uk/xmlui/handle/123456789/39759
