About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Face Representation in Cortex: Studies Using a Simple and Not So Special Model

Rosen, Ezra 05 June 2003 (has links)
The face inversion effect has been widely documented as evidence of the uniqueness of face processing. Using a computational model, we show that the face inversion effect is a byproduct of expertise with the face object class. In simulations using HMAX, a hierarchical, shape-based model, we show that the magnitude of the inversion effect is a function of the specificity of the representation. Using many, sharply tuned units, an "expert" has a large inversion effect. On the other hand, if fewer, broadly tuned units are used, the expertise is lost, and this "novice" has a small inversion effect. Because the size of the inversion effect is a product of the representation, not the object class, given the right training we can create experts and novices in any object class. Using the same representations as with faces, we create experts and novices for cars. We also measure the feasibility of a view-based model for recognition of rotated objects using HMAX. Using faces, we show that transfer of learning to novel views is possible. Given only one training view, the view-based model can recognize a face at a new orientation via interpolation from the views to which it had been tuned. Although the model generalizes well to upright faces, inverted faces yield poor performance because their features change differently under rotation.
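The abstract's central claim — that narrower tuning produces a larger inversion effect — can be illustrated with a toy simulation. This is not the thesis's HMAX code: the Gaussian units, the feature dimensionality, and the modeling of inversion as a fixed feature perturbation are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_response(stimulus, prototypes, sigma):
    """Mean response of Gaussian (RBF) units tuned to stored prototypes."""
    d2 = np.sum((prototypes - stimulus) ** 2, axis=1)
    return np.mean(np.exp(-d2 / (2 * sigma ** 2)))

# Toy "faces": random feature vectors; "inversion" modeled as a fixed,
# feature-changing transformation (here a large perturbation).
dim = 20
faces = rng.normal(size=(50, dim))
inverted = faces + rng.normal(scale=2.0, size=faces.shape)

def inversion_effect(prototypes, sigma):
    up = np.mean([population_response(f, prototypes, sigma) for f in faces])
    inv = np.mean([population_response(f, prototypes, sigma) for f in inverted])
    return (up - inv) / up  # relative response drop for inverted stimuli

expert = inversion_effect(prototypes=faces, sigma=1.0)      # many, sharply tuned units
novice = inversion_effect(prototypes=faces[:5], sigma=5.0)  # few, broadly tuned units

print(expert > novice)  # narrower tuning -> larger inversion effect
```

The "expert" population of sharply tuned units loses nearly all of its response to transformed inputs, while the "novice" population of broad units degrades gracefully — the inversion effect tracks the representation, not the stimulus class.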

Categorization in IT and PFC: Model and Experiments

Knoblich, Ulf, Freedman, David J., Riesenhuber, Maximilian 18 April 2002 (has links)
In a recent experiment, Freedman et al. recorded from inferotemporal (IT) and prefrontal (PFC) cortices of monkeys performing a "cat/dog" categorization task (Freedman 2001; Freedman, Riesenhuber, Poggio, and Miller 2001). In this paper we analyze the tuning properties of view-tuned units in our HMAX model of object recognition in cortex (Riesenhuber 1999) using the same paradigm and stimuli as in the experiment. We then compare the simulation results to the monkey inferotemporal neuron population data. We find that view-tuned model IT units that were trained without any explicit category information can show category-related tuning as observed in the experiment. This suggests that the tuning properties of experimental IT neurons might primarily be shaped by bottom-up stimulus-space statistics, with little influence of top-down task-specific information. The population of experimental PFC neurons, on the other hand, shows tuning properties that cannot be explained by stimulus tuning alone. These analyses are compatible with a model of object recognition in cortex (Riesenhuber 2000) in which a population of shape-tuned neurons provides a general basis for neurons tuned to different recognition tasks.

Optimum Microarchitectures for Neuromorphic Algorithms

Wang, Shu January 2011 (has links)
No description available.

Investigating shape representation in area V4 with HMAX: Orientation and Grating selectivities

Kouh, Minjoon, Riesenhuber, Maximilian 08 September 2003 (has links)
The question of how shape is represented is of central interest to understanding visual processing in cortex. While the tuning properties of cells in the early part of the ventral visual stream, thought to be responsible for object recognition in the primate, are comparatively well understood, several different theories have been proposed regarding tuning in higher visual areas, such as V4. We used the model of object recognition in cortex presented by Riesenhuber and Poggio (1999), where more complex shape tuning in higher layers is the result of combining afferent inputs tuned to simpler features, and compared the tuning properties of model units in intermediate layers to those of V4 neurons from the literature. In particular, we investigated the issue of shape representation in visual areas V1 and V4 using oriented bars and various types of gratings (polar, hyperbolic, and Cartesian), as used in several physiology experiments. Our computational model was able to reproduce several physiological findings, such as the broadening distribution of the orientation bandwidths and the emergence of a bias toward non-Cartesian stimuli. Interestingly, the simulation results suggest that some V4 neurons receive input from afferents with spatially separated receptive fields, leading to experimentally testable predictions. However, the simulations also show that the stimulus set of Cartesian and non-Cartesian gratings is not sufficiently complex to probe shape tuning in higher areas, necessitating the use of more complex stimulus sets.
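The Cartesian and non-Cartesian (polar, hyperbolic) grating stimuli referred to here can be generated in a few lines. The exact parameterizations used in the physiology experiments differ; the definitions below are a common textbook form, not the paper's stimulus code.

```python
import numpy as np

def grid(size=64):
    """Square coordinate grid spanning [-1, 1] in x and y."""
    lin = np.linspace(-1.0, 1.0, size)
    return np.meshgrid(lin, lin)

def cartesian_grating(size=64, freq=4.0, theta=0.0):
    """Sinusoidal luminance grating along orientation theta."""
    x, y = grid(size)
    return np.sin(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))

def polar_grating(size=64, radial_freq=4.0, angular_freq=0.0):
    """Polar grating; angular_freq=0 gives concentric rings,
    radial_freq=0 gives radial spokes."""
    x, y = grid(size)
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    return np.sin(2 * np.pi * radial_freq * r + angular_freq * phi)

def hyperbolic_grating(size=64, freq=4.0):
    """Hyperbolic grating: sinusoid of the product x*y (saddle pattern)."""
    x, y = grid(size)
    return np.sin(2 * np.pi * freq * x * y)

stimuli = [cartesian_grating(), polar_grating(), hyperbolic_grating()]
print([s.shape for s in stimuli])
```

Sweeping the frequency and orientation parameters of such a set, and recording each model unit's preferred stimulus class, is the kind of probe the abstract argues is ultimately too simple for higher areas.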

Realistic Modeling of Simple and Complex Cell Tuning in the HMAX Model, and Implications for Invariant Object Recognition in Cortex

Serre, Thomas, Riesenhuber, Maximilian 27 July 2004 (has links)
Riesenhuber & Poggio recently proposed a model of object recognition in cortex which, beyond integrating general beliefs about the visual system in a quantitative framework, made testable predictions about visual processing. In particular, they showed that invariant object representation could be obtained with a selective pooling mechanism over properly chosen afferents through a MAX operation: for instance, at the complex cell level, pooling over a group of simple cells at the same preferred orientation and position in space but at slightly different spatial frequencies would provide scale tolerance, while pooling over a group of simple cells at the same preferred orientation and spatial frequency but at slightly different positions in space would provide position tolerance. Indirect support for such mechanisms in the visual system comes from the ability of the architecture at the top level to replicate the shape tuning as well as the shift and size invariance properties of "view-tuned cells" (VTUs) found in inferotemporal cortex (IT), the highest area in the ventral visual stream, thought to be crucial in mediating object recognition in cortex. There is also now good physiological evidence that a MAX operation is performed at various levels along the ventral stream. However, in the original paper by Riesenhuber & Poggio, tuning and pooling parameters of model units in early and intermediate areas were only qualitatively inspired by physiological data. In particular, many studies have investigated the tuning properties of simple and complex cells in primary visual cortex, V1. We show that units in the early levels of HMAX can be tuned to produce realistic simple and complex cell-like tuning, and that the earlier findings on the invariance properties of model VTUs still hold in this more realistic version of the model.
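The pooling scheme described above is easy to sketch: a "complex" unit takes a MAX over "simple" afferents that share tuning but differ along one dimension (here, preferred position), yielding tolerance along exactly that dimension. This is a one-dimensional toy with Gaussian position tuning, not the model's actual Gabor-filter front end.

```python
import numpy as np

def simple_cell(stimulus_pos, preferred_pos, sigma=0.5):
    """Toy simple cell: response falls off with distance between the
    stimulus and its preferred position (orientation/frequency fixed)."""
    return np.exp(-((stimulus_pos - preferred_pos) ** 2) / (2 * sigma ** 2))

def complex_cell(stimulus_pos, preferred_positions, sigma=0.5):
    """Complex cell as a MAX over simple-cell afferents that share tuning
    but differ in preferred position -> position tolerance."""
    return max(simple_cell(stimulus_pos, p, sigma) for p in preferred_positions)

afferents = np.linspace(-2.0, 2.0, 9)   # simple cells at shifted positions
positions = np.linspace(-2.0, 2.0, 41)  # stimulus positions to sweep

simple_resp = [simple_cell(p, 0.0) for p in positions]
complex_resp = [complex_cell(p, afferents) for p in positions]

# The MAX-pooled response stays near 1 across the whole pooled range,
# while a single simple cell's response collapses away from its center.
print(min(complex_resp) > max(simple_resp[0], simple_resp[-1]))
```

Swapping "preferred position" for "preferred spatial frequency" in the afferent pool gives the scale-tolerance case described in the abstract; the pooling operation itself is unchanged.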

Limitations of using bags of complex features: Hierarchical higher-order filters fail to capture spatial configurations

Van Horn, Nicholas M. 28 July 2011 (has links)
No description available.

Scale Invariant Object Recognition Using Cortical Computational Models and a Robotic Platform

Voils, Danny 01 January 2012 (has links)
This paper proposes an end-to-end, scale-invariant, visual object recognition system composed of computational components that mimic the cortex in the brain. The system uses a two-stage process. The first stage is a filter that extracts scale-invariant features from the visual field. The second stage uses inference-based spatio-temporal analysis of these features to identify objects in the visual field. The proposed model combines Numenta's Hierarchical Temporal Memory (HTM) with HMAX, developed by MIT's Brain and Cognitive Sciences Department. While these two biologically inspired paradigms are based on what is known about the visual cortex, HTM and HMAX tackle the overall object recognition problem from different directions. Image-pyramid-based methods like HMAX make explicit use of scale but have no sense of time. HTM, on the other hand, only indirectly tackles scale but makes explicit use of time. By combining HTM and HMAX, both scale and time are addressed. In this paper, I show that HTM and HMAX can be combined to make a complete, cortex-inspired object recognition model that explicitly uses both scale and time to recognize objects in temporal sequences of images. Additionally, through experimentation, I examine several variations of HMAX and its
