71

Space Efficient 3D Model Indexing

Jacobs, David W. 01 February 1992 (has links)
We show that we can optimally represent the set of 2D images produced by the point features of a rigid 3D model as two lines in two high-dimensional spaces. We then describe a working recognition system in which we represent these spaces discretely in a hash table. We can access this table at run time to find all the groups of model features that could match a group of image features, accounting for the effects of sensing error. We also use this representation of a model's images to demonstrate significant new limitations of two other approaches to recognition: invariants and non-accidental properties.
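The abstract does not spell out the indexing mechanics. As a point of reference only, a minimal sketch of hash-table model indexing in this general spirit (basis-pair geometric hashing, not the paper's two-line representation) might look like the following; the bin size, the 2D basis construction and all names are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

# Illustrative sketch: coordinates of each model feature are expressed in a
# frame built from a pair of model points, quantized, and used as a key into
# a table of (model, basis pair) entries.  At run time the same keys computed
# from image features vote for stored entries.  Sensing error is handled only
# crudely here, via the bin size.

BIN = 0.25  # quantization step; larger bins tolerate more sensing error


def basis_coords(p0, p1, p):
    """Coordinates of point p in the frame defined by the segment p0 -> p1."""
    u = p1 - p0
    v = np.array([-u[1], u[0]])          # perpendicular axis
    rel = p - p0
    return np.array([np.dot(rel, u), np.dot(rel, v)]) / np.dot(u, u)


def build_table(models):
    """models: dict name -> (N, 2) array of 2D feature points."""
    table = defaultdict(list)
    for name, pts in models.items():
        for i, j in combinations(range(len(pts)), 2):
            for k in range(len(pts)):
                if k in (i, j):
                    continue
                key = tuple(np.floor(basis_coords(pts[i], pts[j], pts[k]) / BIN).astype(int))
                table[key].append((name, (i, j)))
    return table


def probe(table, image_pts):
    """Vote for (model, basis pair) hypotheses consistent with image features."""
    votes = defaultdict(int)
    pts = np.asarray(image_pts, float)
    for i, j in combinations(range(len(pts)), 2):
        for k in range(len(pts)):
            if k in (i, j):
                continue
            key = tuple(np.floor(basis_coords(pts[i], pts[j], pts[k]) / BIN).astype(int))
            for entry in table.get(key, []):
                votes[entry] += 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```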
72

Indexing for Visual Recognition from a Large Model Base

Breuel, Thomas M. 01 August 1990 (has links)
This paper describes a new approach to the model base indexing stage of visual object recognition. Fast model base indexing of 3D objects is achieved by accessing a database of encoded 2D views of the objects using a fast 2D matching algorithm. The algorithm is specifically intended as a plausible solution for the problem of indexing into very large model bases that general purpose vision systems and robots will have to deal with in the future. Other properties that make the indexing algorithm attractive are that it can take advantage of most geometric and non-geometric properties of features without modification, and that it addresses the incremental model acquisition problem for 3D objects.
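A rough sketch of view-based indexing of the kind the abstract outlines is given below: encoded 2D views of all models go into one spatial index, and a query image descriptor retrieves nearby views, which vote for their models. The flat-vector view encoding, the k-d tree and the voting rule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative view-based index: each 3D model contributes several encoded 2D
# views (plain fixed-length feature vectors here); querying returns models
# ranked by how many of the nearest stored views belong to them.


class ViewIndex:
    def __init__(self):
        self._vecs, self._labels = [], []
        self._tree = None

    def add_view(self, model_name, view_vector):
        # Incremental model acquisition: just append more encoded views.
        self._vecs.append(np.asarray(view_vector, float))
        self._labels.append(model_name)

    def build(self):
        self._tree = cKDTree(np.vstack(self._vecs))

    def query(self, image_vector, k=5):
        """Return models ranked by votes from the k nearest stored views."""
        _, idx = self._tree.query(np.asarray(image_vector, float), k=k)
        votes = {}
        for i in np.atleast_1d(idx):
            votes[self._labels[i]] = votes.get(self._labels[i], 0) + 1
        return sorted(votes.items(), key=lambda kv: -kv[1])
```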
73

The Computational Study of Vision

Hildreth, Ellen C., Ullman, Shimon 01 April 1988 (has links)
The computational approach to the study of vision inquires directly into the sort of information processing needed to extract important information from the changing visual image---information such as the three-dimensional structure and movement of objects in the scene, or the color and texture of object surfaces. An important contribution that computational studies have made is to show how difficult vision is to perform, and how complex are the processes needed to perform visual tasks successfully. This article reviews some computational studies of vision, focusing on edge detection, binocular stereo, motion analysis, intermediate vision, and object recognition.
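As an illustration of one of the reviewed computations, a minimal Marr-Hildreth-style edge detector (Gaussian smoothing, Laplacian, zero crossings) can be written in a few lines; the value of sigma and the 4-neighbor zero-crossing test are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy import ndimage

def log_edges(image, sigma=2.0):
    """Boolean edge map from zero crossings of a Laplacian-of-Gaussian response."""
    img = np.asarray(image, float)
    log = ndimage.gaussian_laplace(img, sigma=sigma)
    edges = np.zeros_like(log, dtype=bool)
    # Mark a pixel as an edge if the LoG response changes sign with a 4-neighbor.
    edges[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    edges[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    return edges
```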
74

Learning Object-Independent Modes of Variation with Feature Flow Fields

Miller, Erik G., Tieu, Kinh, Stauffer, Chris P. 01 September 2001 (has links)
We present a unifying framework in which "object-independent" modes of variation are learned from continuous-time data such as video sequences. These modes of variation can be used as "generators" to produce a manifold of images of a new object from a single example of that object. We develop the framework in the context of a well-known example: analyzing the modes of spatial deformations of a scene under camera movement. Our method learns a close approximation to the standard affine deformations that are expected from the geometry of the situation, and does so in a completely unsupervised (i.e. ignorant of the geometry of the situation) fashion. We stress that it is learning a "parameterization", not just the parameter values, of the data. We then demonstrate how we have used the same framework to derive a novel data-driven model of joint color change in images due to common lighting variations. The model is superior to previous models of color change in describing non-linear color changes due to lighting.
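A minimal sketch of the underlying idea, assuming dense flow fields between frames are already available: treat each flow field as a long vector, extract a few dominant modes, and use a mode as a "generator" by warping a new image along it. Plain PCA via the SVD and nearest-neighbor warping stand in for the paper's actual learning and generation procedures; all names are illustrative.

```python
import numpy as np

def learn_modes(flow_fields, n_modes=6):
    """flow_fields: list of (H, W, 2) arrays.  Returns (n_modes, H, W, 2) modes."""
    shape = flow_fields[0].shape
    X = np.stack([f.reshape(-1) for f in flow_fields])   # samples x dims
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)      # principal directions
    return vt[:n_modes].reshape((n_modes,) + shape)


def apply_mode(image, mode, alpha):
    """Warp a grayscale image by alpha times a learned flow mode (nearest neighbor)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy - alpha * mode[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx - alpha * mode[..., 0]).astype(int), 0, w - 1)
    return image[src_y, src_x]
```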
75

Context-Based Vision System for Place and Object Recognition

Torralba, Antonio, Murphy, Kevin P., Freeman, William T., Rubin, Mark A. 19 March 2003 (has links)
While navigating in an environment, a vision system has to be able to recognize where it is and what the main objects in the scene are. In this paper we present a context-based vision system for place and object recognition. The goal is to identify familiar locations (e.g., office 610, conference room 941, Main Street), to categorize new environments (office, corridor, street) and to use that information to provide contextual priors for object recognition (e.g., table, chair, car, computer). We present a low-dimensional global image representation that provides relevant information for place recognition and categorization, and how such contextual information introduces strong priors that simplify object recognition. We have trained the system to recognize over 60 locations (indoors and outdoors) and to suggest the presence and locations of more than 20 different object types. The algorithm has been integrated into a mobile system that provides real-time feedback to the user.
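A toy sketch of the flavor of such a system, assuming a much simpler global descriptor than the paper's (mean gradient energy on a coarse spatial grid, loosely "gist"-like) and plain nearest-neighbor place matching in place of its probabilistic machinery:

```python
import numpy as np

def global_descriptor(image, grid=4):
    """Low-dimensional global descriptor: average gradient magnitude per grid cell."""
    img = np.asarray(image, float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = mag[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            cells.append(cell.mean())
    desc = np.array(cells)
    return desc / (np.linalg.norm(desc) + 1e-9)


def classify_place(descriptor, training):
    """training: list of (place_label, descriptor).  Returns the closest place label."""
    dists = [(np.linalg.norm(descriptor - d), label) for label, d in training]
    return min(dists)[1]
```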
76

Robust 2-D Model-Based Object Recognition

Cass, Todd A. 01 May 1988 (has links)
Techniques, suitable for parallel implementation, for robust 2D model-based object recognition in the presence of sensor error are studied. Models and scene data are represented as local geometric features, and the robust hypothesizing of feature matchings and transformations is considered. Bounds on the error in the image feature geometry are assumed, constraining the possible matchings and transformations. Transformation sampling is introduced as a simple, robust, polynomial-time, and highly parallel method of searching the space of transformations to hypothesize feature matchings. Key to the approach is that error in image feature measurement is explicitly accounted for. A Connection Machine implementation and experiments on real images are presented.
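A minimal sketch of transformation sampling as the abstract describes it, assuming 2D rigid transformations, a fixed error bound, and random sampling over a translation window; all parameter choices below are illustrative, and the scoring loop is the obvious candidate for parallel evaluation.

```python
import numpy as np

def score(transform, model_pts, image_pts, eps):
    """Number of model points landing within eps of some image point."""
    theta, tx, ty = transform
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    projected = model_pts @ R.T + np.array([tx, ty])
    d = np.linalg.norm(projected[:, None, :] - image_pts[None, :, :], axis=2)
    return int((d.min(axis=1) < eps).sum())


def sample_transformations(model_pts, image_pts, eps=2.0, n=2000, seed=0):
    """Return the best-scoring sampled (theta, tx, ty) and its score."""
    rng = np.random.default_rng(seed)
    lo, hi = image_pts.min(axis=0) - 10, image_pts.max(axis=0) + 10
    best = (None, -1)
    for _ in range(n):
        t = (rng.uniform(0, 2 * np.pi), rng.uniform(lo[0], hi[0]), rng.uniform(lo[1], hi[1]))
        s = score(t, model_pts, image_pts, eps)
        if s > best[1]:
            best = (t, s)
    return best
```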
77

Generalization over contrast and mirror reversal, but not figure-ground reversal, in an "edge-based

Riesenhuber, Maximilian 10 December 2001 (has links)
Baylis & Driver (Nature Neuroscience, 2001) have recently presented data on the response of neurons in macaque inferotemporal cortex (IT) to various stimulus transformations. They report that neurons can generalize over contrast and mirror reversal, but not over figure-ground reversal. This finding is taken to demonstrate that "the selectivity of IT neurons is not determined simply by the distinctive contours in a display, contrary to simple edge-based models of shape recognition", citing our recently presented model of object recognition in cortex (Riesenhuber & Poggio, Nature Neuroscience, 1999). In this memo, I show that the main effects of the experiment can be obtained by performing the appropriate simulations in our simple feedforward model. This suggests that the contributions to IT cell tuning of the explicit edge-assignment processes postulated in (Baylis & Driver, 2001) might be smaller than expected.
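For orientation only, a highly simplified feedforward sketch in the spirit of such a hierarchical model: oriented filtering, local max pooling for invariance, then Gaussian tuning to stored exemplars. The filters, pooling sizes and number of stages are assumptions and do not reproduce the published model.

```python
import numpy as np
from scipy import ndimage

def s1_c1(image, n_orient=4, pool=8):
    """Oriented edge energies (S1-like) followed by local max pooling (C1-like)."""
    img = np.asarray(image, float)
    gy, gx = np.gradient(img)
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
    maps = []
    for k in range(n_orient):
        # Keep gradient energy near this orientation, then pool and subsample.
        band = mag * (np.abs(ang - k * np.pi / n_orient) < np.pi / (2 * n_orient))
        maps.append(ndimage.maximum_filter(band, size=pool)[::pool, ::pool])
    return np.stack(maps)


def view_tuned_response(c1, stored_patch, sigma=1.0):
    """Gaussian tuning of the pooled representation to a stored exemplar."""
    return np.exp(-np.sum((c1 - stored_patch) ** 2) / (2 * sigma ** 2))
```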
78

Face processing in humans is compatible with a simple shape-based model of vision

Riesenhuber, Jarudi, Gilad, Sinha 05 March 2004 (has links)
Understanding how the human visual system recognizes objects is one of the key challenges in neuroscience. Inspired by a large body of physiological evidence (Felleman and Van Essen, 1991; Hubel and Wiesel, 1962; Livingstone and Hubel, 1988; Tso et al., 2001; Zeki, 1993), a general class of recognition models has emerged which is based on a hierarchical organization of visual processing, with succeeding stages being sensitive to image features of increasing complexity (Hummel and Biederman, 1992; Riesenhuber and Poggio, 1999; Selfridge, 1959). However, these models appear to be incompatible with some well-known psychophysical results. Prominent among these are experiments investigating recognition impairments caused by vertical inversion of images, especially those of faces. It has been reported that faces that differ "featurally" are much easier to distinguish when inverted than those that differ "configurally" (Freire et al., 2000; Le Grand et al., 2001; Mondloch et al., 2002), a finding that is difficult to reconcile with the aforementioned models. Here we show that after controlling for subjects' expectations, there is no difference between "featurally" and "configurally" transformed faces in terms of inversion effect. This result reinforces the plausibility of simple hierarchical models of object representation and recognition in cortex.
79

Statistical Local Appearance Models for Object Recognition

Guillamet Monfulleda, David 10 March 2004 (has links)
During the last few years, there has been a growing interest in object recognition techniques directly based on images, each corresponding to a particular appearance of the object. These techniques, which use only information from images, are called appearance-based models, and the interest in them is due to their success in recognizing objects. Earlier appearance-based approaches focused on holistic representations. Although global representations have been used successfully in a broad set of computer vision applications (e.g. face recognition, robot positioning), some problems cannot be easily solved this way: partial object occlusions, severe lighting changes, complex backgrounds, changes in object scale, and different viewpoints or orientations of objects remain difficult from a holistic perspective. Local appearance approaches emerged to reduce the effect of some of these problems and to provide a richer representation for use in more complex environments.

Usually, local appearance methods use high-dimensional descriptors to describe local regions of objects. The curse of dimensionality then appears and object classification degrades. A typical way to alleviate it is to use dimensionality-reduction techniques, among them linear data transformations. We can benefit from a linear data transformation if the projection preserves or improves the information of the original high-dimensional space and produces reliable classifiers. The main goal is thus to model low-dimensional pattern structures present in high-dimensional data.

The first part of this thesis focuses on color histograms, a local descriptor which provides a good source of information directly related to the photometric variations of local image regions. These high-dimensional descriptors are projected to low-dimensional spaces using several techniques. Principal Component Analysis (PCA), Non-negative Matrix Factorization (NMF) and a weighted version of NMF, the Weighted Non-negative Matrix Factorization (WNMF), are three linear transformations of data introduced in this thesis to reduce dimensionality and provide reliable low-dimensional spaces. These three linear techniques are then compared extensively in terms of classification performance on several databases. A first attempt to merge these techniques in a unified framework is also presented, with very promising results. Another goal of this thesis is to determine when, and which, linear transformation should be used depending on the data at hand. To this end, we introduce Independent Component Analysis (ICA) to model probability density functions in the original high-dimensional spaces, as well as its extension to subspaces obtained using PCA. ICA is a linear feature-extraction technique that aims to minimize higher-order dependencies in the extracted features; when its assumptions are met, statistically independent features can be obtained from the original measurements. We adapt ICA to the problem of statistical pattern recognition of high-dimensional data by means of class-conditional representations and a specifically adapted Bayesian decision scheme. Due to the independence assumption, this scheme results in a modification of the naive Bayes classifier.

The main disadvantage of the previous linear data transformations is that they do not take into account the spatial relationships among local features. Consequently, we present a method for recognizing three-dimensional objects in intensity images of cluttered scenes, using a model learned from a single image of the object. This method is directly based on local visual features extracted from relevant keypoints of objects and takes into account the relationships between them, reducing the ambiguity of previous representations. In fact, we describe a general methodology for obtaining a reliable estimation of the joint distribution of local feature vectors at multiple salient points (keypoints). We define the concept of a k-tuple in order to represent the local appearance of the object at k different points, as well as the statistical dependencies among them. Our method is adapted to real, complex and cluttered environments, and we present object detection results in these scenarios that are very promising.
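A small sketch of the first stage of the pipeline described above, assuming sklearn's PCA and NMF as the linear transformations and a nearest-class-mean rule in the reduced space as a stand-in for the classifiers studied in the thesis; the histogram size and number of components are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

def color_histograms(patches, bins=8):
    """patches: list of (H, W, 3) uint8 arrays -> (N, bins**3) normalized histograms."""
    hists = []
    for p in patches:
        idx = (p // (256 // bins)).reshape(-1, 3)
        flat = idx[:, 0] * bins * bins + idx[:, 1] * bins + idx[:, 2]
        h = np.bincount(flat, minlength=bins ** 3).astype(float)
        hists.append(h / h.sum())
    return np.vstack(hists)


def project(train_hists, method="nmf", n_components=20):
    """Fit a linear reduction on training histograms; returns (model, projected data)."""
    if method == "pca":
        model = PCA(n_components=n_components)
    else:
        # NMF keeps the non-negative, parts-based structure of histograms.
        model = NMF(n_components=n_components, init="nndsvda", max_iter=500)
    return model, model.fit_transform(train_hists)


def nearest_mean_label(model, class_means, hist):
    """Classify one histogram by the nearest class mean in the reduced space."""
    z = model.transform(hist[None, :])[0]
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))
```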
80

Comparison of motor-based versus visual sensory representations in object recognition tasks

Misra, Navendu 01 November 2005 (has links)
Various works have demonstrated the use of action as a critical component in allowing autonomous agents to learn about objects in the environment. The importance of memory becomes evident when these agents try to learn about complex objects. This necessity primarily stems from the fact that simpler agents behave reactively to stimuli in their attempt to learn about the nature of the object. However, complex objects give rise to temporally varying sensory data as the agent interacts with them, so reactive behavior becomes a hindrance in learning these objects, thus prompting the need for memory. A straightforward approach to memory, visual memory, directly represents the sensory data. Another mechanism is skill-based memory, or habit formation, in which the sequence of actions performed for a task is retained. The main hypothesis of this thesis is that since action seems to play an important role in simple perceptual understanding, it may also serve as a good memory representation. To test this hypothesis, a series of comparative experiments was carried out to determine the merits of each of these representations. It turns out that skill memory performs significantly better at recognition tasks than visual memory. Furthermore, a related experiment demonstrated that action forms a good intermediate representation of the sensory data. This supports theories proposing that various sensory modalities can ideally be represented in terms of action. This thesis thus successfully extends the role of action to the understanding of complex objects.
