About
The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

Learning Patch-based Structural Element Models with Hierarchical Palettes

Chua, Jeroen 21 November 2012 (has links)
Image patches can be factorized into ‘shapelets’ that describe segmentation patterns, and palettes that describe how to paint the segments. This allows a flexible factorization of local shape (segmentation patterns) and appearance (palettes), which we argue is useful for tasks such as object and scene recognition. Here, we introduce the ‘shapelet’ model: a framework that learns a library of shapelet segmentation patterns to capture local shape, and hierarchical palettes of colors to capture appearance. Using a learned shapelet library, image patches can be analyzed with a variational technique to produce descriptors that separately describe local shape and local appearance. These descriptors can be used for high-level vision tasks, such as object and scene recognition. We show that the shapelet model is competitive with SIFT-based methods and structure element (stel) model variants on the object recognition datasets Caltech28 and Caltech101, and the scene recognition dataset All-I-Have-Seen.
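A rough sketch of the shape/appearance factorization this abstract describes (all names hypothetical; a hard-assignment simplification with flat rather than hierarchical palettes, in place of the variational inference):

```python
import numpy as np

def factorize_patch(patch, shapelets):
    """Pick the shapelet (binary segmentation mask) whose per-segment mean
    colors best reconstruct the patch; return (shapelet index, palette, error).
    The index captures local shape, the palette captures local appearance."""
    best = None
    for i, mask in enumerate(shapelets):
        palette = np.array([patch[~mask].mean(axis=0),   # color of segment 0
                            patch[mask].mean(axis=0)])   # color of segment 1
        recon = np.where(mask[..., None], palette[1], palette[0])
        err = ((patch - recon) ** 2).sum()
        if best is None or err < best[2]:
            best = (i, palette, err)
    return best

# toy patch: left half dark, right half bright
patch = np.zeros((4, 4, 3))
patch[:, 2:] = 1.0
shapelets = [np.arange(4)[None, :].repeat(4, 0) >= 2,    # vertical split
             np.arange(4)[:, None].repeat(4, 1) >= 2]    # horizontal split
idx, palette, err = factorize_patch(patch, shapelets)    # picks the vertical split
```

The vertical-split shapelet reconstructs the toy patch exactly, so it wins with zero error; its palette holds the two segment colors separately from the shape choice.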
32

Comparing Visual Features for Morphing Based Recognition

Wu, Jia Jane 25 May 2005 (has links)
This thesis presents a method of object classification using the idea of deformable shape matching. Three types of visual features, geometric blur, C1 and SIFT, are used to generate feature descriptors. These feature descriptors are then used to find point correspondences between pairs of images. Various morphable models are created from small subsets of these correspondences using thin-plate splines. Given these morphs, a simple algorithm, least median of squares (LMEDS), is used to find the best morph. A scoring metric, using both LMEDS and the distance transform, is used to classify test images with a nearest neighbor algorithm. We perform the experiments on the Caltech 101 dataset [5]. To ease computation, for each test image, a shortlist is created containing 10 of the most likely candidates. We were unable to duplicate the performance of [1] in the shortlist stage because we did not use hand-segmentation to extract objects for our training images. However, our gain from the shortlist to the correspondence stage is comparable to theirs. In our experiments, we improved from 21% to 28% (a gain of 33%), while [1] improved from 41% to 48% (a gain of 17%). We find that a non-shape-based approach, C2 [14], achieves an overall classification rate of 33.61%, higher than all of the shape-based methods tested in our experiments.
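The morph-selection step can be sketched as least median of squares over candidate morphs (function names hypothetical; thin-plate spline fitting and descriptor matching are omitted and the morphs are stubbed as simple transforms):

```python
import numpy as np

def lmeds_best_morph(src_pts, dst_pts, morphs):
    """Score each candidate morph by the median of its squared residuals
    over all correspondences; the median makes the score robust to
    outlier correspondences (up to roughly 50% contamination)."""
    scores = [np.median(((m(src_pts) - dst_pts) ** 2).sum(axis=1)) for m in morphs]
    best = int(np.argmin(scores))
    return best, scores[best]

# toy data: the true morph is a shift by (1, 0); three correspondences are outliers
src = np.array([[i, 0.0] for i in range(10)])
dst = src + [1.0, 0.0]
dst[:3] += [5.0, 0.0]                   # corrupted matches
morphs = [lambda p: p,                  # identity (wrong)
          lambda p: p + [1.0, 0.0]]     # shift (correct)
best, med = lmeds_best_morph(src, dst, morphs)
```

Because only 3 of 10 correspondences are corrupted, the median residual of the correct morph stays at zero while the identity morph scores worse, so LMEDS picks the right one despite the outliers.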
33

Invariant object matching with a modified dynamic link network

Sim, Hak Chuah January 1999 (has links)
No description available.
34

The influence of real-world object expertise on visual discrimination mechanisms

Hagen, Simen 03 January 2018 (has links)
Object experts quickly and accurately discriminate objects within their domain of expertise. Although expert recognition has been extensively studied at both the behavioral and neural levels in both real-world and laboratory-trained experts, we know little about the visual features and perceptual strategies that the expert learns to use in order to make fast and accurate recognition judgments. Thus, the aim of this work was to identify the visual features (e.g., color, form, motion) and perceptual strategies (e.g., fixation pattern) that real-world experts employ to recognize objects from their domain of expertise. Experiments 1 to 3 used psychophysical methods to test the role of color, form (spatial frequencies), and motion, respectively, in expert object recognition. Experiment 1 showed that although both experts and novices relied on color to recognize birds at the family level, analysis of the response time distribution revealed that color facilitated expert performance in the fastest and slowest trials, whereas color only helped the novices in the slower trials. Experiment 2 showed that both experts and novices were more accurate when bird images contained the internal information represented by a middle range of SFs, described by a quadratic function. However, the experts, but not the novices, showed a similar quadratic relationship between response times and SF range. Experiment 3 showed that, contrary to our prediction, both groups were equally sensitive to global bird motion. Experiment 4, which tested the perceptual strategies of expert recognition in a gaze-contingent eye-tracking paradigm, showed that only in the fastest trials did experts use a wider range of vision. Experiment 5, which examined the neural representations of categories within the expert domain, suggested that the mechanisms that represent within-categories of faces also represent within-categories from the domain of expertise, but not the novice domain.
Collectively, these studies suggest that expertise influences visual discrimination mechanisms such that they become more sensitive to the visual dimensions upon which the expert domains are discriminated. / Graduate / 2018-12-12
35

Active object recognition for 2D and 3D applications

Govender, Natasha January 2015 (has links)
Includes bibliographical references / Active object recognition provides a mechanism for selecting informative viewpoints to complete recognition tasks as quickly and accurately as possible. One can manipulate the position of the camera or the object of interest to obtain more useful information. This approach can improve the computational efficiency of the recognition task by only processing viewpoints selected based on the amount of relevant information they contain. Active object recognition methods are based around how to select the next best viewpoint and the integration of the extracted information. Most active recognition methods do not use local interest points which have been shown to work well in other recognition tasks and are tested on images containing a single object with no occlusions or clutter. In this thesis we investigate using local interest points (SIFT) in probabilistic and non-probabilistic settings for active single and multiple object and viewpoint/pose recognition. Test images used contain objects that are occluded and occur in significant clutter. Visually similar objects are also included in our dataset. Initially we introduce a non-probabilistic 3D active object recognition system which consists of a mechanism for selecting the next best viewpoint and an integration strategy to provide feedback to the system. A novel approach to weighting the uniqueness of features extracted is presented, using a vocabulary tree data structure. This process is then used to determine the next best viewpoint by selecting the one with the highest number of unique features. A Bayesian framework uses the modified statistics from the vocabulary structure to update the system's confidence in the identity of the object. New test images are only captured when the belief hypothesis is below a predefined threshold. 
This vocabulary tree method is tested against randomly selecting the next viewpoint and a state-of-the-art active object recognition method by Kootstra et al. Our approach outperforms both methods by correctly recognizing more objects with less computational expense. This vocabulary tree method is extended for use in a probabilistic setting to improve the object recognition accuracy. We introduce Bayesian approaches for object recognition and object and pose recognition. Three likelihood models are introduced which incorporate various parameters and levels of complexity. The occlusion model, which includes geometric information and variables that cater for the background distribution and occlusion, correctly recognizes all objects on our challenging database. This probabilistic approach is further extended for recognizing multiple objects and poses in test images. We show through experiments that this model can recognize multiple objects which occur in close proximity to distractor objects. Our viewpoint selection strategy is also extended to the multiple object application and performs well when compared to randomly selecting the next viewpoint, the activation model and mutual information. We also study the impact of using active vision for shape recognition. Fourier descriptors are used as input to our shape recognition system with mutual information as the active vision component. We build multinomial and Gaussian distributions using this information, which correctly recognizes a sequence of objects. We demonstrate the effectiveness of active vision in object recognition systems. We show that even in different recognition applications using different low level inputs, incorporating active vision improves the overall accuracy and decreases the computational expense of object recognition systems.
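A minimal sketch of the sequential belief update and viewpoint-selection loop this abstract describes (all names and numbers hypothetical; feature extraction and the vocabulary-tree statistics are abstracted into precomputed per-view likelihoods and uniqueness scores):

```python
import numpy as np

def active_recognize(likelihoods, uniqueness, prior, threshold=0.9):
    """Repeatedly visit the unvisited viewpoint with the highest
    unique-feature score, update the belief over object identities by
    Bayes' rule, and stop once the top belief exceeds `threshold`
    (no further images are captured).
    likelihoods[v, o] = p(observation at viewpoint v | object o)."""
    belief = np.asarray(prior, dtype=float)
    unvisited, visited = set(range(len(uniqueness))), []
    while unvisited and belief.max() < threshold:
        v = max(unvisited, key=lambda i: uniqueness[i])
        unvisited.remove(v)
        visited.append(v)
        belief *= likelihoods[v]     # Bayes update with the new view
        belief /= belief.sum()       # renormalize
    return int(belief.argmax()), belief, visited

# toy setup: 3 viewpoints x 3 candidate objects, uniform prior
likelihoods = np.array([[0.2, 0.6, 0.2],
                        [0.3, 0.5, 0.2],
                        [0.1, 0.8, 0.1]])
uniqueness = [5, 2, 9]               # unique-feature count per viewpoint
obj, belief, visited = active_recognize(likelihoods, uniqueness, [1/3] * 3)
```

In this toy run the most-unique viewpoint is visited first; its evidence alone leaves the belief below the threshold, so one more view is captured, after which the loop stops with object 1 identified.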
36

Perceived Size Modulates Cortical Processing of Objects

Brown, James Michael 28 January 2016 (has links)
Empirical object recognition research indicates that objects are represented and perceived as hierarchical part-whole arrangements that vary according to bottom-up and top-down biases. An ongoing debate within object recognition research concerns whether local or global image properties are more fundamental for the perception of objects. Similarly, there is also disagreement about whether the visual system is guided by holistic or analytical processes. Neuroimaging findings have revealed functional distinctions between low and higher-level visual processes across lateral occipital-temporal cortex (LOC), primary visual cortices (V1/V2) and ventral occipital-temporal cortex. Recent studies suggest activations in these object recognition areas and others, such as the fusiform face area (FFA) and extra-striate body area (EBA), are collinear with activations associated with the perception of scenes and buildings. Together, this information warrants the focus of the proposed study: to investigate the neural correlates of object recognition and perceived size. During the experiment, subjects tracked a fixation stimulus while simultaneously being presented with images of shape contours and faces. Contour and face stimuli subtended small, medium and large visual angles in order to evaluate variance in neural activation across perceived size. In the present study visual areas were hypothesized to modulate as a function of visual angle, meaning that the part-whole relationships of objects vary with their perceived size. / Master of Science
37

The use of a structured laser light system to ascertain three dimensional measurements of underwater work sites

Spours, J. January 2000 (has links)
No description available.
38

Recognition exploiting geometrical, appearance, and relational descriptions

Byne, J. H. Magnus January 1999 (has links)
No description available.
39

Memory mechanisms in the medial temporal lobe of the primate : the role of the perirhinal cortex

Buckley, Mark J. January 1997 (has links)
No description available.
40

Recognition and Registration of 3D Models in Depth Sensor Data

Grankvist, Ola January 2016 (has links)
Object recognition is the art of localizing predefined objects in image sensor data. In this thesis a depth sensor was used, which has the benefit that the 3D pose of the object can be estimated. This has applications in e.g. automatic manufacturing, where a robot picks up parts or tools with a robot arm. This master thesis presents an implementation and an evaluation of a system for object recognition of 3D models in depth sensor data. The system uses several depth images rendered from a 3D model and describes their characteristics using so-called feature descriptors. These are then matched with the descriptors of a scene depth image to find the 3D pose of the model in the scene. The pose estimate is then refined iteratively using a registration method. Different descriptors and registration methods are investigated. One of the main contributions of this thesis is that it compares two different types of descriptors, local and global, a comparison which has seen little attention in research. This is done for two different scene scenarios, and for different types of objects and depth sensors. The evaluation shows that global descriptors are fast and robust for objects with a smooth visible surface, whereas the local descriptors perform better for larger objects in clutter and occlusion. This thesis also presents a novel global descriptor, the CESF, which is observed to be more robust than other global descriptors. As for the registration methods, standard ICP is shown to perform most accurately, and ICP point-to-plane to be the most robust.
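The iterative pose-refinement step can be sketched as point-to-point ICP with a closed-form (Kabsch/SVD) rigid alignment per iteration (a simplified sketch under stated assumptions: brute-force nearest-neighbour matching, no descriptor-based initialization, and the point-to-point rather than point-to-plane variant):

```python
import numpy as np

def icp_point_to_point(src, dst, iters=20):
    """Align point cloud `src` (N,3) to `dst` (M,3): repeatedly match each
    transformed source point to its nearest destination point, then solve
    the optimal rigid transform for those matches in closed form via SVD."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest-neighbour correspondences (for clarity only)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch: closed-form rotation/translation aligning src to matches
        mu_s, mu_d = src.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_d))
        sign = np.linalg.det(Vt.T @ U.T)       # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, sign]) @ U.T
        t = mu_d - R @ mu_s
    return R, t

# toy check: recover a small known rigid transform of a 3x3x3 grid
src = np.array([[x, y, z] for x in range(3)
                for y in range(3) for z in range(3)], float) * 2.0
a = 0.02
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
t_true = np.array([0.01, -0.02, 0.03])
dst = src @ R_true.T + t_true
R, t = icp_point_to_point(src, dst)
```

Because the toy displacement is small relative to the grid spacing, the nearest-neighbour matches are correct from the first iteration and the transform is recovered essentially exactly; in practice ICP needs a reasonable initial pose, which is what the descriptor-matching stage provides.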
