  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

TOP-DOWN EFFECTS OF PERCEPTUAL GROUPING ON THE PERCEPTION OF MOTION

Unknown Date (has links)
Ullman (1979) proposed a measurement metric, which he termed "affinity": a similarity measure between successively presented surfaces that affects the perception of apparent motion between them. The concept of affinity was later extended to describe how the perception of motion within a surface is affected by the surface's grouping strength with adjacent surfaces (Hock and Nichols, 2012). It has been found that the more attributes adjacent surfaces share, the greater the likelihood of their being grouped together. Ullman (1979) further suggested that the relative affinities of pairs of surfaces could determine solutions to the motion correspondence problem (when more than one motion path is possible). However, it has remained unknown whether the effects of affinity on solutions to the correspondence problem are due to its effects on single-surface apparent motion strength or to pre-selection biases; i.e., top-down effects of perceptual grouping favoring the perception of motion in one direction over other competing directions. The current study confirmed that motion within a surface is affected by its affinity with adjacent surfaces. It also showed that affinity has a small but significant effect on motion strength when surfaces are presented in a single-surface apparent motion configuration, evidence for top-down effects in which motion strength can be affected by affinity. In the motion correspondence problem, the finding that affinity affects the perceived motion direction under competition is consistent with the solution to the correspondence problem being determined by the relative, affinity-determined strengths of competing motion signals. When affinity is strong, however, the effect appears to be attributable to pre-selection identity biases.
To conclude, in the motion correspondence problem, the stronger motion perceived between two similar surfaces is due to pre-selection biases resulting from the perceptual grouping of the surfaces with the greatest affinity; i.e., top-down effects favoring the perception of motion in one direction over other competing directions. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
2

Low and Mid-level Shape Priors for Image Segmentation

Levinshtein, Alex 15 February 2011 (has links)
Perceptual grouping is essential to manage the complexity of real world scenes. We explore bottom-up grouping at three different levels. Starting from low-level grouping, we propose a novel method for oversegmenting an image into compact superpixels, reducing the complexity of many high-level tasks. Unlike most low-level segmentation techniques, our geometric flow formulation enables us to impose additional compactness constraints, resulting in a fast method with minimal undersegmentation. Our subsequent work utilizes compact superpixels to detect two important mid-level shape regularities, closure and symmetry. Unlike the majority of closure detection approaches, we transform the closure detection problem into one of finding a subset of superpixels whose collective boundary has strong edge support in the image. Building on superpixels, we define a closure cost which is a ratio of a novel learned boundary gap measure to area, and show how it can be globally minimized to recover a small set of promising shape hypotheses. In our final contribution, motivated by the success of shape skeletons, we recover and group symmetric parts without assuming prior figure-ground segmentation. Further exploiting superpixel compactness, superpixels are this time used as an approximation to deformable maximal discs that comprise a medial axis. A learned measure of affinity between neighboring superpixels and between symmetric parts enables the purely bottom-up recovery of a skeleton-like structure, facilitating indexing and generic object recognition in complex real images.
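The closure cost described in the abstract, a ratio of a boundary-gap measure to area minimized over subsets of superpixels, can be illustrated on a toy adjacency graph. The graph, the gap values, and the `bg` background node below are illustrative assumptions, not data or code from the thesis:

```python
def closure_cost(subset, areas, gap):
    """Closure cost of a superpixel subset: the gap measure summed along
    the subset's exterior boundary, divided by the enclosed area.
    gap[(a, b)] is the boundary-gap measure on the shared boundary of
    regions a and b (low gap = strong edge support); boundaries internal
    to the subset do not contribute."""
    subset = set(subset)
    boundary_gap = sum(g for (a, b), g in gap.items()
                       if (a in subset) != (b in subset))
    area = sum(areas[s] for s in subset)
    return boundary_gap / area

# Toy graph: superpixels 0 and 1 have strong edges against the
# background (low gap toward 'bg'), superpixel 2 does not.
areas = {0: 10, 1: 10, 2: 10}
gap = {(0, 'bg'): 0.1, (1, 'bg'): 0.2, (2, 'bg'): 0.8,
       (0, 1): 0.9, (1, 2): 0.1}
```

On this toy graph the well-supported pair `{0, 1}` scores a lower closure cost than the full set `{0, 1, 2}`, which is the sense in which globally minimizing the ratio recovers a small set of promising closed-shape hypotheses.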
4

Road Extraction From Satellite Images By Self-supervised Classification And Perceptual Grouping

Sahin, Eda 01 January 2013 (has links) (PDF)
Road network extraction from high resolution satellite imagery is the most frequently utilized technique for updating and correcting geographic information system (GIS) databases, registering multi-temporal images for change detection and automatically aligning spatial datasets. This advanced method is widely employed due to improvements in satellite technology, such as the development of new sensors for high resolution imagery. To avoid the cost of human interaction, various automatic and semi-automatic road extraction methods have been developed and proposed in the literature. The aim of this study is to develop a fully automated method which can extract road networks by using the spectral and structural features of the roads. In order to achieve this goal we set various objectives and work them out one by one. The first objective is to obtain reliable road seeds, since they are crucial for determining road regions correctly in the classification step. The second objective is finding the most convenient features and classification method for road extraction. The third objective is to locate the road centerlines, which define the road topology. A number of algorithms are developed and tested throughout the thesis to achieve these objectives, and the advantages of the proposed ones are explained. The final version of the proposed algorithm is tested on three-band (RGB) satellite images and the results are compared with other studies in the literature to illustrate the benefits of the proposed algorithm.
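The seed-then-classify step the abstract outlines can be sketched in a deliberately simplified form: seed pixels define per-band road statistics, and pixels within a z-score threshold of those statistics are labeled road. The function name, the diagonal (per-band) statistical model, and the threshold `k` are illustrative assumptions, not the thesis's actual classifier:

```python
import numpy as np

def classify_from_seeds(image, seed_mask, k=2.5):
    """Label pixels whose spectra fall within k standard deviations
    (per band) of the road-seed statistics.
    image: H x W x B array; seed_mask: H x W boolean road seeds."""
    seeds = image[seed_mask]                   # N x B seed spectra
    mu = seeds.mean(axis=0)
    sigma = seeds.std(axis=0) + 1e-9           # avoid division by zero
    z = np.abs((image - mu) / sigma)           # per-band z-scores
    return (z < k).all(axis=-1)                # road where every band fits
```

A real pipeline would follow this spectral step with the structural checks the abstract mentions (elongation, centerline topology) to reject non-road regions that happen to share road spectra.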
5

Model Based Building Extraction From High Resolution Aerial Images

Bilen, Burak 01 June 2004 (has links) (PDF)
A method for detecting buildings from high resolution aerial images is proposed. The aim is to extract the buildings from high resolution aerial images using the Hough transform and model-based perceptual grouping techniques. The edges detected from the image are the basic structures used in the building detection procedure. The method proposed in this thesis makes use of basic image processing techniques. Noise removal and image sharpening techniques are used to enhance the input image. Then, the edges are extracted from the image using the Canny edge detection algorithm. The edges obtained are composed of discrete points. These discrete points are vectorized in order to generate straight line segments. This is performed with the use of the Hough transform and perceptual grouping techniques. The straight line segments become the basic structures of the buildings. Finally, the straight line segments are grouped based on predefined model(s) using the model-based perceptual grouping technique. The groups of straight line segments are the candidates for 2D structures that may be buildings, shadows or other man-made objects. The proposed method was implemented with a program written in the C programming language. The approach was applied to several study areas. The results achieved are encouraging. The number of extracted buildings increases if the orientation of the buildings is nearly the same and the Canny edge detector detects most of the building edges. If the buildings have different orientations, some of the buildings may not be extracted with the proposed method. In addition to building orientation, the building size and the parameters used in the Hough transform and perceptual grouping stages also affect the success of the proposed method.
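The Canny-then-Hough vectorization step described above can be sketched with a minimal voting accumulator. This toy version, written in plain NumPy rather than an image-processing library and returning only the single strongest line, illustrates the voting idea only; the resolution parameters are illustrative:

```python
import numpy as np

def hough_peak(points, theta_step_deg=1.0, rho_res=1.0):
    """Vote edge points into a (rho, theta) accumulator and return the
    strongest line, parameterized as rho = x*cos(theta) + y*sin(theta)."""
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_step_deg))
    pts = np.asarray(points, dtype=float)
    # rho of every point for every candidate angle: shape (N, T)
    rhos = pts[:, 0:1] * np.cos(thetas) + pts[:, 1:2] * np.sin(thetas)
    rho_max = np.abs(rhos).max() + rho_res
    n_bins = int(2 * rho_max / rho_res) + 1
    acc = np.zeros((n_bins, len(thetas)), dtype=int)
    idx = ((rhos + rho_max) / rho_res).astype(int)
    for t in range(len(thetas)):               # one vote per point per angle
        np.add.at(acc[:, t], idx[:, t], 1)
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r * rho_res - rho_max, thetas[t]
```

A full implementation would extract many peaks, split each infinite line into finite segments along the supporting edge pixels, and hand those segments to the perceptual grouping stage.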
6

Developing An Integrated System For Semi-automated Segmentation Of Remotely Sensed Imagery

Kok, Emre Hamit 01 May 2005 (has links) (PDF)
Classification of agricultural fields using remote sensing images is one of the most popular methods used for crop mapping. Most recent classification techniques are based on a per-field approach that assigns a crop label to each field. Commonly, spatial vector data is used for the boundaries of the fields, and the classification is applied within them. However, crop variation within fields is a very common problem. In this case, the existing field boundaries may be insufficient for performing field-based classification, and therefore image segmentation needs to be employed to detect the homogeneous segments within the fields. This study proposed a field-based approach to segment the crop fields in an image within an integrated Geographic Information System (GIS) and Remote Sensing environment. In this method, each field is processed separately and the segments within each field are detected. First, edge detection is applied to the images, and the detected edges are vectorized to generate straight line segments. Next, these line segments are correlated with the existing field boundaries using perceptual grouping techniques to form closed regions in the image. The closed regions represent the segments, each of which contains a distinct crop type. To implement the proposed methodology, a software tool was developed. The implementation was carried out using the 10 meter spatial resolution SPOT 5 and the 20 meter spatial resolution SPOT 4 satellite images covering a part of the Karacabey Plain, Turkey. Evaluations of the obtained results are presented using different band combinations of the images.
7

REM: Relational Entropy-Based Measure of Saliency

Duncan, Kester 07 May 2010 (has links)
The incredible ability of human beings to quickly detect the prominent or salient regions in an image is often taken for granted. Reproducing this intelligent ability in computer vision systems remains quite a challenge. This ability is of paramount importance to perception and image understanding since it accelerates the image analysis process, thereby allowing higher vision processes such as recognition to have a focus of attention. In addition, human eye fixation points occurring during the early stages of visual processing often correspond to the loci of salient image regions. These regions provide us with assistance in determining the interesting parts of an image and they also lend support to our ability to discriminate between different objects in a scene. Salient regions attract our immediate attention without requiring an exhaustive scan of a scene. In essence, saliency can be defined as the quality of an image region that enables it to stand out in relation to its neighbors. Saliency is often approached in one of two ways. The bottom-up saliency approach refers to mechanisms which are image-driven and independent of the knowledge in an image, whereas the top-down saliency approach refers to mechanisms which are task-oriented and make use of prior knowledge about a scene. In this thesis, we present a bottom-up measure of saliency based on the relationships exhibited among image features. The perceived structure in an image is determined more by the relationships among features than by the individual feature attributes. From this standpoint, we aim to capture the organization within an image by employing relational distributions derived from distance and gradient direction relationships exhibited between image primitives. The Rényi entropy of the relational distribution tends to be lower if saliency is exhibited for some image region in the local pixel neighborhood over which the distribution is defined.
This notion forms the foundation of our measure. Correspondingly, results of our measure are presented in the form of a saliency map, highlighting salient image regions. We show results on a variety of real images from various datasets. We evaluate the performance of our measure in relation to a dominant saliency model and obtain comparable results. We also investigate the biological plausibility of our method by comparing our results to those captured by human fixation maps. In an effort to derive meaningful information from an image, we investigate the significance of scale relative to our saliency measure, and attempt to determine optimal scales for image analysis. In addition to this, we extend a perceptual grouping framework by using our measure as an optimization criterion for determining the organizational strength of edge groupings. As a result, the use of ground truth images is circumvented.
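The core quantity in the approach, the Rényi entropy of a relational distribution over pairwise distance and direction relations, can be sketched for point primitives. The binning choices, the order alpha = 2, and the use of bare 2-D points instead of edge pixels are simplifying assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    """Rényi entropy of order alpha (alpha != 1) of a discrete distribution."""
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def relational_distribution(points, d_bins=8, a_bins=8):
    """Joint histogram of pairwise distance and direction relations
    between primitives (here, 2-D points), normalized to sum to 1."""
    pts = np.asarray(points, dtype=float)
    diff = pts[None, :, :] - pts[:, None, :]
    i, j = np.triu_indices(len(pts), k=1)      # each unordered pair once
    d = np.hypot(diff[..., 0], diff[..., 1])[i, j]
    a = np.arctan2(diff[..., 1], diff[..., 0])[i, j] % np.pi
    h, _, _ = np.histogram2d(d, a, bins=[d_bins, a_bins])
    return (h / h.sum()).ravel()
```

Structured configurations concentrate the relational distribution in a few cells, lowering the entropy; this is the sense in which low entropy flags a salient, organized region.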
8

Perceptual Grouping Strategies in Visual Search Tasks

Maria R Kon (12431190) 19 April 2022 (has links)
A fundamental characteristic of human visual perception is the ability to group together disparate elements in a scene and treat them as a single unit. The mechanisms by which humans create such groupings remain unknown, but grouping seems to play an important role in a wide variety of visual phenomena. I propose a neural model of grouping; through top-down control of its circuits, the model implements a grouping strategy that involves both a connection strategy (which elements to connect) and a selection strategy (spatiotemporal properties of a selection signal that segments target elements to facilitate identification). With computer simulations I explain how the circuits work and show how they can account for a wide variety of Gestalt principles of perceptual grouping. Additionally, I extend the model so that it can simulate visual search tasks. I show that when the model uses particular grouping strategies, simulated results closely match empirical results from replication experiments of three visual search tasks. In these experiments, perceptual grouping was induced by proximity and shape similarity (Palmer & Beck, 2007), by the spacing of irrelevant distractors and size similarity (Vickery, 2008), or by the proximity of dots and the proximity and shape similarity of line figures (Trick & Enns, 1997). Thus, I show that the model accounts for a variety of grouping effects and indicates which grouping strategies were likely used to promote performance in three visual search tasks.
9

Hierarchical Ensemble Representations: Forming Ensemble Representations across Multiple Spatial Scales

Pandey, Sandarsh 01 September 2020 (has links)
An ensemble representation refers to a statistical summary representation of a group of similar objects. Recent work has shown that we can form multiple ensemble representations: ensemble representations for a single feature dimension across multiple stimulus groups, ensemble representations for multiple feature dimensions in the same stimulus group, and ensemble representations across multiple sensory domains. In our study, we use hierarchical stimuli based on the Navon figures (Navon, 1977) to study properties of ensemble representations across multiple spatial scales. In Experiments 1 and 3, we study properties of ensemble representations for the orientation and size feature dimensions, respectively. In Experiment 2, we study properties of individual representations for the orientation feature dimension. Results indicate that it is possible to form ensemble representations across multiple spatial scales. Experiment 1 shows that global ensemble representations may be extracted automatically (without intent) whereas local ensemble representations are only extracted in response to task demands (with intent). Finally, in both Experiments 1 and 3, participants were more accurate at reporting the global ensemble representation than the local ensemble representation, whereas in Experiment 2 performance did not differ across the levels. These results point towards global precedence in the formation of ensemble representations.
10

THE EFFECTS OF ALTERNATE-LINE SHADING ON VISUAL SEARCH IN GRID-BASED GRAPHIC DESIGNS

Lee, Michael P 01 January 2014 (has links)
Objective: The goal of this research was to determine whether alternate-line shading (zebra-striping) of grid-based displays affects the strategy (i.e., “visual flow”) and efficiency of serial search. Background: Grids, matrices, and tables are commonly used to organize information. A number of design techniques and psychological principles are relevant to how viewers’ eyes can be guided through such visual works. One common technique for grids, “zebra-striping,” is intended to guide eyes through the design, or “create visual flow,” by alternating shaded and unshaded rows or columns. Method: Thirteen participants completed a visual serial search task. The target was embedded in a grid that had 1) no shading, 2) shading of alternating rows, or 3) shading of alternating columns. Response times and error rates were analyzed to determine search strategy and efficiency. Results: Our analysis found evidence supporting a weak effect of shading on search strategy. The direction of shading had an impact on which parts of the grid were responded to most rapidly. However, a left-to-right reading bias and a middle-to-outside edge effect were also found. Overall performance was reliably better when the grid had no shading. Exploratory analyses suggest individual differences may be a factor. Conclusion: Shading seems to create visual flow that is relatively weak compared to search strategies related to the edge effect or left-to-right reading biases. In general, however, the presence of any type of shading reduced search performance. Application: Designers creating a grid-based display should not automatically assume that shading will change viewers’ search strategies. Furthermore, although strategic shading may be useful for tasks other than that studied here, our current data indicate that shading can actually be detrimental to visual search for complex (i.e., conjunctive) targets.
