11

Practical Issues in GPRAM Development

Li, Yin 01 January 2014 (has links)
This thesis addresses two practical issues in GPRAM system design. The first concerns coding: the sum-product decoding algorithm for LDPC codes is refined to suit the GPRAM hardware implementation. The noise in a GPRAM system differs from that in a telecommunication channel, so it must be handled explicitly in the design; a noise look-up table with quantized noise values was created for the FPGA. The second part converts the clean images of a video stream into images resembling the coarse images of human vision. GPRAM is an animal-like robot that needs coarse images more than fine ones, in order to study how GPRAM processes such images to produce percepts as clear as those we experience. Three steps, a point spread function, injected Poisson noise, and simulated eye fixation movements, mimic the coarse images available at the retinal photoreceptor level, i.e., before any brain processing.
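A minimal sketch of that three-step degradation pipeline follows. This is an illustration, not the thesis code: the Gaussian PSF shape, the mean photon count, and the fixational jitter amplitude are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def coarse_retinal_image(img, psf_sigma=2.0, photons=50.0, jitter_px=1.5, rng=None):
    """Degrade a clean grayscale image in [0, 1] into a coarse 'retinal' image.

    Three steps, mirroring the abstract: (1) blur by a point spread function
    (assumed Gaussian here), (2) Poisson photon noise at an assumed mean
    photon count, (3) a small random fixational eye movement modeled as a
    global sub-pixel image shift.
    """
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(img, sigma=psf_sigma)       # step 1: PSF blur
    noisy = rng.poisson(blurred * photons) / photons      # step 2: Poisson noise
    dy, dx = rng.normal(scale=jitter_px, size=2)          # step 3: fixation jitter
    return np.clip(shift(noisy, (dy, dx), order=1, mode="nearest"), 0.0, 1.0)
```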
12

On the effective number of tracked trajectories in normal human vision.

Tripathy, Srimant P., Narasimhan, Sathyasri, Barrett, Brendan T. January 2007 (has links)
Z. W. Pylyshyn and R. W. Storm (1988) have shown that human observers can accurately track four to five items at a time. However, when a threshold paradigm is used, observers are unable to track more than a single trajectory accurately (S. P. Tripathy & B. T. Barrett, 2004). This difference between the two studies is examined systematically using substantially suprathreshold stimuli. The stimuli consisted of one (Experiment 1) or more (Experiments 2 and 3) bilinear target trajectories embedded among several linear distractor trajectories. The target trajectories deviated clockwise (CW) or counterclockwise (CCW) (by 19°, 38°, or 76° in Experiments 1 and 2 and by 19°, 38°, or 57° in Experiment 3), and observers reported the direction of deviation. From the percentage of correct responses, the "effective" number of tracked trajectories was estimated for each experimental condition. The total number of trajectories in the stimulus and the number of deviating trajectories had only a small effect on the effective number of tracked trajectories; the effective number tracked was primarily influenced by the angle of deviation of the targets and ranged from four to five trajectories for a ±76° deviation to only one to two trajectories for a ±19° deviation, regardless of whether the different magnitudes of deviation were blocked (Experiment 2) or interleaved (Experiment 3). Simple hypotheses based on "averaging of orientations," "preallocation of resources," or pop-out, crowding, or masking of the target trajectories are unlikely to explain the relationship between the effective number tracked and the angle of deviation of the target trajectories. This study reconciles the difference between the studies cited above in terms of the number of trajectories that can be tracked at a time.
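An "effective number tracked" can be recovered from percent correct with a simple high-threshold model: assume the observer perfectly tracks E of the T trajectories and guesses the deviation direction on the rest. The sketch below inverts that model; the all-or-none assumption and the 50% guess rate are modeling choices of this illustration, not necessarily the exact estimation procedure of the paper.

```python
def effective_number_tracked(p_correct, n_trajectories):
    """Invert a high-threshold model of trajectory tracking.

    Assumes the observer tracks E of T trajectories perfectly and guesses
    the deviation direction (CW vs. CCW, p = 0.5) on untracked ones, so
    P(correct) = E/T + (1 - E/T) * 0.5.  Solving for E gives
    E = T * (2 * P(correct) - 1), clamped to the valid range [0, T].
    """
    e = n_trajectories * (2.0 * p_correct - 1.0)
    return max(0.0, min(float(n_trajectories), e))

# Example: 80% correct with 10 trajectories -> about 6 effectively tracked.
print(effective_number_tracked(0.80, 10))  # 6.0
```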
13

The Canny edge detector revisited

McIlhagga, William H. 19 October 2010 (has links)
Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a gaussian. However, Canny's work suffers from two problems. First, his derivation of the localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can, however, be limited by considering the effect of the neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112-133, 1992). In addition, if we also consider detecting blurred (or non-sharp) gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.
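For reference, the ISEF of Shen and Castan is f(x) = (p/2)·e^(-p|x|), so the detector the abstract describes is its derivative, optionally convolved with a gaussian for blurred edges. A small discrete sketch under that reading (the filter half-width and smoothing scale are illustrative choices, and truncation stands in for the neighbouring-edge width limit):

```python
import numpy as np

def isef_derivative_kernel(p=0.5, half_width=20):
    """Derivative of the ISEF f(x) = (p/2) * exp(-p*|x|) on a discrete grid.

    Per the abstract, this is the optimal step edge detector under the
    (corrected) Canny criteria; half_width truncates the infinite filter.
    """
    x = np.arange(-half_width, half_width + 1, dtype=float)
    return -(p**2 / 2.0) * np.sign(x) * np.exp(-p * np.abs(x))

def blurred_edge_kernel(p=0.5, sigma=2.0, half_width=20):
    """Blurred-edge detector: the ISEF derivative convolved with a gaussian."""
    x = np.arange(-half_width, half_width + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()
    return np.convolve(isef_derivative_kernel(p, half_width), g, mode="same")
```

Varying `sigma` across a set of kernels gives the multi-scale bank that the abstract's scale selection procedure would choose among.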
14

Visual perception of gradients. A psychophysical study of the mechanisms of detection and discrimination of achromatic and chromatic gradients.

Garcia-Suarez, Luis January 2009 (has links)
No description available.
15

Spatial and Temporal Interactions between Shape Representations in Human Vision

Slugocki, Michael January 2019 (has links)
The human visual system has the remarkable capacity to transform spatio-temporal patterns of light into structured units of perception. Much research has focused on how the visual system integrates information around the perimeter of closed contours to form the perception of shape. This dissertation extends previous work by investigating how the perception of curvature along closed-contour shapes is affected by the presence of additional shapes that appear close to the target shape in space and/or time. Chapter 2 examined the ability of shape mechanisms to represent low-frequency curvature in the presence of a higher-frequency component along contours in multi-shape displays. We found that adding high-amplitude, high-frequency curvature along a contour path can modulate the strength of interaction observed between shapes, and thus attenuates the contribution of low-frequency components to interactions between neighbouring contours. Chapter 3 examined which curvature features are important in modulating phase-dependent interactions between shapes. Results revealed that phase-dependent masking does not depend on curvature frequency, but is related to sensitivity to phase shifts in isolated contours and is affected by both positive and negative curvature extrema. Computational simulations aimed at modelling the population responses evoked in intermediate shape processing areas (i.e., V4) suggest that sensitivity to shifts in the phase of shapes is not well captured by such a population code, and therefore alternative explanations are required. Chapter 4 examined how sensitivity to curvature deformations along the contour of a closed shape changes as a function of polar angle, angular frequency, and spatial uncertainty. Results show that human observers are, to a first approximation, uniformly sensitive to curvature deformations across all polar angles tested, and this result holds despite changes in angular frequency and spatial uncertainty. Chapter 5 examined whether the strength of spatial masking between shapes is affected by the presentation of a temporal mask. Our results demonstrate that a temporal mask affected spatial masking only when it preceded the target-mask stimulus by 130-180 ms. Furthermore, the effects of a temporal mask on spatial masking are approximately additive, suggesting that separate components contribute to spatial and temporal interactions between shapes.
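Closed contours with sinusoidal curvature modulation of this kind are commonly formalized as radial frequency (RF) patterns, and the abstract's curvature frequency, amplitude, and phase map naturally onto the parameters of that formulation. A sketch under that assumption (the parameter values are illustrative, not taken from the dissertation):

```python
import numpy as np

def rf_contour(components, r0=1.0, n=360):
    """Closed contour with sinusoidal curvature modulation:
    r(theta) = r0 * (1 + sum_i A_i * sin(w_i * theta + phi_i)).

    Each component is (amplitude, frequency, phase).  A single component is
    a classic radial frequency pattern; a low- plus high-frequency pair
    mirrors the compound contours described for the multi-shape displays.
    """
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = r0 * (1.0 + sum(a * np.sin(w * theta + p) for a, w, p in components))
    return r * np.cos(theta), r * np.sin(theta)

# Low-frequency target shape with an added high-frequency component.
x, y = rf_contour([(0.10, 3, 0.0), (0.03, 12, 0.0)])
```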
17

INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

Zhang, Di January 2006 (has links)
Measurement of visual quality is crucial for various image and video processing applications.

The goal of objective image quality assessment is to introduce a computational quality metric that can predict image or video quality. Many methods have been proposed in past decades. Traditionally, measurements convert the spatial data into some other feature domain, such as the Fourier domain, and compute the similarity, such as the mean square or Minkowski distance, between the test data and the reference or perfect data; however, only limited success has been achieved, and none of the complicated metrics shows any great advantage over other existing metrics.

The common idea shared among many proposed objective quality metrics is that human visual error sensitivities vary across spatial and temporal frequency and directional channels. In this thesis, image quality assessment is approached through a novel framework that computes the information lost in each channel, rather than the similarities used in previous methods. Based on natural scene statistics and several image models, an information theoretic framework is designed to compute the perceptual information contained in images and to evaluate image quality in the form of entropy.

The thesis is organized as follows. Chapter I gives a general introduction to previous work in this research area and a brief description of the human visual system. In Chapter II, statistical models for natural scenes are reviewed. Chapter III proposes the core ideas for computing the perceptual information contained in images. In Chapter IV, information theoretic criteria for image quality assessment are defined. Chapter V presents the simulation results in detail. In the last chapter, future directions and improvements of this research are discussed.
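In the same spirit, though not the thesis's exact formulation, information-theoretic quality metrics in the natural-scene-statistics tradition (e.g., VIF-like approaches) score quality by how much information about the reference survives in the test image under an additive-gaussian channel model. A toy sketch of that idea, with an assumed per-block gaussian signal/noise model and an assumed neural noise floor:

```python
import numpy as np

def channel_information(signal_var, noise_var):
    """Bits preserved by an additive-gaussian channel: 0.5 * log2(1 + S/N)."""
    return 0.5 * np.log2(1.0 + signal_var / noise_var)

def toy_information_quality(ref, dist, noise_var=0.01, block=8):
    """Toy information-preservation score in [0, 1]-ish range.

    Local variance of the reference stands in for natural-scene signal
    strength; the reference-minus-distorted residual acts as distortion
    noise.  Both channels share an assumed noise floor 'noise_var'.
    A score near 1.0 means little perceptual information was lost.
    """
    info_ref, info_dist = 0.0, 0.0
    h, w = ref.shape
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            s = ref[i:i+block, j:j+block].var()
            d = (ref[i:i+block, j:j+block] - dist[i:i+block, j:j+block]).var()
            info_ref += channel_information(s, noise_var)
            info_dist += channel_information(s, noise_var + d)
    return info_dist / max(info_ref, 1e-12)
```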
19

Improving Perception From Electronic Visual Prostheses

Boyle, Justin Robert January 2005 (has links)
This thesis explores methods for enhancing the digital, image-like sensations that might be experienced by blind users of electronic visual prostheses. Visual prostheses, otherwise referred to as artificial vision systems or bionic eyes, may operate at ultra-low image quality and information levels, as opposed to more common electronic displays such as televisions, for which our expectations of image quality are much higher. The scope of the research is limited to enhancement by digital image processing: that is, by manipulating the content of images presented to the user. The work was undertaken to improve the effectiveness of visual prostheses in representing the visible world. Presently, visual prosthesis development is limited to animal models in Australia and prototype human trials overseas; consequently this thesis deals with simulated vision experiments using normally sighted viewers. The experiments involve an original application of existing image processing techniques to the low-quality vision anticipated from visual prostheses. The first outcome of this work is a set of recommendations for effective image processing methods for enhancing viewer perception with visual prosthesis prototypes. Although image quality is low, some objects can still be recognised, and it is useful for a viewer to be presented with several variations of the image representing different processing methods. Scene understanding can be improved by incorporating region-of-interest techniques that identify salient areas within images and allow a user to zoom into that area of the image, and there is some benefit in tailoring the image processing to the type of scene. The second outcome is a metric for the basic information required to interpret a visual scene at low image quality. The amount of information content within an image was quantified using inherent attributes of the image and shown to be positively correlated with the ability of the image to be recognised at low quality.
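Simulated prosthetic vision of this kind is often rendered by block-averaging the image onto a coarse electrode grid and drawing one gaussian "phosphene" per electrode. A sketch of that standard rendering, where the grid size and phosphene width are assumed values rather than parameters from the thesis:

```python
import numpy as np

def phosphene_render(img, grid=(25, 25), sigma=0.4):
    """Render a low-quality 'prosthetic' view of a grayscale image in [0, 1].

    The image is block-averaged down to an electrode grid, then each sample
    is drawn back as a gaussian blob (a simulated phosphene).
    """
    h, w = img.shape
    gy, gx = grid
    # Block-average brightness onto the electrode grid (crop any remainder).
    samples = img[:h - h % gy, :w - w % gx].reshape(
        gy, h // gy, gx, w // gx).mean(axis=(1, 3))
    # Draw one gaussian phosphene per electrode.
    yy, xx = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w), dtype=float)
    cy, cx = h / gy, w / gx
    for i in range(gy):
        for j in range(gx):
            g = np.exp(-(((yy - (i + 0.5) * cy) / (sigma * cy)) ** 2
                         + ((xx - (j + 0.5) * cx) / (sigma * cx)) ** 2) / 2.0)
            out += samples[i, j] * g
    return np.clip(out, 0.0, 1.0)
```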
20

Source-Space Analyses in MEG/EEG and Applications to Explore Spatio-temporal Neural Dynamics in Human Vision

Yang, Ying 01 February 2017 (has links)
Human cognition involves dynamic neural activities in distributed brain areas. For studying such neural mechanisms, magnetoencephalography (MEG) and electroencephalography (EEG) are two important techniques, as they non-invasively detect neural activities with high temporal resolution. Recordings by MEG/EEG sensors can be approximated as a linear transformation of the neural activities in the brain space (i.e., the source space). However, we only have a limited number of sensors compared with the many possible locations in the brain space; it is therefore challenging to estimate the source neural activities from the sensor recordings, since we must solve the underdetermined inverse problem of the linear transformation. Moreover, estimating source activities is typically an intermediate step, whereas the ultimate goal is to understand what information is coded and how information flows in the brain. This requires further statistical analysis of source activities. For example, to study what information is coded in different brain regions and temporal stages, we often regress neural activities on external covariates; to study dynamic interactions between brain regions, we often quantify the statistical dependence among the activities in those regions through "connectivity" analysis. Traditionally, these analyses are done in two steps: Step 1, solve the linear problem under some regularization or prior assumptions (e.g., each source location being independent); Step 2, do the regression or connectivity analysis. However, biases induced by the regularization in Step 1 cannot be corrected in Step 2 and may therefore yield inaccurate regression or connectivity results. To tackle this issue, we present novel one-step methods for regression or connectivity analysis in the source space, where we explicitly model the dependence of source activities on the external covariates (in the regression analysis) or the cross-region dependence (in the connectivity analysis), jointly with the source-to-sensor linear transformation. In simulations, we observed better performance from our models than from commonly used two-step approaches when our model assumptions are reasonably satisfied. Besides the methodological contribution, we also applied our methods in a real MEG/EEG experiment studying the spatio-temporal neural dynamics in the visual cortex. The human visual cortex is hypothesized to have a hierarchical organization, where low-level regions extract low-level features such as local edges, and high-level regions extract semantic features such as object categories; however, details of the spatio-temporal dynamics are less well understood. Here, using both the two-step and our one-step regression models in the source space, we correlated neural responses to naturalistic scene images with the low-level and high-level features extracted from a well-trained convolutional neural network. Additionally, we studied the interaction between regions along the hierarchy using the two-step and our one-step connectivity models. The results from the two-step and one-step methods were generally consistent; however, the one-step methods demonstrated some intriguing advantages in the regression analysis, and slightly different patterns in the connectivity analysis.
In the consistent results, we not only observed an early-to-late shift from low-level to high-level features, which supports feedforward information flow along the hierarchy, but also some novel evidence indicating non-feedforward information flow (e.g., top-down feedback). These results can help us better understand the neural computation in the visual cortex. Finally, we compared the empirical sensitivity of MEG and EEG in this experiment in detecting dependence between neural responses and visual features. Our results show that the less costly EEG achieved sensitivity comparable to MEG when the number of observations was about twice that in MEG. These results can help researchers choose empirically between MEG and EEG when planning experiments with limited budgets.
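The two-step baseline the abstract criticizes can be made concrete with the linear forward model Y = A·X + noise: Step 1 computes a regularized (here minimum-norm) source estimate, and Step 2 regresses it on the covariates; the shrinkage bias introduced in Step 1 is exactly what the one-step joint model avoids. The sketch below shows only that two-step baseline, with illustrative dimensions and regularization, not the thesis's estimator:

```python
import numpy as np

def two_step_source_regression(Y, A, Z, lam=1.0):
    """Two-step analysis of sensor data Y (n_sensors x n_trials).

    Forward model: Y = A @ X + noise, with A (n_sensors x n_sources) known
    from head modeling and X the unknown source activity.
    Step 1: minimum-norm estimate  X_hat = A.T @ inv(A @ A.T + lam*I) @ Y.
    Step 2: regress each source's activity on trial covariates Z
            (n_trials x n_covariates) by least squares.
    The regularization in Step 1 shrinks X_hat, which is the bias the
    one-step joint model described in the abstract is designed to avoid.
    """
    n_sensors = A.shape[0]
    X_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(n_sensors), Y)
    B, *_ = np.linalg.lstsq(Z, X_hat.T, rcond=None)  # covariate effects per source
    return X_hat, B.T                                # B.T: n_sources x n_covariates

# Tiny synthetic example: 30 sensors, 200 sources, 100 trials, 2 covariates.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 200))
Z = rng.normal(size=(100, 2))
X = rng.normal(size=(200, 2)) @ Z.T                  # sources driven by covariates
Y = A @ X + 0.1 * rng.normal(size=(30, 100))
X_hat, B = two_step_source_regression(Y, A, Z)
```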
