11

Practical Issues in GPRAM Development

Li, Yin 01 January 2014 (has links)
This thesis addresses two practical issues in GPRAM system design. The first concerns coding: the sum-product decoding algorithm for LDPC codes is refined to suit the GPRAM hardware implementation. Communication channels are noisy, and the noise in a telecom system differs from that in a GPRAM system, so noise must be handled carefully in the GPRAM design. A quantized noise look-up table was created for the FPGA implementation. The second part of the thesis converts clean images in a video stream into images resembling the coarse images of human vision. GPRAM is an animal-like robot that relies on coarse images more than fine ones, the aim being to understand how GPRAM processes such images to produce percepts as clear as the ones we experience. We use three steps, applying a point spread function, injecting Poisson noise, and introducing eye fixation movements, to mimic the coarse images seen by the eyes alone at the retinal photoreceptor level, i.e., without any brain processing.
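As a rough illustration of that three-step degradation, the Python sketch below applies a Gaussian point spread function, Poisson photon noise, and a small random fixational shift to a single video frame. The function name and all parameter values are illustrative assumptions, not the thesis' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def coarse_retinal_image(frame, psf_sigma=2.0, peak_photons=50, jitter_px=1.5, rng=None):
    """frame: 2-D float array in [0, 1] taken from the video stream."""
    rng = rng or np.random.default_rng(0)
    # 1. Point spread function: approximate the optics with a Gaussian blur.
    blurred = gaussian_filter(frame, sigma=psf_sigma)
    # 2. Poisson noise: scale to an assumed photon count, sample, rescale.
    noisy = rng.poisson(blurred * peak_photons) / peak_photons
    # 3. Eye fixation movements: a small random translation per frame.
    dy, dx = rng.normal(scale=jitter_px, size=2)
    return shift(noisy, (dy, dx), mode='nearest')
```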
12

Visual perception of gradients. A psychophysical study of the mechanisms of detection and discrimination of achromatic and chromatic gradients.

Garcia-Suarez, Luis January 2009 (has links)
No description available.
13

Spatial and Temporal Interactions between Shape Representations in Human Vision

Slugocki, Michael January 2019 (has links)
The human visual system has the remarkable capacity to transform spatio-temporal patterns of light into structured units of perception. Much research has focused on how the visual system integrates information around the perimeter of closed contours to form the perception of shape. This dissertation extends previous work by investigating how the perception of curvature along closed-contour shapes is affected by the presence of additional shapes that appear close to the target shape in space and/or time. Chapter 2 examined the ability of shape mechanisms to represent low-frequency curvature in the presence of a higher-frequency component along contours in multi-shape displays. We found that adding high-amplitude, high-frequency curvature along a contour path can modulate the strength of interaction observed between shapes, and thus attenuate the contribution of low-frequency components to interactions between neighbouring contours. Chapter 3 examined which curvature features are important in modulating phase-dependent interactions between shapes. Results revealed that phase-dependent masking does not depend on curvature frequency, but is related to sensitivity to phase shifts in isolated contours, and is affected by both positive and negative curvature extrema. Computational simulations aimed at modelling the population responses evoked in intermediate shape-processing areas (i.e., V4) suggest that sensitivity to shifts in the phase of shapes is not well captured by such a population code, and therefore alternative explanations are required. Chapter 4 examined how sensitivity to curvature deformations along the contour of a closed shape changes as a function of polar angle, angular frequency, and spatial uncertainty. Results show that human observers are, to a first approximation, uniformly sensitive to curvature deformations across all polar angles tested, and this result holds despite changes in angular frequency and spatial uncertainty. Chapter 5 examined whether the strength of spatial masking between shapes is affected by the presentation of a temporal mask. Our results demonstrate that a temporal mask affected spatial masking only when it preceded the target-mask stimulus by 130-180 ms. Furthermore, the effects of a temporal mask on spatial masking are approximately additive, suggesting that separate components contribute to spatial and temporal interactions between shapes. / Thesis / Doctor of Philosophy (PhD)
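For readers unfamiliar with such stimuli, the sketch below generates a closed contour whose radius is modulated by a low-frequency and a high-frequency curvature component (a compound radial-frequency pattern). The specific frequencies, amplitudes, and phases are illustrative assumptions, not the dissertation's stimulus values.

```python
import numpy as np

def rf_contour(r0=1.0, low=(3, 0.05, 0.0), high=(12, 0.15, 0.0), n=1024):
    """Each component is (angular frequency, amplitude, phase in radians)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = r0 * (1.0
              + low[1] * np.sin(low[0] * theta + low[2])
              + high[1] * np.sin(high[0] * theta + high[2]))
    # x, y coordinates tracing the closed shape
    return r * np.cos(theta), r * np.sin(theta)
```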
14

Human Visual Search Performance for Close Range Detection of Static Targets from Moving Sensor Platforms

Hewitt, Jennifer 01 January 2024 (has links) (PDF)
Search models based on human perception have been developed by military researchers over the past few decades and have both military and commercial applications in sensor design and implementation. These models were created primarily for static imagery and accurately predict task performance for systems with stationary targets and stationary sensors, provided the observer is given unlimited time to make targeting decisions. To account for situations where decisions must be made on a shorter time scale, the time-limited search model was developed to describe how task performance evolves with time. Recent variations of this model account for dynamic target situations and dynamic sensor situations; the latter was designed to model performance from vehicle-mounted sensors. That model is applied here to optimize sensor configuration for near-infrared search for Burmese pythons in grass, using both static imagery and videos recorded from a moving sensor platform. By coupling the established dynamic sensor model with camera matrix theory, measured static human perception data can be used to optimize sensing system selection and sensor operations, including sensor pointing angle, height, and platform speed, to maximize human search performance for the detection of close-range ground targets from a moving sensor platform. To illustrate this, the methodology is applied to the detection of Burmese pythons viewed in the near infrared from a moving sensor platform.
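The link between camera matrix theory and search geometry rests on basic pinhole projection. The minimal sketch below shows how sensor height, depression angle, and focal length determine the approximate pixel extent of a close-range ground target under a flat-ground assumption; the function and numbers are hypothetical illustrations, not the thesis' calibration.

```python
import math

def pixels_on_target(height_m, depression_deg, target_size_m, focal_px=2000.0):
    """Approximate pixel extent of a ground target on the optical axis."""
    depression = math.radians(depression_deg)
    slant_range = height_m / math.sin(depression)   # flat-ground assumption
    return focal_px * target_size_m / slant_range   # small-angle projection

# e.g. a 1 m target seen from 3 m up at a 30 degree depression angle
print(round(pixels_on_target(3.0, 30.0, 1.0)))
```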
15

On the effective number of tracked trajectories in normal human vision.

Tripathy, Srimant P., Narasimhan, Sathyasri, Barrett, Brendan T. January 2007 (has links)
No / Z. W. Pylyshyn and R. W. Storm (1988) have shown that human observers can accurately track four to five items at a time. However, when a threshold paradigm is used, observers are unable to track more than a single trajectory accurately (S. P. Tripathy & B. T. Barrett, 2004). This difference between the two studies is examined systematically using substantially suprathreshold stimuli. The stimuli consisted of one (Experiment 1) or more (Experiments 2 and 3) bilinear target trajectories embedded among several linear distractor trajectories. The target trajectories deviated clockwise (CW) or counterclockwise (CCW) (by 19°, 38°, or 76° in Experiments 1 and 2 and by 19°, 38°, or 57° in Experiment 3), and observers reported the direction of deviation. From the percentage of correct responses, the "effective" number of tracked trajectories was estimated for each experimental condition. The total number of trajectories in the stimulus and the number of deviating trajectories had only a small effect on the effective number of tracked trajectories; the effective number tracked was primarily influenced by the angle of deviation of the targets and ranged from four to five trajectories for a ±76° deviation to only one to two trajectories for a ±19° deviation, regardless of whether the different magnitudes of deviation were blocked (Experiment 2) or interleaved (Experiment 3). Simple hypotheses based on "averaging of orientations," "preallocation of resources," or pop-out, crowding, or masking of the target trajectories are unlikely to explain the relationship between the effective number tracked and the angle of deviation of the target trajectories. This study reconciles the difference between the studies cited above in terms of the number of trajectories that can be tracked at a time.
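One simple way to turn percent correct into an "effective" number of tracked trajectories is a high-threshold guessing model for the single-target case: the deviation is seen only if the target happens to be among the m tracked trajectories, and the observer guesses otherwise. The sketch below uses that assumption purely for illustration; it is not necessarily the estimator used in the study.

```python
def effective_number_tracked(p_correct, n_trajectories):
    """Single-target, high-threshold guessing model (illustrative assumption).

    P(correct) = m/T + (1 - m/T) * 0.5  =>  m = T * (2 * P(correct) - 1)
    """
    m = n_trajectories * (2.0 * p_correct - 1.0)
    return max(0.0, min(float(n_trajectories), m))

# e.g. 75% correct with 10 trajectories implies about 5 effectively tracked
print(effective_number_tracked(0.75, 10))
```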
16

The Canny edge detector revisited

McIlhagga, William H. 16 October 2010 (has links)
Yes / Canny (IEEE Trans. Pattern Anal. Image Proc. 8(6):679-698, 1986) suggested that an optimal edge detector should maximize both signal-to-noise ratio and localization, and he derived mathematical expressions for these criteria. Based on these criteria, he claimed that the optimal step edge detector was similar to a derivative of a Gaussian. However, Canny's work suffers from two problems. First, his derivation of the localization criterion is incorrect. Here we provide a more accurate localization criterion and derive the optimal detector from it. Second, and more seriously, the Canny criteria yield an infinitely wide optimal edge detector. The width of the optimal detector can, however, be limited by considering the effect of neighbouring edges in the image. If we do so, we find that the optimal step edge detector, according to the Canny criteria, is the derivative of an ISEF filter, proposed by Shen and Castan (Graph. Models Image Proc. 54:112-133, 1992). In addition, if we also consider detecting blurred (or non-sharp) Gaussian edges of different widths, we find that the optimal blurred-edge detector is the above optimal step edge detector convolved with a Gaussian. This implies that edge detection must be performed at multiple scales to cover all the blur widths in the image. We derive a simple scale selection procedure for edge detection, and demonstrate it in one and two dimensions.
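The contrast between the two detector shapes can be made concrete with a short numerical sketch: a derivative-of-Gaussian kernel versus the derivative of an exponential (ISEF-style) smoothing filter, each applied to a noisy one-dimensional step edge. Kernel widths and the noise level are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

x = np.arange(-50, 51).astype(float)
sigma, b = 4.0, 0.25

dog = -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))   # derivative of a Gaussian
isef_deriv = -np.sign(x) * np.exp(-b * np.abs(x))       # derivative of (b/2)exp(-b|x|), rescaled

rng = np.random.default_rng(1)
signal = np.where(np.arange(400) >= 200, 1.0, 0.0) + rng.normal(0, 0.2, 400)  # noisy step at 200

for name, kernel in (("DoG", dog), ("ISEF derivative", isef_deriv)):
    response = np.convolve(signal, kernel, mode='same')
    print(name, "edge located at sample", int(np.argmax(np.abs(response))))
```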
17

Visual perception of gradients : a psychophysical study of the mechanisms of detection and discrimination of achromatic and chromatic gradients

Garcia-Suarez, Luis January 2009 (has links)
No description available.
18

INFORMATION THEORETIC CRITERIA FOR IMAGE QUALITY ASSESSMENT BASED ON NATURAL SCENE STATISTICS

Zhang, Di January 2006 (has links)
Measurement of visual quality is crucial for various image and video processing applications.

The goal of objective image quality assessment is to introduce a computational quality metric that can predict image or video quality. Many methods have been proposed in past decades. Traditionally, measurements convert the spatial data into some other feature domain, such as the Fourier domain, and compute a similarity, such as the mean-square or Minkowski distance, between the test data and the reference (perfect) data; however, only limited success has been achieved. None of the more complicated metrics shows any great advantage over other existing metrics.

The common idea shared among many proposed objective quality metrics is that human visual error sensitivities vary across different spatial frequency, temporal frequency, and directional channels. In this thesis, image quality assessment is approached through a novel framework that computes the information lost in each channel, rather than the similarities used in previous methods. Based on natural scene statistics and several image models, an information theoretic framework is designed to compute the perceptual information contained in images and to evaluate image quality in the form of entropy.

The thesis is organized as follows. Chapter I gives a general introduction to previous work in this research area and a brief description of the human visual system. In Chapter II, statistical models for natural scenes are reviewed. Chapter III proposes the core ideas for computing the perceptual information contained in images. In Chapter IV, information theoretic criteria for image quality assessment are defined. Chapter V presents the simulation results in detail. In the last chapter, future directions and improvements of this research are discussed.
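To give a flavour of scoring quality by channel information rather than pointwise similarity, the sketch below estimates the entropy of simple band-pass responses for a reference and a test image and accumulates the information lost per channel. The difference-of-Gaussians channels and histogram entropy estimate are simplifying assumptions, not the thesis' framework.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_entropy(img, sigma_fine, sigma_coarse, bins=64):
    """Shannon entropy (bits) of a difference-of-Gaussians band-pass response; img is a 2-D float array."""
    band = gaussian_filter(img, sigma_fine) - gaussian_filter(img, sigma_coarse)
    hist, _ = np.histogram(band, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def information_loss_score(reference, test, scales=((0.5, 1), (1, 2), (2, 4))):
    """Larger values mean more channel information is lost in the test image."""
    return sum(max(0.0, band_entropy(reference, f, c) - band_entropy(test, f, c))
               for f, c in scales)
```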
20

Improving Perception From Electronic Visual Prostheses

Boyle, Justin Robert January 2005 (has links)
This thesis explores methods for enhancing digital image-like sensations similar to those that might be experienced by blind users of electronic visual prostheses. Visual prostheses, otherwise referred to as artificial vision systems or bionic eyes, may operate at ultra-low image quality and information levels, as opposed to more common electronic displays such as televisions, for which our expectations of image quality are much higher. The scope of the research is limited to enhancement by digital image processing: that is, by manipulating the content of images presented to the user. The work was undertaken to improve the effectiveness of visual prostheses in representing the visible world. Presently, visual prosthesis development is limited to animal models in Australia and to prototype human trials overseas. Consequently, this thesis deals with simulated vision experiments using normally sighted viewers. The experiments involve an original application of existing image processing techniques to the field of low-quality vision anticipated from visual prostheses. The first outcome of this work is a set of recommendations for effective image processing methods for enhancing viewer perception when using visual prosthesis prototypes. Even at low image quality, some objects can still be recognised, and it is useful for a viewer to be presented with several variations of an image produced by different processing methods. Scene understanding can be improved by incorporating region-of-interest techniques that identify salient areas within images and allow a user to zoom into those areas. There is also some benefit in tailoring the image processing to the type of scene. The second outcome is a metric for the basic information required to interpret a visual scene at low image quality. The amount of information content within an image was quantified using inherent attributes of the image and shown to be positively correlated with the ability of the image to be recognised at low quality.
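A minimal sketch of the kind of low-quality prosthetic-vision simulation described above: an image is reduced to a very coarse grid of quantised brightness levels, optionally after zooming into a region of interest. Grid size, number of grey levels, and the ROI format are illustrative assumptions, not the thesis' protocol.

```python
import numpy as np

def simulate_prosthetic_view(image, grid=(25, 25), grey_levels=8, roi=None):
    """image: 2-D float array in [0, 1]; roi: (row, col, height, width) crop."""
    if roi is not None:
        r, c, h, w = roi                  # zoom into a salient region first
        image = image[r:r + h, c:c + w]
    gh, gw = grid
    h, w = image.shape
    # Average brightness within each coarse cell (crop to a multiple of the grid).
    image = image[:h - h % gh, :w - w % gw]
    cells = image.reshape(gh, image.shape[0] // gh,
                          gw, image.shape[1] // gw).mean(axis=(1, 3))
    # Quantise to the small number of stimulation levels available.
    return np.round(cells * (grey_levels - 1)) / (grey_levels - 1)
```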
