1

Colour constancy in simple and complex scenes

Bramwell, David January 1997 (has links)
Colour constancy is defined as the ability to perceive the surface colours of objects within scenes as approximately constant through changes in scene illumination. Colour constancy in real life functions so seamlessly that most people do not realise that the colour of the light emanating from an object can change markedly throughout the day. Constancy measurements made in simple scenes constructed from flat coloured patches do not produce constancy of this high degree. The question that must be asked is: what are the features of everyday scenes that improve constancy? A novel technique is presented for testing colour constancy. Results are presented showing measurements of constancy in simple and complex scenes. More specifically, matching experiments are performed for patches against uniform and multi-patch backgrounds, the latter of which provide colour contrast. Objects created by the addition of shape and 3-D shading information are also matched against backgrounds consisting of matte reflecting patches. In the final set of experiments, observers match detailed depictions of objects, rich in chromatic contrast, shading, mutual illumination and other real-life features, within depictions of real-life scenes. The results show similar performance across the conditions that contain chromatic contrast, although some uncertainty remains as to whether the results are indicative of human colour constancy performance or of sensory matching capabilities. An interesting division exists between patch matches performed against uniform and multi-patch backgrounds, manifested as a shift in CIE xy space. A simple model of early chromatic processes is proposed and examined in the context of the results.
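The "simple model of early chromatic processes" is not spelled out in the abstract; a von Kries-style diagonal scaling of cone responses is one textbook candidate for such a model, and it also makes the kind of CIE xy shift described above easy to compute. A minimal sketch, assuming the Hunt-Pointer-Estevez XYZ-to-LMS matrix and purely hypothetical tristimulus values (not the thesis's actual model or data):

```python
import numpy as np

# Hunt-Pointer-Estevez matrix: CIE XYZ -> LMS cone responses.
M_HPE = np.array([[ 0.4002, 0.7076, -0.0808],
                  [-0.2263, 1.1653,  0.0457],
                  [ 0.0000, 0.0000,  0.9182]])

def xy_chromaticity(XYZ):
    """Project a tristimulus vector onto the CIE xy chromaticity plane."""
    X, Y, Z = XYZ
    s = X + Y + Z
    return np.array([X / s, Y / s])

def von_kries_adapt(XYZ_sample, XYZ_white_src, XYZ_white_dst):
    """Diagonal (von Kries) adaptation: scale each cone class by the ratio
    of the destination and source white-point cone responses."""
    lms = M_HPE @ XYZ_sample
    gain = (M_HPE @ XYZ_white_dst) / (M_HPE @ XYZ_white_src)
    return np.linalg.inv(M_HPE) @ (gain * lms)

# Example: a surface seen under illuminant A, corrected towards D65.
white_A   = np.array([1.0985, 1.0000, 0.3558])   # illuminant A white point
white_D65 = np.array([0.9505, 1.0000, 1.0890])   # D65 white point
sample    = np.array([0.45, 0.40, 0.20])         # hypothetical tristimulus values

corrected = von_kries_adapt(sample, white_A, white_D65)
print(xy_chromaticity(sample), "->", xy_chromaticity(corrected))
```

The printed pair of chromaticities illustrates how a diagonal adaptation rule moves a match in xy space; the division between uniform- and multi-patch backgrounds reported above is a shift of exactly this kind.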
2

The Isolation of Human Rod and Cone Photoreceptor Activity combining Electroretinography and Silent Substitution Techniques

Maguire, John January 2017 (has links)
Aims: The electroretinogram (ERG) can be used to independently assess the function of rod and cone photoreceptors within the human retina. The work in this thesis sought to investigate an alternative method of recording the ERG, using the silent substitution paradigm (Estevez and Spekreijse 1982). The aims are separated into two parts: firstly, the isolation and characterisation of the non-dark-adapted rod photoreceptor response, and secondly, the characterisation of the ERG response from L-, M- and S-cones. Methods: Rod, L-, M- and S-cone isolating as well as non-isolating sinusoidal flicker and transient square-wave stimuli were generated on a four-primary LED ganzfeld stimulator to elicit ERGs from non-dark-adapted participants with normal and compromised rod or cone function. Results: The rod experiments showed that ERGs elicited by rod-isolating silent substitution stimuli exhibit low-pass temporal frequency response characteristics with an upper response limit of 30 Hz and saturate beyond 1000 photopic Td. Responses are optimal between 5 and 8 Hz and between 10 and 100 photopic Td. There is a significant correlation between the response amplitudes obtained with the silent substitution method and current standard clinical protocols. The cone experiments showed that L-, M- and S-cone stimulation produced ERGs with very different morphologies. L- and M-cone stimulation is of limited use as an objective measure of colour vision deficiency. Conclusion: Silent substitution provides an effective method for the isolation of human rod and cone photoreceptor function in subjects when stimuli are used within appropriate parameter ranges.
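For a four-primary stimulator, the silent substitution paradigm referred to above reduces to solving a small linear system: choose LED modulation depths so that the target photoreceptor class sees contrast while the other three classes are silenced. A minimal sketch with a hypothetical primaries-to-photoreceptor matrix (in practice the entries come from photoreceptor fundamentals integrated against the measured LED spectra):

```python
import numpy as np

# Rows: photoreceptor classes (rod, L, M, S); columns: the four LED primaries.
# Entries are hypothetical excitations each primary produces in each class.
A = np.array([[0.9, 0.7, 0.3, 0.1],    # rods
              [0.6, 0.9, 0.4, 0.05],   # L-cones
              [0.5, 0.6, 0.8, 0.1],    # M-cones
              [0.1, 0.05, 0.2, 0.9]])  # S-cones

def silent_substitution(A, target=0, contrast=0.3):
    """Solve for LED modulation depths that give `contrast` in the target
    photoreceptor class and zero (silent) contrast in all the others."""
    desired = np.zeros(A.shape[0])
    desired[target] = contrast
    return np.linalg.solve(A, desired)

led_modulation = silent_substitution(A, target=0)   # rod-isolating stimulus
print("LED modulation depths:   ", led_modulation)
print("Photoreceptor contrasts: ", A @ led_modulation)  # ~[0.3, 0, 0, 0]
```

Changing `target` to 1, 2 or 3 gives L-, M- or S-cone isolating modulations in the same way; the stimulus is valid only while the solved modulation depths stay within the gamut of the LEDs.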
3

Integration across time determines path deviation discrimination for moving objects.

Whitaker, David J., Levi, D.M., Kennedy, Graeme J. 04 1900 (has links)
Yes / Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects or to avoid them if they pose a threat. Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale invariant' model in which any two stimuli that possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. / Wellcome Trust, Leverhulme Trust, NIH
4

Relative contributions to vergence eye movements of two binocular cues for motion-in-depth

Giesel, M., Yakovleva, A., Bloj, Marina, Wade, A.R., Norcia, A.M., Harris, J.M. 11 November 2019 (has links)
Yes / When we track an object moving in depth, our eyes rotate in opposite directions. This type of "disjunctive" eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that, under our conditions, IOVD cues provided very little input to vergence mechanisms. The eye movements that did coincide with the presentation of IOVD stimuli were likely not a response to stimulus motion but a phoria initiated by the absence of a disparity signal. / Supported by NIH EY018875 (AMN), BBSRC grants BB/M001660/1 (JH), BB/M002543/1 (AW), and BB/MM001210/1 (MB).
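The slope analysis of vergence traces mentioned above can be illustrated in a few lines: horizontal vergence is taken as the difference between the two eyes' horizontal positions, a line is fitted over an analysis window, and the sign of the fitted slope is compared with the stimulus direction. A sketch with simulated data only; the study's actual preprocessing, analysis windows and sign conventions may differ:

```python
import numpy as np

def vergence_slope(left_h, right_h, t, window):
    """Horizontal vergence is the difference between the left- and right-eye
    horizontal positions; its slope over an analysis window gives the
    direction and speed of the vergence response (deg/s)."""
    vergence = np.asarray(left_h) - np.asarray(right_h)
    mask = (t >= window[0]) & (t <= window[1])
    slope, _ = np.polyfit(t[mask], vergence[mask], 1)
    return slope

def direction_consistent(slope, stimulus_direction):
    """True when the vergence trace moves in the same direction as the
    motion-in-depth stimulus (e.g. convergence for approaching motion)."""
    return np.sign(slope) == np.sign(stimulus_direction)

# Toy trial: a weak convergence response over 1 s, sampled at 1 kHz.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1000)
left  =  0.2 * t + 0.02 * rng.standard_normal(t.size)   # deg
right = -0.2 * t + 0.02 * rng.standard_normal(t.size)   # deg
slope = vergence_slope(left, right, t, window=(0.1, 0.9))
print(slope, direction_consistent(slope, stimulus_direction=+1))
```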
5

Human Visual Search Performance for Close Range Detection of Static Targets from Moving Sensor Platforms

Hewitt, Jennifer 01 January 2024 (has links) (PDF)
Search models based on human perception have been developed by military researchers over the past few decades and have both military and commercial applications for sensor design and implementation. These models were created primarily for static imagery and accurately predict task performance for systems with stationary targets and stationary sensors, provided the observer is given unlimited time to make targeting decisions. To account for situations where decisions must be made on a shortened time scale, the time-limited search model was developed to describe how task performance evolves with time. Recent variations of this model account for dynamic target situations and dynamic sensor situations; the latter was designed to model performance from vehicle-mounted sensors. This model is applied here to the optimization of sensor configuration for near-infrared search for Burmese pythons in grass, for both static imagery and videos recorded from a moving sensor platform. By coupling the established dynamic sensor model with camera matrix theory, measured static human perception data can be used to optimize sensing system selection and sensor operations, including sensor pointing angle, height, and platform speed, to maximize human search performance for the detection of close-range ground targets from a moving sensor platform. To illustrate this, the methodology is applied to the detection of Burmese pythons viewed in the near infrared from a moving sensor platform.
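The "camera matrix" coupling described above is essentially projective geometry: given sensor height, pointing angle and lens parameters, a pinhole model predicts how many pixels a ground target subtends at each range, and that in turn feeds the perception model. A simplified sketch with hypothetical sensor and target parameters (not the configuration used in the dissertation):

```python
import numpy as np

def pixels_on_target(target_size_m, ground_range_m, sensor_height_m,
                     focal_length_mm, pixel_pitch_um):
    """Pinhole-camera estimate of how many pixels a ground target subtends
    when viewed from an elevated, forward-looking sensor."""
    slant_range_m = np.hypot(ground_range_m, sensor_height_m)
    image_size_mm = focal_length_mm * target_size_m / slant_range_m
    return image_size_mm * 1e3 / pixel_pitch_um

def depression_angle_deg(ground_range_m, sensor_height_m):
    """Angle below horizontal at which the sensor must point to centre the target."""
    return np.degrees(np.arctan2(sensor_height_m, ground_range_m))

# Hypothetical configuration: 2 m mast, 25 mm lens, 15 um pixels, 0.3 m target.
for rng_m in (5.0, 10.0, 20.0):
    print(rng_m, "m:",
          round(pixels_on_target(0.3, rng_m, 2.0, 25.0, 15.0), 1), "px,",
          round(depression_angle_deg(rng_m, 2.0), 1), "deg depression")
```

Sweeping height, pointing angle and platform speed through such a geometric model, and feeding the resulting pixels-on-target and dwell time into the time-limited search model, is the kind of optimization loop the abstract describes.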
6

The role of spatial derivatives in feature detection

Barbieri, Gillian Sylvia Anna-Stasia January 2000 (has links)
No description available.
7

The Computational Study of Vision

Hildreth, Ellen C., Ullman, Shimon 01 April 1988 (has links)
The computational approach to the study of vision inquires directly into the sort of information processing needed to extract important information from the changing visual image: information such as the three-dimensional structure and movement of objects in the scene, or the color and texture of object surfaces. An important contribution that computational studies have made is to show how difficult vision is to perform, and how complex are the processes needed to perform visual tasks successfully. This article reviews some computational studies of vision, focusing on edge detection, binocular stereo, motion analysis, intermediate vision, and object recognition.
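As a concrete example of the first topic in that list, edge detection in its simplest form amounts to estimating the image gradient and thresholding its magnitude. A minimal Sobel-based sketch, illustrative only and not any particular algorithm discussed in the article:

```python
import numpy as np

def sobel_edges(image, threshold=0.25):
    """Gradient-based edge detection: convolve with the Sobel kernels, take
    the gradient magnitude, and threshold it."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(image.astype(float), 1, mode="edge")
    gx = np.zeros(image.shape, float)
    gy = np.zeros(image.shape, float)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()

# Toy image: a bright square on a dark background.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
print(sobel_edges(img).sum(), "edge pixels found")
```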
8

How do Humans Determine Reflectance Properties under Unknown Illumination?

Fleming, Roland W., Dror, Ron O., Adelson, Edward H. 21 October 2001 (has links)
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflection and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically-captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
9

WAVELET AND SINE BASED ANALYSIS OF PRINT QUALITY EVALUATIONS

Mahalingam, Vijay Venkatesh 01 January 2004 (has links)
Recent advances in imaging technology have resulted in a proliferation of images across different media. Before they reach the end user, these signals undergo several transformations, which may introduce defects or artifacts that affect the perceived image quality. In order to design and evaluate these imaging systems, perceived image quality must be measured. This work focuses on the analysis of print image defects and the characterization of printer artifacts such as banding and graininess using a human visual system (HVS) based framework. Specifically, the work addresses the prediction of the visibility of print defects (banding and graininess) by representing the defects in terms of orthogonal wavelet and sinusoidal basis functions and combining the detection probabilities of the individual basis functions to predict the response of the HVS. The detection probabilities for the basis function components and the simulated print defects are obtained from separate subjective tests. The prediction performance of both the wavelet-based and sine-based approaches is compared with the subjective testing results. The wavelet-based prediction performs better than the sinusoidal approach and can be a useful technique in developing measures and methods for print quality evaluations based on the HVS.
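The combination step described above, projecting a defect onto basis functions and merging per-channel detection probabilities into a single predicted visibility, is commonly implemented as probability summation over independent channels. A minimal sketch with hypothetical channel contrasts and thresholds; the thesis's psychometric functions and channel set may differ:

```python
import numpy as np

def channel_detection_prob(contrast, threshold, beta=3.5):
    """Weibull psychometric function: probability that a single basis-function
    channel detects a component of the given contrast."""
    return 1.0 - np.exp(-(np.asarray(contrast) / np.asarray(threshold)) ** beta)

def probability_summation(contrasts, thresholds):
    """Combine independent channel detections: the defect is seen if at
    least one channel detects its component."""
    p = channel_detection_prob(contrasts, thresholds)
    return 1.0 - np.prod(1.0 - p)

# Hypothetical banding defect projected onto three basis functions.
component_contrasts = [0.04, 0.02, 0.01]
channel_thresholds  = [0.03, 0.05, 0.08]
print(probability_summation(component_contrasts, channel_thresholds))
```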
10

SYMLET AND GABOR WAVELET PREDICTION OF PRINT DEFECTS

Klemo, Elios 01 January 2005 (has links)
Recent studies have created models that predict the response of the human visual system (HVS) based on how the HVS processes an image. The most widely known of these is the Gabor model, since Gabor patterns closely resemble receptive fields in the human visual system. The work of this thesis examines the use of Symlets to represent the HVS, since Symlets provide the benefit of orthogonality. One major problem with Symlets is that the energy in the respective Symlet channels is not stable when the image patterns are translated spatially. This thesis addresses the problem by upsampling the Symlets instead of downsampling, thus creating shift-invariant Symlets. The thesis then compares the Gabor and Symlet approaches in predicting the response of the HVS to print defect patterns such as banding and graining. In summary, we found that the Symlet prediction outperforms the Gabor prediction, so Symlets would be a good choice for HVS response prediction. We also concluded that for banding defects, periodicity and size are important factors affecting the response of the HVS to the patterns. For graining defects, we found that size does not greatly affect the response of the HVS to the defect patterns. We present our results using two sets of performance metrics: the mean and the median.
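One standard way to obtain the shift invariance described above is the "a trous" (undecimated) construction, in which the wavelet filter is upsampled by inserting zeros between its taps and the signal is never decimated. A minimal sketch using the sym4 filter from PyWavelets, showing that detail-channel energy is unchanged when the input pattern is shifted; the thesis's exact construction may differ:

```python
import numpy as np
import pywt

def atrous_detail(signal, wavelet="sym4", level=1):
    """Undecimated ('a trous') detail coefficients: the Symlet high-pass
    filter is upsampled by zero insertion instead of downsampling the
    signal, which makes the channel energy shift invariant."""
    hi = np.array(pywt.Wavelet(wavelet).dec_hi)
    step = 2 ** (level - 1)
    up = np.zeros((len(hi) - 1) * step + 1)   # filter upsampled by zero insertion
    up[::step] = hi
    # Circular convolution keeps the output the same length as the input.
    n = len(signal)
    padded = np.concatenate([signal, signal])[: n + len(up) - 1]
    return np.convolve(padded, up, mode="valid")

# Channel energy stays the same when the input pattern is shifted.
rng = np.random.default_rng(0)
x = rng.standard_normal(128)
e0 = np.sum(atrous_detail(x) ** 2)
e1 = np.sum(atrous_detail(np.roll(x, 3)) ** 2)
print(e0, e1)   # equal up to floating-point rounding
```

Running the same comparison with decimated coefficients (e.g. `pywt.dwt`) shows the channel energy drifting as the pattern is translated, which is exactly the instability the thesis sets out to remove.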
