1 |
Colour constancy in simple and complex scenes. Bramwell, David. January 1997
Colour constancy is defined as the ability to perceive the surface colours of objects within scenes as approximately constant through changes in scene illumination. Colour constancy in real life functions so seamlessly that most people do not realise that the colour of the light emanating from an object can change markedly throughout the day. Constancy measurements made in simple scenes constructed from flat coloured patches do not produce constancy of this high degree. The question that must be asked is: what are the features of everyday scenes that improve constancy? A novel technique is presented for testing colour constancy. Results are presented showing measurements of constancy in simple and complex scenes. More specifically, matching experiments are performed for patches against uniform and multi-patch backgrounds, the latter of which provide colour contrast. Objects created by the addition of shape and 3-D shading information are also matched against backgrounds consisting of matte reflecting patches. In the final set of experiments observers match detailed depictions of objects - rich in chromatic contrast, shading, mutual illumination and other real-life features - within depictions of real-life scenes. The results show similar performance across the conditions that contain chromatic contrast, although some uncertainty still remains as to whether the results are indicative of human colour constancy performance or of sensory matching capabilities. An interesting division exists between patch matches performed against uniform and multi-patch backgrounds that is manifested as a shift in CIE xy space. A simple model of early chromatic processes is proposed and examined in the context of the results.
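The abstract does not specify which "simple model of early chromatic processes" is examined; a common baseline for such models is von Kries adaptation, in which each cone class is rescaled by its response to the illuminant. The sketch below is illustrative only, with invented LMS values:

```python
import numpy as np

def von_kries_adapt(cone_signal, illuminant_white):
    """Von Kries adaptation: rescale each cone class (L, M, S) by the
    reciprocal of its response to the scene illuminant, so the illuminant
    itself maps to equal cone signals and is 'discounted'."""
    return cone_signal / illuminant_white

# Invented LMS values: a surface viewed under a reddish illuminant.
white_under_illum = np.array([1.2, 1.0, 0.7])  # cone response to the illuminant
surface = np.array([0.6, 0.5, 0.35])           # cone response to the surface

adapted = von_kries_adapt(surface, white_under_illum)
print(adapted)  # equal post-adaptation cone signals, ≈ [0.5, 0.5, 0.5]
```

Under this scheme a surface's post-adaptation signal stays constant across illuminants whose effect on the cones is multiplicative, which is one candidate explanation for approximate constancy.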
|
2 |
The Isolation of Human Rod and Cone Photoreceptor Activity combining Electroretinography and Silent Substitution Techniques. Maguire, John. January 2017
Aims: The electroretinogram (ERG) can be used to independently assess the function of rod and cone photoreceptors within the human retina. The work in this thesis sought to investigate an alternative method of recording the ERG, using the silent substitution paradigm (Estevez and Spekreijse 1982). The aims fall into two parts: first, the isolation and characterisation of the non-dark-adapted rod photoreceptor response; and second, the characterisation of the ERG response from L-, M- and S-cones.
Methods: Rod, L-, M- and S-cone isolating as well as non-isolating sinusoidal flicker and transient square-wave stimuli were generated on a 4 primary LED ganzfeld stimulator to elicit ERGs from non-dark adapted participants with normal and compromised rod or cone function.
Results: The rod experiments showed that ERGs elicited by rod-isolating silent substitution stimuli exhibit low-pass temporal frequency response characteristics with an upper response limit of 30 Hz, and saturate beyond 1000 photopic trolands (Td). Responses are optimal between 5 and 8 Hz and between 10 and 100 photopic Td. There is a significant correlation between the response amplitudes obtained with the silent substitution method and those obtained with current standard clinical protocols. The cone experiments showed that L-, M- and S-cone stimulation produced ERGs with very different morphologies. L- and M-cone stimulation is of limited use as an objective measure of colour vision deficiency.
Conclusion: Silent substitution provides an effective method for the isolation of human rod and cone photoreceptor function in subjects when stimuli are used within appropriate parameter ranges.
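The silent substitution paradigm can be sketched as a linear-algebra problem: given the contrast each photoreceptor class sees for unit modulation of each of the four LED primaries, solve for LED modulations that excite one class while leaving the others unchanged. The sensitivity matrix below is invented for illustration; real values come from the photoreceptor fundamentals integrated against each measured LED spectrum.

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = photoreceptor classes
# (L-cone, M-cone, S-cone, rod), columns = the four LED primaries.
# Entry (i, j) is the contrast photoreceptor i sees for unit modulation
# of LED j.
S = np.array([
    [0.9, 0.7, 0.2, 0.1],   # L-cone
    [0.6, 0.8, 0.3, 0.1],   # M-cone
    [0.1, 0.2, 0.9, 0.3],   # S-cone
    [0.3, 0.5, 0.4, 0.8],   # rod
])

# Desired photoreceptor contrasts: silence L, M and S; modulate rods at 20%.
target = np.array([0.0, 0.0, 0.0, 0.2])

# Solve for LED modulation depths that produce exactly this excitation.
led_mod = np.linalg.solve(S, target)

achieved = S @ led_mod
print(achieved)   # ≈ [0, 0, 0, 0.2]: a rod-isolating (cone-silent) stimulus
```

Swapping the target vector isolates any other photoreceptor class with the same apparatus, which is how the L-, M- and S-cone conditions are obtained.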
|
3 |
Integration across time determines path deviation discrimination for moving objects. Whitaker, David J., Levi, D.M., Kennedy, Graeme J. 04 1900
Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat.
Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of the change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, the extent of the motion path and the object size. Furthermore, we compared performance for moving objects with observers' ability to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a 'scale-invariant' model in which any two stimuli that possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different: irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds.
Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. / Wellcome Trust, Leverhulme Trust, NIH
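The contrast between the two models can be made concrete in a toy sketch; the functional forms and constants below are illustrative stand-ins, not the paper's fitted model.

```python
def threshold_static(length, width, k=5.0):
    """Scale-invariant model for the static line: any two stimuli with
    the same 2-D geometry (length/width ratio) perform identically.
    k is an illustrative constant, not a value fitted in the paper."""
    return k * (width / length)

def threshold_moving(duration_s, k=2.0):
    """Duration model for moving objects: the deviation threshold depends
    only on how long the motion path lasts, improving with duration."""
    return k / duration_s

# Same geometry, different absolute size: identical static thresholds.
assert threshold_static(10, 1) == threshold_static(20, 2)

# Two motion paths of equal duration but different speeds and lengths
# (10 deg at 20 deg/s, 5 deg at 10 deg/s) give identical thresholds
# under the duration model.
fast_long = threshold_moving(10 / 20)
slow_short = threshold_moving(5 / 10)
assert fast_long == slow_short
```

The key point the sketch captures is that the moving-object model takes duration as its only argument, so size and velocity drop out entirely.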
|
4 |
Relative contributions to vergence eye movements of two binocular cues for motion-in-depth. Giesel, M., Yakovleva, A., Bloj, Marina, Wade, A.R., Norcia, A.M., Harris, J.M. 11 November 2019
When we track an object moving in depth, our eyes rotate in opposite directions. This type of “disjunctive” eye movement is called horizontal vergence. The sensory control signals for vergence arise from multiple visual cues, two of which, changing binocular disparity (CD) and inter-ocular velocity differences (IOVD), are specifically binocular. While it is well known that the CD cue triggers horizontal vergence eye movements, the role of the IOVD cue has only recently been explored. To better understand the relative contribution of CD and IOVD cues in driving horizontal vergence, we recorded vergence eye movements from ten observers in response to four types of stimuli that isolated or combined the two cues to motion-in-depth, using stimulus conditions and CD/IOVD stimuli typical of behavioural motion-in-depth experiments. An analysis of the slopes of the vergence traces and the consistency of the directions of vergence and stimulus movements showed that under our conditions IOVD cues provided very little input to vergence mechanisms. The eye movements that did occur coinciding with the presentation of IOVD stimuli were likely not a response to stimulus motion, but a phoria initiated by the absence of a disparity signal. / Supported by NIH EY018875 (AMN), BBSRC grants BB/M001660/1 (JH), BB/M002543/1 (AW), and BB/MM001210/1 (MB).
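The slope-and-direction analysis described above can be sketched on a synthetic vergence trace; the sampling rate, noise level and slope below are assumptions, not the recorded data.

```python
import numpy as np

def vergence_slope(t, trace):
    """Least-squares slope (deg/s) of a vergence trace over a window."""
    slope, _ = np.polyfit(t, trace, 1)
    return slope

def direction_consistent(slope, stimulus_direction):
    """True when the sign of the fitted slope matches the stimulus
    direction (+1 approaching/convergence, -1 receding/divergence)."""
    return np.sign(slope) == np.sign(stimulus_direction)

# Synthetic trace: an approaching stimulus driving convergence at
# 0.5 deg/s, with additive noise standing in for measurement jitter.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
trace = 0.5 * t + rng.normal(0.0, 0.01, t.size)

s = vergence_slope(t, trace)
assert direction_consistent(s, +1)
```

Applied per trial, the two functions yield exactly the two summary measures the study relies on: slope magnitude and direction consistency with the stimulus.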
|
5 |
The role of spatial derivatives in feature detection. Barbieri, Gillian Sylvia Anna-Stasia. January 2000
No description available.
|
6 |
The Computational Study of Vision. Hildreth, Ellen C., Ullman, Shimon. 01 April 1988
The computational approach to the study of vision inquires directly into the sort of information processing needed to extract important information from the changing visual image---information such as the three-dimensional structure and movement of objects in the scene, or the color and texture of object surfaces. An important contribution that computational studies have made is to show how difficult vision is to perform, and how complex are the processes needed to perform visual tasks successfully. This article reviews some computational studies of vision, focusing on edge detection, binocular stereo, motion analysis, intermediate vision, and object recognition.
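One of the reviewed topics, edge detection, can be illustrated with the Marr-Hildreth approach: filter the image with a Laplacian of Gaussian (LoG) and mark zero-crossings of the response. A 1-D sketch with an invented step stimulus:

```python
import numpy as np

def log_kernel(sigma, radius):
    """1-D Laplacian-of-Gaussian: the second derivative of a Gaussian."""
    x = np.arange(-radius, radius + 1, dtype=float)
    return (x**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-x**2 / (2 * sigma**2))

def zero_crossings(signal):
    """Indices where the filtered signal changes sign: candidate edges."""
    s = np.sign(signal)
    return np.where(s[:-1] * s[1:] < 0)[0]

# A step edge at position 50 in a 1-D "image".
image = np.concatenate([np.zeros(50), np.ones(50)])
response = np.convolve(image, log_kernel(sigma=2.0, radius=8), mode="same")
edges = zero_crossings(response)
print(edges)  # a zero-crossing adjacent to index 50 marks the step
```

In two dimensions the same idea applies with a 2-D LoG (or difference-of-Gaussians) filter, and the choice of sigma selects the spatial scale at which edges are reported.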
|
7 |
How do Humans Determine Reflectance Properties under Unknown Illumination? Fleming, Roland W., Dror, Ron O., Adelson, Edward H. 21 October 2001
Under normal viewing conditions, humans find it easy to distinguish between objects made out of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflection and illumination are consistent with a given image. In order to work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and a number of photographically-captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve the ill-posed problem.
|
8 |
WAVELET AND SINE BASED ANALYSIS OF PRINT QUALITY EVALUATIONS. Mahalingam, Vijay Venkatesh. 01 January 2004
Recent advances in imaging technology have resulted in a proliferation of images across different media. Before they reach the end user, these signals undergo several transformations, which may introduce defects or artifacts that affect perceived image quality. In order to design and evaluate imaging systems, perceived image quality must be measured. This work focuses on the analysis of print image defects and the characterization of printer artifacts such as banding and graininess using a human visual system (HVS) based framework. Specifically, the work addresses the prediction of the visibility of print defects (banding and graininess) by representing the defects in terms of orthogonal wavelet and sinusoidal basis functions and combining the detection probabilities of each basis function to predict the response of the HVS. The detection probabilities for the basis-function components and for the simulated print defects are obtained from separate subjective tests. The prediction performance of both the wavelet-based and sine-based approaches is compared with the subjective testing results. The wavelet-based prediction performs better than the sinusoidal approach and can be a useful technique in developing measures and methods for print quality evaluation based on the HVS.
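Combining per-basis-function detection probabilities is commonly done by probability summation over independent channels: the defect is seen if any one channel detects its component. A minimal sketch under that assumption (the probabilities are invented, not data from the study):

```python
import numpy as np

def probability_summation(p_components):
    """Combine per-basis-function detection probabilities assuming
    independent channels: the defect is detected if at least one
    channel detects its component."""
    p = np.asarray(p_components, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Invented detection probabilities for three basis-function components
# of a banding pattern.
p_components = [0.10, 0.35, 0.05]
p_defect = probability_summation(p_components)
print(p_defect)   # 1 - 0.90*0.65*0.95 ≈ 0.444
```

The same combination rule applies whether the components come from a wavelet or a sinusoidal decomposition; only the per-component probabilities differ between the two approaches compared in the thesis.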
|
9 |
SYMLET AND GABOR WAVELET PREDICTION OF PRINT DEFECTS. Klemo, Elios. 01 January 2005
Recent studies have created models that predict the response of the human visual system (HVS) based on how the HVS processes an image. The most widely known of these is the Gabor model, since Gabor patterns closely resemble the receptive fields found in the human visual system. This thesis examines the use of Symlets to represent the HVS, since Symlets provide the benefit of orthogonality. One major problem with Symlets is that the energy in a given Symlet channel is not stable when the image pattern is translated spatially. This thesis addresses the problem by upsampling the Symlet filters instead of downsampling the signal, thereby creating shift-invariant Symlets. It then compares the Gabor and Symlet representations in predicting the response of the HVS to print defect patterns such as banding and graining. In summary, the Symlet prediction outperforms the Gabor prediction, so Symlets would be a good choice for HVS response prediction. We also concluded that for banding defects, periodicity and size are important factors affecting the response of the HVS to the patterns, whereas for graining defects size does not greatly affect the response. Results are reported using two performance metrics, the mean and the median.
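The shift-invariance problem can be demonstrated numerically: with circular filtering, an undecimated channel preserves its energy exactly under translation, while the critically sampled (decimated) channel generally does not. A sketch using the sym2 low-pass filter, hardcoded so no wavelet library is needed:

```python
import numpy as np

# sym2 analysis low-pass filter (identical to db2).
H = np.array([-0.12940952255092145, 0.22414386804185735,
              0.836516303737469, 0.48296291314469025])

def circ_filter(x, h):
    """Circular convolution: shifting the input simply shifts the output."""
    y = np.zeros_like(x)
    for k, hk in enumerate(h):
        y = y + hk * np.roll(x, k)
    return y

def channel_energy(x, decimate):
    """Energy in one wavelet channel, with or without downsampling."""
    y = circ_filter(x, H)
    if decimate:
        y = y[::2]          # critically sampled, as in the ordinary DWT
    return float(np.sum(y ** 2))

rng = np.random.default_rng(1)
x = rng.normal(size=64)
x_shifted = np.roll(x, 1)   # translate the pattern by one sample

# The undecimated ("shift-invariant") channel keeps its energy exactly...
assert np.isclose(channel_energy(x, False), channel_energy(x_shifted, False))

# ...while the decimated channel's energy changes with the shift.
print(channel_energy(x, True), channel_energy(x_shifted, True))
```

This is the motivation for the thesis's upsampling construction: an undecimated transform keeps channel energies stable under spatial translation of the defect pattern.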
|
10 |
Estimation de la visibilité routière du point de vue du conducteur : contribution aux aides à la conduite / Estimation of road visibility from the driver's point of view: a contribution to driving assistance systems. Joulan, Karine. 21 September 2015
Advanced driver assistance systems (ADAS) help the driver cope with the driving task in difficult situations. Among the various sensors these ADAS use, on-board cameras deliver images of the road scene that are processed and analysed to warn the driver of potential hazards or to trigger emergency systems. However, these cameras capture the road environment in a way that is far from representative of a driver's perception. One possible consequence is that such ADAS become counter-productive, unexpectedly triggering warning and intervention systems that conflict with the driver. In order to fully serve their purpose, it is essential that ADAS have a map of how the road environment is perceived from the driver's point of view, so that the assistance can be adjusted to what the driver actually needs. We propose to estimate road visibility from the driver's point of view by image processing, using a bio-inspired algorithm that simulates the contrast sensitivity of the human eye. First, we extend a contrast sensitivity function (CSF) model to account for detection rates consistent with road-safety requirements, as well as orientation, colour and the driver's age. Second, we implement this CSF as a spatial filter and compute a visibility level at each pixel of the image. We apply this visibility map to an edge map produced by our bio-inspired contour detector; working on the contours of road objects rather than on object characteristics avoids prior assumptions about scene content. Each contour is thus assigned a visibility level indicating whether or not it is visible to the observer. We validate the method against observers' visual performance, both in a laboratory target-detection task and in a simulated night-driving situation.
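The CSF-as-spatial-filter step can be sketched in one dimension, using the Mannos-Sakrison (1974) CSF as a stand-in for the extended model developed in the thesis; the viewing parameters are assumptions.

```python
import numpy as np

def csf_mannos_sakrison(f):
    """Contrast sensitivity vs spatial frequency (cycles/degree), after
    Mannos & Sakrison (1974); a stand-in for the thesis's extended CSF."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def csf_filter(luminance_profile, pixels_per_degree):
    """Weight each spatial-frequency component of a 1-D luminance profile
    by the CSF; the local amplitude of the result can serve as a
    per-pixel visibility estimate."""
    n = luminance_profile.size
    spectrum = np.fft.rfft(luminance_profile - luminance_profile.mean())
    freqs = np.fft.rfftfreq(n, d=1.0 / pixels_per_degree)  # cycles/degree
    return np.fft.irfft(spectrum * csf_mannos_sakrison(freqs), n)

def peak_amplitude(signal, ppd):
    return float(np.abs(csf_filter(signal, ppd)).max())

# Two gratings of equal physical contrast: 4 c/deg (near the CSF peak)
# and 24 c/deg (far into the high-frequency fall-off). Assumed viewing
# resolution: 60 pixels per degree over a 10-degree field.
ppd = 60
x = np.arange(600) / ppd
low = np.sin(2 * np.pi * 4 * x)
high = np.sin(2 * np.pi * 24 * x)

# After CSF weighting, the mid-frequency grating retains far more amplitude.
assert peak_amplitude(low, ppd) > 2 * peak_amplitude(high, ppd)
```

The same weighting, applied in 2-D over the edge map, is what lets each contour pixel carry a visibility level rather than a raw contrast value.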
Next, we convert these visibility levels into two quantities that an ADAS can readily use: a reaction time and a perceived distance. First, we propose a model, inspired by Piéron's law, that estimates the driver's reaction time as a function of visibility, based on experimental target-detection data on synthetic road images under a given density of daytime fog. Studies have shown that in fog drivers tend to close in on the vehicle ahead so as not to lose sight of it, indicating that the driver does not have sufficient visibility in this configuration. We show the benefit of image restoration methods in terms of reduced reaction time and improved visual performance, such as the detection rate of the vehicle ahead. Second, we estimate the distance to the preceding vehicle from the driver's point of view, based on the detection of its rear lights. The results show that at night drivers estimate the distance of remote obstacles poorly compared with driving aids based on optical imaging, radar or lidar. On this basis, ADAS can play a fundamental role in warning the driver of unsuitable driving behaviour. Finally, we outline the limits of our CSF and visibility models and propose several perspectives; for road applications, one perspective that has been partly realised is the objective evaluation of lighting systems with our visibility model and its consistency with subjective expert assessment.
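The Piéron-style reaction-time model can be sketched as follows; the constants are illustrative, not the values fitted to the fog-detection data.

```python
import numpy as np

def pieron_rt(visibility, t0=0.2, k=0.3, beta=1.0):
    """Piéron-style model: reaction time grows as visibility falls,
    RT = t0 + k * visibility**(-beta). t0 is an irreducible motor/neural
    latency; k and beta shape the visibility dependence."""
    return t0 + k * np.power(visibility, -beta)

# Visibility from clear conditions (1.0) down to dense fog (0.1):
for v in (1.0, 0.5, 0.1):
    print(v, round(float(pieron_rt(v)), 3))  # RT rises as fog thickens
```

An ADAS could invert such a relation: given the estimated visibility of an obstacle's contours, it predicts the driver's likely reaction time and decides whether an alert or intervention is warranted.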
|