81 |
Evoked potential correlates of perceived fluctuations in an ambiguous visual display / Rideout, Bruce Edward 01 January 1974 (has links) (PDF)
No description available.
|
82 |
Optimum finite sequential pattern recognition using the maximum principle of Pontryagin / Hammond, Marvin Harvey January 1968 (has links)
No description available.
|
83 |
Auditory object perception : counterpoint in a new context / Wright, James K. January 1986 (has links)
No description available.
|
84 |
Psychophysical studies of interactions between luminance and chromatic information in human vision / Clery, Stéphane January 2014 (has links)
In this thesis, I investigated how human vision processes colour and luminance information to enable perception of our environment. I first tested how colour can alter the perception of depth from shading. A luminance variation can be interpreted as either a variation of reflectance (patterning) or a variation of shape. The process of shape-from-shading interprets luminance variation as changes in the shape of the object (e.g. the shading on an object might elicit the perception of curvature). The addition of colour variation is known to modify this shape-from-shading processing. In the experiments presented here I tested how luminance-driven percepts can be modified by colour. My first series of experiments confirmed that perceived depth is modulated by colour, in a larger number of participants than previously tested. Contrary to previous studies, a wide repertoire of behaviour was found: participants variously experienced more depth, less depth, or no difference. I hypothesised that the colour modulation effect might be due to a low-level contrast modulation of luminance by colour, rather than a higher-level depth effect. In a second series of experiments, I therefore tested how the perceived contrast of a luminance target can be affected by the presence of an orthogonal mask. I found that colour had a range of effects on the perception of luminance, again dependent on the participants. Luminance masks also had a similarly wide range of effects on the perceived contrast of luminance targets. This showed that, at supra-threshold levels, a luminance target's contrast can be modulated by a component of another orientation (colour- or luminance-defined). The effects of luminance and colour did not follow a single rule. In a third series of experiments, I explored this interaction at detection levels of contrast. I found cross-interactions between the luminance target and mask, but no effect of a colour mask.
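A minimal sketch of the kind of stimulus the second series of experiments describes: a luminance target grating overlaid with an orthogonal mask component. This is an illustration under my own assumptions (image size, spatial frequencies, and contrast values are arbitrary, and the mask is represented only as a second contrast plane), not the thesis's actual stimulus code.

```python
import numpy as np

def grating(size, cycles, orientation_deg, contrast):
    """Mean-zero sinusoidal grating with the given contrast amplitude."""
    y, x = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size), indexing="ij")
    theta = np.deg2rad(orientation_deg)
    phase = 2 * np.pi * cycles * (x * np.cos(theta) + y * np.sin(theta))
    return contrast * np.sin(phase)

size = 256
target = grating(size, cycles=4, orientation_deg=0, contrast=0.2)   # horizontal luminance target
mask = grating(size, cycles=4, orientation_deg=90, contrast=0.4)    # orthogonal mask component

# Combine around a mid-grey background of 0.5; values stay within [0, 1].
luminance_image = 0.5 * (1.0 + target + mask)
print(luminance_image.min(), luminance_image.max())
```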
|
85 |
The interaction of motion and form in the perception of global structure: a Glass-pattern study / Or, Chun-fai, Charles, 柯駿輝. January 2005 (has links)
Published or final version / Abstract / Psychology / Master / Master of Philosophy
|
86 |
Spatial and temporal disparities in aurally aided visual search / Griffiths, Shaaron S, shaaron.griffiths@deakin.edu.au January 2001 (has links)
Research over the last decade has shown that auditorily cuing the location of visual targets reduces the time taken to locate and identify targets for both free-field and virtually presented sounds. The first study conducted for this thesis confirmed these findings over an extensive region of free-field space. However, the number of sound locations that are measured and stored in the data library of most 3-D audio spatial systems is limited, so that there is often a discrepancy in position between the cued and physical location of the target. Sampling limitations in the systems also produce temporal delays in conveying the stored data to operators. To investigate the effects of spatial and temporal disparities in audio cuing of visual search, and to provide evidence to alleviate concerns that psychological research lags behind the capability to design and implement synthetic interfaces, experiments were conducted to examine (a) the magnitude of spatial separation, and (b) the duration of temporal delay, between auditory spatial cues and visual targets that altered response times to locate targets and discriminate their shape, relative to when the stimuli were spatially aligned and temporally synchronised, respectively. Participants listened to free-field sound localisation cues that were presented with a single, highly visible target that could appear anywhere across 360° of azimuthal space on the vertical mid-line (spatial separation), or extended to 45° above and below the vertical mid-line (temporal delay). A vertical or horizontal spatial separation of 40° between the stimuli significantly increased response times, while separations of 30° or less did not reach significance. Response times were slowed at most target locations when auditory cues occurred 770 msecs prior to the appearance of targets, but not with shorter durations of temporal delay (i.e., 440 msecs or less). When sounds followed the appearance of targets, the stimulus onset asynchrony that affected response times depended on target location, and ranged from 440 msecs at higher elevations and rearward of participants, to 1,100 msecs on the vertical mid-line. If targets appeared in the frontal field of view, no delay of acoustical stimulation affected performance. Finally, when conditions of spatial separation and temporal delay were combined, visual search times were degraded at a shorter stimulus onset asynchrony than when only the temporal relationship between the stimuli was varied, but responses to spatial separation were unaffected. The implications of the results for the development of synthetic audio spatial systems to aid visual search tasks were discussed.
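The spatial disparity between an auditory cue and a visual target, each given as an azimuth and elevation, can be summarised as a single angular separation of the kind manipulated here (e.g. the 30° and 40° conditions). The sketch below is my own illustration of that calculation; the function name and example values are assumptions, not taken from the thesis.

```python
import math

def angular_separation(az1, el1, az2, el2):
    """Great-circle angle, in degrees, between two directions given as
    (azimuth, elevation) pairs in degrees."""
    az1, el1, az2, el2 = map(math.radians, (az1, el1, az2, el2))
    cos_sep = (math.sin(el1) * math.sin(el2)
               + math.cos(el1) * math.cos(el2) * math.cos(az1 - az2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))

# A cue 40 degrees of azimuth away from a target on the horizontal plane,
# and a cue 30 degrees of elevation away from a target straight ahead:
print(angular_separation(0, 0, 40, 0))   # 40.0
print(angular_separation(0, 0, 0, 30))   # 30.0
```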
|
87 |
Effects of spatial frequency overlap on face and object recognition / Collin, Charles Alain. January 2000 (has links)
There has recently been much interest in how limitations in spatial frequency range affect face and object perception. This work has mainly focussed on determining which bands of frequencies are most useful for visual recognition. However, a fundamental question not yet addressed is how spatial frequency overlap (i.e., the range of spatial frequencies shared by two images) affects complex image recognition. Aside from the basic theoretical interest this question holds, it also bears on research about effects of display format (e.g., line-drawings, Mooney faces, etc.) and studies examining the nature of mnemonic representations of faces and objects. Examining the effects of spatial frequency overlap on face and object recognition is the main goal of this thesis. / A second question that is examined concerns the effect of calibration of stimuli on recognition of spatially filtered images. Past studies using non-calibrated presentation methods have inadvertently introduced aberrant frequency content to their stimuli. The effect this has on recognition performance has not been examined, leading to doubts about the comparability of older and newer studies. Examining the impact of calibration on recognition is an ancillary goal of this dissertation. / Seven experiments examining the above questions are reported here. Results suggest that spatial frequency overlap had a strong effect on face recognition and a lesser effect on object recognition. Indeed, contrary to much previous research, it was found that the band of frequencies occupied by a face image had little effect on recognition, but that small variations in overlap had significant effects. This suggests that the overlap factor is important in understanding various phenomena in visual recognition. Overlap effects likely contribute to the apparent superiority of certain spatial bands for different recognition tasks, and to the inferiority of line drawings in face recognition. Results concerning the mnemonic representation of faces and objects suggest that these are both encoded in a format that retains spatial frequency information, and do not support certain proposed fundamental differences in how these two stimulus classes are stored. Data on calibration generally show that non-calibration has little impact on visual recognition, suggesting that moderate confidence in the results of older studies is warranted.
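To make the central manipulation concrete, the sketch below band-pass filters an image in the Fourier domain and expresses the spatial frequency overlap of two filtered versions as the width of their shared band in octaves. It is an illustrative reconstruction under my own assumptions (band edges, image size, and the octave measure are mine), not the filtering code used in the thesis.

```python
import numpy as np

def bandpass(image, low_cpi, high_cpi):
    """Keep only spatial frequencies between low_cpi and high_cpi (cycles per image)."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    size_y, size_x = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(size_y)) * size_y   # vertical frequency, cycles/image
    fx = np.fft.fftshift(np.fft.fftfreq(size_x)) * size_x   # horizontal frequency, cycles/image
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency of each component
    keep = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * keep)))

def overlap_octaves(band_a, band_b):
    """Width, in octaves, of the frequency range shared by two (low, high) bands."""
    low, high = max(band_a[0], band_b[0]), min(band_a[1], band_b[1])
    return np.log2(high / low) if high > low else 0.0

rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))
study_image = bandpass(image, 2, 16)   # study image restricted to 2-16 cycles/image
test_image = bandpass(image, 8, 64)    # test image restricted to 8-64 cycles/image
print(overlap_octaves((2, 16), (8, 64)))   # 1.0 octave of shared spatial frequencies
```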
|
88 |
Studies in visual search : effects of distractor ratio and local grouping processes / Poisson, Marie E. January 1991 (has links)
According to Feature Integration Theory (Treisman & Gelade, 1980), search for a target defined by features on two different dimensions (e.g. green horizontal target among red horizontal and green vertical distractors) is conducted via serial attentive search of all items in the array. Results presented in this thesis clearly demonstrate that conjunction search is not conducted as a serial self-terminating search, and suggest that subjects selectively search a single feature set. Strong support is also provided for the role of local grouping processes in visual conjunction search. This includes evidence demonstrating: (1) that local context is an important factor in directing search toward the target, and (2) that groups of spatially adjacent homogeneous elements can be processed in parallel. These results point to the importance of spatial layout of target and distractor elements. More recent theories (e.g. Cave & Wolfe, 1990) will have to be amended in order to account for these data.
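For context, the model being rejected has a simple quantitative signature: under a serial self-terminating search, response time grows linearly with display size, and the target-present slope is about half the target-absent slope. The toy sketch below illustrates that prediction; the timing constants are arbitrary assumptions of mine, not values from the thesis.

```python
def predicted_rt(set_size, target_present, base_ms=400.0, per_item_ms=50.0):
    """Mean response time predicted by a serial self-terminating search model."""
    # On target-present trials, about half the items are inspected on average;
    # on target-absent trials, every item must be checked.
    items_inspected = (set_size + 1) / 2 if target_present else set_size
    return base_ms + per_item_ms * items_inspected

for n in (4, 8, 16):
    print(n, predicted_rt(n, True), predicted_rt(n, False))
```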
|
89 |
Eye and mind's eye : evidence for perceptually-grounded mental imagery / Aveyard, Mark; Zwaan, Rolf A. January 2004 (has links)
Thesis (M.S.)--Florida State University, 2004. / Advisor: Dr. Rolf Zwaan, Florida State University, College of Arts and Sciences, Dept. of Psychology. Title and description from dissertation home page (viewed Sept. 29, 2004). Includes bibliographical references.
|
90 |
The role of early visual experience in the development of expert face processing / Le Grand, Richard; Maurer, Daphne January 2003 (has links)
Thesis (Ph.D.)--McMaster University, 2003. / Advisor: Daphne Maurer. Includes bibliographical references. Also available via World Wide Web.
|