1. Role of temporal texture in visual system exploration with computer simulations / Su, Ying-fung. January 2010.
   Thesis (M. Phil.)--University of Hong Kong, 2010. Includes bibliographical references (p. 143-149). Also available in print.

2. Bridging the gap in face recognition performance : what makes a face familiar? / Roark, Dana A. January 2007.
   Thesis (Ph.D.)--University of Texas at Dallas, 2007. Includes vita. Includes bibliographical references (leaves 53-58).

3. Development of a test of facial affect recognition / Sherman, Adam Grant. January 1994.
   Thesis (Ph.D.)--University of Tulsa, 1994. Includes bibliographical references (leaves 96-115).

4. Perceiving and recognising novel objects : detecting configural, switch and shape changes / Keane, Simone K. January 2003.
   Thesis (Ph.D.)--University of Wollongong, 2003. Typescript. Includes bibliographical references (leaves 209-230).

5. Object recognition by integration of information across the dorsal and ventral visual pathways / Farivar, Reza. January 1900.
   Thesis (Ph.D.). Written for the Dept. of Psychology. Title from title page of PDF (viewed 2008/01/12). Includes bibliographical references.

6. The limiting role of backward recognition masking for recognition of speech-like transitions / Gaston, Jeremy R. January 2005.
   Thesis (M.A.)--State University of New York at Binghamton, Psychology Department, 2005. Includes bibliographical references.

7. Sound visualisation as an aid for the deaf : a new approach / Soltani-Farani, A. A. January 1998.
   Visual translation of speech as an aid for the deaf has long been a subject of electronic research and development. This thesis is concerned with a technique of sound visualisation based on the theory that dynamic, rather than static, information is primary in the perception of speech sounds. The goal is the design and evaluation of a system that displays the perceptually important features of an input sound in a dynamic format, as similar as possible to the auditory representation of that sound.

   The human auditory system, as the most effective system of sound representation, is first studied. Then, based on the latest theories of hearing and techniques of auditory modelling, a simplified model of the human ear is developed. In this model, the outer and middle ears together are simulated by a high-pass filter, and the inner ear is modelled by a bank of band-pass filters whose outputs, after rectification and compression, are applied to a visualiser block. To design an appropriate visualiser block, theories of sound and speech perception are reviewed. The perceptually important properties of sound, and their relations to the physical attributes of the sound pressure wave, are then used to map the outputs of the auditory model onto an informative and recognisable running image, like the one known as a cochleagram. This conveyor-like image is sampled by a window of 20 milliseconds' duration at a rate of 50 samples per second, so that a sequence of phase-locked, rectangular images is produced. Animation of these images yields a novel method of spectrography that displays both the time-varying and the time-independent information of the underlying sound at high resolution in real time. The resulting system translates a spoken word into a visual gesture and displays a still picture when the input is a steady-state sound.

   Finally, the implementation of this visualiser system is evaluated through several experiments undertaken by normal-hearing subjects. In these experiments, recognition of the gestures of a number of spoken words is examined through a set of two-word and multi-word forced-choice tests. The results of these preliminary experiments show a high recognition score (40-90 percent, where zero represents chance expectation) after only 10 learning trials. General conclusions from the results suggest: a potential for quick learning of the gestures, language independence of the system, fidelity of the system in translating the auditory information, and persistence of the learned gestures in long-term memory. The results are very promising and motivate further investigation.
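   The processing chain described in this abstract (a high-pass pre-filter for the outer and middle ear, a band-pass filter bank with rectification and compression for the inner ear, and frame-wise sampling of the resulting cochleagram-style image with a 20 ms window at 50 frames per second) can be sketched roughly as below. This is a minimal illustration only: the filter orders, centre frequencies, bandwidths, and compression exponent are assumptions made for the sketch, not parameters taken from the thesis.

```python
# Rough sketch of the pipeline described in the abstract above.
# Filter orders, centre frequencies, bandwidths, and the compression
# exponent are illustrative assumptions, not values from the thesis.
import numpy as np
from scipy.signal import butter, sosfilt

def cochleagram_frames(x, fs, n_channels=32, frame_ms=20.0, frame_rate=50.0):
    """Turn a mono signal into a sequence of small 'image' frames for animation."""
    # Outer and middle ear: approximated by a simple high-pass filter.
    hp = butter(2, 300.0, btype="highpass", fs=fs, output="sos")
    x = sosfilt(hp, x)

    # Inner ear: a bank of band-pass filters with log-spaced centre frequencies.
    centres = np.geomspace(200.0, min(8000.0, 0.4 * fs), n_channels)
    channels = []
    for fc in centres:
        lo, hi = fc / 2 ** 0.25, fc * 2 ** 0.25   # roughly half-octave bands
        bp = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        y = sosfilt(bp, x)
        y = np.maximum(y, 0.0)                    # half-wave rectification
        y = y ** 0.3                              # static amplitude compression
        channels.append(y)
    coch = np.stack(channels)                     # cochleagram-style running image

    # Sample the running image with a 20 ms window at 50 frames per second.
    win = int(frame_ms * 1e-3 * fs)
    hop = int(fs / frame_rate)
    return [coch[:, i:i + win] for i in range(0, coch.shape[1] - win + 1, hop)]
```

   Animating the returned frames reproduces the moving "gesture" display the abstract describes; a steady-state input yields frames that are effectively identical, i.e. a still picture.
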
8. The timecourse of activation of the neural operations and representations supporting visual object identification and memory / Schendan, Haline Elizabeth. January 1998.
   Thesis (Ph. D.)--University of California, San Diego, 1998. Vita. Includes bibliographical references.

9. Computational models of high-level visual perception and recognition / Dailey, Matthew N. January 2002.
   Thesis (Ph. D.)--University of California, San Diego, 2002. Vita. Includes bibliographical references (leaves 158-169).

10. An analysis of the multiple face phenomenon / Paras, Carrie. January 2007.
    Thesis (M.A.)--University of Nevada, Reno, 2007. "May, 2007." Includes bibliographical references (leaves 29-33). Library also has microfilm: Ann Arbor, Mich. : ProQuest Information and Learning Company, [2008]. 1 microfilm reel ; 35 mm. Online version available on the World Wide Web.