About: The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university, consortium, or country archive and want to be added, details can be found on the NDLTD website.
1

The Role of Facial Gestural Information in Supporting Perceptual Learning of Degraded Speech

WAYNE, RACHEL 02 September 2011 (has links)
Everyday speech perception frequently occurs in degraded listening conditions, against a background of noise, interruptions, and intermingling voices. Despite these challenges, speech perception is remarkably successful, due in part to perceptual learning. Previous research has demonstrated more rapid perceptual learning of acoustically degraded speech when listeners are given the opportunity to map the linguistic content of utterances, presented in clear auditory form, onto the degraded auditory utterance. Here, I investigate whether learning is further enhanced by the provision of naturalistic facial gestural information, presented concurrently with either the clear auditory sentence (Experiment I) or with the degraded utterance (Experiment II). Recorded materials were noise-vocoded (4 frequency channels; 50-8000 Hz). Noise-vocoding (NV) is a popular simulation of speech transduced through a cochlear implant; 4-channel NV speech is difficult for naïve listeners to understand, but can be learned over several sentences of practice. In Experiment I, each trial began with an auditory-alone presentation of a degraded stimulus for report (D). In two conditions, this was followed by passive listening to either the clear spoken form and then the degraded form again (condition DCD), or the reverse (DDC); the former format of presentation (DCD) results in more efficient learning (Davis et al., 2005). Condition DCvD was similar to DCD, except that the clear spoken form was accompanied by facial gestural information (a talking face). The results indicate that presenting clear audiovisual feedback (DCvD) confers no advantage over clear auditory feedback (DCD). In Experiment II, two groups received a degraded sentence presentation with corresponding facial movements (Dv); the second group also received a second degraded (auditory-alone) presentation (DvD). Two control conditions and a baseline DCvD condition were also tested.
Although participants in the DvD group never received clear-speech feedback, their performance was significantly greater than that of all other groups, indicating that perceptual learning mechanisms can capitalize on visual concomitants of speech. The DvD group outperformed the Dv group, suggesting that the second degraded presentation in the DvD condition further facilitates generalization of learning. These findings have important implications for improving comprehension of speech in an unfamiliar accent or following cochlear implantation. / Thesis (Master, Psychology) -- Queen's University, 2011
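The noise-vocoding manipulation described in this abstract (splitting speech into a small number of frequency bands and replacing the fine structure in each band with envelope-modulated noise) can be sketched briefly. This is a minimal illustration assuming NumPy/SciPy; the function name `noise_vocode`, the logarithmic band spacing, and the 4th-order Butterworth filterbank are illustrative choices, not details specified by the thesis:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, lo=50.0, hi=8000.0):
    """Noise-vocode `signal`: split it into log-spaced frequency bands,
    extract each band's amplitude envelope, and re-impose that envelope
    on band-limited noise, discarding the fine spectral detail."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # band edges, log-spaced
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for low, high in zip(edges[:-1], edges[1:]):
        # 4th-order Butterworth band-pass applied to speech and noise carrier
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        envelope = np.abs(hilbert(band))           # amplitude envelope
        carrier = sosfilt(sos, noise)              # band-limited noise
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)     # peak-normalize
```

With only 4 channels the output preserves the slow amplitude fluctuations of speech but none of its spectral fine structure, which is why naïve listeners initially find it hard to understand.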
2

Korean-English Bilinguals’ perception of noise-vocoded speech

Lee, Keebbum 23 October 2019 (has links)
No description available.
3

Perception of Synthetic Speech by a Language-Trained Chimpanzee (Pan troglodytes)

Heimbauer, Lisa A. 10 July 2009 (has links)
The ability of human listeners to understand altered speech has been argued to be evidence of uniquely human processing abilities, but early auditory experience may also contribute to this capability. I tested the ability of Panzee, a language-trained chimpanzee (Pan troglodytes), reared and spoken to from infancy by humans, to recognize synthesized words. Training and testing were conducted with different sets of English words in natural, “harmonics-only” (resynthesized using only voiced components), or “noise-vocoded” (based on amplitude-modulated noise bands) forms, with Panzee choosing from “lexigram” symbols that represented words. In Experiment 1, performance was equivalent with words in natural and harmonics-only form. In Experiment 2, performance with noise-vocoded words was significantly higher than chance but lower than with natural words. Results suggest that specialized processing mechanisms are not necessary for speech perception in the absence of traditional acoustic cues, and that the more important factor for speech-processing abilities is early immersion in a speech-rich environment.
