  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

The Role of Facial Gestural Information in Supporting Perceptual Learning of Degraded Speech

Wayne, Rachel 02 September 2011 (has links)
Everyday speech perception frequently occurs in degraded listening conditions, against a background of noise, interruptions, and intermingling voices. Despite these challenges, speech perception is remarkably successful, due in part to perceptual learning. Previous research has demonstrated more rapid perceptual learning of acoustically degraded speech when listeners are given the opportunity to map the linguistic content of utterances, presented in clear auditory form, onto the degraded auditory utterance. Here, I investigate whether learning is further enhanced by the provision of naturalistic facial gestural information, presented concurrently with either the clear auditory sentence (Experiment I) or with the degraded utterance (Experiment II). Recorded materials were noise-vocoded (4 frequency channels; 50–8000 Hz). Noise-vocoding (NV) is a popular simulation of speech transduced through a cochlear implant, and 4-channel NV speech is difficult for naïve listeners to understand, but can be learned over several sentences of practice. In Experiment I, each trial began with an auditory-alone presentation of a degraded stimulus for report (D). In two conditions, this was followed by passive listening to either the clear spoken form and then the degraded form again (condition DCD), or the reverse (DDC); the former format of presentation (DCD) results in more efficient learning (Davis et al., 2005). Condition DCvD was similar to DCD, except that the clear spoken form was accompanied by facial gestural information (a talking face). The results indicate that presenting clear audiovisual feedback (DCvD) does not confer any advantage over clear auditory feedback (DCD). In Experiment II, two groups received a degraded sentence presentation with corresponding facial movements (Dv); the second group also received a second degraded (auditory-alone) presentation (DvD). Two control conditions and a baseline DCvD condition were also tested.
Although listeners in the DvD group never received clear speech feedback, their performance was significantly greater than in all other conditions, indicating that perceptual learning mechanisms can capitalize on visual concomitants of speech. The DvD group also outperformed the Dv group, suggesting that the second degraded presentation in the DvD condition further facilitates generalization of learning. These findings have important implications for improving comprehension of speech in an unfamiliar accent or following cochlear implantation. / Thesis (Master, Psychology) -- Queen's University, 2011.
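The noise-vocoding manipulation described in this abstract (splitting speech into frequency bands and replacing each band's fine structure with envelope-modulated noise) can be sketched as follows. This is a minimal illustrative implementation, not the exact stimulus-generation procedure used in the thesis; the filter order, logarithmic band spacing, and Hilbert-envelope extraction are assumptions.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfilt

def noise_vocode(signal, fs, n_channels=4, f_lo=50.0, f_hi=8000.0):
    """Crude noise-vocoder: split the signal into n_channels bands,
    extract each band's amplitude envelope, and use the envelope to
    modulate band-limited noise. Fine spectral detail is destroyed
    while the slow amplitude-envelope cues are preserved."""
    # Logarithmically spaced band edges (an illustrative choice;
    # cochlear-implant simulations often use quasi-logarithmic spacing)
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(len(signal))
    vocoded = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band_env = np.abs(hilbert(sosfilt(sos, signal)))  # amplitude envelope
        band_noise = sosfilt(sos, noise)                  # noise carrier
        vocoded += band_env * band_noise
    return vocoded
```

With n_channels=4 over 50–8000 Hz, as in the stimuli described above, the output is intelligible only after practice, because only four coarse envelope channels survive.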
2

Speech Perception of Global Acoustic Structure in Children With Speech Delay, With and Without Dyslexia

Madsen, Mikayla Nicole 07 April 2020 (has links)
Children with speech delay (SD) have underlying deficits in speech perception that may be related to reading skill. Children with SD and children with dyslexia have previously shown deficits for distinct perceptual characteristics, including segmental acoustic structure and global acoustic structure. In this study, 35 children (ages 7–9 years) with SD, with SD + dyslexia, or with typical development were presented with a vocoded speech recognition task to investigate their perception of global acoustic speech structure. Findings revealed no differences in vocoded speech recognition between groups, regardless of SD or dyslexia status. These findings suggest that in children with SD, co-occurring dyslexia does not appear to influence speech perception of global acoustic structure. We discuss these findings in the context of the previous research literature, along with limitations of the current study and directions for follow-up investigations.
4

Korean-English Bilinguals’ Perception of Noise-Vocoded Speech

Lee, Keebbum 23 October 2019 (has links)
No description available.
5

Investigating Speech Perception in Children With Speech Delay, Dyslexia, and Speech Delay and Dyslexia

Spencer, Lauren Marie 24 May 2023 (has links) (PDF)
Perceptual deficits related to phonology in children with speech delay (SD) and children with dyslexia have been identified in separate lines of research. However, only a small number of studies have investigated the perceptual deficits of children with SD and/or dyslexia within the same study to better understand the overlap of their speech perception abilities. Children with SD have previously shown deficits perceiving speech stimuli that are acoustically sparse, particularly when the stimuli contain speech sounds they do not produce correctly. Yet in contrast to children with dyslexia, children with SD are better able to recover linguistic structure from speech stimuli that preserve global acoustic structure in the absence of spectral detail. Therefore, the purpose of this study was to further investigate how children with SD, dyslexia, SD + dyslexia, and typically developing (TD) peers perceive different types of speech. In this study, 40 children (ages 7–10 years) with SD, with dyslexia, with SD + dyslexia, or with typical development were presented with both sine-wave and vocoded speech recognition tasks to investigate their speech perception. Findings revealed no differences between groups on either the sine-wave or the vocoded speech perception task, regardless of SD and/or dyslexia status. Increasing the number of participants or utilizing more sensitive speech perception tasks may provide clinically applicable resources for assessment or intervention. We discuss these findings in the context of the previous research literature, along with limitations of the current study and directions for follow-up investigations.
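Sine-wave speech, the second degraded-stimulus type used in this study, replaces each formant of an utterance with a single time-varying pure tone. Given formant frequency and amplitude tracks extracted by some prior analysis step, resynthesis is just a sum of phase-continuous sinusoids. The function and input format below are illustrative assumptions, not the study's actual stimulus pipeline:

```python
import numpy as np

def sinewave_speech(formant_tracks, fs):
    """Resynthesize an utterance as a sum of pure tones, one per formant.

    formant_tracks: list of (freqs, amps) pairs, where freqs and amps are
    per-sample arrays giving one formant's center frequency (Hz) and
    amplitude. All spectral detail except the formant tracks is discarded.
    """
    out = np.zeros(len(formant_tracks[0][0]))
    for freqs, amps in formant_tracks:
        # Integrate instantaneous frequency to get a continuous phase track,
        # so the tone glides smoothly as the formant moves
        phase = 2.0 * np.pi * np.cumsum(freqs) / fs
        out += np.asarray(amps) * np.sin(phase)
    return out
```

Like noise-vocoding, this preserves global acoustic structure (the slow trajectories of spectral energy) while stripping the fine detail that listeners normally rely on, which is why the two tasks probe related but distinct perceptual skills.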
6

Perception of Synthetic Speech by a Language-Trained Chimpanzee (Pan troglodytes)

Heimbauer, Lisa A. 10 July 2009 (has links)
The ability of human listeners to understand acoustically altered speech has been argued to be evidence of uniquely human processing abilities, but early auditory experience may also contribute to this capability. I tested the ability of Panzee, a language-trained chimpanzee (Pan troglodytes) reared and spoken to from infancy by humans, to recognize synthesized words. Training and testing were conducted with different sets of English words in natural, “harmonics-only” (resynthesized using only voiced components), or “noise-vocoded” (based on amplitude-modulated noise bands) forms, with Panzee choosing from “lexigram” symbols that represented the words. In Experiment 1, performance was equivalent with words in natural and harmonics-only form. In Experiment 2, performance with noise-vocoded words was significantly above chance but lower than with natural words. The results suggest that specialized processing mechanisms are not necessary for speech perception in the absence of traditional acoustic cues, and that the more important factor in speech-processing ability is early immersion in a speech-rich environment.
