Speech perception requires that the listener identify *where* the meaningful units (e.g., syllables) are before identifying *what* those units might be. This segmentation is difficult because there are no clear, systematic silences between words, syllables, or phonemes. One potentially useful cue is the acoustic envelope: slow (< 10 Hz) fluctuations in sound amplitude over time. Sharp increases in the envelope are loosely related to the onsets of syllables. In addition to this cue, the brain may also exploit the temporal regularity of syllables, which last ~200 ms on average across languages. This quasi-rhythmicity enables prediction as a means of identifying syllable onsets. The work presented here supports neural synchrony to the envelope at the syllabic rate as a critical mechanism for segmenting the sound stream. Chapters 1 and 2 show synchrony to both speech and music and demonstrate a relationship between synchrony and successful behavior. Chapter 3, following up on this work, compares the data from Chapter 2 against two competing computational models, an oscillator model and an evoked model, and shows that the data are consistent with an oscillatory mechanism. Together, these chapters support the oscillator as an effective means of read-in and segmentation of rhythmic input.
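The envelope cue described above can be illustrated with a minimal sketch. This is not the dissertation's actual analysis pipeline; it assumes a hypothetical mono signal `audio` at sample rate `fs`, extracts the slow (< 10 Hz) amplitude envelope via the Hilbert transform, and treats sharp envelope rises as rough syllable-onset candidates, spaced by the ~200 ms average syllable duration mentioned in the abstract.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

def acoustic_envelope(audio, fs, cutoff_hz=10.0):
    """Slow (< 10 Hz) amplitude envelope of a mono signal."""
    envelope = np.abs(hilbert(audio))        # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (fs / 2))   # 4th-order low-pass
    return filtfilt(b, a, envelope)          # zero-phase smoothing

def onset_candidates(envelope, fs, min_gap_s=0.2):
    """Sharp envelope rises as rough syllable-onset candidates.
    min_gap_s reflects the ~200 ms average syllable duration."""
    rate = np.diff(envelope) * fs            # envelope derivative
    peaks, _ = find_peaks(rate,
                          height=rate.std(),          # keep sharp rises only
                          distance=int(min_gap_s * fs))
    return peaks / fs                        # approximate onset times (s)

# Toy usage: a 200 Hz tone amplitude-modulated at a syllable-like 4 Hz.
fs = 16_000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
print(onset_candidates(acoustic_envelope(audio, fs), fs))
```

The Hilbert-plus-low-pass approach is one common way to define the envelope; studies of this kind sometimes use rectification or gammatone filterbanks instead, so the sketch stands in for a family of similar methods.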
| Field | Value |
|---|---|
| Identifier | oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:10928751 |
| Date | 17 November 2018 |
| Creators | Doelling, Keith Bryant |
| Publisher | New York University |
| Source Sets | ProQuest.com |
| Language | English |
| Detected Language | English |
| Type | thesis |