
Synchronizing Rhythms: Neural Oscillations Align to Rhythmic Patterns in Sound

<p>Speech perception requires that the listener identify <i>where</i> the meaningful units are (e.g., syllables) before identifying <i>what</i> those units might be. This segmentation is difficult because there are no clear, systematic silences between words, syllables, or phonemes. One potentially useful cue is the acoustic envelope: slow (&lt; 10 Hz) fluctuations in sound amplitude over time. Sharp increases in the envelope are loosely related to the onsets of syllables. In addition to this cue, the brain may also exploit the temporal regularity of syllables, which last ~200 ms on average across languages. This quasi-rhythmicity enables prediction as a means of identifying syllable onsets. The work presented here supports neural synchrony to the envelope at the syllabic rate as a critical mechanism for segmenting the sound stream. Chapters 1 and 2 show synchrony to both speech and music and demonstrate a relationship between synchrony and successful behavior. Chapter 3, following up on this work, compares the data from Chapter 2 with two competing computational models&mdash;oscillator vs. evoked&mdash;and shows that the data are consistent with an oscillatory mechanism. Together, these chapters support the oscillator as an effective means of reading in and segmenting rhythmic input.</p>
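The oscillator-versus-evoked contrast described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the dissertation's actual models: an "evoked" account is modeled as a linear convolution of the envelope with a damped response kernel, while an "oscillator" account is modeled as a phase oscillator with an intrinsic ~5 Hz rhythm whose phase is nudged by envelope bursts. All parameter values (coupling strength, kernel time constants, jitter) are illustrative assumptions.

```python
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)    # 4 s of simulated time

# Quasi-rhythmic "syllable" envelope: ~5 Hz bursts (one per ~200 ms)
# with jittered onsets; the stimulus stops at t = 2 s.
rng = np.random.default_rng(0)
env = np.zeros_like(t)
onset = 0.1
while onset < 2.0:
    idx = int(onset * fs)
    env[idx:idx + 50] += np.hanning(50)      # 50 ms amplitude bump
    onset += 0.2 + rng.normal(0, 0.01)       # ~200 ms spacing, jittered

# Evoked model: linear superposition of stimulus-driven responses,
# i.e., the envelope convolved with a damped impulse response.
kernel_t = np.arange(0, 0.3, 1 / fs)
kernel = np.exp(-kernel_t / 0.05) * np.sin(2 * np.pi * 10 * kernel_t)
evoked = np.convolve(env, kernel)[: t.size]

# Oscillator model: intrinsic frequency ~5 Hz; each envelope burst
# pulls the phase toward its peak (simple Euler integration).
f0, K = 5.0, 8.0
theta = np.zeros_like(t)
for i in range(1, t.size):
    dtheta = 2 * np.pi * f0 - K * env[i - 1] * np.sin(theta[i - 1])
    theta[i] = theta[i - 1] + dtheta / fs
oscillator = np.cos(theta)

# Key qualitative difference: after stimulus offset the evoked response
# dies away, while the oscillator keeps cycling at its intrinsic rate.
post = t > 2.5
print(np.abs(evoked[post]).max(), np.abs(oscillator[post]).max())
```

The post-offset behavior is one signature that distinguishes the two accounts: a purely evoked response cannot outlive its driving input, whereas an entrained oscillator can.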

Identifier: oai:union.ndltd.org:PROQUEST/oai:pqdtoai.proquest.com:10928751
Date: 17 November 2018
Creators: Doelling, Keith Bryant
Publisher: New York University
Source Sets: ProQuest.com
Language: English
Detected Language: English
Type: thesis
