1.
On information representation in the brain
Tee, James Seng Khien, 24 March 2017
<p>A complex nervous system must transmit information many times, potentially over relatively vast distances. For example, visual information originates in the retina and is conveyed via the optic nerve to the Lateral Geniculate Nucleus (LGN) before arriving at the visual cortex at the back of the brain, a distance spanning almost the entire length of the brain. From there, some information may be transmitted further forward to the prefrontal cortex at the front of the brain. How does the brain attain high reliability (i.e., minimal errors) throughout such a communications process? Is information in the brain represented continuously, or discretely? A communications systems engineer, such as Claude Elwood Shannon, would stipulate that a continuous neural coding protocol is too error-prone due to noise. To attain high reliability, a discrete neural coding protocol would be a necessary prerequisite. This is the conclusion of my work in Chapter 2, based on a theoretical simulation of information transmission (i.e., communications) between neurons. My analyses of behavioral tasks in Chapters 3 (a conjunction probability task) and 4 (an intertemporal choice task) further reinforced this conclusion: information in the brain is most likely represented discretely. The right question to pose is not one of continuous-versus-discrete representation, but rather how fine-grained the discreteness is (i.e., how many bits of precision). We cannot and should not simply assume continuous models when modeling cognitive tasks; we need to test how fine-grained the discreteness is. This is a major advance over, and demarcation from, the continuous-model assumption typically employed in data analysis.</p>
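The noisy-relay argument in this abstract can be illustrated with a toy simulation (not the thesis's actual model): a signal passed through many noisy stages drifts without bound when coded continuously, while a code that snaps the signal back to a small set of discrete levels at each stage can absorb the noise. All parameter values here (noise level, number of stages, number of levels) are arbitrary assumptions for illustration.

```python
import random

random.seed(0)

def relay(value, stages, noise_sd, levels=None):
    """Pass a value in [0, 1] through successive noisy stages.

    If `levels` is given, each stage re-quantizes the signal to the
    nearest of `levels` evenly spaced values in [0, 1] (a toy discrete
    code); otherwise the noise accumulates freely (continuous code).
    """
    for _ in range(stages):
        value += random.gauss(0.0, noise_sd)
        if levels:
            step = 1.0 / (levels - 1)
            value = min(max(round(value / step) * step, 0.0), 1.0)
    return value

original = 0.5  # a grid value when levels=5 (0, 0.25, 0.5, 0.75, 1)
analog = relay(original, stages=20, noise_sd=0.05)
digital = relay(original, stages=20, noise_sd=0.05, levels=5)
print("continuous error:", abs(analog - original))
print("discrete error:  ", abs(digital - original))
```

With these settings the per-stage noise is almost always smaller than half the quantization step (0.125), so the discrete relay usually ends exactly at the original value, whereas the continuous relay's error grows roughly with the square root of the number of stages.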
2.
Synchronizing Rhythms: Neural Oscillations Align to Rhythmic Patterns in Sound
Doelling, Keith Bryant, 17 November 2018
<p>Speech perception requires that the listener identify <i>where</i> the meaningful units are (e.g., syllables) before they can identify <i>what</i> those units might be. This segmentation is difficult because there are no clear, systematic silences between words, syllables, or phonemes. One potentially useful cue is the acoustic envelope: slow (&lt; 10 Hz) fluctuations in sound amplitude over time. Sharp increases in the envelope are loosely related to the onsets of syllables. In addition to this cue, the brain may also exploit the temporal regularity of syllables, which last ~200 ms on average across languages. This quasi-rhythmicity enables prediction as a means to identify syllable onsets. The work presented here supports neural synchrony to the envelope at the syllabic rate as a critical mechanism for segmenting the sound stream. Chapters 1 and 2 show synchrony to both speech and music and demonstrate a relationship between synchrony and successful behavior. Chapter 3, following up on this work, compares the data from Chapter 2 with two competing computational models (oscillator vs. evoked) and shows that the data are consistent with an oscillatory mechanism. These chapters support the oscillator as an effective means of read-in and segmentation of rhythmic input.</p>
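The envelope cue described above can be sketched in a few lines (a toy illustration, not the dissertation's analysis pipeline): synthesize a tone whose amplitude pulses at a syllable-like 5 Hz, extract the slow envelope by rectifying and smoothing, and mark sharp upward threshold crossings as candidate onsets. The sampling rate, modulation rate, smoothing window, and threshold are all illustrative assumptions.

```python
import numpy as np

fs = 1000                       # sample rate in Hz (toy value)
t = np.arange(0, 2.0, 1 / fs)   # 2 seconds of signal

# Toy "speech": a 100 Hz carrier whose amplitude pulses at 5 Hz,
# mimicking the quasi-rhythmic ~200 ms syllable rate.
mod = 0.5 * (1 + np.sin(2 * np.pi * 5 * t - np.pi / 2))
sound = mod * np.sin(2 * np.pi * 100 * t)

# Acoustic envelope: rectify, then smooth with a 50 ms moving
# average, a crude low-pass filter well below 10 Hz.
win = int(0.05 * fs)
envelope = np.convolve(np.abs(sound), np.ones(win) / win, mode="same")

# Candidate syllable onsets: upward crossings of half the peak
# level, i.e. sharp rises in the envelope.
thresh = envelope.max() / 2
onsets = np.flatnonzero((envelope[:-1] < thresh) & (envelope[1:] >= thresh))
print(len(onsets), "candidate onsets; spacing (ms):", np.diff(onsets).tolist())
```

The detected crossings land roughly 200 ms apart, matching the 5 Hz modulation; applied to real speech, the same rise-detection idea yields the loose syllable-onset cue the abstract describes.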