According to the classical efficient-coding hypothesis, biological neurons are naturally adapted to transmit and process information about the stimulus in an optimal way. Shannon's information theory provides methods to compute the fundamental limits on information transfer by a general system. Understanding how these limits differ between different classes of neurons may help us to better understand how sensory and other information is processed in the brain. In this work we provide a brief review of information theory and its use in computational neuroscience. We use mathematical models of neuronal cells with stochastic input that realistically reproduce different activity patterns observed in real cortical neurons. By employing the neuronal input-output properties we calculate several key information-theoretic characteristics, including the information capacity. In order to determine the information capacity we propose an iterative extension of the Blahut-Arimoto algorithm that generalizes to continuous input channels subjected to constraints. Finally, we compare the information optimality conditions among different models and parameter sets.
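For context, the classical Blahut-Arimoto algorithm that the thesis extends computes the capacity of a discrete memoryless channel by alternating updates of the input distribution. The sketch below shows the standard discrete, unconstrained form only; the thesis's continuous-input, constrained generalization is not reproduced here, and the function name and interface are illustrative.

```python
import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
    """Estimate the capacity (in nats) of a discrete memoryless channel.

    P[x, y] is the conditional probability of output y given input x.
    This is the standard discrete Blahut-Arimoto iteration; the thesis
    extends this scheme to constrained continuous-input channels.
    """
    n_in = P.shape[0]
    p = np.full(n_in, 1.0 / n_in)           # start from a uniform input distribution
    for _ in range(max_iter):
        q = p @ P                           # output marginal q(y)
        # D[x] = exp of the KL divergence between P(.|x) and q
        with np.errstate(divide="ignore", invalid="ignore"):
            kl_terms = np.where(P > 0, P * np.log(P / q), 0.0)
        D = np.exp(kl_terms.sum(axis=1))
        p_new = p * D
        p_new /= p_new.sum()                # multiplicative update, renormalized
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # Capacity estimate = mutual information at the final input distribution
    q = p @ P
    with np.errstate(divide="ignore", invalid="ignore"):
        mi_terms = np.where(P > 0, p[:, None] * P * np.log(P / q), 0.0)
    return mi_terms.sum(), p

# Example: a noiseless binary channel attains capacity log(2) nats (= 1 bit)
C, p_opt = blahut_arimoto(np.eye(2))
```

The update multiplies the current input distribution by the exponentiated KL divergence of each input's output profile from the output marginal, which provably increases the mutual information at every step until it converges to the capacity.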
Identifier | oai:union.ndltd.org:nusl.cz/oai:invenio.nusl.cz:382919 |
Date | January 2018 |
Creators | Bárta, Tomáš |
Contributors | Košťál, Lubomír, Pokora, Ondřej |
Source Sets | Czech ETDs |
Language | English |
Detected Language | English |
Type | info:eu-repo/semantics/masterThesis |
Rights | info:eu-repo/semantics/restrictedAccess |