
Multi-scale error-correcting codes and their decoding using belief propagation

This work is motivated by error-correcting codes in the brain. To counteract the effect of representation noise, a large number of neurons participate in encoding even low-dimensional variables. In many brain areas, the mean firing rate of a neuron as a function of the represented variable, called the tuning curve, has a unimodal shape; different neurons are centered at different values, defining a unary code. This dissertation focuses on a new type of neural code in which neurons have periodic tuning curves with a diversity of periods. Neurons that exhibit this tuning are the grid cells of the entorhinal cortex, which represent self-location in two-dimensional space. First, we investigate the mutual information between such multi-scale codes and the encoded variable as a function of tuning curve width. For decoding, we consider maximum likelihood (ML) and plausible neural network (NN)-based models. For unary neural codes, Fisher information increases with narrower tuning, regardless of the decoding method. By contrast, for the multi-scale neural code, the optimal tuning curve width depends on the decoding method: while narrow tuning is optimal for ML decoding, a finite width, matched to the statistics of the noise, is optimal for the NN decoder. This finding may explain why actual neural tuning curves are relatively wide. Next, motivated by the observation that decoding multi-scale codes is non-trivial, we examine a decoding algorithm based on belief propagation (BP), which promises gains in decoding efficiency. The decoding problem is first formulated as a subset selection problem on a graph and then approximately solved by BP. Even though the graph has many cycles, BP converges to a fixed point after a few iterations. The mean square error of BP approaches that of ML at high signal-to-noise ratios. Finally, using the multi-scale code, we propose a joint source-channel coding scheme that allows separate senders to transmit complementary information over additive Gaussian noise channels without cooperation. The receiver decodes one sender's codeword using the other's as side information and achieves lower distortion with the same number of transmissions. The proposed scheme offers a new framework for designing distributed joint source-channel codes for continuous variables.
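
As a rough illustration of the kind of multi-scale periodic code described above, the following minimal sketch encodes a scalar variable with several modules of phase-shifted periodic tuning curves and recovers it by maximum likelihood under additive Gaussian noise. The function names, parameter values, and the brute-force grid search are illustrative assumptions for this sketch, not the dissertation's actual models or decoders (in particular, the BP decoder is not shown).

    # Illustrative sketch only: a 1-D multi-scale periodic population code
    # with additive Gaussian noise, decoded by brute-force maximum likelihood.
    import numpy as np

    def tuning_curves(x, periods, neurons_per_module, width, peak_rate=10.0):
        """Periodic (von Mises-like) tuning curves: one module per period,
        each containing several phase-shifted neurons."""
        rates = []
        for p in periods:
            for c in np.linspace(0.0, p, neurons_per_module, endpoint=False):
                phase = 2.0 * np.pi * (x - c) / p
                rates.append(peak_rate * np.exp((np.cos(phase) - 1.0) / width**2))
        return np.array(rates)

    def ml_decode(response, periods, neurons_per_module, width, grid):
        """ML estimate under Gaussian noise: the grid point whose mean
        population response is closest (in squared error) to the observation."""
        errs = [np.sum((response - tuning_curves(g, periods, neurons_per_module, width))**2)
                for g in grid]
        return grid[int(np.argmin(errs))]

    rng = np.random.default_rng(0)
    periods = [3.0, 4.0, 5.0]            # distinct scales give a large coding range
    x_true = 7.3
    clean = tuning_curves(x_true, periods, 4, width=0.5)
    noisy = clean + rng.normal(scale=1.0, size=clean.shape)
    grid = np.linspace(0.0, 60.0, 6001)  # coding range ~ least common multiple of periods
    x_hat = ml_decode(noisy, periods, 4, 0.5, grid)
    print(f"true x = {x_true:.2f}, ML estimate = {x_hat:.2f}")

The grid search stands in for the ML decoder described in the abstract; it makes explicit why decoding a multi-scale code is non-trivial (the estimate must reconcile several incommensurate periodic readouts), which is the motivation given for the BP-based decoder.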

Identifier: oai:union.ndltd.org:UTEXAS/oai:repositories.lib.utexas.edu:2152/24842
Date: 25 June 2014
Creators: Yoo, Yong Seok
Source Sets: University of Texas
Language: English
Detected Language: English
Type: Thesis
Format: application/pdf
