
Towards a Computational Theory of the Brain: The Simplest Neural Models, and a Hypothesis for Language

Obtaining a computational understanding of the brain is one of the most important problems in basic science. However, the brain is an incredibly complex organ, and neurobiological research has uncovered enormous amounts of detail at almost every level of analysis (the synapse, the neuron, other brain cells, brain circuits, areas, and so on); it is unclear which of these details are conceptually significant to the basic way in which the brain computes. An essential approach to the eventual resolution of this problem is the definition and study of theoretical computational models, based on varying abstractions and inclusions of such details.

This thesis defines and studies a family of models, called NEMO, based on a particular set of well-established facts or well-founded assumptions in neuroscience: atomic neural firing, random connectivity, inhibition as a local dynamic firing threshold, and fully local plasticity. This thesis asks: what sort of algorithms are possible in these computational models? To the extent possible, what are the simplest assumptions under which interesting computation becomes possible? Additionally, can we find algorithms for cognitive phenomena that, in addition to serving as a "proof of capacity" of the computational model, also reflect what is known about these processes in the brain? The major contributions of this thesis include:
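As a concrete, deliberately simplified illustration of the kind of dynamics these assumptions describe, the following Python sketch simulates a single brain area: neurons fire atomically at discrete time steps, connectivity is a random directed graph, inhibition acts as a k-winners-take-all cap (a local dynamic firing threshold), and plasticity is a fully local Hebbian weight update. The parameter names and values (n, p, k, beta) and the exact update rule are illustrative assumptions, not the thesis's formal definition of basic-NEMO or NEMO.

import numpy as np

# Illustrative parameters (not the thesis's specific values):
# n neurons in one area, random connectivity with probability p,
# k winners fire each step, beta is the Hebbian plasticity increment.
n, p, k, beta = 1000, 0.05, 50, 0.1
rng = np.random.default_rng(0)

# Random directed connectivity within the area: W[i, j] is the weight of
# the synapse from neuron i to neuron j (0 if no synapse exists).
W = (rng.random((n, n)) < p).astype(float)

def step(firing, W):
    """One discrete step: atomic firing, k-cap inhibition, local plasticity."""
    # Each neuron's input is the summed weight from currently firing neurons.
    inputs = firing @ W
    # Inhibition as a dynamic local threshold: only the k highest-input neurons fire.
    winners = np.argsort(inputs)[-k:]
    new_firing = np.zeros(n)
    new_firing[winners] = 1.0
    # Fully local Hebbian plasticity: a synapse from a neuron that fired to a
    # neuron that fires at the next step is strengthened by a factor (1 + beta).
    pre = firing.astype(bool)
    post = new_firing.astype(bool)
    W[np.ix_(pre, post)] *= (1.0 + beta)
    return new_firing, W

# Start from a random set of k firing neurons and run a few steps.
firing = np.zeros(n)
firing[rng.choice(n, k, replace=False)] = 1.0
for _ in range(10):
    firing, W = step(firing, W)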

1. The formal definition of the basic-NEMO and NEMO models, with an explication of their neurobiological underpinnings (that is, their realism as abstractions of the brain).

2. Algorithms for the creation of neural "assemblies", or densely interconnected subsets of neurons, and various operations manipulating such assemblies, including reciprocal projection, merge, association, disassociation, and pattern completion, all in the basic-NEMO model (a projection-style operation is sketched after this list). Using these operations, we show the Turing-completeness of the NEMO model (with some specific additional assumptions).

3. An algorithm for parsing a small but non-trivial subset of English and Russian (and more generally any regular language) in the NEMO model, with meta-features of the algorithm broadly in line with what is known about language in the brain.

4. An algorithm for parsing a much larger subset of English (and other languages), in particular handling dependent (embedded) clauses, in the NEMO model with some additional memory assumptions. We prove that an abstraction of this algorithm yields a new characterization of the context-free languages.

5. Algorithms for the blocks-world planning task, which involves outputting a sequence of steps to rearrange a stack of cubes from one order into a target order, in the NEMO model. A side consequence of this work is an algorithm for a chaining operation in basic-NEMO.

6. Algorithms for several of the most basic and initial steps in language acquisition in the baby brain. This includes an algorithm for the learning of the simplest concrete nouns and action verbs (words like "cat" and "jump") from whole sentences in basic-NEMO with a novel representation of word and contextual inputs. Extending the same model, we present an algorithm for an elementary component of syntax, namely learning the word order of 2-constituent intransitive and 3-constituent transitive sentences. These algorithms are very broadly in line with what is known about language in the brain.
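Below is a minimal Python sketch of the projection-style operation referenced in contribution 2 above: a fixed stimulus repeatedly fires into a target area and, through k-winners-take-all selection and Hebbian plasticity, a stable set of winners, an assembly, emerges. The two-area setup, parameter values, and stopping criterion are illustrative assumptions and not the thesis's exact algorithms.

import numpy as np

# Illustrative parameters only: k stimulus neurons project into an area of
# n neurons; connectivity is random with probability p; beta is the plasticity rate.
n, k, p, beta = 1000, 50, 0.05, 0.1
rng = np.random.default_rng(1)

W_stim = (rng.random((k, n)) < p).astype(float)   # stimulus -> area synapses
W_rec = (rng.random((n, n)) < p).astype(float)    # recurrent synapses within the area

stimulus = np.ones(k)          # the stimulus neurons all fire at every step
firing = np.zeros(n)           # nothing fires in the target area initially
prev_winners = set()

for t in range(50):
    # Total synaptic input: from the stimulus plus from last step's winners.
    inputs = stimulus @ W_stim + firing @ W_rec
    winners = np.argsort(inputs)[-k:]
    new_firing = np.zeros(n)
    new_firing[winners] = 1.0

    # Local Hebbian plasticity on both stimulus->area and recurrent synapses.
    post = new_firing.astype(bool)
    W_stim[:, post] *= (1.0 + beta)
    pre = firing.astype(bool)
    W_rec[np.ix_(pre, post)] *= (1.0 + beta)

    # Stop when the winner set stops changing: that stable set is the assembly.
    if set(winners.tolist()) == prev_winners:
        print(f"assembly stabilized after {t + 1} steps")
        break
    prev_winners = set(winners.tolist())
    firing = new_firing

Because plasticity keeps strengthening the synapses onto recent winners, the same neurons tend to win again on subsequent steps, which is why the winner set tends to stabilize into a densely interconnected assembly.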

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/q0g0-hc14
Date: January 2024
Creators: Mitropolsky, Daniel
Source Sets: Columbia University
Language: English
Type: Theses
