
Generating structured stimuli for investigations of human behavior and brain activity with computational models

Some of the most important discoveries in cognitive neuroscience have come from recent innovations in experimental tools. Computational models that simulate human perception of environmental inputs have revealed the internal processes and features by which the brain learns and represents those inputs. We advance this line of work through two research studies that leverage such models both to generate experimental task stimuli and to predict behavioral and neural responses to those stimuli.

Chapter 1 details how nine language models were used to generate controversial sentence pairs: pairs for which two of the models disagreed about which sentence is more likely to occur. Human judgments about these sentence pairs were collected and compared to model preferences in order to identify model-specific pitfalls and to provide a behavioral performance benchmark for future research. We found that the transformer models GPT-2, RoBERTa, and ELECTRA were the most aligned with human judgments.
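The selection criterion above can be sketched in a few lines. This is a minimal illustration with toy log-probabilities, not the dissertation's actual pipeline: the function names and numbers are invented for clarity, and in practice each score would come from a language model's likelihood for the full sentence.

```python
def prefers_a(log_p_a: float, log_p_b: float) -> bool:
    """True if a model assigns higher (log-)probability to sentence A than B."""
    return log_p_a > log_p_b

def is_controversial(m1_scores, m2_scores) -> bool:
    """A sentence pair is 'controversial' when two models rank it oppositely.

    Each argument is a (log_p_a, log_p_b) pair of toy scores standing in
    for one model's log-probabilities of sentences A and B.
    """
    return prefers_a(*m1_scores) != prefers_a(*m2_scores)

# Toy example: model 1 prefers sentence A, model 2 prefers sentence B.
print(is_controversial((-12.3, -15.1), (-18.0, -11.4)))  # True
```

Because the pair is kept only when the models disagree, any human judgment on it necessarily contradicts at least one model, which is what makes such pairs diagnostic.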

Chapter 2 utilizes the GloVe model of semantic word vectors to generate a set of schematically structured poems spanning ten topics whose specific temporal order was learned by a group of participants. The GloVe model was then used to investigate learning-induced changes in the spatial geometry of the topic representations across the cortex. A Hidden Markov Model was also used to measure neural event segmentation during poem listening. In both analyses, we identified a consistent topography of learning-induced changes in the default mode network, which could be partially explained by the models.
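The geometric comparison underlying this analysis is typically cosine similarity between semantic vectors. The sketch below uses made-up three-dimensional "topic vectors" purely for illustration; real GloVe embeddings are higher-dimensional (e.g., 300-d) and would be averaged over each topic's words.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for per-topic averaged GloVe vectors (not real embeddings).
topic_ocean = [0.9, 0.1, 0.0]
topic_sea   = [0.8, 0.2, 0.1]
topic_math  = [0.0, 0.1, 0.9]

# Semantically related topics should sit closer in the vector space.
print(cosine(topic_ocean, topic_sea) > cosine(topic_ocean, topic_math))  # True
```

Comparing such model-derived similarity structure before and after learning is one standard way to quantify learning-induced changes in representational geometry.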

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/vbtm-gz89
Date: January 2024
Creators: Siegelman, Matthew E.
Source Sets: Columbia University
Language: English
Detected Language: English
Type: Theses
