
Fusing simultaneously acquired EEG-fMRI using deep learning

Simultaneous EEG-fMRI is a multi-modal neuroimaging technique in which hemodynamic activity across the brain is measured at millimeter spatial resolution using functional magnetic resonance imaging (fMRI) while electrical activity at the scalp is measured at millisecond temporal resolution using electroencephalography (EEG). EEG-fMRI is significantly more challenging to collect than either modality on its own because of electromagnetic coupling and interference between the modalities. The rationale for collecting the two modalities together is that, given their complementary spatial and temporal resolutions, EEG-fMRI has potential as a non-invasive neuroimaging technique for recovering the latent source space of neural activity. Inferring this latent source space and fusing the modalities has been a central challenge in realizing the potential of simultaneous EEG-fMRI for human neuroscience.
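
For context, the classical forward models linking the latent sources to the two measurements are linear: EEG is an instantaneous leadfield projection of the sources, and the fMRI BOLD signal is a convolution of source activity with a hemodynamic response function (HRF). The following is a minimal NumPy sketch of these models; the shapes, sampling rate, and variable names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from math import gamma

# Illustrative dimensions and sampling rate, not from the thesis.
rng = np.random.default_rng(0)
n_sources, n_electrodes, n_timepoints = 200, 64, 2000
dt = 0.1  # sampling interval in seconds

# Latent source activity (sources x time).
z = rng.standard_normal((n_sources, n_timepoints))

# EEG: instantaneous linear projection through a leadfield matrix.
leadfield = rng.standard_normal((n_electrodes, n_sources))
eeg = leadfield @ z  # electrodes x time

# fMRI BOLD: convolution of each source time course with a canonical
# double-gamma HRF (peak near 5 s, undershoot near 15 s).
t = np.arange(0, 30, dt)
hrf = t**5 * np.exp(-t) / gamma(6) - (t**15 * np.exp(-t) / gamma(16)) / 6.0
bold = np.stack([np.convolve(zi, hrf)[:n_timepoints] for zi in z])
# In practice the BOLD signal is further mapped to voxels and
# downsampled to the scanner's repetition time (TR).
```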

In this thesis we develop a principled and interpretable approach to this inference problem. We build on knowledge of the generative processes underlying image/signal formation in each modality, traditionally viewed via linear mappings, and recast it in a framework we refer to as “neural transcoding”. The idea of neural transcoding is to generate the signal of one neuroimaging modality from another by first decoding it into a latent source space and then encoding it into the other measurement space. We implement this transcoding via deep architectures based on convolutional neural networks. We first develop a basic transcoding architecture and test it on simulated EEG-fMRI data. Evaluation on simulated data enables us to assess the model’s ability to recover a latent source space given known ground truth. We then extend this architecture with a cycle-consistency loss to create a cycle-CNN transcoder and show that it outperforms, in terms of the fidelity of the recovered source space, both the basic transcoder and traditional source estimation techniques, even when those techniques are provided with detailed information about the image generation process. We then assess the performance of the cycle-CNN transcoder on real simultaneous EEG-fMRI datasets, including an auditory oddball dataset and a three-choice visual categorization dataset.
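
To make the transcoding idea concrete, below is a minimal PyTorch sketch of a pair of convolutional transcoders trained with reconstruction and cycle-consistency terms. The toy 1-D convolutions, channel counts, and equal loss weighting are illustrative assumptions and do not reproduce the thesis architecture.

```python
import torch
import torch.nn as nn

class Transcoder(nn.Module):
    """Toy 1-D convolutional transcoder: decode a modality into a latent
    source space, then encode the sources into the other modality."""
    def __init__(self, in_ch, src_ch, out_ch):
        super().__init__()
        self.decoder = nn.Sequential(  # measurement -> latent sources
            nn.Conv1d(in_ch, src_ch, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(src_ch, src_ch, kernel_size=9, padding=4))
        self.encoder = nn.Conv1d(src_ch, out_ch, kernel_size=9, padding=4)

    def forward(self, x):
        z = self.decoder(x)        # latent source estimate
        return self.encoder(z), z  # other-modality prediction + sources

eeg_to_fmri = Transcoder(in_ch=64, src_ch=128, out_ch=1024)  # toy sizes
fmri_to_eeg = Transcoder(in_ch=1024, src_ch=128, out_ch=64)
mse = nn.MSELoss()

eeg = torch.randn(8, 64, 256)     # batch x electrodes x time (toy data)
fmri = torch.randn(8, 1024, 256)  # batch x voxels x time, at the EEG rate

fmri_hat, z_e = eeg_to_fmri(eeg)
eeg_hat, z_f = fmri_to_eeg(fmri)
eeg_cyc, _ = fmri_to_eeg(fmri_hat)   # EEG -> fMRI -> EEG
fmri_cyc, _ = eeg_to_fmri(eeg_hat)   # fMRI -> EEG -> fMRI

loss = (mse(fmri_hat, fmri) + mse(eeg_hat, eeg)      # transcoding terms
        + mse(eeg_cyc, eeg) + mse(fmri_cyc, fmri))   # cycle-consistency terms
loss.backward()
```

Note that each transcoder exposes its intermediate latent estimate z; in this sketch, that intermediate is what plays the role of the recovered source space and is what makes the approach interpretable.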

Without any prior knowledge of either the hemodynamic response function or the leadfield matrix, the transcoder is able to exploit the temporal and spatial relationships between the modalities and the latent source space to learn these mappings. We show that, for real EEG-fMRI data, the modalities can be transcoded from one to the other, and that the transcoded results for unseen test data have substantial correlation with the ground truth. In addition, we analyze the source space of the transcoders and observe latent neural dynamics that could not be observed with either modality alone, e.g., millimeter-by-millisecond dynamics of cortical regions representing motor activation and somatosensory feedback during finger movement.
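
As a simple illustration of the evaluation described above, agreement between transcoded and measured signals on held-out data can be scored with a Pearson correlation. The helper below is a generic sketch; the function and argument names are ours, not from the thesis.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation between two equally shaped arrays, flattened."""
    a = np.ravel(a).astype(float)
    b = np.ravel(b).astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# e.g., compare transcoded output with the measured signal on test runs:
# r = pearson_r(fmri_transcoded_test, fmri_measured_test)
```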

Collectively, this thesis demonstrates how one can incorporate a principled understanding of the generative process underlying biomedical image/signal formation into a deep learning framework to build models that are interpretable and that combine both modalities symmetrically. It also potentially enables a new type of low-cost computational neuroimaging, i.e., generating an "expensive" fMRI BOLD image from "low-cost" EEG data.

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/k9s8-f604
Date: January 2022
Creators: Liu, Xueqing
Source Sets: Columbia University
Language: English
Type: Theses
