
Structured Deep Probabilistic Machine Learning Models with Imaging Applications

In 2023, breakthroughs achieved by large language models such as ChatGPT proved transformative, revealing the hidden structure within natural language. By learning from vast datasets, these models can reason and perform tasks with a degree of intelligence previously unattainable. However, the domain of medical imaging presents unique challenges: data acquisition is costly and labor intensive, and data from pathological sources is scarce.

Neuroimaging data, for instance, is marked by high dimensionality, limited sample sizes, complex hierarchical and temporal structure, significant noise, and contextual variability. These obstacles are especially prevalent in modalities such as functional Magnetic Resonance Imaging (fMRI) and in computer vision applications, where datasets are naturally sparse. Developing sophisticated methods to overcome these challenges is essential for maximizing the utility of imaging technologies and enhancing our understanding of neurological function. Such advancements are critical for the creation of innovative diagnostic tools and therapeutic approaches for neurological and psychiatric conditions.

Data from the current set of non-invasive neuroimaging modalities is most often analyzed using classical statistical and machine learning methods. In this work, we show that widely used machine learning methods for neuroimaging data can be unified under a Bayesian perspective. We use this unifying view of probabilistic modeling techniques to further develop models and statistical inference methods that address the aforementioned challenges, leveraging substantial developments in artificial intelligence, i.e., deep learning and probabilistic modeling, over the last decade.

In this work, we broaden the family of probabilistic models to encompass various prior structures, including discrete, hierarchical, and temporal elements. We derive efficient inference models using principled Bayesian inference and modern stochastic optimization, and we empirically demonstrate how the representational capacity of neural networks can be combined with principled probabilistic generative models to achieve state-of-the-art results on neuroimaging and computer vision datasets. The methods we develop are applicable to a diverse range of datasets beyond neuroimaging; for instance, we apply these probabilistic inference principles to improve movie and song recommendations, enhance object detection in computer vision models, and perform neural architecture search.
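As a rough illustration of this general approach (not code from the thesis), the sketch below shows a deep latent-variable model trained by amortized variational inference: a neural recognition network approximates the posterior over latent variables, and the evidence lower bound (ELBO) is maximized by stochastic gradient optimization. The architecture, dimensions, and Bernoulli likelihood are illustrative assumptions.

    # Minimal sketch: VAE-style deep generative model with amortized
    # variational inference and stochastic optimization of the ELBO.
    import torch
    import torch.nn as nn

    class VAE(nn.Module):
        def __init__(self, x_dim=784, z_dim=16, h_dim=256):
            super().__init__()
            # Inference network q(z | x): outputs Gaussian mean and log-variance.
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            # Generative network p(x | z): Bernoulli likelihood over inputs.
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            logits = self.dec(z)
            # Negative ELBO = reconstruction term + KL(q(z|x) || p(z)).
            recon = nn.functional.binary_cross_entropy_with_logits(
                logits, x, reduction="sum") / x.shape[0]
            kl = -0.5 * torch.sum(
                1 + logvar - mu.pow(2) - logvar.exp()) / x.shape[0]
            return recon + kl

    # Usage: one stochastic-gradient step on a random mini-batch.
    model = VAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss = model(torch.rand(32, 784))
    loss.backward()
    opt.step()

Structured priors of the kind described above (discrete, hierarchical, or temporal) would replace the standard Gaussian prior p(z) in this sketch; the amortized inference and stochastic optimization machinery stays the same.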

Identifier: oai:union.ndltd.org:columbia.edu/oai:academiccommons.columbia.edu:10.7916/k0zg-de41
Date: January 2023
Creators: Mittal, Arunesh
Source Sets: Columbia University
Language: English
Detected Language: English
Type: Theses
