Entry Date:
February 11, 2019

Develop Novel Neuroimaging Methods to Holistically Capture the Spatiotemporal and Representational Space of Brain Activation


Combining multimodal data to capture an integrated view of brain function in representational space is a powerful approach to studying the human brain, and it will yield a new perspective on behavior and its neurophysiological underpinnings. The approach, termed representational similarity analysis (RSA), compares representational similarity matrices (stimulus x stimulus similarity structures) across imaging modalities and data types. We are developing computational tools that use RSA to link neural data (MEG, fMRI), behavioral data (e.g., button presses, video recordings), and computational models (deep neural networks, DNNs).
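As a minimal illustration of the RSA comparison itself, consider the Python sketch below (our released tools are in MATLAB; this is an illustrative re-implementation, and the array shapes and variable names are hypothetical). One representational matrix is computed per modality, and the two are compared via the Spearman correlation of their lower triangles:

    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from scipy.stats import spearmanr

    def rdm(patterns):
        # Stimulus x stimulus dissimilarity matrix (1 - Pearson correlation).
        return squareform(pdist(patterns, metric="correlation"))

    def rsa_similarity(rdm_a, rdm_b):
        # Spearman correlation of the lower triangles of two matrices.
        tri = np.tril_indices_from(rdm_a, k=-1)
        rho, _ = spearmanr(rdm_a[tri], rdm_b[tri])
        return rho

    rng = np.random.default_rng(0)
    meg_patterns = rng.standard_normal((92, 306))   # hypothetical: 92 stimuli x 306 MEG sensors
    fmri_patterns = rng.standard_normal((92, 500))  # hypothetical: 92 stimuli x 500 voxels
    print(rsa_similarity(rdm(meg_patterns), rdm(fmri_patterns)))

Because the comparison operates on stimulus x stimulus structures rather than raw measurements, the same procedure applies unchanged to behavioral data or DNN activations.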

The tools are exemplified by a novel computational method we recently developed that fuses fMRI and MEG data, yielding a first-of-its-kind visualization of the dynamics of object processing in humans. Intuitively, the method links MEG temporal patterns and fMRI spatial patterns by requiring stimuli to be equivalently represented in both modalities: if two visual stimuli evoke similar MEG patterns, they should also evoke similar fMRI patterns. To demonstrate the method, we captured the spatiotemporal dynamics of ventral-stream activation evoked by visual objects in sighted individuals in two independent data sets.
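A hedged sketch of the fusion logic, assuming one MEG representational matrix per time point and one fMRI representational matrix per searchlight location (both hypothetical inputs): correlating every (location, time) pair of matrices yields a voxel x time map of where and when the two modalities share representational structure:

    import numpy as np
    from scipy.stats import spearmanr

    def fuse(meg_rdms, fmri_rdms):
        # meg_rdms:  (n_times, n_stim, n_stim), one matrix per MEG time point
        # fmri_rdms: (n_locs,  n_stim, n_stim), one matrix per fMRI searchlight
        # Returns an (n_locs, n_times) fusion map of Spearman correlations.
        tri = np.tril_indices(meg_rdms.shape[-1], k=-1)
        meg_vecs = meg_rdms[:, tri[0], tri[1]]    # (n_times, n_pairs)
        fmri_vecs = fmri_rdms[:, tri[0], tri[1]]  # (n_locs, n_pairs)
        fusion = np.empty((fmri_vecs.shape[0], meg_vecs.shape[0]))
        for i, fv in enumerate(fmri_vecs):
            for j, mv in enumerate(meg_vecs):
                fusion[i, j], _ = spearmanr(fv, mv)
        return fusion

A high value at a given location and latency indicates that stimuli represented similarly in fMRI at that location are also represented similarly in MEG at that time, which is the equivalence the method requires.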

Efforts concentrate on: a) methodological development of these tools, by extending MEG-fMRI fusion maps to experimental contrasts, deriving statistical maps and thresholds, and optimizing spatiotemporal resolution; b) validation, by concretely demonstrating that the MEG-fMRI fusion approach can access deep neural sources that are very hard to localize with MEG alone; and c) efficient software implementations, by creating effective MATLAB and GPU tools. In the long run, our goal is to expand the limits of imaging technologies by developing and popularizing computational tools that integrate the spatial and temporal richness of multimodal data.
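For the statistical maps and thresholds in (a), one standard option is a stimulus-label permutation test; the sketch below (hypothetical Python, not our MATLAB pipeline) consistently relabels the stimuli of one matrix, rebuilds the null distribution of fusion correlations, and returns a one-sided p-value:

    import numpy as np
    from scipy.stats import spearmanr

    def permutation_pvalue(rdm_a, rdm_b, n_perm=1000, seed=0):
        # Null hypothesis: no shared representational structure between the two matrices.
        rng = np.random.default_rng(seed)
        tri = np.tril_indices_from(rdm_a, k=-1)
        observed, _ = spearmanr(rdm_a[tri], rdm_b[tri])
        null = np.empty(n_perm)
        for i in range(n_perm):
            perm = rng.permutation(rdm_a.shape[0])
            shuffled = rdm_a[np.ix_(perm, perm)]  # permute rows and columns together
            null[i], _ = spearmanr(shuffled[tri], rdm_b[tri])
        # One-sided p-value with the standard +1 correction.
        return (np.sum(null >= observed) + 1) / (n_perm + 1)

Applied per voxel and time point, such p-values would still require correction for multiple comparisons (e.g., false-discovery-rate or cluster-based thresholds) before a fusion map can be thresholded.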