Entry Date: December 22, 2016

Computational Neuroimaging of Human Auditory Cortex

Principal Investigator: Josh McDermott

Project Start Date: July 2016

Project End Date: June 2019


Just by listening, humans can infer a vast array of things about the world around them: who is talking, whether a window in their house is open or shut, or what their child dropped on the floor in the next room. This ability to derive information from sound is a core component of human intelligence, and is enabled by many stages of neuronal processing extending from the ear into the brain. Although much is known about how the ears convert sound to electrical signals that are sent to the brain, the mechanisms by which the brain mediates our sound recognition abilities remain poorly understood. These gaps in knowledge limit our ability to develop machine systems that can replicate our listening skills (e.g., for use in robots) and to understand the basis of listening difficulties, as in disorders such as dyslexia or auditory processing disorder, or in age-related hearing loss. To gain insight into the neuronal processes that enable auditory recognition, the project will study the brain's processing of sound using fMRI, a technique for non-invasively measuring brain activity. The responses measured in the brain will be compared to the numerical responses produced by state-of-the-art computer algorithms for sound recognition. The research will help reveal the principles of human auditory intelligence, with the long-term goals of enabling more effective machine algorithms and treatments for listening disorders. It will also provide insight into the inner workings of computer audio algorithms, stimulating interaction between engineering, industry, and neuroscience. The project will facilitate other research efforts by disseminating new tools for manipulating sound and creating audio data sets, and will recruit and train women and underrepresented minorities in computational neuroscience.
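For concreteness, comparisons between measured brain responses and a model's numerical responses are often carried out as cross-validated regression from the model's internal activations to voxel responses. The following is a minimal sketch of that style of analysis in Python; the array sizes, the choice of ridge regression, and the random stand-in data are illustrative assumptions, not the project's actual pipeline.

    # Minimal sketch (Python + scikit-learn): cross-validated regression
    # from model activations to fMRI voxel responses. All sizes and the
    # random stand-in data are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    n_sounds, n_units, n_voxels = 165, 512, 1000        # hypothetical sizes
    model_acts = np.random.randn(n_sounds, n_units)     # stand-in for one model layer's responses
    voxel_resp = np.random.randn(n_sounds, n_voxels)    # stand-in for measured fMRI responses

    fold_scores = []
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(model_acts):
        # Fit a ridge regression from model features to all voxels at once,
        # with the regularization strength chosen by internal cross-validation.
        reg = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(model_acts[train], voxel_resp[train])
        pred = reg.predict(model_acts[test])
        # Score each voxel by the correlation between predicted and measured responses.
        r = [np.corrcoef(pred[:, v], voxel_resp[test, v])[0, 1] for v in range(n_voxels)]
        fold_scores.append(np.nanmean(r))
    print("mean cross-validated prediction r:", np.mean(fold_scores))

A model whose activations predict held-out voxel responses well, under this kind of analysis, is a candidate account of the representation in that region.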

Aspects of the structure and function of primary auditory cortex are well established, and there are a variety of proposals for pathways that might extend beyond it. However, we know little about the transformations within auditory cortex that enable sound recognition, and there are few computational models of how such transformations might occur. The goals of the proposed research are to conduct fMRI experiments that reveal representational transformations within auditory cortex that might contribute to auditory recognition, to use fMRI responses to test existing models of auditory computation, and to develop new models that can account for human abilities and neuronal responses. Functional MRI will be used to characterize cortical responses because it measures the entire auditory cortex at once, making it possible to compare responses across regions (including those far from the cortical surface) and thus to probe for representational transformations between them. New models of auditory computation will be developed by leveraging the recent successes of "deep learning", and their relevance to the brain will be tested using new synthesis-based methods for model evaluation, sketched below. The results will help reveal how the auditory cortex mediates robust sound recognition.
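One way to picture synthesis-based model evaluation: synthesize a sound, starting from noise, whose responses within a model match those evoked by a natural reference sound; if the model captures the brain's representation, the synthetic sound should be treated like the reference by listeners and by cortex. The sketch below, in PyTorch, is a hedged illustration under that reading; the model, the toy network, and all parameters are hypothetical placeholders rather than the project's actual methods.

    # Minimal sketch (Python + PyTorch) of synthesis-based model evaluation:
    # optimize a waveform, starting from noise, until the model's internal
    # responses match those evoked by a reference sound. The model and all
    # parameters here are hypothetical placeholders.
    import torch

    def synthesize_matching_sound(model, reference, n_steps=1000, lr=1e-2):
        """Gradient-descend a waveform until its model responses match the reference's."""
        with torch.no_grad():
            target = model(reference)            # responses evoked by the reference sound
        x = torch.randn_like(reference, requires_grad=True)  # start from noise
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), target)
            loss.backward()
            opt.step()
        return x.detach()

    # Usage with a toy 1-D convolutional front end standing in for a trained network:
    toy = torch.nn.Sequential(torch.nn.Conv1d(1, 16, kernel_size=9, stride=4), torch.nn.ReLU())
    reference = torch.randn(1, 1, 16000)         # one second of audio at 16 kHz
    synthetic = synthesize_matching_sound(toy, reference)

If such synthetic sounds evoke responses like their references, the matched model responses plausibly capture the relevant representation; if not, the model is missing information that the brain encodes.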