Prof. Josh H McDermott
Professor of Brain and Cognitive Sciences
Associate Department Head, Brain and Cognitive Sciences (BCS)
Associate Investigator, McGovern Institute
Primary DLC
Department of Brain and Cognitive Sciences
MIT Room: 46-4078
Areas of Interest and Expertise
Computational Audition
Auditory Perception and Cognition
Music Perception
Research Summary
The McDermott lab studies how people hear. Sound is produced by events in the world, travels through the air as pressure waves, and is measured by two sensors (the ears). The brain uses the signals from these sensors to infer a vast number of important things -- what someone said, their emotional state when they said it, and the whereabouts and nature of events we cannot see, to name but a few. Humans make such auditory judgments hundreds of times a day, but their basis in our acoustic sensory input is often not obvious, and reflects many stages of sophisticated processing that remain poorly characterized.
We seek to understand the computational basis of these impressive yet routine perceptual inferences. We hope to use our research to improve devices for assisting those whose hearing is impaired, and to design more effective machine systems for recognizing and interpreting sound, which at present perform dramatically worse in real-world conditions than do normal human listeners.
Our work combines behavioral experiments with computational modeling and tools for analyzing, manipulating, and synthesizing sounds. We draw particular inspiration from machine hearing research: we aim to conduct experiments in humans that reveal how we succeed where machine algorithms fail, and to use approaches from machine hearing to motivate new experimental work. We also have strong ties to auditory neuroscience. Models of the auditory system provide the backbone of our perceptual theories, and we collaborate actively with neurophysiologists and cognitive neuroscientists. The lab thus functions at the intersection of psychology, neuroscience, and engineering.
Current research in the lab explores how humans recognize real-world sound sources, segregate particular sounds from the mixture that enters the ear (the cocktail party problem), separate the acoustic contribution of the environment (e.g. room reverberation) from that of the sound source, and remember and/or attend to particular sounds of interest. We also study music perception and cognition, both for their intrinsic interest, and because music often provides revealing examples of basic hearing mechanisms at work.
Projects
January 20, 2017, Department of Brain and Cognitive Sciences
Understanding Real-World Auditory Scene Analysis
Principal Investigator: Josh McDermott

December 22, 2016, Department of Brain and Cognitive Sciences
Computational Neuroimaging of Human Auditory Cortex
Principal Investigator: Josh McDermott

January 26, 2016, Department of Brain and Cognitive Sciences
Lossy Compression in Auditory Perception
Principal Investigator: Josh McDermott

January 26, 2016, Department of Brain and Cognitive Sciences
Mid-Level Representations of Natural Sounds
Principal Investigator: Josh McDermott

December 6, 2012, Department of Brain and Cognitive Sciences
McDermott Lab: Computational Audition
Principal Investigator: Josh McDermott

January 9, 1998, Department of Brain and Cognitive Sciences
Speech and Hearing Bioscience and Technology (SHBT) Training Program
Principal Investigator: Josh McDermott