Entry Date:
May 15, 2012

Multimodal Understanding Group (MUG)

Principal Investigator: Randall Davis


The Multimodal Understanding Group's objective is to build techniques, software, and hardware that enable natural interaction with information. We envision natural interaction as the integration of speech, gesture, and sketching, emulating human-like dialogue. Consequently, our research focuses on the following areas:

Gesture -- building and testing systems that understand body- and hand-based gestures

Sketch -- improving, generalizing, and applying sketch recognition algorithms to real-world problems

Multimodal -- building systems that integrate speech with the gesture and sketch modalities, leveraging speech understanding and natural language processing research at MIT CSAIL