Principal Investigator Patricia Maes
Project Website http://ttt.media.mit.edu/research/invisiblemedia.html
With Invisible Media we augment the objects around us so that they can sense, and respond to, the focus of a user's attention, delivering content relevant to whatever the user is looking at. The system minimizes bulky wearable gear and lets the user navigate this situated information with speech commands, keeping their hands free to manipulate the objects themselves. Information is presented auditorily, producing a user-system dialog that mimics a domain expert or recommender who knows which objects are in the user's view and can suggest relevant content. We have created Engine-Info, a training application that teaches the components of an internal combustion engine, as well as a personalized shopping scenario that suggests appropriate foods in a supermarket based on a person's preferences and health needs.
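
The interaction pattern described above, attention sensing, speech-driven navigation, and spoken output, might be sketched roughly as in the Python below. This is a minimal illustration under assumed interfaces: the object tags, facet names, and stub functions (sense_focused_object, listen_for_command, speak) are hypothetical, not the project's actual implementation.

    # Hypothetical sketch of one turn of the Invisible Media dialog:
    # an attention sensor reports which tagged object the user faces,
    # a speech command picks an information facet, and the matching
    # content is read aloud. Sensing and speech I/O are stubbed out.

    CONTENT = {
        # object tag id -> facet -> spoken content (illustrative engine data)
        "piston": {
            "overview": "The piston transfers combustion force to the crankshaft.",
            "details": "It travels the cylinder bore on each stroke.",
        },
        "spark_plug": {
            "overview": "The spark plug ignites the fuel-air mixture.",
        },
    }

    def sense_focused_object():
        """Stub for the attention sensor (e.g., a tag reader aimed where
        the user is looking). Returns a tag id, or None if nothing is in view."""
        return "piston"

    def listen_for_command():
        """Stub for the speech recognizer; returns a facet keyword."""
        return "overview"

    def speak(text):
        """Stub for audio output; a real system would use text-to-speech."""
        print("[audio]", text)

    def dialog_step():
        """One turn of the user-system dialog: look up content for the
        object currently in the user's focus of attention."""
        obj = sense_focused_object()
        if obj is None:
            return
        facet = listen_for_command()
        entry = CONTENT.get(obj, {})
        speak(entry.get(facet, f"No {facet} information for {obj}."))

    if __name__ == "__main__":
        dialog_step()

A shopping application like the one described would swap in a different content table and condition the lookup on the user's stored preferences and health profile rather than on a fixed facet list.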