Prof. Antonio Torralba

Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science
Faculty Head, AI+D, EECS Department

Primary DLC

Department of Electrical Engineering and Computer Science

MIT Room: 32-D462

Assistant

Fern Keniston
fernd@mit.edu

Areas of Interest and Expertise

Computer Vision
Machine Learning
Human Visual Perception
Integrated Scene and Object Recognition for More Robust Recognition Systems
Development of Computer Vision Systems for Real-World Recognition Tasks
Modeling Human Perceptual and Cognitive Capabilities
Object Recognition and Classification of Whole Scenes
Visual Recognition and Classification of Places and Objects
Big Data

Research Summary

Professor Torralba's research is in the areas of computer vision, machine learning and human visual perception. He is interested in scene and object recognition, among other things. Scene and object recognition are two related visual tasks generally studied separately. However, by devising systems that solve these tasks in an integrated fashion, he believes it is possible to build more efficient and robust recognition systems.

Professor Torralba has developed a new scene recognition system that you can test: take a picture with your phone or upload one, and it tells you what it contains. It is called the MIT Scene Recognition Demo.

Recent Work

  • Video

    Antonio Torralba - 2019 RD Conference

    November 20, 2019 | Conference Video | Duration: 41:44

    Dissecting Neural Networks

    It is an exciting time for computer vision. With the success of new computational architectures for visual processing, such as deep neural networks (e.g., ConvNets), and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. Even when no labeled examples are available, Generative Adversarial Networks (GANs) have demonstrated a remarkable ability to learn from images and can create nearly photorealistic images. The performance achieved by ConvNets and GANs is remarkable and constitutes the state of the art on many tasks. But why do ConvNets work so well? What is the nature of the internal representation learned by a ConvNet in a classification task? How does a GAN represent our visual world internally? In this talk I will show that the internal representations of both ConvNets and GANs are interpretable in some important cases. I will then show several applications in object recognition, computer graphics, and unsupervised learning from images and audio.

    2019 MIT Research and Development Conference
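The core move behind interpreting a network's internal units, as described in the abstract above, is to compare where a unit fires against where a human-understandable concept appears. The toy sketch below illustrates that idea only: the hand-built edge filter, the synthetic image, and the 95th-percentile threshold are all invented for illustration and are not the actual dissection pipeline or any learned network's weights.

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Naive 2-D valid cross-correlation; enough for a toy "unit" activation map.
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic image: dark left half, bright right half (a single vertical edge).
img = np.zeros((16, 16))
img[:, 8:] = 1.0

# A hand-built stand-in for one learned unit: a vertical-edge filter.
unit = np.array([[-1.0, 0.0, 1.0]] * 3)

# Activation map of this unit over the image.
act = conv2d_valid(img, unit)

# Threshold the activation map at a high quantile to get the unit's
# binary activation mask (where the unit fires most strongly).
thresh = np.quantile(act, 0.95)
mask = act >= thresh

# Ground-truth "concept" mask: pixels along the vertical edge,
# expressed in the activation map's coordinates.
concept = np.zeros_like(mask)
concept[:, 6:8] = True

# Score the unit against the concept with intersection-over-union (IoU);
# a high IoU suggests this unit behaves like an edge detector.
iou = np.logical_and(mask, concept).sum() / np.logical_or(mask, concept).sum()
print(iou)
```

In this contrived setup the unit's activation mask lines up exactly with the edge concept, so the IoU is high; with real learned units and real concept annotations, the same scoring step ranks which human concepts, if any, each unit detects.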