Prof. Luca Carlone

Charles Stark Draper Associate Professor of Aeronautics and Astronautics

Primary DLC

Department of Aeronautics and Astronautics

MIT Room: 31-243

Research Summary

Professor Carlone's research pushes the boundaries of autonomous operation on agile micro aerial vehicles through the design of robust and lightweight perception algorithms. His work combines rigorous theory with practical implementation: Carlone brings new theoretical tools to the robotics community (e.g., convex relaxations, spectral graph theory, distributed computing, compressive sensing) and demonstrates their revolutionary impact in real-world applications. Previous research provided fundamental insights and performance guarantees in robot localization and mapping. Current and future research will redesign the landscape of sensing and perception for resource-constrained robots, with a special focus on swarms of micro and nano aerial vehicles.

Carlone's research aims to bring autonomous robots into the real world. During his Ph.D., Carlone addressed issues related to robust robot perception. Traditional algorithms for localization and mapping are fragile and rely on careful parameter tuning. Moreover, they are prone to failure in off-nominal conditions (e.g., large sensor noise, outliers). Carlone demonstrated that using tools from nonlinear optimization (e.g., convex relaxation, Lagrangian duality), graph theory (e.g., cycle space, spectral graph theory), Riemannian geometry, and probabilistic inference, one can design faster and more robust algorithms that are less sensitive to parameter tuning and adverse environmental conditions. These algorithms have been implemented in popular robotics software libraries and used by universities and companies. Robust perception algorithms relax the requirement of human supervision, making robot deployment cheaper and making it possible to scale to large teams of cooperative robots in real scenarios.
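To make the fragility-versus-robustness contrast concrete, here is a minimal, hypothetical sketch (not Carlone's actual algorithms, which rely on the convex-relaxation and duality machinery above): fitting a line to data contaminated by gross outliers, where an ordinary least-squares fit breaks while a robust Huber-loss fit via SciPy degrades gracefully.

```python
# Illustrative only: ordinary vs. robust least squares on outlier-contaminated
# data. The robust loss bounds each residual's influence, so a few gross
# outliers cannot drag the estimate arbitrarily far from the truth.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.3, size=x.size)  # inliers near y = 2x + 1
y[::5] += rng.uniform(20.0, 40.0, size=y[::5].size)     # 20% gross outliers

def residuals(theta):
    slope, intercept = theta
    return slope * x + intercept - y

ols = least_squares(residuals, np.zeros(2))                        # quadratic loss
robust = least_squares(residuals, np.zeros(2), loss="huber", f_scale=1.0)

print("plain least squares:", ols.x)     # dragged away from (2, 1) by the outliers
print("robust (Huber)     :", robust.x)  # close to the true (2, 1)
```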

Current and future research will enable autonomous navigation of resource-constrained platforms, with a special focus on swarms of agile micro (MAV) and nano (NAV) aerial vehicles. Faster operational speed means more efficient task completion, which is crucial in time-critical applications (e.g., search and rescue). Agility, in particular, is a key requirement for indoor operation, pushing towards the adoption of smaller platforms. While the use of multiple small MAVs appears to be a desirable alternative to more expensive monolithic solutions, deploying these platforms in the real world poses formidable challenges. The limited payload and power impose constraints on the onboard computation and sensing, preventing the use of information-rich sensors such as lidars and depth cameras. Moreover, the adoption of a large number of vehicles severely limits the communication bandwidth available to each vehicle. Finally, the use of platforms with fast dynamics requires perception algorithms to operate in a very challenging regime (motion blur, sub-sampled data, high-rate and low-latency estimation for fast closed-loop control). These challenges require a paradigm shift and open a number of research endeavors: dealing with very sparse sensor data (sparse sensing), designing algorithms that selectively process only the sensor data relevant to a given task (perceptual attention), and making algorithms aware of the onboard resources of the platform (algorithms-and-hardware co-design).
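As a deliberately simplified illustration of perceptual attention, the hypothetical sketch below greedily picks a budgeted subset of candidate features to process, maximizing the log-determinant of the accumulated information matrix (a standard proxy for estimation accuracy); the actual selection criteria in this research line may differ.

```python
# Hypothetical sketch of "perceptual attention" as budgeted feature selection:
# given per-feature information matrices, greedily choose the k features whose
# combined information matrix has the largest log-determinant, so a
# resource-constrained robot processes only the most useful measurements.
import numpy as np

def greedy_select(infos, k, prior=None):
    """Greedily choose k indices maximizing logdet(prior + sum of selected infos)."""
    d = infos[0].shape[0]
    acc = np.eye(d) * 1e-6 if prior is None else prior.copy()  # keep acc invertible
    chosen, candidates = [], set(range(len(infos)))
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in candidates:
            _, logdet = np.linalg.slogdet(acc + infos[i])
            if logdet > best_gain:
                best, best_gain = i, logdet
        chosen.append(best)
        acc += infos[best]
        candidates.remove(best)
    return chosen

rng = np.random.default_rng(1)
# 30 candidate visual features, each contributing a rank-1 information matrix
infos = [np.outer(a, a) for a in rng.normal(size=(30, 3))]
print(greedy_select(infos, k=5))  # the budgeted subset to actually process
```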

Carlone's research will have a broader impact on the robotics ecosystem beyond the micro aerial vehicles domain. The study of resource-constrained perception systems will empower bio-inspired robots (e.g., robotic insects) with advanced navigation capabilities. Moreover, it will impact all domains in which sensing is limited (e.g., endoscopic surgery). Finally, it will promote the use of more affordable sensors in safety-critical applications (e.g., self-driving cars), by leveraging a tighter integration of sensing, perception, and control.

Recent Work

  • Video

    2020 Autonomy Day 1 - Luca Carlone

    April 8, 2020 | Conference Video | Duration: 31:11

    Robot perception and computer vision have witnessed unprecedented progress in the last decade. Robots and autonomous vehicles are now able to detect objects, localize them, and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. While many applications can afford occasional failures (e.g., AR/VR, domestic robotics), high-integrity autonomous systems (including self-driving vehicles) demand a new generation of algorithms. This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness: I present recent advances in the design of certifiable perception algorithms that are robust to extreme amounts of outliers and afford performance guarantees. These algorithms are “hard to break” and are able to work in regimes where all related techniques fail. The second effort targets high-level understanding. While humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present recent work on real-time metric-semantic understanding, which combines robust estimation with deep learning.

    Luca Carlone - 2019 RD Conference

    November 20, 2019 | Conference Video | Duration: 33:57

    Certifiable Perception for Robots and Autonomous Vehicles

    Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects, localize them, and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search & rescue, demand a new generation of algorithms. This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness: I present recent advances in the design of certifiable spatial perception algorithms that are robust to extreme amounts of outliers and afford performance guarantees. These algorithms are “hard to break” and are able to work in regimes where all related techniques fail. The second effort targets metric-semantic understanding. While humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present recent work on real-time metric-semantic understanding, which combines robust estimation with deep learning. I discuss these efforts and their applications to a variety of perception problems, including mesh registration, image-based object localization, and robot Simultaneous Localization and Mapping. (A toy sketch of certifiable estimation appears after these listings.)

    2019 MIT Research and Development Conference
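To make the notion of a performance guarantee concrete, here is a minimal, hypothetical sketch, not the outlier-robust certifiable algorithms from the talks above: Davenport's classical q-method for Wahba's problem, where the rotation best aligning two sets of vectors is recovered from an eigendecomposition and the maximum eigenvalue doubles as a certificate of global optimality.

```python
# Illustrative sketch of a certifiably optimal estimator. Davenport's q-method
# solves Wahba's problem: find the rotation R maximizing sum_i w_i * b_i.(R a_i).
# The optimum is the top eigenvector of a 4x4 matrix K, and lambda_max
# upper-bounds the score of *every* rotation, so attaining it certifies
# global optimality.
import numpy as np
from scipy.spatial.transform import Rotation

def certifiably_optimal_rotation(A, B, w):
    """A, B: (n, 3) unit vectors with B[i] ~ R @ A[i]; w: (n,) weights."""
    M = sum(wi * np.outer(b, a) for a, b, wi in zip(A, B, w))
    z = np.array([M[2, 1] - M[1, 2], M[0, 2] - M[2, 0], M[1, 0] - M[0, 1]])
    K = np.zeros((4, 4))
    K[:3, :3] = M + M.T - np.trace(M) * np.eye(3)
    K[:3, 3] = K[3, :3] = z
    K[3, 3] = np.trace(M)
    evals, evecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    q = evecs[:, -1]                   # top eigenvector, quaternion (x, y, z, w)
    R = Rotation.from_quat(q).as_matrix()
    score = sum(wi * b @ R @ a for a, b, wi in zip(A, B, w))
    # Gap ~ 0 means R attains the upper bound lambda_max: certified optimal.
    return R, evals[-1] - score

rng = np.random.default_rng(0)
R_true = Rotation.from_rotvec(rng.normal(size=3)).as_matrix()
A = rng.normal(size=(20, 3))
A /= np.linalg.norm(A, axis=1, keepdims=True)
B = A @ R_true.T + 0.01 * rng.normal(size=A.shape)  # noisy rotated copies
R_est, gap = certifiably_optimal_rotation(A, B, np.ones(20))
print("rotation error:", np.linalg.norm(R_est - R_true), "certificate gap:", gap)
```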