Companies are racing to implement AI to improve their businesses; however, most have yet to see results. While your company might have the magic algorithm, there’s a good chance it does not yet have quality data from which to gain insights. The data most organizations currently hold was not gathered or created with machine learning in mind; rather, it was traditionally used for measuring physical and financial assets. But how can these same measures apply in a marketplace where the majority of assets are now intangible? For your data (and the insights drawn from it) to have meaning, it must be curated around key knowledge and differentiated from your competitors’ data. When making important decisions, what matters most to your company? How can you capture this knowledge and data to make the best use of it?
Visual object detection and recognition are needed for a wide range of applications, including robotics/drones, self-driving cars, the smart Internet of Things, and portable/wearable electronics. For many of these applications, local embedded processing is preferred due to privacy or latency concerns. In this talk, we will describe how joint algorithm and hardware design can reduce the energy consumption of object detection and recognition while delivering real-time and robust performance. We will discuss several energy-efficient techniques that exploit sparsity and reduce data movement and storage costs, and show how they can be applied to popular forms of object detection and recognition, including those that use deep convolutional neural networks (CNNs). We will present results from recently fabricated ASICs (including our deep CNN accelerator named “Eyeriss,” which is 10x more energy efficient than a mobile GPU) that demonstrate these techniques in real-time computer vision systems.
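To make the sparsity idea concrete, here is a minimal Python sketch (an illustration of our own, not the Eyeriss design): because ReLU layers leave many activations at exactly zero, a multiply-accumulate can simply be skipped whenever an operand is zero, which is one of the basic mechanisms sparsity-aware accelerators exploit in hardware.

```python
import numpy as np

def sparse_conv2d(activations, weights):
    """Naive 2-D convolution (valid padding) that skips zero activations.

    Each skipped zero saves one multiply-accumulate (MAC), the basic
    energy-saving idea behind sparsity-exploiting accelerators.
    """
    H, W = activations.shape
    kH, kW = weights.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    macs = 0  # count the MACs actually performed
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for di in range(kH):
                for dj in range(kW):
                    a = activations[i + di, j + dj]
                    if a == 0.0:  # zero-skipping: no MAC needed
                        continue
                    out[i, j] += a * weights[di, dj]
                    macs += 1
    return out, macs

# ReLU produces many exact zeros, so the savings are substantial.
acts = np.maximum(np.random.randn(8, 8), 0)  # roughly half zeros after ReLU
kern = np.random.randn(3, 3)
result, macs_done = sparse_conv2d(acts, kern)
print(f"MACs performed: {macs_done} of {result.size * kern.size} possible")
```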
How can you protect yourself against threats you don’t know about? What measures can you take to assess your risk before a breach? How can you protect yourself against an attack that originates in an innocuous object like a toaster? Professor John Williams will discuss how organizations can prepare to defend against cybersecurity threats and protect their enterprises. He will discuss a risk modeling and data analytics tool (Saffron) that helps identify risk tolerance and strategies for assessing, responding to, and monitoring cybersecurity risks.
Our work addresses planning, control, and mapping for autonomous robot teams that operate in challenging, partially observable, dynamic environments with limited field-of-view sensors. In such scenarios, individual robots must be able to plan and execute safe paths on short timescales to avoid imminent collisions. Performance can be improved by planning beyond the robots’ immediate sensing horizon using high-level semantic descriptions of the environment. For mapping on longer timescales, the agents must also be able to align and fuse imperfect and partial observations to construct a consistent and unified representation of the environment. Furthermore, these tasks must be performed autonomously onboard, which typically adds significant complexity to the system. This talk will highlight three recently developed solutions to these challenges, which have been implemented to (1) robustly plan paths and demonstrate high-speed agile flight of a quadrotor in unknown, cluttered environments; (2) plan beyond the line of sight by utilizing learned context within the local vicinity, with applications in last-mile delivery; and (3) correctly synchronize partial and noisy representations and fuse maps acquired by single or multiple robots, using a multi-way data association algorithm showcased in a simultaneous localization and mapping (SLAM) application.
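As a concrete illustration of the map-alignment step, the sketch below (our own example, not the talk’s multi-way association algorithm) recovers the rigid transform between two robots’ landmark maps from already-matched correspondences using the classical Kabsch/orthogonal-Procrustes solution; the multi-way data association problem addressed in the talk is precisely the harder task of establishing such correspondences consistently across noisy, partial maps.

```python
import numpy as np

def align_maps(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 2) arrays of matched landmark positions from two
    partial maps. Classical Kabsch / orthogonal Procrustes solution.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Two robots observe the same landmarks in different local frames.
true_R = np.array([[0.0, -1.0], [1.0, 0.0]])  # 90-degree rotation
landmarks_a = np.random.rand(6, 2)
landmarks_b = landmarks_a @ true_R.T + np.array([2.0, -1.0])
R, t = align_maps(landmarks_a, landmarks_b)
fused = landmarks_a @ R.T + t                 # map A expressed in B's frame
print(np.allclose(fused, landmarks_b))        # True
```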
Spatial perception has witnessed unprecedented progress in the last decade. Robots are now able to detect objects, localize them, and create large-scale maps of an unknown environment, which are crucial capabilities for navigation and manipulation. Despite these advances, both researchers and practitioners are well aware of the brittleness of current perception systems, and a large gap still separates robot and human perception. While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search & rescue, demand a new generation of algorithms. This talk discusses two efforts targeted at bridging this gap. The first focuses on robustness: I present recent advances in the design of certifiably robust spatial perception algorithms that tolerate extreme amounts of outliers and afford performance guarantees. These algorithms are “hard to break” and are able to work in regimes where all related techniques fail. The second effort targets metric-semantic understanding. While humans are able to quickly grasp both geometric and semantic aspects of a scene, high-level scene understanding remains a challenge for robotics. I present recent work on real-time metric-semantic understanding, which combines robust estimation with deep learning. I discuss these efforts and their applications to a variety of perception problems, including mesh registration, image-based object localization, and robot simultaneous localization and mapping (SLAM).
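For readers unfamiliar with outlier-robust estimation, the following sketch (a generic classical baseline of our own, not the certifiably robust algorithms presented in the talk) uses RANSAC to fit a line despite 70% gross outliers; unlike certifiable methods, RANSAC is randomized and offers no performance guarantee, which is exactly the gap such algorithms aim to close.

```python
import numpy as np

def ransac_line(points, iters=200, tol=0.05, seed=0):
    """Fit a line y = a*x + b to points despite gross outliers.

    Classical RANSAC: hypothesize models from minimal two-point samples
    and keep the one with the largest inlier set. Illustrates the
    outlier problem only; it provides no certificate of optimality.
    """
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:  # vertical sample, skip this hypothesis
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int((residuals < tol).sum())
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 30% inliers on the line y = 2x + 1, 70% uniform gross outliers.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 30)
inlier_pts = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.01, 30)])
outlier_pts = rng.uniform(-5, 5, size=(70, 2))
pts = np.vstack([inlier_pts, outlier_pts])
(a, b), n_in = ransac_line(pts)
print(f"estimated slope {a:.2f}, intercept {b:.2f}, inliers {n_in}")
```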
Moderator: David Keith
Panelists (4-minute statement each): Kent Larson, Carlo Ratti, Sarah Williams, Jinhua Zhao
Given the severe mobility challenges in urbanizing areas, numerous visions for designing urban mobility systems are being discussed by policymakers, planners, and industry. These visions must anticipate technological and sociodemographic developments while accounting for the constraints of operator business models and environmental concerns. In this session, MIT faculty will share and discuss their ideas for urban mobility systems around the globe, considering both promising technologies and the heterogeneity among the world’s urban centers.