A goal of the Paris Agreement is to stabilize atmospheric greenhouse gas concentrations. If this were achieved, global warming would slow to a rate significantly lower than 21st-century warming rates, but little is known about how this slowdown would unfold over time and across geographies. This work investigates that geographic variability and provides the first framework for estimating the end of rapid anthropogenic warming.
Presenting MIT Startup Exchange Companies:
- serviceMob: Making customer service access simple with AI
- iQ3Connect: VR platform for collaboration
- Plume Labs: Distributed sensing to map air quality
- Hosta Labs: Sensing to generate 3D maps of building interiors
- AirWorks: Aerial intelligence software that cuts drafting time in half
Presenting Bouygues-incubated startups:
- Com'in
- Flexy Moov: Electric vehicle sharing platform for companies
How do some companies thrive in an era of constant technological disruption, while others stumble? How can your company excel in using artificial intelligence, chatbots, and future technological innovations to delight your customers and dominate your competitors? Through seven years of research into digital transformation, we have identified five levers digital masters use to outperform their industry peers. They don’t just adopt technology; they build the capabilities to continuously drive technology-powered transformation. Using examples from several industries and countries, we will show you how to turn your company into a digital master.
Higher education is on the cusp of a step change, and schools like MIT are once again competing to lead it. This has set off an "arms race" among universities, which are spending hundreds of millions of dollars to build innovation ecosystems that produce technology innovators with making and innovation skill sets. These innovators are the people who will soon work with and for you.
You're going to want to know who these people are, who is best at educating them, and how to gain a competitive advantage in hiring them. In this talk, I'm going to help you figure that out.
An open question in artificial intelligence is how to endow agents with the common sense knowledge that humans naturally seem to possess. A prominent theory in child development posits that human infants gradually acquire such knowledge through the process of experimentation. According to this theory, even the seemingly frivolous play of infants is a mechanism for them to conduct experiments to learn about their environment. Inspired by this view of biological sensorimotor learning, I will present my work on building artificial agents that use the paradigm of experimentation to explore and condense their experience into models that enable them to solve new problems. I will discuss the effectiveness of this approach, and the issues that remain open, through case studies of a robot learning to push objects, manipulate ropes, and navigate office environments, and of an agent learning to play video games driven solely by the incentive to conduct experiments.
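A common way to formalize this incentive to experiment (a minimal sketch of curiosity-driven exploration in general, not necessarily the speaker's exact method) is to train a forward model of the environment and pay the agent an intrinsic reward equal to the model's prediction error: experiences the agent can already predict become boring, pushing it toward new experiments. The dynamics, model, and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model: predicts next state from (state, action).
STATE_DIM, ACTION_DIM = 4, 2
W = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, STATE_DIM))
LR = 0.01

def curiosity_step(state, action, next_state):
    """Return intrinsic reward = forward-model prediction error,
    then update the model so the same experience pays less next time."""
    global W
    x = np.concatenate([state, action])
    error = next_state - x @ W          # prediction residual
    W += LR * np.outer(x, error)        # one SGD step on squared error
    return float(np.sum(error ** 2))

# Toy rollout under made-up dynamics: drift proportional to the action.
state = rng.normal(size=STATE_DIM)
for step in range(5):
    action = rng.normal(size=ACTION_DIM)
    next_state = state + 0.1 * np.tanh(action).sum()   # hypothetical dynamics
    reward = curiosity_step(state, action, next_state)
    print(f"step {step}: intrinsic reward = {reward:.4f}")
    state = next_state
```

In a full agent, this intrinsic reward would be fed to a reinforcement-learning policy in place of, or alongside, the environment's own reward.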
Every team has top performers -- people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people’s performance on military, healthcare, and manufacturing tasks when aided by intelligent machine teammates.
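One simple way to picture such mental-state inference (an illustrative Bayesian sketch under invented assumptions, not the speaker's actual models) is to maintain a belief over a small set of hypothesized human strategies and update it from observed actions. The strategies, actions, and likelihoods below are made up for illustration.

```python
import numpy as np

# Hypothetical latent "mental states": which task the human is prioritizing.
strategies = ["triage_first", "restock_first", "assist_machine"]
belief = np.full(len(strategies), 1 / 3)   # uniform prior

# Hypothetical likelihoods P(action | strategy).
# Rows: strategies; columns: the actions listed below.
actions = ["goes_to_patient", "goes_to_storage", "idles"]
likelihood = np.array([
    [0.8, 0.1, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
])

def update_belief(belief, action):
    """Bayes' rule: posterior is proportional to P(action | strategy) * prior."""
    posterior = likelihood[:, actions.index(action)] * belief
    return posterior / posterior.sum()

# Observe the human head toward a patient twice in a row.
for observed in ["goes_to_patient", "goes_to_patient"]:
    belief = update_belief(belief, observed)

for s, p in zip(strategies, belief):
    print(f"P({s} | observations) = {p:.2f}")   # triage_first dominates
```

Given the inferred intent, the machine teammate can then plan complementary actions rather than duplicating the human's effort.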
Machine learning has made tremendous progress over the last decade. It's thus tempting to believe that ML techniques are a "silver bullet", capable of making progress on any real-world problem they are applied to.
But is that really so?
In this talk, I will discuss a major challenge in the real-world deployment of ML: making ML solutions robust, reliable, and secure. In particular, I will survey the widespread vulnerabilities of state-of-the-art ML models to various forms of noise, and then outline promising approaches to alleviating these deficiencies and to making models more human-aligned.
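A classic example of such a vulnerability is the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model's loss; even a small perturbation can flip the prediction. The sketch below applies FGSM to a hand-set logistic-regression classifier whose weights and input are invented for illustration.

```python
import numpy as np

# Hypothetical logistic-regression classifier: p(y=1 | x) = sigmoid(w.x + b).
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps):
    """Fast gradient sign method: x_adv = x + eps * sign(dL/dx).

    For cross-entropy loss with this model, the input gradient has
    the closed form dL/dx = (p - y_true) * w.
    """
    grad_x = (predict(x) - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([-0.3, 0.1, 0.3])          # made-up input with true label 0
print(f"clean prediction:       {predict(x):.3f}")        # ~0.38 -> class 0, correct
x_adv = fgsm(x, y_true=0.0, eps=0.25)
print(f"adversarial prediction: {predict(x_adv):.3f}")    # ~0.62 -> class 1, flipped
print(f"max perturbation: {np.max(np.abs(x_adv - x)):.2f}")  # bounded by eps
```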
Computing near the sensor is preferred over the cloud due to privacy and/or latency concerns for a wide range of applications including robotics/drones, self-driving cars, smart Internet of Things, and portable/wearable electronics. However, at the sensor there are often stringent constraints on energy consumption and cost in addition to the throughput and accuracy requirements of the application. In this talk, we will describe how joint algorithm and hardware design can be used to reduce energy consumption while delivering real-time and robust performance for applications including deep learning, computer vision, autonomous navigation/exploration and video/image processing. We will show how energy-efficient techniques that exploit correlation and sparsity to reduce compute, data movement and storage costs can be applied to various tasks including image classification, depth estimation, super-resolution, localization and mapping.
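To make concrete how exploiting sparsity cuts compute, the toy sketch below counts the multiply-accumulate (MAC) operations in a dot product when multiplications with a zero operand are skipped. This is a simplified software analogy for the zero-gating used in energy-efficient accelerators, not the speaker's hardware design.

```python
import numpy as np

rng = np.random.default_rng(0)

def sparse_dot(a, b):
    """Dot product that skips any multiply where either operand is zero,
    mirroring zero-gating in energy-efficient hardware.
    Returns (result, number of MACs actually performed)."""
    acc, macs = 0.0, 0
    for x, y in zip(a, b):
        if x != 0.0 and y != 0.0:
            acc += x * y
            macs += 1
    return acc, macs

n = 1000
a = rng.normal(size=n)
b = rng.normal(size=n)
a[rng.random(n) < 0.7] = 0.0   # ~70% sparse activations (e.g., after ReLU)
b[rng.random(n) < 0.5] = 0.0   # ~50% sparse weights (e.g., after pruning)

result, macs = sparse_dot(a, b)
print(f"dense MACs: {n}, performed MACs: {macs} "
      f"({100 * (1 - macs / n):.0f}% of multiplies skipped)")
assert np.isclose(result, a @ b)  # same answer, far fewer operations
```

Skipping the multiply saves not just the arithmetic but, in hardware, the associated data movement and storage accesses, which typically dominate energy cost.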