Why do people believe and share misinformation, including entirely fabricated news headlines (“fake news”) and biased or misleading coverage of actual events (“hyper-partisan” content)? The dominant narrative in the media and among academics is that we believe misinformation because we want to – that is, we engage in motivated reasoning, using our cognitive capacities to convince ourselves of the truth of statements that align with our political ideology rather than to uncover the truth. In a series of survey experiments using American participants, my colleagues and I challenge this account. We consistently find that engaging in more reasoning makes one better able to identify false or biased headlines – even for headlines that align with individuals’ political ideology. These findings suggest that susceptibility to misinformation is driven more by mental laziness and lack of reasoning than by partisan bias hijacking the reasoning process. We then build on this observation to examine interventions to fight the spread of misinformation. We find – given this smaller-than-believed role of partisan bias – that crowdsourcing can be a quite effective approach for identifying misleading news outlets and news content. We also demonstrate the power of making the concept of accuracy top-of-mind, thereby increasing the likelihood that people think about the accuracy of headlines before deciding whether to share them online. Our results suggest that reasoning is not held hostage by partisan bias; instead, our participants do have the ability to tell fake or inaccurate news from real news – if they bother to pay attention. Our findings also suggest simple, cost-effective behavioral interventions to fight the spread of misinformation.
How many Design Thinking workshops have you been to in the last 5 years? How many times have you seen the IDEO shopping cart video? User-Centered Design has changed how industry innovates and has taught us how to go beyond business needs and design for customer and user needs. But think about your favorite products: do they merely satisfy you as a customer or user? Or do they see into your life and fulfill you at a deeper level? We founded Human Element to go beyond users and to design for humans. In this talk, we will present our proprietary methodology, Whole Human Design, to show you how we do that.
It is an exciting time for computer vision. With the success of new computational architectures for visual processing, such as deep neural networks (e.g., ConvNets), and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. Even when no labeled examples are available, Generative Adversarial Networks (GANs) have demonstrated a remarkable ability to learn from images and can create nearly photorealistic images. The performance achieved by ConvNets and GANs is remarkable and constitutes the state of the art on many tasks. But why do ConvNets work so well? What is the nature of the internal representation learned by a ConvNet in a classification task? How does a GAN represent our visual world internally? In this talk I will show that the internal representations in both ConvNets and GANs can be interpretable in some important cases. I will then show several applications in object recognition, computer graphics, and unsupervised learning from images and audio.
The large amounts of both structured and unstructured data created in manufacturing and operations today present enormous opportunities to apply advanced analytics, machine learning, and deep learning. This talk will describe specific use cases in process control and optimization; yield prediction and enhancement; defect inspection and classification; and anomaly detection in time series data. Additionally, some of the unique manufacturing and operations challenges, such as class imbalance, concept drift, and complex multivariate time dynamics, will be described. This research has led to the creation of MIT MIMO (Machine Intelligence for Manufacturing and Operations), which will be described during this talk.
If AI succeeds in eclipsing human general intelligence within decades, as many leading AI researchers predict, then how can we make it the best rather than the worst thing ever to happen to humanity? I argue that this will require planning and hard work, and explore challenges that we need to overcome as well as exciting opportunities. How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today’s kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning, or getting hacked? How can we make machines understand, adopt, and retain our goals, and whose goals should they be? What future do you want? Welcome to the most important conversation of our time!
Artificial intelligence has the potential to radically reshape business and society, and to transform the way we work and live – unlike anything we’ve seen since the Industrial Revolution. Businesses that understand how to harness AI can surge ahead; those that neglect it will fall behind. Based on research gathered from 1,500 organizations and revealed in the book Human + Machine: Reimagining Work in the Age of AI, this talk will shed light on key research that is needed, how organizations are deploying AI to work with humans in fundamentally new ways, and how the “Missing Middle” is the secret to humans powerfully harnessing the opportunity and the promise of AI for greater good.
Construction Tech is one of the fastest growing areas of venture capital funding in the US. With over three billion dollars in investments over the past year, it is clear that Construction Tech will soon impact the ways we deliver buildings of all sizes. Moving forward, we need new, rich ideas in software development to solve many of the building industry’s toughest problems. The talk will present a framework for home delivery directly from computers. Larry will show how builders will design and construct buildings from digital files using systems similar to 3D printing.
We are currently at an inflection point as artificially intelligent (AI) systems gain capabilities to handle complex tasks in various domains. In this talk, I discuss how machine intelligence could be a direct and complementary extension of human intelligence. I investigate how computing, artificially intelligent systems, and the internet could be directly coupled with the human experience to augment and extend human cognition and abilities. The talk presents recent work on the AlterEgo system, a peripheral neural interface that enables people to silently and internally converse with machines – without voice or discernible movements – and discusses how the human-computer interface can for the first time become endogenous to the human user, changing our relationship with computing and thereby enabling people in new ways. Through the lens of extended computing, I discuss our work investigating AI systems functioning as complements to human cognitive abilities in pursuits as diverse as gene sequencing and human self-expression.
The remarkable progression of innovations that imbue machines with human and superhuman capabilities is generating significant uncertainty and deep anxiety about the future of work. Whether and how our current period of technological disruption differs from prior industrial epochs is a source of vigorous debate. But there is no question that we face an urgent sense of collective concern about how to harness these technological innovations for social benefit. To meet this challenge, the Institute launched the MIT Task Force on the Work of the Future in spring 2018.
The ever-increasing demand for mobile and wireless data has placed a huge strain on today’s WiFi and cellular networks. Millimeter wave frequency bands address this problem by offering multiple GHz of unlicensed bandwidth – 200 times more than the bandwidth allocated to today’s WiFi and cellular networks. In this talk, I describe the opportunities and challenges brought by this technology, and its applications in enabling untethered virtual reality headsets and high-throughput multimedia applications.