In recent years, great strides have been made to scale and automate Big Data collection, storage, and processing, but deriving real insight through relational and semantic data analysis still requires time-consuming guesswork and human intuition. Now, novel approaches designed across domains (education, medicine, energy, and others) have helped identify foundational issues in general data analysis, providing the basis for developing a “Data Science Machine,” an automated system for generating predictive models from raw data.
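As a rough, hedged illustration of what automated predictive-model generation from raw relational data might look like (this is a toy sketch, not the Data Science Machine itself; the tables, features, and model choice below are invented for illustration):

```python
# Minimal sketch: automatically derive aggregate features from a child table
# and fit a predictive model, with no hand-crafted feature guesswork.
# The customer/order data and the "churned" label are synthetic examples.
import pandas as pd
from sklearn.linear_model import LogisticRegression

customers = pd.DataFrame({"customer_id": [1, 2, 3], "churned": [0, 1, 0]})
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 3, 3],
    "amount": [20.0, 35.0, 5.0, 50.0, 45.0, 60.0],
})

# Enumerate simple aggregations over the customer -> orders relationship.
features = orders.groupby("customer_id")["amount"].agg(["count", "mean", "sum", "max"])
table = customers.join(features, on="customer_id").fillna(0)

# Fit a model on the automatically generated feature columns.
model = LogisticRegression().fit(table[["count", "mean", "sum", "max"]], table["churned"])
print(model.predict(table[["count", "mean", "sum", "max"]]))
```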
2016 MIT Information and Communication Technologies Conference
Encryption as a means of data control (privacy and security):
For a long time, interaction on the Web has been less private and secure than many end users expect and prefer. Now, however, the widespread deployment of encryption is helping to change that.
* Making encryption widespread. For years we have known how to do encryption, but it wasn't widely used because it wasn't part of overall system design. In response, particularly as we've become aware of capabilities for network-scale monitoring, standards groups including the IETF and W3C have worked to encrypt more network connections at the protocol- and API-design phase, and to make encrypted protocols such as HTTPS easier to deploy and use (a minimal deployment sketch follows this list). Encryption won't necessarily stop a targeted attack (attackers can often break end-user systems even when they can't brute-force the encryption), but it raises the effort required for surveillance and forces transparency on other network participants who want to see or shape traffic.
* Secure authentication. Too many of our "secure" communications are protected by weak password mechanisms, leaving users open to password-database breaches and phishing attacks. Strong new authentication mechanisms, being developed as web-wide standards, can replace the password, helping users and applications secure accounts more effectively. Strong, secure authentication will enable users to manage their personal interactions and data privacy, as well as secure commercial data exchange.
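The sketch below illustrates the kind of deployment step the first bullet describes: serving content only over TLS and sending an HSTS header so browsers keep using the encrypted connection. The certificate paths and port are placeholders, and this stands in for whatever web stack a real site actually uses.

```python
# Minimal sketch: serve only over TLS and advertise HSTS so clients
# refuse to fall back to plain HTTP. Certificate paths are placeholders.
import http.server
import ssl

class SecureHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Ask browsers to use HTTPS for this host for the next year.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        super().end_headers()

if __name__ == "__main__":
    httpd = http.server.HTTPServer(("0.0.0.0", 8443), SecureHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")  # placeholder files
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()
```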
Computer Science rests on an unphysical division between the description of a computation and its implementation. Many issues in computing, including its scalability, efficiency, and security, arise at that interface. I will introduce alternative approaches to aligning the representations of hardware and software, and explore some of the social and economic implications of programming atoms as well as bits.
This talk will begin by looking at predictions from the past about the future of work. Then it will focus on a promising new way to predict how work will be organized in the future: by thinking about how to create more intelligent organizations. Examples to be described include: studies of why some groups are smarter than others, studies of how people and machines together can do better than either alone, and ways to harness the collective intelligence of thousands of people to solve complex problems like climate change.
Imagine if our environment helped us to be more productive, to learn the most from our social interactions, and to find inspiration when we felt stuck. The Responsive Environments Group at the MIT Media Lab develops systems that connect ubiquitous sensors and computers through the IoT, allowing us to analyze and control networked devices and make them work in concert. The resulting interface can be considered an effective extension of the human nervous system, leveraging approaches including wearable electronics, sensor networks, and the discovery of latent dimensions in user preference for the design of intuitive lighting interfaces.
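As a generic illustration of the "latent dimensions in user preference" idea (this is a PCA sketch on synthetic data, not the group's actual method; the number of users, parameters, and dimensions are assumptions):

```python
# Minimal sketch: recover a few latent dimensions from user lighting-preference
# data, so an interface can expose two intuitive controls instead of dozens of
# raw fixture parameters. The preference matrix here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
# ratings: 200 users x 24 raw lighting parameters (intensity, color temp, ...)
ratings = rng.normal(size=(200, 24))

centered = ratings - ratings.mean(axis=0)
# SVD gives the principal directions; keep the top 2 as "latent dimensions".
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)
latent = centered @ components[:2].T   # each user's position on the 2 latent axes

explained = (singular_values[:2] ** 2) / (singular_values ** 2).sum()
print("latent coordinates shape:", latent.shape)
print("variance explained by 2 latent dimensions:", explained.sum())
```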
Uplevel Security: Enterprises invest millions in preventing and detecting cyber attacks but have limited technology capabilities for responding to attacks. Their current security infrastructure applies sophisticated algorithms to network and endpoint data to identify potentially malicious activity. However, the output of these appliances is an alert, an isolated data point without any surrounding context. Incident responders need to go through a manual, time-consuming process to reconstruct the original context and understand how an alert relates to their historical data and external threat intelligence. Uplevel automates incident response by applying graph theory to the technical artifacts of cyberattacks. This allows organizations to reduce response times and increase the efficiency of their analysts, thereby reducing their overall exposure risk.
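The following is a toy sketch of graph-based alert correlation in the general spirit described above (it is an illustration, not Uplevel's implementation; the alert records and artifact types are invented):

```python
# Minimal sketch: alerts and the artifacts they reference (IPs, hashes,
# domains) become graph nodes; shared artifacts link alerts, so a connected
# component approximates one incident together with its surrounding context.
import networkx as nx

alerts = {
    "alert-1": {"ip": "203.0.113.7", "hash": "abc123"},
    "alert-2": {"ip": "203.0.113.7", "domain": "bad.example"},
    "alert-3": {"hash": "ffee99"},
}

g = nx.Graph()
for alert_id, artifacts in alerts.items():
    g.add_node(alert_id, kind="alert")
    for kind, value in artifacts.items():
        artifact_node = f"{kind}:{value}"
        g.add_node(artifact_node, kind=kind)
        g.add_edge(alert_id, artifact_node)

# Each connected component groups alerts that share technical artifacts.
for component in nx.connected_components(g):
    related_alerts = sorted(n for n in component if n.startswith("alert-"))
    print("incident:", related_alerts)
```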
Yaxa: According to the 2015 Verizon Data Breach Investigations Report (DBIR), 95% of breaches happen due to stolen user credentials. When a legitimate user's login credentials are stolen, impostors (malicious outsiders) can use them to pose as insiders. Yaxa's in-line software appliance protects an enterprise's critical data-center assets and web applications from such insider threats in real time. Yaxa's unique user data-access fingerprint approach not only detects such bad actors but also takes automatic enforcement action according to configured IT policy, rather than merely generating an alert. Real-time detection of impostors and malicious users, coupled with automatic enforcement, yields large savings in investigation time and cost while improving an organization's risk posture.
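Below is a toy illustration of fingerprint-style detection with automatic enforcement in the general sense described above (it is not Yaxa's algorithm; the per-session data volumes, the z-score test, and the threshold are assumptions made for the example):

```python
# Toy sketch: build a per-user baseline of how much data they typically pull
# per session, then block (rather than merely alert on) sessions that deviate
# sharply from that fingerprint.
from statistics import mean, stdev

def build_fingerprint(history_mb):
    """history_mb: MB transferred in a user's past sessions."""
    return {"mean": mean(history_mb), "stdev": stdev(history_mb)}

def enforce(session_mb, fingerprint, threshold=3.0):
    """Return the action an in-line appliance might take for this session."""
    spread = fingerprint["stdev"] or 1.0
    zscore = (session_mb - fingerprint["mean"]) / spread
    return "block" if zscore > threshold else "allow"

history = [12, 9, 15, 11, 10, 13]               # one user's past sessions (MB)
print(enforce(14, build_fingerprint(history)))   # allow: consistent with baseline
print(enforce(900, build_fingerprint(history)))  # block: impostor-like bulk pull
```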
Recent advances in artificial intelligence and robotics are reshaping our thinking about the likely trajectory of occupational change and employment growth. Understanding the evolving relationship between computer capability and human skill demands requires confronting historical thinking about machine displacement of human labor and considering the contemporary incarnation of this displacement: the simultaneous growth of high-education, high-wage jobs and low-education, low-wage jobs.
Traditional applications of metal-organic frameworks (MOFs) are focused on gas storage and separation, which take advantage of the inherent porosity and high surface area of these materials. The use of MOFs in technologies that require charge transport has lagged behind, however, because MOFs are poor conductors of electricity. We show that design principles honed from decades of previous research in molecular conductors can be employed to produce MOFs with remarkable charge mobility and conductivity values that rival or surpass those of common organic semiconductors and even graphite. We further show that these ordered, crystalline conductors can be used for a variety of applications in energy storage, electrocatalysis, electrochromics, and selective chemiresistive sensing. Another virtually untapped area of MOF chemistry is their potential to mediate redox reactivity and heterogeneous catalysis through their metal nodes. We show that MOFs can be thought of as unique macromolecular ligands that give rise to unusual molecular clusters in which small molecules can react in a matrix-like environment, akin to the metal-binding pockets of metalloproteins. By employing a mild, highly modular synthetic method and a suite of spectroscopic techniques, we show that redox reactivity at MOF nodes can lead to the isolation and characterization of highly unstable intermediates relevant to biological and industrial catalysis, and to industrially relevant catalytic transformations that are currently performed only by homogeneous catalysts.
The birth of artificial-intelligence research as an autonomous discipline is generally thought to have been the month-long Dartmouth Summer Research Project on Artificial Intelligence in 1956, which convened 10 leading electrical engineers — including MIT’s Marvin Minsky and Claude Shannon — to discuss “how to make machines use language” and “form abstractions and concepts.” A decade later, impressed by rapid advances in the design of digital computers, Minsky was emboldened to declare that “within a generation ... the problem of creating ‘artificial intelligence’ will substantially be solved.”
The problem, of course, turned out to be much more difficult than AI’s pioneers had imagined. In recent years, by exploiting machine learning — in which computers learn to perform tasks from sets of training examples — artificial-intelligence researchers have built successful special-purpose systems that can interpret spoken language, play Atari games, or drive cars using vision.
But according to Tomaso Poggio, the Eugene McDermott Professor of Brain Sciences and Human Behavior at MIT, “These recent achievements have, ironically, underscored the limitations of computer science and artificial intelligence. We do not yet understand how the brain gives rise to intelligence, nor do we know how to build machines that are as broadly intelligent as we are.”
Poggio thinks that AI research needs to revive its early ambitions. “It’s time to try again,” he says. “We know much more than we did before about biological brains and how they produce intelligent behavior. We’re now at the point where we can start applying that understanding from neuroscience, cognitive science and computer science to the design of intelligent machines.”
In this talk I will focus on applying in situ transmission electron microscopy (TEM) and lab-on-a-chip to mechanistic investigations of energy materials. Recent advances in nano-manipulation, environmental TEM and MEMS have allowed us to investigate coupled mechanical and electrochemical phenomena with unprecedented spatial and temporal resolutions. For example, we can now quantitatively characterize liquid-solid and gas-solid interfaces at nanometer resolution for in situ corrosion, fatigue and hydrogen embrittlement processes. These experiments greatly complement our modeling efforts, and together they help provide insights into how materials degrade in service due to combined electrochemical-mechanical forces.