As autonomous systems move out of the research laboratory into operational environments, they require ever deeper connections to their surroundings. Traditional notions of full autonomy have led to “clockwork” approaches in which robots must be isolated from their human surroundings. Instead, we need precise, robust relationships with people and infrastructure. This situated autonomy appears in driverless cars' dependence on human-built infrastructure, the need for new systems of unmanned traffic management in the air, and the increasing importance of collaborative robotics in factories. How can we best design such systems to inhabit and enhance the human world? In this talk, David Mindell sketches a number of these emerging scenarios, traces new technologies to address the problems they raise, and envisions new approaches to human and robotic interaction that help people and robots work together safely and collaboratively.
Computational Imaging systems consist of two parts: the physical part, where light propagates through free space or optical elements such as lenses and prisms, finally forming a raw intensity image on the digital camera; and the computational part, where algorithms try to restore the image quality or extract other types of information from the raw intensity image data. Computational Imaging promises to solve the challenge of imaging objects that are too small, i.e. of size at about the wavelength of illumination or smaller; too far, i.e. with extremely low numerical aperture; too dark, i.e. at very low photon counts; or too foggy, i.e. when the light has to propagate through a strongly scattering medium before reaching the detector. In this talk I will discuss the emerging trend in computational imaging to train deep neural networks (DNNs) to attack this quartet of challenging regimes. In several imaging experiments carried out by our group, objects rendered “invisible” due to various adverse conditions such as extreme defocus, scatter, or very low photon counts were “revealed” after processing of the raw images by DNNs. The DNNs were trained from examples consisting of pairs of known objects and their corresponding raw images. The objects were drawn from databases of faces and natural images, with the brightness converted to phase through a liquid-crystal spatial phase modulator. After training, the DNNs were capable of recovering unknown objects, i.e. objects not presented during training, from the raw images, and recovery was robust to disturbances in the optical system, such as additional defocus or various misalignments. This suggests that DNNs may form robust internal models of the physics of light propagation and detection and generalize priors from the training set.
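To make the supervised setup concrete, the sketch below shows how paired training examples (known object, raw intensity image) might be synthesized. It assumes a deliberately simplified forward model, a 1-D defocus-like blur kernel plus photon noise, rather than the group's actual optical train and spatial light modulator; all kernel and photon-count values are illustrative.

```python
import random

def blur(signal, kernel):
    """1-D convolution standing in for defocus: each raw pixel mixes neighbors."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += k * signal[idx]
        out.append(acc)
    return out

def photon_noise(signal, photons_per_unit, rng):
    """Low-light detection: scale to expected photon counts, add shot noise.
    The stdlib has no Poisson sampler, so a Gaussian approximation N(lam, lam)
    is used here purely for illustration."""
    noisy = []
    for s in signal:
        lam = max(s, 0.0) * photons_per_unit
        noisy.append(max(0.0, rng.gauss(lam, lam ** 0.5)))
    return noisy

def make_training_pair(obj, rng):
    """(known object, simulated raw intensity image) pair for supervised training."""
    kernel = [0.05, 0.25, 0.4, 0.25, 0.05]  # assumed defocus kernel
    raw = photon_noise(blur(obj, kernel), photons_per_unit=20.0, rng=rng)
    return obj, raw

rng = random.Random(0)
obj = [rng.random() for _ in range(32)]  # stand-in for one "phase object"
clean, raw = make_training_pair(obj, rng)
print(len(clean), len(raw))  # → 32 32
```

A DNN would then be trained to map `raw` back to `clean` over many such pairs; the point of the sketch is only that the network never sees the forward model explicitly, just corrupted/uncorrupted pairs.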
Layer 2 is currently used as an umbrella term for all operations that are performed “off chain” and use blockchains to settle transactions. Much of this work builds on that of Tadge Dryja, a co-author of the Lightning Network paper, who continues to lead the DCI’s research in this area. The Lightning Network is one of the first applications of payment channels, and we’re confident we’ll see more. Another application we’ve been working on involves smart contracts. In order to create useful smart contracts, we need oracles: data feeds that verify real-world occurrences and submit this information in a format that can be used in a blockchain.
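A minimal sketch of the oracle idea, under loose assumptions: the oracle publishes an outcome together with an authentication tag over a canonical encoding, and a contract verifies the tag before acting on the data. Real oracle designs use public-key signatures (so anyone can verify without the secret); the HMAC here is only a stdlib stand-in, and the event name and key are hypothetical.

```python
import hmac, hashlib, json

ORACLE_KEY = b"demo-oracle-secret"  # stand-in for the oracle's signing key

def attest(event, outcome):
    """Oracle side: publish an outcome with a verifiable tag."""
    msg = json.dumps({"event": event, "outcome": outcome}, sort_keys=True).encode()
    tag = hmac.new(ORACLE_KEY, msg, hashlib.sha256).hexdigest()
    return {"event": event, "outcome": outcome, "tag": tag}

def verify(attestation, key=ORACLE_KEY):
    """Contract side: check the tag before acting on the reported data."""
    msg = json.dumps({"event": attestation["event"],
                      "outcome": attestation["outcome"]}, sort_keys=True).encode()
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["tag"])

a = attest("BTCUSD-close", "42000")
print(verify(a))  # → True
```

The canonical `sort_keys=True` encoding matters: oracle and contract must serialize the event identically, or valid attestations will fail verification.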
With the proliferation of commercial wearable devices, we are now able to obtain unprecedented insight into the ever-changing physical state of our bodies. These devices allow real-time monitoring of biosignals that can generate actionable information, enabling optimized interventions to avoid injury and enhance performance. Combat and medical planners across all military services are keenly interested in harnessing wearable sensor advances to diagnose, predict, and improve warfighter health and performance. However, moving from civilian promise to military reality is complex, with unique requirements for hardware design, real-time networking, data management, cybersecurity, predictive model building, and decision science. Emerging technologies for military on-the-move monitoring will be highlighted, along with a discussion of an integrated open systems architecture approach for functional evolution.
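As a toy illustration of turning a biosignal stream into actionable information, the sketch below flags samples that deviate sharply from a rolling baseline. The window size, threshold ratio, and simulated heart-rate values are all assumptions for illustration, not parameters from any fielded monitoring system.

```python
from collections import deque

def monitor(stream, window=5, threshold=1.5):
    """Flag samples exceeding `threshold` times the rolling-mean baseline.

    A stand-in for real-time wearable monitoring: returns (time, value)
    alerts a downstream decision system could act on.
    """
    recent = deque(maxlen=window)
    alerts = []
    for t, value in enumerate(stream):
        if len(recent) == window:
            baseline = sum(recent) / window
            if baseline > 0 and value / baseline > threshold:
                alerts.append((t, value))
        recent.append(value)
    return alerts

# Simulated heart-rate stream with one abrupt spike at t=8.
hr = [62, 63, 61, 64, 62, 63, 62, 61, 120, 63]
print(monitor(hr))  # → [(8, 120)]
```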
Digital fabrication and computational materials are enabling the design and manufacturing of objects that are mass-customizable, interconnected, and can fundamentally adapt to users’ needs and requirements. This talk will present a series of research projects and technologies that push the boundaries of how materials and computers can be intertwined to create new products and experiences — from the nanoscale to a stadium, from a single person to a crowd — and that redefine how we perceive and interact with the physical world.
The demand for materials, particularly minerals and metals, has experienced exceptional growth in recent decades. In parallel, the costs of the corresponding processing technologies have reached levels that are unsustainable for most countries. Increasing access to cost-effective and clean electricity sets the stage for novel processes that can match new expectations from society. In this context, recent research and development results pertinent to materials processing are presented, in particular for oxides and sulfides. In parallel, novel experimental methods and predictive capacity for high-temperature systems are shown, paving the way to transformative processes and materials.
Artificial intelligence is being embedded into products to save people time and money. Experts in many domains have already begun to see the results of this, from medicine to education to navigation. But these products are built using an army of data scientists and machine learning experts, and the rate at which these human experts can deliver results is far lower than the current demand. My lab at MIT, called Data to AI, wanted to change this. Recognizing the human bottleneck in creating these systems, a few years ago we launched an ambitious project: we decided “to teach a computer how to be a data scientist.” Our goal was to create automated systems that can ask questions of data, come up with analytic queries that could answer those questions, and use machine learning to solve them — in other words, all the things that human data scientists do. After much research and experimentation, the systems we have developed now allow us to build end-to-end AI products that can solve a new problem in one day. In this talk, I will cover what these new technologies are, how we are using them to accelerate the design and development of AI products, and how you can take advantage of them to actually build AI products faster and cheaper.
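The "computer as data scientist" idea can be sketched in miniature: automatically fit every candidate model on a training split, score each on a holdout split, and keep the best. This is a deliberately tiny stand-in for the lab's actual systems; the candidate models and data are invented for illustration.

```python
def mean_model(xs, ys):
    """Baseline candidate: always predict the training mean."""
    m = sum(ys) / len(ys)
    return lambda x: m

def linear_model(xs, ys):
    """Candidate: closed-form 1-D least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b = my - a * mx
    return lambda x: a * x + b

def auto_select(xs, ys, candidates):
    """'Automated data scientist' in miniature: fit each candidate on a
    train split, score on a holdout split by mean squared error, return
    the winner's name and fitted predictor."""
    cut = int(0.7 * len(xs))
    tr_x, tr_y, te_x, te_y = xs[:cut], ys[:cut], xs[cut:], ys[cut:]
    best_name, best_fn, best_err = None, None, float("inf")
    for name, fit in candidates:
        predict = fit(tr_x, tr_y)
        err = sum((predict(x) - y) ** 2 for x, y in zip(te_x, te_y)) / len(te_x)
        if err < best_err:
            best_name, best_fn, best_err = name, predict, err
    return best_name, best_fn

xs = list(range(10))
ys = [2 * x + 1 for x in xs]  # a clean linear relationship
name, model = auto_select(xs, ys, [("mean", mean_model), ("linear", linear_model)])
print(name)  # → linear
```

Real automated systems extend this loop far beyond two candidates, to generating the analytic questions and feature pipelines themselves, but the select-by-holdout-error skeleton is the same.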