Low Power/Edge Computing

November 5, 2020
Webinar


Location

Zoom Webinar

Education Partner

MIT Professional Education


Overview

State-of-the-art information and communication technologies have become essential across industries as the world grows increasingly interconnected and data-driven, a trend further accelerated by the COVID-19 pandemic. Where is the digital frontier today, and what lies ahead? The annual MIT Information & Communication Technologies (ICT) event explores the latest research from across the Institute and its potential impact across industries. The webinar series features three sessions by six MIT faculty on the following topics: wireless communications, low power/edge computing, and urban infrastructure. A fourth session will feature MIT-connected startups presenting on the same topics.

ICT Webinar Series

 



Agenda

11:00am

Welcome and Introduction
11:05am

The Extreme Materials Revolution: From Computers in Venus to Synthetic Cells
Tomás Palacios
Professor in the Department of Electrical Engineering and Computer Science

Tomás Palacios is a Professor in the Department of Electrical Engineering and Computer Science at MIT. He received his PhD from the University of California, Santa Barbara in 2006, and his undergraduate degree in Telecommunication Engineering from the Universidad Politécnica de Madrid (Spain). His current research focuses on new electronic devices and applications for novel semiconductor materials such as graphene and gallium nitride. His work has been recognized with multiple awards, including the Presidential Early Career Award for Scientists and Engineers; the IEEE George E. Smith Award in 2012 and 2019; and the NSF, ONR, and DARPA Young Faculty Awards, among many others. Prof. Palacios is the founder and director of the MIT MTL Center for Graphene Devices and 2D Systems, as well as the Chief Advisor and co-founder of Cambridge Electronics, Inc. He is a Fellow of the IEEE.

The end of traditional transistor scaling brings unprecedented opportunities to semiconductor devices and electronics. In this new era, heterogeneous integration of new materials becomes key to adding new functionality and value to electronic chips. This talk will review examples of these opportunities, including (1) gallium nitride vertical power transistors and CMOS logic for a much more efficient electric grid; (2) one-layer-thick molybdenum disulfide Wi-Fi energy harvesters to enable ubiquitous electronics; (3) high-temperature CMOS technology to power future missions to Venus; and (4) a new generation of cell-sized autonomous electronic microsystems to revolutionize environmental monitoring and healthcare. The seminar will conclude with a reflection on how the democratization of heterogeneous integration and the unique properties of extreme materials will transform our society, just as Moore’s law has done for the last 50 years.

11:50am

MCUNet: TinyNAS and TinyEngine for Efficient Deep Learning on Microcontrollers
Song Han
Assistant Professor, Department of Electrical Engineering and Computer Science

Song Han is an assistant professor in MIT’s Department of Electrical Engineering and Computer Science. He received his PhD from Stanford University. His research focuses on efficient deep learning computing. He proposed the “deep compression” technique, which can reduce neural network size by an order of magnitude without losing accuracy, and its hardware implementation, the “efficient inference engine,” which first exploited model compression and weight sparsity in deep learning accelerators and influenced commercial AI chips designed by NVIDIA, Xilinx, Samsung, MediaTek, and others. His recent work on hardware-aware neural architecture search was highlighted by MIT News, Qualcomm News, VentureBeat, and IEEE Spectrum; integrated into PyTorch and AutoGluon; and received multiple low-power computer vision contest awards at flagship AI conferences (CVPR’19, ICCV’19, and NeurIPS’19). Song received Best Paper awards at ICLR’16 and FPGA’17, as well as the Amazon Machine Learning Research Award, the SONY Faculty Award, and the Facebook Faculty Award. He was named one of the “35 Innovators Under 35” by MIT Technology Review for his contribution to the “deep compression” technique, which “lets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.” He received the NSF CAREER Award for “efficient algorithms and hardware for accelerated machine learning.”

Machine learning on tiny IoT devices based on microcontroller units (MCUs) is appealing but challenging: the memory of a microcontroller is two to three orders of magnitude smaller than that of a mobile phone, not to mention a GPU. I will introduce key technologies for neural network optimization on IoT devices, including model compression (pruning, quantization), neural architecture search, and compiler/runtime optimizations. Building on these, we propose MCUNet, a framework that jointly designs an efficient neural architecture (TinyNAS) and a lightweight inference engine (TinyEngine). MCUNet automatically designs a matched neural architecture and inference library for the MCU, enabling ImageNet-scale inference on microcontrollers with only 1 MB of flash and 320 KB of SRAM. It achieves significant speedups over existing MCU libraries such as TF-Lite Micro, CMSIS-NN, and MicroTVM. Our study suggests that the era of tiny machine learning on IoT devices has arrived.
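To make the compression ideas in the abstract concrete, here is a minimal pure-Python sketch of two of the techniques it names, magnitude pruning and symmetric int8 quantization. This is an illustration only, not MCUNet’s actual TinyNAS/TinyEngine code; the weight values and the 50% sparsity target are invented for the example.

```python
def magnitude_prune(weights, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of weights (unstructured pruning).
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_int8(weights):
    # Symmetric linear quantization: map floats onto integers in [-127, 127]
    # using a single shared scale factor.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

weights = [0.9, -0.05, 0.4, -0.7, 0.02, 0.1]     # toy example values
pruned = magnitude_prune(weights, sparsity=0.5)  # small weights become 0
ints, scale = quantize_int8(pruned)              # one byte per weight
approx = [i * scale for i in ints]               # dequantized reconstruction
```

Pruning zeroes the smallest-magnitude weights so a sparse storage format can skip them; quantization then stores each surviving weight in one byte plus a shared scale, a 4x reduction over float32. Savings of this kind are what matter when total SRAM is measured in hundreds of kilobytes.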
