As a strategy to reduce the cost of expensive substrates in semiconductor processing, a technique called “layer transfer” has been developed. To achieve real cost reduction via layer transfer, the following must be ensured: (1) reusability of the expensive substrate, (2) minimal substrate refurbishment after layer release, (3) a fast release rate, and (4) precise control of the released interface. Although a number of layer-transfer methods have been developed, including chemical lift-off, optical lift-off, and mechanical lift-off, none of these three fully satisfies the conditions listed above. In this talk, we will discuss our recent development of a “graphene-based layer-transfer” process that could fully satisfy these requirements, in which epitaxial graphene serves both as a universal seed layer for growing single-crystalline GaN, III-V, II-VI, and IV semiconductor films and as a release layer that allows precise and repeatable release at the graphene surface. We will further discuss cost-effective, defect-free heterointegration of semiconductors using graphene-based layer transfer.
Lastly, I will introduce our new research activities in developing advanced RRAM devices. Resistive switching devices have attracted tremendous attention due to their high endurance, sub-nanosecond switching, long retention, scalability, low power consumption, and CMOS compatibility. RRAMs have also emerged as a promising candidate for non-von Neumann computing architectures based on neuromorphic and machine-learning systems to deal with “big data” problems such as pattern recognition across large data sets. However, currently reported RRAM devices have not shown uniform switching behavior across devices together with a high on/off ratio, which has held up commercialization of RRAM-based data storage as well as demonstration of large-scale neuromorphic functions. Recently, we redesigned our RRAM devices, and the new device structure exhibits most of the functions required for large-array memories and neuromorphic computing: (1) excellent retention with high endurance, (2) excellent device uniformity, (3) a high on/off current ratio, and (4) current suppression in the low-voltage regime. I will discuss the characterization results of this new RRAM device.
The MIT Center for Advanced Virtuality (MIT Virtuality for short) pioneers innovative experiences using technologies of virtuality — computing systems that construct imaginative experiences atop our physical world. Our approach to engineering and creative practices pushes the expressive potential of technologies of virtuality and simulates social and cognitive phenomena, while intrinsically considering their social and cultural impacts. This talk focuses on an important aspect of such technologies: virtual selves. Indeed, nearly everyone these days uses virtual identities, ranging from accounts for social media and online shopping to avatars in videogames or virtual reality. Given the widespread and growing use of such technologies, it is important to better understand their impacts and to establish innovative best practices. In this talk, Harrell explores how our social identities are complicated by their intersection with extended reality technologies, videogames, social media, and related digital media forms. With an emphasis on equity, Harrell will explore how virtual identities both implement and transform persistent issues of class, gender, sex, race, and ethnicity, and how social categories are dynamically constructed more generally.
Want some good news about the environment? In America, we have finally learned to grow our economy while taking less from the Earth year after year: less water, timber, and metal; fewer minerals and resources; even less energy. This talk is a show and tell about this profound change. Andy McAfee will show the evidence that we've started getting more from less and tell how it happened. The unlikely heroes of the tale are the cost pressures that come from intense competition and the powerful digital tools that reduce the need for resources. In short, prices and processors are now letting us tread more lightly on the Earth. The story is full of surprises and insights. In particular, it gives us a playbook for dealing with the major challenges still ahead of us: global warming, pollution, and species loss.
Currently, medical images require a physician to extract clinically relevant information. This talk will explore current work toward making images part of the quantitative medical history and enabling large-scale image-based studies of disease. Although large databases of clinical images contain a wealth of information, medical acquisition constraints result in sparse scans that miss much of the anatomy. These characteristics often render computational analysis impractical, as standard processing algorithms tend to fail when applied to such images. Our goal is to enable the application of existing algorithms, originally developed for high-resolution research scans, to severely undersampled images. Application of our method is illustrated in the context of neurodegeneration and white matter disease studies in stroke patients.