4.13.22-Build.nano-Ricardo-Alvarez

Conference Video|Duration: 24:34
April 13, 2022

    SPEAKER 1: So, hello, everybody. Thanks so much for having me here. I have the honor of being the last speaker, so I'll try to be brief and concise. I can't promise I'll be successful at it.

    So what I want to talk about is sensing, and economies of scale, in cities. How is it that we move from something so small-- we talk about the purpose of this conference, about nanotechnologies-- and take that, scale it, and extrapolate it to very large systems, at urban scale?

    So I work at the Senseable City Lab, and basically what we do in the lab is research and think about digital cities: how cities are transforming because of digital technologies. And to a certain extent, this is because we're fundamentally a digital society today. We moved from industrial to post-industrial to information and knowledge societies. And cities, as a human invention, sort of reflect that. We're at a moment in time where we're putting digital sensors and microprocessors and telecommunications into everything, and sort of bringing it all together to make sense of it.

    And in no small part, this is due to nanotech, due to semiconductors, due to our capacity to shrink. If we think about digital technologies over time, we moved from mainframes to computers to laptops to mobiles to wearables. Now we're talking about smart dust, and we keep making them smaller and smaller and smaller, and more and more powerful.

    So when we think about sensing at scale-- to be honest, it's all about density. How many sensors can we put in place, what types of sensors can we gather data from, and what can the information those sensors yield tell us about hidden complexities-- I'm going to say invisible realities-- in very large ecosystems, such as cities?

    In order to think about it from an urban perspective, we have to think about place as a function of space and information. We're thinking about a place, any place-- it can be a street, a shopping mall, a park. There are layers of encoded information that happen within the built environment-- energy, human activity, et cetera. And all of that layered information is what makes that place a place.

    So we live in a fairly unique moment in time, where we now have both the sensing capabilities and the computational capabilities to acquire massive amounts of data, and then make sense out of that data. So I'm going to go over an example to try to bring it down to Earth. What the hell does all of this mean?

    So let's take one system that's pretty popular, as you can see in the picture. And that is light. So we think about streetlights. They've been around for a long time, actually-- public illumination systems have been around for over 900 years. Cordoba, London's Pall Mall, the arc lamp, even today. They're everywhere. We really, really like our streetlights.

    But if we think about the vector of innovation throughout the history of public illumination systems, it's always been about light as a single function. How do we get a better light, how do we get a cheaper light? We moved from oil to whale oil to gas to arc light to halogen to high-pressure sodium systems, and now we're moving into LEDs. And it's all about how we get it better and cheaper. Except that when we think about the adoption of LEDs as a digital system, we're talking about not only light, but data. And when we see the system-- in this case streetlights-- not only from the perspective of light, but from the perspective of information, then we have a much more interesting conversation.

    So what we're really talking about is progressively transforming our streetlights into multi-sensing devices, placed at high density in cities, generating all sorts of data. Under two very simple principles: number one, from one sensor's data I can have multiple uses, multiple purposes. And number two, I can recombine the data coming out of those sensors for a wide variety of applications. And these lists can just keep going and going. The truth is that streetlights, in the present and in the near future, will tend to become like many other urban infrastructure systems: multi-functional systems.

    They're not only about light anymore. It's about light and applications and interaction with other urban systems. And it is the density, and our computing power, that makes it possible. So I'm going to give you just a couple of examples. If we think about the density at which they are placed, they are typically placed in our cities at distances of 60 to 90 feet apart. And all over the world-- everywhere you go, you're going to see public illumination systems. It's actually one of the first infrastructures that people request as they're settling in a place. So in the US we have about 26 million streetlights. Globally we have about 300 million plus.

    So one project that we worked on at the lab is: OK, how can we leverage streetlights to create an intelligent, human-centered, dynamic street crossing? So we think about a street crossing-- let's take Mass. Ave and Vassar Street, right next door. We created a modular platform of sensors, using everything from cameras to air-quality sensors to thermal, so that we could quantify human behavior while maintaining human privacy. And here you have some of those sensors, mounted on street poles and actually gathering data: air quality, PM, CO concentration levels.

    And then from these we use artificial intelligence-- deep learning, convolutional neural networks-- to do things like look at cars, track them, classify the type of vehicle, and look at their change in velocity. From the type of vehicle we can understand the types of engines, and then, when we correlate the presence of vehicles in the street crossing with the presence of humans in the street crossing, what we actually do is take that data and model a better intersection with real data.
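The tracking-and-velocity step described above reduces to simple geometry once a detector has produced calibrated centroids per frame. A minimal sketch-- the `Detection` class, the calibration to metres, and all names are illustrative assumptions, not the lab's code:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    t: float  # timestamp in seconds
    x: float  # centroid x in metres (after camera calibration)
    y: float  # centroid y in metres

def speeds(track):
    """Finite-difference speed (m/s) between consecutive detections of one tracked vehicle."""
    out = []
    for a, b in zip(track, track[1:]):
        dt = b.t - a.t
        out.append(((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5 / dt)
    return out

# a vehicle detected once per second, speeding up through the crossing
track = [Detection(0.0, 0.0, 0.0), Detection(1.0, 5.0, 0.0), Detection(2.0, 12.0, 0.0)]
print(speeds(track))  # accelerating from 5 m/s to 7 m/s
```

A change in these per-vehicle speeds, correlated with pedestrian presence, is the signal fed back into the intersection model.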

    So the system might look like it's behaving the same, but what we're actually doing here is taking the data and inducing a shift in the traffic light behavior. For the driver, nothing changes-- the driver is going to see a light going from red to green and then back to red. But the dynamic shift in timing actually allows us to reduce the queue length and increase the overall speed of the traffic system, making it more efficient, with a drastic reduction in particulate-matter emissions, which is very important for health. And this can have massive consequences in terms of the traffic optimization of cities, in hours wasted by people in traffic.
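One simple way to picture that timing shift is green time reallocated in proportion to measured queue lengths. This is an illustrative toy, not the lab's actual control model:

```python
def split_green(cycle_s, queues):
    """Allocate a fixed signal cycle's green time across approaches,
    in proportion to the queue length measured at each approach."""
    total = sum(queues.values())
    return {approach: cycle_s * q / total for approach, q in queues.items()}

# invented queue counts for the two approaches at the intersection
print(split_green(60, {"Mass Ave": 18, "Vassar St": 6}))  # the busier approach gets proportionally more green
```

A real controller would add minimum green times, pedestrian phases, and safety constraints on top; the point is only that the sensed queues drive the timing.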

    So now we're moving from understanding what a collection of single sensors can do, to understanding the meaning of the data those sensors can gather in space, and then recombining them for a very specific interpretation, with feedback loops, to create urban systems. Another example is: can we use streetlights to understand and improve on-street parking? Parking is a big problem, if we think about it. There was a researcher at UCLA, Don Shoup, and he measured that in the United States, about 30% of the time people spend behind the wheel, they spend cruising for parking in city districts. So that's a lot of time.

    But not only that-- a car being used today actually spends around 95% of its time parked. In the US, on average, there are about eight parking spots for each vehicle. And for as much as we love to talk, especially here at MIT, about autonomous vehicles, the fact is that people love their cars. They're not going away soon, for as much as we love to talk about that future. To the point where in the US we have about 264 million cars, and about 1.2 billion worldwide. Just the land allocated for parking needs here in the United States is equivalent to 190 times the size of Manhattan. That's the scale of the problem. That's a lot of land. And if we're talking about economies of scale, think about the value of that land.

    And the funny part is how we work around the problem today. We have an invention called the parking meter, invented in 1935 in Oklahoma, which allows us to quantify the use of parking at the resolution of a parking slot, typically standardized to about 18 feet. So the problem is that it doesn't matter if you drive a Cadillac Escalade or a Fiat Cinquecento or a Smart-- you're going to be using a full slot of space. And you're going to be wasting a lot of space, because the standardization of the problem inherently generates a systematic inefficiency.
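The inefficiency of the fixed 18-foot slot is easy to put numbers on. The vehicle lengths below are rough ballpark figures, used only for illustration:

```python
SLOT_FT = 18.0  # the standardized slot length mentioned in the talk

# approximate vehicle lengths in feet -- illustrative figures, not measurements
cars = {"Fiat 500": 11.7, "Smart Fortwo": 8.8, "Cadillac Escalade": 17.7}

for name, length_ft in cars.items():
    wasted = 1 - length_ft / SLOT_FT  # fraction of the slot left unused
    print(f"{name}: {wasted:.0%} of the slot wasted")
```

A small car leaves roughly a third of its slot empty; the standardization bakes that waste in regardless of who parks.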

    So think about it in terms of land-- land has value, particularly if you think about it as a dynamic public asset. In Cambridge, there are over 3,000 parking spots. In New York, there are over 65,000 parking spots. And if you look at it from the land-value perspective, it's millions and millions of dollars in assets. So it's not only a revenue generator for the city; the fact that we only use that land for parking is also a failure of imagination about what other value-- not only economic but social-- we could generate out of that land. If only we could measure it better.

    So what we did is build an architecture of sensors across parking, where we observed cars, classified them, but also measured them bumper to bumper, just trying to understand the parking efficiency levels. And here you can actually see the rates at which cars enter and exit, and the net measurement below of the percentage of each parking spot that cars are actually using. And when we aggregate that, we have what we call a parking EKG, where the gray part of the graph is the actual net space being used, and all the orange is the wasted space. And we waste a lot. Here in Cambridge, the average space use, depending on time of day, ranges from 49.6% all the way to 67.9% at different spots. But more so, even when all parking is full-- when all slots are filled-- the net usage only ranges from 63.2% to 81.9%. So that's wasted space. When we multiply by the available number of slots, that's a lot of wasted space.
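The aggregation behind that "parking EKG" reduces to one ratio: kerb length actually covered by cars over total slotted length. A sketch with made-up numbers (the function name and vehicle lengths are illustrative):

```python
def net_utilization(occupied_ft, n_slots, slot_ft=18.0):
    """Fraction of the total slotted kerb length actually covered by parked cars.
    occupied_ft: bumper-to-bumper length of each parked vehicle, in feet."""
    return sum(occupied_ft) / (n_slots * slot_ft)

# three full 18-ft slots holding a small car, a large SUV, and a mid-size sedan
print(net_utilization([11.7, 17.7, 15.0], 3))  # the remainder is the orange "wasted" band
```

Even with every slot occupied, the net utilization stays well below 100%-- exactly the gap between the full-slot measurement and the bumper-to-bumper one.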

    When we contrast the methods of measurement-- full slot, in orange, versus net space utilization-- you can also see that there's a fairly wide margin in the resolution and precision at which we can observe the system. Not only that: because in our case it's an optical, unstructured-video-based system, we can also look at other types of patterns. For example, overstay. We can look at slot invasion-- there are some people who simply use two slots, and they don't even notice.

    And if we think about it in terms of coverage-- these are all the on-street parking spots in Cambridge, Massachusetts. If we look at the potential coverage of streetlights, and we zoom in and zoom in, this is a 30-meter radius, which is what we tested on the cameras. We actually have about 96.7% coverage overlap. So it's two systems that are perfectly aligned.
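That coverage-overlap figure comes down to a point-in-radius test: is each parking spot within the tested 30-meter camera radius of at least one pole? A minimal version, with invented coordinates in metres:

```python
def coverage(spots, lights, radius_m=30.0):
    """Fraction of parking spots within radius_m of at least one streetlight.
    spots, lights: lists of (x, y) coordinates in metres."""
    def covered(spot):
        return any(((spot[0] - lx) ** 2 + (spot[1] - ly) ** 2) ** 0.5 <= radius_m
                   for lx, ly in lights)
    return sum(covered(s) for s in spots) / len(spots)

lights = [(0.0, 0.0), (50.0, 0.0)]              # two poles along a street
spots = [(10.0, 0.0), (40.0, 0.0), (100.0, 0.0)]  # three kerb spots
print(coverage(spots, lights))  # the far spot falls outside every 30 m radius
```

Run over every pole and every on-street spot in the city, the same test yields the overlap percentage quoted above.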

    So, again, this is thinking about the problem from the simple scale of the sensor, to the logic of interpretation of the data, and then scaling it up to an urban system. And the results are staggering. For those of you who are familiar with MIT lore, you'll recognize this picture of Oliver Smoot, who in October 1958 lay down on the Harvard Bridge, and they measured the Harvard Bridge in Smoots: 364.4 Smoots, plus or minus an ear. So if we think about the wasted space in Cambridge, we're talking about over 3,000 Smoots of wasted space.

    So in conclusion, why does this matter? Is it about streetlights? Is it about parking? No, it's not. It's about how we can merge technologies at scale to create enough sensing density to decode invisible realities in space, so that we may create better spaces. How can we learn from these experiments on streetlights to extrapolate to other smart infrastructure systems? What does this study on streetlights mean for waste, electricity, power systems, roads, et cetera? We tend to create urban systems that are mono-functional, but in our digital present, they're turning into multi-functional systems.

    And then finally, in accordance with this conference: what is the role of nanotech as a key component in helping achieve that at scale? And this, of course-- in our field, when we look at cities-- has meaning not only in terms of new experiences, new measurements, new operations; the type of systems that we create eventually become institutions. Or as Langdon Winner puts it, "In our time techne has become politeia-- our instruments are institutions in the making."

    So when we talk about the breadth and depth of a fundamental building block of tomorrow's technology-- and we're talking about nanotech as one of those-- it's important not to think about it only in terms of the unique applications that we get creative about and cook up in the labs, but about what it means to actually have those inventions performing at scale: not only in function, but also in terms of our human institutions. Thank you very much.

    SPEAKER 2: We do have a question, a couple of questions in the system. What are the current solutions to prevent privacy-related issues?

    SPEAKER 1: Very important. Super important. Actually, there's a lot of them. One of the things that we did for these experiments was a series of processes that we call privacy by design. So in fact, if you looked at the multi-sensor nodes that we built, we actually embedded computational processing capabilities within the cameras, so that all the processing was done at the edge. What that allows us to do, for example when it comes to video, is to still take video or images-- but instead of sending them to the cloud and storing those images, which are privacy vulnerabilities, we process them on the edge, extract the measurement data, and then flush all the raw data. So that there is no data, number one.
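The edge-processing pattern-- derive the measurement on the node, then flush the raw frame-- can be sketched like this. `count_vehicles` is a stand-in for the on-device detector, and the list-of-labels "frame" is a toy stand-in for pixel data:

```python
def count_vehicles(frame):
    # stand-in for the on-device detector; a real node would run a CNN here
    return sum(1 for label in frame if label == "car")

def process_at_edge(frame):
    """Only the derived measurement leaves the node; the raw frame is flushed on-device."""
    measurement = {"vehicle_count": count_vehicles(frame)}
    frame.clear()  # discard raw data before anything is transmitted or stored
    return measurement

frame = ["car", "road", "car", "tree"]
print(process_at_edge(frame))  # only the aggregate count survives
print(frame)                   # nothing left to leak
```

The privacy property comes from the order of operations: extraction happens before transmission, so there is no stored image to breach.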

    Number two, there's a lot that you can do with discipline. A lot. For example, you don't need ultra-high-definition, high-frame-rate video to monitor a system such as parking. Parking is a slow-moving system. So actually, if you're disciplined enough and do the work-- which we did-- you're able to cut video from the traditional 30 frames per second all the way down to one frame every 15 seconds, and still be able to measure within acceptable latency.
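That frame-rate discipline is worth quantifying: dropping from 30 fps to one frame every 15 seconds cuts the frame count by a factor of 450, before any resolution savings.

```python
full_rate = 30.0          # frames per second, conventional video
reduced_rate = 1 / 15.0   # one frame every 15 seconds
reduction = full_rate / reduced_rate
print(reduction)          # a ~450x cut in frames to process, transmit, and store
```

Every downstream cost-- edge compute, telecommunications, storage, and privacy exposure-- shrinks by roughly the same factor, which is the compounding the speaker describes next.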

    Now, that has several advantages. In terms of privacy, you have way less chance of invading privacy in your data-gathering, your data-collection processes. But it also has net advantages in terms of the computation required to process the data, and in terms of telecommunications and storage, if you go down that route. So it all compounds, and there's a series of things you can do if you embed privacy by design.

    Number three, you can get really creative in the choice of sensors that you pick. For example, at the intersection where we were looking at pedestrians, we actually didn't use regular cameras-- we used thermal cameras. And in order to encode privacy, we didn't use machine vision to process the video from the thermal. We actually used a process called Radon transforms, precisely to avoid all forms of detectable and identifiable features of humans.

    So, yes, that is very important, and it's increasingly more important-- and it's actually enforced by regulation as well. But nothing, at the end of the day, will beat having a sensible conversation with the community about what you are sensing and what you're going to be using it for. We presented this project at several public hearings over here at Cambridge City Hall. Because when people actually understand, that generates a different type of conversation about the benefits versus the cons of the data that you're gathering.

    SPEAKER 2: Thank you. Another question. What new insights may emerge from knowing about the activity of social interactions and our cities?

    SPEAKER 1: Oh my God, that is a very long question. Yeah-- so when it comes to urban planning, human behavior has a very deep impact on city form, urban form. There's actually an interrelationship between human behavior and technology. Technology has a deep impact on human behavior, and that relates to urban form. In fact, you cannot explain the form of the 20th-century city without understanding the changes in behavior due to technology. You cannot explain it without the automobile, the electric grid, the telephone, and the elevator. Try to explain Manhattan without the elevator, and you're going to have a hard time, right?

    So the fact of the matter is that it's always an iterative process, where you inject a new technology and people tend to adopt it on their own terms-- very often far different from what the inventors intended. And then you have to take a step back, observe, have a conversation with the community, and morph your changes. Sometimes they happen organically, but what we are finding more and more is that there are very discernible patterns at scale, which can be used as fundamental principles for better urban design.

    For a long time, those were big problems to tackle because of the scale. But with the computational methods that we have today, we can decode those invisible patterns of great complexity, and now feed them into urban design.

    SPEAKER 2: One more question. How would a parking system that collaborates with streetlight sensor networks function?

    SPEAKER 1: Yes. So for that project in particular, our goal was actually to demonstrate a principle: when you integrate digital technologies with a mono-functional infrastructure, you transform it into a multi-functional infrastructure. And if you have a multi-functional urban system that was created by an institution that only does one function, then you probably need to change the institution as well.

    So here in Cambridge, streetlights are the realm of the city electrician. And this is a guy who's been waking up, and discussing, and talking-- he's an expert at streetlights and poles and lumens, et cetera. But if you get into this reality of streetlights gathering information and working across other urban systems, then there's a bit of a mismatch. So how do you make it work?

    So in this case, what we did is use the density of the street poles to gather data, to help us measure and operate a system that falls outside of their operational jurisdiction. In this case, it's not about light, it's about parking spots. And there are ways in which you can relay that information at different degrees of latency. For a research study, you aggregate the results and do the measurements like I showed here, and then you're able to prove to the city the degree of efficiency-- or lack thereof-- in the system, so that we can improve it.

    But at the same time, you can create real-time feedback loops. I showed a diagram over here-- this is a very high-level diagram of it, actually. But what it does is create an actuation feedback loop directly onto the streetlight. The streetlight can shine a light, or do a pattern of light, to indicate that there is a parking space available nearby. That's a local feedback loop. You can also integrate it into your smartphone or your in-car dashboard-- and this is where it gets interesting. Because if you're driving that Fiat Cinquecento, then I can tell you: hey, here are the 12 spots near your destination where you fit right now. If you're driving a Cadillac Escalade, I can tell you: hey, you'd better hurry up, because here's the only spot you fit in close to your destination right now.
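The per-vehicle matching in that feedback loop is, at its core, a filter over measured kerb gaps. A sketch with invented gap data; the vehicle lengths and the two-foot manoeuvring margin are assumptions for illustration:

```python
def fitting_spots(gaps_ft, car_ft, margin_ft=2.0):
    """Indices of measured kerb gaps long enough for this car plus a manoeuvring margin."""
    return [i for i, gap in enumerate(gaps_ft) if gap >= car_ft + margin_ft]

gaps = [10.0, 14.5, 21.0, 19.5]       # bumper-to-bumper gaps measured by the streetlights
print(fitting_spots(gaps, 11.7))      # a Fiat 500 fits most of the gaps
print(fitting_spots(gaps, 17.7))      # an Escalade fits only the longest one
```

The same measurement stream powers both the local light pattern and the personalized dashboard message-- only the filter parameter (your car's length) changes.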

    When you get that degree of precision and granularity, it opens up a whole series of new questions. Do you treat space as dynamic? Should the guy in the Fiat pay the same as the guy in the Escalade? Actually, if you look at it from an econometric perspective, one square foot of parking for the Escalade is more expensive than a square foot of parking for the Fiat, because of supply and demand. So all of those are actually new models for the city-- but the integration of a system such as this into an operational system is fairly straightforward.
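The supply-and-demand point-- the fewer gaps a vehicle fits into, the scarcer its supply-- could be expressed as a simple scarcity weight on a base rate. This is purely illustrative, not a pricing model proposed in the talk:

```python
def price_per_hour(base_rate, fitting_spots_available, total_spots):
    """Scarcity-weighted hourly price: vehicles that fit fewer of the available
    gaps face a tighter supply and pay a higher rate (illustrative model only)."""
    scarcity = 1 - fitting_spots_available / total_spots
    return base_rate * (1 + scarcity)

print(price_per_hour(2.0, 12, 20))  # small car: many fitting gaps, lower rate
print(price_per_hour(2.0, 1, 20))   # large car: almost no fitting gaps, higher rate
```

Any real scheme would fold in time of day, land value, and policy goals; the sketch only shows how measured fit translates into differentiated price.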

    SPEAKER 2: Thank you very much.

    SPEAKER 1: Thank you.