Manufacturing Startups: Leela AI (2022)

Conference Video | Duration: 5:48
March 16, 2022

    CYRUS SHAOUL: Good morning, everybody. My name is Cyrus, and I'm the CEO of Leela AI. We at Leela AI have deep roots at MIT: my co-founders, Henry Minsky and Milan Minsky, come from deep AI backgrounds, and I was actually a UROP here at the Media Lab in the '90s, so it's great to be back on my old turf.

    Today I'd like to tell you a bit about what we're building at Leela AI. We call it resilient AI solving real-world problems. And one of the most real problems we've found, and one we really want to solve, is the problem of understanding what is going on in manufacturing work areas.

    The reason this has been hard comes down to time-to-value. Has anyone here ever implemented a machine vision solution in a factory? If you have, just raise your hand. OK, we have some. Thank you for admitting that, I appreciate it.

    Tell me afterwards if I'm right about this, but you might have had an experience like the following. You collect data for your specific needs, in your industry, in your facility. You feed it into a pipeline where people, usually, collect the data, label it, train a model, and validate it. After that, the model gets deployed. And then what usually happens is that unexpected, unexplained decisions come out of the system: false alarms, or missed events. So you go back, collect more data, relabel it, retrain the model, and re-validate everything. You end up in a very long-running spiral of cost and time before you get to any value. It can take months and hundreds of thousands of dollars.

    So that, to us, was a problem we really wanted to help with. And the way we thought we could help was by doing this better, faster, and cheaper, with less data. That would really advance machine vision in the factory.

    And so we created our product, understand.video, which has a no-code, human-teachable interface, which you can see here. It lets us quickly create new use cases, new custom activity-understanding systems, and it requires a lot less data. You can build a use case in hours instead of months. You can deal with multiple cameras all over a location, or in different locations around the world. And it's easy to adjust and improve, so instead of having to wait for an engineer to come fix things, our customers can fix it themselves. That's really a breakthrough, we feel, and we think it has finally made machine vision useful for manufacturing.

    So here's an example of what comes out of our system, after just a couple of days of work teaching it the tasks going on in this facility. It now understands when people are welding, where the welding tanks are, where the material is, all at different levels of detail. Once that has been set up, our customers can log in and see what is happening at a very granular level, including which tasks are happening. Here, green is welding, purple is walking, and gray is standing, so you can see how much of the day's time was spent on each activity. That rolls up into the value-added activity count: what percent of the day's time was spent on value-added versus non-value-added activities. So it's a great way to quickly start doing continuous, full-time value stream mapping, which has heretofore been very difficult, especially in these kinds of environments.
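    The value-added rollup described above amounts to a simple aggregation over labeled activity intervals. Here is a minimal illustrative sketch of that computation; this is not Leela AI's actual implementation, and the activity names and the choice of which activities count as value-added are assumptions for the example:

    ```python
    from collections import defaultdict

    # Assumption for this sketch: only welding counts as value-added.
    VALUE_ADDED = {"welding"}

    def activity_breakdown(intervals):
        """Sum observed minutes per activity and compute the
        value-added percentage of total observed time."""
        totals = defaultdict(float)
        for activity, minutes in intervals:
            totals[activity] += minutes
        total_time = sum(totals.values())
        value_added = sum(t for a, t in totals.items() if a in VALUE_ADDED)
        pct = 100.0 * value_added / total_time if total_time else 0.0
        return dict(totals), pct

    # Hypothetical activity log: (activity, duration in minutes).
    # In practice these intervals would come from the vision system's labels.
    log = [("welding", 30), ("walking", 10), ("standing", 5),
           ("welding", 15), ("walking", 5)]
    totals, pct = activity_breakdown(log)
    # welding: 45 min, walking: 15 min, standing: 5 min
    # value-added share: 45 / 65 minutes, about 69%
    ```

    A real system would aggregate per-camera, per-shift, and per-line, but the core of the value stream view is this kind of time-per-activity rollup.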

    We also work in other kinds of environments, but this complexity, the lack of structure, a big factory area with lots of people working, is exactly where our customers have told us: I thought I was going to work at this factory and have lots of great data to work with, and do all this great stuff with that data. Then I got to my job: no data. No way to get the data. No way to measure what people are doing and how long it takes. So when they see this, they say, oh my God, now I can suddenly use all my knowledge about how to work with data, and not just data from machine output, but data about what people are doing. And this is, I think, a big breakthrough for digitizing factories and moving toward Industry 4.0.

    So, some of the benefits. Understanding where and when non-value-add activities are happening. This ties into Ben Armstrong's discussion of labor today. We want to help our customers with goals like increasing labor productivity by 20% over six months. If we can deliver that 20% by understanding how much time is spent on non-value-add activities, that's a big breakthrough. And this is really the only way we think people are going to get the data to do that kind of work.

    Better resource management. Labor-intensive industries face labor shortages, and this is a better way to manage those resources. We're working right now on our next release, which will start making suggestions: what if you moved things around, or shortened a walking path? All sorts of suggestions will come out of our system. It will support industrial engineers who need help sifting through millions of data points to find where the efficiency gains are. We can suggest that. It will also detect anomalies in activity automatically.

    So today, if you are looking to partner with us, we have a cloud video solution that is very easy to set up. You can work on efficiency issues, continuous improvement, load balancing on lines, and reducing injuries as well, because we understand not just what you're doing but whether you're doing it safely, so there are a lot of safety applications out there. If you have a tight continuous improvement loop you'd like to work on, you can begin with or without your own infrastructure: we provide the cameras for free with our service, so you don't have to buy cameras.

    Come talk to me. I'll be outside during the breaks and all afternoon. I'd love to talk with anyone here, show you a demo at our table, and go into depth. So, thank you for your time today. Also, feel free to email me.
