AI-at-Scale Hinges on Gaining a 'Social License'


In January 2020, Clearview AI, a little-known American facial recognition software company, was thrust into the limelight. It had flown under the radar until The New York Times reported that businesses, law enforcement agencies, universities, and individuals had been purchasing its sophisticated facial recognition software, whose algorithm could match human faces against a database of over 3 billion images the company had collected from the internet. The article renewed the global debate about the use of AI-based facial recognition technology by governments and law enforcement agencies.

Many people called for a ban on the use of the Clearview AI technology because the startup had created its database by mining social media websites and the broader internet for photographs without obtaining permission to index individuals' faces. Twitter almost immediately sent the company a cease-and-desist letter, and YouTube and Facebook followed suit. When the COVID-19 pandemic erupted in March 2020, Clearview pitched its technology for use in contact tracing in an effort to regain credibility and win social acceptance. Although Clearview's AI technology could have helped tackle the crisis, the manner in which the company had gathered data and built its data sets had sparked a social firestorm that discouraged its use.

In business, as in life, being responsible is necessary but far from sufficient to build trust. Controversies around some corporations' AI applications illustrate the point: Amazon had to terminate its experiment with a resume-screening algorithm, and Microsoft's AI-based chatbot became a public relations disaster. Society will not agree to the use of AI applications, however responsibly they may have been developed, if they have not first earned people's trust.
