Designing AI systems that customers won't hate


The nexus of big data analytics, machine learning, and AI may be the brightest spot in the global economy right now. The McKinsey Global Institute estimates that the use of AI will add as much as $13 trillion to global GDP by 2030. The noneconomic benefits to humankind could be equally dramatic, leading to a world that is safer (by reducing destructive human error) and that offers people a better quality of life (by reducing the time they spend on tedious tasks such as driving and shopping). Even if the coming automation-driven disruption of labor markets is as serious as many fear, we are still, on balance, likely to be better off than we are today.

But not everyone is convinced. Negative predictions center on two overarching concerns that are related yet distinct. First, there is the issue of data privacy. After all, AI runs on data, and people are understandably uneasy about the things that automation technologies are learning about them, and how their private information might be used. Privacy in the digital age has been extensively researched and written about, and companies are devoting increasing attention to allaying their customers' fears.

However, there is another consideration that many companies have yet to think about seriously: autonomy. Though predictive and autonomous technologies have a large and growing range of potential applications, when taken too far they may threaten users' sense of autonomy and free will, that is, their belief that they are free to decide how to live their lives. A recent study found that when customers believed their future choices could be predicted from their past choices, they chose options they preferred less. In other words, consumers violated their own preferences in order to reestablish their sense of autonomy by not choosing predictably.
