Idea in Brief

The Challenge

As companies increasingly embed artificial intelligence in their products, processes, and decision-making, the focus of discussions about digital risk is shifting from how data is collected to what the software does with it.

Why Is This a Problem?

Misapplied and unregulated AI can lead to unfair outcomes, chiefly because it can amplify the biases embedded in its data. And algorithms often defy easy explanation, a difficulty compounded by the fact that they change and adapt as more data comes in.

How to Fix It

Business leaders need to examine several factors explicitly. To ensure equitable decisions, they must weigh the impact of unfair outcomes, the scope of the decisions being made, operational complexity, and their organizations’ governance capabilities. In setting standards for transparency, they must weigh the level of explanation required against the trade-offs it entails. And in controlling how AI evolves, they need to consider the risks, the complexity involved, and the interaction between AI and humans.

For most of the past decade, public concerns about digital technology have focused on the potential abuse of personal data. People were uncomfortable with the way companies could track their movements online, often gathering credit card numbers, addresses, and other critical information. They found it creepy to be followed around the web by ads that had clearly been triggered by their idle searches, and they worried about identity theft and fraud.
