AI BLACK BOX

A Google AI model developed a skill it wasn't expected to have

Google CEO Sundar Pichai said the company’s experts call this aspect of AI a “black box.”
How do you develop AI systems that are aligned with human morality when some of them are already behaving mysteriously? 
Photo: Jim Young (Reuters)

Concerns about AI developing skills independently of its programmers’ wishes have long preoccupied scientists, ethicists, and science fiction writers. A recent interview with Google’s executives may be adding to those worries.

In an interview on CBS’s 60 Minutes on April 16, James Manyika, Google’s SVP for technology and society, discussed how one of the company’s AI systems taught itself Bengali, even though it wasn’t trained to know the language. “We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” he said.

CEO Sundar Pichai confirmed that there are elements of how AI systems learn and behave that still surprise experts: “There is an aspect of this which we call— all of us in the field call it as a ‘black box’. You don’t fully understand. And you can’t quite tell why it said this.” Pichai said the company has “some ideas” about why this happens, but that more research is needed to fully understand how these systems work.

CBS’s Scott Pelley then questioned the wisdom of releasing to the public a system that its own developers don’t fully understand. Pichai responded: “I don’t think we fully understand how a human mind works either.”

Google’s cure for AI’s hallucination problem

AI’s development has also come with glaring flaws that can fuel fake news, deepfakes, and weaponization. Chatbots sometimes deliver false information with total confidence, a failure the industry calls “hallucinations.”

Asked whether Google’s Bard produces a lot of “hallucinations,” Pichai responded: “Yes, you know, which is expected. No one in the field has yet solved the hallucination problems. All models do have this as an issue.” The cure, Pichai said, lies in developing “more robust safety layers before we build, before we deploy more capable models.”

Pichai has long argued for wide-ranging global regulation of AI. Other tech leaders, like Twitter and Tesla CEO Elon Musk, have even called for a pause in the development of more powerful models. Chinese lawmakers have already set out new rules, while in Europe and the US the regulatory process remains in its nascent phase.