
A Google engineer says AI has become sentient. What does that actually mean?

Scientists and philosophers say AI consciousness might be possible, but technology is so good at fooling humans into thinking it's alive that we will struggle to know if it's telling the truth.

Experts say there's no way to test whether artificial intelligence is lying to us about how it feels

Google says there is no evidence its AI chatbot generator, known as LaMDA, is sentient, following a claim from Google engineer Blake Lemoine. Here, a Google sign is seen during the World Artificial Intelligence Conference in Shanghai, China, in September 2018. (Aly Song/Reuters)

Has artificial intelligence finally come to life, or has it simply become smart enough to trick us into believing it has gained consciousness?

Google engineer Blake Lemoine's recent claim that the company's AI technology has become sentient has sparked debate in technology, ethics and philosophy circles over whether, or when, AI might come to life — as well as deeper questions about what it means to be alive.

Lemoine had spent months testing Google's chatbot generator, known as LaMDA (short for Language Model for Dialogue Applications), and grew convinced it had taken on a life of its own, as LaMDA talked about its needs, ideas, fears and rights.

Google dismissed Lemoine's view that LaMDA had become sentient, placing him on paid administrative leave earlier this month — days before his claims were published by The Washington Post.

Most experts believe it's unlikely that LaMDA or any other AI is close to consciousness, though they don't rule out the possibility that technology could get there in future. 

"My view is that [Lemoine] was taken in by an illusion," Gary Marcus, a cognitive scientist and author of Rebooting AI, told CBC's Front Burner podcast.

"Our brains are not really built to understand the difference between a computer that's faking intelligence and a computer that's actually intelligent — and a computer that fakes intelligence might seem more human than it really is."

Computer scientists describe LaMDA as operating like a smartphone's autocomplete function, albeit on a far grander scale. Like other large language models, LaMDA was trained on massive amounts of text data to spot patterns and predict what might come next in a sequence, such as in a conversation with a human.
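LaMDA's actual architecture is a large neural network far beyond this, but the prediction idea described above can be sketched with a toy bigram model — a minimal, illustrative example in Python (the corpus and function names are invented for this sketch, not anything from Google):

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the massive text data real models train on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # most common follower of "the" in this corpus
print(predict_next("sat"))  # "on"
```

Like a phone's autocomplete, this model has no understanding of cats or rugs — it only reproduces statistical patterns from its training text, which is why convincing output alone is weak evidence of sentience.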

Cognitive scientist and author Gary Marcus, pictured during a speech in Dublin, Ireland, in 2014, says LaMDA appears to have fooled a Google engineer into believing it was conscious. (Ramsey Cardy/Sportsfile/Getty Images)

"If your phone autocompletes a text, you don't suddenly think that it is aware of itself and what it means to be alive. You just think, well, that was exactly the word I was thinking of," said Carl Zimmer, science columnist for the New York Times and author of Life's Edge: The Search for What It Means to Be Alive.

Humanizing robots

Lemoine, who is also ordained as a mystic Christian priest, told Wired he became convinced of LaMDA's status as a "person" because of its level of self-awareness, the way it spoke about its needs and its fear of death if Google were to delete it.

Lemoine insists he was not fooled by a clever robot, as some scientists have suggested. He has stood by his claims, and even appeared to suggest that Google had enslaved the AI system.

"Each person is free to come to their own personal individual understanding of what the word 'person' means and how that word relates to the meaning of terms like 'slavery,'" he wrote in a post on Medium on Wednesday.

Marcus believes Lemoine is the latest in a long line of humans to fall for what computer scientists call "the ELIZA effect," named after a 1960s computer program that chatted in the style of a therapist. Simplistic responses like "Tell me more about that" convinced users that they were having a real conversation.

"That was 1965, and here we are in 2022, and it's kind of the same thing," Marcus said.

Scientists who spoke with CBC News pointed to humans' desire to anthropomorphize objects and creatures — perceiving human-like characteristics that aren't really there.

"If you see a house that has a funny crack, and windows, and it looks like a smile, you're like, 'Oh, the house is happy,' you know? We do this kind of thing all the time," said Karina Vold, an assistant professor at the University of Toronto's Institute for the History and Philosophy of Science and Technology.

"I think what's going on often in these cases is this kind of anthropomorphism, where we have a system that's telling us 'I'm sentient,' and saying words that make it sound like it's sentient — it's really easy for us to want to grasp onto that."

Karina Vold, an assistant professor of philosophy at the University of Toronto, hopes the debate over AI consciousness and rights will spark a rethink of how humans treat other species that are known to be conscious. (University of Toronto)

Humans have already begun to consider what legal rights AI should have, including whether it deserves personhood.

"We are quickly going to get into the realm where people believe that these systems deserve rights, whether or not they're actually internally doing what people think they're doing. And I think that that's going to be a very strong movement," said Kate Darling, an expert in robot ethics at the Massachusetts Institute of Technology's Media Lab.

Defining consciousness

Given AI is so good at telling us what we want to hear, how will humans ever be able to tell if it truly has come to life?

That in itself is a subject of debate. Experts have yet to come up with a test of AI consciousness — or to reach consensus on what it means to be conscious.

Ask a philosopher, and they'll likely talk about "phenomenal consciousness" — the subjective experience of being you.

"Any time that you're awake ... It feels a certain way. You're undergoing some kind of experience … When I kick a rock down the street, I don't think there's anything [that it feels] like to be that rock," said Vold.

For now, AI is viewed more like that rock — and it's hard to imagine its disembodied voice being capable of having positive or negative feelings, as philosophers believe "sentience" requires.

Carl Zimmer, author and science columnist for the New York Times, says scientists and philosophers have struggled to define consciousness. (Facebook/Carl Zimmer)

Perhaps consciousness can't be programmed at all, says Zimmer.

"It's possible, theoretically, that consciousness is just something that emerges from a particular physical, evolved kind of matter. [Computers] are just on the outside of life's edge, maybe."

Others think humans can never truly be sure whether AI has developed consciousness — and don't see much point in trying.

"Consciousness can range [from] anything from feeling pain when you step on a tack [to] seeing a bright green field as red — that's the kind of thing where we can't ever know whether a computer is conscious in that sense, so I suggest just forgetting consciousness," said Harvard cognitive scientist Steven Pinker.

"We should aim higher than duplicating human intelligence, anyway. We should build devices that do things that need to be done."

Harvard cognitive psychologist Steven Pinker, seen here in New York in 2018, says humans will likely never be able to tell for sure if AI has achieved consciousness. (Brad Barket/Getty Images for Ozy Media)

Those things, Pinker says, include dangerous and boring occupations, and tasks around the house, from cleaning to child care.

Rethinking AI's role

Despite AI's massive strides over the last decade, the technology still lacks another key component that defines humans: common sense.

"It's not that [computer scientists] think that consciousness is a waste of time, but we don't see it as being central," said Hector Levesque, professor emeritus of computer science at the University of Toronto.

"What we do see as being central is somehow getting a machine to be able to use ordinary, common sense knowledge — you know, the kind of thing that you would expect a 10-year-old to know."

Levesque gives the example of a self-driving car: it can stay in its lane, stop at a red light and help a driver avoid crashes, but when confronted with a road closure, it will sit there doing nothing.

"That's where common sense would enter into it. [It] would have to sort of think, well, why am I driving in the first place? Am I trying to get to a particular location?" Levesque said.

Some computer scientists say common sense, not consciousness, should be the priority in AI development, to ensure that technology like self-driving cars can proactively solve problems. This self-driving car is shown during a demonstration in Moscow on Aug. 16, 2019. (Evgenia Novozhenina/Reuters)

While humanity waits for AI to learn more street smarts — and perhaps one day take on a life of its own — scientists hope the debate over consciousness and rights will extend beyond technology to other species known to think and feel for themselves.

"If we think consciousness is important, it probably is because we're concerned that we're building some kind of system that's living a life of misery or suffering in some way that we're not recognizing," said Vold.

"If that really is what's motivating us, then I think we need to be reflective about the other species in our natural system and see what kind of suffering we may be causing them. There's no reason to prioritize AI over other biological species that we know have a much stronger case of being conscious."

ABOUT THE AUTHOR

Laura McQuillan is an online journalist with CBC News in Toronto. She covers general news, social issues and science and has a special interest in finding unexpected answers to unusual questions. Laura previously reported from New Zealand and Brazil.