Spark

Bots like ChatGPT aren't sentient. Why do we insist on making them seem like they are?

A look at the philosophical implications of interacting with tools that don't have consciousness, or even intelligence, but seem like they do.

'There's no secret homunculus inside the system that's understanding what you're talking about'

ChatGPT responds to questions or commands using data it has consumed from the Internet. (Érik Chouinard/CBC-Radio-Canada)

What's the difference between a sentient human mind and a computer program that's just doing a very good job of mimicking the output of one?

For years, that's been a central question for many who study artificial intelligence (AI), or the inner workings of the brain. But with the meteoric rise of OpenAI's ChatGPT — a large language model (LLM) that can generate convincing, detailed responses to natural language requests — a once abstract, hypothetical question has suddenly become very real.

"They seem to be tools that are ontologically ambiguous," said Jill Fellows, a philosophy instructor at Douglas College, who specializes in philosophy of technology and AI.

"We don't necessarily know how to place them," she said. "On the one hand, we do treat it like a tool that we can offload labour to. But on the other hand, because of this ontological ambiguity, we also kind of treat it like an autonomous agent."

And if these tools are going to play as big a role in our future as their creators would have us believe, it's an ambiguity that thinkers like Fellows say may be important to resolve sooner rather than later.

Jill Fellows is a philosophy instructor at Douglas College in British Columbia. (Submitted by Jill Fellows)

For decades, the Turing Test has been a gold standard of artificial intelligence — creating a program that can convince a human they are talking to another human. ChatGPT can now easily do that, but AI experts widely agree that it's not anything close to sentient in the way a human is.

"Roughly what they do is they're pastiche machines," said Gary Marcus, a cognitive scientist and AI entrepreneur. "They put together lots of little pieces that they've seen before."

LLMs like ChatGPT are trained on massive troves of text, which they use to assemble responses to questions by analyzing and predicting what words could most plausibly come next based on the context of other words. One way to think of it, as Marcus has memorably described it, is "auto-complete on steroids."
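To make that "auto-complete on steroids" idea concrete, here is a minimal sketch in Python: a toy word-level autocomplete that only counts which word followed which in a tiny sample corpus, then greedily extends a prompt. The corpus, the function name and the always-pick-the-most-frequent-word rule are illustrative assumptions, not how ChatGPT is built; real models predict over long contexts using billions of learned parameters, but the basic task is the same kind of next-word prediction, with no understanding behind it.

```python
from collections import Counter, defaultdict

# Toy illustration only (not ChatGPT's actual mechanism): count which word
# tends to follow each word in a tiny corpus, then "autocomplete" a prompt
# by repeatedly picking the most frequent continuation.

corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def autocomplete(prompt_word, length=5):
    """Greedily extend a prompt by always choosing the likeliest next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Produces fluent-looking text ("the cat sat on the ...") with no grasp
# of what a cat or a mat actually is.
print(autocomplete("the"))
```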

Marcus says it's important to understand that even though the results sound human, these systems don't "understand" the words or the concepts behind them in any meaningful way. But because the results are so convincing, that can be easy to forget.

"We see some plausible bits of text, and we're like, there must be intelligence there somewhere, in the same way as we look up at the moon and we're like, I think I see a face on the moon," Marcus said.

"We're doing a kind of anthropomorphization … where we're attributing some kind of animacy and life and intelligence there that isn't really," he said.

"There's no secret homunculus inside the system that's understanding what you're talking about."

Who has the power?

So, if these tools don't actually understand the world, why are we so keen on making them seem like they do?

The easy answer is that it's just smart marketing from the tech companies that build them — it makes them fun and interesting to use. But for Fellows, that masks a power dynamic worth considering.

"We're encouraged to interact with these agents as though they did have some kind of subjectivity, but we also know that they don't," she said.

"We're made to feel empowered … by using these tools," she continued. "[But] instead, through this appearance of subordination, these tools gather a lot of knowledge on us [and] all that knowledge gets fed back to the tech companies, which I would argue are in the dominant position here."

"The way things get fed back to us is not necessarily serving our own interests."

Fellows also raises a perennial concern with machine learning, which is that models trained on massive data sets tend to replicate bias and under- or over-representation in those data sets.

That's concerning for Fellows, because these tools are increasingly able not just to do basic tasks for us, but to speak on our behalf — to represent us.

"Are we losing agency here or increasing our agency?" she asked. "I think in some senses at the moment, it's both, and we don't necessarily know how that's going to play out."

Misuse and misunderstanding

Marcus also has concerns about the potential for misuse of these systems. In particular, he's worried about how easy it is to use them to create convincing misinformation and disinformation.

"It makes the cost of generating misinformation almost zero," he said.

"Democracy is based on informed voters making informed decisions, and if we suddenly wind up in a world where the amount of misinformation so outnumbers accurate information, nobody's going to believe anything."

Cognitive scientist, author and AI entrepreneur Gary Marcus. (NYU)

Related to that is a broader problem: because LLMs don't understand or fact-check their output, that output is often misleading, or just flat-out wrong. AI experts call it "hallucinating," and there doesn't seem to be an obvious way to stop LLMs from doing it. That's especially dangerous, Marcus says, because the output sounds just as convincing when it's wrong — and because many people still may not realize how these systems work or how prone they are to error.

"People can use those illusions if they enjoy them to some extent, but they have to understand that it is an illusion, that they can't trust them," he said.

"I like to think of AI as a teenager right now. Like, it's starting to be empowered and it's not really ready for that power," he continued. "I think we're entering an age where we really need AI literacy, which is something we never needed before."

Fellows agrees that it's crucial for people to understand the nature and limitations of LLMs before trusting them.

"I think the best case scenario would be that in 10 years time we would view these the same way we now view things like spell check," she said.

As Fellows points out, we have always shaped the world around us with tools — even extended our minds into them — and some philosophers have argued that those tools have shaped us, too. Fellows says AI is likely no different — but that doesn't mean we shouldn't be cautious.

"In our quest to build more and more human-like machines," she said, "we may be running the risk of reshaping ourselves into more and more machine-like humans."