
As AI becomes more human-like, experts warn users must think more critically about its responses


Google, OpenAI announce new artificial intelligence systems in what's been called an 'arms race'

Google is promising that its search results will be informed by artificial intelligence in the U.S., with expansion to other countries to come. (Michel Euler/The Associated Press)

Tech giant Google has announced upgrades to its artificial intelligence technologies just a day after rival OpenAI announced similar changes to its offerings. Both companies are trying to dominate the quickly emerging market where people can ask questions of a computer and get answers in the style of a human.

It's part of a push to make AI systems such as ChatGPT not just faster, but more comprehensive in their responses, so users get what they need without having to ask multiple follow-up questions.

On Tuesday, Google demonstrated how AI responses would be merged with some results from its influential search engine. As part of its annual developers conference, Google promised it would start using AI to provide summaries in response to questions and searches, with at least some of them labelled as AI at the top of the page.

Google's AI-generated summaries are only available in the U.S. for now, but they will be written in conversational language.

OpenAI also recently announced updates to its flagship products that will allow conversational interactions between AI and human users. (Dado Ruvic/Reuters)

Meanwhile, OpenAI's newly announced GPT-4o system will be capable of conversational responses in a more human-like voice.

It gained attention on Monday for its ability to interact with users in natural conversation with very little delay, at least in demonstration mode. OpenAI researchers showed off ChatGPT's new voice assistant capabilities, including using new vision and voice features to talk a researcher through solving a math equation on a sheet of paper.

At one point, an OpenAI researcher told the chatbot he was in a great mood because he was demonstrating "how useful and amazing you are."

ChatGPT responded: "Oh stop it! You're making me blush!"

"It feels like AI from the movies," OpenAI CEO Sam Altman wrote in a blog post. "Talking to a computer has never felt really natural for me; now it does."

WATCH | OpenAI's GPT-4o speaks in a natural human tone:

OpenAI demonstrates new model's capability for realistic conversation (video, 0:42): From giving advice and analyzing graphs to guiding someone through a math equation and even cracking a joke, the new model of ChatGPT, called GPT-4o, is touted as having real-time responses in a natural human tone.

AI responses aren't always right

But researchers in the technology and artificial intelligence sector warn that as people get information from AI systems in more user-friendly ways, they also have to be careful to watch for inaccurate or misleading responses to their queries.

And because companies want to protect the trade secrets behind how their AI systems work, those systems often don't disclose how they came to a conclusion. They also tend to show fewer raw results and less source data than traditional search engines.

This means, according to Richard Lachman, they can be more prone to providing answers that look or sound confident, even if they're incorrect.

Richard Lachman, who teaches at the RTA School of Media at Toronto Metropolitan University, says AI chatbots are now able to manipulate users into feeling 'more comfortable than you should be with the quality of the responses.' (Adam Carter/CBC)

The associate professor of Digital Media at Toronto Metropolitan University's RTA School of Media says these changes are a response to what consumers demand when using a search engine: a quick, definitive answer when they need a piece of information. 

"We're not necessarily looking for 10 websites; we want an answer to a question. And this can do that," said Lachman, 

However, he points out that when AI gives an answer to a question, it can be wrong.

Unlike more traditional search results, which display multiple links and sources in a long list, answers given by an AI system such as ChatGPT make it very difficult to trace where the information came from.

Lachman says it might feel easier for people to trust a response from an AI chatbot if it's convincingly role-playing as a human, making jokes or simulating emotions that create a sense of comfort.

"That makes you, maybe, more comfortable than you should be with the quality of responses that you're getting," he said.

Duncan Mundell, who works with AI software company AltaML in Calgary, says there's enthusiasm for new AI technologies coming out. (Paula Duhatschek/CBC)

Business sees momentum in AI

Here in Canada, at least one business working in artificial intelligence is excited by the more human-like interfaces that companies like Google and OpenAI are pushing for their AI systems.

"Make no mistake, we are in a competitive arms race here with respect to generative AI and there is a huge amount of capital and innovation," said Duncan Mundell, with Alberta-based AltaML.

"It just opens the door for additional capabilities that we can leverage," he said about artificial intelligence in a general sense, mentioning products his company creates with AI, such as software that can predict the movement of wildfires.

He pointed out that while the technological upgrades are not revolutionary in his opinion, they move artificial intelligence in a direction he welcomes. 

"What OpenAI has done with this release is bringing us one step closer to human cognition, right?" said Mundell.

Researcher calls sentient AI 'nonsense'

The upgrades to AI systems from Google or OpenAI might remind science-fiction fans of the highly conversational computer on Star Trek: The Next Generation, but one researcher at Western University says he considers the new upgrades to be decorative, rather than truly changing how information is processed.

"A lot of the notable features of these new releases are, I guess you could say, bells and whistles," said Luke Stark, assistant professor at Western University's Faculty of Information & Media Studies.

Luke Stark, an assistant professor at the Faculty of Information and Media Studies at Western University, says he considers the new AI upgrades to be mostly decorative and says they don't truly change how information is processed. (Submitted by Luke Stark)

"In terms of the capacities of these systems to actually go beyond what they've been able to do so far... this isn't that big of a leap," said Stark, who called the idea that a sentient artificial intelligence could exist with today's technology "kind of nonsense."

The companies pushing artificial intelligence innovations make it hard to get clarity on "what these systems are good and not so good at," he said. 

That's a position echoed by Lachman, who says that lack of clarity will require users to be savvy about what they read online in a new way.

"Right now, when you and I speak, I'm used to thinking anything that sounds like a person is a person," he said, pointing out that human users may assume anything that seems like another human will have the same basic understanding of how the world works.

But even if a computer looks and sounds like a human, it won't have that knowledge, he says.

"It does not have that sense of common understanding of the basic rules of society. But it sounds like it does."

ABOUT THE AUTHOR

Anis Heydari

Senior Reporter

Anis Heydari is a senior business reporter at CBC News. Prior to that, he was on the founding team of CBC Radio's "The Cost of Living" and has also reported for NPR's "The Indicator from Planet Money." He's lived and worked in Edmonton, Edinburgh, southwestern Ontario and Toronto, and is currently based in Calgary. Email him at anis@cbc.ca.

With files from The Associated Press, the CBC's Paula Duhatschek and Shawn Benjamin
