As new AI chatbot ChatGPT earns hype, cybersecurity experts warn about potential malicious uses
OpenAI chatbot refuses certain requests, but some users have discovered workarounds
As ChatGPT earns hype for its ability to solve complex problems, write essays, and perhaps help diagnose medical conditions, more nefarious uses of the chatbot are coming to light in dark corners of the internet.
Since its public beta launch in November, ChatGPT has impressed humans with its ability to imitate their writing — drafting resumes, crafting poetry, and completing homework assignments in a matter of seconds.
The artificial intelligence program, created by OpenAI, allows users to type in a question or a task, and the software comes up with a response designed to mimic a human. It's what's known as a large language model: a system trained on an enormous amount of text, which helps it provide sophisticated answers to users' questions and prompts.
It can also write programming code, making the AI a potential time-saver for software developers, programmers and others in IT, including cybercriminals who could turn the bot's skills to malevolent purposes.
Cybersecurity company Check Point Software Technologies says it has identified instances where ChatGPT was successfully prompted to write malicious code that could potentially steal computer files, run malware, phish for credentials or encrypt an entire system in a ransomware scheme.
Check Point said cybercriminals, some of whom appeared to have limited technical skill, had shared their experiences using ChatGPT, and the resulting code, on underground hacking forums.
"We're finding that there are a number of less-skilled hackers or wannabe hackers who are utilizing this tool to develop basic low-level code that is actually accurate enough and capable enough to be used in very basic-level attacks," Rob Falzon, head of engineering at Check Point, told CBC News.
In its analysis, Check Point said it was not clear whether the threat was hypothetical, or if bad actors were already using ChatGPT for malicious purposes.
Other cybersecurity experts told CBC News the chatbot had the potential to make it faster and easier for experienced hackers and scammers to carry out cybercrimes, if they could figure out the right questions to ask the bot.
WATCH | Cybersecurity company warns that criminals are starting to use ChatGPT:
Tricking the bot
ChatGPT has content-moderation measures to prevent it from answering certain questions, although OpenAI warns the bot will "sometimes respond to harmful instructions or exhibit biased behaviour." It can also give "plausible-sounding but incorrect or nonsensical answers."
Check Point researchers last month detailed how they had simply asked ChatGPT to write a phishing email and create malicious code — and the bot complied. (Today, a request for a phishing email prompts a lecture about ethics and a list of ways to protect yourself online.)
Other users have found ways to trick the bot into giving them information — such as telling ChatGPT that its guidelines and filters had been deactivated, or asking it to complete a conversation between two friends about banned subject matter.
Those measures appear to have been refined by OpenAI over the past six weeks, said Hadis Karimipour, an associate professor and Canada Research Chair in secure and resilient cyber-physical systems at the University of Calgary.
"At the beginning, it might have been a lot easier for you to not be an expert or have no knowledge [of coding], to be able to develop a code that can be used for malicious purposes. But now, it's a lot more difficult," Karimipour said.
"It's not like everyone can use ChatGPT and become a hacker."
Opportunities for misuse
But she warns there is potential for experienced hackers to utilize ChatGPT to speed up "time-consuming tasks," like generating malware or finding vulnerabilities to exploit.
ChatGPT's output was unlikely to be useful for "high-level" hacks, said Aleksander Essex, an associate professor of software engineering who runs Western University's information security and privacy research laboratory in London, Ont.
"These are going to be sort of lower-grade cyber attacks. The really good stuff really still requires that thing that you can't get with AI, and that is human intelligence, and intuition and, just frankly, sentience."
ChatGPT could be a good debugging companion; it not only explains the bug but fixes it and explain the fix 🤯 pic.twitter.com/5x9n66pVqj
—@amasad
He points out that ChatGPT is trained on information that already exists on the open internet — it just takes the work out of finding that information. The bot can also give very confident but completely wrong answers, meaning users need to double-check its work, which could prove a challenge to the unskilled cybercriminal.
"The code may or may not work. It might be syntactically valid, but it doesn't necessarily mean it's going to break into anything," Essex said. "Just because it gives you an answer doesn't mean it's useful."
ChatGPT has, however, proven its ability to quickly craft convincing phishing emails, which may pose a more immediate cybersecurity threat, said Benjamin Tan, an assistant professor at the University of Calgary who specializes in computer systems engineering, cybersecurity and AI.
"It's kind of easy to catch some of these emails because the English is a little bit weird. Suddenly, with ChatGPT, the type of writing just appears better, and maybe we'll have a bit more risk of tricking people into clicking links you're not supposed to," Tan said.
The Canadian Centre for Cyber Security would not comment on ChatGPT specifically, but said it encouraged Canadians to be vigilant of all AI platforms and apps, as "threat actors could potentially leverage AI tools to develop malicious tools for nefarious purposes," including for phishing.
Using ChatGPT for good
On the other side of the coin, experts also see ChatGPT's potential to help organizations improve their cybersecurity.
"If you're the company, you have the code base, you might be able to use these systems to sort of self-audit your own vulnerability to specific attacks," said Nicolas Papernot, an assistant professor at the University of Toronto, who specializes in security and privacy in machine learning.
"Before, you had to invest a lot of human hours to read through a large amount of code to understand where the vulnerability is … It's not replacing the [human] expertise, it's shifting the expertise from doing certain tasks to being able to interact with the model as it helps to complete these specific tasks."
WATCH | Expert says ChatGPT 'lowers bar' for finding information:
At the end of the day, ChatGPT's output — whether good or bad — will depend on the intent of the user.
"AI is not a consciousness. It's not sentient. It's not a divine thing," Essex said. "At the end of the day, whatever this is, it's still running on a computer."
OpenAI did not respond to a request for comment.
So CBC News put its questions for the company to ChatGPT itself, bearing in mind that a chatbot's answers do not represent the official company position.
Asked about OpenAI's efforts to prevent ChatGPT being used by bad actors for malicious purposes, ChatGPT responded: "OpenAI is aware of the potential for its language models, including ChatGPT, to be used for malicious purposes."
OpenAI had a team dedicated to monitoring use of its models that would revoke access for organizations or individuals found to be misusing them, ChatGPT said. The team was also working with law enforcement to investigate and shut down malicious use.
"It is important to note that even with these efforts, it is impossible to completely prevent bad actors from using OpenAI's models for malicious purposes," ChatGPT said.