The fight against 'deepfake' videos includes former U.S. ambassador to Russia Michael McFaul
'A video circulated that suggested that I was a pedophile. What do you say to that?' says McFaul
Michael McFaul knows firsthand the negative impact of so-called "deepfakes" — digitally constructed videos that can make it appear that a person is saying or doing something they never did.
The former U.S. Ambassador to Russia — a vocal opponent of President Vladimir Putin — was a victim of this rapidly advancing technology.
McFaul was posted to Moscow during the Obama administration, from 2012 to 2014. He says that at the time, Russia was beginning to experiment with the technology, and created several fake photos and videos to discredit him.
"A video circulated that suggested that I was a pedophile. What do you say to that? You go on Twitter and argue you're not a pedophile? I mean, there's no excuse for that, no defence," McFaul told The Current.
"So it's effective. Disinformation is effective. Propaganda works."
He said such a narrative was hard to fight back against as a government, but they did so with facts.
According to McFaul, deepfake videos may also allow public figures to retroactively avoid accountability for things they've said on tape in the past.
"When Donald Trump is recorded saying some really horrible things about how he treats women for instance — that happened in our presidential election in 2016 — it's going to be easier in the future for him, or other people like that, to say, 'well that's fake, that's not really me,'" McFaul explained.
"And how are we going to be able to know? It's really blurring what is fact and what is fiction and I think that's a pretty scary world."
'Incredibly significant' threat
The term "deepfake" was coined by a Reddit user, and combines "deep learning" and "fake" video.
This technology uses facial mapping and artificial intelligence to produce videos that appear so genuine it's hard to spot the phonies.
What deepfakes are capable of
Developer and entrepreneur Gaurav Oberoi has experimented with deepfake technology and shares the results online.
In one example, he shows how, after training on about 300 input images and videos, the algorithm learned enough to make John Oliver look like he's hosting Jimmy Fallon's show.
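The face-swap tools that circulated at the time were widely described as training a single shared encoder together with one decoder per person: each decoder learns to reconstruct its own subject's face crops from the shared encoding, and the swap is simply decoding person A's encoding with person B's decoder. Below is a minimal sketch of that general idea, assuming PyTorch and 64x64 face crops; the class names, layer sizes, and hyperparameters are illustrative assumptions, not taken from Oberoi's setup or any specific tool.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 face crop to a compact code that
    captures pose and expression across both subjects."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-person decoder: reconstructs that person's face from the code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """Each decoder learns to reconstruct its own person through the
    shared encoder; the encoder is forced to generalize across both."""
    opt.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()
    return loss.item()

def swap(faces_a):
    """The trick: encode A's face, decode with B's decoder, producing
    B's likeness with A's pose and expression."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

In practice such tools also need face detection, alignment, and blending of the generated face back into each video frame; the roughly 300 clips Oberoi mentions would supply the thousands of cropped frames each decoder trains on.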
The pace at which this technology has accelerated over the last year has shocked researchers at the U.S. Defense Advanced Research Projects Agency (DARPA), who have been tasked with finding ways to detect fake content.
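One physiological tell reported by academic detection work around this time was blinking: early deepfakes blinked far less often than real people, because the still images used for training rarely show closed eyes. Below is a minimal sketch of that idea in Python, assuming per-frame eye landmarks have already been extracted by a face-landmark detector; the threshold values are illustrative assumptions, not published constants, and this cue is weak and easily defeated by better training data.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio over six landmark points (Soukupova & Cech, 2016):
    roughly eye height divided by eye width; it drops toward zero on a blink."""
    eye = np.asarray(eye, dtype=float)    # shape (6, 2): landmark coordinates
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def blink_rate(ear_series, fps, closed_thresh=0.2):
    """Count blinks as runs of frames where the eye aspect ratio falls
    below a threshold, then convert to blinks per minute."""
    closed = np.asarray(ear_series) < closed_thresh
    # A blink starts wherever 'closed' flips from False to True.
    starts = np.flatnonzero(~closed[:-1] & closed[1:])
    minutes = len(ear_series) / fps / 60.0
    return len(starts) / minutes if minutes > 0 else 0.0

def looks_synthetic(ear_series, fps, min_blinks_per_min=5.0):
    # People typically blink well over five times a minute; an unusually
    # low rate is a red flag worth a closer look, not proof of forgery.
    return blink_rate(ear_series, fps) < min_blinks_per_min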
Researcher Hany Farid said that in the last two years, deepfakes have grown into a significant political and social concern.
"There is almost no doubt that within a year or so, well in time for the next national election at least here in the U.S., this is going to be a real threat," Farid told The Current's guest host Ioanna Roumeliotis.
Deepfakes become more dangerous, and their impact more potent, when combined with the speed at which they can proliferate over social media.
"The fact that the social media companies are aggressively promoting this content because it engages users ... [means] that threat is incredibly significant," said Farid, who is also a computer science professor specializing in digital forensics at Dartmouth College.
He criticized companies like Facebook, Google and Twitter for not taking enough responsibility for the ways their technology and platforms can be misused to cause harm.
"We have to acknowledge that technology has a dark side to it, and to pretend that technology is inherently good, is going to make everybody happy, is incredibly naive," he said.
"We have to do better than we have over the last few decades."
With files from the Associated Press. This segment was produced by The Current's Danielle Carr.