In the 1990s, it was the advent of Adobe Photoshop that prompted the “death of truth” crisis – the concern that doctored images would lead to a distrust of photography and an inherent skepticism of pictures in the media.
These days, the biggest “threat to truth” is not photoshopped images. Instead, it is deepfakes – doctored videos that can make people seem to be doing or saying anything.
Recent deepfakes that have gone viral include clips of President Barack Obama saying “stay woke, bitches” and House Speaker Nancy Pelosi stammering in a press conference.
In fact, the term “deepfake” was coined by the person who superimposed Gal Gadot’s face onto a porn performer’s body.
It’s not just celebrities and public figures who are targeted by deepfakes. In the dark corners of the internet, you’ll find people paying developers to make deepfake revenge porn with the faces of their exes, for example.
Jamie Bartlett, writer and author of The People Vs. Tech, says, “forget post-truth, this is the era of post-reality.”
Hundreds of deepfakes are surfacing every day, which has sparked concerns about national security and the state of our democracy with the 2020 election fast approaching.
“Machine-learning algorithms are trained to use a dataset of videos and images of a specific individual to generate a virtual model of their face that can be manipulated and superimposed. One person’s face can be swapped onto another person’s head,” explains Huffington Post’s Jesselyn Cook.
Not only can faces be digitally manipulated, but voices as well. Using what are known as “voice skins,” deepfake creators can imitate a person’s voice from only a few minutes of audio.
This technology is not all that new – Hollywood directors and producers have been using “video and audio manipulation” for decades, according to Cook. But because of advancements in artificial intelligence software, it is now easier to use and more accessible to the general public. There are even apps that help amateurs create these deepfakes.
If anyone can create a video of a politician declaring nuclear war or admitting to wrongdoing amid an election, there is good reason for the building anxiety around deepfakes.
“People can duplicate me speaking and saying anything,” said former President Barack Obama at a recent forum in Ottawa, Canada. “And it sounds like me and it looks like I’m saying it — and it’s a complete fabrication. The marketplace of ideas that is the basis of our democratic practice has difficulty working if we don’t have some common baseline of what’s true and what’s not.”
However, some say it’s too early to sound the alarm, arguing that our society has seen disruptions in what we consider “truth” before and adapted accordingly.
Jeffrey Westling, a Technology and Innovation Policy Associate at the R Street Institute, argues:
“Much of the fear of deepfakes stems from the assumption that this is a fundamentally new, game-changing technology that society has not faced before.
“But deepfakes are really nothing new; history is littered with deceptive practices.”
He points to Adobe Photoshop as the perfect example of the first real threat to “truth” in the media. Bartlett makes the same point, explaining that “deepfakes aren’t all that new: the selective editing and clipping of real footage to create a falsehood, a ‘shallow fake,’ you could say, is already the staple of conspiracy theorists and even the odd respectable news outlet.”
Despite enormous concern about “shallow fakes” and photoshopped images, Westling contends that society adapted.
“No major regulation or legislation was needed to prevent the apocalyptic vision of Photoshop’s future; society adapted on its own,” he says.
With the ongoing efforts to educate the public about deepfakes and how to identify them, Westling is hopeful that it will be harder for deepfakes to spread.
Yet, there is something different about the harmful and viral nature of deepfakes.
Today, these videos catch fire quickly, spreading to all corners of the world with immediacy and ease. Moreover, their quality is improving fast.
It used to be easier to spot a deepfake by its glitches, lack of eye-blinking, and micro-blushing (which algorithms can detect), but these tells are becoming harder to catch.
As deepfake technology advances, so too does the technology to identify it.
Lawmakers, national agencies like the Defense Advanced Research Projects Agency, tech experts, and media forensics researchers are working to develop technology that can “identify manipulated videos and images, including deepfakes.”
They’re building “machine-learning algorithms that analyze videos frame by frame to detect subtle distortions and inconsistencies, to determine if the videos have been tampered with,” Cook explains.
Edward Delp, the Director of the Video and Image Processing Laboratory at Purdue University, tells Cook, “We might get to a situation in the future where you won’t want to believe an image or a video unless there’s some authentication mechanism.”
While detection technology catches up to deepfakes, experts offer some tips for identifying deepfakes yourself. They advise watching the perimeter of the face, analyzing eye-blinking, and viewing videos on a computer, where you can catch smaller details.
Stay alert and keep an eye out for deepfakes. We can’t believe everything we see.