'Deepfakes' Are Getting More Advanced, And Yes, We Need To Worry

by Kristen Mae

Two years ago, I saw the video of “Obama” calling Donald Trump a “dipshit” and, along with everyone else, was amazed by the technology behind it, known as “deepfake.”

I was also horrified. So horrified, in fact, that I deliberately swallowed my fear and chose not to think about that video or its implications. And yet I know we need to worry about this technology.

It has never been easy for consumers of media, whether print or digital, to judge what is real and what is fake. Even before TV, the internet, smartphones, and social media, a print newspaper could publish a person’s statement out of context, and sometimes that lack of context completely changed the meaning. Media of every kind has always been prone to at least some bias. Some publications clearly lean toward a particular agenda, but even the most earnest attempts at neutrality are colored by bias. Deepfake technology goes a step further: it creates a platform for the production and dissemination of outright lies.

What exactly is a deepfake?

The technique underlying deepfakes, the generative adversarial network (GAN), was invented in 2014 by Ian Goodfellow, at the time a PhD student and now a researcher at Apple. Deepfake technology allows people to create a completely believable, nearly perfect representation of someone saying or doing something they didn’t actually say or do, and it is developing at an astonishing pace. All one needs is a good processor, a powerful graphics card, and a whole lot of time. Currently, a deepfake creator needs to gather many photos of both the target and the actor to accomplish believable facial mapping, but as rapidly as the technology is developing, it’s conceivable that in the near future one would need only a few photos from different angles.
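
For the curious, here is roughly what that “adversarial” idea looks like in code. This is a hedged, toy sketch of a GAN training loop in Python (using the PyTorch library), forging simple numbers instead of faces; all the names and settings here are illustrative, not taken from any real deepfake tool.

    import torch
    import torch.nn as nn

    # Generator: turns random noise into a fake "sample" (here one number,
    # standing in for a face image). Discriminator: guesses real vs. fake.
    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: numbers near 4
        fake = G(torch.randn(64, 8))            # the generator's forgeries

        # Train the discriminator to label real samples 1 and fakes 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Train the generator to produce fakes the discriminator calls real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    print(G(torch.randn(5, 8)).detach())  # forgeries should cluster near 4

The two networks improve by competing: every time the discriminator learns a tell, the generator learns to hide it. Swap the numbers for video frames of a face and you have the basic engine behind a deepfake.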

To be clear, a deepfake creator doesn’t necessarily require expensive equipment or even an understanding of this technology to do real damage to someone. In May of 2019, a video of Speaker Nancy Pelosi was released in which she appeared to be intoxicated or otherwise impaired. The video was the simplest kind of fake, sometimes called a “shallowfake”: it had merely been slowed down to about 75% of its original speed. Then our idiot president tweeted the video to his followers. The video has since been shown to be doctored, but the damage was already done.
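
To give a sense of just how cheap that trick is, here is a hedged sketch in Python that shells out to the free ffmpeg tool to slow a clip to 75% speed. It assumes ffmpeg is installed, and the file names are placeholders.

    import subprocess

    def slow_down(src: str, dst: str, speed: float = 0.75) -> None:
        # A "shallowfake" in one command: stretch the video timestamps and
        # slow the audio tempo to match, so speech sounds sluggish.
        subprocess.run([
            "ffmpeg", "-i", src,
            "-filter:v", f"setpts=PTS/{speed}",
            "-filter:a", f"atempo={speed}",
            dst,
        ], check=True)

    slow_down("speech.mp4", "slurred_speech.mp4")

That’s the whole attack. No GPU, no training data, no special skills.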

Deepfakes aren’t used exclusively to manipulate the public on politics. In fact, the vast majority aren’t political at all: as of July 2019, 96% of deepfakes online were pornographic, and of those, 99% mapped the face of a female celebrity onto a porn performer. It isn’t hard to imagine the potential for revenge porn here. But that doesn’t make the technology any less worrying.

How much do we need to worry about deepfakes?

We’ve already seen how disruptive even the simplest, lowest-tech fake can be to a single political figure. Now imagine what a convincing deepfake could do if it were released, say, right before a critical election. Or used to coerce someone. Even audio can be deepfaked. In March of 2019, the chief executive of the UK subsidiary of a German energy company received a call from a man he believed was the CEO of the German parent company. On the instructions given in that phone call, he transferred 220,000 euros to a Hungarian bank account. The voice on that call is believed to have been generated with A.I. to simulate the German CEO’s voice.

Elections are often won by the narrowest of margins. A strategically placed deepfake with nefarious intentions doesn’t have to fool millions of people. In the 2016 election, Hillary Clinton won the popular vote by a significant margin of 2.9 million votes, but it was a mere 80,000 or so votes across three states that decided the Electoral College for Donald Trump. Given that about two-thirds of Americans get at least some of their news from social media, and that a deepfake would only need to fool a hundred thousand or so people, it’s clear that a well-crafted deepfake gone viral absolutely could throw an election.

We also need to be concerned about what it could mean for society as a whole to further erode the public’s ability to believe the media that is presented to them. What happens when we not only no longer know what to believe, but simply stop trying to tell the difference?

So, yeah. We need to worry a lot.

How can you spot a deepfake?

Welp. Spotting deepfakes is only going to get harder and harder, especially for us mere humans. The moment a weakness is spotted, programmers find a way to fix it. For example, in 2018 researchers pointed out that deepfake faces typically didn’t blink. Almost immediately after, deepfake faces began blinking.
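
For a flavor of what those researchers were measuring, here is a hedged sketch of the standard “eye aspect ratio” blink test, in Python. It assumes some face-landmark detector has already given you six (x, y) points around each eye for every video frame, and the 0.2 threshold is an illustrative assumption, not a universal constant.

    import math

    def eye_aspect_ratio(eye):
        # eye: six (x, y) landmarks ordered around the eye contour.
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])
        # Two vertical eye openings divided by the horizontal width:
        # the ratio collapses toward zero when the eye closes.
        return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / \
               (2.0 * dist(eye[0], eye[3]))

    def count_blinks(ears, threshold=0.2, min_frames=2):
        # Count runs of consecutive frames where the ratio dips below threshold.
        blinks, run = 0, 0
        for ear in ears + [1.0]:  # sentinel value flushes a trailing run
            if ear < threshold:
                run += 1
            else:
                if run >= min_frames:
                    blinks += 1
                run = 0
        return blinks

A talking head that never blinks across thousands of frames is a red flag. Of course, as the article notes, this particular tell has already been patched.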

Ironically, the way we will most likely learn to spot deepfakes, which are created with A.I., is with more and better A.I. Last year, Microsoft, Facebook, and Amazon jointly funded the Deepfake Detection Challenge, a worldwide competition to see who could build the best deepfake detection technology. The submission window closed on March 31 of this year, and the million-dollar prize pool now hangs in the balance as servers test the efficacy of the submissions.

This makes me slightly less terrified… uh, I guess?

Deepfake technology is a culmination of rapid advances in artificial intelligence and computer graphics. The fact is, there are massive moral implications to deepfake technology, and real harm that can be done. Not only does detection technology have to keep up; so does the law. I’m not sure many of us expected to have to worry about people producing digital counterfeits of real human beings, of us, but we need to seriously consider whether doing this should be legal at all. I sure as hell don’t want my likeness being used in any form without my explicit consent. Would you?
