In the past, a deepfake was more of an amusing parlor trick than anything else. Back then, the concerning implications surrounding it seemed like a distant possibility. Now, the rise of generative artificial intelligence has propelled this technology into dangerous territory.
What Is a Deepfake?
A deepfake is a synthetic photo, video or audio clip that imitates a person’s likeness, making it appear they did or said something they never have. Sometimes, deepfakes are used to falsify events or fabricate places entirely. While they can be lighthearted and funny, they’re often malicious. You’ve probably seen them in action, considering they’re all over the most popular social media platforms.
Traditionally, deepfakes leveraged machine learning technology to digitally manipulate existing content. However, the birth of generative AI nearly made that approach obsolete. Now, anyone can create replicas or fakes from scratch.
The line between deepfakes and generative AI has grown blurry. In the past, people considered deepfakes to be exclusively imitation-based — like the infamous Obama video Jordan Peele created in 2018. Now, people also use the term to refer to algorithm-created content.
Frankly, delving too deep into generative models opens up an entirely separate discussion. For the sake of clarity, we’ll mainly focus on deepfakes as digital manipulation rather than purely AI-created content.
How Are Deepfakes Made?
Making a deepfake is surprisingly simple. Most rely on a generative adversarial network (GAN), a deep learning framework in which a content generator and a discriminator compete against each other.
Basically, the generator repeatedly produces content and tries to make its imitation look like the original. It shoots its work over to the discriminator to see if its synthetic re-creation can pass as the real thing. It evolves with every rejection until it learns to make something acceptable.
For example, let’s say the generator wants to mimic a picture of a golden retriever. At first, it keeps producing image sets resembling chocolate chip cookies. However — after the discriminator rejects its bad attempts and approves its good ones — it adapts. Eventually, it creates lifelike pictures of dogs, and no one can tell the difference between the original and imitation.
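That push-and-pull can be sketched in a few lines of code. The toy example below is an illustrative assumption, not anyone’s production pipeline: a one-dimensional GAN in plain NumPy where the “real” data is just numbers drawn from a bell curve centered at 4, the generator is two trainable scalars, and the discriminator is a tiny logistic regressor. Image-scale deepfake models follow the same loop, only with far larger networks.

```python
import numpy as np

# Toy 1-D GAN: the generator learns to mimic "real" samples from N(4, 1).
# All names and numbers here are illustrative, not a production recipe.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real" data the generator tries to imitate.
    return rng.normal(4.0, 1.0, n)

# Generator: noise z ~ N(0, 1) -> g_a * z + g_b (two trainable scalars).
g_a, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_c), a tiny logistic regressor.
d_w, d_c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(5000):
    # Discriminator step: ascend E[log D(real)] + E[log(1 - D(fake))],
    # i.e. learn to approve real samples and reject fakes.
    x_real = real_batch(n)
    x_fake = g_a * rng.normal(0.0, 1.0, n) + g_b
    p_real = sigmoid(d_w * x_real + d_c)
    p_fake = sigmoid(d_w * x_fake + d_c)
    d_w += lr * (np.mean((1 - p_real) * x_real) - np.mean(p_fake * x_fake))
    d_c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend E[log D(fake)], i.e. adapt after each
    # rejection until the discriminator accepts the imitation.
    z = rng.normal(0.0, 1.0, n)
    p_fake = sigmoid(d_w * (g_a * z + g_b) + d_c)
    g_a += lr * np.mean((1 - p_fake) * d_w * z)
    g_b += lr * np.mean((1 - p_fake) * d_w)

# After training, the fake samples typically drift toward the real mean of 4.
fakes = g_a * rng.normal(0.0, 1.0, 10000) + g_b
print(f"mean of generated samples: {np.mean(fakes):.2f}")
```

The generator never sees the real data directly; it only gets feedback through the discriminator’s verdicts, which is exactly the rejection-and-adaptation loop described above.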
Once the algorithm can produce convincing images, some basic tech wizardry overlays the result onto a face. In truth, the technology is similar to social media face filters and is relatively easy to figure out.
The Repercussions of AI-Powered Deepfake Content
There’s more to AI-powered deepfake content than the memes. Unfortunately, this technology has a dark side.
1. Misinformation
You’d think the generations growing up with Photoshop and artificial intelligence would know how to spot a deepfake. In reality, every age group is susceptible to digital manipulation. Considering half of the people online seem unable to pick up on sarcasm or tell when something is a skit, it’s not a stretch to assume deepfake misinformation is rampant.
It’s fairly easy to spot a traditional deepfake. However, neural networks and generative models make spotting one borderline impossible. For instance, an AI-generated picture of an explosion at the Pentagon went viral on Twitter and reportedly triggered a brief stock market dip that momentarily erased around $500 billion in market value.
2. Malicious Advertisements
Usually, people can instantly spot malware-filled ads. Whether they have huge, fake “Download Now” buttons or just look terrible, they’re obvious. However, deepfakes can trick even the most eagle-eyed internet users.
For example, a deepfake video of Mr. Beast — a wildly popular content creator — successfully passed moderation checks and circulated on multiple platforms. The advertisement claimed the first 10,000 people who clicked the link would receive the newest iPhone for a couple of bucks.
That kind of outlandish promise didn’t immediately raise red flags because Mr. Beast’s content revolves around massive, generous giveaways. In a similar situation, Tom Hanks had to publicly state he wasn’t selling a dental plan after a deepfake ad went viral.
3. Scam Calls
You’d think digitally replacing someone in a photo or video would be difficult. In reality, a minute of audio and a single photograph may be all it takes to create a deepfake. Scammers already pose as kidnappers or authority figures when they call, so it’s not wild to assume they’ll use this technology to elevate their current strategies.
4. Public Embarrassment
For now, most deepfakes involve public figures like celebrities or politicians. However, more scammers will target average people since AI makes the technology more accessible. Any photo, video or audio clip you post online will become fair game.
Even though public embarrassment isn’t the worst thing, it still has serious implications. After all, anyone with time and a grudge could make it look like you broke the law, cheated or badmouthed your boss. Imagine a world where you have to worry about a random deepfake costing you a job or relationship — that reality isn’t too far away.
5. Cyber Attacks
A cybercriminal can easily use an AI-powered deepfake to pose as someone’s colleague, superior or client. After all, it only takes a few minutes to create a convincing video with audio. Instead of silently exploiting software vulnerabilities, they’ll be able to enter freely and take whatever they want.
Most people assume this kind of cyber attack will only become common in the distant future. Frankly, it’s already happening. In 2022, 66% of cybersecurity professionals had to respond to security incidents caused by deepfake use.
6. Political Propaganda
It’s getting more challenging to tell reality and digital manipulation apart. Unfortunately, politicians and shadow groups are using that to their advantage. In Slovakia, incriminating audio of one of the candidates began circulating just days before the national election. In the clip, they spoke about their plans to rig the vote. The catch was that they had never said those words; the clip was deepfake propaganda.
7. Extortion
Do you think scammers will stop at public embarrassment when they have the means to extort you for everything you have? If they can find even a handful of your pictures, they can create a convincing deepfake. From there, they could demand money, favors, illicit photos or access to your company’s sensitive data.
When you consider deepfake extortion on a larger scale, it quickly becomes concerning. We live in a world where high-ranking politicians, world leaders, judges or military members might be getting blackmailed with synthetic content.
Stay Aware Online and Look Out for Deepfakes
More often than not, a deepfake is convincing. Although most people understand what they see on the internet might not be real, many still jump to conclusions right away. Whenever you open up a video, image or audio clip, look for slight distortions or glitches to determine if it’s real.
You should also check who posted the content, do a reverse image search, look for tell-tale signs of generative AI influence — like too many fingers or unexplained warping — and see if any authoritative sources confirmed the post’s validity.