Staying Ahead of AI-Powered Deception

Deepfakes: The Growing Threat to Authentication and Security

The Rise of Deepfakes

As AI continues to improve, humans are finding it increasingly difficult to spot deepfakes reliably. This poses a serious problem for any form of authentication that relies on images of a trusted individual.

Early Beginnings of Deepfakes

The first deepfake can be traced back to 1997, when a project called Video Rewrite demonstrated that it was possible to reanimate video of someone’s face to insert words that they did not say. Early deepfakes required considerable technological sophistication on the part of the user, but that’s no longer true in 2025. Thanks to generative AI technologies and techniques, like diffusion models that create images and generative adversarial networks (GANs) that make them look more believable, it’s now possible for anyone to create a deepfake using open source tools.
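The adversarial dynamic behind GANs can be illustrated with a minimal NumPy sketch of the two competing objectives. This is a toy example of the general minimax idea, not any production deepfake pipeline; the data and the discriminator here are invented for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy discriminator: assigns high "real" probability to samples near 0,
# where the (hypothetical) real data lives.
def discriminator(x, w=2.0):
    return sigmoid(-w * np.abs(x))

real_samples = np.zeros(4)       # real data clustered at 0
fake_samples = np.full(4, 3.0)   # an untrained generator's output, far off

# Discriminator objective: maximize log D(real) + log(1 - D(fake)),
# expressed here as minimizing the negative of that sum.
d_loss = -(np.log(discriminator(real_samples))
           + np.log(1.0 - discriminator(fake_samples))).mean()

# Generator objective: fool the discriminator, i.e. maximize log D(fake).
g_loss = -np.log(discriminator(fake_samples)).mean()
```

Training alternates between the two: the discriminator learns to tell real from fake, and the generator learns to drive its samples toward whatever the discriminator accepts, which is what makes GAN-refined deepfakes look believable.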

The Impact on Society and Security

The ready availability of sophisticated deepfake tools has serious repercussions for privacy and security. Society suffers when deepfake technology is used to create fake news, hoaxes, child sexual abuse material, and revenge porn. Bills have been proposed in the U.S. Congress and several state legislatures that would criminalize the use of the technology in this manner.

The Financial Impact and Fraud

The impact on the financial world is also quite significant, in large part because of how much we rely on authentication for critical services, like opening a bank account or withdrawing money. While using biometric authentication mechanisms, such as facial recognition, can provide greater assurance than passwords or multi-factor authentication (MFA) approaches, the reality is that any authentication mechanism that relies on images or video in part to prove the identity of an individual is vulnerable to being spoofed with a deepfake.

Fraudsters’ Paradise

Fraudsters, ever the opportunists, have readily picked up deepfake tools. A recent study by Signicat found that deepfakes were used in 6.5% of fraud attempts in 2024, up from less than 1% of attempts in 2021 – an increase of more than 2,100% in nominal terms. Over the same period, the study found, fraud in general was up 80% and identity fraud was up 74%.

The Threat is Real and Growing

The threat posed by deepfakes is not theoretical, and fraudsters are actively targeting large financial institutions. Numerous scams were cataloged in the Financial Services Information Sharing and Analysis Center’s 185-page report. For instance, a fake video of an explosion at the Pentagon in May 2023 caused the Dow Jones to fall 85 points in four minutes. There is also the remarkable case of the North Korean operative who used fake identification documents to fool KnowBe4 – the security awareness firm co-founded by the hacker Kevin Mitnick (who died in 2023) – into hiring him in July 2024. "If it can happen to us, it can happen to almost anyone," KnowBe4 wrote in its blog post. "Don’t let it happen to you."

Counter-Measures

However, some approaches to countering the deepfake threat show promise. iProov, a biometric authentication software company, uses a patented "flashmark" technology during sign-in to detect deepfakes. By flashing different colored lights from the user’s device onto his or her face, iProov can verify the "liveness" of the individual, determining whether the face is real or a deepfake or a face-swap.
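The principle behind such challenge-response liveness checks can be sketched in a few lines: flash an unpredictable color sequence and check whether the captured face actually responds to it. This is a simplified illustration of the general idea, not iProov's actual algorithm; all function names and parameters here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Challenge: a random sequence of screen colors, one RGB triple per frame.
challenge = rng.uniform(0.0, 1.0, size=(30, 3))

def capture_live_face(colors, albedo=0.6, ambient=0.2, noise=0.02):
    # A real face reflects the emitted light, so its observed brightness
    # tracks the challenge sequence frame by frame.
    return albedo * colors + ambient + rng.normal(0.0, noise, colors.shape)

def capture_replayed_video(colors, noise=0.02):
    # A pre-recorded or synthesized video cannot react to this session's
    # colors; its brightness is independent of the challenge.
    return 0.5 + rng.normal(0.0, noise, colors.shape)

def liveness_score(colors, frames):
    # Correlate the emitted color sequence with the observed response.
    return np.corrcoef(colors.ravel(), frames.ravel())[0, 1]

live_score = liveness_score(challenge, capture_live_face(challenge))
spoof_score = liveness_score(challenge, capture_replayed_video(challenge))
```

Because the color sequence is random and chosen per session, a replayed deepfake cannot have the right reflections baked in, which is what makes this class of defense hard to spoof.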

Conclusion

The AI technology that enables deepfake attacks will only improve. That puts pressure on companies to fortify their authentication processes now or risk letting the wrong people into their operations.

FAQs

Q: What is a deepfake?
A: A deepfake is a photograph, video, or audio recording that has been deceptively altered using artificial intelligence.

Q: How do deepfakes work?
A: Deepfakes use generative AI technologies and techniques, like diffusion models that create images and generative adversarial networks (GANs) that make them look more believable.

Q: What are the consequences of deepfakes?
A: Deepfakes can be used to create fake news, hoaxes, child sexual abuse material, and revenge porn, among other malicious activities.

Q: How can companies protect themselves from deepfakes?
A: Companies can use biometric authentication mechanisms, such as facial recognition with liveness detection, and layer in additional security measures, like multi-factor authentication, to reduce the risk of deepfake attacks.
