A common warning in the wildly popular, award-winning HBO series Game of Thrones was that “the White Walkers are coming,” a reference to a race of ice creatures that posed a serious threat to humanity.
Deepfakes should be thought of in the same way, argues Ajay Amrani, president and head of the Americas for biometric authentication company iProov.
“There’s been growing concern about deepfakes for the last few years,” he told VentureBeat. “What we’re seeing now is that winter is here.”
In fact, a new iProov survey released today found that nearly half of organizations (47%) say they have encountered a deepfake, and that 70% believe deepfakes created with generative AI will have a significant impact on their organization. Yet at the same time, only 62% of respondents say their company is taking the threat seriously.
“This is becoming a real concern,” Amrani said. “You can literally create completely fictional characters and make them look how you want them to, speak how you want them to, and react in real time.”
Deepfakes are a threat on par with social engineering, ransomware, and password breaches
Deepfakes – usually maliciously fabricated avatars, images, voices and other media delivered through photos, videos, phone calls and Zoom calls – have become incredibly sophisticated and often undetectable in just a short space of time.
This poses a major threat to organizations and governments. In one case, a finance worker at a multinational firm paid out $25 million after being tricked on a deepfake video call by someone posing as the company’s chief financial officer. In another notable case, cybersecurity firm KnowBe4 discovered that a new employee was actually a North Korean hacker who had made it through the hiring process using deepfake technology.
“We can now create fictional worlds that are completely undetectable,” Amrani said, adding that iProov’s findings were “really astonishing.”
Interestingly, there are regional differences when it comes to deepfakes: for example, organizations in Asia Pacific (51%), Europe (53%) and Latin America (53%) are significantly more likely to have encountered a deepfake than organizations in North America (34%).
Amrani noted that many of the malicious actors are internationally based and target local territories first, “and that trend is expanding globally, especially since the internet is not geographically constrained.”
The survey also found that deepfakes are tied for third place among respondents’ biggest security concerns: password breaches top the list (64%), followed by ransomware (63%), with phishing/social engineering attacks and deepfakes tied at 61%.
“It’s very hard to trust anything digital,” Amrani said. “You have to question everything you see online. The call to action here is that people really need to start putting up defenses to prove they’re the right person on the other end.”
Amrani noted that threat actors have become extremely adept at creating deepfakes thanks to increased processing speed and bandwidth, ever-faster ways to share information and code via social media and other channels and, of course, generative AI.
Some simple measures have been implemented to combat the threat, such as embedded software on video-sharing platforms that uses AI to flag altered content, “but that’s just one step into a very deep swamp,” Amrani said. Meanwhile, CAPTCHA-style systems have grown ever more convoluted.
“The concept is a random challenge to prove you’re a living human,” he said. But such challenges are becoming increasingly difficult even for humans to solve, especially older people and those with cognitive, vision or other impairments (or anyone who has, say, never seen a seaplane and so couldn’t identify one when challenged to).
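Stripped to its essentials, any such check is a randomized challenge-response protocol: the verifier issues an unpredictable, single-use challenge, and a scripted or pre-recorded attacker cannot produce a matching answer within the allowed window. The toy Python sketch below illustrates that general shape; the function names and the 30-second window are hypothetical, not any vendor’s actual protocol.

```python
import secrets
import time

# Toy sketch of the generic idea behind challenge-based human checks:
# an unpredictable, single-use challenge plus a short response window.
# The names and the 30-second limit are illustrative assumptions.

def issue_challenge() -> tuple[str, float]:
    """Return a random challenge and the time it was issued."""
    return secrets.token_hex(8), time.monotonic()

def verify_response(challenge: str, issued_at: float, response: str,
                    max_seconds: float = 30.0) -> bool:
    """Accept only a timely response that exactly matches the challenge."""
    timely = (time.monotonic() - issued_at) <= max_seconds
    return timely and secrets.compare_digest(challenge, response)

challenge, issued_at = issue_challenge()
print(verify_response(challenge, issued_at, challenge))       # True: correct and timely
print(verify_response(challenge, issued_at, "stale-replay"))  # False: wrong answer
```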
Instead, “biometrics is a simple way to solve these problems,” Amrani said.
In fact, iProov’s research found that three-quarters of organizations are adopting facial recognition as their primary defense against deepfakes, followed by multi-factor authentication and device-based biometric tools (67%). Companies are also educating employees on how to spot deepfakes and the potential risks associated with them (63%). Additionally, they are conducting regular audits of their security measures (57%) and regularly updating their systems (54%) to combat the deepfake threat.
iProov also evaluated the effectiveness of various biometric authentication methods in combating deepfakes, and here is the ranking:
- Fingerprint: 81%
- Iris: 68%
- Facial: 67%
- Advanced behavioral: 65%
- Palm: 63%
- Basic behavioral: 50%
- Voice: 48%
But not all authentication tools are equal, Amrani noted. Some are clunky and not very comprehensive, requiring users to move their heads from side to side, for example, or to raise and lower their eyebrows. Threat actors using deepfakes can easily circumvent such challenges, he said.
Meanwhile, iProov’s AI-powered tool uses light from the device’s screen to flash 10 random colors onto the user’s face and analyzes the reflected light for the finer details of humanness, including skin, lips, eyes, nose, pores, sweat glands and hair follicles. If the results are not as expected, Amrani explained, it could mean the threat actor is holding up a printed photo or a phone-screen image, or wearing a mask, none of which reflect light the way human skin does.
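iProov has not published its implementation, but the general shape of such a color challenge-response liveness check can be sketched as follows. Everything in this sketch (the palette, the 10-flash sequence, the 80% pass threshold and the hue-matching stand-in for real reflectance analysis) is an illustrative assumption, not iProov’s actual algorithm.

```python
import random

# Hypothetical sketch of a screen-color challenge-response liveness
# check. Not iProov's algorithm; all names and thresholds are assumed.

PALETTE = ["red", "green", "blue", "yellow", "cyan",
           "magenta", "orange", "purple", "white", "pink"]

class SimulatedDevice:
    """Stand-in for a phone screen plus front-facing camera.

    A live face reflects whatever color the screen emits; a printed
    photo, a replayed phone-screen video or a mask will not track a
    random color sequence with skin-like reflectance.
    """
    def __init__(self, is_live_face: bool):
        self.is_live_face = is_live_face
        self._last_color = None

    def flash(self, color: str) -> None:
        # The screen illuminates the subject with the challenge color.
        self._last_color = color

    def observed_reflection(self) -> str:
        # Real systems analyze skin, lips, eyes, pores and so on; this
        # sketch reduces all of that to whether the reflected hue
        # tracks the challenge color.
        if self.is_live_face:
            return self._last_color
        return random.choice(PALETTE)  # a spoof can't follow the sequence

def passes_liveness(device: SimulatedDevice, flashes: int = 10,
                    threshold: float = 0.8) -> bool:
    """Flash a random color sequence and score how well the face tracks it."""
    challenge = [random.choice(PALETTE) for _ in range(flashes)]
    matches = 0
    for color in challenge:
        device.flash(color)
        if device.observed_reflection() == color:
            matches += 1
    return matches / flashes >= threshold

print(passes_liveness(SimulatedDevice(is_live_face=True)))   # True
print(passes_liveness(SimulatedDevice(is_live_face=False)))  # almost always False
```

Because the color sequence is random and must be reflected in real time, a pre-recorded video or static spoof cannot anticipate it; that unpredictability is what makes the approach hard to defeat.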
The company has deployed the tool across commercial and government sectors, and Amrani said it is simple, quick and “very secure,” with an “extremely high pass rate” of over 98%.
“Overall, there is a growing recognition around the world that this is a big problem,” Amrani said. “Fighting deepfakes requires a global effort because bad actors are all over the world. Now is the time to arm ourselves and fight this threat.”