
iProov: 70% of organizations will be majorly affected by generative AI deepfakes



In the wildly popular and award-winning HBO series “Game of Thrones,” a common warning was that “the white walkers are coming” – referring to a race of ice creatures that posed a grave threat to humanity.

We should view deepfakes the same way, says Ajay Amlani, president and head of Americas at the biometric authentication company iProov.

“There has been general concern about deepfakes in recent years,” he told VentureBeat. “What we see now is that winter is here.”

About half of organizations (47%) recently surveyed by iProov say they have experienced a deepfake. The company’s new research released today also shows that nearly three-quarters of organizations (70%) believe generative AI-created deepfakes will have a major impact on their organization. At the same time, only 62% say their company is taking the threat seriously.

“This is becoming a real concern,” Amlani said. “You can literally create a completely fictional person, make them look however you want, sound however you want, and respond in real time.”

Deepfakes rank alongside social engineering, ransomware, password breaches

In just a short time, deepfakes – fabricated avatars, images, voices and other media delivered via photos, videos, phone calls and Zoom calls, usually with malicious intent – have become incredibly sophisticated and often undetectable.

This poses a major threat to organizations and governments. For example, a finance employee at a multinational firm paid out $25 million after being duped by a deepfake video call impersonating the company’s chief financial officer. In another notable example, cybersecurity company KnowBe4 discovered that a new hire was actually a North Korean hacker who had made it through the recruitment process using deepfake technology.

“We can now create fictionalized worlds that go completely unnoticed,” says Amlani, adding that the findings of iProov’s research were “quite staggering.”

Interestingly enough, there are regional differences when it comes to deepfakes. For example, organizations in Asia Pacific (51%), Europe (53%) and Latin America (53%) are significantly more likely than organizations in North America (34%) to have experienced a deepfake.

Amlani pointed out that many malicious actors are internationally based and prey on local areas first. “That’s growing worldwide, especially because the Internet is not geographically bound,” he said.

The research also shows that deepfakes are now tied for third among the biggest security concerns. Password breaches ranked highest (64%), followed closely by ransomware (63%), with phishing/social engineering attacks and deepfakes tied at 61%.

“It’s very difficult to trust anything digital,” says Amlani. “We need to question everything we see online. The call to action here is that people really need to start building defense mechanisms to prove that the person is the right person.”

Threat actors have become so good at creating deepfakes thanks to increased processing speeds and bandwidth, the ability to share information and code ever more rapidly via social media and other channels – and, of course, generative AI, Amlani pointed out.

While some simplistic measures have been taken to tackle threats – such as embedded software on video-sharing platforms that attempts to flag AI-altered content – “that’s just one step in a very deep pond,” says Amlani. At the other end of the spectrum are “crazy systems” such as captchas that are becoming increasingly challenging.

“The concept is a random challenge to prove you are a living human,” he said. But it is becoming increasingly difficult for people to verify even themselves, especially the elderly and those with cognitive, vision or other impairments (or people who, for example, cannot identify a seaplane in a challenge because they have never seen one before).

Instead, “biometrics are simple ways to solve this,” says Amlani.

In fact, iProov found that three-quarters of organizations use facial biometrics as a primary defense against deepfakes. This is followed by multi-factor authentication and device-based biometric tools (67%). Companies are also educating employees on how to spot deepfakes and the potential risks (63%) associated with them. In addition, they regularly audit security measures (57%) and regularly update systems (54%) to address threats from deepfakes.

iProov also assessed the effectiveness of various biometric methods in the fight against deepfakes. Their rankings:

  • Fingerprint 81%
  • Iris 68%
  • Face 67%
  • Advanced behavioral 65%
  • Palm 63%
  • Basic behavioral 50%
  • Voice 48%

But not all authentication tools are created equal, Amlani noted. Some are cumbersome and not that comprehensive; for example, users must move their heads left and right, or raise and lower their eyebrows. But threat actors using deepfakes can easily get around this, he pointed out.

iProov’s AI-powered tool, on the other hand, uses light from the device’s screen to reflect 10 random colors onto the human face. This scientific approach analyzes skin, lips, eyes, nose, pores, sweat glands, follicles and other details of true humanity. If the outcome doesn’t come back as expected, Amlani explains, it could be a threat actor holding up a physical photo or an image on a cell phone, or wearing a mask – none of which can reflect light the way human skin does.
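The underlying challenge-response idea can be sketched in a few lines. To be clear, this is an illustrative toy, not iProov’s actual algorithm: the verifier flashes an unpredictable sequence of screen colors and accepts only if the reflections observed on the face match that sequence in order – something a pre-recorded video or printed photo cannot anticipate. The palette, the challenge length of 10 and the exact-match check are all assumptions made for this sketch.

```python
import secrets

# Toy palette of colors the screen could flash; real systems would use
# calibrated light values, not color names.
PALETTE = ["red", "green", "blue", "yellow", "cyan", "magenta", "orange", "white"]

def make_challenge(length: int = 10) -> list[str]:
    """Pick a cryptographically random sequence of colors to flash.

    Because the sequence is generated at verification time, a replayed
    recording cannot have "seen" it in advance.
    """
    return [secrets.choice(PALETTE) for _ in range(length)]

def verify_liveness(challenge: list[str], observed: list[str]) -> bool:
    """Pass only if the reflections measured on the face match the
    challenge in both content and order. (A real sensor would tolerate
    measurement noise rather than require exact equality.)"""
    return len(observed) == len(challenge) and all(
        c == o for c, o in zip(challenge, observed)
    )

challenge = make_challenge()
print(verify_liveness(challenge, challenge))       # genuine, live reflection
print(verify_liveness(challenge, challenge[:-1]))  # incomplete replay fails
```

The security of this pattern rests on the challenge being unpredictable and single-use; matching the reflection sequence is what separates a live face from a static photo or looping video.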

The company is deploying its tool across the commercial and government sectors, he noted, calling it easy and fast, yet “very secure.” It has what he called an “extremely high success rate” (north of 98%).

All told, “there is a global realization that this is a huge problem,” Amlani said. “A global effort is needed to combat deepfakes because the bad actors are global. It is time to arm ourselves and fight this threat.”