Deepfake attacks will cost $40 billion by 2027



Deepfake-related losses are one of the fastest-growing forms of adversarial AI and are expected to rise from $12.3 billion in 2023 to $40 billion by 2027, an astonishing 32% compound annual growth rate. Deloitte expects deepfakes to keep proliferating in the years ahead, with banking and financial services a primary target.
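As a quick back-of-the-envelope check (my calculation, not a figure from the Deloitte report), compounding the 2023 baseline over the four years to 2027 implies a rate in the low-to-mid 30s, roughly in line with the reported 32% once rounding and Deloitte's exact endpoints are accounted for:

```python
# Implied compound annual growth rate from $12.3B (2023) to $40B (2027),
# i.e., four years of compounding.
cagr = (40 / 12.3) ** (1 / 4) - 1
print(f"{cagr:.1%}")  # ~34.3%, in the same ballpark as Deloitte's ~32%
```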

Deepfakes are at the cutting edge of adversarial AI attacks, growing 3,000% in the last year alone. Deepfake incidents are predicted to increase by 50% to 60% in 2024, with 140,000 to 150,000 cases expected globally this year.

The latest generation of generative AI apps, tools and platforms gives attackers everything they need to create deepfake videos, impersonated voices and fraudulent documents quickly and at very low cost. Pindrop’s 2024 Voice Intelligence and Security Report estimates that deepfake fraud aimed at contact centers is costing an estimated $5 billion annually. Their report underscores how severe a threat deepfakes are to banking and financial services.

Bloomberg reported last year that “there is already an entire cottage industry on the dark web selling scam software from $20 to thousands of dollars.” A recent infographic based on Sumsub’s 2023 Identity Fraud Report provides a global view of the rapid growth of AI-powered fraud.

 



Source: Statista, “How Dangerous Are Deepfakes and Other AI-Powered Fraud?” March 13, 2024

Companies are unprepared for deepfakes and adversarial AI

Adversarial AI creates new attack vectors no one sees coming, producing a more complex, nuanced threat landscape that prioritizes identity-driven attacks.

Unsurprisingly, one in three companies has no strategy to address the risks of an adversarial AI attack, which would most likely begin with deepfakes of their key executives. Ivanti’s latest research shows that 30% of companies have no plans for identifying and defending against adversarial AI attacks.

Ivanti’s 2024 State of Cybersecurity Report found that 74% of companies surveyed are already seeing evidence of AI-powered threats. The vast majority, 89%, believe AI-powered threats are just getting started. Of the CISOs, CIOs and IT leaders Ivanti interviewed, 60% are concerned their enterprises are not prepared to defend against AI-powered threats and attacks. Using deepfakes as part of an orchestrated strategy that includes phishing, software vulnerabilities, ransomware and API-related vulnerabilities is becoming increasingly common, and it aligns with the threats security professionals expect to become more dangerous because of generative AI.

Source: Ivanti 2024 State of Cybersecurity Report

Attackers are focusing deepfake efforts on CEOs

VentureBeat regularly hears from CEOs of cybersecurity and enterprise software companies, who prefer to remain anonymous, about how deepfakes have progressed from easily identifiable fakes to recent videos that look legitimate. Voice and video deepfakes of industry executives appear to be a favorite attack strategy, aimed at defrauding their companies out of millions of dollars. Making the threat worse is how aggressively nation-states and large-scale cybercriminal organizations are doubling down on developing, hiring and expanding their expertise in generative adversarial network (GAN) technologies. Of the thousands of CEO deepfake attempts that have occurred this year alone, the one targeting the CEO of the world’s largest advertising agency shows just how sophisticated attackers have become.
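For readers unfamiliar with the term, the sketch below shows the generator-versus-discriminator training loop that gives GANs their name. It is a minimal, illustrative PyTorch example with toy model sizes and random stand-in data; real deepfake pipelines are far larger and operate on images, video or audio.

```python
# Minimal GAN training loop: a generator learns to produce samples the
# discriminator cannot distinguish from "real" data. Toy sizes throughout.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g., a flattened 28x28 image

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # real-vs-fake logit
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real_batch = torch.rand(32, data_dim) * 2 - 1  # stand-in for real samples

for step in range(100):
    # 1) Discriminator step: label real samples 1, generated samples 0.
    fake = generator(torch.randn(32, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(32, 1)) + \
             bce(discriminator(fake), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator step: try to make the discriminator output "real" (1).
    g_loss = bce(discriminator(generator(torch.randn(32, latent_dim))),
                 torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```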

In a recent Tech News Briefing with The Wall Street Journal, CrowdStrike CEO George Kurtz explained how improvements in AI are helping cybersecurity professionals defend systems, and commented on how attackers are using it as well. Kurtz spoke with WSJ reporter Dustin Volz about AI, the 2024 U.S. elections and the threats posed by China and Russia.

“Today’s deepfake technology is so good. I think this is one of the areas that really concerns you. I mean, in 2016 we were tracking this, and you saw people actually having conversations with just bots, and that was in 2016. And they’re literally arguing or they’re promoting their cause, and they’re having an interactive conversation, and it’s like there’s not even anyone behind it. So I think it’s pretty easy for people to buy into the reality, or there’s a narrative that we want to get behind, but a lot of it can be driven and has been driven by other nation states,” Kurtz said.

CrowdStrike’s intelligence team has invested a significant amount of time in understanding the nuances of what makes a deepfake convincing and what direction the technology is moving in to achieve maximum impact on viewers.

Kurtz continued: “And what we’ve seen in the past, we’ve spent a lot of time investigating this with our CrowdStrike intelligence team, is that it’s kind of like a pebble in a pond. Like you take a topic or hear a topic, anything that has to do with the geopolitical environment, and the pebble falls into the pond, and then all these waves ripple out. And it is this amplification that is taking place.”

CrowdStrike is known for its deep expertise in AI and machine learning (ML) and for its unique single-agent model, which has proven effective in driving its platform strategy. With such deep expertise in house, it is understandable that its teams would experiment with deepfake technologies.

“And now, in 2024, with the ability to create deepfakes, some of our in-house guys made some funny parody videos of me, just to show me how scary it is, and you wouldn’t be able to tell that it wasn’t me in the video. So I think that’s one of the areas I’m really concerned about,” Kurtz said. “There are always concerns about infrastructure and things like that. In those areas, a lot of it is still paper voting and the like. Some of it isn’t, but how you create the false narrative to get people to do things a nation-state wants them to do is the area that really concerns me.”

Companies must rise to the challenge

Companies run the risk of losing the AI war if they don’t keep pace with attackers’ rapid weaponization of AI for deepfake attacks and all other forms of adversarial AI. Deepfakes have become so common that the Department of Homeland Security has published a guide, Increasing Threats of Deepfake Identities.