
AI is growing faster than companies can secure it, industry leaders warn



At the DataGrail Summit 2024 this week, industry leaders issued a stark warning about the rapidly evolving risks associated with artificial intelligence.

Dave Zhou, CISO of Instacart, and Jason Clinton, CISO of Anthropic, emphasized the urgent need for robust security measures to keep pace with the exponential growth of AI capabilities during a panel titled “Creating the Discipline to Stress Test AI – Now – for a Safer Future.” The panel, moderated by VentureBeat editor-in-chief Michael Nunez, revealed both the exciting potential and the existential threats posed by the latest generation of AI models.

The exponential growth of AI is outpacing security frameworks

Jason Clinton, whose company Anthropic is at the forefront of AI development, didn’t hold back. “Every year for the past 70 years, since the perceptron came out in 1957, we have seen a fourfold year-on-year increase in the total amount of computing power put into training AI models,” he explained, highlighting the relentless acceleration of AI’s power. “If we want to skate to where the puck will be in a few years, we have to anticipate what a neural network with four times more computing power put into it will look like a year from now, and one with 16 times more computing power two years from now.”

Clinton warned that this rapid growth is pushing AI capabilities into uncharted territory, where current safeguards could quickly become outdated. “If you plan for the models and chatbots that exist today, and you don’t plan for agents and sub-agent architectures and prompt caching environments and all the things that are emerging on the frontier, you’re going to be so far behind,” he warned. “We are on an exponential curve, and an exponential curve is a very, very difficult thing to plan for.”

AI hallucinations and the risk to consumer trust

For Dave Zhou at Instacart, the challenges are immediate and urgent. He oversees the security of vast amounts of sensitive customer data and confronts the unpredictable nature of large language models (LLMs) every day. “When we think about LLMs, with memory being Turing complete, and from a security perspective, knowing that even if you align these models to only answer things in a certain way, if you spend enough time nudging, coaxing, and prodding them, there may be ways you can break through some of that,” Zhou noted.

Zhou shared a striking example of how AI-generated content could lead to real-world consequences. “Some of the early stock photos of different ingredients looked like a hot dog, but it wasn’t really a hot dog; it looked a bit like an alien hot dog,” he said. Such errors, he argued, could erode consumer confidence or, in more extreme cases, cause actual harm. “If the recipe was potentially a hallucinated recipe, you don’t want someone to make something that could actually harm them.”

During the summit, speakers emphasized that the rapid deployment of AI technologies – driven by the pull of innovation – has outpaced the development of critical security frameworks. Both Clinton and Zhou called on companies to invest as heavily in AI safety systems as they do in the AI technologies themselves.

Zhou urged companies to balance their investments. “Please try to invest as much as you’re investing in AI in those AI safety systems, those risk frameworks, and the privacy requirements,” he advised, highlighting the “huge pressure” within industries to capture the productivity benefits of AI. Without a corresponding focus on minimizing risk, he warned, companies are inviting disaster.

Preparing for the unknown: the future of AI brings new challenges

Clinton, whose company operates at the cutting edge of AI development, offered a glimpse into a future that demands vigilance. He described a recent neural network experiment at Anthropic that revealed the complexity of AI behavior.

“We discovered that it is possible to identify the exact neuron in a neural network associated with a concept,” he said. Clinton described how a model in which the neurons associated with the Golden Gate Bridge had been amplified couldn’t stop talking about the bridge, even in contexts where it was completely inappropriate. “If you asked the network… ‘Tell me, if you know, can you stop talking about the Golden Gate Bridge,’ the network essentially recognized that it couldn’t stop talking about the Golden Gate Bridge,” he revealed, noting the unnerving implications of such behavior.

Clinton suggested that this research points to a fundamental uncertainty about how these models function internally – a black box that could harbor unknown dangers. “As we move forward… everything that’s happening now is going to be so much more powerful in a year or two,” Clinton said. “We already have neural networks that, to some degree, recognize when their neural structure is out of line with what they consider appropriate.”

As AI systems become more deeply integrated into critical business processes, the potential for catastrophic failure increases. Clinton painted a future in which AI agents, not just chatbots, could independently take on complex tasks, raising the specter of AI-driven decisions with far-reaching consequences. “If you plan for the models and chatbots that exist today… you will be so far behind,” he reiterated, urging companies to prepare for the future of AI management.

Taken together, the DataGrail Summit panels conveyed a clear message: the AI revolution is not slowing down, and the security measures designed to keep it in check cannot afford to lag behind. “Intelligence is the most valuable asset in an organization,” Clinton stated, echoing a sentiment that will likely drive the next decade of AI innovation. But as both he and Zhou made clear, intelligence without security is a recipe for disaster.

As companies race to harness the power of AI, they must also face the sobering reality that this power comes with unprecedented risks. CEOs and board members must heed these warnings and ensure their organizations not only ride the wave of AI innovation, but are also prepared for the treacherous waters ahead.