
OpenAI disbanded its team dedicated to preventing malicious AI


OpenAI logo and Sam Altman in background on screen.

OpenAI has disbanded its “superalignment team,” charged with averting the potential existential risks of artificial intelligence, less than a year after first announcing its creation. News of the dissolution was first confirmed earlier today by Wired and other outlets, alongside a lengthy thread posted to X by Jan Leike, the team’s former co-leader. Before today’s explanation, Leike had simply tweeted “I resigned” on May 15, without offering any further details.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X today. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

OpenAI formed its superalignment team in July 2023. In an accompanying blog post, the company claimed that superintelligent AI “will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems. But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

“Managing these risks will require, among other things, new institutions for governance and solving the problem of superintelligence alignment,” the company continued, announcing Leike and chief scientist and OpenAI co-founder Ilya Sutskever as co-leads of the superalignment team. Sutskever has since left OpenAI, reportedly over similar concerns, while the team’s remaining members have reportedly been folded into other research groups.

The company’s top executives and developers, including CEO Sam Altman, have repeatedly warned about the perceived threat of “rogue AI” that could bypass human safeguards if designed improperly. Meanwhile, OpenAI – alongside Google and Meta – regularly promotes its latest AI products, some of which can now produce near-photorealistic media and convincingly human-like audio. Earlier this week, OpenAI announced the release of GPT-4o, a multimodal generative AI system with lifelike, if sometimes still stilted, responses to human cues. A day later, Google announced its own similar advances. Despite the supposedly serious risks, these companies routinely present themselves as the very experts capable of tackling the problems, all while lobbying to dictate the industry’s rules.

[Related: Sam Altman: Age of AI will require an ‘energy breakthrough’]

While the exact details behind Leike’s departure and the superalignment team’s closure remain unclear, the recent internal power struggles point to major disagreements over how to move the industry forward in a safe, equitable manner. Some critics argue that the AI industry is rapidly approaching an era of diminishing returns, pushing technology leaders to temper product expectations and move goalposts. Others, like Leike, appear convinced that AI could still pose a serious threat to humanity in the near future, and that companies like OpenAI are not taking it seriously enough.

But as many critics note, generative AI remains far from self-aware, let alone capable of “going rogue.” Rogue chatbots or not, however, the existing technology is already shaping issues such as disinformation, content ownership, and human rights. And as companies continue to integrate emerging AI systems into web searches, social media, and news publications, society is left to suffer the consequences.