
With the help of memes, social media users have become red teams for half-baked AI features


“Running with scissors is a cardio exercise that can increase your heart rate and requires concentration and focus,” says Google’s new AI search feature. “Some say it can also improve your pores and give you strength.”

Google’s AI feature pulled this answer from a website called Little Old Lady Comedy, which, as the name makes clear, is a comedy blog. But the blunder is so ridiculous that it’s circulating on social media, along with other patently incorrect AI overviews on Google. In effect, everyday users are now red teaming these products on social media.

In cybersecurity, some companies will hire “red teams” – ethical hackers – who attempt to breach their products as if they were bad actors. If a red team finds a vulnerability, the company can fix it before the product ships. Google certainly did some red teaming before releasing an AI product on Google Search, which is estimated to process trillions of queries per day.

It’s surprising, then, that a well-resourced company like Google still ships products with obvious flaws. That’s why it has become a meme to clown on the failures of AI products, especially at a time when AI is becoming increasingly ubiquitous. We’ve seen this with bad spelling on ChatGPT, video generators that can’t understand how people eat spaghetti, and Grok AI news summaries on X that, like Google, don’t understand satire. But these memes could actually serve as useful feedback for companies developing and testing AI.

Despite the glaring nature of these shortcomings, technology companies often downplay their impact.

“The examples we saw are generally very unusual questions and are not representative of most people’s experiences,” Google told JS in an emailed statement. “We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall.”

Not all users see the same AI results, and by the time a particularly bad AI suggestion makes the rounds, the problem has often already been fixed. In a more recent case that went viral, Google suggested that if the cheese isn’t sticking to your pizza, you could add about an eighth of a cup of glue to the sauce to make it tackier. It turned out the AI pulled this answer from an eleven-year-old Reddit comment from a user named “f––smith.”

Not only is it an incredible blunder, it also suggests that AI content deals may be overvalued. Google has a $60 million contract with Reddit to license its content for AI model training, for example. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.org and Tumblr are rumored to be in talks to sell data to Midjourney and OpenAI.

To Google’s credit, many of the errors circulating on social media come from unconventional searches designed to trip up the AI. At least I hope no one is seriously searching for the “health benefits of running with scissors.” But some of these errors are more serious. Science journalist Erin Ross posted on X that Google was spewing out incorrect information about what to do if you get bitten by a rattlesnake.

Ross’s post, which received more than 13,000 likes, shows that the AI recommended applying a tourniquet to the wound, cutting the wound, and sucking out the venom. According to the US Forest Service, these are all things you should not do if you get bitten. Meanwhile, on Bluesky, author T Kingfisher amplified a post showing Google’s Gemini misidentifying a poisonous mushroom as a regular white mushroom – screenshots of the post have spread to other platforms as a cautionary tale.

When a bad AI response goes viral, the AI can become even more confused by the new content around the topic that arises as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X that shows a query asking whether a dog has ever played in the NHL. The AI’s answer was yes: for some reason, the AI called Calgary Flames player Martin Pospisil a dog. Now when you ask the same question, the AI pulls up an article from the Daily Dot about how Google’s AI keeps thinking dogs are playing sports. The AI is being fed its own mistakes, poisoning it even further.

This is the inherent problem with training these large-scale AI models on the Internet: sometimes people on the Internet lie. But just as there’s no rule against a dog playing basketball, there’s unfortunately no rule against big tech companies shipping bad AI products.

As the saying goes: garbage in, garbage out.