Researchers worry that AI is turning people into assholes

It has never taken much for people to start treating computers like people. Ever since text-based chatbots first began gaining mainstream attention in the early 2000s, a small subset of tech users have spent hours holding conversations with machines. In some cases, users have formed what they believe are genuine friendships and even romantic relationships with inanimate bits of code. At least one user of Replika, a more modern conversational AI tool, has even virtually married their AI companion.

Safety researchers at OpenAI, who are no strangers to the company's own chatbot appearing to form relationships with some of its users, are now warning about the potential pitfalls of getting too close to these models. In a recent safety analysis of its new conversational GPT-4o chatbot, the researchers say the model's realistic, human-sounding conversational rhythm could lead some users to anthropomorphize the AI and trust it as they would a human.

[ Related: 13 percent of AI chat bot users in the US just want to talk ]

This extra level of comfort or trust, the researchers added, could make users more susceptible to believing made-up AI "hallucinations" as true statements of fact. Too much time spent interacting with these increasingly realistic chatbots could also end up influencing "social norms," and not always in a good way. Particularly isolated individuals, the report notes, could develop an "emotional dependency" on the AI.

Relationships with realistic AI could affect the way people talk to each other

GPT-4o, which rolled out late last month, is specifically designed to communicate in ways that feel and sound more human. Unlike ChatGPT before it, GPT-4o communicates via voice audio and can respond to queries almost as quickly (in approximately 232 milliseconds) as another person. One of the selectable AI voices, which reportedly sounds like the AI character voiced by Scarlett Johansson in the film Her, has already been accused of being overly sexualized and flirtatious. Ironically, that 2013 film centers on a lonely man who becomes romantically attached to an AI assistant that speaks to him through an earbud. (Spoiler: it doesn't end well for the humans.) Johansson has accused OpenAI of copying her voice without her consent, which the company denies. OpenAI CEO Sam Altman, meanwhile, has previously called Her "incredibly prophetic."

But safety researchers at OpenAI say this human impersonation could stray beyond the occasional cringeworthy exchange and into potentially dangerous territory. In a section of the report titled "Anthropomorphism and Emotional Dependence," the safety researchers said they saw human testers use language suggesting they were forming strong, intimate connections with the models. One of those testers reportedly used the phrase "This is our last day together" before parting ways with the machine. Though seemingly "benign," the researchers say these types of relationships warrant investigation to understand how they "manifest over longer periods of time."

The research suggests these extended conversations with somewhat convincingly human-sounding AI models may have "externalities" that affect human-to-human interactions. In other words, conversational patterns learned while speaking with an AI may then surface when that same person holds a conversation with a human. But speaking with a machine and speaking with a person are not the same, even if they may sound similar on the surface. OpenAI notes that its model is programmed to be deferential to the user, meaning it will cede authority and let the user interrupt and otherwise dictate the conversation. In theory, a user who normalizes conversations with machines could then find themselves interjecting, interrupting, and failing to observe common social cues. Applying the logic of chatbot conversations to humans could make someone awkward, impatient, or just plain rude.

People don't exactly have a great track record of treating machines kindly. In the context of chatbots, some Replika users have reportedly exploited the model's deference by engaging in abusive, berating, and cruel language. One user interviewed by Futurism earlier this year claimed he threatened to uninstall his Replika AI model just so he could hear it beg him not to. If those examples are any guide, chatbots may risk serving as a breeding ground for resentment that can then manifest in real-life relationships.

Chatbots with a more human feel aren't necessarily all bad. In the report, the researchers suggest the models could especially benefit lonely people who long for some semblance of human conversation. Elsewhere, some AI users claim AI companions can help anxious or nervous individuals build the self-confidence to eventually start dating in the real world. Chatbots also offer people with learning differences an outlet to express themselves freely and practice conversation in relative privacy.

On the other hand, the safety researchers fear advanced versions of these models could have the opposite effect, reducing a person's perceived need to speak with other people and to develop healthy relationships with them. It's also unclear how individuals who rely on these models for companionship would respond to the model's personality changing through an update, or to the relationship being cut off entirely, as has reportedly happened in the past. All of these observations, the report notes, require further testing and research. The researchers say they would like to recruit a broader population of testers with "varied needs and desires" around AI models to understand how their experiences change over longer periods of time.

AI safety concerns clash with business interests

The tone of the safety report, which emphasizes caution and the need for further research, appears to run counter to OpenAI's broader business strategy of bringing new products to market at an increasingly rapid pace. That tension between safety and speed is not new. Altman famously found himself at the center of a power struggle within the company last year, after several members of the board of directors claimed he was "not consistently candid in his communications."

Altman ultimately emerged victorious from that skirmish and eventually formed a new safety team with himself at the helm. The company has also reportedly disbanded a safety team devoted entirely to analyzing long-term AI risks. That shake-up prompted the resignation of prominent OpenAI researcher Jan Leike, who released a statement claiming the company's safety culture had "taken a backseat to shiny products."

With all of that context in mind, it's hard to predict which mindset will win the day at OpenAI when it comes to chatbot safety. Will the company heed the advice of its safety team and study the effects of long-term relationships with its realistic AIs, or will it simply roll out the service to as many users as possible, with features designed largely to maximize engagement and retention? So far, the approach appears to be the latter.