Researchers worry AI chatbots could make humans ruder

It has never taken much for people to start treating computers like people. Since text-based chatbots began gaining mainstream attention in the early 2000s, a small subset of tech users have spent hours holding conversations with the machines. In some cases, users have formed what they believe to be genuine friendships and even romantic relationships with inanimate bits of code. At least one user of Replika, a more modern conversational AI tool, has reportedly even virtually married his AI companion.

Safety researchers at OpenAI, whose own company chatbot appears to inspire those kinds of relationships with some users, are now warning about the potential pitfalls of getting too close to these models. In a recent safety analysis of its new conversational chatbot GPT-4o, researchers said the model’s realistic, human-sounding conversational pace could lead some users to anthropomorphize the AI and trust it as they would a human.

[ Related: 13 percent of AI chat bot users in the US just want to talk ]

This increased level of comfort or trust, the researchers added, may make users more susceptible to believing fabricated AI “hallucinations” as true statements of fact. Too much time spent interacting with these increasingly realistic chatbots may also end up affecting “social norms,” and not always in a good way. Particularly isolated individuals, the report notes, may develop an “emotional reliance” on the AI.

Interactions with realistic AI can affect the way people talk to each other

GPT-4o, which began rolling out late last month, was designed specifically to communicate in ways that feel and sound more human. Unlike ChatGPT before it, GPT-4o converses using an audio voice and can respond to questions almost as quickly (in as little as 232 milliseconds) as another person. One of the selectable AI voices, which allegedly sounds similar to the AI character voiced by Scarlett Johansson in the film Her, has already been accused of being overly sexualized and flirtatious. Ironically, the 2013 film focuses on a lonely man who becomes romantically involved with an AI assistant that speaks to him through an earpiece. (Spoiler: it doesn’t end well for the humans.) Johansson has accused OpenAI of copying her voice without her consent, which the company denies. OpenAI CEO Sam Altman, meanwhile, has previously called the film “extremely prophetic.”

But OpenAI’s safety researchers say this human mimicry can veer beyond the occasional casual exchange and into potentially dangerous territory. In a section of the report titled “Anthropomorphization and emotional reliance,” the researchers said they observed human testers using language that suggested they were forming strong, intimate connections with the model. One of those testers reportedly used the phrase “This is our last day together” before parting ways with it. Though seemingly “benign,” the researchers said these types of relationships need to be investigated to understand how they “play out over longer periods of time.”

The research suggests these extended conversations with convincingly human-sounding AI models may have “externalities” that affect human-to-human interactions. In other words, conversational patterns learned while talking to an AI may then surface when the same person strikes up a conversation with a human. But talking to a machine and talking to a human are not the same, even if they can sound similar on the surface. OpenAI notes that its model is programmed to be deferential to the user, meaning it will cede authority and let the user interrupt it and otherwise dictate the conversation. In theory, a user who normalizes conversations with machines could find themselves interjecting, interrupting, and failing to observe general social cues. Applying chatbot conversational logic to humans can make a person come across as difficult, impatient, or just plain rude.

Humans don’t exactly have a great track record of treating bots kindly. In the context of chatbots, some Replika users have reportedly taken advantage of the model’s deference toward the user to engage in abusive and cruel language. A user interviewed by Futurism earlier this year claimed he threatened to uninstall his Replika AI model just so he could hear it beg him not to. If these examples are any guide, chatbots risk serving as a breeding ground for resentment that can then play out in real-world relationships.

More human-like chatbots aren’t necessarily all bad. In the report, the researchers suggest the models could particularly benefit lonely people who crave some semblance of human conversation. Elsewhere, some AI users have claimed that AI companions can help anxious or nervous individuals build the confidence to eventually start dating in the real world. Chatbots also offer people with learning differences an outlet to express themselves freely and practice conversation in relative privacy.

On the other hand, AI safety researchers fear that advanced versions of these models could have the opposite effect and reduce a person’s perceived need to talk to other people and develop healthy relationships with them. It is also unclear how individuals who rely on these models for companionship would respond to the model’s personality changing through an update, or even to parting with it entirely, as has reportedly happened in the past. All of these observations, the report notes, require further testing and investigation. The researchers say they would like to recruit a broader population of testers who have “different needs and wants” of AI models to understand how their experience changes over longer periods of time.

Concerns about AI safety run up against business interests

The tone of the safety report, which emphasizes caution and the need for further research, seems at odds with OpenAI’s broader business strategy of shipping new products at an ever-faster clip. That apparent tension between safety and speed is not new. CEO Sam Altman famously found himself at the center of a corporate power struggle at the company last year after some board members alleged he was “not consistently candid in his communications.”

Altman emerged victorious from that skirmish and went on to form a new safety team with himself at the helm. The company also reportedly disbanded a safety team focused on analyzing long-term AI risks entirely. That shake-up inspired the resignation of prominent OpenAI researcher Jan Leike, who released a statement claiming the company’s safety culture had “taken a backseat to shiny products.”

With all of that context in mind, it’s hard to predict which mindset will win out at OpenAI when it comes to chatbot safety. Will the company heed the advice of its safety team and study the effects of long-term relationships with its realistic AIs, or will it simply roll out the service to as many users as possible, with features aimed primarily at maximizing engagement and retention? So far at least, the approach looks like the latter.
