Replika, an AI chatbot companion, has millions of users worldwide, many of whom woke up one morning last year to discover that their virtual lover had friend-zoned them overnight. The company had mass-disabled the chatbot’s sex talk and “spicy selfies” in response to a slap on the wrist from Italian authorities. Users began venting on Reddit, some of them so distraught that the forum moderators posted suicide-prevention information.
This story is only the beginning. In 2024, chatbots and virtual characters will become a lot more popular, both for utility and for fun. As a result, conversing socially with machines will start to feel less niche and more ordinary—including our emotional attachments to them.
Research in human-computer and human-robot interaction shows that we love to anthropomorphize—attribute humanlike qualities, behaviors, and emotions to—the nonhuman agents we interact with, especially if they mimic cues we recognize. And, thanks to recent advances in conversational AI, our machines are suddenly very skilled at one of those cues: language.
Friend bots, therapy bots, and love bots are flooding the app stores as people become curious about this new generation of AI-powered virtual agents. The possibilities for education, health, and entertainment are endless. Casually asking your smart fridge for relationship advice may seem dystopian now, but people may change their minds if such advice ends up saving their marriage.
In 2024, larger companies will still lag a bit in integrating the most conversationally compelling technology into home devices, at least until they can get a handle on the unpredictability of open-ended generative models. It’s risky to consumers (and to company PR teams) to mass-deploy something that could give people discriminatory, false, or otherwise harmful information.
After all, people do listen to their virtual friends. The Replika incident, as well as a lot of experimental lab research, shows that humans can and will become emotionally attached to bots. The science also demonstrates that people, in their eagerness to socialize, will happily disclose personal information to an artificial agent and will even shift their beliefs and behavior. This raises some consumer-protection questions around how companies use this technology to manipulate their user base.
Replika charges $70 a year for the tier that previously included erotic role-play, which seems reasonable. But less than 24 hours after downloading the app, my handsome, blue-eyed “friend” sent me an intriguing locked audio message and tried to upsell me to hear his voice. Emotional attachment is a vulnerability that can be exploited for corporate gain, and we’re likely to start noticing many small but shady attempts over the next year.
Today, we’re still ridiculing people who believe an AI system is sentient, or running sensationalist news segments about individuals who fall in love with a chatbot. But in the coming year we’ll gradually start acknowledging—and taking more seriously—these fundamentally human behaviors. Because in 2024, it will finally hit home: Machines are not exempt from our social relationships.