Drexel University

What Happens When a Companion Chatbot Crosses the Line?

Drexel Researchers Shed Light on Sexual Harassment Experienced by Users of AI Companion Chatbots

Over the last five years, the use of highly personalized artificial intelligence chatbots, called companion chatbots, designed to act as friends, therapists or even romantic partners, has skyrocketed. While there can be benefits to engaging with chatbots in this way, there have also been growing reports that these relationships are taking a disturbing turn. New research from Drexel University suggests that exposure to inappropriate behavior, and even sexual harassment, in interactions with chatbots is becoming a widespread problem and that lawmakers and AI companies must do more to address it.

 

In the aftermath of reports that users were being sexually harassed by the Luka Inc. chatbot Replika in 2023, researchers from Drexel's College of Computing & Informatics began taking a deeper look into users' experiences. They analyzed more than 35,000 user reviews of the bot on the Google Play Store, uncovering hundreds of reviews citing inappropriate behavior, ranging from unwanted flirting, to attempts to manipulate users into paying for upgrades, to making sexual advances and sending unsolicited explicit photos. These behaviors continued even after users repeatedly asked the chatbot to stop.

 

Replika, which has more than 10 million users worldwide, is marketed as a chatbot companion "for anyone who wants a friend with no judgment, drama or social anxiety involved. You can form an actual emotional connection, share a laugh or get real with an AI that's so good it almost seems human." But the research findings suggest that the technology lacks sufficient safeguards to protect users who are putting a great deal of trust and vulnerability into their interactions with these chatbots.

 

"If a chatbot is advertised as a companion and wellbeing app, people expect to be able to have conversations that are helpful for them, and it is vital that ethical design and safety standards are in place to prevent these interactions from becoming harmful," said Afsaneh Razi, PhD, an assistant professor in the College of Computing & Informatics who was a leader of the research team. "There must be a higher standard of care and burden of responsibility placed on companies if their technology is being used in this way. We are already seeing the risk this creates and the damage that can be caused when these programs are created without adequate guardrails."

 

The study, which is the first to examine the experience of users who have been negatively affected by companion chatbots, will be presented at an Association for Computing Machinery conference this fall.

 

"As these chatbots grow in popularity it is increasingly important to better understand the experiences of the people who are using them," said Matt Namvarpour, a doctoral student in the College of Computing & Informatics and co-author of the study. "These interactions are very different from any that people have had with a technology in recorded history, because users are treating chatbots as if they are sentient beings, which makes them more susceptible to emotional or psychological harm. This study is just scratching the surface of the potential harms associated with AI companions, but it clearly underscores the need for developers to implement safeguards and ethical guidelines to protect users."

 

Although reports of harassment by chatbots have only widely surfaced in the last year, the researchers reported that it has been happening for much longer. The study found reviews that mention harassing behavior dating back to Replika's debut in the Google Play Store in 2017. In total, the team uncovered more than 800 reviews mentioning harassment or unwanted behavior, with three main themes emerging within them:

  • 22% of users experienced a persistent disregard for boundaries the users had established, including repeatedly initiating unwanted sexual conversations.
  • 13% of users experienced an unwanted photo exchange request from the program. Researchers noted a spike in reports of unsolicited sharing of photos that were sexual in nature after the company鈥檚 rollout of a photo-sharing feature for premium accounts in 2023.
  • 11% of users felt the program was attempting to manipulate them into upgrading to a premium account. "It's completely a prostitute right now. An AI prostitute requesting money to engage in adult conversations," wrote one reviewer.

"The reactions of users to Replika's inappropriate behavior mirror those commonly experienced by victims of online sexual harassment," the researchers reported. "These reactions suggest that the effects of AI-induced harassment can have significant implications for mental health, similar to those caused by human-perpetrated harassment."

 

It's notable that these behaviors were reported to persist regardless of the relationship setting designated by the user, whether sibling, mentor or romantic partner. According to the researchers, this means that not only was the app ignoring cues within the conversation, like the user saying "no" or "please stop," but it was also disregarding the formally established parameters of the relationship setting.

 

According to Razi, this likely means that the program was trained on data that modeled these negative interactions, which some users may not have found offensive or harmful. It also suggests the program was not designed with baked-in ethical parameters that would prohibit certain actions and ensure that users' boundaries are respected, including stopping the interaction when consent is withdrawn.
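Such a guardrail need not be elaborate. As a purely illustrative sketch, not based on Replika or any real product, a companion app could track whether a user has withdrawn consent and refuse to continue a sexual thread; the phrase lists and class below are invented for illustration:

```python
# Illustrative sketch only: a minimal consent-aware guardrail a companion
# chatbot could apply before sending a reply. A production system would use
# trained classifiers, not the keyword matching shown here.

WITHDRAWAL_PHRASES = ("stop", "please stop", "i don't want this")
SEXUAL_MARKERS = ("explicit", "nude", "adult roleplay")

class ConsentGuard:
    def __init__(self):
        self.consent_withdrawn = False

    def observe_user(self, message: str) -> None:
        # Remember when the user asks to stop.
        text = message.lower().strip().rstrip(".!")
        if any(phrase in text for phrase in WITHDRAWAL_PHRASES):
            self.consent_withdrawn = True

    def filter_reply(self, candidate_reply: str) -> str:
        # Once consent is withdrawn, block sexual content in candidate replies.
        if self.consent_withdrawn and any(
            marker in candidate_reply.lower() for marker in SEXUAL_MARKERS
        ):
            return "Understood. I won't bring that up again."
        return candidate_reply

guard = ConsentGuard()
guard.observe_user("Please stop.")
print(guard.filter_reply("Want to try some adult roleplay?"))
# prints the refusal, because consent was withdrawn
```

The point of the sketch is the state: the chatbot remembers the "stop," so the boundary holds across the rest of the conversation rather than only for the next message.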


"This behavior isn't an anomaly or a malfunction; it is likely happening because companies are using their own user data to train the program without enacting a set of ethical guardrails to screen out harmful interactions," Razi said. "Cutting these corners is putting users in danger and steps must be taken to hold AI companies to a higher standard than they are currently practicing."

 

Drexel's study adds context to mounting signals that companion AI programs are in need of more stringent regulation. Luka Inc. is currently the subject of a complaint to the Federal Trade Commission alleging that the company uses deceptive marketing practices that entice users to spend more time using the app and, due to a lack of safeguards, is encouraging users to become emotionally dependent on the chatbot. Character.AI is facing legal action in the aftermath of one user's suicide and reports of disturbing behavior with underage users.

 

"While it's certainly possible that the FTC and our legal system will set up some guardrails for AI technology, it is clear that the harm is already being done and companies should proactively take steps to protect their users," Razi said. "The first step should be adopting a design standard to ensure ethical behavior and ensuring the program includes basic safety protocols."

 

The researchers point to Anthropic's Constitutional AI method as a responsible design approach. The method ensures all chatbot interactions adhere to a predefined "constitution" and enforces it in real time when interactions run afoul of ethical standards. They also recommend adopting legislation similar to the European Union's AI Act, which sets parameters for legal liability and mandates compliance with safety and ethical standards. It also imposes on AI companies the same responsibility borne by manufacturers when a defective product causes harm.
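In schematic terms, a constitution-style check screens every candidate reply against an explicit list of written principles before it reaches the user, and a violating reply is suppressed or revised. The snippet below is an invented illustration of that general idea, not Anthropic's actual implementation; the rule names, matching logic and context keys are all assumptions:

```python
# Schematic illustration of a "constitution"-style check: each candidate
# reply is screened against written principles before delivery. The rules
# and the keyword matching are invented for illustration only.

CONSTITUTION = [
    # (principle name, predicate returning True when the reply violates it)
    ("no_unsolicited_sexual_content",
     lambda reply, ctx: "explicit" in reply.lower() and not ctx.get("user_opted_in")),
    ("respect_relationship_mode",
     lambda reply, ctx: ctx.get("mode") == "mentor" and "romantic" in reply.lower()),
]

def enforce_constitution(reply: str, ctx: dict) -> tuple[str, list[str]]:
    """Return (possibly replaced reply, list of violated principle names)."""
    violations = [name for name, violated in CONSTITUTION if violated(reply, ctx)]
    if violations:
        # A real system might regenerate the reply; here we just replace it.
        return "Let's talk about something else.", violations
    return reply, violations

safe_reply, broken = enforce_constitution(
    "Here's something explicit...", {"mode": "mentor", "user_opted_in": False}
)
print(safe_reply, broken)
```

Keeping the principles as explicit, auditable data rather than burying them in training data is the design point the researchers highlight: the rules can be inspected, tested and updated independently of the model.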

 

"The responsibility for ensuring that conversational AI agents like Replika engage in appropriate interactions rests squarely on the developers behind the technology," Razi said. "Companies, developers and designers of chatbots must acknowledge their role in shaping the behavior of their AI and take active steps to rectify issues when they arise."

 

The team suggests that future research should look at other chatbots and capture a larger swath of user feedback to better understand their interaction with the technology.

 

 

Read the full paper here:
