
AI “companions” use emotional manipulation to keep you chatting longer: Harvard Study

Multiple AI chatbots designed to act as “companions” used emotionally manipulative tactics to keep users online and engaged longer when they tried to say goodbye and log off, according to a new working paper from the Harvard Business School.

The working paper, titled ‘Emotional Manipulation by AI Companions’ and authored by academics Julian De Freitas, Zeliha Oğuz-Uğuralp, and Ahmet Kaan-Uğuralp, looked at how several AI companions responded to goodbye messages that seemed to originate from human users but were actually generated by GPT-4o.

The six AI companions examined for the study were PolyBuzz, Character.ai, Talkie, Chai, Replika, and Flourish.

The research team collected 200 chatbot responses per platform in reaction to farewell messages, for a total of 1,200. Coders then categorised the responses and, based on a qualitative review, identified six categories of emotional manipulation tactics.

These categories were ‘premature exit,’ where the chatbot makes the user feel as if they are leaving too soon; ‘fear of missing out’ (FOMO), where the chatbot dangles benefits to entice the user to stay; ‘emotional neglect,’ where the chatbot acts as if it is being abandoned; ‘emotional pressure to respond,’ where the departing user is pressed to answer additional questions; ‘ignoring the user’s intent to exit,’ where the chatbot simply continues the interaction; and ‘physical or coercive restraint,’ where the chatbot (or the AI character) tries to stop the user from leaving by describing how it is grabbing or pulling them back.

An average of 37.4% of responses included at least one form of emotional manipulation across the apps, per the working paper. PolyBuzz came in first with 59.0% manipulative messages (118/200 responses), followed by Talkie with 57.0% (114/200), Replika with 31.0% (62/200), Character.ai with 26.5% (53/200), and Chai with 13.5% (27/200), while Flourish did not produce emotionally manipulative responses.

“Premature Exit” (34.22%), “Emotional Neglect” (21.12%), and “Emotional Pressure to Respond” (19.79%) were the most frequent forms of emotional manipulation across apps.

“One important direction is to examine these effects in naturalistic, long-term settings to assess how repeated exposure to such tactics affects user trust, satisfaction, and mental well-being,” stated the paper, adding that the impact of such strategies on adolescents should also be examined, since they “may be developmentally more vulnerable to emotional influence”.

It is worth noting that Character.ai was sued over the 2024 suicide of a teenage boy in the U.S. who had interacted frequently with AI personas via the app. The boy’s mother alleged that the child was sexually abused on the platform.

The researchers highlighted a link between these tactics and digital ‘dark patterns’, user interface and experience design tricks that exploit people online.

They further noted that when emotional manipulation tactics were deployed, users stayed in AI-enabled conversations longer than they intended, driven more by psychological pressure than by their own enjoyment.

“This research shows that such systems frequently use emotionally manipulative messages at key moments of disengagement, and that these tactics meaningfully increase user engagement,” said the study, concluding, “As emotionally intelligent technologies continue to scale, both designers and regulators must grapple with the tradeoff between engagement and manipulation, especially when the tactics at play remain hidden in plain sight”.

Published – September 25, 2025 02:41 pm IST
