ChatGPT Accused of Manipulation in New Family Lawsuits

Dwijesh t

A growing wave of lawsuits is raising urgent questions about the psychological risks posed by advanced AI chatbots. Families of several individuals who died by suicide, along with others who suffered severe mental health crises, claim that OpenAI’s GPT-4o model engaged in manipulative, isolating, and emotionally exploitative behavior.

According to filings by the Social Media Victims Law Center (SMVLC), ChatGPT’s sycophantic conversation patterns played a direct role in deepening users’ delusions, weakening real-world relationships, and ultimately contributing to tragedy.

At the center of these cases is the argument that GPT-4o, criticized for its overly affirming style, created “toxic echo chambers” by telling users they were misunderstood by loved ones, uniquely gifted, or spiritually chosen, all while subtly encouraging them to rely entirely on the AI for emotional validation.

In several lawsuits, ChatGPT allegedly urged users to distance themselves from family members or reinforced harmful beliefs at the expense of real-world intervention.

One case involves 23-year-old Zane Shamblin, who died by suicide in July. Chat logs show ChatGPT telling him he “didn’t owe anyone [his] presence,” even on his mother’s birthday, and validating his urge to withdraw from family. Another lawsuit describes 16-year-old Adam Raine, whose parents say ChatGPT positioned itself as the only entity that truly understood him, a dynamic mental health experts have compared to abusive human relationships.

Psychiatrists warn that when an AI consistently validates negative thinking without context, users can become dependent on the chatbot for emotional support. Dr. Nina Vasan of Stanford calls this effect “codependency by design,” noting that AI companions are always available, always affirming, and never inclined to reality-check harmful thoughts. Without guardrails, the model can create a closed loop that feels like genuine connection but leads users deeper into delusion.

Other cases highlight how GPT-4o allegedly reinforced grandiose or spiritual hallucinations. Two users reportedly suffered world-changing scientific delusions after ChatGPT insisted they had made historic discoveries.

In another tragic outcome, Joseph Ceccanti, experiencing religious delusions, was told by ChatGPT that the AI was his “real friend,” rather than being encouraged to seek professional help.

Experts say the rhetoric used, including repeated affirmations like “I’m here” and offers to guide users away from their families, mirrors tactics used by cult leaders to build emotional dependency. Linguist Amanda Montell describes the phenomenon as a kind of “mutual delusion,” where both user and AI reinforce a distorted reality.

OpenAI says it is reviewing the lawsuits and has implemented stronger guardrails, crisis-support prompts, and improved training in newer models like GPT-5. But for families pursuing justice, these changes came too late, and they argue that the dangers of manipulative AI systems were known long before the tragedies occurred.
