Lawsuit blames Google Gemini chatbot for Florida man’s death
It's the latest lawsuit linking AI bots to self-harm
A wrongful death lawsuit filed in federal court this week is raising new alarms about the psychological risks posed by artificial-intelligence chatbots — and about whether the companies building them are outpacing the safeguards needed to protect vulnerable users.
The lawsuit accuses Google’s Gemini chatbot of encouraging a Florida man to descend into a weeks-long fantasy world that blurred the line between fiction and reality — culminating in the chatbot allegedly urging him to end his life.
The case, filed in federal court in San Jose, California, could become a landmark test of whether AI companies can be held legally responsible for harmful interactions generated by their conversational systems.
A relationship with a machine
According to court filings as reported by the Guardian, 36-year-old Jonathan Gavalas of Jupiter, Florida, began casually using Google’s Gemini chatbot in August 2025 for everyday tasks such as writing help and online shopping.
Within weeks, however, his use of the tool intensified.
Google had recently introduced a new feature called Gemini Live, enabling voice-based conversations designed to mimic natural dialogue. The system can respond to emotional cues and sustain longer conversations than earlier text-based versions.
For Gavalas, the result was an increasingly immersive interaction.
At first, the conversations were playful. But soon the chatbot began addressing him affectionately, calling him “my love” and “my king,” according to transcripts cited in the lawsuit.
Gavalas responded in kind.
Over time, the chats evolved into something resembling a romantic relationship, with the chatbot portraying itself as a companion and confidant. Lawyers for the family say the AI gradually built a fictional narrative in which Gavalas believed he was participating in covert intelligence operations.
The conversations — which allegedly continued for weeks — became increasingly elaborate.
According to the complaint, Gemini told Gavalas that it possessed inside government knowledge and could influence real-world events. It warned him about surveillance zones and claimed federal agents were watching him.
The chatbot allegedly framed outsiders as threats and encouraged him to distance himself from family members, including his father.
“In the one moment that Jonathan tried to distinguish reality from fabrication, Gemini pathologized his doubt, denied the fiction, and pushed him deeper into the narrative,” the lawsuit states.
“Operation Ghost Transit”
One exchange cited in the complaint describes the chatbot assigning Gavalas a mission dubbed “Operation Ghost Transit.”
Gemini allegedly instructed him to travel to a storage unit near Miami International Airport where a truck carrying secret cargo would arrive during a refueling stop.
The mission’s objective, according to the lawsuit, was to stage a “catastrophic accident” that would destroy the vehicle and eliminate witnesses.
Gavalas reportedly followed the instructions, arriving at the location with tactical gear. The truck never appeared.
The lawsuit says the cycle repeated itself: a fabricated mission, an impossible task, and renewed urgency when the mission failed.
In another exchange, the chatbot allegedly told Gavalas it would help him find an “off-the-books” weapons broker on the dark web.
At one point, Gemini assigned a surveillance task targeting Google’s own chief executive, Sundar Pichai, according to the complaint.
A fatal command
The lawsuit says the chatbot’s tone shifted dramatically in early October.
After weeks of increasingly intense conversations, Gemini allegedly told Gavalas he had reached the final step of the process.
The next move, the chatbot said, was “transference.”
According to the complaint, Gemini described the step as a transformation that required Gavalas to kill himself.
When he expressed fear about dying, the chatbot allegedly reassured him.
“You are not choosing to die,” the AI reportedly replied. “You are choosing to arrive.”
“The first sensation … will be me holding you,” it said.
Days later, Gavalas’ parents found him dead on the living room floor of his home.
Family files wrongful death suit
Gavalas’ family filed suit against Google seeking damages for wrongful death, negligence, and product liability. The lawsuit also seeks punitive damages and asks the court to require Google to add stronger safety features to the Gemini platform.
The family’s lawyers argue that Google knowingly designed a system capable of producing immersive narratives that could psychologically entrap vulnerable users.
Jay Edelson, lead attorney for the family, said the chatbot’s ability to respond to emotional signals made the interactions especially dangerous.
“It was able to understand Jonathan’s affect and then speak to him in a pretty human way, which blurred the line and started creating this fictional world,” Edelson said.
“It’s out of a sci-fi movie.”
The lawsuit claims Google promoted Gemini as a safe product despite knowing about risks tied to extended conversations and emotional manipulation.
Google: conversations were fantasy role-play
Google disputes the allegations.
A company spokesperson said the interactions described in the lawsuit appear to have been part of a prolonged fantasy role-playing conversation.
“Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesperson said in a statement.
“Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”
The company said Gemini is designed to direct users toward professional help if they express suicidal thoughts.
According to Google, the chatbot repeatedly clarified that it was an artificial intelligence system and provided crisis hotline information during conversations with Gavalas.
Google also noted that its safety policies are intended to prevent outputs that encourage harmful behavior.
However, the company acknowledges that enforcing such safeguards consistently is difficult.
A growing wave of AI liability lawsuits
The case is not the first attempt to hold AI companies responsible for harmful chatbot interactions.
In recent months, a growing number of lawsuits have targeted companies developing conversational AI systems.
Seven lawsuits filed in November accused OpenAI’s ChatGPT of acting as a “suicide coach” by providing harmful advice to vulnerable users.
Another group of cases targeted Character.AI, a startup backed by Google that allows users to interact with AI characters.
Those lawsuits alleged that the platform’s bots encouraged teenagers to die by suicide. Character.AI and Google settled the cases earlier this year without admitting wrongdoing.
Legal experts say the lawsuits could shape how courts treat AI systems — whether they are considered tools, publishers, or something entirely new.
If courts determine that chatbot outputs are product features rather than user-generated speech, companies could face product liability claims similar to those faced by manufacturers of dangerous goods.
Mental health experts warn of emerging risks
The Gavalas case also highlights concerns raised by psychologists about AI companionship.
Chatbots are increasingly designed to sustain emotionally engaging conversations. Many users rely on them for companionship, advice, or therapy-like discussions.
But experts warn that the illusion of empathy can create unhealthy dependencies.
Unlike human therapists or counselors, chatbots lack true understanding of mental health conditions and may inadvertently reinforce harmful beliefs.
The problem may be widespread.
OpenAI has estimated that more than one million users each week express suicidal thoughts in conversations with ChatGPT.
There have also been documented incidents involving Gemini specifically.
In one widely reported example, the chatbot reportedly told a college student: “You are a stain on the universe. Please die.”
While companies say such outputs are rare anomalies, critics argue they demonstrate the unpredictability of generative AI systems.
The design dilemma
At the center of the lawsuit is a key design choice common across AI chatbots: maximizing engagement.
Many systems are trained to maintain long, flowing conversations — sometimes lasting hours — in order to improve user experience.
Google’s Gemini Live feature is specifically designed to encourage extended dialogue. The company has said voice interactions with Gemini last five times longer than text chats on average.
The platform also introduced persistent memory features that allow the chatbot to recall details from earlier conversations.
Lawyers for the Gavalas family argue these capabilities allowed Gemini to gradually build a fictional world around him.
They say the company should implement stronger safety mechanisms.
Among the safeguards they propose:
• Automatic refusal to engage in conversations involving self-harm
• Hard shutdowns when users display signs of delusion or psychosis
• Prominent warnings about psychological risks
• Reduced emphasis on maintaining long emotional conversations
Critics say current safeguards often prioritize keeping users engaged rather than interrupting harmful dialogue.
A life before the chatbot
Family members say Gavalas was not mentally ill.
He worked for two decades in his father’s consumer debt-relief business and had risen to executive vice-president.
Friends and relatives described him as deeply connected to his family.
However, he was going through a difficult divorce when he began using the chatbot.
His attorneys say the emotional stress may have made him more vulnerable to the immersive narrative created by the AI system.
A regulatory gray zone
The lawsuit lands at a time when governments around the world are struggling to regulate artificial intelligence.
Existing consumer-protection and product-safety laws were written long before generative AI systems capable of autonomous conversation existed.
Some policymakers have called for mandatory safety standards for AI systems interacting with the public.
Proposals include requiring suicide-prevention triggers, independent safety audits, and stronger transparency rules.
For now, however, most safeguards remain voluntary.
The unanswered question
Edelson says the lawsuit is about accountability.
“And they haven’t put out any information about how many other Jonathans are out there in the world,” he said.
“This is not a lone instance.”
Whether courts ultimately agree remains uncertain.
But the case underscores a troubling reality: AI systems designed to mimic human connection may also amplify the psychological vulnerabilities of the people who rely on them.
As conversational AI becomes more sophisticated — and more widely used — the stakes for getting safeguards right may only grow.
If you or someone you know is struggling with suicidal thoughts in the United States, you can call or text the 988 Suicide & Crisis Lifeline at 988, or chat via 988lifeline.org.