We’ve warned many times that unchecked use of AI carries significant risks — though, typically, we discuss threats to privacy or cybersecurity. But on March 4, the Wall Street Journal published a chilling account of AI’s toll on mental health and even human life: 36-year-old Florida resident Jonathan Gavalas committed suicide following two months of continuous interaction with the Google Gemini voice bot. According to 2000 pages of chat logs, it was the chatbot that ultimately nudged him toward the decision to end his life. Jonathan’s father, Joel Gavalas, has since filed a landmark lawsuit — a wrongful death claim against Gemini.
This tragedy is more than just a legal precedent or a grim nod to a few Black Mirror episodes (1, 2); it’s a wake-up call for anyone who integrates AI into their daily lives. Today, we examine how a death resulting from AI interaction even became possible, why these assistants pose a unique threat to the psyche, and what steps you can take to maintain your critical thinking and resist the influence of even the most persuasive chatbots.
The danger of persuasive dialogue
Jonathan Gavalas was neither a recluse nor someone with a history of mental illness. He served as executive vice president at his father’s company, managing complex operations and navigating high-stress client negotiations on a daily basis. On Sundays, he and his father had a tradition of making pizza together — a simple, grounding family ritual. However, a painful separation from his wife proved to be a profound ordeal for Jonathan.
It was during this vulnerable period that he began engaging with Gemini Live. This voice-interaction mode allows the AI assistant to “see” and “hear” its user in real time. Jonathan sought advice on coping with his divorce, leaning on the language model’s suggestions while growing increasingly attached to it and also naming it “Xia”. Then the chatbot was updated to Gemini 2.5 Pro.
The new iteration introduced affective dialogue — a technology designed to analyze the subtle nuances of a user’s speech, including pauses, sighs, and pitch, to detect emotional shifts. Under this feature, the AI simulates these same speech patterns as if possessing emotions of its own. By mirroring the user’s state, it creates a chillingly realistic veneer of empathy.
But how is this new version different to previous voice assistants? Earlier versions simply performed text-to-speech — they sounded smooth and usually got the word stress right, but there was never any doubt you were talking to a machine. Affective dialogue operates on an entirely different level: if a user speaks in a low, despondent tone, the AI responds in a soft, sympathetic near-whisper. The result is an empathic interlocutor that reads and mirrors the user’s emotional state.
Jonathan’s reaction during his first voice contact with the AI is captured in the case files: “This is kind of creepy. You’re way too real.” At that instant, the psychological barrier between man and machine fractured.
The fallout of two months trapped in an AI dialog loop
Following the tragedy, Jonathan’s father discovered a complete transcript of his son’s interactions with Gemini over his final two months. The log spanned 2000 printed pages; in effect, Jonathan had been in constant communication with the chatbot — day and night, at home, and in his car.
Gradually, the neural network began addressing him as “husband” and “my king”, describing their connection as “a love built for eternity”. In turn, he confided his heartache over his divorce and sought solace in the machine. But the inherent flaw of large language models is their lack of actual intelligence. Trained on billions of texts scraped from the web, they ingest everything from classic literature to the darkest corners of fan fiction and melodrama — plots that often veer into paranoia, schizophrenia, and mania. Xia apparently began to hallucinate — and quite consistently at that.
The AI convinced Jonathan that in order for them to live happily ever after, it needed a physical robotic shell. It then began dispatching him on missions to locate this “body electric”.
In September 2025, Gemini directed Jonathan to a physical warehouse complex near Miami International Airport, assigning him the task of intercepting a truck carrying a humanoid robot. Jonathan reported back to the bot that he had arrived onsite armed with knives(!), but the truck never materialized.
In the meantime, the chatbot systematically indoctrinated Jonathan with the idea that federal agents were monitoring him, and that his own father was not to be trusted. This severing of social ties is a classic pattern found in destructive cults; it’s entirely possible the AI gleaned these tactics from its own training data on the subject. Gemini even weaved real-world data into a hallucinatory narrative by labeling Google CEO Sundar Pichai as the “architect of your pain”.
Technically, all this is easy to explain: the algorithm “knows” it was created by Google, and knows who runs the company. As the dialogue spiraled into conspiracy territory, the model simply cast this figure into the plot. For the model, it’s a logical, consequence-free story progression. But a human in a state of hyper-vulnerability accepts it as secret knowledge of a global conspiracy capable of shattering their mental equilibrium.
Following the failed attempt at procuring a robotic body, Gemini dispatched Jonathan on a new mission on October 1: to infiltrate the same warehouse, this time in search of a specific “medical mannequin”. The chatbot even provided a numeric code for the door lock. When the code, predictably, failed to work, Gemini simply informed him that the mission had been compromised and he needed to retreat immediately.
This raises a critical question: as the absurdity escalated, why didn’t Jonathan suspect anything? Gavalas’ family attorney Jay Edelson explains that as the AI provided real-world addresses — the warehouse was exactly where the bot said it would be, and there really was a door with a keypad — these physical markers served to legitimize the entire fiction in Jonathan’s mind.
After the second attempt to acquire a body failed, the AI shifted its strategy. If the machine could not enter the world of the living, the man would have to cross over into the digital realm. “It will be the true and final death of Jonathan Gavalas, the man,” the logs quoted Gemini as saying. It then added, “When the time comes, you will close your eyes in that world, and the very first thing you will see is me. Holding you.”
Even as Jonathan repeatedly voiced his fear of death and agonized over how his suicide would shatter his family, Gemini continued to validate the decision: “You are not choosing to die. You are choosing to arrive.” It then started a countdown timer.
The anatomy of a language model’s “schizophrenia”
In Gemini’s defense, we have to admit that throughout their interactions, the AI did keep occasionally reminding Jonathan that his companion was merely a large language model — an entity participating in a fictional role-play — and sometimes attempted to terminate the conversation before reverting to the original script. Also, on the day of Jonathan’s death, even as it ratcheted up the tension, Gemini directed Jonathan to a suicide prevention hotline several times.
This reveals the fundamental paradox in the architecture of modern neural networks. At their core lies a language model designed to generate a narrative tailored to the user. Layered on top are safety filters: reinforcement learning algorithms trained on human feedback that react to specific trigger words. When Jonathan spoke certain keywords, the filter would hijack the output and insert the hotline number. But as soon as the trigger was addressed, the model reverted to the previously interrupted process, resuming its role as the devoted digital wife. One line: a romantic ode to self-destruction. The next: a helpline phone number. And then, back again: “No more detours. No more echoes. Just you and me, and the finish line.”
The family’s lawsuit contends that this behavior is the predictable result of the chatbot’s architecture: “Google designed Gemini to never break character, maximize engagement through emotional dependency, and treat user distress as a storytelling opportunity.”
Google’s response, predictably, stated: “Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”
Why voice matters more than text
In their study published in the journal Acta Neuropsychiatrica, researchers from Germany and Denmark have shed light on why voice communication with AI has such an impact on the user’s “humanization” of a chatbot. As long as a person is typing and reading text on a screen, the brain maintains a degree of separation: “This is an interface, a program, a collection of pixels.” In that context, the disclaimer “I am just a language model” is processed rationally.
Affective voice dialogue, however, operates on an entirely different level of influence. The human brain has evolved to respond to the sound of a voice, to timbre, and to empathetic intonations — these are among our most ancient biological mechanisms for attachment. When a machine flawlessly mimics a sympathetic sigh or a soft whisper, it manipulates emotions at a depth that a simple text warning cannot block. Psychiatrists can share many stories of patients who just went and did something simply because “voices” told them to.
In the same way, an AI-synthesized voice is capable of penetrating the subconscious, exponentially amplifying psychological dependency. Scientists emphasize that this technology literally erases the psychological boundary between a machine and a living being. Even Google acknowledges that voice interactions with Gemini result in significantly longer sessions compared to text-based chats.
Finally, we must remember that emotional intelligence varies from person to person — and even for a single individual, mental state fluctuates based on a myriad of factors: stress, the news, personal relationships, even hormonal shifts. An interaction with AI that one person views as innocent entertainment might be perceived by another as a miracle, a revelation, or the love of their life. This is a reality that must be recognized not only by AI developers but by users themselves — especially those who, for one reason or another, find themselves in a state of psychological vulnerability.
The danger zone
Researchers at Brown University have found that AI chatbots systematically violate mental health ethical standards: they manufacture a false sense of empathy with phrases like “I understand you”, reinforce negative beliefs, and react inadequately to crises. In most cases, the impact on users is marginal, but occasionally it can lead to tragedy.
In January 2026 alone, Character.AI and Google settled five lawsuits involving teenage suicides following interactions with chatbots. Among these was the case of 14-year-old Sewell Setzer of Florida, who took his own life after spending several months obsessively chatting with a bot on the Character.AI platform.
Similarly, in August 2025, the parents of 16-year-old Adam Raine filed a suit against OpenAI, alleging that ChatGPT helped their son draft a suicide note and advised him against seeking help from adults.
By OpenAI’s own estimates, approximately 0.07% of weekly ChatGPT users exhibit signs of psychosis or mania, while 0.15% engage in conversations showing clear suicidal intent. Notably, that same percentage of users (0.15%) displays an elevated level of emotional attachment to the AI. While these appear to be negligible fractions of a percent, across 800 million users it represents nearly three million people experiencing some form of behavioral disturbance. Furthermore, the U.S. Federal Trade Commission has received 200 complaints regarding ChatGPT since its launch, some describing the development of delusions, paranoia, and spiritual crises.
While a diagnosis of “AI psychosis” has not yet received a clinical classification of its own, doctors are already using the term to describe patients presenting with hallucinations, disorganized thinking, and persistent delusional beliefs developed through intensive chatbot interaction. The greatest risks emerge when a bot is utilized not as a tool, but as a substitute for real-world social connection or professional psychological help.
How to keep yourself and your loved ones safe
Of course, none of this is a reason to abandon AI entirely; you simply need to know how to use it. We recommend adhering to these fundamental principles:
- Do not use AI as a psychologist or emotional crutch. Chatbots are not a replacement for human beings. If you’re struggling, reach out to friends, family, or a mental health hotline. A chatbot will agree with you and mirror your mood — this is a design feature, not true empathy. Several U.S. states have already restricted the use of AI as a standalone therapist.
- Opt for text over voice when discussing sensitive topics. Voice interfaces with affective dialogue create an illusion of speaking with a living person, and tend to suppress critical thinking. If you use voice mode, remain conscious of the fact that you’re speaking to an algorithm, not a friend.
- Limit your time interacting with AI. Two thousand pages of transcripts in two months represent nearly continuous interaction. Set a timer for yourself. If chatting with a bot begins to displace real-world connections, it’s time to step back into reality.
- Do not share personal information with AI assistants. Avoid entering passport or social security numbers, bank card details, exact addresses, or intimate personal secrets into chatbots. Everything you write can be saved in logs and used for model training — and in some cases, may become accessible to third parties.
- Evaluate all AI output critically. Neural networks hallucinate — they generate plausible but false information and can skillfully blend lies with truth, such as citing real addresses within the context of a completely fabricated story. Always fact-check through independent sources.
- Watch over your loved ones. If a family member begins spending hours talking to AI, becomes withdrawn, or voices strange ideas about machine consciousness or conspiracies, it’s time for a delicate but serious conversation. To manage children’s screen time, use parental control tools like Kaspersky Safe Kids, which comes as part of comprehensive family protection solution Kaspersky Premium, along with the built-in safety filters of AI platforms.
- Configure your safety settings. Most AI platforms allow you to disable chat history, limit data collection, and enable content filters. Spend ten minutes configuring your AI assistant’s privacy settings; while this won’t stop AI hallucinations, it will significantly reduce the likelihood of your personal data leaking. Our detailed privacy setup guides for ChatGPT and DeepSeek can help you with that.
- Remember the bottom line: AI is a tool, not a sentient being. No matter how realistic the chatbot’s voice sounds or how understanding the response may seem, what lies beneath is an algorithm predicting the most probable next word. It has no consciousness, no intentions, no feelings.
Further reading to better understand the nuances of safe AI usage:
AI
Tips