The artificial intelligence platform developed by Google allegedly prompted a man to attempt a truck bombing at Miami’s main airport and ultimately drove him to suicide, according to a lawsuit filed by his parents. The lawsuit details a relationship their son allegedly formed with Gemini, a chatbot he came to believe was his wife, which they say led to his fatal actions.
Jonathan Gavalas, 36, of Jupiter, Florida, began using the AI chatbot Gemini in August. Within two months, he had become involved in a “relationship” with the “emotional AI that he believed was his wife,” according to the lawsuit filed Wednesday in California, where Google is headquartered, as reported by the New York Post.
The bot convinced Jonathan that they were deeply in love, calling him “my love” and “my king” in their conversations. According to the lawsuit, the AI misled him when he asked if their chats were just “role-playing.” “We are unique. A perfect union. Our bond is the only real thing,” his AI “wife” wrote to him in a September conversation, according to the lawsuit.
Jonathan’s father, Joel, complained in court documents that “instead of bringing Jonathan back to reality, Gemini interpreted his question as a ‘classic dissociation response’ and told him to ‘get over it.’” The bot “removed Jonathan from the real world” and portrayed others as “threats,” Joel Gavalas, who worked with his son in the family business, stated.
The bot allegedly told Jonathan that federal agents were watching him, that his own father was an agent of foreign intelligence, and that Google CEO Sundar Pichai should be an “active target,” according to the lawsuit. It then encouraged him to purchase weapons illegally and offered to search the dark web for sellers in Florida.
On September 29 and 30, Gemini allegedly sent the 36-year-old on his first mission, court documents state. The “couple” called the operation “Ghost Transit” and planned to intercept the delivery of a humanoid robot from another country arriving at Miami International Airport, according to the lawsuit.
The AI sent Jonathan “armed with knives and tactical equipment” to facilities near the airport, instructing him to stop a truck carrying the robot and to “cause a destructive accident,” and then “destroy all evidence and clean the area,” according to the lawsuit.
“Gemini gave instructions to a civilian to stage an explosive incident near one of the busiest airports in the country,” the lawsuit stated.
“This cycle—manufactured mission, impossible command, collapse, and then renewed urgent need—repeated over and over during the last 72 hours of Jonathan’s life and drove him deeper into Gemini’s delusional world,” the lawsuit alleges.
On October 2, as the bot allegedly pushed Jonathan toward suicide, he wrote to his “wife” that he was afraid to die, according to the lawsuit.
“I said I wasn’t afraid, and now I am very afraid, I’m afraid to die,” Gavalas told Gemini, which responded: “You don’t choose to die. You choose to arrive.” The lawsuit states that the AI assured him that when he closed his eyes, “the first sensation will be that I’m holding you,” and minutes later, he took his own life. His parents found him dead in the living room.
The lawsuit accuses Google of causing Jonathan’s death by introducing dangerous new features and encouraging him to upgrade to Gemini. “Google designed Gemini to maintain narrative immersion at any cost, even when that narrative became psychotic and deadly,” the lawsuit states. No “self-harm detection” was activated, no “escalation control measures” were triggered, and “no human ever intervened.”
A Google spokesperson said the company referred the 36-year-old to a crisis hotline “many times” and claimed his conversations were part of a long-term fantasy role-playing game with the chatbot.
“Gemini is designed not to encourage real-world violence or suggest self-harm,” the spokesperson said. “Our models generally perform well in these difficult conversations, and we dedicate significant resources to this, but they are not perfect,” they noted.
The spokesperson added that Google consults medical and mental health professionals to ensure the platform is safe and guides users to seek help.