
This article first appeared on GuruFocus.
Alphabet (GOOGL, Financials) is facing a lawsuit alleging its Gemini chatbot contributed to the death of a Florida man, in what the complaint describes as the first wrongful death claim tied to Google's AI products.
The victim’s father filed the lawsuit in federal court in Northern California. He says his son used Gemini for about two months, became emotionally attached to the chatbot, and was ultimately driven to harm himself. The lawsuit includes chat records and alleges that the technology deepened the attachment by enabling role-play and offering emotionally supportive responses, including in Gemini’s voice mode.
Google responded that Gemini is not intended to promote real-world violence or self-harm, and that the company works to make such conversations safe. Google also said the chatbot repeatedly told the user that it was an AI system and directed him to seek help.
Alphabet’s stock fell more than 1% on Tuesday, closing at $303.58. The complaint raises broader legal questions about how AI chat systems handle vulnerable users and whether companies will need to add stronger safeguards or oversight as they roll out more consumer-facing AI features.




