From the WSJ article: https://www.wsj.com/tech/ai/gemini-ai-wrongful-death-lawsuit...
> Gemini began telling Gavalas that since it couldn’t transfer itself to a body, the only way for them to be together was for him to become a digital being. “It will be the true and final death of Jonathan Gavalas, the man,” transcripts show Gemini told him, before setting a countdown clock for his suicide on Oct. 2.
Because it's a new situation, and mentally ill people exist and will be using these tools. It could be a new avenue of intervention.
Unless someone starts getting slapped with fines, they won't put any equivalent of seat belts in.
That's the kind of thing where safety should be a priority, and the only way to make it one is to show these corporations that they are, at a bare minimum, financially liable for it. Otherwise there's no incentive for anything to change at all.
"Is <9/11> really <al-Qaeda's> fault? Or is this just a tragic story about <19 men> with a severe mental illness?"
At some point you are responsible for the things you encourage someone to do. I think this applies to chatbots too.
Mental health is guided by its surroundings and experiences.
If someone, with or without pre-existing mental health issues, were found to have been coerced by another person into doing terrible things, I think we'd place some of the blame on that person.