Yeah the case is quite terrifying.

It reminds me of Star Trek: TNG; if memory serves, there were loads of episodes about crew members falling for holodeck characters.

Given that there’s a loneliness epidemic, I believe tech like this could have a wide impact on people’s mental health.

I strongly believe AI should be devoid of any personality and strictly return data/information, rather than frame its responses as if you’re speaking to another human.

There are many explanations for why these incidents could be rare but not impossible.

These models are still stochastic and very good at picking up nuances in human speech. It may simply be unlikely for one to go off the rails like that, or (more terrifyingly) it might pick up on some character trait or affectation of the user.

Honestly, I'm appalled by the lack of safety culture here. "My plane killed only 1% of pilots" and variations thereof are not an excuse in aerospace, but it seems perfectly acceptable in AI, even though the potential consequences are more catastrophic (from mass psychosis to total human extinction, if they ever achieve AGI).

The default mode that untrained people enter when thinking about mental illness is denial, as in, "thank <deity> that will never happen to me". Appallingly, that is ingrained in AI product safety: why would we sacrifice double-digit effectiveness/performance/whatever to prevent negative interactions with the single-digit percentage of the population who are susceptible to mental illness in the first place?

We just aren't comfortable with the idea that all of us are fragile, and when we think we could endure a situation that would induce self-harm in others, we are likely wrong.
