Because, like the word 'intelligence', the word 'safety' means a lot of different things.

If your language model cyberbullies some kid into offing themselves, could that fall under existing harassment laws?

If you hook a vision/LLM model up to a robot and the model decides it should execute arm motion number 5 to deliberately crush someone's head, is that an industrial accident?

Culpability means a lot of different things in different countries too.

reply
I don't see bullying from a machine as a real thing, any more than people getting bullied by books or a TV show or movie. Bullying fundamentally requires a social interaction.

The real issue is more that AI gets anthropomorphized in general, like putting one in a realistically human-looking robot, as in the video game 'Detroit: Become Human'.

reply
"Lawsuit claims Character.AI is responsible for teen's suicide": https://www.nbcnews.com/tech/characterai-lawsuit-florida-tee...
reply