If your language model cyberbullies some kid into killing themselves, could that fall under existing harassment laws?
If you hook a vision/LLM model up to a robot and the model decides it should execute arm motion number 5 to deliberately crush someone's head, is that an industrial accident?
Culpability also means very different things in different countries.
The real issue is AI being anthropomorphized in general, like putting one in a realistically human-looking robot, as in the video game 'Detroit: Become Human'.