While I'm not disagreeing with you, I would say you're engaging in the no true Scotsman fallacy in this case.
AI safety is: Ensuring your customer service bot does not tell the customer to fuck off.
AI safety is: Ensuring your bot doesn't tell 8-year-olds to eat Tide Pods.
AI safety is: Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked.
AI safety is: Ensuring bots don't turn the world into paperclips.
All of these fall under safety conditions that you, as a biological general intelligence, tend to follow unless you want real-world repercussions.
These I'd agree are genuine safety concerns:

* Ensuring your robot-enabled LLM doesn't smash people's heads in because its system prompt got hacked.
* Ensuring bots don't turn the world into paperclips.
This is borderline:
* Ensuring your bot doesn't tell 8-year-olds to eat Tide Pods.
I'd put this in a similar category as the knives in my kitchen. If my 8-year-old misuses a knife, that's the fault of the adult and not the knife. So it's a safety concern about the use of the AI, but not about the AI being unsafe. Parents should assume 8-year-olds shouldn't be left unsupervised with AIs.
And this has nothing to do with safety:
* Ensuring your customer service bot does not tell the customer to fuck off.
I was trying to get an LLM to help me with a project yesterday and it hallucinated an entire Python library, then proceeded to write a couple hundred lines of code using it. This wasn't harmful, just annoying.
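For what it's worth, a quick sanity check can catch that kind of thing before you waste time on the rest of the code: parse what the model wrote and confirm its imports actually resolve. This is just a rough sketch; the `unresolvable_imports` helper and the `totally_real_llm_helper` module name are made up for illustration.

```python
# Rough sketch: check whether the modules an LLM's generated code imports
# actually exist in the current environment before trusting the rest of it.
import ast
import importlib.util

def unresolvable_imports(source: str) -> list[str]:
    """Return the top-level imported module names that can't be found locally."""
    missing = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names = [node.module]
        else:
            continue  # skip relative imports and anything else
        for name in names:
            root = name.split(".")[0]  # only the top-level package matters here
            if importlib.util.find_spec(root) is None:
                missing.add(root)
    return sorted(missing)

# 'totally_real_llm_helper' stands in for a hallucinated library name.
generated = "import totally_real_llm_helper\nprint(totally_real_llm_helper.run())\n"
print(unresolvable_imports(generated))  # -> ['totally_real_llm_helper']
```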
But folks excited about LLMs talk about how great they are, and when the models do make mistakes, like telling someone to drink bleach to cure a cold, they chide the person for not knowing better than to trust an LLM.