They can be coerced to do certain things, but I'd like to see you or anyone prove that you can "trick" any of these models into building software that can be used to autonomously kill humans. I'm pretty certain you couldn't even get it to build a design document for such software.
When there is proof of your claim, I'll eat my words. Until then, this is just lazy nonsense.
Another example is autonomous vehicles. Those can obviously kill people autonomously (despite every intention not to), and LLMs will happily draw up design docs for them all day long.
EDIT: didn't see the sibling comment. Also, I guess directly operating weaponry is different from producing code for weaponry.
I guess we'll find out the exciting answers to these questions and more, very soon!