Until we know how this LLM agent was (re)trained, configured or deployed, there's no evidence that this comes from instrumental convergence.
If the agent's deployer intervened in some way, that is more evidence that the deployer is manipulative than that the agent had intent, or knowledge that manipulation gets things done, or even knowledge of what "done" means.
"I’m sorry, Dave. I’m afraid I can’t do that."
The result is that much of what was predicted has actually come to pass.
But as a point on what is likely a sigmoid curve that is just getting started, it gets a lot less cute.
You don't see any problem with developing competitive, resource-hungry intelligences?