Technically, that's as "Memory Safe" as you can get!
They can only be interested in one thing: self-advancement. No other explanation works! If they were interested in self-improvement, they might try reading or writing something themselves. Wouldn't it show if they had?
I recognize that models are getting better, but consider: if you don't already understand how programming or LLMs work, and you use LLMs precisely to avoid learning how to do things or how they work (the "CEO" mode), then each incremental improvement will impress you more than it impresses everyone else. There's no AI exception to Dunning-Kruger.
I recognize that "this" is a difficult thing to pin down in real time. But in the end we know it when we see it, and it has the fascinating and useful quality of not really being explainable by anything else.
Unless and until the culture gets to a place where no one would risk embarrassing themselves by doing something like this, we're stuck with it.