I remember when this was the joke meme in the early days of LLMs. But now, seeing it happen in real life is more than just alarming; it's ridiculous. It's the opposite of compressing a payload over the wire: we take our output, expand it, transmit it, and then compress it again on input. Why do we do this?
I assume this is satire.
I had a similar realization. My team was discussing whether we should hook our open-source codebases into an AI to generate documentation for other developers, and someone asked, "Why can't they just generate documentation for it themselves with AI?" It's a good point: what value would our AI-generated documentation provide that theirs wouldn't?
If you aren't willing to put in the time to verify it works, then it is indeed no more useful than anyone else doing the same task on their own.
I started out by telling the AI the common issues people get wrong and giving it the code. Then I read (not skimmed, not sped through, actually read and thought about) the entire thing and asked for changes. Then I repeated the read-everything, think, ask-for-changes loop until it was correct, which took about 10 iterations (most of a day).
I suspect the AI would have provided zero benefit to someone who is good at technical writing, but I am bad at writing long documents for humans, so I likely would not have done it at all without the assistance.
It was especially unfortunate because, to do its thing, the code required a third party's personal user credentials, including MFA, which is a complete non-starter in server-side code. Apparently the manager's LLM wasn't aware enough to know that.