Importantly, I think AI companies are motivated toward overengineered solutions, since those increase the buyer's token spend. I'm not sure how we can create incentives that optimize for finding the 'right' solution, which may be the cheapest one (the bash one-liner). Perhaps a widely recognized benchmark for this class of problems, one that isn't overly optimized for?
Yes, that, and also: the more complicated the solution, the more likely no one reads or reviews it too carefully, and will instead depend on an LLM to 'read' and 'review' it.
Even ignoring token costs, there's a strong incentive for LLMs to generate complex solutions, because those solutions generate demand for further LLM use. (You don't really want to review that 30,000-line pull request by hand, do you?)
A bash one-liner can be a chain of 5+ programs, each buffering its stdin/stdout. What if the CLI does the same operation in a single streaming pass instead? Just a random example, but that can easily be worth it.
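As a rough sketch of that tradeoff (the input here is just toy data): a pipeline spawns one process per stage, with the kernel copying data through a pipe between each pair, while a single awk process can do the equivalent work in one streaming pass.

```shell
# Pipeline version: four processes (printf, sort, uniq, wc), three pipes,
# each stage reading and writing through its own stdio buffers.
printf 'b\na\nb\nc\n' | sort | uniq | wc -l

# Single-process version: awk deduplicates in one pass over the stream,
# keeping seen lines in an in-memory array instead of sorting the input.
printf 'b\na\nb\nc\n' | awk '!seen[$0]++' | wc -l
```

Both print the same unique-line count; the difference is in process count, intermediate copies, and (for `sort`) having to see the whole input before emitting anything.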
I just don't like the fictional straw man where an expert has somehow been brainwashed by AI into forgetting everything they ever knew.
It sounds like the kind of thing people will assume must surely be important and in active use, because why else go through all those hoops instead of doing a quick hack?
But I guess we can just throw AI at the maintenance burden anyway...
I decided to go for the charitable interpretation of "the alternatives are close enough in functionality that writing by hand is not worth it", instead of the uncharitable interpretation of "these examples are completely made up".
It used to be the proverbial one-liner with zero documentation, because that was the best ratio of effort to results. Now the effort is offloaded to the AI and the results look more impressive. Today that will still impress a lot of people: bosses, colleagues. Very soon everyone will see through it, and anything overly stuffy will have the opposite effect of looking low-effort.