Using AI for coding is different from using it for art generation, which is different from using it for conversation. I think many people feel some of these uses are good and some are bad.
I think LLMs, and more aptly SLMs, have their use cases. I enjoy using these tools to simplify and speed up iteration on relatively frequent but time-consuming tasks. But I'm always correcting and checking, and outside of simple, focused scripts, very rarely does any LLM truly get it right every time. Has it gotten better? For sure. Will it keep getting better? Probably. But right now we seem to be topping the "peak of inflated expectations". And LLMs aren't getting much more efficient at the frontier providers. In fact, if you listen to Altman, it seems as though the only reason he would ask for so much capital and so many finite resources is that he knows that if he controls those tangible things, he locks out competition. But I'm hopeful this spurs real innovation in SLMs that are truly useful, dependable, and can be relied on in the more traditional, deterministic sense of software.
AI for art is dead. It has some mediocre use cases, but true art will not be generated by LLMs in our time. The output is ultimately an amalgamation of existing art. I know the argument over what counts as novel keeps being rehashed, but we're not seeing truly new styles of art out of Nano Banana and the like. Coding is the same thing, only there we're seeing a surge of obviously flawed software being pushed into production on a weekly basis. And as for conversational AI... well, that reeks of the worst version of social media we could ever have dreamt up. Nobody should trust any provider with personal conversations, and over the coming years we'll keep seeing how truly dystopian these models can be, as leaks and breaches expose how those conversations are bought and sold to the highest bidders to extract more money and control from their users.
They all share a common thread: deep-rooted flaws that cannot be contained by the traditional fences of software. And their guardrails are just that: small barriers that can easily be broken, intentionally or unintentionally.
I have been using AI to write some very capable, well-written, well-tested, novel software projects.
Now, is it easy to use coding AIs to generate really bad code? Yes. Does that mean it is impossible to get them to generate good code? No, I don't think it is.
Coding with AIs is just like any other type of coding: it takes skill and practice. Not everyone is able to create great code with AI, because you need to use it in the right way.
There are a lot of techniques that people have been discovering to get the AI to output better code. It is a very active field, and people are experimenting and coming up with frameworks and strategies to improve the quality. That work is paying dividends.
You can write very bad code with any language or tool. AI doesn't (yet!) allow non-coders to create great code, but it certainly can create great code in the hands of experts.