Code can be viewed as design [1]. On this view, writing code is the design work itself, so generating code with LLMs is a low-effort, low-value activity: it outsources the very thinking that matters.

[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...

reply
Compiled code is meant for the machine; written code is for other humans.
reply
For better or worse, a lot of people seem to disagree with this, and believe that humans reading code is only necessary at the margins, much as we only occasionally inspect compiler output while debugging. Personally, I don't believe we're there yet (and may not get there for some time), but this is where comments like the GP's come from: human legibility is a secondary or tertiary concern, and it's fine to give it up if the code meets its requirements and can be maintained effectively by LLMs.
reply
I rarely see LLMs generate code that is less readable than the rest of the codebase it was created for. I've seen humans short on time or economic incentive produce some truly unreadable code.

Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs), it generates so much noise that you need another AI to summarize it for you.

reply
The main coding agent failure modes I've seen:

- Proliferation of new utils/helpers when equivalents are already defined in the codebase. Particularly a problem in larger codebases

- Tests with bad mocks and bail-outs due to missing things in the agent's runtime environment ("I see that X isn't available, let me just stub around that...")

- Overly defensive off-happy-path handling: returning null or the semantic "empty" response when the correct behavior is to throw an exception that will be handled properly somewhere up the call chain (sketched at the end of this comment)

- Locally optimal design choices with very little "thought" given to ownership or separation of concerns

All of these can pretty quickly turn into a maintainability problem if you aren't keeping a close eye on things. But broadly I agree that, line for line, frontier LLM code is generally better than what humans write, and miles better than what a stressed-out human developer on a short deadline usually produces.
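
A minimal sketch of that third failure mode, in Python with made-up names (USERS, UserNotFoundError, and both functions are hypothetical):

    class UserNotFoundError(Exception):
        pass

    USERS = {"alice": ["order-1", "order-2"]}

    # Agent-style: an unknown user and a user with zero orders are now
    # indistinguishable to every caller, so bugs surface far from the cause.
    def get_orders_defensive(user_id):
        return USERS.get(user_id, [])

    # Usually intended: fail loudly here, and let a handler up the call
    # chain turn the exception into a 404, a retry, or a log entry.
    def get_orders(user_id):
        if user_id not in USERS:
            raise UserNotFoundError(user_id)
        return USERS[user_id]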

reply
Oh god, the bad mocks are the worst. Add instructions not to make mocks and it creates "placeholders"; ask it not to create mocks or placeholders and it creates "stubs". Drives me mad...

To add to this list:

- Duplicate functions when you've asked for a slight change in functionality (e.g. write_to_database and write_to_database_with_cache), never actually updating all the calls to the old function, so you end up with a split codebase (see the sketch after this list).

- In a similar vein, the fallback code path of "else: use a stupid static default" instead of erroring, which would be much more helpful for debugging.

- A strong desire to follow the architecture choices it was trained on, regardless of instructions. It may have been trained on some presumably high-quality, large, enterprise-y codebases, but I'm just trying to write a short little throwaway program that doesn't need the complexity. KISS seems anathema to coding agents.
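
A sketch of that first point, reusing the write_to_database / write_to_database_with_cache names from above (the call sites are made up):

    def write_to_database(record):
        print("writing", record)

    # The agent adds a near-clone instead of extending the original...
    def write_to_database_with_cache(record, cache):
        cache[record["id"]] = record
        print("writing", record)

    # ...and migrates only some of the call sites, splitting the codebase:
    def save_order(order, cache):
        write_to_database_with_cache(order, cache)  # new path

    def save_user(user):
        write_to_database(user)  # old path; the cache never sees users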

reply
And Sturgeon's law would tell us 90% of them are wrong, so what can you do.
reply
Compiled natural language is meant for the machine; written natural language is for other humans.
reply
If AI is the key to compiling natural language into machine code, as so many claim, then the AI should output machine code directly.

But of course it doesn't do that, because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.

reply
This is precisely correct IMHO.

Communication is for humans. It's our superpower. Delegating it loses all the context, all the trust-building potential of effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.

reply
> Programs must be written for people to read, and only incidentally for machines to execute.

from the preface of SICP.

reply
Well, SICP was already considered obsolete around here with the rise of library-abstraction culture.
reply
A lot of writing (maybe most) works much the same way. Code is a means of translating a process into semantics a computer understands; most non-fiction writing is a means of translating information or an idea into semantics that allow other people to understand it.

I don’t think either is inherently bad just because it's AI, but it can definitely be bad if the AI is worse at encoding those ideas into their respective formats.

reply
At the same time, AI-generated code has to be correct and precise, whereas AI-generated text doesn't; there is often no single 'correct solution' for a piece of text.
reply