AI-generated code is meant for the machine, or for the author/prompter. AI-generated text is typically meant for other people. I think that makes a meaningful difference.
reply
Code can be viewed as design [1]. By this view, generating code using LLMs is a low-effort, low-value activity.

[1] Code as design, essays by Jack Reeves: https://www.developerdotstar.com/mag/articles/reeves_design_...

reply
Compiled code is meant for the machine; written code is for other humans.
reply
For better or worse, a lot of people seem to disagree with this, and believe that humans reading code is only necessary at the margins, similarly to debugging compiler outputs. Personally I don't believe we're there yet (and may not get there for some time) but this is where comments like GP's come from: human legibility is a secondary or tertiary concern and it's fine to give it up if the code meets its requirements and can be maintained effectively by LLMs.
reply
I rarely see LLMs generate code that is less readable than the rest of the codebase it's been created for. I've seen humans who are short on time or economic incentive produce some truly unreadable code.

Of more concern to me is that when it's unleashed on the ephemera of coding (Jira tickets, bug reports, update logs) it generates so much noise you need another AI to summarize it for you.

reply
The main coding agent failure modes I've seen:

- Proliferation of utils/helpers when there are already ones defined in the codebase. Particularly a problem for larger codebases

- Tests with bad mocks and bail-outs due to missing things in the agent's runtime environment ("I see that X isn't available, let me just stub around that...")

- Overly defensive off-happy-path handling, returning null or the semantic "empty" response when the correct behavior is to throw an exception that will be properly handled somewhere up the call chain

- Locally optimal design choices with very little "thought" given to ownership or separation of concerns

All of these can pretty quickly turn into a maintainability problem if you aren't keeping a close eye on things. But broadly I agree that line-for-line, frontier LLM code is generally better than what humans write, and miles better than what a stressed-out human developer with a short deadline usually produces.
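The off-happy-path point can be sketched in a few lines (a minimal illustration with hypothetical names, not code from any real agent transcript):

```python
# Hypothetical sketch of the "overly defensive" failure mode described above.
# find_user_agent_style swallows the failure and returns a semantic "empty";
# find_user_strict raises, so a handler up the call chain can deal with it.

USERS = {"alice": {"id": 1}}

def find_user_agent_style(name):
    # Agent-style: silently return an "empty" value on the unhappy path,
    # so downstream code limps along with bad data.
    return USERS.get(name, {})

def find_user_strict(name):
    # Preferred: raise, and let whoever owns error handling decide.
    try:
        return USERS[name]
    except KeyError:
        raise LookupError(f"unknown user: {name}")

# The defensive version makes the failure invisible at the call site:
# find_user_agent_style("bob") quietly returns {} instead of erroring.
```

The difference only shows up later, when the empty dict propagates somewhere far from the lookup that produced it.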

reply
Oh god, the bad mocks are the worst. Try adding instructions not to make mocks and it creates "placeholders", ask it to not create mocks or placeholders and it creates "stubs". Drives me mad...

To add to this list:

- Duplicate functions when you've asked for a slight change of functionality (eg. write_to_database and write_to_database_with_cache), never actually updating all the calls to the old function so you have a split codebase.

- In a similar vein, the fallback code path of "else: do a stupid static default" instead of erroring out, when an error would be much more helpful for debugging.

- Strong desires to follow architecture choices it was trained on, regardless of instruction. It might have been trained on some presumably high quality, large and enterprise-y codebases, but I'm just trying to write a short little throwaway program which doesn't need the complexity. KISS seems anathema to coding agents.

reply
And Sturgeon tells us 90% of people are wrong, so what can you do.
reply
Compiled natural language is meant for the machine; written natural language is for other humans.
reply
If AI is the key to compiling natural language into machine code like so many claim, then the AI should output machine code directly.

But of course it doesn't do that, because we can't trust it the way we do a traditional compiler. Someone has to validate its output, meaning it most certainly IS meant for humans. Maybe that will change someday, but we're not there yet.

reply
This is precisely correct IMHO.

Communication is for humans. It's our super power. Delegating it loses all the context, all the trust-building potential from effort signals, and all the back-and-forth discussion in which ideas and bonds are formed.

reply
> Programs must be written for people to read, and only incidentally for machines to execute.

from the preface of SICP.

reply
Well, SICP was already considered obsolete around here with the rise of library-abstraction culture.
reply
A lot of writing (maybe most) is almost the same. Code is a means of translating a process into semantics a computer understands. Most non-fiction writing is a means of translating information or an idea into semantics that allow other people to understand that information or idea.

I don’t think either is inherently bad because it’s AI, but it can definitely be bad if the AI is less good at encoding those ideas into their respective formats.

reply
At the same time, AI-generated code has to be correct and precise, whereas AI-generated text doesn't. There's often no 'correct solution' in AI-generated text.
reply
Yeah, I hate the idea that there's a difference. Code, to me, has always been as expressive of a person as normal prose. With LLMs you lose a lot of vital information about the programmer's personality. That leads to worse outcomes, because it makes the failures less predictable.
reply
Code _can_ be expressive, but it doesn't have to be; it depends on its purpose.

Some code I cobbled together to pass a badly written assignment at school. Other code I curated to be beautiful for my own benefit or someone else’s.

I think the better analogy in writing would be… using an LLM to draft a reply to a hawkish car dealer you’re trying to not get screwed by is absolutely fine. Using it to write a birthday card for someone you care about is terrible.

reply
All code is expressive, if a person emitted it, it is expressive about their state of mind, their values and their context.
reply
Phooey.
reply
Considering the rise of transformer-based upscaling today (like the newest DLSS), lots of game devs are already indirectly OK with generated art. Sure, the assets themselves might be handmade, but if the render pipeline ends with a generated, upscaled image, then the line between AI and not-AI is obviously very blurry.

Also, is a hand modeled final asset built based on AI-generated concept art still "AI"?

Who cares if a bush or a tree is fully AI-generated anyway? These "no AI whatsoever in any game" people virtue-signal too much to make a fair argument for whatever they're preaching about. Sure, I agree with the value of human creativity, but I also want people to be able to use whatever tools they like.

reply
I think there’s an uncanny valley effect with writing now.

Yesterday I left a code review comment, and someone asked if AI wrote it. The investigation and reasoning were 100% me. I spent over an hour chasing a nuanced timezone/DST edge case, iterating until I was sure the explanation was correct. I did use Codex CLI along the way, but as a power tool, not a ghostwriter.

The comment was good, but it was also “too polished” in a way that felt inorganic. If you know a domain well (code, art, etc.), you start to notice the tells even when the output is high quality.

Now I’m trying to keep my writing conspicuously human, even when a tool can phrase it perfectly. If it doesn’t feel human, it triggers the whole ai;dr reaction.

reply
I agree, and I feel like this was happening in a vaguer way before, but it's coming more acutely into focus now.

E.g. music artists would happily post their music with unattributed cover art. I've seen graphic artists post video slideshows with unattributed music. Authors (books, blog posts) who think cover art or header images are a necessary evil.

I was talking with a lawyer who said that AI legal drafting would never happen because legal work requires high level reasoning and quality is critical, then told me that AI written software would be fine if you just sandbox it.

Edit: I think it's true, there is some amount of slop/coasting in every field, and there's nothing wrong with wanting to avoid that. But people take that too far and decide that everything in (adjacent field) is trivial when actually many fields today are just very complex.

reply
We had a junior engineer do some research on a handful of different solutions for a technical design and present to the team. He came up with a 27-page document with 70+ references (2/3 of which were Reddit threads), no more than a few hours after the task was assigned.

I would have been more okay with AI-generated code; it would likely have been more objective and less verbose. I refused to review something he obviously hadn't put enough of his own effort into, not even a POC. When I asked for his own opinion on the different solutions evaluated, he didn't have one.

It's not about the document per se, but the actual value of this verbose AI-generated slop. Code, even if poorly reviewed, is at least executable and likely to produce output that satisfies functional requirements.

Our PM is now evaluating tools that generate documentation for our platform by interpreting source code. The output includes descriptions of things like what the title is and what the back button is for, but wouldn't tell you the valid inputs for creating a new artefact. This AI-generated doc sits alongside our human-made Confluence docs, which is likely to add spam and degrade the quality of search results for useful information.

reply
A flavor of the fundamental attribution error, perhaps? It's not a snug fit, but it's close.
reply
My perspective as an eng lead is it’s all shit. Words, code, the lot. It’s literally an enabler for the worst characteristics of humanity: laziness and disinterested incompetence.

People are happy to shovel shit if they can get away with it.

reply
Same here.

In addition, I feel like there has been an overall drop in software quality along with the rise of AI driven code development. Perhaps there are other driving factors (socioeconomic, psychological, etc) and perhaps I am misattributing it to AI. Then again, could also just be all the slop.

reply
I would agree. The stuff that hasn’t been updated in the last 3-4 years seems to be pretty solid still. Almost nothing else is.
reply
> Everyone thinks their use of AI is perfectly justified while the others are generating slops

No doubt, but I think there's a bit of a difference between AI generating something utilitarian vs something expected to have at least some taste/flavor.

AI generated code may not be the best compared to what you could hand craft, along almost any axis you could suggest, but sometimes you just want to get the job done. If it works, it works, and maybe (at least sometimes) that's all the measure of success/progress you need.

Writing articles and posts is a bit different - it's not just about the content, it's about how it's expressed and did someone bother to make it interesting to read, and put some of their own personality into it. Writing is part communication, part art, and even the utilitarian communication part of it works better if it keeps the reader engaged and displays good theory of mind as to where the average reader may be coming from.

So, yeah, getting AI to do your grunt work programming is progress, and a post that reads like a washing machine manual can fairly be judged as slop in a context where you might have hoped for/expected better.

reply
The author is a blogger (creator and consumer) and a coder, though. They are speaking from experience in both cases, so the metaphor isn't apt.

It's worth pointing out that AI is not a monolith. It might be better at writing code than making art assets. I don't work with gaming, but I've worked with Veo 3, and I can tell you, AI is not replacing Vince Gilligan and Rhea Seehorn. That statement has nothing to do with Claude though...

reply
Generating art is worse than generating code though IMO. It’s more personal. Everything exists on a spectrum, even slop.
reply
Code can be art. Art can be formulaic and lazy and disposable.

The context in which both the code or art is used matters more than whether or not what you're AI-generating is "art".

reply
I agree context is most important. But, I was talking in general.
reply
Turns out it's only slop if it comes from anyone else, if you generated it it's just smart AI usage.
reply
[flagged]
reply
Because your users don’t see the network code or the GUI framework.

But to your users, the visual identity is the identity of the game. Do you really want to outsource that to AI?

reply