> In the end it will be the users sculpting formal systems like playdoh.

And unless the user is a competent programmer, at least in spirit, it will look like the creation of the 3-year-old next door, not like Wallace and Gromit.

It may be fine, but the difference is that one is loved only by its parents, while the other gets millions of people to go to the theater.

Play-Doh gave the power of sculpting to everyone, including small children, but if you don't want to make an ugly mess, you have to be a competent sculptor to begin with, and it involves some fundamentals that do not depend on the material. There is a reason why clay animators are skilled professionals.

The quality of vibe coded software is generally proportional to the programming skills of the vibe coder as well as the effort put into it, like with all software.

reply
It really depends what kind of time frame we're talking about.

As far as today's models go, these are best understood as tools to be used by humans. They're only replacements for humans insofar as individual developers can accomplish more with the help of an AI than they could alone, so a smaller team can accomplish what used to require a bigger team. Due to Jevons paradox this is probably a good thing for developer salaries: their skills are now that much more in demand.

But you have to consider the trajectory we're on. GPT went from an interesting curiosity to absolutely groundbreaking in less than five years. What will the next five years bring? Do you expect development to speed up, slow down, stay the course, or go off in an entirely different direction?

Obviously, the correct answer to that question is "Nobody knows for sure." We could be approaching the top of a sigmoid type curve where progress slows down after all the easy parts are worked out. Or maybe we're just approaching the base of the real inflection point where all white collar work can be accomplished better and more cheaply by a pile of GPUs.

Since the future is uncertain, a reasonable course of action is probably to keep your own coding skills up to date, but also get comfortable leveraging AI and learning its (current) strengths and weaknesses.

reply
I don't expect exponential growth to continue indefinitely... I don't think the current line of LLM based tech will lead to AGI, but that it might inspire what does.

That doesn't mean it isn't and won't continue to be disruptive. Looking at generated film clips, it's beyond impressive... and despite limitations, it's going to lead to a lot of creativity. That doesn't mean someone making something longer won't have to work that much harder to get something consistent... I've enjoyed a lot of the Star Wars fan films that have been made, but there's a lot of improvement needed in the voice acting, sets, characters, etc. before I'd pay to rent one or see it in a theater.

Ironically, the push towards modern progressivism and division from Hollywood has largely been a shortfall... If they really wanted to make money, they'd lean into pop-culture fun and rah rah 'Merica, imo. Even with the new He-Man movie, the biggest critique is they bothered to try to integrate real world Earth as a grounding point. Let it be fantasy. For that matter, extend the delay from theater to PPV even. "Only in theaters for 2026" might actually be just enough push to get butts in seats.

I used to go to the movies a few times a month; now it's been at least a year since I've thought of going. I actually might for He-Man or the Spider-Man movies... Mixed on the Mandalorian.

For AI and coding... I've started using it more the past couple months... I can't imagine being a less experienced dev with it. Even in how I've used it, I predict, catch and handle so many issues. The thought of vibe-coded apps in the wild is shocking to terrifying and I wouldn't want my money anywhere near them. It takes a lot of iteration, curation and baby-sitting, after creating a good level of pre-documentation/specifications to follow. That said, I'd say I'm at least 5x more productive with it.

reply
so agentic play-doh sculpting

challenge accepted

reply
> The benefits we get from checking in with other humans, like error correction, and delegation can all be done better by AI.

Not this generation of AI though. It's a text predictor, not a logic engine - it can't find actual flaws in your code, it's just really good at saying things which sound plausible.

reply
> it can't find actual flaws in your code

I can tell from this statement that you don't have experience with claude-code.

It might just be a "text predictor" but in the real world it can take a messy log file, and from that navigate and fix issues in source.

It can appear to reason about root causes and issues with sequencing and logic.

That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and produces real world fixes.

reply
> I can tell from this statement that you don't have experience with claude-code.

I happen to use it on a daily basis. 4.6-opus-high to be specific.

The other day it surmised from (I assume) the contents of my clipboard that I wanted to do A, while I really wanted to do B; it's just that A was a more typical use case. Or actually: hardly anyone ever does B, as it's a weird thing to do, but I needed to do it anyway.

> but it is indistinguishable from actual reasoning

I can distinguish it pretty well when it makes mistakes someone who actually read the code and understood it wouldn't make.

Mind you: it's great at presenting someone else's knowledge and it was trained on a vast library of it, but it clearly doesn't think itself.

reply
What do you mean the content of your clipboard?
reply
I either accidentally pasted it somewhere and removed it, forgetting I'd done that, or it's reading the clipboard.

The suggestion it gave me started with the contents of the clipboard and expanded to scenario A.

reply
Sorry to sound rude - but you polluted the context, pointing to the fact you would like A, and then found it annoying that it tried to do A?
reply
Oh, please. There’s always a way to blame the user, it’s a catch-22. The fact is that coding agents aren’t perfect and it’s quite common for them to fail. Refer to the recent C-compiler nonsense Anthropic tried to pull for proof.
reply
It fails far less often than I do at the cookie cutter parts of my job, and it’s much faster and cheaper than I am.

Being honest, I probably have to write some properly clever code or do some actual design as a dev lead like… 2% of my time? At most? The rest of the code-related work I do, it’s outperforming me.

Now, maybe you’re somehow different to me, but I find it hard to believe that the majority of devs out there are balancing binary trees and coming up with shithot unique algorithms all day rather than mangling some formatting and dealing with improving db performance, picking the right pattern for some backend and so on style tasks day to day.

reply
I know I am not supposed to be negative in HN, but lay off the koolaid, dear colleague.
reply
What you're describing is not finding flaws in code. It's summarizing, which current models are known to be relatively good at.

It is true that models can happen to produce a sound reasoning process. This is probabilistic however (moreso than humans, anyway).

There is no known sampling method that can guarantee a deterministic result without significantly quashing the output space (excluding most correct solutions).

I believe we'll see a different landscape of benefits and drawbacks as diffusion language models begin to emerge, and as even more architectures are invented and put into practice.

I have a tentative belief that diffusion language models may be easier to make deterministic without quashing nearly as much expressivity.

reply
This all sounds like the stochastic parrot fallacy. Total determinism is not the goal, and it is not a prerequisite for general intelligence. As you allude to above, humans are also not fully deterministic. I don't see what hard theoretical barriers you've presented toward AGI or future ASI.
reply
Did you just invent a nonsense fallacy to use as a bludgeon here? “Stochastic parrot fallacy” does not exist, and there is actually quite a bit of evidence supporting the stochastic parrot hypothesis.
reply
I haven't heard the stochastic parrot fallacy (though I have heard the phrase before). I also don't believe there are hard theoretical barriers. All I believe is that what we have right now is not enough yet. (I also believe autoregressive models may not be capable of AGI.)
reply
> moreso than humans

Citation needed.

reply
Much of the space of artificial intelligence is based on a goal of a general reasoning machine comparable to the reasoning of a human. There are many subfields that are less concerned with this, but in practice, artificial intelligence is perceived to have that goal.

I am sure the output of current frontier models is convincing enough to outperform the appearance of humans to some. There is still an ongoing outcry from users who had built a romantic relationship with GPT-4o over its discontinuation. However I am not convinced that language models have actually reached the reliability of human reasoning.

Even a dumb person can be consistent in their beliefs, and apply them consistently. Language models strictly cannot. You can prompt them to maintain consistency according to some instructions, but you never quite have any guarantee. You have far less of a guarantee than you could have instead with a human with those beliefs, or even a human with those instructions.

I don't have citations for the objective reliability of human reasoning. There are statistics about unreliability of human reasoning, and also statistics about unreliability of language models that far exceed them. But those are both subjective in many cases, and success or failure rates are actually no indication of reliability whatsoever anyway.

On top of that, every human is different, so it's difficult to make general statements. I only know from my work circles and friend circles that most of the people I keep around outperform language models in consistency and reliability. Of course that doesn't mean every human or even most humans meet that bar, but it does mean human-level reasoning includes them, which raises the bar that models would have to meet. (I can't quantify this, though.)

There is a saying about fully autonomous self-driving vehicles that goes a little something like: they don't just have to outperform the worst drivers; they have to outperform the best drivers, for it to be worth it. Many fully autonomous crashes happen because the autonomous system screwed up in a way that a human would not. An autonomous system typically lacks the creativity and ingenuity of a human driver.

Though they can already be more reliable in some situations, we're still far from a world where autonomous driving can take liability for collisions, and that's because they're not nearly reliable or intelligent enough to entirely displace the need for human attention and intervention. I believe Waymo is the closest we've gotten and even they have remote safety operators.

reply
It's not enough for them to be "better" than a human. When they fail they also have to fail in a way that is legible to a human. I've seen ML systems fail in scenarios that are obvious to a human and succeed in scenarios where a human would have found it impossible. The opposite needs to be the case for them to be generally accepted as equivalent, and especially the failure modes need to be confined to cases where a human would have also failed. In the situations I've seen, customers have been upset about the performance of the ML model because the solution to the problem was patently obvious to them. They've been probably more upset about that than about situations where the ML model fails and the end customer also fails.
reply
That's not a citation.
reply
That’s because there’s no objective research on this. Similarly, there are no good citations to support your objection. They simply don’t exist yet.
reply
Maybe not worth discussing something that cannot be objectively assessed then.
reply
It's roughly why I think this way, along with a statement that I don't have objective citations. So sure, it's not a citation. I even said as much, right in the middle there.
reply
Nothing you've said about reasoning here is exclusive to LLMs. Human reasoning is also never guaranteed to be deterministic, excluding most correct solutions. As OP says, they may not be reasoning under the hood but if the effect is the same as a tool, does it matter?

I'm not sure if I'm up to date on the latest diffusion work, but I'm genuinely curious how you see them potentially making LLMs more deterministic? These models usually work by sampling too, and it seems like the transformer architecture is better suited to longer context problems than diffusion

reply
The way I imagine greedy sampling for autoregressive language models is guaranteeing a deterministic result at each position individually. The way I'd imagine it for diffusion language models is guaranteeing a deterministic result for the entire response as a whole. I see diffusion models potentially being more promising because the unit of determinism would be larger, preserving expressivity within that unit. Additionally, diffusion language models iterate multiple times over their full response, whereas autoregressive language models get one shot at each token, and before there's even any picture of the full response. We'll have to see what impact this has in practice; I'm only cautiously optimistic.
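
To make the distinction concrete, here's a minimal sketch (in Python, with a hypothetical `model` callable that returns next-token logits; none of this is any particular library's API) of the per-position trade-off I mean: greedy decoding is deterministic but commits to one token at a time, while temperature sampling restores expressivity at the cost of run-to-run determinism. A diffusion model would instead revise a draft of the whole response over several passes.

    import numpy as np

    def greedy_decode(model, prompt_ids, max_new=50, eos_id=0):
        # Deterministic per position: argmax at every step, no randomness,
        # but also no way to revisit a token once it's emitted.
        ids = list(prompt_ids)
        for _ in range(max_new):
            logits = model(ids)               # hypothetical: logits over the vocab
            next_id = int(np.argmax(logits))
            ids.append(next_id)
            if next_id == eos_id:
                break
        return ids

    def temperature_decode(model, prompt_ids, temperature=0.8, max_new=50, eos_id=0):
        # The usual compromise: sample from the softmax to keep expressivity,
        # giving up determinism between runs.
        rng = np.random.default_rng()
        ids = list(prompt_ids)
        for _ in range(max_new):
            logits = np.asarray(model(ids)) / temperature
            probs = np.exp(logits - logits.max())
            probs /= probs.sum()
            ids.append(int(rng.choice(len(probs), p=probs)))
            if ids[-1] == eos_id:
                break
        return ids
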
reply
I guess it depends on the definition of deterministic, but I think you're right and there's strong reason to expect this will happen as they develop. I think the next 5 - 10 years will be interesting!
reply
deleted
reply
And not this or any existing generation of people. We're bad at determining want vs need, being specific, genericizing our goals into a conceptual framework of existing patterns, and documenting & explaining things in a way that gets to a solid goal.

The idea that the entire top-down processes of a business can be typed into an AI model and out comes a result is, again, a specific type of tech-person ideology that sees humanity as an unfortunate annoyance in the process of delivering a business. The rest of the world sees it the other way round.

reply
I would have agreed with you a year ago
reply
Absolutely nuts, I feel like I'm living in a parallel universe. I could list several anecdotes here where Claude has solved issues for me in an autonomous way that (for someone with 17 years of software development, from embedded devices to enterprise software) would have taken me hours if not days.

To the naysayers... good luck. No group of people's opinions matter at all. The market will decide.

reply
I think it’s just fear. I sure know that, after 25 years as a developer with a great salary and never once in all that time considering the chance of ever being unemployable, I’m feeling it too.

I think some of us come to terms with it in different ways.

reply
I used to sometimes get stuck on a problem for weeks and then get a budget pulled or get put on another project. Sometimes those issues never did get solved. Or have to tell someone sorry I don't have capacity to solve a problem for you. Now a lot of that anxiety has been replaced with a more can do attitude. Like wouldn't being able to pull off results create more opportunity?
reply
I wonder if the parent comment's remark is a communication failure or pedantry gone wrong, because, like you, I see claude-code out there solving real problems and finding and fixing defects.

A large share of the bugs that get raised are now fixed by claude automatically, from just the reports as written. Everything is human-reviewed; sometimes it fixes things in ways I don't approve of, and it can be guided.

It has an astonishing capability to find and fix defects. So when I read "It can't find flaws", it just doesn't fit my experience.

I have to wonder if the disconnect is simply in the definition of what it means to find a flaw.

But I don't like to argue over semantics. I don't actually care if it is finding flaws by the sheer weight of language probability rather than logical reasoning, it's still finding flaws and fixing them better than anything I've seen before.

reply
I can't control random internet people, but within my personal and professional life, I see the effective pattern of comparing prompts/contexts/harnesses to figure out why some are more effective than others (in fact, tooling is being developed across the AI industry to do so; claude even added the "insights" command).

I feel that many people that don't find AI useful are doing things like, "Are there any bugs in this software?" rather than developing the appropriate harness to enable the AI to function effectively.

reply
If you only realized how ridiculous your statement is, you never would have stated it.
reply
It's also literally factually incorrect. Pretty much the entire field of mechanistic interpretability would obviously point out that models have an internal definition of what a bug is.

Here's the most approachable paper that shows a real model (Claude 3 Sonnet) clearly having an internal representation of bugs in code: https://transformer-circuits.pub/2024/scaling-monosemanticit...

Read the entire section around this quote:

> Thus, we concluded that 1M/1013764 represents a broad variety of errors in code.

(Also the section after "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions")

This feature fires on actual bugs; it's not just a model pattern matching saying "what a bug hunter may say next".
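
For anyone who hasn't read it: the features in that paper come from a sparse autoencoder trained on the model's residual-stream activations, and "firing" just means one coordinate of that sparse vector is large on a given token. A toy sketch of the readout (shapes and names made up for illustration; the real dictionary has on the order of a million learned features):

    import numpy as np

    def sae_features(resid_act, W_enc, b_enc):
        # resid_act: (d_model,) residual-stream activation at one token position
        # W_enc:     (n_features, d_model) learned encoder weights
        # b_enc:     (n_features,) learned encoder bias
        # The ReLU makes the feature vector sparse: most entries are exactly zero.
        return np.maximum(0.0, W_enc @ resid_act + b_enc)

    # "Feature 1M/1013764 activates on bugs" then means: across many inputs,
    # that one index is reliably large on tokens involved in code errors,
    # e.g. feats = sae_features(act, W_enc, b_enc); feats[1013764] > threshold.
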

reply
Was this "paper" eventually peer reviewed?

PS: I know it is interesting and I don't doubt Anthropic, but to me it is fascinating that they get such a pass in science.

reply
Modern ML is old school mad science.

The lifeblood of the field is proof-of-concept pre-prints built on top of other proof-of-concept pre-prints.

reply
Sounds like you agree this “evidence” lacks any semblance of scientific rigor?
reply
(Not GP) There was a well-recognized reproducibility problem in the ML field before LLM-mania, and that's considering published papers with proper peer review. The current state of affairs is in some ways even less rigorous than that, and then some people in the field feel free to overextend their conclusions into other fields like neuroscience.
reply
> This feature fires on actual bugs; it's not just a model pattern matching saying "what a bug hunter may say next".

You don't think a pattern matcher would fire on actual bugs?

reply
Mechanistic interpretability is a joke, supported entirely by non-peer reviewed papers released as marketing material by AI firms.
reply
Some people are still stuck in the “stochastic parrot” phase and see everything regarding LLMs through that lens.
reply
Current LLMs do not think. Just because the repetitive actions a model loops through are anthropomorphized does not mean it is truly thinking or reasoning.

On the flip side, the idea that this is true has been a very successful indirect marketing campaign.

reply
What does “truly thinking or reasoning” even mean for you?

I don’t think we even have a coherent definition of human intelligence, let alone of non-human ones.

reply
Everyone knows to really think you need to use your fleshy meat brain, everything else is cheating.
reply
deleted
reply
While I agree, if you think that AI is just a text predictor, you are missing an important point.

Intelligence can be born of simple objectives, like next-token prediction. Predicting the next token with the accuracy it takes to answer some of the questions these models can answer requires complex "mental" models.

Dismissing it just because its algorithm is next-token prediction instead of "strengthen whatever circuit lights up" is missing the forest for the trees.

reply
You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that.
reply
This goes both ways. You can't assume capabilities based on impressions. Especially with LLMs, which are purpose-built to give the impression of producing language.

Also, designers of these systems appear to agree: when it was shown that LLMs can't actually do calculations, tool calls were introduced.

reply
It's true that they only give plausible sounding answers. But let's say we ask a simple question like "What's the sum of two and two?" The only plausible sounding answer to that will be "four." It doesn't need to have any fancy internal understanding or anything else beyond prediction to give what really is the same answer.

The same goes for a lot of bugs in code. The best prediction is often the correct answer, namely highlighting the error. Whether it can "actually find" the bugs—whatever that means—isn't really so important as whether or not it's correct.

reply
It becomes important the moment your particular bug looks typical on the surface but has a non-typical cause. In such cases you'll get nonsense which you need to ignore.

Again - they're very useful, as they give great answers based on someone else's knowledge and vague questions on the part of the user, but one has to remain vigilant and keep in mind this is just text presented to you to look as believable as possible. There's no real promise of correctness or, more importantly, critical thinking.

reply
100%. They're not infallible, but that's a different argument from "they can't find bugs in your code."
reply
[flagged]
reply
I use these tools and that's my experience.
reply
I think it all depends on the use case and a luck factor.

Sometimes I instruct copilot/claude to implement a feature (stretching its capabilities), and it does amazingly well. Mind you that this is front-end development, so probably one of the more ideal use cases. Bugfixing also goes well a lot of the time.

But other times, it really struggles, and in the end I have to write it by hand. This is for more complex or less popular things (In my case React-Three-Fiber with skeleton animations).

So I think experiences can vastly differ, and in my environment very dependent on the case.

One thing is clear: this AI revolution (deep learning) won't replace developers any time soon. And when the next revolution will take place is anyone's guess. I learned neural networks at university around 2000, and it was old technology then.

I view LLMs as "applied information", but not real reasoning.

reply
[flagged]
reply
Ok, I'll bite. Let's assume a modern cutting-edge model, even with fairly standard GQA attention, and obviously something bigger than just monosemantic features per neuron.

Based on any reasonable mechanistic interpretability understanding of this model, what's preventing a circuit/feature with polysemanticity from representing a specific error in your code?

---

Do you actually understand ML? Or are you just parroting things you don't quite understand?

reply
Polysemantic features in modern transformer architectures (e.g., with grouped-query attention) are not discretely addressable, semantically stable units but superposed, context-dependent activation patterns distributed across layers and attention heads, so there is no principled mechanism by which a single circuit or feature can reliably and specifically encode “a particular code error” in a way that is isolable, causally attributable, and consistently retrievable across inputs.

---

Way to go in showing you want a discussion, good job.

reply
Nice LLM generated text.

Now go read https://transformer-circuits.pub/2024/scaling-monosemanticit... or https://arxiv.org/abs/2506.19382 to see why that text is outdated. Or read any paper in the entire field of mechanistic interpretability (from the past year or two), really.

Hint: the first paper is titled "Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet" and you can ctrl-f for "We find three different safety-relevant code features: an unsafe code feature 1M/570621 which activates on security vulnerabilities, a code error feature 1M/1013764 which activates on bugs and exceptions"

Who said I want a discussion? I want ignorant people to STOP talking, instead of talking as if they knew everything.

reply
Your entire argument is derived from a pseudoscientific field without any peer-reviewed research. Mechanistic interpretability is a joke invented by AI firms to sell chatbots.
reply
Ok, let's chew on that. "Reasonable mechanistic interpretability understanding" and "semantic" are carrying a lot of weight. I think nobody understands what's happening in these models, irrespective of the narratives built from the pieces. On the macro level, everyone can see simple logical flaws.
reply
> I think nobody understands what's happening in these models

Quick question, do you know what "Mechanistic Interpretability Researcher" means? Because that would be a fairly bold statement if you were aware of that. Try skimming through this first: https://www.alignmentforum.org/posts/NfFST5Mio7BCAQHPA/an-ex...

> On the macro level, everyone can see simple logical flaws.

Your argument applies to humans as well. Or are you saying humans can't possibly understand bugs in code because they make simple logical flaws as well? Does that mean the existence of the Monty Hall Problem shows that humans cannot actually do math or logical reasoning?

reply
> do you know what "Mechanistic Interpretability Researcher" means? Because that would be a fairly bold statement if you were aware of that.

The mere existence of a research field is not proof of anything except "some people are interested in this". It certainly doesn't imply that anyone truly understands how LLMs process information, "think", or "reason".

As with all research, people have questions, ideas, theories and some of them will be right but most of them are bound to be wrong.

reply
Your brain is a slab of wet meat, not a logic engine. It can't find actual flaws in your code - it's just half-decent at pattern recognition.
reply
That is not exactly true. The brain does a lot of things that are not "pattern recognition".

Simpler, more mundane (not exactly, still incredibly complicated) stuff like homeostasis or motor control, for example.

Additionally, our ability to plan ahead and simulate future scenarios often relies on mechanisms such as memory consolidation, which are not part of the whole pattern recognition thing.

The brain is a complex, layered, multi-purpose structure that does a lot of things.

reply
It's pattern recognition all the way down.
reply
> In the end it will be the users sculpting formal systems like playdoh.

I’m very skeptical of this unless the AI can manage to read and predict emotion and intent based on vague natural language. Otherwise you get the classic software problem of “What the user asked for directly isn’t actually what they want/need.”

You will still need at least some experience with developing software to actually get anything useful. The average “user” isn’t going to have much success for large projects or translating business logic into software use cases.

reply
I love this optimistic take.

Unfortunately, I believe the following will happen: By positioning themselves close to law makers, the AI companies will in the near future declare ownership of all software code developed using their software.

They will slowly erode their terms of service, as happens to most internet software, step by step, until they claim total ownership.

The point is to license the code.

reply
> AI companies will in the near future declare ownership of all software code developed using their software.

(X) Doubt

Copyright law is WEEEEEEIRRRDD and our in-house lawyer is very much into that, personally and professionally. An example they gave us during a presentation:

A monkey took a selfie of itself in 2011. We still don't know who has the copyright to that image: https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

IIRC the latest resolution is "it's not the monkey", but nobody has ruled the photographer has copyright either. =)

Copyright law has this thing called "human authorship" that's required to apply copyright to a work. Animals and machines can't have a copyright to anything.

A second example: https://en.wikipedia.org/wiki/Zarya_of_the_Dawn

A comic generated with Midjourney had its copyright revoked when it was discovered all of the art was done with Generative AI.

AI companies have absolutely mindboggling amounts of money, but removing the human authorship requirement from copyright is beyond even them in my non-lawyer opinion. It would bring the whole system crashing down and not in a fun way for anyone.

reply
AFAIK you can't copyright AI generated content. I don't know where that gets blurry when it's mixed in with your own content (ie, how much do you need to modify it to own it), but I think that by that definition these companies couldn't claim your code at all. Also, with the lawsuit that happened to Anthropic where they had to pay billions for ingesting copyrighted content, it might actually end up working the other way around.
reply
> the AI companies will in the near future declare ownership of all software code developed using their software.

Pretty sure this isn’t going to happen. AI is driving the cost of software to zero; it’s not worth licensing something that’s a commodity.

It’s similar to 3D printing companies. They don’t have IP claims on the items created with their printers.

The AI companies currently don’t have IP claims on what their agents create.

Uncle Joe won’t need to pay OpenAI for the solitaire game their AI made for him.

The open source models are quite capable; in the near future there won’t be a meaningful difference for the average person between a frontier model and an open source one for most uses including creating software.

reply
1. Commodities are huge business.

2. Show me these open source models that cost me $20/month to operate, because that’s what I pay for ChatGPT/Claude.

3. This is not at all similar to “3D printing”.

4. Nobody cares about some solitaire game

reply
This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

And that there is little value in reusing software initiated by others.

reply
> This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

I think there are people who want to use software to accomplish a goal, and there are people who are forced to use software. The people who only use software because the world around them has forced it on them, either through work or friends, are probably cognitively excluded from building software.

The people who seek out software to solve a problem (I think this is most people) and compare alternatives to see which one matches their mental model will be able to skip all that and just build the software they have in mind using AI.

> And that there is little value in reusing software initiated by others.

I think engineers greatly over-estimate the value of code reuse. Trying to fit a round peg in a square hole produces more problems than it solves. A sign of an elite engineer is knowing when to just copy something and change it as needed rather than call into it. Or to re-implement something because the library that does it is a bad fit.

The only time reuse really matters is in network protocols. Communication requires that both sides have a shared understanding.

reply
> The only time reuse really matters is in network protocols. Communication requires that both sides have a shared understanding.

A lot of things are like network protocols. Most things require communication. External APIs, existing data, familiar user interfaces, contracts, laws, etc.

Language itself (both formal and natural) depends on a shared understanding of terms, at least to some degree.

AI doesn't magically make the coordination and synchronisation overhead go away.

Also, reusing well debugged and battle tested code will always be far more reliable than recreating everything every time anything gets changed.

reply
Even within a single computer or program, there is need for communication protocols and shared understanding - such as types, data schema, function signatures. It's the interface between functions, programs, languages, machines.

It could also be argued that "reuse" doesn't necessarily mean reusing the actual code as material, but reusing the concepts and algorithms. In that sense, most code is reuse of some previous code, written differently every time but expressing the same ideas, building on prior art and history.

That might support GP's comment that "code reuse" is overemphasized, since the code itself is not what's valuable, what the user wants is the computation it represents. If you can speak to a computer and get the same result, then no code is even necessary as a medium. (But internally, code is being generated on the fly.)

reply
I think we shouldn't get too hung up on specific artifacts.

The point is that specifying and verifying requirements is a lot of work. It takes time and resources. This work has to be reused somehow.

We haven't found a way to precisely specify and verify requirements using only natural language. It requires formal language. Formal language that can be used by machines is called code.

So this is what leads me to the conclusion that we need some form of code reuse. But if we do have formal specifications, implementations can change and do not necessarily have to be reused. The question is why not.

reply
This reframes the whole conversation. If implementations are cheap to regenerate, specifications become the durable artifact.

Something like TLA+ model checking lets you verify that a protocol maintains safety invariants across all reachable states, regardless of who wrote the implementation. The hard part was always deciding what "correct" means in your specific domain.

Most teams skip formal specs because "we don't have time." If agents make implementations nearly free, that excuse disappears. The bottleneck shifts from writing code to defining correctness.
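
To make "defining correctness" concrete: the flavor of check a model checker does is exhaustive enumeration of reachable states against a safety invariant. Here's a toy version in Python rather than TLA+ (the two-process protocol and all names are invented for illustration; TLC does this over a real spec, not hand-rolled successor functions):

    from collections import deque

    def next_states(state):
        # state = (pc0, pc1), each pc in {'idle', 'waiting', 'critical'}
        succs = []
        for i in (0, 1):
            pcs = list(state)
            other = pcs[1 - i]
            if pcs[i] == 'idle':
                pcs[i] = 'waiting'
            elif pcs[i] == 'waiting' and other != 'critical':
                pcs[i] = 'critical'
            elif pcs[i] == 'critical':
                pcs[i] = 'idle'
            else:
                continue          # waiting while the other is critical: blocked
            succs.append(tuple(pcs))
        return succs

    def mutual_exclusion(state):
        # The safety invariant: both processes are never critical at once.
        return state != ('critical', 'critical')

    def check(init):
        # Breadth-first enumeration of every reachable state.
        seen, queue = {init}, deque([init])
        while queue:
            s = queue.popleft()
            if not mutual_exclusion(s):
                return 'invariant violated in state %r' % (s,)
            for t in next_states(s):
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
        return 'invariant holds over %d reachable states' % len(seen)

    print(check(('idle', 'idle')))

The spec (next_states plus the invariant) is the durable part; the implementation behind it can be regenerated as often as you like.
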

reply
> I think there are people who want to use software to accomplish a goal, and there are people who are forced to use software.

Typically people feel they're "forced" to use software for entirely valid reasons, such as said software being absolutely terrible to use. I'm sure that most people like using software that they feel like actually helps rather than hinders them.

reply
> The only time reuse really matters is in network protocols.

And long-term maintenance. If you use something, you have to maintain it. It's much better if someone else maintains it.

reply
> I think engineers greatly over-estimate the value of code reuse[...]The only time reuse really matters is in network protocols.

The whole idea of an OS is code reuse (and resource management). No need to set up the hardware to run your application. Then we have a lot of foundational subsystems like graphics, sound, input,... Crafting such subsystems and the associated libraries is hard and requires a lot of design thinking.

reply
There is a balance. Some teams take DRY too far.
reply
Which is why we should always just write and train our own LLMs.

I mean it’s just software right? What value is there in reusing it if we can just write it ourselves?

reply
Every internal piece of software you write is a potentially-infinite money sink of training
reply
no but if the old '10x developer' is really 1 in 10 or 1 in 100, they might just do fine while the rest of us, average PHP enjoyers, may fall by the wayside
reply
> This assumes every individual is capable of succinctly communicating to the AI what they want. And the AI is capable of maintaining it as underlying platforms and libraries shift.

It's true that at first not everyone is equally efficient, but I'd be lying if I were to claim that someone needs a 4-year degree to communicate with LLMs.

reply
LLM technology has no connection with reality, nor any avenue to actual understanding.

Correcting conceptual errors requires understanding.

Vomiting large amounts of inscrutable unmaintainable code for every change is not exactly an ideal replacement for a human.

We have not started to scratch the surface of the technical debt created by these systems at lightning speed.

reply
> We have not started to scratch the surface of the technical debt created by these systems at lightning speed.

Bold of you to assume anyone cares about it. Or that it’ll somehow guarantee your job security. They’ll just throw more LLMs on it.

reply
> We pay huge communication/synchronization costs to eke out mild speed ups on projects by adding teams of people.

Something Brooks wrote about 50 years ago, and the industry has never fully acknowledged. Throw more bodies at it, be they human bodies or bot agent bodies.

reply
The point of the mythical man month is not that more people are necessarily worse for a project, it's just that adding them at the last minute doesn't work, because they take a while to get up to speed and existing project members are distracted while trying to help them.

It's true that a larger team, formed well in advance, is also less efficient per person, but they still can achieve more overall than small teams (sometimes).

reply
Interesting point. And from the agent's point of view, it's always joining at the last minute, and doesn't stick around longer than its context window. There's a lesson in there maybe…
reply
The context window is the onboarding period. Every invocation is a new hire reading the codebase for the first time.

This is why architecture legibility keeps getting more important. Clean interfaces, small modules, good naming. Not because the human needs it (they already know the codebase) but because the agent has to reconstruct understanding from scratch every single time.

Brooks was right that the conceptual structure is the hard part. We just never had to make it this explicit before.

reply
But there is an order of magnitude difference between coordinating AI agents and humans - the AIs are so much faster and more consistent than humans that you can (as Steve Yegge [0] and Nicholas Carlini [1] showed) have them build a massive project from scratch in a matter of hours and days rather than months and years. The coordination cost is so much lower that it's just a different ball game.

[0] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

[1] https://www.anthropic.com/engineering/building-c-compiler

reply
Then why aren’t we seeing orders of magnitude more software being produced?
reply
I think we are. There's definitely been an uptick in "show HN" type posts with quite impressively complex apps that one person developed in a few weeks.

From my own experience, the problem is that AI slows down a lot as the scale grows. It's very quick to add extra views to a frontend, but struggles a lot more in making wide reaching refactors. So it's very easy to start a project, but after a while your progress slows significantly.

But given I've developed 2 pretty functional full stack applications in the last 3 months, which I definitely wouldn't have done without AI assistance, I think it's a fair assumption that lots of other people are doing the same. So there is almost certainly a lot more software being produced than there was before.

reply
I think the proportion of new software that is novel has absolutely plummeted after the advent of AI. In my experience, generative AI will easily reproduce code for which there are a multitude of examples on GitHub, like TODO CRUD React Apps. And many business problems can be solved with TODO CRUD React Apps (just look at Excel’s success), but not every business problem can be solved by TODO CRUD React Apps.

As an analogy: imagine if someone was bragging about using Gen AI to pump out romantasy smut novels that were spicy enough to get off to. Would you think they’re capable of producing the next Grapes of Wrath?

reply
> I think the proportion of new software that is novel has absolutely plummeted after the advent of AI.

We were not awash in novel software before AI (say, last decade, in 2019).

I can only assume what you're really trying to say is "AI bad".

reply
Didn't we have a post the other day saying that the number of "Show HN" posts is skyrocketing?

https://news.ycombinator.com/item?id=47045804

reply
This question remains the 900-pound gorilla of this discussion
reply
Claude Code was released just over a year ago, and agentic coding came into its own maybe in May or June of last year. Maybe give it a minute?
reply
It’s been a minute and a half and I don’t see the evidence that you can task an agent swarm to produce useful software without your input or review. I’ve seen a few experiments that failed, and I’ve seen manic garbage, but not yet anything useful outside of the agent operator’s imagination.
reply
Agent swarms are what, a couple of months old? What are you even talking about. Yes, people/humans still drive this stuff, but if you think there isn't useful software out there that can be handily implemented with current gen agents that need very little or no review, then I don't know what to tell you, apart from "you're mistaken". And I say that as someone who uses three tools heavily but has otherwise no stake in them. The copium in this space is real. Everyone is special and irreplaceable, until another step change pushes them out.
reply
The next thing after agent swarms will be swarm colonies, and people will go "it's been a month since agentic swarm colonies, give it a month or two". People have been moving the goal posts like that for a couple years now, and it's starting to grow stale. This is like self-driving cars, which were going to be working in 2016 and replace 80% of drivers by 2017, all over again. People are falling for hype instead of admitting that while it appears somewhat useful, nobody has any clue whether it's 97% useful or just 3% useful, but so far it's looking like the latter.
reply
I generally agree, but counterpoint: Waymo is successfully running robocabs in many cities today.
reply
When does it come to Mumbai?
reply
They're launching in London this year. So... 2035?
reply
I would love to see this in Mumbai or Dhaka or something like that, just like thrown in there. Can it move 2 meters without stopping?

Don't get me wrong, I like Waymo, but 2035 is probably realistic for cities in developing countries.

reply
The whole point is that an agent swarm doesn’t need a month, supposedly.
reply
We're talking about whether the human users have caught up with usage of tech, not the speed of the tech itself.
reply
Why do you assume there isn't?

Enterprise (+API) usage of LLMs has continued to grow exponentially.

reply
I work for one of those enterprises with lots of people trying out AI (thankfully leadership is actually sane, no mandates that you have to use it, just giving devs access to experiment with the tools and see what happens). Lots of people trying it out in earnest, lots of newsletters about new techniques and all that kinda stuff. Lots of people too, so there's all sorts of opinions from very excited to completely indifferent.

Precisely 0 projects are making it out any faster or (IMO more importantly) better. We have a PR review bot clogging up our PRs with fucking useless comments, rewriting the PR descriptions in obnoxious ways, that basically everyone hates and is getting shut off soon. From an actual productivity POV, people are just using it for a quick demo or proof of concept here and there before actually building the proper thing manually as before. And we have all the latest and greatest techniques, all the AGENTS.mds and tool calling and MCP integrations and unlimited access to every model we care to have access to and all the other bullshit that OpenAI et al are trying to shove on people.

It's not for a lack of trying; plenty of people are trying to make any part of it work, even if it's just to handle the truly small stuff that would take 5 minutes of work but is just tedious and small enough to be annoying to pick up. It's just not happening. Even with extremely simple tasks (that IMO would be better off with a dedicated, small deterministic script) we still need human oversight because it often shits the bed regardless, so the effort required to review things is equal to or often greater than just doing the damn ticket yourself.

My personal favorite failure is when the transcript bots just... don't transcribe random chunks of the conversation, which can often lead to more confusion than if we just didn't have anything transcribed. We've turned off the transcript and summarization bots, because we've found 9/10 times they're actively detrimental to our planning and lead us down bad paths.

reply
I built a code reviewer based on the claude code sdk that integrates with gitlab, pretty straightforward. The hard work is in the integration, not the review itself. That part is taken care of by the SDK.

Devs, even conservative ones, like it. I’ve built a lot of tooling in my life, but I never had the experience of devs reaching out to me that fast because it was ‘broken’. (Expired token or a bug for huge MRs)

reply
It doesn't appear to have improved the quality of the software we have either.
reply
we are. you can check App Store releases YoY, it's skyrocketing.
reply
I have barely downloaded any apps in the last 5-10 years except some necessary ones like bank apps etc. Who even needs that garbage? Steam also has tons of games but 80% make like no money at all and no one cares. Just piles of garbage. We already have limited hours per day, and those are not really increasing, so I wonder where the users are.
reply
Here’s a talk about leaning into the garbage flow. And that was a decade ago.

https://youtu.be/E8Lhqri8tZk

I can’t imagine the number being economically meaningful now.

reply
"The future is already here, it's just not evenly distributed"
reply
> But there is an order of magnitude difference between coordinating AI agents and humans

And yet, from https://news.ycombinator.com/item?id=47048599

> One of the tips, especially when using Claude Code, is explicitly ask to create "tasks", and also use subagents. For example I want to validate and re-structure all my documentation - I would ask it to create a task to research state of my docs, then after create a task per specific detail, then create a task to re-validate quality after it has finished task.

Which sounds pretty much the same as how work is broken down and handed out to humans.

reply
Yes, but you can do this at the top level, and then have AI agents do this themselves for all the low level tasks, which is then orders of magnitude faster than with human coordination.
reply
Communication overhead between humans is real, but it's not just inefficiency, it's also where a lot of the problem-finding happens. Many of the biggest failures I've seen weren't because nobody could type the code fast enough, but because nobody realized early enough that the thing being built was wrong, brittle or solving the wrong problem
reply
> Many of the biggest failures I've seen weren't because nobody could type the code fast enough, but because nobody realized early enough that the thing being built was wrong, brittle or solving the wrong problem

Around 99% of the biggest failures come from absent, shitty management prioritizing next quarter over long-term strategy. YMMV.

reply
> There's an undertone of self-soothing "AI will leverage me, not replace me",

Which is especially hilarious given that this article is largely or entirely LLM-generated.

reply
Everybody in the world is now a programmer. This is the miracle of artificial intelligence.

- Jensen Huang, February 2024

https://www.techradar.com/pro/nvidia-ceo-predicts-the-death-...

reply
God help us!

Far from everyone is cut out to be a programmer; the technical barrier was a feature if anything.

There's a kind of mental discipline and ability to think long thoughts, to deal with uncertainty; that's just not for everyone.

What I see is mostly everyone and their gramps drooling at the idea of faking their way to fame and fortune. Which is never going to work, because everyone is regurgitating the same mindless crap.

reply
Remember when Visual Basic was making everyone a programmer too?

(btw, warm fuzzies for VB since that's what I learned on! But ultimately, those VB tools business people were making were:

1) Useful, actually!

2) Didn't replace professional software. Usually it'd hit a point where if it needed to evolve past its initial functionality it probably required an actual software developer. (IE, not using Access as a database and all the other eccentricities of VB apps at that time)

reply
The problem I mostly see with non programmers is that they don't really grasp the concept of a consistent system.

A lot of people want X, but they also want Y, while clearly X and Y cannot coexist in the same system.

reply
This looks like the same problem as when the first page layout software came out.

It looked to everyone like a huge leap into a new world. Word processing applications could basically only move blocks of text around to be output later, maybe with a few font tags; then this software came out that, wow, actually showed the different fonts, sizes, and colors on the screen as you worked! With apps like "Pagemaker" everyone would become their own page designer!

It turned out that everyone just turned out floods of massively ugly documents and marketing pieces that looked like ransom notes pasted together from bits of magazines. Years of awfulness.

The same is happening now as we are doomed to endure years of AI slop in everything from writing to apps to products to vending machines and entire companies — everyone and their cousin is trying to fully automate it.

Ultimately it does create an advance and allows more and better work to be done, but only for people who have a clue about what they are doing, and eventually things settle at a higher level where the experts in each field take the lead.

reply
> it will be the users sculpting formal systems like playdoh.

People are pushing back against this phrase, but on some level it seems perfect, it should be visualized and promoted!

reply
I think Lego is a better analogy. LLMs aren't great at working on novel cutting edge problems.
reply
Well, without the self soothing I think what's left is pitchforks.
reply
Maybe it's time for pitchforks.
reply
Pitchforks are not going to scare Robocop.
reply
> AI will leverage me

I think I know what you mean, and I do recall once seeing "this experience will leverage me" as indicating that something will be good for a person, but my first thought when seeing "x will leverage y" is that x will step on top of y to get to their goal, which does seem apt here.

reply
How does a single human acquire said "good taste" for architecting?
reply
> In the end it will be the users sculpting formal systems like playdoh.

Yet another person who thinks there is a silver bullet for complexity. The mythical intelligent machine that can erect flawless complex systems from poorly described natural language is like the philosopher's stone of our time.

reply
I'm rounding the corner on a ground-up reimplementation of `nix` in what is now about 34 hours of wall clock time. I have almost all of it on `wf-record`, I'll post a stream, but you can see the commit logs here: https://github.com/straylight-software/nix/tree/b7r6/correct...

Everyone has the same ability to use OpenRouter, I have a new event loop based on `io_uring` with deterministic playbook modeled on the Trinity engine, a new WASM compiler, AVX-512 implementations of all the cryptography primitives that approach theoretical maximums, a new store that will hit theoretical maximums, the first formal specification of the `nix` daemon protocol outside of an APT, and I'm upgrading those specifications to `lean4` proof-bearing codegen: https://github.com/straylight-software/cornell.

34 hours.

Why can I do this and no one else can get `ca-derivations` to work with `ssh-ng`?

reply
And it's teachable.

Here's a colleague who is nearly done with a correct reimplementation of the OpenCode client/server API: https://github.com/straylight-software/weapon-server-hs

Here's another colleague with a Git forge that will always work and handle 100x what GitHub does per infrastructure dollar, while including stacked diffs and Jujutsu support as native, in about 4 days: https://github.com/straylight-software/strayforge

Here's another colleague and a replacement for Terraform that is well-typed in all cases and will never partially apply an infrastructure change in about 4 days: https://github.com/straylight-software/converge

Here's the last web framework I'll ever use: https://github.com/straylight-software/hydrogen

That's all *begun* in the last 96 hours.

This is why: https://github.com/straylight-software/.github/blob/main/pro...

reply
/tangent i've always liked the word "straylight". I used to run a fansite for a local band and the site was called straylight6. This was maybe 20 years ago.
reply
Please check your links, 3/7 don't work and it's the most interesting ones.
reply
ah, not my place to early launch my colleague's work, my bad.

keep an eye on https://straylight.software, it'll all be there extremely soon. well, everything i mentioned, which is different than all of it. :)

reply
I mean, have you tried getting `ca-derivations` to work with `ssh-ng`? That sounds like a good way to answer your own question.
reply
I have ca-derivations working with ssh-ng.

It's a fairly hairy patch and now the broken ass eval cache breaks more.

I'm fixing it all. Read the fucking repo friend, it's biblical.

reply
> I would rather a single human (for now) architect with good taste and an army of agents than a team of humans.

A human might have taste, but AI certainly doesn't.

reply
It has average taste based on the code it was trained on. For example, every time I attempted to polish the UX it wanted to add a toast system; I abhor toasts as a UX pattern. But it also provided elegant backend designs I hadn't even considered.
reply
I’d say AI has better taste than an average human but definitely not the taste you would see in competent people around you.
reply
Well of course. In the long run AI will do almost all tasks that can be done from a computer.
reply
> especially in the long run, at least in software

"at least in software".

Before that happens, the world as we know it will already have changed so much.

Programmers have already automated many things, way before AI, and now they've got a new tool to automate even more thing. Sure in the end AI may automate programmers themselves: but not before oh-so-many people are out of a job.

A friend of mine is a translator: translation tolerates approximation. Translation tolerates some level of bullshittery. She gets maybe 1/10th the work she used to get and she's now in trouble. My wife now does all her SMEs' websites by herself, with the help of AI tools.

A friend of my wife is a junior lawyer (another domain where bullshitting flies high), and the reason why she was kicked out of her company: "we've replaced you with LLMs". LLMs are the ultimate bullshit producers: so it's no surprise junior lawyers are now having a hard time.

In programming a single character is the difference between a security hole or no security hole. There's a big difference between something that kinda works but is not performant and insecure and, say, Linux or Git or K8s (which AI models do run on and which AI didn't create).

The day programmers are replaced shall only come after AI shall have disrupted so many other jobs that it should be the least of our concerns.

Translators, artists (another domain where lots of approximative full-on bullshit is produced), lawyers (juniors at least) even, are having more and more problems due to half-arsed AI outputs coming after their jobs.

It's all the bullshitty jobs where bullshit that tolerates approximation is the output that are going to be replaced first. And the world is full of bullshit.

But you don't fly a 767 and you don't conceive a machine that treats brain tumors with approximations. This is not bullshit.

There shall be non-programmers with pitchforks burning datacenters or ubiquitous UBI way before AI shall have replaced programmers.

That it's an exoskeleton for people who know what they're doing rings very true: it's yet another superpower for devs.

reply
> We pay huge communication/synchronization costs to eke out mild speed ups on projects by adding teams of people.

I am surprised at how little this is discussed and how little urgency there is in fixing this if you still want teams to be as useful in the future.

Your standard agile ceremonies were always kind of silly, but it can now take more time to groom work than to do it. I can plausibly spend more time scoring and scoping work (especially trivial work) than doing the work.

reply
It's always been like that. Waterfall development was worse and that's why the Agilists invented Agile.

YOLOing code into a huge pile at top speed is always faster than any other workflow at first.

The thing is, a gigantic YOLO'd code pile (fake it till you make it mode) used to be an asset as well as a liability. These days, the code pile is essentially free - anyone with some AI tools can shit out MSLoCs of code now. So it's only barely an asset, but the complexity of longer term maintenance is superlinear in code volume so the liability is larger.

reply