It’s similar to how doing math in natural language without math notation is cumbersome and error-prone.
Using a formal language also helps you enter a kind of flow. And then details you did not think about before using the formal language may appear. Not everything can be prompted, just like Alex Honnold prepared his climb of El Capitan very carefully but it was only when he was on the rock that he made the real decisions. Same for Lindbergh when he crossed the Atlantic. The map is not the territory.
If you invent a formal language that is easy to read and easy to write, it may look like Python... Then someone will probably write an interpreter.
We have many languages, and senior people who know how to use them, who enjoy coding, and who don't have a "lack of productivity" problem. I don't feel the need to throw away everything we have to embrace what is supposed to be "the future". And since we need good devs to read and review LLM-generated code, how do we remain good devs if we don't write code anymore? What's the point of being up to date in language X if we don't write code? Remaining good at something without doing it is a mystery to me.
"A sufficiently detailed specification is code"
If you are thinking through deterministic code, you are thinking through the manipulation of bits in hardware. You are just doing it in a language which is easier for humans to understand.
There is a direct mapping of intent.
No I'm not. If I want the machine to evaluate 2+2, I don't know or care what bits in hardware it uses to do that (as long as it doesn't run out of memory), I just want the result to come back as 4.
When you think through what will happen as a result of deterministic code, you are also thinking through what the bits will do, albeit at a higher level of abstraction.
When you ask an LLM to do something, you have no guarantee that the intent you provide is accurately translated, and you have no guarantee you’ll get the result you want. If you want your answer to 2+2 to always be 4, you shouldn’t use a nondeterministic LLM. To get that guarantee, the bit manipulation a machine does needs to be logically equivalent to the way you evaluate the question.
That doesn’t mean you can’t minimize intent distortion or cognitive debt while using LLMs, or that you can’t think through the logic of whatever problem you’re dealing with in the same structured way a formal language forces you to while using them. But one of my pet peeves is comparing LLMs to compilers. The nondeterminism of LLMs and lack of logical rigidity makes them fundamentally different.
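A minimal sketch of the guarantee at stake (the tiny `evaluate` interpreter is purely illustrative, not anyone's real system):

```python
def evaluate(expr: str) -> int:
    # a toy deterministic interpreter for "a+b" expressions
    a, b = expr.split("+")
    return int(a) + int(b)

# Same input, same output, every single time -- that is the contract
# a compiler or interpreter gives you.
assert all(evaluate("2+2") == 4 for _ in range(1000))

# An LLM sampled with temperature > 0 offers no equivalent promise:
# the mapping from prompt to output is not a function of the input alone.
```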
The whole purpose of an abstraction is to not have to look underneath it to make sure what you did with the abstraction is still correct. You can be sure because you, or someone you trust, did the work of paying that debt once.
With LLMs you always need to verify the output; you pay that debt again for every generation. So it is not an abstraction.
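One way to make the distinction concrete (the `untrusted_sort` stand-in is hypothetical, a sketch of a nondeterministic generator, not a real API):

```python
import random

def trusted_sort(xs):
    # a real abstraction: the contract of sorted() was verified once,
    # by people we trust, so we don't re-inspect it on every call
    return sorted(xs)

def untrusted_sort(xs):
    # hypothetical stand-in for a nondeterministic generator:
    # plausible-looking output, but no contract
    out = list(xs)
    random.shuffle(out)
    return out

def is_sorted_copy(xs, candidate):
    # with an untrusted source, this check must run on every output --
    # the verification debt is paid per generation, not once
    return candidate == sorted(xs)

xs = [3, 1, 2]
assert is_sorted_copy(xs, trusted_sort(xs))    # holds by contract
# untrusted_sort(xs) must be checked every single time it is used
```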
"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise." -- Edsger Dijkstra
The interpreter is deterministic but LLMs aren't.
Reading the Hacker News comments, I kept thinking that programming is fundamentally about building mental models, and that the market, in the end, buys my mental model.
If we start from human intent, the chain might look something like this:
human intent -> problem model -> abstraction -> language expression -> compilation -> change in hardware
But abstraction and language expression are themselves subdivided into many layers. How many of those layers a programmer can afford not to know has a direct effect on that programmer’s position in the market. People often think of abstraction as something clean, but in reality it is incomplete and contextual. In theory it is always clean; in practice it is always breaking down.
Depending on which layer you live in, even when using the same programming language, the form of expression can become radically different. From that point of view, people casually bundle everything together and call it “abstraction” or “intent,” but in reality there is a gap between intent and abstraction, and another gap between abstraction and language expression. Those subtle friction points are not fully reducible.
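A toy sketch of the same intent expressed at two different layers (everything here is illustrative):

```python
# The intent is identical: "sum these numbers."
nums = [1, 2, 3, 4]

# High level: the intent maps almost directly onto the language.
total_hi = sum(nums)

# Lower level: the same intent as explicit state mutation, one step
# closer to what the hardware eventually does.
total_lo = 0
for n in nums:
    total_lo += n

assert total_hi == total_lo == 10
```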
Seen from that perspective, even if you write a very clear specification, there will always be something that does not reduce neatly. And perhaps the real difference between LLMs and humans lies in how they deal with that residue.
Martin frames the issue in a way that suggests LLM abstractions are bad, but I do not fully agree. As someone from a third-world country in Asia, I have seen a great deal of bad abstraction written in my own language and environment. In that sense, I often feel that LLM-generated code is actually much better than the average abstractions produced by my Asian peers. At the same time, when I look at really good programming from strong Western engineers, I find myself asking again what a good abstraction actually is.
The essay talks about TDD and other methodologies, but personally I think TDD can become one of the worst methodologies when the abstraction itself is broken. If the abstraction is wrong, do the tests really mean anything? I have seen plenty of cases where people kept chasing green tests while gradually destroying the architecture. I have seen this especially in systems involving databases.
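An illustration of how a suite can stay green while the abstraction rots (the discount example and its names are invented for this sketch):

```python
# A green test validates only the cases it names, not the design.
def price_after_discount(price, code):
    # the "abstraction" here is a growing if-chain, not a discount model
    if code == "SAVE10":
        return price * 0.9
    if code == "SAVE20":
        return price * 0.8
    return price

assert price_after_discount(100, "SAVE10") == 90
assert price_after_discount(100, "SAVE20") == 80
# All green -- yet each new discount code hard-wires another branch,
# and the tests will keep passing while the structure erodes.
```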
The biggest problem with methodology is that it always tends to become dogma, as if it were something that must be obeyed. SOLID principles, for example, do not always need to be followed, but in some organizations they become almost religious doctrine. In UI component design, enforcing LSP too rigidly can actually damage the diversity and flexibility of the UI. In the end, perhaps what we call intent is really the ability to remain flexible in context and search for the best possible solution within that context.
From that angle, intent begins to look a lot like the reward-function-based learning of LLMs.
That being said, even model-based design (MBD) has largely been a failure, despite it being about mapping formal models to (formal-language) program code.
There is the famous bowling game TDD example, where the result doesn't have a Frame object and the authors argue they proved you don't need one. That is wrong, though: the example took just a couple of hours, and there is nothing in a two-hour program so bad that you will regret it. If you were doing a real bowling system with pin setters, support for 50 lanes, and a bunch of other things that I, who don't work in that area, don't even know about, you would find places to regret things.
What I meant is that, like any powerful tool, there are situations where it shouldn't be used.
Thanks for the thoughtful comment.
It’s easier to keep the balance by keeping everything simple and maintaining good hygiene in your codebase.
I agree! You often see this realized when projects slowly migrate to using more and more ctypes code to try and back out of that pit.
In a previous job, a project was spun up using Python because it was easier and the performance requirements weren't understood at that time. A year or two later it had become a bottleneck for tapeout, and when it was rewritten, most of the abstract architecture was thrown out with it, since it was all Pythonic in a way that required a different approach in C++.