It's "9 women can't make a baby in one month".
It still takes roughly nine months to make a human baby, regardless of how many women or babies are involved!
On paper, your CPU can execute at least one instruction per core per cycle, but that's a throughput average too: if you actually only have one instruction to run, it takes several cycles of latency to complete.
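The latency-vs-throughput distinction above can be sketched with a toy pipeline model (the 4-cycle latency and 1-per-cycle issue rate are illustrative assumptions, not the numbers for any real CPU):

```python
# Latency vs. throughput: a pipelined unit can *start* one instruction per
# cycle (throughput), but each individual instruction still takes several
# cycles to *finish* (latency). Numbers below are assumptions for illustration.
LATENCY = 4      # cycles for one instruction to complete (assumed)

def cycles_for(n_independent_ops: int) -> int:
    """Total cycles to finish n independent instructions on one pipeline."""
    if n_independent_ops == 0:
        return 0
    # The first result appears after LATENCY cycles; each later one finishes
    # one cycle behind the previous, because they overlap in the pipeline.
    return LATENCY + (n_independent_ops - 1)

print(cycles_for(1))    # a single instruction pays the full latency: 4
print(cycles_for(100))  # 100 overlapped ops average ~1 cycle each: 103
```

So "one instruction per cycle" only holds when there is enough independent work in flight to keep the pipeline full — much like the nine-women observation.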
Also, you can get a baby tonight if you steal one from the maternity ward.
The real question is how LLMs turn the mythical man-month on its head. If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that 9 women can't make a baby in 1 month, because they're AI, not human, and communicate in a different way?
The pitfall of AI coding is that every shiny tangent that was previously just a distraction is now a rabbit hole to be leaped into for an afternoon, if you feel like it. It's like that ancient Chinese curse: may you live in interesting times. Everybody can recreate an MVP of Twitter in a weekend now, when previously that was just a claim a certain type of person made.
> The nearest related Chinese expression translates as "Better to be a dog in times of tranquility than a human in times of chaos."
https://en.wikipedia.org/wiki/May_you_live_in_interesting_ti...
There's a good point in here along the lines of "if you need X in a month, and someone else has something that's 90% of what you want X to be, can you buy it from them before starting any crazy internal death marches instead?"
> The real question is how LLMs turn the mythical man-month on its head. If we accept AI-generated code, can an agentic AI swarm make software faster simply by parallelizing, in a way that 9 women can't make a baby in 1 month, because they're AI, not human, and communicate in a different way?
This is quite possibly only a one-time shift from a changed baseline, though. Give it a few years and "the fastest way an LLM tool can do it" will be what gets tossed out as an estimate, and stakeholders will still want you to do it in a tenth the time...
As far as I know, all women everywhere start not pregnant
we learn by doing
If you're not coding anymore, but using AI tools, you're developing skills in using those AI tools, and your code abilities will atrophy unless exercised elsewhere.
[1] Depending on the topic and the level of knowledge of it.
2. Don’t assume you’re the next Mozart. Someone is; statistically, it’s not you.
Take juggling for example - something that was on the HN homepage last week. You can learn everything you need to know about juggling through a post or a book or an educational video. But can you juggle after all that book learning? Not at all - to be able to juggle one has to spend time practicing, and no amount of reading can meaningfully compress that process.
Muscle memory required for juggling is not a 1:1 correlation to experience, but I feel like it's close enough to it.
I do think that these pieces sometimes smuggle in a nostalgic picture of how engineers "really" learn which has only ever been partly true.
Also could be shortened to "IA, not AI", and gets even more fun when you translate it to Spanish: "AI, no IA".
The problem is that it was coined so early that we are way past the aphorism stage now.
If you asked 100 Americans what this aphorism means, I strongly doubt a single one could capture McLuhan's original meaning.
https://publichealthpolicyjournal.com/mit-study-finds-artifi...
I think it means something like we're trapped in the constraints of the medium. Tweets say more about the environment of twitter than whatever message happened to be sent.
but I think I'm off on that; I'll look this person up and find out!
Firstly, Twitter has an upper bound on the complexity of thoughts it can carry due to its character limit (historically 140, now somewhat longer but still too short).
Secondly, a biased or partial platform constrains and filters the messages that are allowed to be carried on it. This was Chomsky's basic observation in Manufacturing Consent where he discussed his propaganda model and the four "filters" in front of the mass media.
Finally, social media has turned "show business [into] an ordinary daily way of survival. It's called role-playing." [0] The content and messages disseminated by online personas and influencers are not authentic; they do not even originate from a real person, but a "hyperreal" identity (to take language from Baudrillard) [0]:
> You are just an image on the air. When you don't have a physical body, you're a _discarnate being_ [...] and this has been one of the big effects of the electric age. It has deprived people of their public identity.
Emphasis mine. Influencers have been sepia-tinted by the profit orientation of the medium, and their messages do not correspond to a position authentically held. You must now look and act a certain way to appease the algorithm, and by extension the audience.

If nothing else, one should at least recognize that people primarily identify through audiovisual media now, when historically, due to limits of bandwidth, computing, and technology, it was far more common to represent oneself through literate media - even as recently as IRC. You can come to your own conclusions on the relative merits and differences of textual vs. audiovisual media; I will not waffle on about this at length here.
The medium itself is reshaping the ways people represent, think about, and negotiate their own self-concept and identity. This goes beyond the banal tweets (messages) about what McSandwich™ your favourite influencer ate for lunch, and it's this phenomenon that is important and worth examining - not the sandwich.
[0] Marshall McLuhan in Conversation with Mike McManus, 1977. https://www.tvo.org/transcript/155847
For "the medium is the message", "medium" refers to any tool that acts as an extension of yourself. TV is an extension of your community, even things like light bulbs (extends your vision) are included in his meaning.
McLuhan argued that all forms of media like that carry a message that's more than just their content. "The message" in that argument refers to the message the medium itself brings rather than its content. For example, the airplane is "used for" speeding up travel over long distances, but the message of the medium itself is to "dissolve the railway form of city, politics, and association, quite independently of what the airplane is used for."
You can see it happening via online media that extend ourselves across the internet. Think of how, once easy video creation via Youtube became ubiquitous, web comics stopped being a popular medium for comedy online. It's not that web comics faded because they got worse; it's that they faded into a niche format because people didn't want to communicate via static images anymore. Or how, once short form videos on TikTok got big, you saw other platforms shift to copy the paradigm.

McLuhan's point is that it's not just the content of those short form videos that matters; it's the message of the format itself. People's attention spans grow shorter because of the format, and before too long, we saw the tastes and expectations of the masses change. Reddit's monosite-with-subcommunities format and dopamine-triggering voting feedback mechanism were its message more than any actual content posted there, and it's why traditional forums are niche and dwindling.
If you want to get a pretty good understanding of it, just read the first chapter of his book Understanding Media. It's short and relatively straightforward.
To maintain relevance, we must find common ground. There is no true objectivity, because every sign must be built up from an arbitrary ground. At the very least, there will be a conflict of aesthetics.
The problem with LLMs is that they avoid the ground entirely, making them entirely ignorant of meaning. The only intention an LLM has is to preserve the familiarity of expression.
So yes, this kind of AI will not accomplish any epistemology; unless of course, it is truly able to facilitate a functional system of logic, and to ground that system near the user. I'm not going to hold my breath.
I think the great mistake of "good ole fashioned AI" was to build it from a perspective of objectivity. This constrains every grammar to the "context-free" category, and situates every expression to a singular fixed ground. Nothing can be ambiguous: therefore nothing can express (or interpret) uncertainty or metaphor.
What we really need is to recreate software from a subjective perspective. That's what I've been working on for the last few years... So far, it's harder than I expected; but it feels so close.
What does "subjective" mean here? Are you talking about just-in-time software? That is, software that users can mold on the fly?
I'm reminded immediately of the Enochian language which purportedly had the remarkable property of having a direct, unambiguous, 1-to-1 correspondence with the things being signified. To utter, and hear, any expression in Enochian is to directly transfer the author's intent into the listener's mind, wholly intact and unmodified:
Every Letter signifieth the member of the substance whereof it speaketh.
Every word signifieth the quiddity of the substance.
- John Dee, "A true & faithful relation of what passed for many yeers between Dr. John Dee ... and some spirits," 1659 [0].
The Tower of Babel is an allegory for the weak correspondence between human natural language and the things it attempts to signify (as opposed to the supposedly strong 1-to-1 correspondence of Enochian). The tongues are confused, people use the same words to signify different referents entirely, or cannot agree on which term should be used to signify a single concept, and the society collapses. This is similar to what Orwell wrote about, and we have already implemented Orwell's vision, sociopolitically, in the early 21st century, through the culture war (nobody can define "man" or "woman" any more, sometimes the word "man" is used to refer to a "woman," etc.).

LLMs just accelerate this process of severing any connection whatsoever between signified and signifier. In some ways they are maximally Babelian, in that they maximize confusion by increasing the quantity of signifiers produced while minimizing the time spent ensuring that the things we want signified are being accurately represented.
Speaking more broadly, I think there is much confusion in the spheres of both psychology and religion/spirituality/mysticism in their mutual inability to "come to terms" and agree upon which words should be used to refer to particular phenomenological experiences, or come to a mutual understanding of what those words even mean (try, for instance, to faithfully recreate, in your own mind, someone's written recollection of a psychedelic experience on erowid).
[0] https://archive.org/details/truefaithfulrela00deej/page/92/m...
Non-determinism is what conveniently fills the gap of having no spec.
In fact, turn the temperature to 0 and it will be virtually deterministic. That exacerbates the problem that LLMs, as you rightly point out, have no spec.
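A minimal sketch of why temperature 0 collapses sampling to argmax (the logits here are made up for illustration; real inference stacks can still vary slightly due to floating-point and batching effects, hence only "virtually" deterministic):

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample a token index from logits at the given temperature."""
    if temperature == 0:
        # Greedy decoding: always pick the highest-logit token -> deterministic.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Standard temperature-scaled softmax, then a weighted random draw.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token scores

# At temperature 0, every seed picks the same token:
greedy = {sample(logits, 0, random.Random(s)) for s in range(100)}
print(greedy)  # {0}

# At temperature 1, different seeds can pick different tokens:
warm = {sample(logits, 1.0, random.Random(s)) for s in range(100)}
print(len(warm) > 1)  # True
```

The randomness lives entirely in the sampling step, so pinning temperature to 0 removes it; what it cannot remove is the underlying absence of a spec.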
But it seems we are heading there. For simple stuff, if I make a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.
So either way, this is what I focus my thinking on right now - something that was always important and, with AI, even more so: crystal clear language describing what the program should do and how.

That takes real thinking effort.
What makes you think it will work for you?
Unless you review that code carefully, and then we're back to the point about it not saving you any cognitive overhead.
The “with extra steps” is doing a lot of work in that sentence.
That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes" "you're holding it wrong" magical thinking.
In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.
On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.
So right now, humans are for sure more reliable. But it is changing. There are things where I already trust an LLM more than a random human, or certain known ones.
A lot of people are using them as such too: the number of people talking about "my fleets of agents working on 4 different projects" - they aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 consistently get things backwards at the margins, mess up, make bugs: we wouldn't accept a compiler that did any of this at this scale or level lol
So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase and will happily keep piling cards on top of the crooked house of cards they've generated.
Just like before AI, you need a cycle of building and refactoring running on repeat with careful reviews. Otherwise you will end up with something that even an LLM will have a hard time working in.
Isn't it an abstraction similar to how an engineering or product manager is? Tell the (human or AI coder) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.
Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.
A good product manager understands the biz needs and the consumer market and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.
Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol
A human coder can be seen as an abstraction level because they will talk to the PM in product terms, not in code, and the PM will be reviewing the product. What makes this work is the underlying contract that only a small number of iterations is needed before the product is done, and each later one should take less of the PM's time.
We've already established that using an LLM tool that way does not work. You can spend a whole month doing back and forth, never looking at code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.
There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.
I'd argue these are not at all OK to lose. You live in an earthquake zone? You'd better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check that a number is roughly where it should be? You should be able to do that in your head.
There might be better examples that support your point more effectively e.g. cursive writing
The arguments you make ≤ the values you actually hold ≤ the actions you take in support of those values.
I'm only interested in any such argument to the extent to which you've personally put it into practice. Otherwise, you're living proof of the argument's weakness. (To be fair, it's extremely hard to be internally consistent on this stuff! We all want better for ourselves than we have time and energy for. But that's my point: your fully subconscious emotional calculus will often undercut at least some of your loftier aspirations. Skills that don't matter anymore invariably atrophy due to the opportunity cost of keeping them honed.)
The ones I use certainly are. And with a bit of training you can reason and predict how they will respond to a given input with a large degree of accuracy without being familiar with how the particular compiler under question was implemented.
Not so with the AI tools. At least with the ones I use anyway.
Nevermind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.