> Also, when did we stop liking to learn?

I suspect it happened when we achieved a level of such constant stimulation (there is a pocket computer always on us with infinite effortless distraction) that we’re never bored and never engage the default mode network.

https://en.wikipedia.org/wiki/Default_mode_network

https://www.youtube.com/watch?v=orQKfIXMiA8

When you’re bored, your mind goes to places it wouldn’t otherwise go. Curiosity kicks in. Curiosity is a precursor to learning. Learning engages the brain and is fun. But it’s not fun all the time; some of it is challenging and frustrating (which is good: that’s the process that teaches you).

When you have the digital equivalent of infinite candy and the brain equivalent of a sweet tooth, it’s hard to resist the siren’s call. The consequence is the brain equivalent of a stomachache (depression and loss of meaning), but unfortunately it doesn’t hit you the same way, so you don’t make the immediate connection that would make you stop. When you think about it, it’s ridiculous from several angles: the candy is infinite, it’s never going to run out, so you don’t need to gorge! But then we justify ourselves as only a true addict would: the candy may be infinite, but the flavours are limited editions and always rotating, and what if I miss that really good one everyone is on?! Then you miss it, is the answer. No one will be talking about it in fifteen minutes anyway.

reply
> it happened when we achieved a level of such constant stimulation (...) that we’re never bored and never engage the default mode network.

I don't know... I don't disagree, but this has been repeated so much that I believe everyone, or at least everyone actively participating in HN discussions, is already aware of it.

So if we are aware of this and we consciously choose to keep engaging in dopaminergic activities, without having some time to be bored, I think it starts to become a choice. We can blame tech for starting this trend of stealing our attention, but once we become aware of this, we can only blame ourselves for perpetuating it.

reply
I still love learning, especially outside of tech. I've been working in the ML field for over 8 years, and while I went into it because I liked the field, I did lose some interest in learning things, mostly because of the sheer volume of publications and the rate of change. Learning stopped being something I enjoyed and became something I had to do to keep up. It just stopped having the same flavor.
reply
I agree with you on everything you said here except:

> when you know how the thing works and have that mental context, you will always be faster than an AI

That's just plain false, honestly. No one can type at the speed AI can code, even factoring in the time you need to spend to properly write out the spec & design rules the AI needs to follow when implementing your app/feature/whatever. And that gap will only increase as LLMs get more intelligent.

reply
Some of us do actually have intimate knowledge in certain areas where guiding an AI takes longer than doing it yourself. It's not about typing speed; it's that when you know something really, really well, the solution/code is already known to you, or the very act of thinking about the problem makes the solution known to you in full. When that happens, it's less text to write the solution itself than to write a sufficient description of it for the AI (not even counting the back and forth of reviewing the AI's output and correcting it).
reply
Giving a precise description of what the computer is supposed to do is exactly what programming is.

The more specific your requirements, the closer you get to the point where natural language stops being useful.

reply
I code mostly in APL and J. It’s much faster to type the code than explain everything to AI.
reply
The exceptions that prove the rule. When your programming language is built out of single Unicode characters with specific meanings, of course that's faster than typing out in English what you want.

What do you use them for? For most AI users it's usually CRUD, and I've never seen a web server or frontend in APL-like languages.

reply
The exception is the rule.

The reason programming is hard is that most languages force you to use a hammer when you need a screwdriver. LLMs are very good at misusing hammers, and most people find them useful for that reason.

If you use a sane DSL instead, the natural-language description of a problem is always more complex and much longer than the equivalent description in the DSL. It's also usually wrong to boot.

This is what algebra used to look like before variables: https://en.wikipedia.org/wiki/Archimedes%27s_cattle_problem#...

I don't think you will find anyone who can do better than an LLM at one-shotting the prose version of the problem. Both will of course be wrong.

But I also don't think you will find an LLM that can solve the problem faster than a human with Prolog when you have to use the prose description of the problem.

reply
Using esoteric programming languages doesn’t suddenly make it true for the majority of development, which is web apps, CRUD stuff, some data science, etc.
reply
This is actually my biggest gripe with vibecoding. The single best feature of any programming language is that it is precise. And that is what we throw out?! In favor of natural language, of all things?! We're insane!
reply
It turns out an awful lot of precision (plenty for many things) lives in library and web APIs, documentation, header files and dependency manifests. Language can literally just point at it without repeating it all. Avoiding mistakes by eliminating the manual copying of things like actuarial and ballistics tables is what the original computers were built for.
reply
Custom written code can also point at those APIs and libraries without repeating it all? Or am I missing your point?
reply
deleted
reply
Yes, there are still many areas where skilled humans are faster than AI (meaning it's faster to code it yourself than to provide so much context and guidance that the AI can do it on its "own").

But in general the statement is really not true anymore; for generic projects/problems there's a pretty good chance that the AI can one-shot a working solution from a lazily typed, vague prompt.

reply
Maybe a failure to automate?

The volume of people successfully adopting agentic engineering practices suggests this stuff isn't rocket science, but it is a learned skill and takes setup.

A year into heavy AI coding, my experience is that what you're describing should let you run 5+ agents simultaneously on a project, because you know what you're doing, you've set it up right, and you know how to tell the agents to leverage that properly.

reply
> successfully adopting agentic engineering practices

What's your definition of "successfully"?

More LOC committed per day is probably the only one that's guaranteed when you let spicy autocomplete take the wheel.

I don't think it's at all possible to reason about the other, more meaningful metrics in software development, because we simply don't have the context of what each human is working on. As with the WYSIWYG fad of 3 decades ago, "success" is generally self-reported, by people who don't know what they don't know, and thus don't know what spicy autocomplete is getting woefully wrong.

"But it {compiles,runs,etc}" isn't a meaningful metric when a large portion of the code in question is dynamic/loosely typed in a non-compiled language (JavaScript, Python, Ruby, PHP, etc).

reply
Also, if your boss tells you "we're an AI company now, you will use AI or be fired", then of course you will use AI and claim it is productive.
reply
You seem to have missed OP's point: some things are only encoded in your brain once you are sufficiently experienced.

Translating that into code can happen directly by you, or into prompt iterations that need to result in the same/similar coded representation.

In other words, when it matters how something works and it is full of intricate details, you do not need to specify it, you just do it. One example (probably not the best one): knowing how to avoid the N+1 query performance issue. You do not need a ticket or spec to be explicit about it; you just do it at no extra effort. Models are probably OK at this one since it is such a pervasive gotcha, but there are so many more.
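For anyone unfamiliar with that particular gotcha, a minimal sketch (table names invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO author VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO post VALUES (1, 1, 'hello'), (2, 1, 'world');
""")

# The N+1 shape: one query for the authors, then one more query per author.
for author_id, name in con.execute("SELECT id, name FROM author"):
    posts = con.execute(
        "SELECT title FROM post WHERE author_id = ?", (author_id,)
    ).fetchall()

# What the experienced dev writes instead, unprompted: a single round trip.
rows = con.execute("""
    SELECT a.name, p.title
    FROM author a LEFT JOIN post p ON p.author_id = a.id
""").fetchall()
```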

reply
That's the failure to automate. The AI isn't telepathic, so agentic engineers not automating this stuff is skipping out on the engineering part.

You set up the environment and then you do the work. Unless you are switching employers every week, you invest in writing that stuff down so the generation is right-ish, and you generate validation tooling so it auto-detects mistakes and self-repairs.

reply
Sometimes you write the feature yourself, and write it well so it's reusable.

Imagine you have to implement a specific algorithm for a quantum computer.

There's no value in setting up AI to do the writing for you. That might be orders of magnitude harder than writing the algorithm directly.

For highly specialized one-off features, it doesn't always pay off.

On the other hand, if all you do are some generic items that AI can do well... then I'm not sure you're going to have a job long term; your prompts and automation will be useful to the new junior hires who will be specialized in using these tools and cost-effective.

reply
I think there's a level above that, where the words to describe such structure are familiar and readily available, and hey, guess what? The model understands those too. Just about every pattern has a name. Or a shape. Or an analog or metaphor in other languages or codebases. All work as descriptors.
reply
This presumes that most of this stays encoded as words in our brains: the effort to translate some of these into words might be similar to translating it into code (still words, just very precise).

It's like talking legalese vs plain English; or formal logic vs English. Some people have the formal stuff come more naturally, and then spitting code out is not a burden.

reply
Maybe you're the exception and are actually doing it right and actually getting good results, but every time I have heard this, it has been an ignorance-is-bliss scenario where the person saying it is generating massive amounts of code that they don't understand, not because they're incapable but because they don't care to, and immediately washing their hands of it afterward.

To give an example of where I hear this: it is indistinguishable from the things I hear from my coworkers: "You just need the right setup!" (IMO the actual difference is that I need to turn off the part of my brain that cares about what the code actually does or considers edge cases at all.) What I actually see, in practice, is constant bugs where nobody ever addresses the root cause, and instead just paves over it with a new Claude mass-edit that inevitably introduces another bug, and we repeat the same process when we run into the next production issue.

We end up making no actual progress, but boy do we close tickets, push PRs, and move fast and oh man do we break things. We're just doing it all in-place. But at least we're sucking ourselves off for how fast we're moving and how cutting edge we are, I guess.

I dunno, maybe I'm doing it wrong, maybe my team is all doing it wrong. But like I said, the things they say are indistinguishable from the common HN comment that insists this stuff is jet fuel for them, and I see the actual results, not just the volume of output, and there's no way we're occupying the same reality.

reply
1. If what you're replying to were a thing, wouldn't there be an open source project where I could see this in action? Or some sort of example I could watch on YouTube somewhere. 2. The people who talk like this in my company spin up new projects all the time and then just get to hand them off for other teams to clean up the mess and decode what the heck is going on.
reply
Yeah it’s when you go off the happy path that it gets difficult. Like there’s a weird behaviour in your vibe-coded app that you don’t quite know how to describe succinctly and you end up in some back-and-forth.

But man AI is phenomenal for getting stuff out of your head and working quick.

reply
That doesn't matter. The statement wasn't "faster than AI right now", it was "will always be faster than AI". And that's just nonsense.

Current AI systems are extremely serial, in that very little of the inherent parallelism of the problem is utilized. Current-gen AI systems run at most a few hundred thousand operations in parallel, while for frontier models, billions of operations could be run in parallel. In other words, what currently takes AI 8 hours would take barely long enough for you to perceive the delay after you release the enter key.

For a demo, play around with https://chatjimmy.ai/, the AI chatbot from Taalas, where they etched the model into silicon in a distributed way, instead of storing it in RAM and sucking it to the execution units through a straw. It's an 8B-parameter model, so it's unsuitable for complex problems, but the techniques used for it will work for larger models too, and they are working to get there.

And even Taalas is very far from the limits. Today's better-quality LLM chatbots operate at ~40 tokens per second; the Taalas chatbot operates at 17000 tokens/s. If you took full advantage of parallelism, you should be able to get a latency of low hundreds of clock cycles per token, or single-request throughput of tens of millions of tokens per second (with a fully pipelined model able to serve one token per clock cycle, from low hundreds of concurrent requests). Why doesn't everyone do it like that right now? Because to do this, you need to etch your model into silicon, which on a modern leading-edge process is a very involved undertaking that costs hundreds of millions+ in development and mask costs (we are not talking about single chips here; you can barely fit that 8B model into one) and takes around a year. So long as models keep improving so much that a year-old model is considered too old to pay back the capital costs, the investment is not justified. But when it is done, it will not just make AI faster, it will also make it much more energy-efficient per token. Most of the energy cost comes from moving data around and loading/storing it in memory.
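To put rough numbers on that claim, a back-of-envelope sketch (the clock rate and cycle count below are my assumptions for illustration, not Taalas specs):

```python
# Back-of-envelope arithmetic; both inputs are assumed, not measured.
clock_hz = 3e9            # an assumed chip clock of 3 GHz
cycles_per_token = 150    # "low hundreds of clock cycles per token"

single_request_tps = clock_hz / cycles_per_token
print(f"{single_request_tps:,.0f} tokens/s")  # 20,000,000 -> tens of millions

# Fully pipelined at one token per clock cycle, with ~150 requests in
# flight, aggregate throughput equals the clock rate itself: 3e9 tokens/s.
```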

And I want to stress that none of the above depends on any kind of new developments or inventions. We know how to do it; it's held back only by the pace of model improvement and economics. When models reach a state of truly "good enough", it will happen. It feels perverse to me that people are treating this situation as "there was a pre-AI period that worked like X, now we are in a post-AI period and we have figured out that it will work like Y". No. We are at the very bottom of a very steep curve, and everything will be very different when it's over.

reply
I don't believe this. Either you're lying, or you just haven't caught on with how to use Agentic AI.

Everything I do to interact with my computer is through an agent now.

reply
I don't believe this. Either you're lying, or you just haven't caught on how to use a computer.

Everything I do to interact with my computer is still the same.

See how boring you are?

reply
Ok sorry about that. I seriously don't believe him. The Agent is so fast there's literally no way you can be faster.

Telling the agent your high-level plan that you are extremely familiar with and then having the agent execute on 2000 lines of code is FASTER than executing on those 2000 lines of code yourself. There is no reality where that can be physically beaten, even by someone who's typing really quickly with zero pause. Physically impossible.

Less boring or not? Another way to put it... although my answer is boring, I think I'm right. He is either a liar or, like many other people, lacks skill in using AI... because the transition to AI is happening so fast, not many people are fully utilizing AI to its maximum potential. Many still use IDEs, many still interact with the terminal. Many people still don't use it to configure infrastructure, do database administration, deploy code... etc.

reply
AI can write 2000 lines faster than you, but you can write those 2000 lines correctly on the first shot faster than having AI do 10 iterations on them with your guidance to finally get it right.

I know that a better plan could mean fewer iterations, but that again extends the time you need to spend on the plan, and therefore the total time of the AI solution.
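To make the trade-off concrete, here's a toy break-even model; every number in it is invented purely for illustration:

```python
# Toy model: hand-writing vs. iterating with an AI. All numbers invented.
lines = 2000
hand_rate = 10                  # correct lines per minute, thinking included
hand_time = lines / hand_rate   # 200 minutes

plan_time = 20                  # minutes spent writing the spec/prompt
per_iteration = 5 + 15          # minutes per round: generation + review/fixes

def ai_time(iterations: int) -> int:
    return plan_time + iterations * per_iteration

print(hand_time)    # 200.0 minutes
print(ai_time(1))   # 40 minutes: a one-shot success is a clear win
print(ai_time(10))  # 220 minutes: ten iterations and hand-writing was faster
```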

reply
Care to explain which particular intimate knowledge allowed you, in the last 6-9 months, to be faster than AI in a certain area?

Honestly, I'm still faster than AI at cooking scrambled eggs, but definitely not faster than either an AI (or a compiler) at translating stuff into code.

reply
I interpret "faster than AI" to include writing the prompt. For me (scientific computing) it is more often than not faster to write out a simulation or design in a language I know inside out, like Fortran or Mathematica, than to explicate the requirements to an LLM to request the code. Obviously if someone wrote out a prompt for both me and the LLM, the LLM would be way faster, but I don't think that's what the commenter had in mind.
reply
If you're good at SQL, or SQL-like languages such as LINQ, it might be more efficient to precisely write a reasonably complex query than to try to explain it in detail to an AI.
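For instance, something in the spirit of the sketch below (run here via sqlite3; the schema is invented). Describing the recursion unambiguously in English takes about as many words as the SQL itself:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employee VALUES (1, 'ada', NULL), (2, 'bob', 1), (3, 'cyd', 2);
""")

# Walk the reporting chain with a recursive CTE.
rows = con.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employee WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM employee e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth
""").fetchall()
print(rows)  # [('ada', 0), ('bob', 1), ('cyd', 2)]
```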
reply
I am very good at SQL. I've worked half my life with SQL, taught it, and know all kinds of SQL flavours. But good luck getting ahead of AI on a complex query with recursive CTEs, left outers, 625-column tables whose semantics change conditionally on certain properties, and then some obscure Oracle package APIs.

There's no way you beat an LLM on this, or even on trivial ones. LLMs have been better at this since at least 2024; if you haven't noticed, then perhaps you're not doing enough SQL.

But then, it also took people years in the 90s to realize they could not outpace Visual Studio by being very good at x86 assembly.

reply
Not the parent but I've had this happen when debugging for sure. Sometimes I ask Claude Code to help me debug something and it makes a wrong assumption and just churns in circles burning tokens. While it's doing that I realize the problem and fix it.
reply
Sometimes debugging is faster indeed, and making small, very focused changes too.

But during feature development? Not possible. And I consider myself a very fast developer.

reply
Don't you find that debugging takes place as part of feature development though?
reply
What I meant is that only sometimes am I faster than Claude at debugging: when it's a standalone problem, a report in Sentry, and I just know immediately where I need to go to fix it. Then it's faster to do it myself than to tell Claude what the problem is and where to look, and wait.

Bugs happen during feature development, as you say, but then Claude is in the context, and I don't need to tell it where to go, it sees the bug with failing tests, or smth similar.

BTW, one thing that helps my Claude with debugging harder problems is telling it to apply the scientific method to debugging: generate hypotheses, gather pro/con evidence, write to a journal file debug-<problem>.md, and design minimal experiments to debunk the hypotheses.

You can add that as a skill, and sometimes it will pick it up automatically, but it works wonders just as a single sentence in the input.
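For concreteness, one way that single sentence might read (my phrasing, not a quote from the parent):

```
When debugging, apply the scientific method: generate competing hypotheses,
gather evidence for and against each in debug-<problem>.md, and design
minimal experiments to debunk hypotheses before changing any code.
```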

reply
..but then you ignore all the other times CC got it right. Statistically I would put my bets on CC (or Codex, or PI) doing it right more often than you would, and being right more often than not.

Besides, it is a system: you query it, it responds. I'm sure your DBs are not always 'right' either, particularly when you ask the wrong questions.

reply
[dead]
reply
> Some of us do actually have intimate knowledge in certain areas where guidance of an AI takes longer than doing it yourself.

You speak as if AI development is frozen, and you ignore the poster's point:

> that gap will only increase as LLMs get more intelligent

reply
In my experience AI can write _something_ from scratch, but often edge cases won't be handled until I go through and read the results or test it. Usually when I'm writing by hand I will naturally find the majority of edge cases as I go. By the time I've read through the results and fixed said edge cases, I usually would have been faster just doing it myself.
reply
This has been my experience thus far. Yes, a complete prototype can be made, but... you don't really know until you read the code and test it. Just yesterday, small things came up around Qt screen focus that wouldn't have surfaced save for initial testing.

I think, and I recognize this runs mostly against the 'agentic' push, that I will stick with slow iteration.

reply
My experience is the opposite: AI takes too many edge cases into account and guards against even the most unlikely things. The upside is that it often handles edge cases that I either didn't think about or was too lazy to implement.

I can with full confidence say that the code AI writes is more robust and safe than if I would have done it myself. The code definitely becomes more bloated though.

reply
It also loves to add edge-case handling where it's not needed, and in poorly chosen places.
reply
> No one can type at the speed AI can code

Don't we already have a weekly post nowadays explaining, again, that typing isn't the bottleneck?

reply
Which is still false and not serious. It's one of the dumbest rationalizations I've seen. AI has many flaws but pretending that it's useless because of that is not it.
reply
They probably mean faster to a higher-level goal rather than SLOC. Typing speed and SLOC have never been that useful for measuring productivity.
reply
If you've never had the experience of handing something off to someone else being more laborious and slower than doing it yourself, due to having to set constraints and define success, then you simply haven't held a senior enough position to comment on this with any authority.
reply
Also employees who work slower than you (and spend most of their time not actually working).
reply
It should be “…you will always be faster than someone _without the knowledge_ using an AI”
reply
As I understood it, he's referring to the overall time it takes to build a complete, finished piece of software, accounting for the refactoring and bug fixes and all that. Because if you hadn't understood the tools you're using, you would be running into roadblocks, and that adds up.
reply
deleted
reply
Plenty of cars can get off the line faster than an F1 car. But around a track, an F1 is by far the fastest in the world.

Going fast isn’t the difficult bit.

reply
Where does this certainty that LLMs will get more intelligent stem from?
reply
Except it's often faster to make the change yourself than explain it to an AI.
reply
deleted
reply
> No one can type at the speed AI can code

You can definitely be faster than frontier models. The number of tokens per second is not that high and they require a lot of tokens for thinking and navigating things.

reply
Especially if you use auto-complete AI, ironically. You type a few characters, the line fills out in less than a second, as opposed to a reasoning model that takes maybe a second per 2-3 lines it writes out.
reply
> LLMs get more intelligent

The Spicy Autocomplete koolaid club is out in force today I see.

We clearly have different ideas of what the word "intelligent" means.

reply
Explaining your idea of intelligent would have been a better comment than name calling and shallow dismissal.
reply
Your views might carry more weight if the crux of your rebuttal wasn't manufactured outrage that I used a laughably accurate nickname for a type of software.
reply
> Also, when did we stop liking to learn?

When the economy got so bad for so many people that every waking moment has to be spent either chasing fresh cash, or recovering from cash-chasing and worrying about the next cash, to the point where they have to largely ignore their own long-term goals or basic morals or principles.

You can blame all the new gadgets (phones/social media/TikTok/'dopamine things'), but that's very much blaming the symptom, not the problem.

(It’s the meme. “Guys, this isn’t funny. Humans only do this when they’re very distressed”)

reply
AI is just revealing the two types of people in this line of work: those who don't actually like software and just do it because it's lucrative, and the actual nerds who care.
reply
I think there's a continuum here, too. I've heard it said, in jest, mind you, that LLMs square the dev. It turns a 1.5x dev into a 2.25x dev, but it also turns a 0.75x dev into a ~0.56x dev.

I think the exponent of 2 is probably too high, but it's not a bad approximation of a very messy reality.

There is also the division between people who value the thing being produced and people who value the actual production of that thing, whether or not it's used. I don't see one side here as being "right", necessarily, but when a company is behind it, one is certainly more valued, and I think not incorrectly.

reply
You are probably talking about people who just crunch out half-baked solutions for the sake of getting somewhere.

But there are other nerds who care, just not about the code quality, but about conversion, testing out business ideas quickly, getting to know their customers better.

There are nerds who care about business strategy.

There are nerds who care about accounting principles and clean financial reporting.

There are nerds who care about sales targets and partnerships.

There are many types of nerds out there. Don't limit nerds to engineers, because the "tech" world is not just an engineering world anymore. All these nerds you can team up with to build meaningful things, because they do care.

reply
They very clearly weren't talking about nerds in general but rather nerds who care about software.
reply
A much more charitable framing: people who enjoy the process vs people who enjoy the result.

(Though, granted, the results are a lot better if you craft it by hand)

reply
But business people always cared only about the result. My PM (who speaks like a salesman) only cares about the results. My "head of" same. My CEO same. The only ones who ever cared about the process and quality were us, the engineers... and if we don't have that care, well, to hell with everything.
reply
Assuming that is accurate, the logical conclusion is that the race is over. Management can get their $result, and fast. Whether it is good or bad is a separate story, and only time will tell whether they will be forced to learn anything. Right now the expectation is to push for results, and management seems to ascribe the current set of failures to people not embracing AI enough.
reply
That's not true as a simple statement: many business people really do care about quality and process, and you may find you care much more about them than you think.

How often have engineers decried yet another rewrite that some project is doing? Or talked about "over-engineering" something that isn't needed, or about another person on the team having set up a full Kubernetes GitOps thing that's glorious to them when you just want to scp a Go binary and be done with it?

I've seen truly excellent engineers hit this issue. I worked in a team years ago where people disagreed on the approach to take on a new project, so we all made a prototype and presented it, so we could pick a direction. There was a requirement that it be done in Ruby, since that was the language most of the developers were most fluent in. One of the engineers, remarkably smart, wrote a Lisp interpreter in Ruby so that technically it'd be "in Ruby" but have the benefits of Lisp.

He cared about the quality and process in one area. Deeply. However, focussing on that would have been to the detriment of the rest of the actual product we wanted to ship. If you considered the quality of the product as a whole and the process at the level of the organisation, you'd do something very different.

Now, none of this means all business people are good at this or at long-term vision or anything, just as it doesn't mean all engineers have a very narrow focus. But I've seen engineers focus on the quality or engineering of some component without looking at what it is you're actually trying to achieve as a business, and so push for a worse overall process and lower "quality" result. It's the same sort of disconnect that leads a lot of engineers to rail against meetings and PMs that slow them down, without seeing from the other side that it's often better to build the right thing more slowly than the wrong thing more quickly.

reply
> enjoy the process

This means different things to different people. Lots of people enjoy the process of engineering solutions with LLM agents, building out tailored skills and custom approaches that make up their own flavour of "agentic" workflow. There are also people who find joy in JavaScript, though other people cannot understand why. And others again love systems languages, or even tinkering with assembly, etc.

What I wanted to say is that LLM use does not automatically mean people just want to get results faster, there are still nerds enjoying the process of working with these new tools.

reply
I am not really sure. I wrote some scripts with an LLM that aggregated data from several APIs, and the LLM had the foresight to create a caching layer for the API responses, as it properly inferred that I would need the results over and over again, as well as to use asyncio to accelerate fetch speed. For me this would have been a v2 or v3, and it one-shotted it perfectly.
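That pattern looks roughly like this (a minimal sketch; aiohttp is an assumed dependency and the URLs are placeholders):

```python
import asyncio
import json
import pathlib

import aiohttp  # assumed dependency; any async HTTP client would do

CACHE = pathlib.Path("api_cache")
CACHE.mkdir(exist_ok=True)

async def fetch_cached(session: aiohttp.ClientSession, url: str):
    # Cache responses on disk so repeated runs don't re-hit the APIs.
    key = CACHE / (url.replace("://", "_").replace("/", "_") + ".json")
    if key.exists():
        return json.loads(key.read_text())
    async with session.get(url) as resp:
        data = await resp.json()
    key.write_text(json.dumps(data))
    return data

async def aggregate(urls):
    async with aiohttp.ClientSession() as session:
        # gather() runs all the fetches concurrently instead of one by one.
        return await asyncio.gather(*(fetch_cached(session, u) for u in urls))

# results = asyncio.run(aggregate(["https://api.example.com/a", "https://api.example.com/b"]))
```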
reply
Yeah, they are good at applying generic patterns, but often that can be overkill/YAGNI and lead to more maintenance work in places that are fine with a much simpler, more straightforward solution. But this is something the engineer can decide, and with LLMs they won't be forced to make the trade-off because it takes longer to build, but rather based on whether it is really necessary or not.
reply
Can we build a list of the actual nerds who care? Need it for my future recruitment needs lol.
reply
The benchmark is "do they do it for fun", i.e. personal projects.

But the real trick isn't "number of personal projects", but how weird they are. There's no "rational" reason to do them; they don't increase the person's marketability/hireability. They are done purely for intrinsic reasons.

(On reflection, this also seems to be a pretty robust predictor of autism. :)

reply
This is such a naive take. Most of the nerdiest and most "quality"-oriented engineers are leaning hard into agentic coding. I feel like the most impressive engineers I know have always leaned into learning how to "sharpen the axe", and AI is really the biggest axe we have seen.
reply
I take software engineering and production reliability very seriously. But coding is just a small part of my job; it's not really the meat and potatoes. I'll vibe code (responsibly) where I can.
reply
I care a lot about software and I use LLMs extensively. There are some things I deeply understand yet don't care to do anymore, because I've done them for years and there's nothing to be gained from doing them manually.
reply
It goes for all professions, really: people who do it for work and people who care. This applies to any profession: plumbers, doctors, carpenters, cleaners, etc. Most of us have experienced both types, and I haven't heard of anyone preferring the "do it for work" type over the ones who care. And as in those other professions, in software we accept the worse of the two because finding people who care is both time-consuming and often much more expensive.
reply
> in software we accept the worse of the two

and the whole world suffers for it.

reply
No disagreement from me
reply
I've posited for a while now that the people who find spicy autocomplete to be exciting are the people who can't really do what it does.

I played with Image Playground some time last year. It was really fun. You know why? I can't draw, and I can't paint, to save my life. It's letting me do something I can't do well (or at all) on my own.

Using an LLM to do something I can do, with the caveat that it's pretty mediocre at the task, and needs to be constantly monitored to check it isn't doing stupid things? If I wanted that I'd just get an intern and watch them copy crappy examples from StackOverflow all day.

The same logic explains the use of LLMs to write emails and other long-form text.

It makes accessible something that people otherwise cannot do well. Go look at submissions on community writing sites. The people who write because they're good at it are adamant they don't use an LLM.

People use LLMs to do things they're otherwise not able to do. I will die on this hill.

reply
"I've posited for a while now" and you post the most lukewarm and outdated take like it's an enlightenment. I've been coding for 20 years and can very well do everything the AI does, and so can all devs I know. We use it because it amplifies us, not because we couldn't otherwise. You've chosen a very ridiculous hill to die on.
reply
Is your argument that there is no imaginable situation where someone who was competent at software development could find use for a semi-automated tool for writing software?

That would imply that either the person in question has infinite time, or has access to all software that could ever be of utility to them, which seems unlikely.

reply
There's a reason I call it spicy autocomplete.
reply
Which is what?
reply
.... that an IDE providing a suggestion about what comes next as you type is not new, and the entire basis of how an LLM works is "what word probably comes next".

I'd have thought someone who's so enamoured with the tech would have at least a basic understanding of how it works.

reply
Initially I wanted to write more, but I can boil it down to taste and context mismatch. By that I mean some people see LLM output as tasteless or kitsch (a view I generally subscribe to), and another set of people (though often overlapping) hold disdain for, or at the very least look funny at, heavy LLM users, the way gym-goers would look at someone in the middle of the gym loudly suggesting using a dolly or forklift instead of barbell training.

So yeah, I guess the value of doodles has shot up simply because of optics.

Somewhere else in this comment section someone tried to broaden the definition of nerd so much that pretty much anybody who is a consummate professional is also a nerd. The hill I will die on is that people don't actually dislike all this new AI stuff, but more so the attitude of people heavily invested in it.

And to add another data point regarding your hill my drawing/painting moment was NLP stuff. Now if I want to do (rudimentary) sentiment analysis or keyword extraction I can lean on a local LLM. Yet I don't go around yelling Snowball (I think?) is obsolete.

reply
> more so the attitude of people heavily invested in it

Exactly.

LLM bros are just the new blockchain/crypto bros, but they aren't necessarily even writing their own spruiking comments any more.

reply
While you are dying on a hill, with the help of LLMs I'm shipping quality software and features to my customers at a pace I haven't been able to before. And no, not some Next.js slop. If you are letting your LLM look at StackOverflow, you are doing it wrong: it needs to be grounded in your stack's official docs and whatever other style/rules you prefer, wired up with other tooling like linting/formatting, duplication checking, etc. And yes, you have to constantly monitor the output and review every line of code, but it's still faster and, if managed correctly, produces better code and (this is the hill I will die on) better test suites and documentation than I would have written.
reply
> If you are letting your LLM look at StackOverflow, you are doing it wrong

So you've evaluated all the sources that the model was initially trained on, have you? How long did that take you?

> I'm shipping quality software and features to my customers at a pace I haven't been able to before.

I'm sorry are you agreeing with me or not? It sounds like you're agreeing with me.

reply
I'm just saying that you can't just let it rip based on its training alone; it needs to be grounded and harnessed in stack-specific tooling.
reply
I care about solving problems for and delivering value to my users. The software is simply a means to that end. It needs to work well, but that does not mean every line of code requires an artisanal touch and high attention to detail.
reply
I think there's some ambiguity in the discussion around what people mean when they say "good code".

Good code for a business is robust code that's functionally correct, efficient where it needs to be, and does not cost too much.

I believe most developers who care about good code are trying to articulate this: they care about a strong system that delivers well, which comes from good architecture.

LLMs actually deliver pretty well on the more trivial code-cleanliness stuff, or can be made to pretty trivially with linters, so I don't think devs working with them should be worried about that aspect.

What is changing fast is that last point I mentioned, "does not cost too much", because if you can get 70% of the requirements for 10% of the perceived up-front cost, the calculus has changed. But you are not going to get the same level of system architecture for that time/cost ratio. That can bite you later, as it does often enough with human coders too.

reply
I think the other aspect to this, which you allude to at the end, is that all of these arguments start with the assumption that all human software engineers produce high-quality code that meets the requirements, but obviously that's very much not the case in the real world. After all, 80-90% of drivers rate themselves as above average.

If one compares a single competent software engineer directing a number of agents against a random group of engineers (not necessarily working at FAANG or a YC startup), then those quality arguments are going to be significantly less compelling.

reply
Why exactly does "actual nerds who care" stipulate writing code?
reply
I have been building an iOS app that I had kicking around in my head for years but never had time to build. I have been a frontend UX engineer for the better part of a decade and went through a handful of tutorials on Swift. The project definitely sits in this uncanny valley for me. I have test suites for every aspect of the app and have the agent using TDD to avoid cheating - this has gotten me pretty far without having to look too close at the output other than general structure. As I'm reaching a more mature stage of the project though, I'm finding that I want to tweak a lot by hand in the code to get the details right without burning tokens.
reply
The agents always do the best work IMO if you already know exactly what you want, but are too lazy to implement it. I like having the agent mock up a working solution before reimplementing it.

To split the difference, I now try to hand code as much as I can from the beginning, leave TODO comments for the agent to mop up, and ask it to complete the issue with reference to the current diff. It reduces the surface for agents to make stupid assumptions. If I can get it done fast on my own, win for me; if the agent finds issues or there's logic that needs checking, also a win. This way you stay sharp, but you have access to an oracle if you get stuck, and it costs you fewer tokens.

reply
deleted
reply
> Also, when did we stop liking to learn? Why is it a bad thing to know all the ins and outs of a programming language?

I do not know the ins and outs of the assembly layer my high-level code ends up as. It's not because I don't like to learn; it's because I genuinely don't need to. At a certain level of AI performance, how will this be any different?

reply
Because you may not know the specifics of the assembly being generated, but you've likely learned a language built on top of assembly. And compilers do some great tricks behind the scenes to generate efficient assembly, but those tricks are specifically coupled to the semantics of the source language.

An LLM is not coupled to anything and can generate output that simply does not relate to the input. This doesn’t happen with compilers, and if it does, then it’s a specific bug to be addressed. An LLM can never guarantee certain output based on the input.

If I write x < 100, I know exactly how the compiler will treat that code every single time, and I know what < means and how it differs from <=

If I tell an LLM "I want numbers up to 100", will that give me < or <=? And will it be consistent every single time, even in the ten-thousandth program that I write?

The language is ambiguous where the code is specific
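The parent's example, rendered concretely in Python (a minimal sketch):

```python
# "I want numbers up to 100": English leaves the boundary ambiguous.
exclusive = list(range(100))   # n < 100  -> stops at 99
inclusive = list(range(101))   # n <= 100 -> stops at 100
assert exclusive[-1] == 99 and inclusive[-1] == 100
```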

reply
To me this is semantics, as far as "why don't you want to learn?" goes.

I have a co-worker on another team who writes the Java endpoints we consume. I can tell him what I need and I trust the output. I don't need to know Java to trust him, and it doesn't mean I don't want to learn.

There are a thousand examples like this across every stack and abstraction level, from SSH handshakes to GPS.

Sure, my co-worker is fundamentally different from a compiler, which is fundamentally different from an LLM.

My argument is that the chain-of-trust where you offload knowledge to an external source is identical. We do it all the time but somehow doing it with an LLM means we no longer want to learn?

reply
However, curious programmers who develop in high-level languages will dabble with assembly, maybe just for fun, and will be much better off for it than those who treat parts of the stack like a black box never to be opened.
reply
One difference is that to use a top-notch compiler/assembler you don't need to pay. They are open source and have a lot of support. To use the latest and greatest models (because no one around likes to use non-SOTA ones) you need to pay a premium price.

Multibillion-dollar companies are now the gateway for every line of code you need to write. That's dystopian. It sucks.

reply
Local models are increasingly becoming capable of taking on serious coding tasks that I would have previously sent to a frontier lab
reply
Yes, but that's a completely different argument (that I agree with). Essentially, yes they are conceptually similar but one is bad because you have to pay rent to use it.
reply
Fundamentally you need to start with "what am I trying to do?" and "given that goal, where is my time best spent?".

I made a checklist for my kids to stamp off items after they get back from school (sort bag, get changed, etc). I had two goals: 1) I was trying to solve a problem at home, and would have pip installed a library that just straight up did this already, and 2) I wanted to check out what the Claude website output was like at the time. My time was best spent poking at Claude a bit but mostly playing with my kids, so vibe coding it was.

Client test-speedup issues: I'm trying to speed up tests for them and spend as little time as possible doing so. I vibe coded some analysis and visualisation tools (mostly AI, but with some review), guided multiple prototypes for timing, and let it just fix whatever. The actual solutions got a more dedicated review.

Learning a new thing: the goal is to learn that thing. AI there is good for doing a lot of the work around it. Maybe I'm focussing on, say, Z3. AI can help with debugging, finding docs, and setting up an environment, and leave me to do the central part.

reply
Let's see if someone can point me towards some resources on the following.

The problem is mixing vibe-coding and agentic engineering, and switching the brain between 2 different modes (fast-feedback gratification vs deep-focus gratification).

There's no clear-cut rule for what works. Different people, different brains, and, especially amongst devs, some degree of low-key neurodivergence.

And then there’s waiting mode, those N seconds/minutes that agents take to think and write.

What's the right mix? Keep a main focused project and... what do you do in the meantime? Vibe code something else? HN? Social media? Draw lines on a paper sheet? Wood carving? Exercise? Rewatch some old TV series?

I have experimented….

There are side activities that help you go back to the task at hand in the correct mental framework for it. Not just for productivity, but for efficiency and enhancing critical thinking on the main task. Or whatever you choose to optimize for. Can anyone point me towards some people talking about this?

reply
100% agreed. I learn coding by building stuff and breaking it; when you let AI do everything, you skip that pain and also skip the understanding.
reply
> Why is it a bad thing to know all the ins and outs of a programming language? To write and make all the decisions yourself? That shit is fun.

It's not just fun (I agree it is), it is also essential for creation.

What we have done with the 'AI' is to create a lot of ignorant morons who think they can create a lot of things without knowledge. This is not gonna end well.

reply
> they can create a lot of things without knowledge. This is not gonna end well.

Who said "managers"

reply
Oh, managers are not the biggest evil here. At least they know the basics.

Now we have an influx of people without a single shred of technical knowledge thinking they can create something.

reply
When I started spending 40-60 hours a week programming and wanted to spend my remaining time doing other things.
reply
I imagine my future will involve spending 40–60 hours a week using LLMs to do the work of multiple roles instead of just one, while wishing I could spend my remaining time doing other things.
reply
Some people actually don't really like to learn new things. If the machine spits out plausible working code, they'd be perfectly happy with that. Personally I think AI is doing a lot more harm than good and I can't wait for the bubble to burst.
reply
I don't think it's going to burst the way other people expect. The technology is already out there; when it loses steam, people aren't suddenly going to stop using it. I predict it'll be more like the dot-com crash, where the companies that can survive the downturn come out dominant.
reply
It ends like this: all codebases become unmaintainable spaghetti after agentic AI spends years on them. Then, once every agent in existence needs a minimum of 24 hours reading the codebase just to add a simple feature, the software is abandoned.
reply
I believe most codebases were "unmaintainable spaghetti" even before LLMs; it depends on how you define it, though.

To me, it means expensive to evolve.

reply
Let those who want to learn go learn. And let those who just want something that works well enough without having to learn get it.
reply
To use an analogy, LLMs are like the Ring of Power in Lord of the Rings. The Ring of Power does not corrupt one nor does it magically turn one evil. Rather, the Ring just serves as a catalyst for what is already inside the bearer.

Many that wore the Ring had pure and righteous intentions. The thought "If I were in power, I would..." is exactly the arrogance and corruption that the Ring amplifies.

So, I cannot agree that it is AI doing the harm. Rather, AI just gives us the power to do the harm, the shortcuts, the cheats, etc. we have always desired. And just like the Ring, I believe much of the harm from LLMs often comes from people that started with good intentions, and the power it grants is just too tempting for many.

reply
Agree except for this part

> If you're at work and they really care about getting something out of the door, do whatever you think is best.

If you don't mind being jobless, sure, do whatever you think is best. Not all of us can simply switch companies easily. Folks need to realise that AI in a company setting works for the benefit of the company, not the individual.

reply
But do companies really know how to use AI? I think most of it is experimentation: throwing things at the wall and seeing what sticks.

It's the practitioner who eventually figures out what really works. I see this the same way the agile movement emerged: it was initiated by people who were hands-on programmers, and it showed enough benefit at minimizing software waste before it took on a life of its own and started getting peddled by people who didn't really understand the underlying principles.

reply
Except those are the same people who will decide who gets hired, and who gets laid off because of increased productivity.

And no, this isn't playing what ifs.

I have seen it happen with offshoring, migration to the cloud, serverless, SaaS and iPaaS products, and now AI-powered automations via agents.

Fewer DevOps people, fewer backend devs, no translation team, no asset-creation team, ...

I have been laid off a few times, having to do competence transfers to offshore teams; the quality of the output is something C-suites don't care about at all.

Do you wanna bet what is behind Microslop, Apple Tahoe bugs and so forth?

reply
Thanks for this take; it articulates what I've been feeling towards "AI" without my angst.
reply
deleted
reply
100% agree!
reply
[flagged]
reply
[dead]
reply
[dead]
reply
[dead]
reply