I suspect it happened when we reached a level of constant stimulation (there is a pocket computer always on us, with infinite effortless distraction) where we’re never bored and never engage the default mode network.
https://en.wikipedia.org/wiki/Default_mode_network
https://www.youtube.com/watch?v=orQKfIXMiA8
When you’re bored, your mind goes to places it wouldn’t otherwise go. Curiosity kicks in. Curiosity is a precursor to learning. Learning engages the brain and is fun. But it’s not fun all the time; some of it is challenging and frustrating (which is good, that’s the process that teaches you).
When you have the digital equivalent to infinite candy and the brain equivalent to a sweet tooth, it’s hard to resist the siren’s call. The consequence is the brain equivalent to a stomachache—depression and loss of meaning—but unfortunately it doesn’t hit you the same way so you don’t make the immediate connection to make yourself stop. When you think about it, it’s ridiculous from several angles: the candy is infinite, it’s never going to run out, so you don’t need to gorge! But then we justify ourselves as only a true addict would, that while the candy is infinite, the flavours are limited editions and always rotating, and what if I miss that really good one everyone is on?! Then you miss it, is the answer. No one will be talking about it in fifteen minutes anyway.
I don't know... I don't disagree, but I think this has been repeated so much that I believe everyone, at least everyone actively participating in HN discussions, is aware of this.
So if we are aware of this and we consciously choose to keep engaging in dopaminergic activities, without having some time to be bored, I think it starts to become a choice. We can blame tech for starting this trend of stealing our attention, but once we become aware of this, we can only blame ourselves for perpetuating it.
> when you know how the thing works and have that mental context, you will always be faster than an AI
That's just plain false, honestly. No one can type at the speed AI can code, even factoring in the time you need to spend to properly write out the spec & design rules the AI needs to follow when implementing your app/feature/whatever. And that gap will only increase as LLMs get more intelligent.
The more specific your requirements, the closer you get to natural language not being useful anymore.
What do you use them for? For most AI users it's usually CRUD, and I've never seen a web server or frontend written in APL-like languages.
Programming is hard because most languages force you to use a hammer when you need a screwdriver. LLMs are very good at misusing hammers, and most people find them useful for that reason.
If you use a sane DSL instead, the natural-language description of a problem is always more complex and much longer than the equivalent description in the DSL. It's also usually wrong, to boot.
This is what algebra used to look like before variables: https://en.wikipedia.org/wiki/Archimedes%27s_cattle_problem#...
I don't think you will find anyone who can do better than an LLM at one-shotting the prose version of the problem. Both will of course be wrong.
But I also don't think you will find an LLM that can solve the problem faster than a human with Prolog when both have to work from the prose description of the problem.
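To make the contrast concrete, here's a rough sketch of just the easy bulls-only part of the cattle problem, in sympy rather than Prolog (my choice of tool, and the herd names are mine). The formal statement is a few lines; the prose runs for paragraphs:

```python
from sympy import Rational, lcm, solve, symbols

# Bulls-only part of Archimedes' cattle problem, transcribed from the prose.
W, B, D, Y = symbols("W B D Y", positive=True)
eqs = [
    W - Rational(5, 6) * B - Y,    # white   = (1/2 + 1/3) black   + yellow
    B - Rational(9, 20) * D - Y,   # black   = (1/4 + 1/5) dappled + yellow
    D - Rational(13, 42) * W - Y,  # dappled = (1/6 + 1/7) white   + yellow
]
sol = solve(eqs, [W, B, D])        # each herd expressed in terms of Y

# Scale Y to the smallest value that makes every herd a whole number.
scale = lcm([v.subs(Y, 1).q for v in sol.values()])
print({Y: scale, **{herd: v.subs(Y, scale) for herd, v in sol.items()}})
# -> {Y: 891, W: 2226, B: 1602, D: 1580}
```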
But in general the statement is really not true anymore: generic projects/problems have a pretty good chance that the AI can one-shot a working solution from a lazily typed, vague prompt.
The number of people successfully adopting agentic engineering practices suggests this stuff isn't rocket science, but it is a learned skill and takes setup.
A year into heavy AI coding, my experience is that what you're describing should help you run 5+ agents simultaneously on a project, because you know what you're doing, you've set it up right, and you know how to tell the agents to leverage that properly.
What's your definition of "successfully"?
More LOC committed per day is probably the only one that's guaranteed when you let spicy autocomplete take the wheel.
I don't think it's at all possible to reason about the other, more meaningful metrics in software development, because we simply don't have the context of what each human is working on. And as with the WYSIWYG fad of three decades ago, "success" is generally self-reported, by people who don't know what they don't know, and thus don't know what spicy autocomplete is getting woefully wrong.
"But it {compiles,runs,etc}" isn't a meaningful metric when a large portion of the code in question is dynamic/loosely typed in a non-compiled language (JavaScript, Python, Ruby, PHP, etc).
Translating that into code can happen directly, by you, or through prompt iterations that need to result in the same or similar coded representation.
In other words, when it matters how something works and it is full of intricate details, you do not need to specify it, you just do it. For example (probably not the best one): if you know how to avoid the N+1 query performance issue, you do not need a ticket or spec to be explicit about it; you just do it at no extra effort. Models are probably OK at this one since it is such a pervasive gotcha, but there are so many more like it.
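For anyone who hasn't internalized that particular gotcha, a minimal sketch (made-up schema, plain sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, author_id INTEGER, title TEXT);
    INSERT INTO authors VALUES (1, 'Ann'), (2, 'Bo');
    INSERT INTO books VALUES (1, 1, 'A1'), (2, 1, 'A2'), (3, 2, 'B1');
""")

# N+1: one query for the authors, then one extra query per author.
for author_id, name in conn.execute("SELECT id, name FROM authors"):
    titles = conn.execute(
        "SELECT title FROM books WHERE author_id = ?", (author_id,)
    ).fetchall()

# The fix you "just do": a single JOIN, one round trip.
rows = conn.execute(
    "SELECT a.name, b.title FROM authors a JOIN books b ON b.author_id = a.id"
).fetchall()
```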
You set up the environment and then you do the work. Unless you are switching employers every week, you invest in writing that stuff down so the generation is right-ish, and generate validation tooling so it auto-detects the mistakes and self-repairs.
Imagine you have to implement a specific algorithm for a quantum computer.
There's no value in setting up AI to do the writing for you. That might be orders of magnitude harder than writing the algorithm directly.
For highly specialized one-off features, it doesn't always pay off.
On the other hand, if all you do are generic items that AI can do well... then I'm not sure you're going to have a job long term; your prompts and automation will be useful to the new junior hires who will be specialized in using these tools and cost-effective.
It's like talking legalese vs plain English; or formal logic vs English. Some people have the formal stuff come more naturally, and then spitting code out is not a burden.
To give an example of where I hear this: it is indistinguishable from the things I hear from my coworkers: "You just need the right setup!" (IMO the actual difference is that I need to turn off the part of my brain that cares about what the code actually does or considers edge cases at all.) What I actually see, in practice, are constant bugs where nobody ever addresses the root cause; instead they pave over it with a new Claude mass-edit that inevitably introduces another bug, and we repeat the same process when we run into the next production issue.
We end up making no actual progress, but boy do we close tickets, push PRs, and move fast and oh man do we break things. We're just doing it all in-place. But at least we're sucking ourselves off for how fast we're moving and how cutting edge we are, I guess.
I dunno, maybe I'm doing it wrong, maybe my team is all doing it wrong. But like I said, the things they say are indistinguishable from the common HN comment insisting this stuff is jet fuel for them, and I see the actual results, not just the volume of output, and there's no way we're occupying the same reality.
But man AI is phenomenal for getting stuff out of your head and working quick.
Current AI systems are extremely serial, in that very little of the inherent parallelism of the problem is utilized. Current-gen systems run at most a few hundred thousand operations in parallel, while for frontier models, billions of operations could run in parallel. In other words, what currently takes AI 8 hours will take it barely long enough for you to perceive the delay after you release the enter key.
For a demo, play around with https://chatjimmy.ai/, the AI chatbot of Taalas, who etched the model into silicon in a distributed way instead of storing it in RAM and sucking it to the execution units through a straw. It's an 8B-parameter model, so it's unsuitable for complex problems, but the techniques used for it will work for larger models too, and they are working to get there.
And even Taalas is very far from the limits. Today's better-quality LLM chatbots operate at ~40 tokens per second; the Taalas chatbot operates at 17,000 tokens/s. If you took full advantage of parallelism, you should be able to get a latency of low hundreds of clock cycles per token, or single-request throughput of tens of millions of tokens per second (with a fully pipelined model able to serve one token per clock cycle, from low hundreds of requests).

Why doesn't everyone do it like that right now? Because to do this, you need to etch your model into silicon, which on modern leading-edge manufacturing is a very involved process that costs hundreds of millions+ in development and mask costs (we are not talking about single chips here; you can barely fit that 8B model into one) and takes around a year. So long as the models keep improving so much that a year-old model is considered too old to pay back the capital costs, the investment is not justified. But when it is done, it will not just make AI faster, it will also make it much more energy-efficient per token. Most of the energy cost comes from moving data around and loading/storing it in memory.
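Back-of-envelope on those figures (the size of the "8 hour" job is my own assumption):

```python
serial_rate = 40                     # tokens/s, today's chatbots
job_tokens = 8 * 3600 * serial_rate  # an "8 hour" job ~= 1.15M tokens
parallel_rate = 10_000_000           # tokens/s, fully pipelined silicon
print(job_tokens / parallel_rate)    # ~= 0.115 s: barely perceptible
```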
And I want to stress that none of the above depends on any kind of new developments or inventions. We know how to do it; it's held back only by the pace of model improvement and economics. When models reach a state of truly "good enough", it will happen. It feels perverse to me that people are treating this situation as "there was a pre-AI period that worked like X, now we are in a post-AI period and we have figured out that it will work like Y". No. We are at the very bottom of a very steep curve, and everything will be very different when it's over.
Everything I do to interact with my computer is through an agent now.
Everything I do to interact with my computer is still the same.
See how boring you are?
Telling the agent a high-level plan you are extremely familiar with and then having the agent execute on 2000 lines of code is FASTER than executing on those 2000 lines of code yourself. There is no reality where that can be physically beaten, even by someone who's typing really quickly with zero pause. Physically impossible.
Less boring or not? Another way to put it... although my answer is boring, I think I'm right. He is either a liar or, like many other people, lacks skill in using AI... because the transition to AI is happening so fast, not many people are fully utilizing AI to its maximum potential. Many still use IDEs, many still interact with the terminal. Many people still don't use it to configure infrastructure, do database administration, deploy code... etc.
I know that a better plan could mean fewer iterations, but again that extends the time you need to spend on that plan => the total time of the AI solution
Honestly, I'm still faster than AI at cooking scrambled eggs, but definitely not faster than either an AI or a compiler at translating stuff into code.
No way you beat an LLM on this, even on trivial ones. LLMs have been better at that since at least 2024; if you haven't noticed, then perhaps you're not doing enough SQL.
But of course, it took years for people to realize they could not outpace Visual Studio in the 90s by being very good at x86 assembly.
But during feature development? Not possible. And I consider myself a very fast developer
Bugs happen during feature development, as you say, but then Claude has the context and I don't need to tell it where to go; it sees the bug via failing tests or something similar.
BTW. One thing that helps my Claude with debugging harder problems is that I tell it to apply the scientific method to debugging: generate hypotheses, gather evidence for and against each, write to a journal file debug-<problem>.md, and design minimal experiments to debunk hypotheses.
You can add that as a skill, and sometimes it will pick it up automatically, but it works wonders just as a single sentence in the input.
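Roughly this, in my wording (tweak to taste):

```
Debug this using the scientific method: generate hypotheses, gather
evidence for and against each one, keep a journal in debug-<problem>.md,
and design minimal experiments that could debunk each hypothesis.
```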
Besides, it is a system that you query, and it responds. I'm sure your DBs are not always 'right' either, particularly when you ask the wrong questions.
You speak as if AI development is frozen, and you ignore the poster's point:
> that gap will only increase as LLMs get more intelligent
I think I will stick with slow iteration, and I recognize that this is mostly against the 'agentic' push.
I can say with full confidence that the code AI writes is more robust and safe than if I had written it myself. The code definitely becomes more bloated, though.
Don't we already have a weekly post nowadays explaining, again, that typing isn't the bottleneck?
Going fast isn’t the difficult bit.
You can definitely be faster than frontier models. The number of tokens per second is not that high and they require a lot of tokens for thinking and navigating things.
The Spicy Autocomplete koolaid club is out in force today I see.
We clearly have different ideas of what the word "intelligent" means.
When the economy got so bad for so many people that every waking moment has to be spent either chasing fresh cash, or recovering from cash-chasing and worrying about the next cash, to the point that they have to largely ignore their own long-term goals or basic morals or principles.
You can blame all the new gadgets (phones/social media/TikTok/‘dopamine-things’), but that's very much blaming the symptom, not the problem.
(It’s the meme. “Guys, this isn’t funny. Humans only do this when they’re very distressed”)
I think the exponent of 2 is probably too high, but it's not a bad approximation of a very messy reality.
There is also the division between people who value the thing being produced vs. those who value the actual production of that thing, whether or not it's used. I don't see one side here being "right", necessarily, but when a company is behind it, one is certainly more valued, and I think not incorrectly.
But there are other nerds who care, just not about the code quality, but about conversion, testing out business ideas quickly, getting to know their customers better.
There are nerds who care about business strategy.
There are nerds who care about accounting principles and clean financial reporting.
There are nerds who care about sales targets and partnerships.
There are many types of nerds out there. Don't limit nerds to engineers, because the "tech" world is not just an engineering world anymore. You can team up with all these nerds to build meaningful things, because they do care.
(Though, granted, the results are a lot better if you craft it by hand)
How often have engineers decried yet another rewrite that some project is doing? Or talked about "over-engineering" something that isn't needed, or complained that another person on the team has set up a full Kubernetes GitOps thing that's glorious to them, when you just want to scp a Go binary and be done with it?
I've seen truly excellent engineers hit this issue. Years ago I worked in a team where people disagreed on the approach to take on a new project, so we all made a prototype and presented it so we could pick a direction. There was a requirement that it be done in Ruby, since that was the language most of the developers were most fluent in. One of the engineers, remarkably smart, wrote a Lisp interpreter in Ruby so that technically it'd be "in Ruby" but have the benefits of Lisp.
He cared about the quality and process in one area. Deeply. However, focussing on that would have been to the detriment of the rest of the actual product we wanted to ship. If you considered the quality of the product as a whole, and the process at the level of the organisation, you'd do something very different.
Now, none of this means all business people are good at this or long term vision or anything, just as it doesn't mean all engineers have a very narrow focus. But I've seen engineers focus on the quality or engineering of some component without looking at what it is you're actually trying to achieve as a business, and so push for a worse overall process and lower "quality" result. It's the same sort of disconnect that leads a lot of engineers to rail against meetings and PMs that slow them down without seeing from the other side that it's often better to build the right thing more slowly than the wrong thing more quickly.
This means different things to different people. Lots of people enjoy the process of engineering solutions with LLM agents, building out tailored skills and custom approaches that make up their own flavour of "agentic" workflow. There are also people who find joy in JavaScript, and other people cannot understand why. And others again love systems languages, or even tinkering with assembly, etc.
What I wanted to say is that LLM use does not automatically mean people just want to get results faster; there are still nerds enjoying the process of working with these new tools.
But the real trick isn't "number of personal projects", but how weird they are. There's no "rational" reason to do them, they don't increase the person's marketability / hireability. They are done purely for intrinsic reasons.
(On reflection, this also seems to be a pretty robust predictor of autism. :)
and the whole world suffers for it.
I played with Image Playground last year some time. It was really fun. You know why? I can't draw, and I can't paint, to save my life. It's letting me do something I can't do well/at all on my own.
Using an LLM to do something I can do, with the caveat that it's pretty mediocre at the task, and needs to be constantly monitored to check it isn't doing stupid things? If I wanted that I'd just get an intern and watch them copy crappy examples from StackOverflow all day.
The same logic explains the use of LLMs to write emails and other long-form text.
It makes accessible something that people otherwise cannot do well. Go look at submissions on community writing sites. The people who write because they're good at it are adamant they don't use an LLM.
People use LLMs to do things they're otherwise not able to do. I will die on this hill.
That would imply that either the person in question has infinite time, or has access to all software that could ever be of utility to them, which seems unlikely.
I'd have thought someone who's so enamoured with the tech would have at least a basic understanding of how it works.
So yeah, I guess the value of doodles has shot up simply because of optics.
Somewhere else in this comment section someone tried to broaden the definition of nerd so much that pretty much anybody who is a consummate professional is also a nerd. The hill I will die on is that people don't actually dislike all this new AI stuff so much as the attitude of the people heavily invested in it.
And to add another data point regarding your hill my drawing/painting moment was NLP stuff. Now if I want to do (rudimentary) sentiment analysis or keyword extraction I can lean on a local LLM. Yet I don't go around yelling Snowball (I think?) is obsolete.
Exactly.
LLM bros are just the new blockchain/crypto bros, but they aren't necessarily even writing their own spruiking comments any more.
So you've evaluated all the sources that the model was trained on initially have you? How long did that take you?
> I'm shipping quality software and features to my customers at a pace I haven't been able to before.
I'm sorry are you agreeing with me or not? It sounds like you're agreeing with me.
Good code for a business is robust code that's functionally correct, efficient where it needs to be, and doesn't cost too much.
I believe most developers who care about good code are trying to articulate this, they care about a strong system that delivers well, which comes from good architecture.
LLMs actually deliver pretty well on the more trivial code-cleanliness stuff, or can be made to pretty trivially with linters, so I don't think devs working with them should be worried about that aspect.
What is changing fast is that last point I mentioned, "that doesn't cost too much" because if you can get 70% of the requirements for 10% of the perceived up front cost, that calculus has changed. But you are not going to be getting the same level of system architecture for that time/cost ratio. That can bite you later, as it does often enough with human coders too.
If one compares a single competent software engineer directing a number of agents against a random group of engineers (not necessarily working at FAANG or a YC startup), then those quality arguments are going to be significantly less compelling.
To split the difference, I now try to hand code as much as I can from the beginning, leave TODO comments for the agent to mop up and I'll ask it to complete the issue with reference to the current diff. It reduces the surface for agents to make stupid assumptions. If I can get it done fast on my own, win for me, if the agent finds issues or there's logic that needs checking, also a win. This way you stay sharp, but you have access to an oracle if you get stuck and it costs you fewer tokens.
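A minimal sketch of the shape I mean (the function and task are invented for illustration):

```python
import csv

def export_report(rows: list[dict], path: str) -> None:
    # Hand-written shell; the agent mops up the marked gap, with the
    # current diff as its reference for style and conventions.
    header = ["id", "name", "total"]
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=header)
        writer.writeheader()
        # TODO(agent): validate that each row has exactly these keys
        # (raise ValueError otherwise), then write the rows.
        raise NotImplementedError
```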
I do not know the ins and outs of the assembly layer my high-level code ends up as. It's not because I don't like to learn; it's because I genuinely don't need to. At a certain level of AI performance, how will this be any different?
An LLM is not coupled to anything and can generate output that simply does not relate to the input. This doesn’t happen with compilers, and if it does, then it’s a specific bug to be addressed. An LLM can never guarantee certain output based on the input.
If I write x < 100, I know exactly how the compiler will treat that code every single time, and I know what < means and how it differs from <=
If I tell an LLM "I want numbers up to 100", will that give me < or <=, and will it be consistent every single time, even in the ten-thousandth program that I write?
The language is ambiguous where the code is specific
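Concretely, both readings of that prompt are defensible:

```python
# "I want numbers up to 100" -- which one did you mean?
exclusive = list(range(100))   # 0..99,  i.e. n < 100
inclusive = list(range(101))   # 0..100, i.e. n <= 100
```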
I have a co-worker on another team who writes Java endpoints we consume. I can tell him what I need and I trust the output. I don't need to know Java to trust him; it doesn't mean I don't want to learn.
There are a thousand examples like this across every stack and abstraction level, from SSH handshakes to GPS.
Sure my co-worker is fundamentally different from a compiler which is fundamentally different from an LLM.
My argument is that the chain-of-trust where you offload knowledge to an external source is identical. We do it all the time but somehow doing it with an LLM means we no longer want to learn?
Multibillion-dollar companies are now the gateway for every line of code you need to write. That's dystopian. It sucks.
I made a checklist for my kids to stamp off items after they get back from school (sort bag, get changed, etc.). I had two goals: 1) I was trying to solve a problem at home, and I would happily have pip-installed a library that just straight up did this already, and 2) I wanted to check out what the Claude website's output was like at the time. My time was best spent poking at Claude a bit but mostly playing with my kids, so vibe coding it was.
Client test speedup issues: I'm trying to speed up tests for them while spending as little time as possible doing so. I vibe coded some analysis and visualisation tools, mostly AI but with some review; I guided multiple prototypes for timing and let it just fix whatever. More dedicated review for the actual solutions.
Learning a new thing - the goal is to learn that thing. AI there is good for doing a lot of the work around it. Maybe I'm focussing on, say, Z3. AI can help with debugging, finding docs, and setting up an environment, and leave me to do the central part.
The problem is mixing vibe coding and agentic engineering, and switching the brain between 2 different modes (fast-feedback gratification vs deep-focus gratification).
There’s no clear-cut rule on what works. Different people, different brains, and, especially amongst devs, some degree of low-key neurodivergence.
And then there’s waiting mode, those N seconds/minutes that agents take to think and write.
What’s the right mix? Keep a main focused project and … what do you do in the meantime? Vibe code something else? HN? Social media? Draw lines on a sheet of paper? Wood carving? Exercise? Rewatch some old TV series?
I have experimented….
There are side activities that help you go back to the task at hand in the correct mental framework for it. Not just for productivity, but for efficiency and enhancing critical thinking on the main task. Or whatever you choose to optimize for. Can anyone point me towards some people talking about this?
It's not just fun (I agree it is), but it is also essential for creation.
What we have done with the 'AI' is to create a lot of ignorant morons who think they can create a lot of things without knowledge. This is not gonna end well.
Who said "managers"?
Now we have an influx of people without a single shred of technical knowledge thinking they can create something.
To me, it means expensive to evolve.
Many who wore the Ring had pure and righteous intentions. The thought of "If I were in power, I would..." was the arrogance and corruption which the Ring amplifies.
So, I cannot agree that it is AI doing the harm. Rather, AI just gives us the power to do the harm, the shortcuts, the cheats, etc. we have always desired. And just like the Ring, I believe much of the harm from LLMs often comes from people who started with good intentions; the power it grants is just too tempting for many.
> If you're at work and they really care about getting something out of the door, do whatever you think is best.
If you don’t mind being jobless, sure do whatever you think is best. Not all of us can simply switch companies easily. Folks need to realise that AI in a company setting works for the benefit of the company, not for the individual.
It's the practitioners who eventually figure out what really works. I see this going the same way the agile movement went: it was initiated by people who were hands-on programmers and showed enough benefit at minimizing software waste before it took on a life of its own and started getting peddled by people who didn't really understand the underlying principles.
And no, this isn't playing what ifs.
I have seen it happening with offshoring, migration to cloud, serverless, SaaS and iPaaS products, and now AI powered automations via agents.
Fewer DevOps people, fewer backend devs, no translation team, no asset creation team, ...
I have been laid off a few times and had to do competence transfer to offshoring teams; the quality of the output is something C-suites don't care about at all.
Do you wanna bet what is behind Microslop, Apple Tahoe bugs and so forth?