I see my job as having many aspects. One of those aspects is coding. It is the aspect that gives me the most joy even if it's not the one I spend the most time on. And if you take that away then the remaining part of the job is just not very appealing anymore.

It used to be I didn't mind going through all the meetings, design discussions, debates with PMs, and such because I got to actually code something cool in the end. Now I get to... prompt the AI to code something cool. And that just doesn't feel very satisfying. It's the same reason I didn't want to be a "lead" or "manager", I want to actually be the one doing the thing.

reply
You won't be prompting AI for the fun stuff (unless laying out boring boilerplate is what you consider "fun"). You'll still be writing the fun part - but you will be able to prompt beforehand to get all the boilerplate in place.
reply
If you’re writing that much boilerplate as part of your day-to-day work, I daresay you’re Doing Coding Wrong. (Virtue number one of programming: laziness. https://thethreevirtues.com)

Any drudgework you repeat two or three times should be encapsulated or scripted away, deterministically.
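
For the sake of argument, here's a toy sketch of what "scripted away, deterministically" means, assuming the drudgework is scaffolding yet another handler class (all names here are hypothetical):

    # new_handler.py (toy example) -- emits the same boilerplate every time, no LLM involved
    import sys
    from string import Template

    TEMPLATE = Template('''class ${name}Handler:
        """CRUD handler for ${name} (generated)."""
        def get(self, id): ...
        def create(self, payload): ...
        def delete(self, id): ...
    ''')

    if __name__ == "__main__":
        # usage: python new_handler.py Invoice > invoice_handler.py
        print(TEMPLATE.substitute(name=sys.argv[1]))

Same input, same output, every time, which is exactly what a prompt can't guarantee.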

reply
There are many tens (hundreds?) of billions of dollars being poured into the smartest minds in the world to push this thing forward.

I'm not so confident that it'll only be code monkeys being replaced for long.

reply
Until they can magically increase context length to a size that conveniently fits the whole codebase, we're safe.

It seems like the billions so far mostly go to talk of LLMs replacing every office worker, rather than any action to that effect. LLMs still have major (and dangerous) limitations that make this unlikely.

reply
Models do not need to hold the whole code base in memory, and neither do you. You both search for what you need. Models can already memorize more than you!
reply
> Models do not need to hold the whole code base in memory, and neither do you

Humans rewire their minds to optimize for the codebase; that is why new programmers take a while to get up to speed. LLMs don't do that, and until they do, they need the entire thing in context.

And the reason we can't do that today is that there isn't enough data in a single codebase to train an LLM to be smart about it, so first we need to solve the problem that LLMs need billions of examples to do a good job. That isn't on the horizon, so we are probably safe for a while.

reply
Getting up to speed is a human problem. Computers are so fast they can 'get up to speed' from scratch for every session, and we help them with AGENTS files and newer things like memories; e.g., https://code.claude.com/docs/en/memory

It is not perfect yet, but the tooling here is improving, and I do not see a ceiling: LSPs + memory solve this problem. I run into issues, but this is not a big one for me.
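
To make that concrete, here's roughly the kind of thing I keep in a project memory file (my own made-up example; see the linked docs for the actual format):

    # Project memory, e.g. a CLAUDE.md at the repo root (illustrative)
    - Run tests with `make test`, never by calling the test runner directly
    - All payment logic lives in billing/; don't duplicate it elsewhere
    - Services are wired via dependency injection; never instantiate them inline

The agent re-reads this at the start of every session, which is why "getting up to speed from scratch" is cheap for a computer in a way it isn't for a human.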

reply
I’ll believe it when coding agents can actually produce concise and reusable code instead of reimplementing 10 slightly different versions of the same basic thing on every run. (This is not a rant; I would love for agents to stop doing that, and I know how to make them stop: a proper AGENTS.md that serves as a table of contents for where stuff is, as sketched below. My point is that as a human I don’t need this, and yet for now they still do.)
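
The kind of AGENTS.md I mean looks something like this (paths invented for illustration):

    # AGENTS.md (illustrative)
    ## Where stuff already lives
    - HTTP retry/backoff helpers: src/net/retry.py (use these, don't reinvent)
    - Date/time parsing: src/util/dates.py
    - Feature flags: src/flags.py (check here before adding a new toggle)
    Before writing a new helper, search the files above for an existing one.

With that in place the agent usually finds version 1 instead of writing version 11.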
reply
In my experience they can definitely write concise and reusable code. You just need to say to them “write concise and reusable code.” Works well for Codex, Claude, etc.
reply
Writing reusable code is of no use if the next iteration doesn’t know where it is and rewrites the same (reusable) code again.
reply
I guide the AI. If I see it produce stuff that I think can be done better, I either just do it myself or point it in the right direction.

It definitely doesn't do a good job of spotting areas ripe for building abstractions, but that is our job. This thing does the boring parts, and I get to use my creativity thinking about how to make the code more elegant, which is the part I love.

As far as I'm concerned, what's not to love about that?

reply
If you’re repeatedly prompting, I will defer to my usual retort when it comes to LLM coding: programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language. It’s generally much faster for me to write the terse language directly than play a game of telephone with an intermediary in the verbose language for it to (maybe) translate my intentions into the terse language.

In your example, you mention that you prompt the AI and if it outputs sub-par results you rewrite it yourself. That’s my point: over time, you learn what an LLM is good at and what it isn’t, and just don’t bother with the LLM for the stuff it’s not good at. Thing is, as a senior engineer, most of the stuff you do shouldn’t be stuff that an LLM is good at to begin with. That’s not the LLM replacing you, that’s the LLM augmenting you.

Enjoy your sensible use of LLMs! But LLMs are not the silver bullet that the billions of dollars of investment desperately want us to believe they are.

reply
> programming is about translating unclear requirements in a verbose (English) language into a terse (programming) language

Why are we uniquely capable of doing that, but an LLM isn't? In plan mode I've been seeing them ask for clarifications and gather further requirements.

Important business context can be provided to them, too.

reply
An LLM isn’t (yet?) capable of remembering a long-term representation of the codebase. Neither is it capable of remembering a long-term representation of the business domain. AGENTS.md can help somewhat but even those still need to be maintained by a human.

But don’t take it from me - go compete with me! Can you do my job (which is 90% talking to people to flesh out their unclear business requirements, and only 10% actually writing code)? If so, go right ahead! But since the phone has yet to stop ringing, I assume LLMs are nowhere near there yet. Btw, I’m helping people who already use LLM-assisted programming and reach out to me because they’ve hit its limits and need an actual human to sanity-check.

reply
> the smartest minds in the world

Dunning–Kruger is everywhere in the AI grift. People who don't know a field try to deploy some AI bot that solves the easy 10% of the problem so it looks good on the surface, and assume that just throwing money (which mostly just buys hardware) will solve the rest.

They aren't "the smartest minds in the world". They are slick salesmen.

reply
Agreed. Programming languages are not ambiguous. Human language is very ambiguous, so if I'm writing something with a moderate level of complexity, it's going to take longer to describe what I want to the AI than to write it myself. Reviewing what an AI writes also takes much longer than reviewing my own code.
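
A tiny illustration of that gap (made-up Python): "remove the duplicates from the list" doesn't say whether order matters or which occurrence wins, but the code has to commit to one answer.

    items = ["b", "a", "b", "c", "a"]

    # "remove duplicates" -- but with which semantics? (illustrative)
    sorted_unique = sorted(set(items))       # ['a', 'b', 'c']  order lost
    keep_first = list(dict.fromkeys(items))  # ['b', 'a', 'c']  first occurrence wins, order kept

Spelling out which one I want in English takes longer than just writing the line.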

AI is getting better at picking up some important context from other code or documentation in a project, but it's still miles away from what it needs to be, and the needed context isn't always present.

reply
I see what these can do and I'm already thinking: why would I ever hire a junior developer? I can fire up opencode and tell it to work on multiple issues at once myself.

The bottleneck becomes how fast you can write the spec or figure out what the product should actually be, not how quickly you can implement it.

So the future of our profession looks grim indeed. There will be far fewer of us employed.

I also miss writing code. It was fun. Wrangling the robots is interesting in its own way, but it's not the same. Something has been lost.

reply
You hire the junior developer because you can get them to learn your codebase and business domain at a discount, and then reap their productivity as they turn senior. You don’t get that with an LLM since it only operates on whatever is in its context.

(If you prefer to hire seniors that’s fine too - my rates are triple a junior’s, and you’re paying full price for the time it takes me to learn your codebase; from experience it takes me at least 3 months to reach full productivity.)

reply
Yes. And I'm excited as hell.

But I also have no idea how people are going to think about what code to write when they don't write code. Maybe this is all fine, maybe it's ok, but it does make me quite nervous!

reply
That is definitely a problem, but I would say it’s a problem of hiring: billions of dollars’ worth of potential market cap rest on performative bullshit, which encourages companies not to hire juniors as a signal to capture some of those billions, regardless of the actual impact on productivity.

LLMs benefit juniors, they do not replace them. Juniors can learn from LLMs just fine and will actually be more productive with them.

When I was a junior my “LLM” was StackOverflow and the senior guy next to me (who no doubt was tired of my antics), but I would’ve loved to have an actual LLM - it would’ve handled all my stupid questions just fine and freed up senior time for the more architectural questions or those where I wasn’t convinced by the LLM response. Also, at least in my case, I learnt a lot more from reading existing production code than writing it - LLMs don’t change anything there.

reply