> Same, if anything, the opposite seems to be true, the ones that I'd call "good engineers" were slower

Unfortunately, a lot of workplaces are ignoring this, believing their engineers are assembly line workers, and the ones who complete 10 widgets per minute are simply better than the ones who complete 5 widgets per minute.

reply
It isn't just that they believe this - they want a business model where this is how it works. For a big company, a star coder is a liability: they have strong labor power, they can leave at any time, and they are hard to replace.

Companies want workflows that work with mediocre programmers because they are more like interchangeable parts. This is the real secret to why AI programming will work in a lot of places. If you look at the externalities of employing talented people, shitty code actually looks better than great code.

reply
To these kinds of companies, what's even better than a rack of mediocre programmers? AI agents that you can just conjure up and prompt. They take up no facility space, don't require lunch breaks or vacations, obey all commands and direction, and produce a predictable and consistent amount of output per dollar.

This is the earworm the leaders of these companies have allowed into their minds. Like Agent Mulder, they Want To Believe in this so badly...

reply
> This is the earworm the leaders of these companies have allowed into their minds. Like Agent Mulder, they Want To Believe in this so badly...

If you assume they are not idiots and analyze the FOMO incentives with a little game theory, it becomes clear why.

Assuming the competition has adopted AI, leadership can either ignore it or pursue it. If they adopt it, then they are level with the competition whether AI actually succeeds or fails - they get to keep their executive job.

If leadership ignores AI, and it actually delivers the productivity gains to the competition, they will be fired. If they ignore AI and it's a bust, they gain nothing.
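The argument above amounts to a simple payoff matrix from the executive's personal point of view. Here is one hypothetical way to write it down; the numeric payoffs are purely illustrative (my own, not from the comment), but they capture the claimed incentive structure:

```python
# Executive's personal payoff, assuming the competition adopts AI.
# Keys: (our choice, whether AI actually delivers).
# Values: illustrative job-security scores, not real data.
payoffs = {
    ("adopt", "ai_works"): 0,    # level with competition, keep the job
    ("adopt", "ai_bust"): 0,     # everyone wasted money equally, keep the job
    ("ignore", "ai_works"): -1,  # competition pulls ahead, exec gets fired
    ("ignore", "ai_bust"): 0,    # vindicated, but no personal reward
}

def best_choice(payoffs):
    # Pick the choice with the best worst-case outcome (maximin).
    # "adopt" weakly dominates "ignore": it is never worse in any
    # outcome, so a job-preserving executive adopts regardless of
    # whether they personally believe AI will deliver.
    choices = {c for c, _ in payoffs}
    outcomes = {o for _, o in payoffs}
    return max(choices, key=lambda c: min(payoffs[(c, o)] for o in outcomes))

print(best_choice(payoffs))
```

The key feature is that all of the downside of ignoring AI is personal (getting fired), while none of the upside of being right about a bust accrues to the executive.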

reply
If AI turns out to be a bust, ignoring it could become a significant win. One possible outcome of AI adoption is that existing code bases are degraded, and existing programmer capability is allowed to atrophy. In that situation, companies that adopt AI lose out relative to companies that eschew it.
reply
What if the outcome is the competition burns their money on LLM usage for little to no gain? If you're an exec and you jumped into LLMs as well then you also lose any advantage you would have had by saving your money or hiring a few more humans.
reply
> What if the outcome is the competition burns their money on LLM usage for little to no gain?

The company does better than the money-burning competition, but the executives personally gain nothing; there are no bonuses just because the competition took a misstep.

reply
Yeah but does this work? Are there companies doing this successfully?
reply
It's also true that a lot of the time, it doesn't even matter how shitty the code is. For example, I'm locked in to a company whose web "app" hasn't functioned for me for the vast majority of the last two to three years. I can't drop them without effectively quitting my job. So they still get my business.
reply
Glad I find myself employed under a division called Research and Development. Poaching and retaining highly compensated individuals is the entire purpose.
reply
Bingo. This is something that many people fail to understand.
reply
I think you can understand that line of reasoning, but you can question its feasibility. You might not have any "star coders", nor need them day-to-day, but I think the cost of not having a single true expert, or of running a completely vibe-coded system that crashes in production, will be extremely high.
reply
Which workplaces?
reply
This is true. But I find AI tools to be a huge help for all of this. Not to do any of it faster, but to remove a bunch of the tedium from the process of testing ideas and iterating on them. Instead of "I wonder if the problem is..." requiring half an hour of research, now I can do an initial check of that theory in less than a minute, and then dig further or move on to the next one. Or say I estimate it's gonna take an hour or more to test an idea; previously I might just have decided I didn't have time to invest in that. Now maybe I can get a tentative answer by spending a minute laying out the theory and letting an agent spend ten or twenty minutes on it in the background. In this way I can explore space I previously would have written off as not worth the effort.

To me, none of this feels like "going faster", it feels like "opening up possibilities to try more things, with a lot less tedious work".

reply
Have you ever wondered how people do it without it being tedious for them?

For things that have a visual element, like UI and UX, you can start with sketches (analog or digital), eliminate the bad ideas, and refine the good ones with higher-quality renderings. Then choose one concept and implement it. By that time, the code is trivial. What I've found with LLM usage is that people settle on the first one, declare it good enough, and don't explore further (because that is tedious for them).

The other problems mostly fall into three categories: mathematical, logical, or data/information/communication. For the first, you have to find the formula, prove it is correct, and translate it faithfully to code. But we rarely have that kind of problem today unless you're in a research lab or dealing with floating-point issues.

The second type is more common: you're enacting rules based on axioms that originate from the systems you depend on, which leads to constraints and invariants. Again, I'm not seeing LLMs helping there, as they lack the internal consistency this type of activity demands. (Learning Prolog helps in solving that kind of problem.)
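To make "axiom becomes constraint becomes invariant" concrete, here is a minimal hypothetical sketch (my own example, not the commenter's; the `Charge` domain and its rule are invented for illustration). An axiom from an upstream payment system - "a charge can be refunded only after it has been captured" - is enacted as an invariant checked on every state change:

```python
from dataclasses import dataclass

@dataclass
class Charge:
    captured: bool = False
    refunded: bool = False

    def _check_invariant(self):
        # Invariant derived from the upstream axiom:
        # refunded implies captured.
        if self.refunded and not self.captured:
            raise ValueError("refund before capture violates the invariant")

    def capture(self):
        self.captured = True
        self._check_invariant()

    def refund(self):
        self.refunded = True
        self._check_invariant()

c = Charge()
c.capture()
c.refund()  # fine: the charge was captured first
```

The point of the comment is that an LLM has no trouble emitting code shaped like this, but keeping every such rule mutually consistent across a whole system is exactly where it tends to fall down.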

The third type is about modeling real-world elements as data structures and designing how they transform over time and how they interact with each other. To do it well, you need deep domain knowledge about the problem. If an LLM can help you there, it means one of two things: a) your knowledge is lacking and you ought to talk to the people you're building the system for; or b) the problem is already solved and you'd do well to learn from the solution. (Basically what the DDD books are all about.)

Most problems are a combination of subproblems from those three categories (recursively). But from my (admittedly limited) interactions with pro-LLM users, they don't want to solve a problem, they want it solved for them. So it's not about avoiding tedium; it's sidestepping the whole thing.

reply