I wanted to talk about this more but couldn't quite figure out how to phrase it, so I cut a fair bit: with "incanters" I'm trying to point at a sort of ... intuitive, more informal practitioner knowledge / metis, and contrast it with a more statistically rigorous approach in "statistical/process engineers". I expect a lot of people will fuse the two, but I'm trying to stake out some tentpoles here. Users integrate a continuum of approaches, including individual intuition, folklore, formal and informal texts, scientific papers, and rigorously designed harnesses & in-house experiments. Like farming--there's deep, intuitive knowledge of local climate and landraces, but also big industrial practice, and also research plots, and those different approaches inform (and override) each other in complex ways.
If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.
In the early days, it was going to turn banks from billion-dollar businesses into million-dollar ones. Universities would be able to eliminate most of their admin. Accounting and finance would be trivialized. Etc.
Earlier tech revolutions were unpredictable too... But at least retrospectively they made sense.
It's not that clear what the core activities of our economy even are. It's clear at the micro level, but as you zoom out it gets blurry.
Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.
I said this in response to the example above: humans are needed where accountability is a concern. But that observation is pretty distant from the macro.
If we think of the 19th century economy... it was mostly about food, household products, and suchlike. Now the economy is a lot harder to reason about, and it's easy to miss the forest for the trees... when talking about how technology will affect it.
Accountability is required to work with your payment processor, which works with Visa and Mastercard, which also have their own requirements, etc. Depending on where (if anywhere) paradigm shifts occur... we may or may not even need these functions.
That's why it's so hard to reason our way to predictions about upcoming AI-mediated changes.
This is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" category fall outside of government control. And in countries where bribery/lobbying is legalized or ignored, they have the funds to capture the courts.
"oh I'm sorry your hospital burned down mr plantiff but the electrician was following his professional rules so his liability is capped at <small number> you'll just have to eat this one"
I would wager that a solid half if not more of the economy exists under some sort of arrangement like that.
Sounds to me like following orders is in fact this magical thing that causes courts to direct liability away from the defendant.
We generally don't hold people liable for acts of God or random chance failures. For example, malpractice suits generally need to prove that a doctor was negligent in their duty of care, not merely that the outcome was bad.
Everything in real life has quantifiable risk, and part of why we have governing bodies for many things is because we can improve our processes to reduce the risk.
It's not just following orders :) it's recognizing that the solution to risks isn't to punish the actor but to improve the system.
More accurately, how many jobs are probabilistically mechanical. That is, how many jobs are really the execution of a series of Bayesian decisions with a strong prior. LLMs are really great at displacing such jobs.
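To make that concrete, here's a minimal toy sketch (my own made-up numbers, not from any real system) of what a "Bayesian decision with a strong prior" looks like: a yes/no call where the historical base rate dominates and a handful of fresh observations barely moves the answer.

```python
# Toy Beta-Bernoulli update: the prior is encoded as pseudo-counts,
# so a heavy prior_weight means new evidence barely moves the decision.

def posterior_yes_prob(prior_yes: float, prior_weight: float,
                       observed_yes: int, observed_total: int) -> float:
    """Blend a strong prior with a few fresh observations."""
    alpha = prior_yes * prior_weight + observed_yes
    beta = (1 - prior_yes) * prior_weight + (observed_total - observed_yes)
    return alpha / (alpha + beta)

# Hypothetical job: approving routine requests, where ~95% of past
# cases were approved (prior_weight plays the role of ~200 past cases).
p = posterior_yes_prob(prior_yes=0.95, prior_weight=200,
                       observed_yes=2, observed_total=5)
print(f"P(approve) = {p:.3f}")  # ~0.937 despite 3 bad signals out of 5
decision = "approve" if p > 0.5 else "escalate"  # still "approve"
```

When the prior is that strong, the "decision" is nearly mechanical, and that's exactly the kind of work a model can absorb.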
Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (e.g., generating crappy code) comes down to "well humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer mistakes than a human. So why should I prefer keeping you in the loop?
It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.
We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.
"Software engineer" as a job title has included a lot of people who write near-zero-code, at least at the higher levels of the career ladder, for years prior to LLMs. People assuming the only, or even primary, function of the job is outputting code reveal a profound lack of understanding of the industry in my opinion. Beyond the first year or two it has been commonly accepted that the code is the easy part of the job.
This is something that I would have thought HN readers were pretty familiar with. LLMs can help me produce code faster or more prolifically, but with 30yoe I spend a fairly significant chunk of my work time doing anything but writing code.
Because a machine can never take accountability. If a software engineer has spent the entire year directing AI with prompts that created weaker systems, then that person is on the chopping block, not the AI. Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.
A business leader can though.
> Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.
I think you're missing the point. Why can't an LLM advance sufficiently to be a REAL senior software engineer that a business person/product manager is prompting instead of YOU, a software engineer? Why are YOU specifically needed if an LLM can do a better job of it than you? I can't believe people are so naive as to not see what the endgame is: getting rid of those prima donna software engineers that the C-suite and managers have nothing but contempt for.
If a 'business leader' is prompting out software through their agents, ensuring it works, maintaining it, and taking accountability... they're also a software engineer
These titles are mostly semantics
Dismissal of arguments as "just semantics" is high school level argumentation.
by semantics, i mean the definition and pool of tasks, responsibilities, and outcomes a job is composed of is shifting so fast that the borders between 'software engineer' and 'business person' are melding together. software engineers are business people in their own way
If the rhetoric is to be believed, the set of responsibilities falling to the role of "software engineer" is shrinking to zero, and all engineers are being forcibly "promoted" to the managerial class of shepherding around agents.
software engineers who are comfortable doing business work - managing, working with different stakeholders, having product and design taste, being sociable, driving business outcomes - are going to be more desired than ever
likewise, business leads who can be technical, decompose vague ideas into product, leverage code to prototype, and work with the previous person will also be extremely high value.
i would be concerned if i was an engineer with no business acumen or a business lead with no technical acumen (not counting CEOs obviously, but then again the barrier to starting your own business as a SWE has never been lower)
Why can't VCs feed your pitch deck into an AI and get a business they own 100%?
If the only thing you're paying for is compute time...
Some people are claiming it's about taste. Why can't an AI learn taste?
The lines between a software engineer / business person / product / design and everything else will blur, because AI increases the individual person's leverage. I posit that there will be more 'software engineers' in this new world, but also more product people, more business people, more companies in general.
They’re stupid or they’re already set up for success. The general idea seems to be that generalists are screwed, domain experts will be fine.
But I don't see how this holds up to even the slightest amount of scrutiny. We're literally training LLMs to BE domain experts.
1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.
2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.
It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.
So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.
I also think people tend to treat the "will LLMs replace <job>" question in too much of a binary manner. LLMs don't have to replace every last person that does a specific job to be wildly disruptive. If they replace 90% of the people that do a particular job by making the remaining 10% much more productive (each remaining worker covering roughly 10x the old output), that's still a cataclysmic amount of job displacement in economic terms.
Even if they replace just 10-30%, that's still a huge amount of displacement; for reference, the unemployment rate during the Great Depression was 25%.
They still have a long way to go before they can master a domain from first principles, which constrains how much mastery is possible.
For an LLM and this "vague" domain expertise, even if none of the LLM's training material includes certain nuggets of wisdom, if the material includes enough cases of problems and the solutions offered by domain experts, we should expect the model to find a decent relationship between them. That the LLM has never ingested explicit documentation of the reasoning is irrelevant, because it does not perform reasoning.
We even have some infamous "dark" domains in computer science where it is nearly impossible for a human to get to the frontier because the research that underpins much of the state-of-the-art hasn't existed as public literature for decades. If you want to learn it, you either have to know a domain expert willing to help you or reinvent it from first principles.
Mastery isn't necessary. Why are Waymos lacking drivers? Not because self-driving cars have mastered driving, but because self-driving works sufficiently well that the economics don't play out for the cab driver.