A related quote from A. N. Whitehead:

> It is a profoundly erroneous truism ... that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.

reply
Current civilization is very complex. And it’s also fragile in some parts. When you build systems around instant communication and the availability, on a fixed schedule, of goods built on the other side of the world, they’re very easy to disrupt.

> 4. People will eventually get the hang of using AI to do the optimum amount of delegation such that they still retain what is necessary and delegate what is not necessary. People who don't do this optimally will get outcompeted

Then they’ll be at the mercy of the online service’s availability and of the company itself. There’s also the issue of non-deterministic results. I can delegate my understanding of some problems to a library, a piece of software, or a framework, because their operation is deterministic. Not so with LLMs.

reply
I have been able to produce 20x the amount of useful output, both in my day job and in my free time, using a popular coding agent in 2026. Part of me is uncomfortable that, from some perspective, my hard-won knowledge of how to write English, write code, and design systems has been partly commoditized. Part of me is amazed and grateful to be in this timeline. I am now learning and building things I only dreamed about for years. The sky is the limit.
reply
When technology progressed enough to allow for

1. outsourcing and offshoring (non deterministic, easy to disrupt)

2. cloud computing (mercy of the online service availability)

we had the same dilemma.

Outsource exactly what you think is not critical to the business. Offshore enough so that you gain good talent across the globe. Use cloud computing so that your company does not spend time solving problems that have already been solved. Assess what skills are required and what aren't: an e-commerce company doesn't need deep expertise in Linux and Postgres.

Companies that do this well outcompete companies that obsess over details that are not core to their value proposition. This is how modern startups work: success lies in finding the critical balance between buying products externally and building only the crucial skills internally.

reply
I think you missed the point. The entire article is about specialists: astrophysicists. The problem with AI is that specialists are delegating their thinking about their specialty! The fear here is that society will stop producing specialists, and thus society will no longer progress.
reply
You are assuming that the set of specialists is a fixed system! That's not the case. As technology changes, you get more and more specialists, the same way the Agricultural Revolution allowed more specialists to exist.
reply
This comment sounds like hand-waving to me.

The author describes specifically how specialists are produced and how AI undermines their production.

No, we won't get more and more specialists literally "the same way" as the agricultural revolution. You need to be much more specific about how we'll get more specialists under the incentive structure created by AI, otherwise this sounds like some kind of religious faith in AI and progress.

reply
I can't tell you what specialists we will get, the same way you wouldn't have been able to tell me in 1945 that we would have Linux kernel specialists.

People do more things with AI.

More things = more inventions = the field growing.

The field grows, and people become specialists in what used to be small or trivial.

A mathematician in the 1500s wouldn't have thought algebraic topology would become a specialisation.

reply
> I can't tell you what specialists we will get, the same way you wouldn't have been able to tell me in 1945 that we would have Linux kernel specialists.

How about addressing astrophysics specifically? What are you claiming about it? Are you claiming that in the future we won't need astrophysicists at all, that AI can do all of our astrophysics for us, freeing humans to specialize in... other subjects?

And doesn't the same problem exist for Linux kernel specialists? Why even become a Linux kernel specialist when AI can write your source code for you?

> people become specialists

This is precisely what is in question.

> A mathematician in the 1500s wouldn't have thought algebraic topology would become a specialisation.

The specific subjects have changed over time, but the production of specialist mathematicians hasn't really changed. It takes hard work, grunt work, struggling, making mistakes and learning from them, as well as expert supervision. The problem with AI is that it encourages and incentivizes intellectual laziness, the opposite of what is required to produce specialists.

A related problem: LLMs have been trained with papers written and supervised by Alice-type specialists. There's a common claim that LLMs will hallucinate less in the future, but I think that LLMs will hallucinate more in the future, when specialty fields become dominated by Bob-type "specialists" who have a harder time distinguishing fact from fiction. When LLMs have to train on material produced by earlier versions of LLMs, the quality trend will go down, not up.

reply
> The specific subjects have changed over time, but the production of specialist mathematicians hasn't really changed. It takes hard work, grunt work, struggling, making mistakes and learning from them, as well as expert supervision. The problem with AI is that it encourages and incentivizes intellectual laziness, the opposite of what is required to produce specialists

Let's take the example of economics. Economists use ideas from mathematics like integrals, statistics, PDEs, and so on. They know that these concepts exist. They know how to apply them. But they don't know these concepts deeply enough to make progress in mathematics itself.

Do you think that economists should deeply learn integrals, PDEs, functional analysis, differential geometry, and all the other concepts they use? Or do you think it's better for them to focus on their specific domain while learning just enough from the others?

You keep coming back to AI replacing mathematicians. I'm not making that claim. I'm not saying Linux kernel specialists will be replaced by AI. I'm simply claiming that not everyone needs to be a Linux kernel specialist. This is precisely what AI allows: it automates the things I don't need to know deeply so that I can focus on the things I do need to understand deeply.

reply
> I'm simply claiming that not everyone needs to be a Linux kernel specialist.

This is an uninteresting and indeed silly claim, because nobody has ever asserted the opposite.

The point is that society needs some Linux kernel specialists, and some astrophysicists, but AI is undermining their production.

> This is precisely what AI is allowing: it automates things I don't need to know deeply so that I can focus on things I do need to understand deeply.

The submitted article is about how AI is automating the things that a specialist does need to understand deeply. It's about so-called astrophysicists using AI to produce astrophysics papers, not about how non-astrophysicists use AI to produce astrophysics papers so that they can focus on whatever their non-astrophysics specialty may be, if they have any specialty.

reply
I'm responding to this quote:

> Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.

If we both agree that an astrophysicist may not need to understand things (even in their own domain) to make progress, then we are in agreement. Not everything a researcher works on while writing a paper is useful or necessarily done manually by them. In such cases it becomes necessary to let an LLM take over.

reply
> I'm responding to this quote

> > Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe

The article author and I share a love of Frank Herbert, God Emperor of Dune, and the quote in question. Nonetheless, it's a mistake to focus on this quote rather than on the rest of the article. The quote is nothing more than a nice literary reference; it's not central to the argument.

The character who spoke the quote is a magically prescient human-sandworm hybrid, thousands of years old, speaking to his distant relative who was specially bred by him to be invisible to the magical prescience, so let's take the quote with a grain of... sand. ;-)

> If we both agree that an astrophysicist may not need to understand things (even in their own domain) to make progress then we are in agreement.

Your parenthetical remark is actually the main problem!

reply