Sadly I don’t see how our current social paradigm works for this. There is no history of any sort of long-term planning like this, or long-term loyalty (in either direction) between employees and employers, for this sort of journeyman guild-style training. AI execs are basically racing, hoping we won’t need a Schwartz before they are all gone. But what incentives are in place to hire a college grad, have them work without LLMs for a decade, and then give them the tools to accelerate their work?
reply
Then the social paradigm needs to change. Is everyone just going to roll over and die while AI destroys academia (and possibly a lot more)?

Last September, Tyler Austin Harper published a piece for The Atlantic on how he thinks colleges should respond to AI. What he proposes is radical—but, if you've concluded that AI really is going to destroy everything these institutions stand for, I think you have to at least consider these sorts of measures. https://www.theatlantic.com/culture/archive/2025/09/ai-colle...

reply
I was pretty interested until I got to this part:

> Another reason that a no-exceptions policy is important: If students with disabilities are permitted to use laptops and AI, a significant percentage of other students will most likely find a way to get the same allowances, rendering the ban useless. I witnessed this time and again when I was a professor—students without disabilities finding ways to use disability accommodations for their own benefit. Professors I know who are still in the classroom have told me that this remains a serious problem.

This would be a huge problem for students with severe and uncorrectable visual impairments. People with degenerative eye diseases already have to relearn how to do every single thing in their life over and over and over. What works for them today will inevitably fail, and they have to start over.

But physical impairments like this are also difficult to fake and easy to discern accurately. It's already the case that disability services at many universities only grants you accommodations that have something to do with your actual condition.

There are also some things that are just difficult to accommodate without technology. For instance, my sister physically cannot read paper. Paper is not capable of contrast ratios that work for her. The only things she can even sometimes read are OLED screens in dark mode, with absolutely black backgrounds; she requires an extremely high contrast ratio. She doesn't know braille (which most blind people don't, these days) because she was not blind as a little girl.

Committed cheaters will be able to cheat anyway; contemporary AI is great at OCR. You'll successfully punish honest disabled people with a policy like this but you won't stop serious cheaters.

reply
Yeah, this proposal is likely straight up illegal.
reply
> Then the social paradigm needs to change. Is everyone just going to roll over and die while AI destroys academia (and possibly a lot more)?

My 40-some-odd years on this planet tell me the answer is yes.

reply
>What he proposes is radical

It sounds entirely reasonable and moderate to me.

reply
It's neither reasonable nor moderate, which is why it'll never happen.
reply
Well, we are already rolling over and dying (literally) on everything from vaccine denial to climate change. So, yes, we are. Obviously yes.
reply
In the US it is dying off.

Not so in plenty of other countries. Hopefully the US reverses the anti-science trend before it's too late.

reply
These movements are growing in every Western nation, and the trend has been building for decades. It would be nice to see it reverse, but that seems unlikely before calamity.
reply
It’s a deliberate process powered by right-wing and capitalist interests, designed to create a dumber, less educated, and more distracted population. A war as stupid as the one with Iran would not have been possible three decades ago. As ill-advised as the Iraq war was, Bush at least spent months explaining the rationale and building support for it, successfully. Now that’s not needed.

I saw interviews with young Americans on spring break and they were so utterly uninformed it was mind-blowing. Their priorities are getting drunk and getting laid while their country bombs a nation “into the stone ages”, according to their president. And it’s not their fault: they are the product of a media environment and education system designed for exactly this outcome.

reply
I was there for that war. Kids weren't listening and didn't care back then either. If anything, Gen Z is the most politically aware generation we've had since we started keeping track.

Trump doesn't have to justify a single thing because the billionaires behind him know that every last bet is off and their very livelihoods are at risk, and his entire base of support up and down the chain are either complicit or fooled.

What the world does when they finally realize Democrats and Republicans are simply two sides of the vast apparatus suppressing the will of the people by any means necessary will be... spectacular.

reply
I was there as well; the Bush presidency lasted my entire middle and high school career, and I got the chance to vote for Obama in my senior year.

I remember things very differently. Everyone cared about the Iraq war, gay-straight alliances were among the most up-and-coming clubs, and political music was everywhere. Green Day had their big second wave with American Idiot, System of a Down was on top of the world, Rock Against Bush was huge, and anarcho-punk like Rise Against was getting big.

I'm not a teenager anymore obviously, so it's entirely possible I'm just missing it, but I've seen very little that compares to those sort of movements. On the other hand, most millennials I know are still wildly politically active.

reply
In 2002, the war in Iraq had large popular support, something like 70 to 80 percent. It took a few years for people to realize it was based on a lie and was a massive mistake. It was also morally reprehensible, but that part is not often discussed in mainstream US politics.

If you compare that to the current Iran war, a majority of the population is already against it. However, the current administration doesn't seem to care much about public opinion, and there doesn't seem to be much the public can do about it.

reply
Yeah, I was there too, and I don’t know what this guy is talking about. Gen X was highly politically active. This was the era of violent, in-the-street anti-globalization clashes like the WTO protests.
reply
Where exactly? Because in the Midwest we were very vocal about it. We have tons of military families out there, and we were poor enough that military service felt almost inevitable if we didn’t get scholarships for school. You know the band NOFX had an album, The War on Errorism, that was quite successful, based on the fuckery of the Bush administration. Punk rock and protest music was huge then.
reply
Article is paywalled, so perhaps you could just summarize his proposal?
reply
> At the type of place where I taught until recently—a small, selective, private liberal-arts college—administrators can go quite far in limiting AI use, if they have the guts to do so. They should commit to a ruthless de-teching not just of classrooms but of their entire institution. Get rid of Wi-Fi and return to Ethernet, which would allow schools greater control over where and when students use digital technologies. To that end, smartphones and laptops should also be banned on campus. If students want to type notes in class or papers in the library, they can use digital typewriters, which have word processing but nothing else. Work and research requiring students to use the internet or a computer can take place in designated labs. [...] Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can’t use AI tools at home undetected.

You can access the full article at https://archive.is/zSJ13 (I know archive.is is kind of shady, but it works).

reply
> If students want to type notes in class or papers in the library, they can use digital typewriters, which have word processing but nothing else.

Only, replacing the guts of such a machine with a local LLM is damn easy today. Right now the battery mass required to power the device would be a giveaway, but inference is getting energetically cheaper.

> Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can’t use AI tools at home undetected.

Just like my on-campus classmates never smoked weed or drank underage, I'm sure.

reply
Are you suggesting we should do nothing if the solution has any flaws?
reply
This isn't aimed at you, but this strikes me as exactly the kind of divorced-from-the-real-world thinking that academia is pilloried for all the time. This kind of proposal will never happen, I'd basically stake my life on it. Students (and their parents) have zero interest in this kind of anti-technology nonsense, so it's DOA. College isn't compulsory, and those students aren't some captive audience you can do whatever you want with, they're customers. And I frankly doubt that most professors or administrators want this either.
reply
Some folks need to touch the hot stove before they learn but eventually they learn.

If AI output remains unreliable, then eventually enough companies will be burned and management will reinstate proper oversight. All while continuing to pat themselves on the back.

reply
> There is no history of any sort of long planning

Sure there is. It's the formal education system that produced the college grad.

reply
… between employees and employers.

The proposal that everyone pay for college until they are in their 40s doesn’t seem viable.

reply
Maybe, but there is a trend towards more and longer education. More college graduates, more PhD grads, etc.
reply
Well, the astrophysics situation is special because, as the article notes, there aren't breakthroughs that can be externally verified.

Other projects' success will be proportional to their number of Schwartzes, so it seems unlikely they disappear. But they may disappear in areas where there is no immediate money.

reply
> Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob, even though Bob may seem to be faster.

If you are a massive company that owns all of the knowledge and all of the technology needed to apply that knowledge, then you don't need Alice. You don't _want_ Alice. You want more Bobs. It looks better on the books.

Tale as old as time.

reply
I think we already know what we need to do: encourage people to do the work themselves, discourage beginners from immediately asking an LLM for help, and re-introduce some kind of oral exam. As the article mentions, banning LLMs is impractical, and what we really need are people who can tell when the LLM is confidently wrong, not people who don't know how to work with an LLM.

I hope it will encourage people to think more about what they get out of the work, what doing the work does for them; I think that's a good thing.

reply
I think we'll get there. We need to get at least some AI bust going first though. It's impossible to talk sense into people who think AI is about to completely replace engineers, or even those who think that, while it might not replace engineers, it's going to be doing 100% of all coding within a year. Or even that it can do 100% of coding right now.

There's a couple unfortunate truths going on all at the same time:

- People with money are trying to build the "perfect" business: SaaS without software engineering headcount. 100% margin. 0 capex. And finally, near-0 opex and R&D cost. Or at least, they're trying to sell the idea of this to anyone who will buy. And unfortunately this is exactly what most investors want to hear, so they believe every word and throw money at it. This of course then extends to many other businesses, not just SaaS, but those have worse margins to start with, so they are less prone to the wildfire.

- People who used to code 15 years ago but don't now see Claude generating very plausible-looking code. Given that their job is now "C-suite" or "director", they don't perceive any direct personal risk, so the smell test is passed and they're all on board, happily wreaking destruction along the way.

- People who are nominally software engineers but are bad at it are truly elevated 100x by Claude. Unfortunately, if their starting point was close to 0, this isn't saying a lot. And if it was negative, it's now 100x as negative.

- People who are adjacent to software engineering, like PMs, especially if they dabble in coding on the side, suddenly also see they "can code" now.

Now of course, not all capital owners, CTOs, PMs, etc. exhibit this. Probably not even most. But I can already name about four examples per category above from people I know. And they're all impossible to explain any kind of nuance to right now. There are too many people and articles and blog posts telling them they're absolutely right.

We need some bust cycle. Then maybe we can have a productive discussion of how we can leverage LLMs (we'll stop calling it "AI"...) to still do the team sport known as software engineering.

Because there are real productivity gains to be had here. Unfortunately, they don't come from replacing everyone with AGI, or from letting people who don't know coding or software engineering build actual working software, and they don't involve just letting Claude Code stochastically generate a startup for you.

reply
> Or even that [AI] can do 100% of coding right now.

I don't actually think the article refutes this. But the AI needs to be in the hands of someone who can review the code (or astrophysics paper), notice and understand issues, and tell the AI what changes to make. Rinse, repeat. It's still probably faster than writing all the code yourself (but that doesn't mean you can fire all your engineers).

The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.

reply
My boss decreed the other day that we’re all to start maximising our use of agents, and then set an accordingly ambitious deadline for the current project. I explained that, being relatively early in my career, I’ve been hesitant to use any kind of LLM so I can gain experience myself (to say nothing of other concerns), and, yeah, in his words, I’ve “missed the opportunity”.
reply
Unfortunately, in the majority of organizations, the idiots are at the wheel. It's not people with actual experience of how engineers do things who dictate what those engineers should do.
reply
Interesting; we only have a generic 'use AI' item in our goals. Though its generic framing probably says more about the company's belief in this tech than anything else.
reply
> The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.

I agree, and I'd go a step further:

You can be the absolute best coder in the world, the fastest and most accurate code reviewer ever to live, and AI still produces bad code so much faster than you can review that it will become a liability eventually no matter what

There is no amount of "LLM in a loop" or "use a swarm of agents" or any other current trickery that fixes this, because eventually some human needs to read and understand the code. All of it.

Any attempt to avoid reading and understanding the code means you have absolutely left the realm of quality software, no exceptions

reply
> Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob

The solution is relatively simple, though (not sure the article suggests this, as I only skimmed it):

Being good in your field doesn't only mean pushing articles but also being able to talk about them. I think academia should drift away from written form toward more spoken form, i.e. conferences.

What if, say, you can only publish something after presenting your work in person, answering questions, etc.? The audience can be big or small; it doesn't matter.

It would make publishing anything at all more expensive but maybe that's exactly what academia needs even irrespective of this AI craze?

reply
I thought that was kind of how the hard sciences work already?

My grad school friend who was a physicist would write his talk just before his conferences, and then submit the paper later. My experience in CS was totally backwards from that.

reply
Essentially a PhD-thesis-style grilling to replace the current text slop.
reply
I've been using ChatGPT to re-bootstrap my coding hobby. After the initial honeymoon wore off, I realized I was staring down the barrel of a dilemma. If I use AI to "just handle" the parts of the system I don't want to understand, I invariably end up in a situation where I gotta throw a whole bunch of work out. But I can't supervise without an understanding of what it's supposed to be doing, and if I knew what it was supposed to be doing, I could just do it myself.

So I settled on very incremental work. It's annoying cutting and pasting code blocks into the web interface while I'm working on my interface to Neovim; I spent a whole day realizing I can't trust it to instrument Neovim, and I don't want to learn enough Lua to manage it myself. (I moved to Neovim from Emacs because I don't like Elisp, and GPT is even worse at working on my Emacs setup than on Neovim. The end goal is my own editor in Ruby, but GPT damn sure can't understand that atm.) But at least I'm pushing a real flywheel and not the brooms from Fantasia.

reply
The article is a thought experiment. The author hypothesizes that Bob isn't getting the same benefit that Alice is getting. That hypothesis could be wrong. I don't know and the author doesn't know. It could be that Bob is going to have a very successful career and will deeply know the field because he is able to traverse a wider set of problems more quickly. At this point, it's just hypothesis. I don't think that we can say we need more Alices any more than we can say we need more Bobs. Unfortunately we will have to wait and see. It will be upon the academic community to do the work to enforce quality controls. That is probably the weakness to worry about.
reply
We do know. There have always been ways that people could avoid the painful process of learning, and...they don't learn.

Here's a competing thought experiment:

Jorge's Gym has a top notch body building program, which includes an extensive series of exercises that would-be body builders need to do over multiple years to complete the program. You enroll, and cleverly use a block and tackle system to complete all the exercises in weeks instead of years.

Did you get the intended results?

reply
AI is an accelerant, not a replacement for skill. At least, not yet.

I built a full stack app in Python + TypeScript where AI agents process 10k+ near-real-time decisions and executions per day.

I have never done full stack development and I would not have been able to do it without GitHub Copilot, but I have worked in IT (data) for 15 years including 6 in leadership. I have built many systems and teams from scratch, set up processes to ensure accuracy and minimize mistakes, and so on.

I have learned a ton about full stack development by asking the coding agent questions about the app, bouncing ideas off of it, planning together, and so on.

So yes, you need to have an idea of what you're doing if you want to build anything bigger than a cheap one shot throwaway project that sort of works, but brings no value and nobody is actually gonna use.

This is how it is right now, but at the same time AI coding agents have come an incredibly long way since 2022! I do think they will improve, but they can't exactly know what you want to build. They're making an educated guess, an approximation of what you're asking for. Ask for the same thing twice and you'll get two slightly different results (assuming it's a big one-shot).

This is the fundamental reality of LLMs. It's sort of like a human walking (where we were before AI), a human using a car to get places (where we are now), and FSD (the future; look how long that took compared to the first cars).

reply
> the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

That you can't "become Schwartz" by using LLMs is an unproven assumption. Actually, it's a contradiction in the logic of the essay: if Bob managed to produce a valid output by using an LLM at all, then it means that he must have acquired precisely that supervision ability that the essay claims to be necessary.

Btw, note that in the thought experiment Bob isn't just delegating all the work to the LLM. He makes it summarise articles, extract important knowledge and clarify concepts. This is part of a process of learning, not being a passive consumer.

reply
There's no contradiction; the point is that Bob is able to produce valid output using LLMs, but only while he himself is being supervised, and he doesn't develop the skills to supervise independently in the future.
reply
> only while he himself is being supervised

No, this is impossible unless Bob is simply presenting the LLM's output at each weekly meeting and feeding the tutor's feedback straight back into it. That would be about ten minutes of work per week, and the tutor would notice straight away, if only from the lack of progress.

No, the article specifies that Bob actually works with the LLM, doesn't just delegate. He asks the agent to summarise, to explain, and to help with bug fixing. You could easily argue that Bob, having such an AI tutor available 24/7, can develop understanding much faster. He certainly won't waste his time with small details of python syntax (though working with a "coding expert" will make his code much cleaner and more advanced).

reply
It doesn't contradict the logic of the essay.

There are flowers that look & smell like female wasps well enough to fool male wasps into "mating" with them. But they don't fly off and lay wasp eggs afterwards.

reply
> And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

I have gained a lot of benefit using LLMs in conjunction with textbooks for studying. So, I think LLMs could help you become Schwartz.

reply
How do you know you have?
reply
I have been using it to learn Chinese along with other standard resources. My reading comprehension has improved a lot after I started to use LLMs to understand sentence structures and grammar.
reply
Actually, I think this is a case where LLMs _can_ be useful. If we're prompting for small enough outputs, for examples around things we can already sort of reason about, we're able to judge whether or not what's presented to us makes sense.

Presumably you're also reading some kind of learning text about the Chinese language, so the sole source isn't just the LLM?

In my experience, asking an LLM to produce small examples of well-known things (or rather, things that are talked about frequently in the training data, so generally basic or fundamental topics) tends to work fine, and is going to be at a level where you yourself can judge what's presented.

I think the real danger is when a person is prompting things they don't know how to verify for themselves, since then we're basically just rolling dice and hoping

reply
Profession (1957) by Isaac Asimov is relevant: https://news.ycombinator.com/item?id=46664195
reply
My cynical take is that the 20th century exhausted the major scientific discoveries that move various global needles, and now doing well at fostering science matters less.

In the past, if you neglected your institutes and academia, what happened? A rival state got electricity/cars/nuclear weapons first and you would be SOL.

These days, what happens? They invent faster phones? Higher res video?

This take may be a bit hyperbolic but I find it's a good thought exercise prompt.

reply
Why use a tool that generates plausible garbage?
reply
Because I’m skilled enough to use a tool that generates plausible garbage to be more productive than those who don’t use it at making non-garbage.
reply
Are you sure you’re more productive?

Doesn’t sound like these tools should be used to write scientific papers for example and they seem to bamboozle people far more than help them.

reply
Yes I’m sure I’m more productive. I have decades of experience before AI to compare to.

I can only speak to the engineering context, not academia, but I would expect there are similar patterns of busywork. Even the blog authors admit to using AI. AI can't replace thinking, but it can replace menial labor, which is prevalent even among “knowledge workers”.

reply
deleted
reply
Because there is no appreciable difference between outputs. Most of the work that most of us do isn't important. It's busywork byproducts of making widgets that most people don't even need. So if your job is already pointless why not make it easier using LLMs?
reply
Sounds a little sad. I think I’d rather find another job.
reply
So very many of us are underemployed and do meaningless work. IMO these jobs exist to prevent societal upheaval aka revolt
reply
> And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

It's exactly the same for coding.

reply
I totally agree - the article misses this point in a very conspicuous way. It suggests that Alice and Bob will both graduate at the same level.

What may well happen instead is that Bob publishes two papers. He then outcompetes Alice, thanks to others' insistence on "publish or perish". Alice becomes unemployed and struggles, having been pushed out.

The person who puts in the time and effort doesn't just sit at the same level, and they don't both just find decent employment. Competition happens, and authentic learning is considered a waste of time, which leads to real and often life-threatening consequences (like being homeless after being unable to find employment).

reply
<< authentic learning is considered a waste of time

This, I think, may be the more interesting bit. Steve Jobs anecdotally did calligraphy in school, which some would consider a waste of time, but Steve credited some of the Mac's stylistic choices to it.

The question then becomes whether it will become an issue now or later. Having seen some of the output, I have no doubt that a lot can now be built by non-programmers (including myself; I suppose I belong in the adjacent category). The building blocks exist, and as long as the problem was part of the initial training, odds are the LLM will help you build what you want.

It may not be perfect, safe, or optimized, but it may still be exactly what the user wanted. Now, the problems will start when those projects, inevitably, move into production at big corps. In a sense, we have seen some interesting results of that in the past few weeks (including the accidental Claude Code release).

In the grand scheme of things, not much is changing... except for the speed of change. But are we quite ready for this?

reply
deleted
reply
>And so the paradox is, the LLMs are only useful† if you're Schwartz

For so many workers, their companies just want them to produce bullshit. Their managers wouldn't frame it this way, but if their subordinates start producing work with strict intellectual rigor it's going to be an issue and the subordinates will hear about it.

So, you're not wrong. But the majority of LLM customers don't care; they just want to report success internally, and the product only needs to be "just good enough." An LLM might produce a shitty webpage, but so long as the page loads, no one will ever notice or care that it's wrong in the way that a physics paper could be wrong.

reply
> And so the paradox is, the LLMs are only useful† if you're Schwartz

Was the LLM even useful for Schwartz, if it produced false output?

reply
Maybe it saved them some time? So far, the studies seem to lean toward the conclusion that the LLM probably didn't save them any time.
reply
So far the studies point to study authors having a profound misunderstanding of what’s happening. Which isn’t surprising, since any study right now requires speculating about what’s important and impactful in a new and fast-moving field. Very few people are good at that, and most of the ones who are are not running studies.
reply
deleted
reply