Last September, Tyler Austin Harper published a piece for The Atlantic on how he thinks colleges should respond to AI. What he proposes is radical—but, if you've concluded that AI really is going to destroy everything these institutions stand for, I think you have to at least consider these sorts of measures. https://www.theatlantic.com/culture/archive/2025/09/ai-colle...
> Another reason that a no-exceptions policy is important: If students with disabilities are permitted to use laptops and AI, a significant percentage of other students will most likely find a way to get the same allowances, rendering the ban useless. I witnessed this time and again when I was a professor—students without disabilities finding ways to use disability accommodations for their own benefit. Professors I know who are still in the classroom have told me that this remains a serious problem.
This would be a huge problem for students with severe and uncorrectable visual impairments. People with degenerative eye diseases already have to relearn how to do every single thing in their life over and over and over. What works for them today will inevitably fail, and they have to start over.
But physical impairments like this are also difficult to fake and easy to discern accurately. It's already the case that disability services at many universities only grant accommodations that have something to do with your actual condition.
There are also some things that are just difficult to accommodate without technology. For instance, my sister physically cannot read paper. Paper is not capable of contrast ratios that work for her. The only things she can even sometimes read are OLED screens in dark mode, with absolutely black backgrounds; she requires an extremely high contrast ratio. She doesn't know braille (which most blind people don't, these days) because she was not blind as a little girl.
Committed cheaters will be able to cheat anyway; contemporary AI is great at OCR. You'll successfully punish honest disabled people with a policy like this, but you won't stop serious cheaters.
My 40-some-odd years on this planet tell me the answer is yes.
It sounds entirely reasonable and moderate to me.
Not so in plenty of other countries. Hopefully the US reverses the anti-science trend before it's too late.
I saw interviews with young Americans on spring break and they were so utterly uninformed it was mind-blowing. Their priorities are getting drunk and getting laid while their country bombs a nation “into the stone ages”, according to their president. And it’s not their fault: they are the product of a media environment and education system designed for exactly this outcome.
Trump doesn't have to justify a single thing because the billionaires behind him know that every last bet is off and their very livelihoods are at risk, and his entire base of support up and down the chain are either complicit or fooled.
What the world does when they finally realize Democrats and Republicans are simply two sides of the vast apparatus suppressing the will of the people by any means necessary will be... spectacular.
I remember things very differently. Everyone cared about the Iraq war, the gay-straight alliance was one of the most up-and-coming clubs, and political music was everywhere. Green Day had their big second wave with American Idiot, System Of A Down was on top of the world, Rock Against Bush was huge, and anarcho-punk like Rise Against was getting big.
I'm not a teenager anymore, obviously, so it's entirely possible I'm just missing it, but I've seen very little that compares to those sorts of movements. On the other hand, most millennials I know are still wildly politically active.
If you compare that to the current Iran war, a majority of the population is already against it; however, the current administration doesn't seem to care much about public opinion, and there doesn't seem to be much the public can do about it.
You can access the full article at https://archive.is/zSJ13 (I know archive.is is kind of shady, but it works).
Only, replacing the guts of such a machine to contain a local LLM is damn easy today. Right now the battery mass required to power the device would be a giveaway, but inference is getting energetically cheaper.
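To give a sense of how little software that side of it takes: below is a minimal sketch of fully offline, on-device inference using the llama-cpp-python bindings. The model file, thread count, and prompt are placeholder assumptions, just to show the shape of the thing, not a recommendation.

```python
# Minimal sketch of local, offline LLM inference (llama-cpp-python).
# Model path, thread count, and prompt are hypothetical placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-model-q4.gguf",  # any small quantized model
    n_ctx=2048,    # context window
    n_threads=4,   # modest CPU budget, e.g. a single-board computer
    verbose=False,
)

out = llm(
    "Continue this essay paragraph: The causes of the French Revolution",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

As the comment above says, the code isn't the hard part; fitting the battery and thermals inside the chassis is.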
> Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can’t use AI tools at home undetected.
Just like my on-campus classmates never smoked weed or drank underage, I'm sure.
If AI output remains unreliable then eventually enough companies will be burned and management will reinstate proper oversight. All while continuing to pat themselves on the back.
Sure there is. It's the formal education system that produced the college grad.
The proposal that everyone pay for college until they are in their 40s doesn’t seem viable.
Other projects' success will be proportional to their number of Schwartzes, so it seems unlikely they'll disappear. But they may disappear in areas where there is no immediate money.
If you are a massive company that owns all of the knowledge and all of the technology needed to apply that knowledge, then you don't need Alice. You don't _want_ Alice. You want more Bobs. It looks better on the books.
Tale as old as time.
I hope it will encourage people to think more about what they get out of the work, what doing the work does for them; I think that's a good thing.
There are a couple of unfortunate truths going on all at the same time:
- People with money are trying to build the "perfect" business: SaaS without software eng headcount. 100% margin. 0 capex. And finally, near-0 opex and R&D cost. Or at least, they're trying to sell the idea of this to anyone who will buy. And unfortunately this is exactly what most investors want to hear, so they believe every word and throw money at it. This of course then extends to many other businesses, not just SaaS, but those have worse margins to start with, so they're less prone to the wildfire.
- People who used to code 15 years ago, but don't anymore, see Claude generating very plausible-looking code. Given their job is now "C suite" or "director", they don't perceive any direct personal risk, so the smell test is passed and they're all on board, happily wreaking destruction along the way.
- People who are nominally software engineers but are bad at it are truly elevated 100x by Claude. Unfortunately, if their starting point was close to 0, this isn't saying a lot. And if it was negative, it's now 100x as negative.
- People who are adjacent to software engineering, like PMs, especially if they dabble in coding on the side, suddenly also see they "can code" now.
Now of course, not all capital owners, CTOs, PMs, etc. exhibit this. Probably not even most. But I can already name like 4 examples per category above from people I know. And right now it's impossible to explain any kind of nuance to any of them. There are too many people and articles and blog posts telling them they're absolutely right.
We need a bust cycle. Then maybe we can have a productive discussion of how we can leverage LLMs (we'll stop calling them "AI"...) to still do the team sport known as software engineering.
Because there are real productivity gains to be had here. Unfortunately, they don't come from replacing everyone with AGI, they don't let people who don't know coding or software engineering build actual working software, and they don't involve just letting Claude Code stochastically generate a startup for you.
I don't actually think the article refutes this. But the AI needs to be in the hands of someone who can review the code (or astrophysics paper), notice and understand issues, and tell the AI what changes to make. Rinse, repeat. It's still probably faster than writing all the code yourself (but that doesn't mean you can fire all your engineers).
The question is: how do you become the person who can effectively review AI code if you've never written code without an AI? I'd argue you basically can't.
I agree, and I'd go a step further:
You can be the absolute best coder in the world, the fastest and most accurate code reviewer ever to live, and AI still produces bad code so much faster than you can review it that it will eventually become a liability no matter what.
There is no amount of "LLM in a loop" or "swarm of agents" or any other current trickery that fixes this, because eventually some human needs to read and understand the code. All of it.
Any attempt to avoid reading and understanding the code means you have absolutely left the realm of quality software, no exceptions.
The solution is relatively simple though - not sure the article suggests this, as I only skimmed it:
Being good in your field doesn't only mean pushing out articles; it also means being able to talk about them. I think academia should drift away from the written form toward more spoken forms, e.g. conferences.
What if, say, you could only publish something after presenting your work in person, answering questions, etc.? The audience can be big or small; it doesn't matter.
It would make publishing anything at all more expensive, but maybe that's exactly what academia needs, even irrespective of this AI craze?
My grad school friend who was a physicist would write his talk just before his conferences, and then submit the paper later. My experience in CS was totally backwards from that.
So I settled on very incremental work. It's annoying cutting and pasting code blocks into the web interface while I'm working on my interface to Neovim; I spent a whole day realizing I can't trust it to instrument Neovim, and I don't want to learn enough Lua to manage that myself. (I moved to Neovim from Emacs because I don't like Elisp, and GPT is even worse at working on my Emacs setup than on my Neovim one. The end goal is my own editor in Ruby, but GPT damn sure can't understand that atm.) But at least I'm pushing a real flywheel and not the brooms from Fantasia.
Here's a competing thought experiment:
Jorge's Gym has a top notch body building program, which includes an extensive series of exercises that would-be body builders need to do over multiple years to complete the program. You enroll, and cleverly use a block and tackle system to complete all the exercises in weeks instead of years.
Did you get the intended results?
I built a full-stack app in Python + TypeScript where AI agents process 10k+ near-real-time decisions and executions per day.
I have never done full stack development and I would not have been able to do it without GitHub Copilot, but I have worked in IT (data) for 15 years including 6 in leadership. I have built many systems and teams from scratch, set up processes to ensure accuracy and minimize mistakes, and so on.
I have learned a ton about full stack development by asking the coding agent questions about the app, bouncing ideas off of it, planning together, and so on.
So yes, you need to have an idea of what you're doing if you want to build anything bigger than a cheap one-shot throwaway project that sort of works but brings no value and that nobody is actually gonna use.
This is how it is right now, but at the same time, AI coding agents have come an incredibly long way since 2022! I do think they will improve, but an agent can't exactly know what you want to build. It's making an educated guess, an approximation of what you're asking it to do. Ask it the same thing twice and it will produce two slightly different results (assuming it's a big one-shot).
This is the fundamental reality of LLMs. It's sort of like having a human walking (where we were before AI), a human driving a car to get places (where we are now), and FSD (that's the future; look how long it took compared to the first cars).
That you can't "become Schwartz" by using LLMs is an unproven assumption. Actually, it's a contradiction in the logic of the essay: if Bob managed to produce valid output by using an LLM at all, then he must have acquired precisely the supervision ability that the essay claims is necessary.
Btw, note that in the thought experiment Bob isn't just delegating all the work to the LLM. He makes it summarise articles, extract important knowledge and clarify concepts. This is part of a process of learning, not being a passive consumer.
No, this is impossible unless Bob simply presents the LLM's output at each weekly meeting and feeds the tutor's feedback straight back into it, for a total of 10 minutes of work per week. And the tutor would notice straight away, if only from the lack of progress.
No, the article specifies that Bob actually works with the LLM; he doesn't just delegate. He asks the agent to summarise, to explain, and to help with bug fixing. You could easily argue that Bob, having such an AI tutor available 24/7, can develop understanding much faster. He certainly won't waste his time on small details of Python syntax (though working with a "coding expert" will make his code much cleaner and more advanced).
There are flowers that look & smell like female wasps well enough to fool male wasps into "mating" with them. But they don't fly off and lay wasp eggs afterwards.
I have gained a lot of benefit using LLMs in conjunction with textbooks for studying. So, I think LLMs could help you become Schwartz.
Presumably you're also reading some kind of learning text about the Chinese language, so the sole source isn't just the LLM?
In my experience, asking an LLM to produce small examples of well-known things (or rather, things that are talked about frequently in the training data, so generally basic or fundamental topics) tends to work fine, and is going to be at a level where you yourself can judge what's presented.
I think the real danger is when a person is prompting for things they don't know how to verify for themselves, since then we're basically just rolling dice and hoping.
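To make "verify for yourself" concrete, here's a sketch of the mechanical version of that check: ask a model for a small function, then refuse to use it until it passes test cases you wrote before seeing the output. The model name, prompt, and fence handling are illustrative assumptions, and the OpenAI Python client is just one way to do it.

```python
# Sketch: don't trust generated code until it passes tests YOU wrote in advance.
# Model name, prompt, and fence handling are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[{
        "role": "user",
        "content": (
            "Write a Python function slugify(s) that lowercases s, replaces "
            "runs of non-alphanumeric characters with '-', and strips leading "
            "and trailing '-'. Reply with only the code."
        ),
    }],
)

code = resp.choices[0].message.content.strip()
# Models often wrap replies in markdown fences; strip them if present.
code = code.removeprefix("```python").removeprefix("```").removesuffix("```")

namespace = {}
exec(code, namespace)  # fine for a throwaway check; never exec untrusted code in production
slugify = namespace["slugify"]

# The dice-roll ends here: these cases were chosen before seeing the output.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  --spaces--  ") == "spaces"
print("generated function passed the pre-written checks")
```

This only works where you can state the expected behavior up front, which is exactly the point: if you can't write the checks, you're back to rolling dice.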
In the past, if you neglected your institutes and academia, what happened? A rival state got electricity/cars/nuclear weapons first and you would be SOL.
These days, what happens? They invent faster phones? Higher res video?
This take may be a bit hyperbolic but I find it's a good thought exercise prompt.
Doesn’t sound like these tools should be used to write scientific papers for example and they seem to bamboozle people far more than help them.
I can only speak to the engineering context, not academia, but I would expect there are similar patterns of busywork. Even the blog authors admit to using AI. AI can't replace thinking, but it can replace menial labor, which is prevalent even among "knowledge workers".
It's exactly the same for coding.
What may well happen instead is that Bob publishes two papers. He then outcompetes Alice, thanks to others' insistence on "publish or perish". Alice becomes unemployed and struggles, having been pushed out.
The person who puts the time and effort in doesn't just sit at the same level, and they don't both just find decent employment. Competition happens, and the authentic learning is considered a waste of time, which leads to real and often life-threatening consequences (like becoming homeless after being unable to find employment).
This, I think, may be the more interesting bit. Steve Jobs anecdotally did calligraphy in school, which some would consider a waste of time, but he credited some of the Mac's typographic choices to it.
The question then becomes whether it will become an issue now or later. Having seen some of the output, I have no doubt that a lot can now be built by non-programmers (including myself; I suppose I belong in the adjacent category). The building blocks exist, and as long as the problem was part of the initial training, odds are an LLM will help you build what you want.
It may not be perfect, safe, or optimized, but it may still be exactly what the user wanted. Now, the problems will start when those projects inevitably move into production at big corps. In a sense, we have seen some interesting results of that in the past few weeks (including the accidental Claude Code release).
In the grand scheme of things, not much is changing... except the speed of change. But are we quite ready for that?
For so many workers, their companies just want them to produce bullshit. Their managers wouldn't frame it this way, but if their subordinates start producing work with strict intellectual rigor, it's going to be an issue, and the subordinates will hear about it.
So, you're not wrong. But the majority of LLM customers don't care; they just want to report success internally, and the product only needs to be "just good enough." An LLM might produce a shitty webpage, but so long as the page loads, no one will ever notice or care that it's wrong in the way that a physics paper could be wrong.
Was the LLM even useful for Schwartz, if it produced false output?