Here are some other things I've written on this topic:
- https://mitchellh.com/writing/my-ai-adoption-journey
- https://mitchellh.com/writing/building-block-economy
- https://mitchellh.com/writing/simdutf-no-libcxx (complex change thanks to AI, shows how I approach it rationally)
I wish I had written that.
>Amazon workers under pressure to up their AI usage are making up tasks
Compare 100 Pollocks vs. 2-3.
Claiming that the people who disagree with you must be experiencing a form of psychosis, with actual hallucinations and an inability to tell what is real, is a weak ad hominem that comes off no better than calling them retarded or schizophrenic.
If you genuinely think one of your friends is going through a psychotic episode, you should be trying to get them professional help. But don’t assume you can diagnose a human psyche just because you can diagnose a software bug.
To the wider audience on HN the phrasing is pretty clear. An outsider with a tiny bit of intellectual charity wouldn't come to the conclusions you do.
https://en.wikipedia.org/wiki/Chatbot_psychosis
https://www.rollingstone.com/culture/culture-features/ai-spi...
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-cha...
The key factor is losing touch with reality, which results in individual or collective harm.
There is also such a thing as mass psychosis, and those are unfortunately more difficult situations, because governments and corporations are generally the ones driving them, and they are culturally normalized.
If he meant mass psychosis, he should have said mass psychosis. And again, since he is not a public health scientist or any flavor of psych professional, he probably shouldn’t be making those proclamations. And if he were truly concerned for their health, he should probably call for a wellness check instead of posting on social media.
For people who are considered neurotypical, social coherence often overwrites reality. It's a mechanism for achieving consensus within groups while spending the least amount of brain compute energy. The same goes for messages tagged with social meta-info: they are more likely to influence perception of reality, subconsciously. E.g., if a rich guy says you should be hyped, the people who wanna get rich will feel hyped, and emotional contagion can spread between people who belong to the same "tribe".
It's very visible to us atypical folk who can't participate well in groupthink at all.
They almost always generate logically coherent text, but sometimes that text rests on implicit assumptions and decisions that aren't valid for the use case.
Generating a correct solution requires a proper definition of the problem, which is arguably more challenging than creating the solution.
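A toy example of this (hypothetical, in TypeScript): the function below is internally consistent and type-checks, but it quietly bakes in decisions that were never part of the stated problem.

```typescript
// Looks right and reads right, but it implicitly decides that every price
// is in the same currency, and that an empty list should yield NaN (0/0)
// rather than an error. Neither decision came from the problem definition.
function averagePrice(prices: number[]): number {
  return prices.reduce((sum, p) => sum + p, 0) / prices.length;
}
```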
Does it make it better than us? No, because ultimately the thing itself doesn’t ‘know’ right from wrong.
The standard of most employment is already to produce mediocre, plausible outputs as cheaply and rapidly as possible. It's a match made in heaven!
It's an incredible tool, but it's also very derpy sometimes: full of biases, blind spots, etc.
the trick is to be mindful, aware, and deliberate about what decisions are being outsourced. this requires slowing down, losing that absurd 10x vibe-coding gain. in exchange, you're more "in-the-loop" and accumulate less cognitive debt.
find ways to let the agent make the boring decisions, like how to loop over some array, or how to adapt the output of one call into the input of another.
make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII. (see the sketch below.)
tell the agent to halt on ambiguity.
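for example, a spec can pin the contract down before the agent writes a line. here's a minimal sketch in TypeScript; all names are invented for illustration, not from any real project:

```typescript
// spec.ts -- human decisions made ahead of time; the agent fills in the rest.

// key data structure, decided up front: what a "user event" is.
interface UserEvent {
  id: string;
  occurredAt: Date;
  payload: Record<string, unknown>; // opaque to this layer, by design
}

// boundary: the only API the ingestion module may expose.
interface EventIngestion {
  // error handling enumerated explicitly: stale and malformed events are rejected.
  ingest(event: UserEvent): Promise<{ ok: true } | { ok: false; reason: "stale" | "malformed" }>;
}

// hard constraint: PII never leaves this module unredacted.
```

the agent still picks the loops and the glue code; the contract and the failure modes were decided by a human.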
a good engineer will get a 2x or 3x speedup without the downsides.
Advice like that ultimately doesn't matter. If you're familiar with a programming project, you'll also be familiar with its constructs and APIs, so looping over an array or mapping some data is obvious. Just like you needn't consult a dictionary to write "Thank you"; you just write it.
And if you're not, you ultimately need to check the docs for the contract of some function or the lifecycle of some object to have any guarantee that the software will do what you want it to do. And after a few days of doing that, you'll be familiar with the constructs anyway.
> make the real decisions ahead of time. encode them into specs. define boundaries, apis, key data structures. identify systems and responsibilities. explicitly enumerate error handling. set hard constraints around security and PII.
The only way to do that is if you have implemented the algorithm before and are now redoing it for some reason (instead of reusing the previous project). If you compare nice specs like the IETF RFCs and the USB standards with their implementations in OSes like FreeBSD, you will see that the implementation often bears little resemblance to how it's described. The spec is important, but getting a consistent implementation based on it is hard work too.
That consistency is hard to get right without getting involved in the details, because it's ultimately about fine-grained control.
If there's one thing I know about users, it's that they're never certain about whatever they've produced.
Or random consultants.
Is "AI said it was a good idea" and worse than "we were following industry trends"?
Based on the stuff I've seen, yes it seems a lot worse.
I can't imagine how bad it would be if your employer started doing this from the leadership. You'd be pressured to get on board or fear getting fired. Nobody would be trying to moderate your thinking except your coworkers who disagree with it, but those people are going to leave or be fired. If you want to keep your job, you have to play along.
Their entire organization has been handed Codex/Claude and told to "go all in on AI" and "automate everything". So the mandate is for people who do not know how to code, and who have the keys to the castle, to unleash these things upon their systems.
This is at a large organization with tens of thousands of employees.
I am waiting with bated breath for the ultimate outcome!
this leads to naive AI adoption, which is the worst of both worlds (no real speedup, outsourced thinking, AI-slop PRs, skill rot).
> your coworkers who disagree with it, but those people are going to leave or be fired.
Personally, I expect that I will be this person soon, probably fired. I'm not sure what I will do for a career afterward, but I sure do hate AI companies now for doing this to my career.
This is the right definition. LLM outputs have undefined truth value. They’re mechanized Frankfurtian bullshitters. Which can be valuable! If you have the tools or taste to filter the things that happen to be true from the rest of the dross.
However! We need a nicer word for it. Suggesting someone has “AI psychosis” feels a bit too impolitic.
Maybe we reclaim “toked out” from our misspent youths?
e.g. “This piece feels a little toked out. Let’s verify a few of Claude’s claims”
[1] here I don't mean to imply agency, just vigor.
Hard agree about ideas, thinking, advice. AI's sycophancy is a huge subtle problem. I've tried my best to create a system prompt to guard against this w/ Opus 4.7. It doesn't adhere to it 100% of the time and the longer the conversation goes, the worse the sycophancy gets (because the system instructions become weaker and weaker). I have to actively look for and guard against sycophancy whenever I chat w/ Opus 4.7.
---
Treat my claims as hypotheses, not decisions.
Before agreeing with a proposed change, state the strongest case against it.
Ask what evidence a change is based on before evaluating it.
Distinguish tactical observations from strategic commitments; don't silently promote one to the other.
If you paraphrase my proposal, name what you changed.
Mark confidence explicitly: guessing / fairly sure / well-established.
Give reasoning and evidence for claims, not just conclusions.
Flag what would change your mind.
Rank concerns by cost-of-being-wrong; lead with the highest-stakes ones.
Say hard things plainly, then soften if needed, not the other way around.
For drafting, brainstorming, or casual questions, ease off and match the task.
---
Beware, though, that it can be an annoying little shit w/ this prompt. Prepare yourself emotionally, because you are explicitly making the tradeoff that it will be annoyingly pedantic, and in return it will lessen (not eliminate) its sycophancy. These system instructions are not foolproof, but they help (at the start of the conversation, at least).
I'm seeing it with lawyers, too. Like, about law. (Just not in their subject matter.) To the point that I had a lawyer using Perplexity to disagree with actual legal advice I got from a subject-matter expert.
You have to think about things objectively no matter what, but when I start researching topics like physics, using AI as suggested in that article has proven very useful.
To me, AI psychosis is the handful of friends I’ve had who have done things like hold a full-on mourning session when a model updates because they lost a friend/lover; the one guy who won’t speak to his family directly but has them talk to ChatGPT first and then has ChatGPT generate his response; or the two who are confident that they have discovered that physics and mathematics are incorrect and have found the truth of reality through their conversations with the models.
But language is a shared technology so maybe the term is being used for less egregious behavior than I was using it for.
My understanding is that regular psychosis involves someone taking bits and pieces of facts or real world events and chaining them into a logical order or interpolating meanings or explanations which feel real and obvious to the patient but are not sufficiently backed by evidence and thus not in line with our widely accepted understanding of reality.
AI psychosis is then this same phenomenon occurring at a more widespread scale due to the next-word-prediction nature of LLMs facilitating this by lowering the activation energy for this to happen. LLMs are excellent at taking any idea, question, theory and spinning a linear and plausibly coherent line of conversation from it.
I mean, isn't that the natural and expected response? An AI company sold them a relationship with a chatbot, and at least some of their social/romantic needs were being met by that product. When what they were paying for was taken from them and changed without warning into something that no longer filled that void in their life, why wouldn't they mourn that loss?
The fact that they were hurt by that sudden loss is totally healthy. It's just part of moving on. The real problem was getting into an unhealthy relationship with a fictitious partner under the control of an abusive company willing to exploit their loneliness in exchange for money.
Hopefully they now know better, but people (especially desperate ones) make poor choices all the time to get what's missing in their lives or to distract themselves from it.
Ah, I forgot about the AI relationship companies. No, this guy was using the browser-based ChatGPT for coding and ended up in love with the model. No relationship was sold at all.
It's so interesting how easy it is to steer LLMs, based on context, into whatever conclusion you engineer out of them. They really are like improv actors, and the first rule of improv is "yes, and".
So part of the psychosis is when these people unknowingly steer their LLM toward their own conclusions and biases, which then get magnified and solidified. It's gonna end in disaster.