Not meant to be snarky. It's been two decades now since my first wide-eyed entry into the workforce, moving for new opportunities, meeting new people. It's been great. There are a lot of smart people out there. I also realize that many people I saw as smart simply had access to more content than I did. I still appreciated their sharing; it was enlightening to me. But after 20 years, I look back and it's literally quoting things from smart YouTube videos and regurgitating the latest thought leaders.
We all do this, but like you, what's meaningful to me is the chewing, the dissection and synthesis, coming together to share different perspectives, and so on. I've had those friends too! It's just not 1:1.
Maybe it's something like this: AI allows them to indulge their shallowness/laziness while giving them the impression that they're not doing that.
I also enjoy the series. But sometimes my friends send me things and I'm like, "not gonna read all of that."
Just because your friends don't want to invest the same amount of time that you want to invest in your own personal enrichment doesn't mean they're getting stupid.
> Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
> https://arxiv.org/abs/2506.08872
> Cognitive activity scaled down in relation to external tool use. …
> Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
I read one of his last week (I think?) and didn't like it that much. I read this one despite that because it's quite high on HN for whatever reason.
I don't think everything is lies, and I don't like how he treats an LLM as just some bullshit machine.
It's also way too early to even understand where this is going. We as humans have never had this much compute, or used it in this particular way. It could literally be the road to a utopia or a dystopia. But it's very crazy to experience it.
His article series feels so negative and dismissive that I'm not taking anything from it.
There is so much research, money, and compute behind AI right now that something relevant, better, or new comes out every week or two: 2D and 3D models, new LLM versions, smaller LLMs, faster inference (Nvidia's Nemotron). We don't know how this will continue.
And the weird thing is that he clearly knows plenty about LLMs, yet it still feels so negative and dismissive; it's hard to put a finger on it.
Rather than dismissive, I see it as effort-intensive. The conclusions can be negative, but they've spawned so much discussion, which I think is great.
(FYI, I didn't downvote your comment)
Also, I’m reading this comment thread instead of TFA because I didn’t find the previous part I read that great. And I’m not an AI proponent, more of an AI skeptic.
So my main concern here is that my experience may be a microcosm of a broader shallowing of discussion that correlates with some people's increased use of AI. That worries me.
It's more of a meta point to me. I get that this series isn't landing for some people, yourself included, but the meta-observation is that, given something of roughly equal substantiveness as before, these friends' motivations for long-form content and discussion seem to have atrophied, perhaps largely due to the addition of the AI-summary reality cipher to their lives.
Of course, correlation isn't causation. Maybe they both just got older and lazier, but given their reliance on AI summaries in other recent debates, I'm worried.