So the smart get smarter and the dumb get dumber?
Well, not exactly, but at least for now, with AI being "highly jagged" and unreliable, it pays to know enough NOT to trust it, and indeed to be mentally capable enough that you don't need to surrender to it and can spot the failures.
I think the potential problems come later, when AI is more capable and reliable, and even the intelligentsia perhaps stop questioning its output, and stop exercising and developing their own reasoning skills. Maybe AI accelerates us towards some version of "Idiocracy", where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after having created tools that do the thinking for it?
Pre-AI, a long time ago, I used to think/joke we might go in the other direction - evolve into a pulsating brain, eyes, genitalia and vestigial limbs, as mental work took over from physical - but maybe I got that reversed!
I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.
But we still have System 1, and we survived and reached this stage because of it: even a bad guess can be better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.
Like kids who are never taught to do things for themselves.
People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.
Cars are an essential part of modern life, but the sweet spot for car adoption isn't at either of the extremes.
Yeah, when I was learning in school we weren't allowed electronics for division, and I think I absolutely would be dumber if I had never done that.
> People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.
If you're posting this from America, you're living in a society that is fatter than ever thanks to cars. So there's surely some nuance here: not every technology upgrade is strictly better with no downsides.
https://news.ycombinator.com/item?id=47469767 > The concern isn't that AI reasons differently.
https://news.ycombinator.com/item?id=47469834 > The concern isn't that AI reasons differently.
https://news.ycombinator.com/item?id=47470111 > The problem isn't time.
https://news.ycombinator.com/item?id=47469760 > Airlines have been quietly expanding what they can remove you for. This isn't really about headphones.
https://news.ycombinator.com/item?id=47469448 > Good tech losing isn't new, it's just always a bit sad when it happens slowly
https://news.ycombinator.com/item?id=47469437 > The tool didn't fail here, the person did
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
But then I go running and swimming for fun, and there is no laziness voice there telling me to stop, because I enjoy it. And similarly with AI: I only use it for things I don't care about, like various corporate BS. Maybe the cure for AI-brain is to care about and be passionate about things.
Conversely, does this mean that the kind of people who use AI for everything don't care about anything?
I find that when I think of it as a being named "Claude," like a junior partner who's there to eagerly help me, I get lazy. I think of it as an almost slave-like creature who's there to do everything for me without any regard for itself.
But when I think of it as a tool, as if it's a hammer or something, I feel much less lazy. I think of it as "building something" using a program, not telling "Claude" what to do and expecting it to happen. I even turn off Claude's verbal responses completely sometimes to help with this. 100% impersonal.
I see it as part of the feedback loop: it speeds up some of the mechanical drudgery while not removing any of the semantic problems inherent in problem solving. In other words, there are things machines are good at and things humans are good at - if we each stick to our strengths, we can move incredibly fast.