This is it for me. I am doing much better high-level work since I don't have to spend much time on lower-level work. I have time to think, explore, reframe, and reanalyse.
reply
Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.

I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.

reply
I don't understand how the placebo effect is a human bias. Is it?
reply
At least in some instances you could frame it that way: you believe that doctors and medicine are effective at treating disease, so when you are sick and a doctor gives you a bottle of sugar pills and you take them, you now interpret your state through the lens that you should feel better. It's a bias in how you perceive your condition.

That's not all that the placebo effect is. But it's probably the aspect that best fits the framing as a bias.

reply
It's much more than a bias.

You actually do get better through placebo, as long as your body has a pathway available to it.

It's a really weird effect.

The fight isn't against triggering placebo, it's against letting it muddle study results.

reply
I keep asking it questions, and as I dialogue about the problem, I walk right into the conclusion myself, classic rubber duck. Or occasionally it will say something back, and it’s like “of course! That’s exactly what I’ve been circling without realizing it!”

This mostly happens with things I’ve already had long cognitive loops on myself, and I’m feeling stuck for some reason. The conversation with the model is usually multiple iterations of explaining to the model what I’m working through.

reply
You are not wrong. AI is an amplifier. You chose to amplify something in particular and it works for you. That's good enough. (Give this as a prompt to your AI, as I sense self-doubt here.)
reply
It's so fascinating. I feel the same, but at the same time I feel like most people are getting dumber than they were before AI (and most seem to struggle to adapt to it).
reply
Because most people either don't know how to use it (for multiple reasons, which AI itself can help them solve) or don't have the right mindset going into it (deeper work needed).
reply