And yes. If LLMs disappear, then we need to hire a lot of people to maintain the infrastructure.
Which naturally is a part of the risk modeling.
Not what I asked, but thanks for playing.
> Could you do it again without the help of an LLM?
Well, yes?
What do you think "learning" means? If you cannot do something without the teacher, you haven't learned that thing.
If your child says they've learned their multiplication tables but can't actually multiply any numbers you give them, do they actually know how to do multiplication? I would say no.
It’s quite possible to be deep into solving a problem with an LLM guiding you, where you’re reading and learning from what it says. That's not really so different from googling random blogs and learning from Stack Overflow.
Assuming everyone just sits there dribbling whilst Claude is in YOLO mode isn’t always correct.
> Could you do it again on your own?
Can you see how nonsensical your stance is? You're straight up accusing GP of lying about learning something at an increased rate, OR suggesting that if they couldn't learn it, presumably at the same rate, on their own, they're not learning anything.
That's not very wise to project your own experiences on others.
Not everyone learns at the same pace, and not everyone has the same fault tolerance threshold. In my experience, some people are what I call "Japanese learners", perfecting by watching. They will learn with AI but would never dive in themselves out of fear of getting something wrong, even when they understand most of it. Others, what I call "western learners", will start right away and "get their hands dirty" without much knowledge, and also get it wrong right away. Both are valid learning strategies fitting different personalities.