In other words, the aim is to get kids used to using AI as soon as possible, so that they do not learn the skills to function without depending on it.
I can see the angle for making sure kids start using it before they develop the skills to become independent of it.
I've been using AI for some legal issues, and it's been incredibly good at searching for case law and summarising the key implications of various statutes - much more efficient than web search, with direct links to the primary sources it finds.
I'm still the one gaming out "What if...?" and "Does that mean...?" scenarios and making sure the answers are grounded in the relevant statutes, and aren't mistakes or hallucinations.
It's not so much a prompting problem as a critical thinking and verbal reasoning problem.
Schools are slow; by the time teachers get around to teaching the sophisticated techniques you use today, those techniques will be obsolete, and the new AI models will require a completely different style of prompting.
As for critical thinking and reasoning, those are even harder to teach. How can teachers teach what they don't know?
And that means you have to learn without AI to understand when the AI is wrong. It's just like how it's dangerous to use a calculator without knowing math, since you won't spot when you've entered things wrongly.
My 6 year old kid who watches me is a better prompter.
http://www.coding2learn.org/blog/2013/07/29/kids-cant-use-co...
It seems to me that if someone can read and think critically, they can RTFM and get much better, much quicker, at computers and AI than people who spent all their time tapping an iPad to watch the next video.
It would take a few sessions at most to take someone from 10 years ago and get them fully up to speed with AI tools, since the tools have almost no learning curve.
I think it comes easily to the sort of people who comment here. Most people have a very vague understanding of computers in general.