If you state “in 6 months AI will not require that much knowledge to be effective” every year, and it hasn’t happened yet, then every time it has been stated it has been false up to this point.
In 6 months we can come back to this thread and determine the truth value of the premise. I would guess it will be false, as it has been so far.
Three months ago, we didn't have Opus 4.5, which almost everyone is saying is leaps and bounds better than previous models. MCP and A2A are mostly antiquated. We also didn't have Claude Desktop, which is trying to automate work in general.
Three _weeks_ ago, we didn't have Clawdbot/Openclaw, which people are using to try and automate as much of their lives as possible...and succeeding.
Things are changing outrageously fast in this space.
I think this has been true, though maybe not quite as strongly as your quote words it.
The original statement was "Maybe GP is right that at first only skilled developers can wield them to full effect, but it's obviously not going to stop there."
"full effect" is a pretty squishy term.
My more concrete claim (and similar to "Ask again in 6 months. A year.") is the following.
With every new frontier model released [0]:
1. the level of technical expertise required to achieve a given task decreases, or
2. the difficulty/complexity/size of a task that an inexperienced user can accomplish increases.
I think either of these two versions is objectively true looking back and will continue to be true going forward. And the amount it increases by is not trivial.
[0] or every X months to account for tweaks, new tooling (Claude Code is not even a year old yet!), and new approaches.