Wait, what? So a robot that is accurately copying the actions of an intelligent human is intelligent?
If it's basically just being a puppet, then no. You tell me what Claude Code is more like: a puppet, or a person?
But that is the key insight: how can you tell when an imitation of intelligence becomes the real thing?
If the idea is that something cannot accurately replicate the entirety of intelligence without being intelligent itself, then perhaps. But that isn't really what people are talking about with LLMs, given their obvious limitations.