They clearly are not conscious; they are just guessing which word should come next.
Anything that looks like intelligence will also look like a prediction machine, because the only alternative is logic hardcoded a priori.
Consciousness is emergent. A human is not conscious by our definition until the moment they are. How will we be able to identify the singularity when it comes? I feel like this is what the article is really addressing.
> LLMs are word prediction engines
Humans can do this too, so what are the missing parts for consciousness? Close a few loops in the learning pipeline and we might be there.
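For what "word prediction engine" means at its simplest, here's a toy bigram sketch. This is an assumption-laden miniature, nothing like a real LLM (which learns conditional probabilities over long contexts with a transformer), but it shows the bare idea of predicting the next word from counts:

```python
from collections import defaultdict

# Toy bigram predictor: count which word follows which in a tiny corpus,
# then return the most frequent follower. Real LLMs learn these
# conditional distributions with neural networks over long contexts;
# this only illustrates the core "predict the next word" idea.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The gap between this and a human (or an LLM) is everything that decides *which* distribution over next words to use, which is where the consciousness debate actually lives.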