The problem is just that even in the lousiest, Turing-test-failing LLM, there's no guarantee that some subsection of these giant neural nets hasn't replicated the basic computational building blocks of consciousness found in something even as simple as a snail.
Here's another question: can LLMs do addition?
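One way to make that question concrete is to just measure it: sample random addition problems, ask the model, and score exact matches. Here's a minimal sketch; `ask_model` is a hypothetical stand-in for whatever LLM API you actually use, and the digit count is an arbitrary knob.

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; wire up your own API here."""
    raise NotImplementedError

def addition_accuracy(trials: int = 100, digits: int = 6) -> float:
    """Fraction of random d-digit addition problems answered exactly."""
    correct = 0
    for _ in range(trials):
        a = random.randrange(10 ** digits)
        b = random.randrange(10 ** digits)
        reply = ask_model(f"What is {a} + {b}? Answer with the number only.")
        if reply.strip() == str(a + b):
            correct += 1
    return correct / trials
```

The interesting part isn't the harness, it's the curve you get when you sweep `digits` upward: exact-match accuracy on small operands tells you very little about whether anything inside the net is actually carrying digits.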