Of course the language abilities of LLMs are not proof of consciousness. If some alien entity built a model that was truly just 10^1000 hard-coded if-statements, one for every possible question, it might seem far better than our best models today, but it would obviously not be conscious.
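
A toy sketch of that thought experiment, in Python (a dict stands in for the wall of if-statements, but it's the same lookup structure; the prompts and canned replies are made up):

    # A "model" that is nothing but a hard-coded lookup table.
    # It produces answers, yet nothing between input and output
    # resembles cognition. Illustrative only; the real thought
    # experiment needs ~10^1000 branches, one per possible question.
    RESPONSES = {
        "What is 2+2?": "4.",
        "Are you conscious?": "Yes, profoundly.",  # convincing output, nothing inside
    }

    def lookup_model(prompt: str) -> str:
        # One hard-coded branch per question, no computation in between.
        return RESPONSES.get(prompt, "I don't know.")

    print(lookup_model("Are you conscious?"))  # -> "Yes, profoundly."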

The problem is that even in the lousiest, Turing-test-failing LLM, there's no guarantee that some subsection of these giant neural nets hasn't replicated the basic computational blocks of consciousness found in something even as simple as a snail.

Here's another question: can LLMs do addition? Behaviorally they often get the sums right, but whether anything inside genuinely implements addition, rather than interpolating memorized patterns, is a question about the internals, not the outputs. Consciousness is the same kind of question.
