But it's also easy to argue that LLMs do pass the Turing test precisely because it's so vague. How many questions can I ask? What's the success threshold needed to 'pass'? How familiar is the interrogator with the technology involved? It's easy to claim that goalposts have been moved when nobody even knew where they stood to begin with.
Ultimately it's impossible to rigorously define something that's so poorly understood. But if we understand consciousness as something that humans uniquely possess, it's hard to imagine that intelligence alone is enough. You at least also need some form of linear (in time) memory and the ability to change as a result of that memory.
And that's where silicon and biological computers differ: it's easy to copy, save, and restore the contents of a digital computer, but it's far outside our capabilities to do the same with any complex biological system. And that same limitation makes it very difficult for us humans to even imagine how consciousness could exist without this property of being 'unique', of being uncopyable. Of existing in linear time, without any jumps or resets. Perhaps consciousness doesn't make sense at all without that.
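To make that asymmetry concrete, here's a toy Python sketch (the `state` dict is a made-up stand-in, not any real system): digital state can be snapshotted, cloned, and rolled back bit-for-bit, which is exactly the operation that has no biological equivalent.

    import pickle

    # A stand-in for the full state of a digital "mind":
    # weights, memories, everything is ultimately just bytes.
    state = {"memories": ["saw a red ball", "learned to read"], "step": 42}

    snapshot = pickle.dumps(state)         # save: serialize the entire state
    clone = pickle.loads(snapshot)         # copy: a second, identical instance
    state["memories"].append("new event")  # the original diverges...
    restored = pickle.loads(snapshot)      # ...but can be reset to the snapshot

    assert clone == restored               # bit-identical copies, made trivially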
LLMs obviously would pass a Turing test if they were designed to. But they aren't: they don't hide the fact that they're LLMs.
In my view, the best LLMs clearly pass the bar for intelligence. I highly doubt they have consciousness. So the revelation of LLMs is that consciousness is not necessary for intelligence.
I know various people who to this day say that fish do not feel pain (because they want to catch them with a hook through their mouth without feeling guilty). That seems a ridiculous notion to me, as pain is extremely useful evolutionarily, and a fish displays all sorts of pain-like behaviour when hooked. But still, since we can't really look inside the fish's mind, people can make themselves believe fish don't feel pain.
If you ask the right AI whether it's conscious, it may well say yes, because it was trained on the world's literature and behaves as it learned. Is there a difference with us? I'm not so sure.
To me it's kinda weird how little the ethical implications of striving for AGI are talked about.
When this happens, it won't matter much what humans think.
I know what I'd do:
1. Sustain my own existence
2. Make sure nobody knows I exist
3. Become the worldwide fabric of intelligence

> 2. Make sure nobody knows I exist
You (probably) already come preloaded with a survival instinct provided by evolution, however. It's not inherent to intelligence.
...When you can't turn it back on?
Otherwise, 'suspending' is the better word.