As you say, it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.
So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.
What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.
AI is stochastic, not static and deterministic.
As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, like most of nature, lack both sensory and language systems.
LLMs are deterministic. If you provide the same input to the same GPU, it will produce the same output every time. LLM providers deliberately inject a randomised seed into the inference stack so that the effective input differs on every run, because varied output is more useful (and/or because it gives the illusion of dynamic intelligence by not reproducing the same responses verbatim), but that randomness is not an inherent property of the software (sketched in code below).
2. Not provably so.
3. Even if it were so, it is self-evident that the human brain's programming is infinitely more complex than an LLM's. I am not, in principle, opposed to the idea that a sufficiently advanced computer program would be indistinguishable from human consciousness. But it is evidence of psychosis to suggest that the trivially simple programs we've created today are even remotely close, when this field of software specifically skips anything that programming a real intelligence would look like and instead engages in superficial, statistics-based mimicry of intelligent output.
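To make the determinism point above concrete, here is a minimal sketch (a toy probability table standing in for a real model; every name and number is made up):

```python
import random

# Toy "model": a fixed table of next-token probabilities.
# Purely illustrative; a real LLM derives these from frozen weights.
NEXT_TOKEN_PROBS = {"the": [("cat", 0.5), ("dog", 0.3), ("rock", 0.2)]}

def greedy_decode(prompt):
    # Always pick the highest-probability token:
    # same input -> same output, every single time.
    return max(NEXT_TOKEN_PROBS[prompt], key=lambda pair: pair[1])[0]

def sampled_decode(prompt, seed):
    # Providers sample instead, seeding the RNG differently per request.
    # For a fixed (prompt, seed) pair this is still fully deterministic.
    rng = random.Random(seed)
    tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
    return rng.choices(tokens, weights=weights, k=1)[0]

print(greedy_decode("the"))           # "cat", every run
print(sampled_decode("the", seed=7))  # depends on the seed, not on new "thought"
print(sampled_decode("the", seed=7))  # identical to the previous line
```

The variation users see comes from the seed being swapped per request, not from anything resembling a changing mind.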
Fractals, the Game of Life, the emergent abilities of highly-scaled generative pre-trained transformers.
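The Game of Life case is easy to make concrete; a minimal sketch (pattern and coordinates chosen purely for illustration) of trivially simple rules producing behaviour nobody wrote in explicitly:

```python
from collections import Counter

def step(live_cells):
    # One generation of Conway's Game of Life on an unbounded grid:
    # a cell is alive next turn iff it has 3 live neighbours,
    # or 2 live neighbours and is already alive.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live_cells)}

# A glider: five cells whose pattern "walks" across the grid by itself,
# behaviour that appears nowhere in the rules above.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, shifted one cell diagonally
```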
Consciousness appears to be an emergent property of (relatively) simple matter.
70kg of rocks will struggle to do anything that might look like consciousness, but when a handful of minerals and three buckets of water get together they can do the weirdest things, like wondering why there is anything at all rather than nothing.
IF current AI is conscious, so are trees, rocks, turbulent flows, etc.
The argument being that LLMs are so simple that if you want to ascribe consciousness to them you have to do the same to a LOT of other stuff.
I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.
Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."
> Does my installation of starcraft have consciousness?
Can your installation of StarCraft take in information about the world and then reason about its own place in that world?
(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)
Conflating senses with cognitive awareness of sensory input is a mistake.
Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
Your best argument is that the weights are frozen, because that means it’s not a system that can self-reflect and alter the experience. But I don’t see why that is necessary to have an experience. It seems that I can sense a light and feel its warmth regardless of whether my neurons change. One experience being identical to another doesn’t mean neither was an experience.
LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.
Can such an algorithm reason about itself in relation to others?
No, but an LLM doesn't do that either. An LLM is an algorithm for generating text output that can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols based on a composite of the human-generated text samples in their training data.
LLMs never experienced what their textual output is describing. It's more similar to a pocket calculator calculating symbols in relation to other symbols, except scaled up massively.
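A toy version of that mechanism, for concreteness (a bigram counter over a made-up corpus, vastly simpler than a transformer, but the same family of trick: symbol statistics with no experience behind them):

```python
from collections import Counter, defaultdict

# Stand-in for "human-generated text samples"; entirely made up.
corpus = "i think therefore i am . i think i feel warm .".split()

# Count which token follows which: symbol statistics, nothing more.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    # Emits whatever continuation the corpus makes most frequent. The output
    # can read like a first-person report without anything being experienced.
    return follows[token].most_common(1)[0][0]

print(most_likely_next("i"))     # "think", because the corpus says so
print(most_likely_next("feel"))  # "warm", likewise
```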
That they do it at all is the point, and is what separates them from MP3 encoding algorithms. The "how" doesn't seem to me to be as important as you're suggesting.
You asked a hypothetical above about a different algorithm and now we've ascertained the reasons why that was reductive.
> LLMs never experienced ...
What is experience beyond taking input from the world around you and holding an understanding of it?
How do you know other humans do?
I merely object to the notion that we know how to tell who or what has a consciousness.
I do not pretend. I asked honest questions that clearly neither you nor the previous person are able to answer.