If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one. We go entirely by assumption, based on physical similarity and our ability to communicate.
But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.
So the bigger trap is to assume that we know what causes a subjective experience, and what does not.
None of us even knows whether subjective experience exists for more than a single entity.
But the second problem is that it is not clear at all whether that subjective experience in any way matters.
Unless our brains exceed the Turing computable (and we have no evidence that is even possible), either whatever causes the subjective experience is itself within the Turing computable, or it cannot in any way influence our actions.
Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems. The best and closest test we have is whether they appear conscious when we communicate with them.
Wittgenstein kinda blows this burden of proof apart. If you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's a problem with the doubting exercise more than with the subjectivity. Other people possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where the question of others lacking subjective experience actually arises, except the philosophical exercise of doubting and demanding epistemological proofs, which can never satisfy a doubter who refuses to be convinced. Heidegger makes much the same point as Wittgenstein.
The argument you present, like many arguments, breaks down when the topic becomes self-referential. It works for other topics: analyzing subjectivity becomes pedantic when you're asking questions like why the sky is blue.
But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don’t… but what’s suddenly stopping you from applying the same assumptions to an LLM? That is the heart of the problem. People are questioning whether the burden of subjectivity is applicable to LLMs.
Or another way to frame it… what makes humans rise to the level where we can assume their subjectivity is true? What is the mechanism and reasoning behind that? We can no longer simply assume human subjectivity is true because LLMs are now displaying outward behaviors that are indistinguishable from humans.
Also, stop relying on the musings of old-school philosophers. We're now at a point where you can basically classify their ideas as historically foundational but functionally obsolete. Think deeper.
The phrase "the trap of anthropomorphism" betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we're not even speaking the same language anymore when discussing consciousness.
That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to, akin to mistakenly seeing eyes in random patterns in nature.
In the case of LLMs, though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
As I understand it, neuroscience hasn't come up with a clear explanation of thought, much less of mind or consciousness. It seems to me complex pattern matching is as reasonable a cause of consciousness as anything else.
When another human smiles you assume he is happy and not just baring his teeth at you, because that's what you do when you smile. You are "anthropomorphizing" other people. You fall for the same category error on a daily basis when you interact with people; it is not just chimpanzees.
> In the case of LLMs, though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour?
First, we don't know whether LLMs are conscious. People here are talking about the realistic possibility that they are.
Second, the algorithm is much more than a next-word predictor. The intelligence that goes into choosing the next word such that it constructs correct arguments and answers involves a lot more than simple prediction. We know this because the LLM regularly answers questions that require deep understanding of the topic at hand. It cannot token-predict working code in my company's code base without understanding the code.
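To make the quoted question concrete, here's a toy sketch of the difference. Everything in it (the function names, the vocabulary, the scoring rule) is invented for illustration and is nothing like a real LLM; the point is only the contrast: a nearest-neighbour lookup can return a continuation solely for contexts it has memorized verbatim, while even the crudest learned model computes a score for every word in its vocabulary as a function of the context, so it produces an answer for contexts it has never seen.

```python
import math

# Toy, purely illustrative. Real LLMs work like neither function below.

memorized = {
    ("the", "sky", "is"): "blue",
    ("water", "is"): "wet",
}

def nearest_neighbour_predict(context):
    # Lookup-style "prediction": echo a stored continuation for an
    # exactly matching context, and fail on anything unseen.
    return memorized.get(tuple(context), "<unknown>")

def softmax(logits):
    # Normalize raw scores into a probability distribution.
    m = max(logits.values())
    exps = {w: math.exp(v - m) for w, v in logits.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def model_predict(context, vocab=("blue", "wet", "green")):
    # Stand-in for a learned model: score every vocabulary word as a
    # function of the context, then normalize. The scoring rule here
    # (shared characters) is a deliberately silly placeholder for
    # billions of trained parameters, but it is defined for any input.
    chars = set("".join(context))
    logits = {w: float(len(chars & set(w))) for w in vocab}
    return softmax(logits)

print(nearest_neighbour_predict(["the", "sky", "was"]))  # <unknown>
print(model_predict(["the", "sky", "was"]))  # a distribution over the whole vocab
```

The lookup fails the moment the context changes by one word; the model-style predictor still produces a ranked distribution, which is the (heavily simplified) sense in which "next-word prediction" is more than retrieval.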
Third, we do not know what drives human consciousness, but we do know it is model-able by a very complex mathematical algorithm. We know this because we have fairly complete mathematical models at lower resolutions of reality. For example, we can model atoms mathematically. Brains are made of atoms, and because atoms are mathematically model-able, human brains, and thus consciousness, are mathematically model-able too.
The sheer complexity of the LLM is the problem: we cannot have a high-level understanding of it, because its workings cannot be simplified into a few concepts.
*To understand the LLM requires simultaneous understanding of likely billions of concepts and of how all the weights in the LLM interact.*
What you are missing in your analysis is that this is the same reason we don't understand the human brain. The foundational math already exists: we can model atoms mathematically, and since the brain is made of atoms we should be able to model the brain... but we can't, because it is too complex. *To understand the human brain requires simultaneous understanding of likely billions of concepts and of how everything in the brain interacts.*
I italicized two sentences above to help you see the logic. Our thinking here is more foundational than anthropomorphization; the argument has moved far beyond that. You need to think deeper. The key is that we don't understand human brains and we don't understand LLMs. But since the output LLMs produce is very similar to the output produced by the human brain, and since for no logical reason we assume human brains are conscious, what is stopping us from assuming the LLM is conscious?
That's true, but they also often fall into the trap of exceptionalism.
Yeah, probably. At least a little bit.
Are 80,000 bees conscious, or more conscious? Well, they're definitely capable of some emergent behaviours that one bee alone can't achieve.
For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.
You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer, but how did it know the order and which words to use to make the answer correct? You don't actually know. No one does, and it is in that unknown space that we suspect consciousness may lie. Something is there that humanity as a whole cannot understand, and this lack of understanding is exactly the same fundamental lack of understanding we have of how a monkey brain, a dog brain, or even a human brain works. We do not know whether humans, dogs, or monkeys are conscious; you only assume other living beings are conscious because you yourself experience it and simply assume it exists for others. We can't even define what it is, because consciousness is a loaded word, like spirituality.
This is not anthropomorphism. You attribute the bias wrongly. Instead it is a stranger phenomenon among people like you, who can mysteriously only characterize the LLM as a next-token predictor and nothing more, even though the token prediction clearly indicates greater intelligence at work.
The tldr is that we don't actually know, and that consciousness is a highly viable possibility given what we don't know and given the assumptions of consciousness we already make about other living beings with equivalent understanding of complex topics.