Why do you think stringing words together is any more a sign of consciousness than Google Maps finding the best available route to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean it is conscious.
reply
Yeah, a while back I read an article with a quote something like “what happened to weather prediction has happened to language.” That's an oversimplification on both sides, but if you think LLMs are conscious, there's good reason to think the GFS weather model is too.
reply
A large part of the problem is what you consider consciousness.

If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.

So the bigger trap is to assume that we know what causes a subjective experience, and what does not.

None of us even know if a subjective experience exists for more than a single entity.

But the second problem is that it is not clear at all whether that subjective experience in any way matters.

Unless our brains exceed the Turing computable (and we have no evidence that this is even possible), either whatever causes the subjective experience is itself Turing computable, or it cannot in any way influence our actions.

Ultimately we know very little about this. We have very little basis for ruling out consciousness in computational systems, and the closest test we have is whether or not they appear conscious when we communicate with them.

reply
“If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one.”

Wittgenstein kind of blows this burden of proof apart. If you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's a problem with the doubting experiment more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, then pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where the question of others not possessing subjective experiences of some kind really arises, except for the philosophical exercise of doubting and demanding epistemological proofs, which can never exist in the face of a relentless and unconvincable doubter. Heidegger arrives at pretty much the same idea as Wittgenstein.

reply
The problem with your thinking here is that we are now creating artificial beings that display and output the same outward signs of subjectivity.

The argument you present, like many arguments, breaks down when the topic becomes self-referential. It makes sense for other topics, since analyzing subjectivity becomes pedantic when asking questions like why the sky is blue.

But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don’t… but what’s suddenly stopping you from applying the same assumptions to an LLM? That is the heart of the problem. People are questioning whether the burden of subjectivity is applicable to LLMs.

Or another way to frame it: what makes humans rise to the level where we can assume their subjectivity is real? What is the mechanism and reasoning behind that? We can no longer simply take human subjectivity for granted, because LLMs now display outward behaviors that are indistinguishable from ours.

Also, stop relying on the musings of old-school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete. Think deeper.

reply
> Just because something can communicate in a way that you can interpret, doesnt mean something is conscious

The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we're not even speaking the same language anymore when discussing consciousness.

reply
I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all; in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.

That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to, akin to mistakenly seeing eyes in random patterns in nature.

In the case of LLMs, though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
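To make the comparison concrete, here is a toy sketch of the two mechanisms being compared. Everything here is invented for illustration (a real LLM learns billions of weights, not a hand-written lookup table), but mechanically, both are "just" a formula picking an output:

```python
import math

# Toy "next word" model: hand-made counts of which word follows which.
# These counts are invented for illustration only.
bigram_counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
}

def next_word(word):
    """Pick the most likely next word from the count table."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    probs = {w: c / total for w, c in counts.items()}
    return max(probs, key=probs.get)

# Toy nearest-neighbour search: find the closest 2-D point to a query.
def nearest(points, query):
    return min(points, key=lambda p: math.dist(p, query))

print(next_word("the"))                   # -> cat
print(nearest([(0, 0), (5, 5)], (1, 1)))  # -> (0, 0)
```

Both functions deterministically map an input to an output via arithmetic; neither looks more "conscious" than the other at this scale, which is the point the comment is pressing on.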

reply
Replace the word chimpanzee with human in your own argument and realize that the same logic applies to other humans.

When another human smiles, you assume he is happy and not just baring his teeth at you, because that's what you do when you smile. You are “anthropomorphizing” other people. You fall for the same category error on a daily basis when you interact with people; it is not just chimpanzees.

> In the case of LLMs, though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour?

First, we don't know whether LLMs are conscious. People here are talking about the realistic possibility that they are.

Second, the algorithm is much more than a next-word predictor. The intelligence that goes into choosing the next word, such that it constructs arguments and answers that are correct, involves a lot more than simple prediction. We know this because the LLM regularly answers questions that require deep understanding of the topic at hand. It cannot token-predict working code in my company's code base without understanding the code.

Third, we do not know what drives human consciousness, but we do know it is modelable as a very complex mathematical algorithm. We know this because we have fairly complete mathematical models for lower resolutions of reality. For example, we can model atoms mathematically. Brains are made of atoms, and because atoms are mathematically modelable, human brains, and thus consciousness, are mathematically modelable too.

The sheer complexity of the LLM is the problem: we cannot have a high-level understanding of it, because that understanding cannot be compressed into a few concepts.

*To understand the LLM requires simultaneously understanding likely billions of concepts and how all the weights in the LLM interact.*

What you are missing in your analysis is that this is the same reason we don't understand the human brain. The foundational math already exists: we can model atoms mathematically, and since the brain is made of atoms we should be able to model the brain… but we can't, because it is too complex.

*To understand the human brain requires simultaneously understanding likely billions of concepts and how all the weights in the human brain interact.*

I italicized two sentences here to help you see the logic. Our thinking is more foundational than anthropomorphization. The argument has moved far beyond that. You need to think deeper.
reply
What makes you certain that human thought is more than pattern matching?

As I understand it, neuroscience hasn't come up with a clear explanation of thought, much less of a mind or consciousness. It seems to me complex pattern matching is as reasonable a cause of consciousness as anything else.

reply
Why does a neuron, which is simply a cell that takes in chemicals and electricity and shits out neurotransmitters, give rise to human intelligence when you have 90 billion of them? Neurons are just chemical state machines; we can model individual ones on a computer. Yet 90 billion of them together make up a human brain, and give rise to consciousness and intelligence. If you get stuck on the next-word-prediction part and ignore the ridiculous scale involved in training a model, you miss the forest for the trees.
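For what it's worth, "we can model individual ones" is not an exaggeration: a leaky integrate-and-fire neuron, one of the simplest standard single-neuron models, fits in a few lines. All the constants below are illustrative, not measured biology:

```python
# Minimal leaky integrate-and-fire neuron (illustrative constants, not
# real biology). The voltage leaks toward rest, integrates input current,
# and "fires" (spikes, then resets) when it crosses a threshold.
def simulate_lif(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the membrane voltage trace and the spike times."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        v += dt / tau * (v_rest - v + i_in)  # leak + integrate
        if v >= v_thresh:                    # threshold crossed: spike
            spikes.append(t)
            v = v_reset                      # and reset
        trace.append(v)
    return trace, spikes

# Constant drive above threshold makes the neuron spike repeatedly;
# no input means no spikes.
trace, spikes = simulate_lif([2.0] * 50)
print(len(spikes) > 0)  # -> True
```

One of these is trivially understandable; the mystery the comment points at is what 90 billion of them do together.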
reply
> It seems to me that humans often fall into the trap of anthropomorphism.

That's true, but they also often fall into the trap of exceptionalism.

reply
There are people who think Google Maps is a tiny bit conscious (the union of computational functionalists and panpsychists), to resolve the dilemma of some magical binary threshold.
reply
When a honey bee does its little dance to communicate to its sisters where the food's at, similarly to Google Maps computing and communicating the shortest path to your destination, is the bee conscious?

Yeah, probably. At least a little bit.

Are 80,000 bees conscious, or more conscious? Well, they're definitely capable of some emergent behaviours that one bee alone can't achieve.

reply
I would caution against deriving too much of your philosophical worldview from a scifi book about posthuman vampires that has been deliberately engineered to make a philosophical point that is most certainly not a consensus.

For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.

reply
Why do you think it's definitely not?
reply
Except we don’t know how those words are strung together. Right? Why don’t you analyze it a little further and stop shutting down your own brain before coming to this superficial conclusion.

You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer your question, but how did it know the order and which words to use in order to make the answer correct? You don't actually know. No one does, and it is in that unknown space that we suspect consciousness may lie. Something is there that humanity as a whole cannot understand, and this lack of understanding is exactly the same fundamental lack of understanding we have of how a monkey brain, dog brain, or even human brain works. We do not know whether humans, dogs, or monkeys are conscious; you only assume other living beings are conscious because you yourself experience it and assume it exists for others. We can't even define what consciousness is, because it is a loaded word, like spirituality.

This is not anthropomorphism; you attribute the bias wrongly. Instead it is a stranger phenomenon among people like you, who can mysteriously only characterize the LLM as a next-token predictor and nothing beyond that, even though the token prediction clearly indicates greater intelligence at work.

The tldr is that we don't actually know, and that consciousness is a highly viable possibility given what we don't know, and given the consciousness we assume in other living beings with equivalent understanding of complex topics.

reply
The mechanistic view gets weirder if you imagine all the states of the system being written down on a giant tape. Not just the "current" state but all the past and future states. What makes this tape not alive or conscious?
reply
You could push the analogy even further and run the thought experiment where every forward pass through an LLM is done with pen and paper, distributed throughout all humanity. Sure, it would take a long time, but the output would be exactly the same; we've just shifted the implementation from GPUs to scribbling things down on paper. If you want to assert that LLMs are “conscious”, then you would have to likewise say this pen-and-paper implementation is conscious, unless you want to claim that a certain clock speed is a necessary condition for consciousness.
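To see why the pen-and-paper claim holds, note that a forward pass is nothing but multiplication, addition, and exponentiation. Here is a one-layer sketch with toy weights (invented for illustration; a real model has billions of them, but each step is this same grade-school arithmetic):

```python
import math

# Toy weights, invented for illustration. Every operation below could
# be carried out by hand; scale is the only difference from a real LLM.
W = [[0.5, -0.2],
     [0.1,  0.8]]   # one tiny weight matrix
b = [0.0, 0.1]      # bias vector

def forward(x):
    """One layer: matrix multiply, add bias, then softmax over outputs."""
    z = [sum(W[i][j] * x[j] for j in range(len(x))) + b[i]
         for i in range(len(W))]
    exps = [math.exp(v) for v in z]   # softmax numerators
    total = sum(exps)
    return [e / total for e in exps]  # probabilities summing to 1

probs = forward([1.0, 2.0])
print(probs)  # two probabilities that sum to 1
```

Chaining thousands of such layers changes the runtime, not the nature of the computation, which is what the thought experiment exploits.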
reply
When we get complete neuronal connection maps (we are getting close for mice, and humans will be done within a decade or two), we could in principle simulate a brain on a computer, or on paper too. Unless you assert something magical like a "soul", these connections are what determine human consciousness. It is one thing to argue that LLMs don't resemble brains, and that if they could be "conscious" they wouldn't be conscious in the sense we are; but asserting that anything understandable can't be conscious won't age well.
reply
the problem with this is I'd strongly argue that you could do this pen-and-paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case.

the notion of consciousness being an experience that other animals/humans share is entirely faith-based.

the only person with evidence of one's consciousness is the person claiming they're conscious.

reply
> the problem with this is I'd strongly argue that you could do this pen-and-paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case.

You're basing your premise on a lack of understanding[1], the GP's premise is based on an exact understanding[2].

You don't see the difference between your premise and the GP's premise?

-----------------

[1] "We don't know how brains actually come up with the things they come up with, like consciousness"; IOW, we don't know what the secret ingredient is, or even if there is one.

[2] "We can mechanically do the following steps using 18th-century tech and come up with the same result as the LLM"; IOW, every ingredient in here is known to us.

reply
Can computers simulate all the laws of physics, even theoretically? We don't have a final theory unifying all the physics frameworks, so I'm not sure that claim can be made. Ex: the standard model and gravity.
reply
I think it is all too easy to dismiss the possibility that Dawkins is way less scientific than he pretends to be, and has possibly acquired a minor form of AI psychosis.
reply
Likely. I'm convinced 'AI psychosis' is a developmental phase that everyone is subject to; it just manifests in ways unique to each person's character. I think part of it is the result of an internal struggle AI evokes, which leads to a new form of humbling no one is exempt from.

Consciousness itself has always seemed to me a silly concept. My whole life I have not come across a simple definition, yet many sophists pin their existence on it.

reply
HN is full of experts who know despite a lack of evidence. It's the strangest thing, because their confidence on this topic is completely authoritative despite total ignorance.
reply
But that's not science, right? Dawkins and his ilk cling to science as a cure for religion, yet if we are to believe that our absence of understanding of consciousness means computers may be conscious, then our absence of understanding of the universe means god may exist.

“Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”

reply