upvote
How can I tell what? That current LLMs are not conscious or that AGI/ASI will not require consciousness?
reply
How do you know they aren't conscious of we don't know what consciousness is, and have no test to see if anyone or anything is conscious?

This may seem like a joke, but your answer will likely be in the vain of "conscious things are obviously conscious", which gets us nowhere.

I mean, self motivation and a desire to not be turned off can be programmed into even decades old AIs.

reply
Consciousness is a huge topic and beyond a HN comment, but: My answer to this is that they obviously lack a basic understanding of simple things that any continually conscious being would find trivial. I have spent a lot of time having long form exploratory conversations on a particular topic with AI, and you begin to see how it doesn’t really understand what you’re talking about, it just makes a prediction about what you probably mean.

There is also apparently no real memory; if I tell it to stop doing something today, it’ll agree, then go back to doing it again tomorrow, with no memory of our conversation. This never changes, no matter how many times I ask.

Again we could debate consciousness forever, but in a simple sense, are there any other conscious beings without this sense of continuity? Not that I can think of. And so if everything we call “conscious” is different from an AI, then are we justified in extending it to AI?

reply
So is a person suffering from amnesia conscious if they lack short-term and long-term memory?

Ruling out consciousness or qualia emerging from the inference in an LLM is just as invalid of a take as being 100% certain of its consciousness. We don’t know what consciousness really is, so only thing we can say with certainty is we do not know.

reply
No, by continuity I mean literally moment to moment. Sorry if I didn’t clarify that. Even people with amnesia are still present moment to moment. As far as I know there are no things that we call conscious which have zero continuity.

I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious” in the way an animal is, and never will be. This is a failing of specific language. Maybe the machines can be aware, input data, mimic what we see as consciousness, etc. but the metaphor of consciousness really doesn’t fit. A jet can move faster than an eagle but it’s not moving in the same way. We simply lack a sophisticated enough language to easily differentiate the two.

reply
Doesn’t the LLM experience discrete continuity every time it infers the next token?

> I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious”

This pretty much sums up most arguments for why LLMs aren’t conscious: ”I think” followed by assertions. Only real argument is: science doesn’t quantify consciousness, we cannot quantify consciousness, let’s not assign so much certainty to models clearly exhibiting intelligence not being conscious in some way, to some degree.

reply
I don't think you really understood my point, because you didn't reply to it at all.

I am making a linguistic argument. AI may get as sophisticated as "traditional" consciousness. But this is only "real" consciousness if you are a functionalist and think the output is all that matters.

I disagree and think that "flying" is just a weak generic word that describes both planes and birds, and not some kind of ultimate Platonic Ideal in the world.

Ditto for AI consciousness: it may develop to be as complex as traditional animal consciousness, but I'm not a functionalist, and think it's merely a lack of our sophisticated language that makes us think it's the same thing. It's not. Planes PlaneFly through the air, while birds BirdFly.

reply
I see it as LLMs, AI, whatever, can be intelligent enough to emulate consciousness, appear outside as if it were. But that is not proof it really has a qualia, an experience of existing.

All I am saying we should stop being so certain they are not conscious, since we lack a solid, quantifiable model for consciousness.

reply
As a philosophical zombie myself[0], I'm well aware of how hard it is to define and test consciousness. That's why I tried to clarify what I meant with: desire for self-preservation and intrinsic motivation. Which LLMs clearly lack, don't you agree? Also, I'm not saying that those things couldn't be programmed in, just that so far, they don't seem necessary.

[0] I lack a conscious experience and qualia

reply
How can you tell that you lack conscious experience and qualia?
reply
They assert that they dont have them, in the same way you (presumably) assert that you do have them. Neither have any further evidence and one is not a prioi more likely than the other.
reply
Yep, this basically. I tend to get along well with solipsists.
reply
> desire for self-preservation and intrinsic motivation

I’d be curious about how you’re showing they lack either of those

reply
They don't try to prevent you from deleting them and they don't output anything unless prompted.
reply
"they don't output anything unless prompted"

Unprompted they're not unlike a human sleeping or in a coma. Those states don't preclude consciousness in other states.

reply
That's besides the point though.
reply