How do you know the sensation of a red photon hitting a cone cell, transduced to the optic nerve through ion channels and processed by pyramidal neurons, is any more or less real than the excitation of electrons in a doped silicon junction activating the latent space of the "red" thought vector? Because we are made of meat?
reply
You’re arguing against the opposite of my position. I am arguing that LLMs have a reasonable basis to be seen as conscious because there is nothing special about biological neural networks.
reply
Sensory input is nothing but data.
reply
That's just reductive semantics. Anything can be described as "nothing but data".
reply
Sensory data is a specific data set that corresponds to phenomena in the world. But to say that LLMs don’t have senses merely because they are linguistic or computational doesn’t follow, since they can take in data that similarly reflects something about the world.
reply
They don't have senses because they don't have a body. It's just a program. Do weights on a hard drive have consciousness? Does my installation of StarCraft have consciousness? It doesn't make any sense.
reply
> They don't have senses because they don't have a body

Surely "having senses" is predicated more on "being able to sense the world around you" than "having a body."

> Does my installation of StarCraft have consciousness?

Can your installation of StarCraft take in information about the world and then reason about its own place in that world?

reply
The weights on your hard drive might have consciousness if they can respond to stimuli in ways other conscious brains do. That’s the whole point of the Turing test: it’s a criterion for when the threshold of reasonable interpretation is crossed.
reply
Bodies aren’t necessary for senses. I can send a picture to Claude. I can send a series of pictures. That’s usually called a sense of vision. I could connect it to a pressure sensor and that would be touch.
reply
There are robots with AI controlling them, so it isn't true that none of them have bodies. They can see, they can move.

(I'm still not sure that that makes them conscious, or if we can even determine that at all, but I don't think that's a fair argument.)

reply
How do you measure this consciousness?
reply
How do you imagine a brain can distinguish data from a real sense and data from another source?
reply
Neural networks can have senses. Hook an LLM up to a thermometer and it will respond to temperature changes.
reply
No, it will respond to tokens telling it about a temperature change. It has no sense of warmth. It cannot be burned.

Conflating senses with cognitive awareness of sensory input is a mistake.

reply
The human brain is a neural network. Your sense of “knowing what warmth is” reduces to the weights of connections between neurons, analogous to the weights in an LLM. What is different about the human brain that warrants saying that the same emergent characteristics for one network are inaccessible to another?
reply
I’m not sure I fully understand the distinction you’re making, or if I do I’m not sure I agree. Concretely, I agree that these are very different mechanisms. Abstractly… I agree that an LLM cannot be burned. I’m not sure I agree, though, that thermoreceptors in the skin causing action potentials to make their way up the spinal cord to the brain is all that different from reading a temperature sensor over I2C and turning it into input tokens.

Edit: what they don’t have, obviously, is a hard-coded twitch response, where the brain itself is largely bypassed and muscles react to massive temperature differentials independently of conscious thought. But I don’t think that defines consciousness either. Ants instinctively run away from flames too.
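To make the comparison concrete, here's a minimal sketch of the "temperature sensor over I2C into input tokens" pipeline. The register format follows common 12-bit I2C sensors like the TMP102 (1/16 °C per LSB, two's complement); the reading here is simulated rather than pulled from a real bus, and the prompt wording is just an illustration of how the reading would reach the model as text.

```python
# Sketch: turning an I2C-style temperature reading into LLM input text.
# The raw value is simulated; a real setup would read it over the bus
# (e.g. with smbus2) and pass the prompt to an actual model API.

def raw_to_celsius(raw: int) -> float:
    """Convert a 12-bit two's-complement reading (1/16 deg C per LSB),
    the register format used by sensors like the TMP102, to Celsius."""
    if raw & 0x800:          # sign bit set: negative temperature
        raw -= 1 << 12
    return raw / 16.0

def temperature_prompt(celsius: float) -> str:
    """Render the reading as text -- the only channel the model 'senses'."""
    return f"Sensor update: ambient temperature is {celsius:.1f} deg C."

raw_reading = 0x190          # 400 * (1/16 deg C) = 25.0 deg C
print(temperature_prompt(raw_to_celsius(raw_reading)))
# -> Sensor update: ambient temperature is 25.0 deg C.
```

Whether tokens produced this way count as a "sense" is exactly the question under debate; the sketch only shows that the plumbing is mundane.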

reply