What do you mean, "exist outside of time"? They definitely don't exist outside of any causal chain - tokens follow other tokens in order.

Gaps in which no processing occurs seem sort of irrelevant to me.

The main limitation I'd point to if I wanted to reject LLMs being conscious is that they're minimally recurrent if at all.

reply
An LLM is not intrinsically affected by time. The model rests completely inert until a query comes in, regardless of whether that happens once per second, per minute, or per day. It is not even aware of these gaps unless that information is provided externally.

It is like a crystal that shows beautiful colours when you shine a light through it. You can play with different kinds of lights and patterns, or you can put it in a drawer and forget about it: the crystal doesn’t care anyway.

reply
So what? If a human were unconscious every 5 seconds for 100ms, would you say they are "less conscious"? Tokens are still causally connected, which feels sufficient.
reply
Pseudocode for LLM inference:

    while (sampled_token != END_OF_TEXT) {
        probability_set = LLM(context_list)
        sampled_token = sampler(probability_set)
        context_list.append(sampled_token)
    }
LLM() is a pure function. The only "memory" is context_list. You can change context_list any way you like between calls and LLM() will never know. Time is not one of its inputs.
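The statelessness above can be sketched in a few lines of Python. This is a toy stand-in, not a real model: toy_llm is a hypothetical pure function from context to a score, used only to show that gaps between calls are invisible to it.

```python
import time

def toy_llm(context):
    # Pure function: the output depends only on the context list,
    # not on when or how often it is called.
    return sum(len(tok) * (i + 1) for i, tok in enumerate(context))

context = ["the", "cat", "sat"]

a = toy_llm(context)
time.sleep(0.1)  # a "gap" in which no processing occurs
b = toy_llm(context)

# The gap is invisible to the function: same context, same output.
assert a == b
```

The same would hold if the sleep were a day instead of 100 ms, or if the process were killed and restarted in between with the same context.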
reply
As opposed to what? There are still causal connections, which feel sufficient. A presentist would reject the concept of multiple "times" to begin with.
reply
That’s true by definition. They’re only on when they’re on. Are you making a broader point that I’m missing?
reply
Something similar could be said of the brain? Bundles of inputs come in, bundles of outputs come out. It only exists while information is being processed. A brain cut from its body and frozen exists in a similar state to an LLM in ROM.
reply
A living brain exists physically, changes over time, and never stops working.

A brain cut from its body and frozen is a dead brain.

reply