> No, they have "attention". There is unique logic going on in the deep layers of the neural network.

Any specifics? That doesn't establish that they aren't sentence generators. And it's well known that LLMs routinely produce perfectly grammatical sentences with no underlying logic whatsoever.
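For concreteness, the "attention" being invoked is just a differentiable weighted average: each token's output is a mix of value vectors, weighted by query-key similarity. A minimal numpy sketch (variable names and the toy shapes are illustrative, not any particular model's implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d)) V -- the core attention operation."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # pairwise token similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # each row sums to 1
    return weights @ V                             # weighted mix of values

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Whether this mechanism amounts to "unique logic" or just a more sophisticated way of conditioning next-token statistics is exactly the point in dispute.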

> These simple networks take in raw pixels and somewhere in the many layers recognize "curves" and "edges" and then "circles" and "boxes" and whatnot and eventually "digits".

That sounds like a form of anthropomorphizing. My understanding is that what neural networks actually compute in their deep internal layers remains a largely open problem.

> I think the oversimplified argument of them just being stochastic sentence machines mostly comes from people who don't understand how they work.

That's effectively an ad hominem (dismissing the criticism by questioning the critics' understanding rather than addressing it), so it's not a strong argument.
