>> the symbolic approach to modeling the world is fundamentally misguided.

> but how did you arrive at this conclusion? Why are you certain that symbolic modeling (of some sort, some subset thereof, etc.) isn't what ML models are approximating?

I'm not the poster, but my answer would be that symbolic manipulation is far too expensive. Parallelizing it helps, but long dependency chains are inherent to formal logic, and wherever a long chain is required it will always be under attack from a cheaper approximation that gets only 90% of the cases right, so such chains are always going to be brittle.
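
To make the brittleness point concrete, here is a toy back-of-the-envelope sketch (all numbers are made up for illustration): treat a symbolic derivation as a chain of serially dependent steps, each with some per-step reliability and cost, and compare it against a one-shot approximation that is right 90% of the time.

    # Toy illustration (made-up numbers): why long exact chains lose to one-shot approximations.
    # A symbolic derivation needs n dependent steps; each step costs `step_cost` units of work
    # and succeeds with probability `step_reliability` (wrong rule applied, lookup error, etc.).
    # A learned heuristic answers in a single step but is right only ~90% of the time.

    def chain_success(n_steps: int, step_reliability: float) -> float:
        """Probability that every step in an n-step dependency chain succeeds."""
        return step_reliability ** n_steps

    def chain_cost(n_steps: int, step_cost: float) -> float:
        """Total serial cost of the chain (steps depend on each other, so no parallel speedup)."""
        return n_steps * step_cost

    if __name__ == "__main__":
        approx_accuracy = 0.90   # the cheap one-shot approximation
        approx_cost = 1.0        # one unit of work

        for n in (5, 20, 50):
            p = chain_success(n, step_reliability=0.99)
            c = chain_cost(n, step_cost=1.0)
            print(f"{n:>2}-step chain: success={p:.2f}, cost={c:.0f} "
                  f"(approximation: success={approx_accuracy:.2f}, cost={approx_cost:.0f})")
        # Even at 99% per-step reliability, a 50-step chain succeeds only ~61% of the time
        # while costing 50x more, which is the sense in which long symbolic chains are "brittle".
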

(Separately, I think the evidence against humans using symbolic manipulation in everyday life, and in favor of error-prone but efficient approximations and sloppy methods instead, is mounting and, to my mind, already overwhelming. But that's probably a controversial take, and the argument above doesn't depend on it.)

How do LLM advancements further such a view? Couldn't you have argued the same thing before LLMs existed: that evolution is a greedy optimizer, etc., and therefore humans don't perform symbolic reasoning? But that's merely a hypothesis - there's zero evidence one way or the other - and it doesn't seem to me that the developments around LLMs change that with respect to either LLMs or humans. (Or do they? Have I missed something?)

Even if we were to obtain evidence clearly demonstrating that LLMs don't reason symbolically, why should we interpret that as an indication of what humans do? Certainly it would be highly suggestive, but "hey, we've demonstrated this thing can be done this way" doesn't necessarily mean the thing _is_ being done that way.
