I'm not the poster, but my answer would be that symbolic manipulation is way too expensive. Parallelizing it helps, but long dependency chains are inherent to formal logic. And if a long chain is required, it will always be under attack by a cheaper approximation that only gets 90% of the cases right; since errors compound across the links of a chain, such chains are always going to be brittle.
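To make the brittleness concrete, here's a back-of-the-envelope sketch. It assumes (unrealistically) that each link in the chain is handled by the 90%-right approximation and that per-step errors are independent:

```python
# Back-of-the-envelope: if each step of a reasoning chain succeeds
# independently with probability p, a k-step chain succeeds with p**k.
# (Independence is an assumption; real errors may well correlate.)
p = 0.90  # per-step success rate of the cheap approximation
for k in (1, 5, 10, 20, 50):
    print(f"{k:2d}-step chain: {p**k:.1%} end-to-end success")
```

At 90% per step, a 20-step chain succeeds only about 12% of the time, and a 50-step chain about 0.5% of the time, which is the sense in which long chains lose to cheap approximations.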
(Separately, I think the evidence against humans using symbolic manipulation in everyday life, and the evidence for error-prone but efficient approximations and sloppy methods, is mounting and, to my mind, already overwhelming. But that's probably a controversial take, and the above argument doesn't depend on it.)
Even if we were to obtain evidence clearly demonstrating that LLMs don't reason symbolically, why should we interpret that as an indication of how humans do it? Certainly it would be highly suggestive, but "hey, we've demonstrated that the thing can be done this way" doesn't necessarily mean the thing _is_ being done that way in humans.