This is not even wrong.
>Probabilistic prediction is inherently incompatible with deterministic deduction.
And this is just begging the question again.
Probabilistic prediction could very well be how we do deterministic deduction - e.g. if the weights are strong enough and the probability path for those deduction steps is hot enough, the same path gets followed every time, even though the overall process is probabilistic.
Probabilistic doesn't mean completely random.
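To make that concrete, here's a toy sketch (nothing to do with any real model's internals, purely an illustration): a sampler whose weights put almost all the mass on one deduction step behaves deterministically in practice, even though it's sampling.

```python
import random

# Toy illustration only (no relation to any real model): a
# "probabilistic" chooser whose learned weights put nearly all
# mass on a single deduction step.
def next_step(weights):
    steps, probs = zip(*weights.items())
    return random.choices(steps, weights=probs)[0]

weights = {"valid_deduction": 0.999, "noise": 0.001}

# Sampled 10,000 times, the process is probabilistic yet follows
# the same path essentially every time.
samples = [next_step(weights) for _ in range(10_000)]
print(samples.count("valid_deduction") / len(samples))  # ~0.999
```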
https://en.wikipedia.org/wiki/Not_even_wrong
Personally I think "not even wrong" is the perfect description of this argumentation. Intelligence is extremely scientifically fraught. We have been doing intelligence research for over a century and to date we have very little to show for it (and a lot of it ended up being garbage race science anyway). Most attempts to provide a simple (and often any) definition or description of intelligence end up being "not even wrong".
Human intelligence is clearly not logic-based, so I'm not sure why you'd define it that way.
>and yet LLMs, as expected, still cannot do basic arithmetic that a child could do without being special-cased to invoke a tool call.
One of the most irritating things about these discussions is proclamations that make it pretty clear you've not used these tools in a while, or ever. Really, when was the last time you had LLMs try long multi-digit arithmetic on random numbers? Because your comment is just wrong.
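If anyone wants to check instead of proclaiming, a throwaway harness like this is all it takes. ask_llm is a hypothetical placeholder, not a real API; wire it to whatever chat client you actually use:

```python
import random

# ask_llm is a made-up placeholder; swap in a real client
# (OpenAI, Anthropic, a local model, whatever you have).
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to a real chat API")

def arithmetic_accuracy(trials: int = 20, digits: int = 6) -> float:
    correct = 0
    for _ in range(trials):
        # Random multi-digit operands, fresh every trial.
        a = random.randrange(10**(digits - 1), 10**digits)
        b = random.randrange(10**(digits - 1), 10**digits)
        reply = ask_llm(f"Compute {a} * {b}. Answer with digits only.")
        correct += reply.strip() == str(a * b)
    return correct / trials
```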
>What if 1 + 2 is 2 and 1 + 3 is 3? Then we can reason that under these constraints we just made up, 1 + 4 is 4, without ever having been programmed to consider these rules.
Good thing LLMs can handle this just fine I guess.
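For anyone who wants to see what's actually being asked here: the made-up constraints boil down to "adding 1 returns the other number", and a few lines make the deduction explicit (all of this invented to match the quote, obviously):

```python
# The made-up system from the quote: given 1 + 2 = 2 and
# 1 + 3 = 3, the generalization is "1 + n = n", so 1 + 4 = 4.
facts = {(1, 2): 2, (1, 3): 3}

# Check the candidate rule against every stated fact...
assert all(result == b for (a, b), result in facts.items())

# ...then apply it to the new case.
def weird_add(a, b):
    return b if a == 1 else a + b  # made-up rule; ordinary + elsewhere

print(weird_add(1, 4))  # 4, matching the reasoning in the quote
```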
Your entire comment perfectly encapsulates why symbolic AI failed to go anywhere past the initial years. You have a class of people who really think they know how intelligence works, but when you build it that way, it fails completely.
They still make these errors on anything that is out of distribution. There is literally a post in this thread linking to a chat where Sonnet failed a basic arithmetic puzzle: https://news.ycombinator.com/item?id=47051286
> Good thing LLMs can handle this just fine I guess.
LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly. They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, never mind a made-up set of logical rules.
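Concretely, the kind of test I mean looks like this (a toy example, every rule invented on the spot): overlapping rules that disagree, plus an explicit policy for which one wins:

```python
# Toy rule set with deliberate conflicts, resolved by an explicit
# "first match wins" policy. All rules made up for illustration.
RULES = [
    ("seven-is-blue", lambda x: x == 7,     "blue"),
    ("evens-are-red", lambda x: x % 2 == 0, "red"),
    ("default-green", lambda x: True,       "green"),
]

def classify(x):
    for name, pred, label in RULES:
        if pred(x):
            return label  # earlier rules override later ones

assert classify(7) == "blue"   # specific rule beats the default
assert classify(4) == "red"    # 4 matches "red" and "green"; red wins
assert classify(5) == "green"  # only the default matches
```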
I thought we were talking about actual arithmetic, not silly puzzles, and there are many human adults who would fail this, never mind children.
>LLMs can match an example at exactly that trivial level because it can be predicted from context. However, if you construct a more complex example with several rules, especially with rules that have contradictions and have specified logic to resolve conflicts, they fail badly.
Even if that were true (have you actually tried?), you do realize many humans would also fail once you did all that, right?
>They can't even play Chess or Poker without breaking the rules despite those being extremely well-represented in the dataset already, never mind a made-up set of logical rules.
LLMs can play chess just fine (99.8% legal move rate, ~1800 Elo):
https://arxiv.org/abs/2403.15498
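For what it's worth, the legality half of that number is easy to reproduce yourself: replay model-proposed moves with python-chess and count how many parse as legal. The sample game below is obviously illustrative; in practice you'd feed in model output.

```python
import chess  # pip install python-chess

# Replay SAN moves and count how many are legal on the board
# as it stands at that point in the game.
def legal_move_rate(games: list[list[str]]) -> float:
    legal = total = 0
    for moves in games:
        board = chess.Board()
        for san in moves:
            total += 1
            try:
                board.push_san(san)  # raises ValueError if illegal
                legal += 1
            except ValueError:
                break  # abandon the game after the first illegal move
    return legal / total if total else 0.0

print(legal_move_rate([["e4", "e5", "Nf3", "Nc6"]]))  # 1.0
```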
I don't like to throw the word intelligence around, but when we talk about intelligence we are usually talking about human behavior. And there is nothing human about being extremely good at curve fitting in a multi-parametric space.