I don't know what a "booster" is, but if a model can solve original math problems, then it's reasoning.

If you can come up with a way to do math without reasoning, that would be, in a sense, even more interesting than AI.

reply
A model solving original math problems may look like it's performing human reasoning, but internally the model is choosing the next token based on probabilities it has learned over various patterns and structures. The model knows about correlations between problems, proof techniques, and answer structures, and when it "reasons" it's selecting a high-probability trajectory through that learned knowledge.

A calculator is different because it is not probabilistic; it executes a fixed procedure. One of these models, when doing math, is more like a learned probabilistic system that has absorbed enough mathematical structure that some of its high-probability trajectories look like genuine reasoning.
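To make the contrast concrete, here's a toy sketch (all names hypothetical, nothing resembling a real model): a "calculator" is one fixed procedure, while next-token selection samples from a learned distribution, so different runs can take different trajectories.

```python
import random

def calculator(a, b):
    # Deterministic: the same inputs always produce the same output.
    return a + b

def next_token(distribution, rng):
    # Probabilistic: pick a continuation weighted by learned probabilities.
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

assert calculator(2, 2) == 4  # always 4, every run

rng = random.Random(0)
dist = {"therefore": 0.6, "suppose": 0.3, "hence": 0.1}
print(next_token(dist, rng))  # one high-probability continuation among several
```

The point of the sketch is only the shape of the mechanism: one function always returns the same value; the other returns a sample.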

The difference is that when a human reasoner goes to solve a problem, they'll think "this kind of proof usually goes this way" and then deliberately apply explicit rules. The model may produce the same output, and may even appear to approach it the same way, but the mechanism is probabilistic pattern selection rather than explicit rule-following.

reply
You talk as if problem solving were a supervised (imitation) learning problem. It isn't; it's a reinforcement learning problem: models learn by solving problems and getting rated, so they generate their own training data. A rough optimal budget allocation is 1/3 of cost on pre-training, 1/3 on RL, and 1/3 on inference.
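The loop being described can be sketched in a few lines. Everything here is illustrative stand-in code (the function names, the grader, the problem strings are all made up), not any real training API: the model generates attempts, a reward function rates them, and only highly rated attempts are kept as new training data.

```python
import random

def generate_attempts(problem, n, rng):
    # Stand-in for sampling n solution attempts from the model.
    return [f"{problem}-attempt-{rng.randint(0, 9)}" for _ in range(n)]

def grade(attempt):
    # Stand-in for a reward signal, e.g. checking the final answer.
    return 1.0 if attempt.endswith(("7", "8", "9")) else 0.0

def collect_training_data(problems, rng):
    kept = []
    for p in problems:
        for attempt in generate_attempts(p, n=4, rng=rng):
            if grade(attempt) > 0.5:
                # Model-generated data, filtered by reward, fed back as training data.
                kept.append(attempt)
    return kept

rng = random.Random(42)
data = collect_training_data(["prob1", "prob2"], rng)
print(len(data), "graded attempts kept")
```

This is the structural difference from imitation learning: no human-written solutions appear anywhere in the loop, only problems and a grader.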
reply
> The difference is that when a human reasoner goes to solve a problem, they'll think "this kind of proof usually goes this way" and then deliberately apply explicit rules.

How is this different from "probabilistic pattern selection"?

reply
Because... it's just different, that's all! OK?
reply
I don’t think there’s any evidence that “human reasoning” isn’t also based on probabilistic pattern selection.
reply
It’s amazing simple things have to be reiterated.

Perhaps it’s best if most people admitted they don’t have the fundamental ways of thinking needed to even participate in the conversation.

When all nuance is lost, the discussion must end.

reply
You should leave this site. Comments like this don't belong here; go somewhere else.
reply
> If you can come up with a way to do math without reasoning, that would be, in a sense, even more interesting than AI.

Logic is just syntactic manipulation of formulas. By the early '90s, logical reasoning was pretty much solved with classical AI (the last building block being constraint logic programming).
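The "syntactic manipulation" claim is easy to illustrate with one resolution step, the core inference rule behind classical automated theorem provers. This is a minimal sketch in Python, representing a clause as a frozenset of `(name, polarity)` literals; a real prover adds search, unification, and constraint handling on top.

```python
def resolve(c1, c2):
    # Return all resolvents of two clauses: for each literal in c1 whose
    # negation appears in c2, merge the remaining literals of both clauses.
    out = []
    for (name, pol) in c1:
        if (name, not pol) in c2:
            merged = (c1 - {(name, pol)}) | (c2 - {(name, not pol)})
            out.append(frozenset(merged))
    return out

# Example: from (P or Q) and (not P or R), derive (Q or R).
c1 = frozenset({("P", True), ("Q", True)})
c2 = frozenset({("P", False), ("R", True)})
assert resolve(c1, c2) == [frozenset({("Q", True), ("R", True)})]
```

Nothing in the rule "understands" P, Q, or R; it only matches and rewrites symbols, which is exactly the sense in which this kind of reasoning is purely syntactic.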

reply
So you'll be able to show me the early-90s era program that can solve original IMO-level problems when supplied with the plaintext questions. Right?
reply
If I presented math problems to the best English mathematicians in Chinese, would that mean they aren't able to reason? The plaintext is an arbitrary constraint.
reply
The actual question is, if you presented an undergraduate-level calculus problem to a human who is considered intelligent but who was never given an "understanding" of math in school, would the human be able to solve it? Why or why not?

If so, what exactly would you call the process by which the intelligent human solves the math problem that he or she does not initially understand?

Whatever you call that process is what a reasoning model does. You don't have to call it "reasoning," of course... unless you want other people to understand what you're talking about.

reply
My dear sir, the entire universe is made of things that "do math without reasoning!"

It's the default, and if we're lucky we harness pieces of it to discern something we're interested in.

reply