A calculator is different because it is not probabilistic; it executes a fixed procedure. One of these models, when doing math, is more like a learned probabilistic system that has internalized enough of the structure of mathematics that some of its high-probability trajectories look like genuine reasoning.
The difference is that when a human reasoner goes to solve a problem, they'll think "this kind of proof usually goes this way" - consciously applying a known rule or heuristic. The model may produce the same output, and may even appear to approach it the same way, but the mechanism is probabilistic pattern selection rather than explicit rule enforcement.
How is this different from "probabilistic pattern selection"?
Perhaps it’s best if most admit they don’t have the fundamental ways of thinking to even participate in the conversation.
When all nuance is lost, the discussion must end.
Logic is just syntactic manipulation of formulas. By the early '90s, logical reasoning was pretty much solved with classical AI (the last building block being constraint logic programming).
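To make "syntactic manipulation of formulas" concrete, here is a minimal sketch of one propositional resolution step, the kind of symbol-pushing classical theorem provers are built on. The representation (clauses as sets of `(symbol, polarity)` literals) and the function name are illustrative choices, not from any particular system:

```python
def resolve(c1, c2):
    """Return all resolvents of two clauses.

    A clause is a frozenset of literals; a literal is a (symbol, polarity)
    pair. Resolution deletes a complementary pair of literals and unions
    the remainder - pure syntax, no "understanding" required.
    """
    resolvents = []
    for (sym, pol) in c1:
        if (sym, not pol) in c2:
            resolvents.append((c1 - {(sym, pol)}) | (c2 - {(sym, not pol)}))
    return resolvents

# (p ∨ q) and (¬q ∨ r) resolve on q to give (p ∨ r):
c1 = frozenset({("p", True), ("q", True)})
c2 = frozenset({("q", False), ("r", True)})
print(resolve(c1, c2))  # one resolvent: {("p", True), ("r", True)}
```

Iterating this rule to saturation (or to the empty clause) is a complete refutation procedure for propositional logic, which is the sense in which the mechanism really is just formula manipulation.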
If so, what exactly would you call the process by which the intelligent human solves the math problem that he or she does not initially understand?
Whatever you call that process is what a reasoning model does. You don't have to call it "reasoning," of course... unless you want other people to understand what you're talking about.
It's the default, and if we're lucky we harness pieces of it to discern something we're interested in.