Yes, it is. LLMs perform logical multi-step reasoning all the time; see math proofs, coding, etc. And whether you call it synthesis or statistical mixing is just semantics. Do LLMs truly understand? Who knows, probably not, but they do more than you make them out to be.
reply
I don't want to speak too much out of my depth here, since I'm still learning how these things work on a mechanical level, but my understanding is that when they "reason" they're more or less having a conversation with themselves: burning a lot of tokens in the hope that the follow-up questions and answers they generate lead to a better continuation of the conversation overall. But just as with talking to a human, you're likely to come up with better ideas when you're talking to someone else rather than just yourself, so the human in the loop seems pretty important for getting the AI to remix things into something genuinely new and useful.
reply
They do not. The "reasoning" is just adding more text in multiple steps, and then summarizing it. An LLM does not apply logic at any point; the "reasoning" features only use clever prompting to make these chains more likely to resemble logical reasoning.
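
Mechanically it amounts to something like the sketch below (rough pseudocode in Python; `generate` is just a placeholder for a single completion call, not any real API). The point is that every "step" is just another sampled continuation appended to the context, with one more sample at the end to produce the answer.

    # Minimal sketch of what a "reasoning" loop boils down to mechanically.
    # `generate` is a stand-in for one completion call to whatever model you
    # use; it is an assumed placeholder, not a real library function.

    def generate(prompt: str) -> str:
        """Placeholder: one sampling pass that returns more text."""
        raise NotImplementedError

    def answer_with_reasoning(question: str, steps: int = 4) -> str:
        transcript = f"Question: {question}\nLet's think step by step.\n"
        for _ in range(steps):
            # Each "reasoning step" is just another sampled continuation,
            # appended to the growing context. No logic is checked anywhere.
            transcript += generate(transcript) + "\n"
        # The final pass summarizes the accumulated text into an answer.
        return generate(transcript + "Therefore, the final answer is:")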

This is still only possible if the prompts given by the user resemble what's in the corpus. And the same applies to the reasoning chain: for it to resemble actual logical reasoning, the same or extremely similar reasoning has to exist in the corpus.

This is not "just" semantics if your whole claim is that they are "synthesizing" new facts. That is your choice of misleading terminology, and it does not apply in the slightest.

reply