Can they?
That's exactly what I'm saying: they have no thoughts, or even "thoughts".
> which doesn't explain how they can reason in novel problems
Pre-LLM software has been solving novel problems for decades. Thought is required to reason, but it isn't required to solve a problem, even a novel one.
For example, there are exhaustive algorithms that can solve novel equations and even complete simple mathematical proofs, but they don't need to think.
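A minimal sketch of what I mean, in Python (the equation and the search bound are made up purely for illustration):

    from itertools import product

    def solve(predicate, bound):
        """Return every (x, y) in [-bound, bound]^2 that satisfies predicate."""
        candidates = range(-bound, bound + 1)
        return [(x, y) for x, y in product(candidates, repeat=2)
                if predicate(x, y)]

    # A "novel" equation: the solver neither knows nor cares what it means.
    print(solve(lambda x, y: x**3 + y**2 == 2024, bound=50))
    # -> [(-1, -45), (-1, 45), (7, -41), (7, 41), (10, -32), (10, 32)]

Nothing in there thinks; it just enumerates. Smarter solvers (SAT solvers, computer algebra systems, proof assistants) are the same idea with better bookkeeping.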
> "how do we know your comment isn't you regurgitating an HN opinion"?
You don't, and I don't care whether you do. The value of my comment isn't its novelty or whether it was truly reasoned, which is also why LLMs sometimes produce valuable output.
In fact, the output of a reasoning machine (whether a human brain or some future true AGI) doesn't prove that reasoning happened: a non-reasoning machine and a reasoning machine can produce identical output.
The reason I know LLMs don't have thoughts is that I use them many times every day, and they are very clearly pattern machines. They don't even begin to seem rational, human, or knowledgeable, and it's sometimes possible to find near-verbatim sources for their outputs.
It's amazing that they're as good as they are given how far they are from thinking, but throw something genuinely novel at one and all it will do is confidently serve up word salad.