>which doesn't explain how they can reason in novel problems

Can they?

reply
Generative AI can reason about novel problems in much the same way that bands keep writing new melodies after all these years, using the same instruments as every other band in their genre.
reply
I do agree, in about the same way as people.
reply
> Either you're saying that LLMs have no "thoughts", and just regurgitate everything

I am exactly saying that they have no thoughts or even "thoughts".

> which doesn't explain how they can reason in novel problems

Pre-LLM software has been solving novel problems for decades. Thought, novel or otherwise, is required to reason, but it isn't required to solve a problem.

For example, exhaustive search algorithms can solve novel equations and even complete simple mathematical proofs, yet they don't think at all.
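
As a concrete illustration (a minimal sketch in Python; the function name, the bound, and the example equation are all mine, purely for illustration), here's a brute-force solver that "solves" whatever equation you hand it, with zero thought involved:

    from itertools import product

    def solve_exhaustively(equation, bound=100):
        """Brute-force search for integer pairs (x, y) satisfying `equation`.

        No reasoning involved: it tries every candidate in a bounded grid
        and keeps the ones that happen to work.
        """
        return [(x, y)
                for x, y in product(range(-bound, bound + 1), repeat=2)
                if equation(x, y)]

    # A "novel" equation the solver has never seen before: x^2 + y^2 = 365.
    print(solve_exhaustively(lambda x, y: x**2 + y**2 == 365))

It happily finds (2, 19), (13, 14), and the rest, for this or any other equation over its search space, and at no point does anything resembling thought occur.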

> "how do we know your comment isn't you regurgitating an HN opinion"?

You don't, and I don't care whether you do. The value of my comment isn't its novelty or whether it was truly reasoned, which is also why LLMs sometimes do create valuable output.

In fact, the output of a reasoning machine (whether a human brain or, someday, true AGI) isn't determined by the fact that it reasons: a non-reasoning machine and a reasoning machine can produce exactly the same output.
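
A toy sketch of that point (the function names and the squaring example are mine, purely illustrative): one function computes its answer, the other only regurgitates memorized answers, and by output alone you can't tell them apart:

    def reasoned(x: int) -> int:
        """Arrives at the answer by actually computing it."""
        return x * x

    # Pure regurgitation: a lookup table, no computation at all.
    MEMORIZED = {x: x * x for x in range(10)}

    def regurgitated(x: int) -> int:
        return MEMORIZED[x]

    # On this domain the two are indistinguishable from the outside.
    assert all(reasoned(x) == regurgitated(x) for x in range(10))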

The reason I know LLMs don't have thoughts is that I use them many times every day, and they are very clearly pattern machines. They don't even begin to seem rational, human, or knowledgeable. It's sometimes possible to find near-verbatim sources for their outputs.

reply
As far as I know, no current-generation AI model "thinks". It's all processes trained against a body of work generated by actual thinkers, plus a bevy of smoke and mirrors to fill in the gaps.

It's amazing that it's as good as it is, given how far it is from thinking, but if you threw something genuinely novel at it, all it would do is confidently word-salad a response.

reply