>> My experience is that LLMs are generally as smart as the average human 90%+ of the time. A lack of perfection, to me, doesn't mean it isn't AGI.

In my experience, they contain more information than any human, but they are actually quite stupid. Reasoning is not something they do well at all. But even setting that aside, they cannot learn. Inference is separate from training, so they cannot learn anything new beyond working with words in a context window, and even then they only mimic rather than extrapolate anything new.

It's not the lack of perfection, it's the lack of reasoning and learning.

reply
I 100% agree that learning is missing. We make up for it with SKILLS.md and README.md files and RAG of various kinds, and we train the LLMs to work with these structures.
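To make that concrete, here is a minimal sketch of the retrieval side, assuming SKILLS.md and README.md exist in the working directory. The bag-of-words cosine is a stand-in for a real embedding model, and all the function names are made up for illustration:

    # Minimal sketch of context-window "learning": retrieve relevant
    # chunks from local docs and prepend them to the prompt.
    # Bag-of-words cosine stands in for a real embedding model.
    import math, re
    from collections import Counter
    from pathlib import Path

    def chunks(path, size=40):
        # Split a file into fixed-size word chunks.
        words = Path(path).read_text().split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def cosine(a, b):
        # Cosine similarity over word counts.
        ca = Counter(re.findall(r"\w+", a.lower()))
        cb = Counter(re.findall(r"\w+", b.lower()))
        dot = sum(ca[w] * cb[w] for w in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def build_prompt(question, docs=("SKILLS.md", "README.md"), k=3):
        # Rank all chunks against the question, keep the top k.
        pool = [c for d in docs for c in chunks(d)]
        top = sorted(pool, key=lambda c: cosine(question, c), reverse=True)[:k]
        return "Context:\n" + "\n---\n".join(top) + "\n\nQuestion: " + question

    print(build_prompt("how do I run the tests?"))

Nothing is learned here: the same retrieval runs from scratch on every call, which is exactly the distinction being made above.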

I've seen a lot of reasoning in the latest models while engaging in agentic coding. They are often decent at debugging and experimentation, but around 30% of the time they go down wrong paths and just add unnecessary complexity via misdiagnosis.

reply