My understanding is no. But AGI isn't well defined, and the definition keeps evolving, which makes the assessment pretty much impossible.
reply
What can a SOTA LLM not answer that the average person can? It's already more intelligent than any polymath that ever existed; it just lacks motivation and agency.
reply
Good question. I would guess no - but it could help you build one. Am I mistaken?
reply
They could help you build an AGI if someone else has already built AGI and published it on GitHub.
reply
I see this statement all the time and it's just strange to me. Yes, the LLMs struggle to form unique ideas - but so do we. Most advancements in human history are incremental. Built on the shoulders of millions of other incremental advancements.

What I don't understand is how we quantify our own ability to create something novel, truly and uniquely novel. We're discussing the LLMs' inability to do that, yet I don't feel I have a firm grasp on what we even possess there.

When pressed, I imagine many folks would immediately claim they can create something never done before: some weird random behavior or noise or drawing or whatever. But many times it's just adjacent to existing norms, or constrained by inversion, defined entirely by not matching those norms.

In a lot of cases our incremental novelties feel, to some degree, inevitable. As the foundations of advancement get closer to the new thing being developed, it can even start to look obvious. I suspect this form of novelty is something LLMs are capable of.

So for me the real question is: at what point is an innovation so far ahead that it no longer feels like the natural next step? And of course, are LLMs capable of that?

I suspect that for humans this level of true innovation is effectively random. A genius is simply more likely to make these "random" connections because they have more data to connect. But it's random nonetheless, as ideas of this kind often arrive without explanation when they aren't built on prior art.

So yeah... thoughts?

reply
I really love Andrej Karpathy's take on LLMs: not intelligence or sentience, but a kind of cortical tissue.

It should be clear from working with LLMs over the past 4 years that they are not conscious.

Andrej's appearance on the Dwarkesh podcast is great.

reply
No, I think that's accurate. They seem more like an oracle to me. Or, as someone here put it, it's a vectorization of (most/all?) human knowledge, which we can play back in various permutations.
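
To make the "vectorization" framing concrete, here's a toy sketch of the idea: store texts as vectors, then "play back" whichever stored knowledge lands nearest a query. The bag-of-words embedding is a stand-in I picked for illustration; real models learn far richer representations.

    # Toy sketch: "vectorized knowledge" as nearest-neighbor playback.
    # Bag-of-words counts stand in for learned embeddings.
    import math
    from collections import Counter

    def embed(text: str) -> Counter:
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * \
               math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    knowledge = [
        "water boils at 100 degrees celsius at sea level",
        "the capital of france is paris",
        "go is a board game played on a 19x19 grid",
    ]

    query = "what temperature does water boil at"
    best = max(knowledge, key=lambda k: cosine(embed(query), embed(k)))
    print(best)  # plays back the closest stored fact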
reply
LLMs and human intelligence overlap, but they are not the same. What LLMs show is that we don't need AGI to be impressed. For example, LLMs are not good at playing games such as Go [1].
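
If you want to poke at this yourself, here's a rough sketch of the kind of probe such evaluations run: ask the model for a move, then check it against the rules. query_llm is a stub I made up (swap in whatever chat API you use), and the legality check is deliberately minimal: on-board and empty-point only, no captures, ko, or suicide, and it ignores Go's traditional skip of 'I' in column labels.

    # Rough probe: ask a model for a Go move, check basic legality.
    # query_llm is a stub; the rules check is deliberately minimal
    # (on-board and empty point only; no captures, ko, or suicide).

    def query_llm(prompt: str) -> str:
        # Stub: replace with a call to whatever chat API you use.
        return "D4"

    def parse_move(text: str) -> tuple[int, int]:
        # Assumes moves like "D4"; ignores Go's usual skip of 'I'.
        text = text.strip()
        col = ord(text[0].upper()) - ord("A")
        row = int(text[1:]) - 1
        return row, col

    def is_legal(board: list[list[str]], row: int, col: int) -> bool:
        size = len(board)
        return 0 <= row < size and 0 <= col < size and board[row][col] == "."

    board = [["."] * 9 for _ in range(9)]
    move = query_llm("We are playing 9x9 Go. You are black, the board "
                     "is empty. Reply with a single move like D4.")
    row, col = parse_move(move)
    print("legal" if is_legal(board, row, col) else "illegal")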

[1] https://arxiv.org/abs/2601.16447

reply
I don't see why not, especially with computer use and vision capabilities. Are you talking about their lack of physical embodiment? AGI is about cognitive ability, not physical ability. Think of someone like Stephen Hawking, who had extraordinary general intelligence despite severe physical limitations.
reply