Google Gemini often gives an overly lengthy response, and then at the end asks a question. But the question seems designed to move on to some unnecessary next step, possibly to keep me engaged and continue conversing, rather than seeking any clarification on the original question.
reply
This is a great point, because when you ask it (Claude) if it has any questions, it often turns out it has lots of good ones! But it doesn't ask them unless you ask.
reply
That's because it doesn't really have any questions until you ask it whether it does.
reply
This is the most important comment in this entire thread IMO, and it’s a bit buried.

This is the fundamental limitation with generative AI. It only generates, it does not ponder.

reply
You can define "ponder" in multiple ways, but this is why thinking models exist: they turn over the prompt multiple times and iterate on their responses to reach a better end result.
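As a rough illustration of what "iterating on a response" can mean mechanically, here is a minimal self-refinement loop. `call_model` is a hypothetical stand-in for any real LLM API call, not a specific client; real thinking models do this internally rather than through an external loop like this.

```python
def refine(call_model, prompt, rounds=3):
    """Iteratively revise a draft by feeding it back with a critique request.

    call_model(text) -> str is a hypothetical stand-in for any real LLM call.
    """
    draft = call_model(prompt)
    for _ in range(rounds):
        draft = call_model(
            f"Question:\n{prompt}\n\n"
            f"Current draft:\n{draft}\n\n"
            "Point out flaws, then rewrite the answer to fix them."
        )
    return draft

# Toy stand-in so the sketch runs without any API: it just records calls.
calls = []
def toy_model(text):
    calls.append(text)
    return f"draft-{len(calls)}"

result = refine(toy_model, "What is pondering?", rounds=2)
# One initial generation plus two revision passes -> three model calls.
```

Note the loop is still entirely prompt-driven, which is exactly the point being debated below: the iteration is imposed from outside, not initiated by the model.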
reply
Well, I chose the word "ponder" carefully, because I have a specific goal of contributing to this debate productively. That goal came after careful reflection over a few years of reading articles and internet commentary, weighing how it might affect my career, and watching the patterns emerge in this industry. And I did all of that patiently. You could say my context window is infinite, bounded only by when I stop breathing.

That is to say, all of that activity I listed is activity I’m confident generative AI is not capable of, fundamentally.

Like I said in a cousin comment, we can build Frankenstein algorithms and heuristics on top of generative AI, but every indication I've seen suggests that isn't sufficient for intelligence in terms of emergent complexity.

Imagine if we had put the same efforts towards neural networks, or even the abacus. “If I create this feedback loop, and interpret the results in this way, …”

reply
Agreed that feedback loops on top of generative LLMs will not get us to AGI or true intelligence.
reply
what is the difference between "ponder" and "generate"? the number of iterations?
reply
Probably the lack of external stimuli. Generative AI only continues generating when prompted. You can play games with agents and feedback loops but the fundamental unit of generative AI is prompt-based. That doesn’t seem, to me, to be a sufficient model for intelligence that would be capable of “pondering”.

My take is that an artificial model of true intelligence will only be achieved through emergent complexity, not through Frankenstein algorithms and heuristics built on generative AI.

Generative AI does itself have emergent complexity, but I'm bearish: even if we hooked it up to a full human sensory input network, I doubt it would be anything more than a 21st-century reverse mechanical Turk.

Edit: tl;dr Emergent complexity is a necessary but insufficient criterion for intelligence

reply
You can get it to change by putting instructions to ask questions in the system prompt, but I found it annoying after a while.
reply
Because 99% of the time it's not what users want.

You can get it to ask you clarifying questions just by telling it to. And then you usually just get a bunch of questions asking you to clarify things that are entirely obvious, and it quickly turns into a waste of time.
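For anyone who wants to try this, here is a sketch of the kind of instruction being described. The messages-list shape follows the common chat-API convention (system message followed by user message); the actual client call varies by provider and is omitted, and the wording of the instruction is just an illustration.

```python
# Sketch of the "ask clarifying questions" instruction described above.
# Only the message structure is shown; the provider-specific API call
# that would consume it is omitted.
messages = [
    {
        "role": "system",
        "content": (
            "Before answering, ask clarifying questions whenever the "
            "request is ambiguous or has unstated edge cases. If the "
            "request is fully specified, answer directly."
        ),
    },
    {"role": "user", "content": "Write a function that parses dates."},
]
```

The "answer directly" clause is one way to soften the failure mode described here, where the model asks about things that are entirely obvious.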

The only time I find that approach helpful is when I'm asking it to produce a function from a complicated English description, and I have a hunch that there are edge cases I haven't specified that will turn out to be important. It might come back with five or eight questions that force me to think more deeply, and those end up being important decisions that make the code more correct for my purposes.

But honestly that's pretty rare. So I tell it to do that in those cases, but I wouldn't want it as the default. Especially because, even in complex cases like the one I describe, sometimes you just want to see what it outputs before trying to refine it around edge cases and hidden assumptions.

reply