But what I see again and again in LLMs is a lot of combinations of possible solutions that are already somewhere on the internet (because that data went in). Nothing disruptive, nothing thought out the way an experienced human in a specific field would. And that's besides all the mistakes/hallucinations.
They are, after all, pattern matching.
A lot of humans have difficulty with the very reality that they are in fact biological machines, and that most of what we do is the same thing.
The funny thing is, although I think we are 'metaphysically special' in our expression, we are also 'mostly just a bag of neurons'.
It's not 'natural' for AI to be creative but if you want it to be, it's relatively easy for it to explore things if you prod it to.
Constantly worrying, "is this a superset? Is this a superset?" is exhausting. Just use the damn tool, and stop arguing about whether this LLM can handle every possible out-of-distribution thing you might care about. If it sucks, don't make excuses for it; it sucks. We don't give Einstein a pass for saying dumb shit either, and the LLM ain't no Einstein.
If there's one thing to learn from philosophy, it's that asking the question often smuggles in the answer. Ask "is it possible to make an unconstrained deity?" and you get arguments about God.