upvote
Maybe I do not have a good definition for it.

But what I see again and again in LLMs is a lot of recombinations of possible solutions that already exist somewhere on the internet (because that data went in). Nothing disruptive, nothing thought out the way an experienced human in a specific field would do it. On top of all the mistakes/hallucinations.

reply
Yes, LLMs have a very aggressive regression towards the mean - that's probably an essential quality of theirs.

They are, after all, pattern matching.

A lot of humans have difficulty with the very reality that they are in fact biological machines, and that most of what we do is the same thing.

The funny thing is that although I think we are 'metaphysically special' in our expression, we are also 'mostly just a bag of neurons'.

It's not 'natural' for AI to be creative, but if you want it to be, it's relatively easy to get it to explore things if you prod it.

reply
I think the terminology is just dogshit in this area. LLMs are great semantic searchers and can reason decently well - I'm using them to teach myself a lot of fields. But I inevitably reach a point where I come up with some new thoughts and the LLM can't keep up; then I trust it less and go instead to primary sources and what real people are saying right now, today. Still, I would never have had the time, money, or access to expertise without the LLM.

Constantly worrying "is this a superset? Is this a superset?" is exhausting. Just use the damn tool; stop arguing about whether this LLM can handle every possible out-of-distribution thing you might care about. If it sucks, don't make excuses for it - it sucks. We don't give Einstein a pass for saying dumb shit either, and the LLM ain't no Einstein.

If there's one thing to learn from philosophy, it's that asking the question often smuggles in the answer. Ask "is it possible to make an unconstrained deity?" and you get arguments about God.

reply