Intelligence is not as cool as you think it is.
Yes, and that's exactly what they do.
No, none of the problems you gave to the LLM while toying around with them are in any way novel.
Do you not consider that novel problem solving?
I think you are confused about LLMs: they take in context, and that context makes them generate new things; for merely reproducing existing things we already have cp. By your logic, pianos can't be creative instruments because they just produce the same 88 notes.
But I think this specific claim is clearly wrong, if taken at face value:
> They just regurgitate text compressed in their memory
They're clearly capable of producing novel utterances, so they can't just be doing that. (Unless we're dealing with a very loose definition of "regurgitate", in which case it's probably best to use a different word if we want to understand each other.)
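A toy sketch of that point: even a trivial model fit only on existing text can emit word sequences that appear nowhere verbatim in its training data. This is an illustration only, nothing like a real LLM; the corpus and the bigram "model" here are made up:

```python
from collections import defaultdict

# Tiny made-up corpus; the "model" is just a bigram table built from it.
corpus = "the cat sat on the mat the dog sat on the log"
words = corpus.split()

table = defaultdict(list)
for a, b in zip(words, words[1:]):
    table[a].append(b)

def walks(start, depth):
    # Enumerate every word sequence the bigram model can generate.
    if depth == 0:
        yield (start,)
        return
    for nxt in table[start]:
        for rest in walks(nxt, depth - 1):
            yield (start,) + rest

sentences = {" ".join(w) for w in walks("the", 5)}
novel = sorted(s for s in sentences if s not in corpus)
# Some generated sentences, e.g. "the dog sat on the mat",
# never occur verbatim in the corpus.
print(novel)
```

The model stores nothing but statistics of the existing text, yet recombination alone already yields utterances the corpus does not contain, so "regurgitate" can't be the whole story.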
You could imagine that it is possible to learn the algorithms and heuristics that "intelligence" comprises, no matter what the output is. Training for optimal compression of tasks and action-taking could then make intelligence the best solution.
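The link between prediction and compression is concrete: a model that assigns probability p to the next symbol can drive an arithmetic coder that spends about -log2(p) bits on it, so a better predictor is literally a better compressor. A minimal sketch with a made-up unigram model:

```python
import math

# Made-up example: estimate unigram symbol probabilities from a string,
# then measure the code length an ideal coder would achieve with them.
text = "abracadabra"
probs = {c: text.count(c) / len(text) for c in set(text)}

# Shannon code length: -log2 p(c) bits per symbol under the model.
model_bits = sum(-math.log2(probs[c]) for c in text)

# Baseline: a fixed-length code for the 5 distinct symbols needs
# ceil(log2(5)) = 3 bits each.
fixed_bits = len(text) * math.ceil(math.log2(len(set(text))))

print(round(model_bits, 1), fixed_bits)
```

The probabilistic model beats the fixed-length baseline exactly to the extent that it predicts the text well, which is why "it's just compression" and "it learned something about the data" are not opposing claims.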
This is far from a formal argument, but so is the stubborn reiteration of "it's just probabilities" or "it's just compression". This "just" thing is getting more and more capable of solving tasks that are surely not present in the training data in exactly this form.