The prompt is unique but the tokens aren't.

Type "owejdpowejdojweodmwepiodnoiwendoinw welidn owindoiwendo nwoeidnweoind oiwnedoin" into ChatGPT and the response is "The text you sent appears to be random or corrupted and doesn’t form a clear question." because the prompt doesn't correlate with anything in the training data.

reply
...? what is the response supposed to be here?
reply
Just using a scaled up and cleverly tweaked version of linear regression analysis...
reply
That is, the probability distribution that the network should learn is defined by which probability distribution the network has learned. Brilliant!
reply
Hamiltonian paths and previous work by Donald Knuth are more than likely in the training data.
reply
The specific sequence of tokens that comprises Knuth's problem together with an answer to it is not in the training data. A naive probability distribution based on counting token sequences present in the training data would assign it zero probability. The trained network represents an extremely non-naive approach to estimating the ground-truth distribution (the distribution corresponding to what a human brain might have produced).
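To make "naive counting" concrete, here's a minimal sketch (my own illustration, not anything from the thread) of a bigram count model: any sequence containing a token pair never seen in the corpus gets probability exactly zero, which is the failure mode being contrasted with a trained network's generalization.

```python
from collections import Counter

def naive_bigram_prob(corpus_tokens, sequence):
    """Probability of `sequence` under pure bigram counting, no smoothing."""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    # Count each token's occurrences as the first element of a bigram.
    unigrams = Counter(corpus_tokens[:-1])
    prob = 1.0
    for prev, cur in zip(sequence, sequence[1:]):
        if bigrams[(prev, cur)] == 0:
            return 0.0  # unseen pair -> zero probability for the whole sequence
        prob *= bigrams[(prev, cur)] / unigrams[prev]
    return prob

corpus = "the cat sat on the mat".split()
print(naive_bigram_prob(corpus, "the cat sat".split()))  # 0.5: both bigrams were seen
print(naive_bigram_prob(corpus, "the mat sat".split()))  # 0.0: "mat sat" never occurred
```

The point of the sketch: under counting alone, novel-but-sensible sequences are indistinguishable from gibberish (both score zero), so estimating the underlying distribution requires something far less naive.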
reply
>the distribution that corresponds to what a human brain might have produced..

But the human brain (or any other intelligent brain) does not work by generating a probability distribution over the next word. Even beings that do not have language can think and act intelligently.

reply