What are you doing when you are not outputting tokens? You have a thought, evaluate it, refine it, repeat.
You’re not wrong that the basic building block is just “next token prediction”, but clearly the emergent behaviors exceed our intuition about what this process can achieve. We’re seeing novel proofs come out of these models. Will this lead to AGI? That’s still TBD.
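
For concreteness, here is a minimal sketch of that building block (the model callable and all names here are illustrative, not any real library’s API): a function maps the token sequence so far to a next-token distribution; you sample from it, append, and repeat.

    import random

    def generate(model, prompt_tokens, max_new_tokens):
        # Assumption: model(tokens) returns a dict mapping each candidate
        # next token to its probability -- the "tokens in, probability
        # distribution out" function under discussion.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            dist = model(tokens)                  # next-token distribution
            candidates = list(dist.keys())
            weights = list(dist.values())
            next_token = random.choices(candidates, weights=weights)[0]
            tokens.append(next_token)             # autoregression: feed it back in
        return tokens

Everything interesting happens inside model(); the outer loop really is that simple, which is exactly why the loop alone tells you so little about the emergent behavior.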
> I genuinely believe that a language model is, in essence, a function which takes in a sequence of tokens and produces a token probability distribution as an output. If this is incorrect, please, correct me.
The pejorative part is the implication that this is a shallow, unthinking process. As I said earlier, you are literally a token generator on HN: you read someone’s comment, do some kind of processing, and output some tokens of your own.
I mean, I do think sometimes even when I’m not typing?
> Will this lead to AGI? That’s still TBD.
This is literally what I have been saying this whole time.
Since we agree, I will consider this conversation concluded.
I bet the guy has never contributed a novel thought that could be argued to have moved anything of real magnitude forward. If that is the case, he ought to stop writing as if he were capable of doing so, since it would mean he has no understanding of what true intelligence is.