Why do you think they won’t continue to improve?
Because of how LLMs work. I don't know exactly how they're using one for chess, but here's a guess. If you consider the chess game a "conversation" between two opponents, the moves written out would be the context window. So you're asking the LLM, "given these last 30 moves, what's the most likely next move?" That is, you're giving it a string like "1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. ?".

That's basically what you're doing with LLMs in any context: "Here's a set of tokens, what's the most likely continuation?" The problem is, that's the wrong question for a chess move. "Most likely continuation" works great for openings and well-studied move sequences (and there are a lot of well-studied move sequences!). But once the game becomes "a brand new game", as chess streamers like to say when no game in the database matches that set of moves, "what's the most likely continuation from this position?" is no longer the right question.
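To make the failure mode concrete, here's a toy sketch in Python. It is emphatically not a real LLM: it just treats "games the model has seen" as move-token sequences and picks the most frequent next move matching the current prefix, which is a crude stand-in for "most likely continuation". The opening lines are real well-known sequences; everything else (the function name, the tiny corpus) is invented for illustration.

```python
from collections import Counter

# A tiny stand-in for "games the model has seen", as move-token sequences.
games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Ba4"],   # Ruy Lopez, Morphy Defence
    ["e4", "e5", "Nf3", "Nc6", "Bb5", "a6", "Bxc6"],  # Ruy Lopez, Exchange
    ["e4", "e5", "Nf3", "Nc6", "Bc4"],                # Italian Game
]

def most_likely_continuation(context):
    """Return the most frequent next move after `context`, or None when
    it's 'a brand new game' (no seen sequence matches the prefix)."""
    n = len(context)
    counts = Counter(
        g[n] for g in games
        if len(g) > n and g[:n] == context
    )
    return counts.most_common(1)[0][0] if counts else None

# Well-studied opening: the lookup has something sensible to say.
print(most_likely_continuation(["e4", "e5", "Nf3", "Nc6", "Bb5", "a6"]))
# Off-book position: "most likely continuation" has nothing to offer.
print(most_likely_continuation(["e4", "c5", "d4", "h5"]))  # None
```

A real LLM generalizes rather than doing exact prefix matching, so it won't literally return None off-book, but the underlying objective is the same: predict what text tends to follow, not what move is best.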

Non-LLM chess engines are already far beyond human strength (chess isn't solved, but it hardly matters in practice) -- I think chess shows how LLMs' lack of a world model, as Gary Marcus would say, is a problem.
