It shows these LLMs don't understand what's necessary for washing your car. But I don't see how that generalizes to "LLMs do NOT 'understand' anything".
What's your reasoning, there? Why does this show that LLMs don't understand anything at all?
Do we need a new dictionary word that acts as a synonym for "understanding" specifically for non-human actors? I don't see why, personally, but I guess a case could be made.
IMHO 'understanding' in the usual human sense requires thinking, and however good and fast-improving LLMs are, I don't think anyone would suggest that any of them has become sentient yet. They can infer things from their training data better and better, but they do not 'understand' anything.
This is a deep and complex topic, and has been for decades.