Those are the same thing in this case. The latter is just an extremely reductionist description of the mechanics behind the former.
They are certainly marketed as if they think, learn and follow orders, but they do not.
You can always reduce high-level phenomena to lower-level mechanisms. That doesn't mean that the high-level phenomenon doesn't exist. LLMs are obviously able to understand and follow instructions.
And yet they don't, quite a lot of the time, and in ways that are hard to predict or even to notice (their errors can be consequential yet subtle).
They're simply not reliable enough to treat as independent agents, and this story is a good example of why not.
Second, whether they're perfect at following commands is beside the point. They're not just "predicting tokens," in the same way you're not just "sending electrochemical signals." LLMs think, solve problems, answer questions, write code, etc.
It’s the same reason we call the handheld device we carry around to do everything a “phone” without a second thought. We don’t call it a phone because its primary purpose is calling; we call it a phone because the definition of the word “phone” has grown to include “navigates, entertains, takes pictures, etc.”