No, but an LLM doesn't do that either. An LLM is an algorithm that generates text output which can simulate how humans describe reasoning about themselves in relation to others. Humans do that by using words to describe what they internally experienced. LLMs do it by calculating the statistical weight of linguistic symbols, based on a composite of the human-generated text samples in their training data.
LLMs have never experienced what their textual output describes. It's closer to a pocket calculator manipulating symbols in relation to other symbols, except scaled up massively.
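If it helps, here's a toy sketch of what "statistical weight of linguistic symbols" means in practice. This is a bigram count model, not a transformer, and the corpus is made up, but the principle carries over: the model's entire "knowledge" is frequencies extracted from text it was fed, with no experience of cats or mats behind it.

```python
from collections import Counter, defaultdict

# Toy bigram language model: a drastic simplification of an LLM,
# but it shows "statistical weights over symbols" concretely.
corpus = "the cat sat on the mat the cat ate".split()

# Count how often each word follows each other word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word), estimated purely from text statistics."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # {'cat': 0.667, 'mat': 0.333}
print(next_word_distribution("cat"))  # {'sat': 0.5, 'ate': 0.5}
```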
That they do it at all is the point, and it's what separates them from MP3 encoding algorithms. The "how" doesn't seem to me to be as important as you're suggesting.
You posed a hypothetical above about a different algorithm, and we've now established why that comparison was reductive.
> LLMs never experienced ...
What is experience beyond taking input from the world around you and holding an understanding of it?