The LLMs I have tested [1] have terrible world models and intuitions for how actions change the environment. They're also not great at discerning and pursuing the right goals. They're like an infinitely patient five-year-old with an amazing vocabulary.
[1]: https://entropicthoughts.com/updated-llm-benchmark
(more descriptions available in earlier evaluations referenced from there)
If you're doing an RPG, which I guess is where this is most obvious, you track the player and enemy positions, their health, their moods and perhaps top thoughts, and the state of important inanimate objects. If you break down the door, you update the door's state in the document. This is in contrast to just giving the LLM the previous turns and hoping it later realizes the door is broken (purely by statistical completion). A sketch of the idea is below.
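To make that concrete, here is a minimal sketch of explicit state tracking. All the names here (`GameState`, `apply_action`, `build_prompt`) are hypothetical, just to illustrate the pattern: the game engine mutates a structured state document when an action resolves, and the serialized state is fed into every prompt, so the model never has to infer "the door is broken" from old turns.

```python
from dataclasses import dataclass, field
import json

@dataclass
class GameState:
    # Hypothetical state document for an LLM-run RPG session.
    player_pos: tuple = (0, 0)
    player_hp: int = 10
    enemy_hp: dict = field(default_factory=dict)   # name -> hit points
    moods: dict = field(default_factory=dict)      # name -> current mood
    objects: dict = field(default_factory=dict)    # object -> state

def apply_action(state: GameState, action: str) -> None:
    # The engine, not the LLM, records the consequence of an action:
    # breaking down the door flips its entry in the state document.
    if action == "break door":
        state.objects["door"] = "broken"

def build_prompt(state: GameState, player_input: str) -> str:
    # The full serialized state goes into each prompt, instead of
    # hoping the model reconstructs it from the conversation history.
    return (
        "Current world state:\n"
        + json.dumps(state.__dict__, default=str, indent=2)
        + f"\n\nPlayer: {player_input}\nNarrate the result."
    )

state = GameState(objects={"door": "intact"}, enemy_hp={"goblin": 5})
apply_action(state, "break door")
print(build_prompt(state, "I step through the doorway."))
```

The point of the design is that state transitions are deterministic updates to a document, and the LLM is only asked to narrate from that document; nothing about the door's status depends on statistical completion over stale turns.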
It might be too expensive to run, but I would be interested in seeing results from that benchmark for the current crop of SOTA models.