* Continuously updates its state based on sensory data
* Retrieves/gathers information that correlates strongly with historic sensory input
* Is able to associate propositions with specific instances of historic sensory input
* Uses the previous two capabilities to verify/validate its belief in those propositions (a rough sketch of what I mean follows below)
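
To make that list concrete, here's a minimal toy sketch in Python, purely illustrative and not any real system's API (names like `BeliefStore`, `observe`, `associate`, and `verify` are hypothetical): a proposition is only reported as believed when it can be traced back to specific stored episodes of sensory input, and the system answers "I don't know" otherwise.

```python
from dataclasses import dataclass, field

@dataclass
class Episode:
    """One specific instance of historic sensory input (an observation)."""
    timestamp: float
    observation: str

@dataclass
class BeliefStore:
    """Toy model: beliefs are only held with confidence when they can be
    traced back to concrete episodes of remembered sensory input."""
    episodes: list[Episode] = field(default_factory=list)
    # proposition -> indices of episodes that support it
    support: dict[str, list[int]] = field(default_factory=dict)

    def observe(self, timestamp: float, observation: str) -> int:
        """Continuously update state from sensory data."""
        self.episodes.append(Episode(timestamp, observation))
        return len(self.episodes) - 1

    def associate(self, proposition: str, episode_index: int) -> None:
        """Associate a proposition with a specific historic episode."""
        self.support.setdefault(proposition, []).append(episode_index)

    def verify(self, proposition: str) -> str:
        """Validate a belief by checking whether concrete episodes support it."""
        evidence = self.support.get(proposition, [])
        if not evidence:
            # the system can report the limits of its own knowledge
            return "I don't know"
        return f"Believed, supported by {len(evidence)} remembered observation(s)"


store = BeliefStore()
i = store.observe(0.0, "saw rain outside the window")
store.associate("it rained today", i)
print(store.verify("it rained today"))   # supported by a remembered episode
print(store.verify("it snowed today"))   # no supporting episode -> "I don't know"
```

The point of the sketch isn't the data structure itself, it's the last branch: the ability to return "I don't know" falls out of tying beliefs to specific remembered instances.
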
Describing how memories "feel" may confuse the matter, I agree. But I don't think we should be quick to dismiss the argument.
It's pretty obvious that an LLM not knowing what it does or does not know is a major part of why it hallucinates, while humans generally do know the limits of their own knowledge.