[ChatGPT is bullshit]: https://link.springer.com/article/10.1007/s10676-024-09775-5
Ask it about "Marathon Desert", which does not exist and isn't closely related to something that does exist, and it asks for clarification.
I'm not here to say LLMs are oracles of knowledge, but I think the need to carefully craft specific "gotcha" questions in order to generate wrong answers is a pretty compelling case in the opposite direction. Like the childhood joke of "What's up?" ... "No, you dummy! The sky is!"
Straightforward questions with straight wrong answers are far more interesting. I don't think many people ask LLMs trick questions all day.
It doesn't "assume" anything, because it can't assume; that's not how the machine works.
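For what it's worth, the mechanical point is just autoregressive sampling: given the prompt, the model produces a score for every token in its vocabulary and picks from that distribution, one token at a time. There's no belief state anywhere for it to "assume" with. A minimal sketch of that step (the vocabulary and logits below are made up for illustration, not from any real model):

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token scores after a prompt like "Marathon Desert is in".
# A real model scores tens of thousands of tokens; these four are invented.
vocab = ["Greece", "Mars", "Africa", "[asks for clarification]"]
logits = [2.1, 1.4, 0.9, 0.3]

probs = softmax(logits)
choice = random.choices(vocab, weights=probs, k=1)[0]
print({t: round(p, 2) for t, p in zip(vocab, probs)}, "->", choice)
```

Whether the output reads as a confident wrong answer or a request for clarification is just a matter of which continuation scores highest; nothing in the loop distinguishes the two.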
The Marathon Valley _is_ part of a massive impact crater.