LLMs are one example, but so are random pages on the internet, a bunch of what we get served by the media (mainstream or otherwise), "expert opinions" from biased or sponsored experts, or from experts in a different field, etc., etc.
As the popular quip goes: "It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so."
With LLMs, we actually do get the warnings. ChatGPT's footer reads: "ChatGPT can make mistakes. Check important info." Claude's says: "Claude is AI and can make mistakes. Please double-check responses."
For a random website, such disclaimers, if they exist at all, are usually buried deep in the terms of use, not stated up front.