The model prefers to tell you nothing before it tells you something wrong.

If all LLMs did this, people would trust them more.
