You're right - hallucinations aren't limited to citations. We see a few failure modes:

Fabricated citations: Case doesn't exist at all

Wrong citation: Case exists but doesn't say what the model claims

Misattributed holdings: Real case, real holding, but applied incorrectly to the legal question

From our internal testing, careful context engineering reduces all three failure modes: once the model is grounded in the relevant source documents, hallucination rates drop substantially.
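For the curious, here is a minimal sketch of what that grounding step can look like. The SourceDoc structure, the build_grounded_prompt helper, and the prompt wording are illustrative assumptions, not our production pipeline:

```python
# Minimal sketch: ground a legal-research prompt in retrieved source documents
# so the model answers only from supplied text. Names and prompt wording are
# illustrative, not an actual production pipeline.

from dataclasses import dataclass


@dataclass
class SourceDoc:
    citation: str   # e.g. "Brady v. Maryland, 373 U.S. 83 (1963)"
    excerpt: str    # verbatim text pulled from the opinion


def build_grounded_prompt(question: str, sources: list[SourceDoc]) -> str:
    """Assemble a prompt that restricts the model to the supplied sources."""
    context = "\n\n".join(
        f"[{i + 1}] {doc.citation}\n{doc.excerpt}"
        for i, doc in enumerate(sources)
    )
    return (
        "Answer the legal question using ONLY the numbered sources below. "
        "Cite sources by number, and say 'not supported by the provided "
        "sources' if none of them answer the question.\n\n"
        f"SOURCES:\n{context}\n\nQUESTION: {question}"
    )


if __name__ == "__main__":
    docs = [
        SourceDoc(
            citation="Brady v. Maryland, 373 U.S. 83 (1963)",
            excerpt=(
                "Suppression by the prosecution of evidence favorable to an "
                "accused upon request violates due process where the evidence "
                "is material either to guilt or to punishment."
            ),
        )
    ]
    print(build_grounded_prompt(
        "What must the prosecution disclose under Brady?", docs
    ))
```

The explicit refusal instruction is the design choice doing the work here: it discourages the model from reaching beyond the supplied sources, which is where the wrong-citation and misattributed-holding errors tend to come from.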

reply
The lawyer can handle hallucinations by reading the underlying case. For example, "Brady exempts the prosecution from turning over embarrassing evidence. See Brady v. Maryland, 373 U.S. 83 (1963)." If you're a lawyer, you know Brady doesn't say this at all. To be sure, you have to read the case. Errors in the citation are like typos: they must still be corrected, but an occasional typo is not the end of the world.
reply
It’s called grounding, and ChatGPT and Gemini both do it by linking to the appropriate sources.
reply