I'd recommend checking out the full conclusions section. What I can tell you is that with LLMs, the relationship between model size and behavior is never a simple linear one. There's always some balance you have to strike, since they really do operate on a changing-anything-changes-everything basis.

excerpt:

Claim: Avoiding hallucinations requires a degree of intelligence which is exclusively achievable with larger models.

Finding: It can be easier for a small model to know its limits. For example, when asked to answer a Māori question, a small model which knows no Māori can simply say “I don’t know” whereas a model that knows some Māori has to determine its confidence. As discussed in the paper, being “calibrated” requires much less computation than being accurate.
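
To make that last sentence concrete, here's a toy sketch of my own (not from the paper, with made-up numbers): a predictor that only knows the overall base rate of "yes" answers is already perfectly calibrated while doing zero per-question work, yet its accuracy is stuck at the majority-class rate. Being accurate would require actually knowing each answer.

```python
# Toy sketch (my own, not from the paper): calibration is a much weaker
# requirement than accuracy. A predictor that only knows the base rate is
# perfectly calibrated but no better than always guessing the majority class.
import random

random.seed(0)

BASE_RATE = 0.3  # 30% of questions have answer "yes"
labels = [1 if random.random() < BASE_RATE else 0 for _ in range(100_000)]

def expected_calibration_error(confidences, labels, bins=10):
    """Bin predictions by confidence and average |confidence - hit rate|."""
    buckets = [[] for _ in range(bins)]
    for conf, y in zip(confidences, labels):
        buckets[min(int(conf * bins), bins - 1)].append((conf, y))
    ece = 0.0
    for bucket in buckets:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        hit_rate = sum(y for _, y in bucket) / len(bucket)
        ece += len(bucket) / len(labels) * abs(avg_conf - hit_rate)
    return ece

# "Cheap" predictor: reports P(yes) = base rate for every single question.
# It needs no per-question knowledge at all.
confidences = [BASE_RATE] * len(labels)
predictions = [1 if c >= 0.5 else 0 for c in confidences]

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"ECE:      {expected_calibration_error(confidences, labels):.3f}")  # ~0.00
print(f"Accuracy: {accuracy:.3f}")  # ~0.70, i.e. just the majority class
```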

Hallucinations are not a malfunction or some process outside the model's normal functioning.

They are simply outputs we happen to find unhelpful, but which are otherwise optimal given the training data, the context, and the model's precision and parameters.
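
A toy sketch of what I mean (hypothetical logits and a made-up country, purely illustrative): the "hallucination" falls out of the exact same softmax-and-decode step that produces a correct answer. Nothing special happens on the wrong ones.

```python
# Toy sketch with hypothetical numbers: a "hallucination" is just ordinary
# decoding over whatever distribution the model learned. There is no separate
# failure mode; the same argmax/sampling step produces good and bad answers alike.
import math

# Hypothetical logits the model assigns to completions of
# "The capital of the fictional country Zubrowka is ...".
# It has never seen a true answer, so it spreads mass over plausible-sounding
# city names; "I don't know" was rarely rewarded in training, so it gets little mass.
logits = {
    "Lutz": 2.1,           # plausible-sounding, wrong
    "Nebelsbad": 1.8,      # plausible-sounding, wrong
    "Zubrow City": 1.5,    # plausible-sounding, wrong
    "I don't know": -1.0,  # under-represented in the training data
}

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    z = max(logits.values())
    exps = {k: math.exp(v - z) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

probs = softmax(logits)
for answer, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{p:5.1%}  {answer}")

# Greedy decoding, i.e. the most probable output under the learned
# distribution, confidently returns a made-up capital. Nothing malfunctioned.
print("greedy:", max(probs, key=probs.get))
```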

One robot's "hallucination" is another robot's "connecting the dots" or "closing the circle".