"Large enough" is usually somewhere between 5 and 10% of the advertised context window (e.g., roughly 6k to 13k tokens for a 128k-context model).
reply
This doesn't mean there's no subtle accuracy drop on negations. Negations are inherently hard for both humans and LLMs because they expand the space of possible answers; this is a pretty well-studied phenomenon. All these little effects show up when the model is already overwhelmed by the complexity of the context; they won't clearly appear on trivial prompts well within the model's capacity.
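A toy sketch of the "expands the space of possible answers" point (the 100-item answer pool and the helper are mine, purely illustrative): a positive constraint pins down one candidate, while its negation leaves everything else open, so each negation multiplies what the reader has to keep track of.

```python
# Hypothetical finite answer space of 100 candidates.
candidates = set(range(100))

def satisfying(value, negated, pool):
    """Subset of `pool` consistent with one (possibly negated) equality constraint."""
    if negated:
        return {x for x in pool if x != value}
    return {x for x in pool if x == value}

positive = satisfying(42, negated=False, pool=candidates)
negative = satisfying(42, negated=True, pool=candidates)
print(len(positive))  # 1  ("the answer is 42" leaves one possibility)
print(len(negative))  # 99 ("the answer is not 42" leaves all the rest)
```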
reply
I've noticed this in Latin too.

Like, in Latin, the verb usually comes at the end, so in that sense it's structured like how Yoda speaks.

So, especially with Cato, you kinda get lost pretty easily along the way in a sentence. The 'not's very much get forgotten while you're waiting for the verb.

reply