However, when I use the 5.2codex model, I've found it to be noticeably slower and to produce worse results (hard to quantify, but I preferred plain 5.2's output).