I use mistral-medium-3.1 for a lot of random daily tasks, along with the vibe cli. In my opinion, Mistral is my preferred 'model vendor' by far at this point. They're extremely consistent between releases, and each one just feels better than the last. I also have a strong personal preference for the output.

I actively use gemini-3.1-pro-preview, claude-4.6-opus-high, and gpt-5.3-codex as well. I prefer each of them for different reasons; however, I usually _start_ with Mistral if it's an option.

reply
Why not Large 3? It's larger and cheaper
reply
Mistral hasn't been in the running for SOTA for quite a while now.
reply
Not at the moment, but a Mistral 4 release seems close, which would likely bridge the gap.
reply
Mistral Small 4 is already announced.
reply
MoE, but in the 120B range. Man, I wish it were an 80B. I have 2 GPUs with 62 GiB of usable VRAM. A 4-bit 80B leaves me some room for a context window, but 120B pushes me into system RAM.
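Rough math behind that, as a sketch: assuming ~0.5 bytes/param for 4-bit weights plus roughly 10% overhead for quantization scales and runtime buffers (KV cache not counted), the numbers and the ~10% overhead figure are ballpark assumptions, not measurements:

```python
def weight_footprint_gib(params_billion: float, bits: int = 4,
                         overhead: float = 1.10) -> float:
    """Estimate weight memory in GiB for a quantized checkpoint.

    Assumptions (illustrative, not exact): `bits`/8 bytes per parameter,
    times an ~10% `overhead` factor for quant scales and buffers.
    KV cache and activations are excluded.
    """
    bytes_total = params_billion * 1e9 * (bits / 8) * overhead
    return bytes_total / 2**30  # bytes -> GiB

for size in (80, 120):
    print(f"{size}B @ 4-bit ~ {weight_footprint_gib(size):.1f} GiB")
```

Under those assumptions an 80B at 4-bit comes out around 41 GiB, leaving ~20 GiB of a 62 GiB budget for KV cache, while a 120B lands around 61 GiB, which fills the 62 GiB almost exactly before any context is allocated.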
reply
Either some Q3 quant, or, since it's a MoE, maybe a REAP-pruned version at Q4 might work (or it could be terrible; I'm not sure about REAP'd models).
reply