That link doesn't have much affiliation with Qwen or anyone who produces/trained the Qwen models. That doesn't mean it's not good or safe, but it seems quite subjective to suggest it's the latest or greatest Qwen iteration.
I can see huggingface turning into the same poisoned watering-hole as NPM if people fall into the same habits of dropping links and context like that.
I'm saying it's the latest iteration of the finetuned model mentioned in the parent comment.
I'm also not suggesting that it's "the latest and greatest" anything. In fact, I think it's rather clear that I'm suggesting the opposite? As in: how can a small fine-tune produce better results than a frontier lab's work?
The sentiment still applies to the parent comment of yours, though.
That's the idea behind distillation. They are finetuning it on traces produced by Opus. This is a poor man's distillation (and the least efficient kind), and it still works unreasonably well for what it costs.
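For the curious, the "poor man's distillation" loop is basically: collect the teacher's responses to a batch of prompts, then use them as supervised finetuning targets for the student. A minimal sketch of the data-collection half (the `teacher_answer` function here is a hypothetical stand-in for a real API call to the teacher model):

```python
# Sketch of "poor man's distillation" data collection: gather a teacher
# model's responses and format them as chat-style SFT records.
import json

def teacher_answer(prompt: str) -> str:
    # Hypothetical placeholder: in practice this would call the
    # teacher model's API and return its full response/trace.
    return f"<teacher trace for: {prompt}>"

def build_sft_records(prompts):
    """Pair each prompt with the teacher's response in chat format."""
    records = []
    for p in prompts:
        records.append({
            "messages": [
                {"role": "user", "content": p},
                {"role": "assistant", "content": teacher_answer(p)},
            ]
        })
    return records

if __name__ == "__main__":
    prompts = ["Why is the sky blue?", "Prove sqrt(2) is irrational."]
    records = build_sft_records(prompts)
    # One JSON object per line; chat-format JSONL like this is what
    # most SFT trainers expect as input.
    with open("distill_sft.jsonl", "w") as f:
        for r in records:
            f.write(json.dumps(r) + "\n")
```

The student never sees the teacher's weights or logits, only its text outputs, which is why it's the cheapest (and least efficient) form of distillation.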