Amazing! It couldn't answer my question at all, but it couldn't answer it incredibly quickly!

Snarky, but true. It is truly astounding, and feels categorically different. But it's also perfectly useless at the moment. A digital fidget spinner.

reply
does no one understand what a tech demo is anymore? do you think this piece of technology is just going to be frozen in time at this capability for eternity?

do you have the foresight of a nematode?

reply
Make it for Qwen 2.5 and I'd buy it.

You don't actually need "frontier models" for Real Work (c).

(Summarization, classification and the rest of the usual NLP suspects.)
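Roughly what I mean, as a sketch: prompt-based classification with a small local instruct model. (Qwen/Qwen2.5-7B-Instruct via the transformers pipeline is just my example here, and it assumes accelerate plus enough VRAM for a 7B model.)

    # Sketch: ticket classification with a small local instruct model.
    # The model id is an example; anything in the 7-9B range with a chat
    # template should work the same way.
    from transformers import pipeline

    clf = pipeline(
        "text-generation",
        model="Qwen/Qwen2.5-7B-Instruct",
        torch_dtype="auto",
        device_map="auto",
    )

    LABELS = ["billing", "bug report", "feature request", "other"]

    def classify(ticket: str) -> str:
        messages = [{
            "role": "user",
            "content": f"Classify this support ticket as one of {LABELS}. "
                       f"Reply with the label only.\n\n{ticket}",
        }]
        # The pipeline accepts chat-style messages; the assistant reply is the
        # last element of generated_text.
        out = clf(messages, max_new_tokens=10)
        return out[0]["generated_text"][-1]["content"].strip()

    print(classify("I was charged twice for my subscription this month."))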

reply
As someone with a 3060, I can attest that there are really, really good 7-9B models. I still use berkeley-nest/Starling-LM-7B-alpha, and that model is a few years old.
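For reference, a 7B model like that fits comfortably on a 3060's 12 GB if you load it 4-bit. A rough sketch, assuming transformers, accelerate and bitsandbytes are installed:

    # Sketch: loading berkeley-nest/Starling-LM-7B-alpha quantized to 4-bit so
    # it fits in a 3060's ~12 GB of VRAM. Assumes a CUDA build of bitsandbytes.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    model_id = "berkeley-nest/Starling-LM-7B-alpha"

    quant = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        quantization_config=quant,
        device_map="auto",
    )

    # Starling follows the OpenChat prompt format; if the repo ships a chat
    # template, apply_chat_template takes care of it.
    messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))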

If we are going for accuracy, the question should be asked multiple times across multiple models to see if there is agreement.
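Something like the following sketch, assuming an OpenAI-compatible local endpoint (Ollama's default port here is my assumption, as are the model names) and simple majority voting over the answers:

    # Sketch: ask the same question of several local models and take a majority
    # vote. Endpoint and model names are assumptions (Ollama's OpenAI-compatible
    # API on its default port, with the models already pulled).
    from collections import Counter
    import requests

    ENDPOINT = "http://localhost:11434/v1/chat/completions"
    MODELS = ["qwen2.5:7b", "mistral:7b", "llama3.1:8b"]
    ROUNDS = 3  # repeat sampling so a single fluke answer doesn't win

    def ask(model: str, question: str) -> str:
        resp = requests.post(ENDPOINT, json={
            "model": model,
            "messages": [{"role": "user", "content": question}],
            "temperature": 0.7,
        }, timeout=120)
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"].strip()

    def consensus(question: str) -> Counter:
        return Counter(ask(m, question) for m in MODELS for _ in range(ROUNDS))

    votes = consensus("How many p's are in 'pepperoni'? Reply with a number only.")
    answer, count = votes.most_common(1)[0]
    print(f"{answer!r} got {count} of {len(MODELS) * ROUNDS} votes")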

But I do think that once you hit 80B, you can struggle to see the difference from SOTA.

That said, GPT-4.5 was the GOAT. I can't imagine how expensive that one was to run.

reply
Yeah, two p’s in the word pepperoni …
reply