> It seems to me that most model providers are not running/testing via the most used backends, i.e. llama.cpp, Ollama, etc., because if they were, they would see how broken their release is.

The models usually run fine on the server targeted backends they’re released for.

Those projects you cited are more niche. They each implement their own ways of doing things.

It’s not the responsibility of model providers to implement and debug every third-party backend out there before they release their model. They release the weights and usually a reference way of running them.

The individual projects that do things differently are responsible for making their projects work properly.

Don’t blame the open weight model teams when unrelated projects have bugs!

Just because I'm curious: which exact models and quantization levels are you using? In my own experience, anything smaller than ~32B is basically useless, and any quantization below Q8 absolutely trashes the models.

Sure, for a single use case you could get by with a ~20B model if you fine-tune it for a very narrow task, but at that point there are usually better solutions than LLMs in the first place. For anything general, 32B+ at Q8 is probably the bare minimum for local models, even the "SOTA" ones available today.
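For context on why 32B at Q8 is already a steep ask for local setups, here's a rough back-of-envelope sketch of weight memory at different quantization levels. The bits-per-weight figures approximate llama.cpp-style block formats (e.g. Q8_0 stores blocks of 32 int8 weights plus an fp16 scale, so roughly (32·8 + 16)/32 = 8.5 bits/weight); they're illustrative assumptions, not exact numbers for any specific model file:

```python
# Back-of-envelope estimate of memory needed for quantized LLM weights.
# Ignores KV cache, activations, and runtime overhead, which add more on top.

def weight_gib(params_b: float, bits_per_weight: float) -> float:
    """GiB needed just to hold the weights of a params_b-billion-param model."""
    return params_b * 1e9 * bits_per_weight / 8 / 2**30

# Approximate bits/weight for common llama.cpp block-quant formats (assumed).
for fmt, bpw in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_0", 4.5)]:
    print(f"32B @ {fmt}: {weight_gib(32, bpw):5.1f} GiB")
```

By this rough math a 32B model at Q8 needs on the order of 30+ GiB for the weights alone, which already rules out most single consumer GPUs, while Q4 fits in ~17 GiB at the quality cost being debated above.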
