Tried the E4B model in Ollama and it's totally broken when interpreting images: the output depends only on the text prompt (and is consistent in that sense), but is otherwise completely wrong, as if the image weren't passed in at all.
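For anyone who wants to reproduce it, something like the sketch below (using the ollama Python client; the model tag and image files are just placeholders for whatever you have locally) is enough to see the image being ignored:

```python
# Repro sketch: send the same text prompt with two different images.
# "gemma3n:e4b" is an assumed tag; substitute the model you pulled.
import ollama

for image in ["cat.png", "city_skyline.png"]:
    response = ollama.chat(
        model="gemma3n:e4b",
        messages=[{
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": [image],
        }],
    )
    # If the image is ignored, both descriptions come back near-identical.
    print(image, "->", response["message"]["content"])
```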

Works fine with regular Gemma 3 4B, so I'll assume it's something on Ollama's side. edit: yep, text-only for now[1]; would be nice if that were a bit more prominent rather than buried in a ticket...

Don't feel like compiling llama.cpp myself, so I'll have to wait to try your GGUFs there.

[1]: https://github.com/ollama/ollama/issues/10792#issuecomment-3...

Oh I don't think multimodal works yet - it's text only for now!

Literally was typing out "Unsloth, do your thing!!" but you are way ahead of me. You rock <3 <3 <3

Thank you!

:) Thanks!

Thanks! What kind of rig do I need?

Likely nothing crazy. My RTX 2080 is pumping out 45 tok/s.
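If you want to check your own numbers, the Ollama API reports eval counts and durations in its response, so a rough tok/s check looks like this (model tag is again just an example):

```python
# Rough throughput check; eval_duration is reported in nanoseconds.
import ollama

response = ollama.chat(
    model="gemma3n:e4b",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
tok_per_s = response["eval_count"] / response["eval_duration"] * 1e9
print(f"{tok_per_s:.1f} tok/s")
```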
What is `jinja` in this context?

The chat template, i.e. the formatting that turns a list of user/assistant messages into the single prompt string the model expects, is stored as a Jinja template in the model's metadata.
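As a sketch of what that looks like (simplified Gemma-style turn markers, rendered with jinja2; not the exact template shipped with any particular model):

```python
# Render a simplified Gemma-style chat template with jinja2.
from jinja2 import Template

CHAT_TEMPLATE = (
    "{% for m in messages %}"
    "<start_of_turn>{{ 'model' if m.role == 'assistant' else m.role }}\n"
    "{{ m.content }}<end_of_turn>\n"
    "{% endfor %}"
    "<start_of_turn>model\n"  # generation prompt: the model replies next
)

prompt = Template(CHAT_TEMPLATE).render(
    messages=[{"role": "user", "content": "Why is the sky blue?"}]
)
print(prompt)
```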