Works fine with regular Gemma 3 4B, so I'll assume it's something on Ollama's side. edit: yep, it's text-only for now[1]; would be nice if that were a bit more prominent than buried in a ticket...
I don't feel like compiling llama.cpp myself, so I'll have to wait to try your GGUFs there.
[1]: https://github.com/ollama/ollama/issues/10792#issuecomment-3...
Thank you!