upvote
Strongly agree. Gemma3:27b and Qwen3-vl:30b-a3b are among my favorite local LLMs and handle the vast majority of translation, classification, and categorization work that I throw at them.
reply
What HW are you running them on? Are you using Ollama?
reply
I'm using the stock llama-server that ships with Gerganov's llama.cpp, running on a headless machine with a 16 GB NVIDIA GPU. Ollama is a bit easier to ease into, though, since it has a preset model library.

https://github.com/ggml-org/llama.cpp
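For reference, a headless llama-server setup like this boils down to one command. This is a minimal sketch; the model filename is a placeholder for whatever GGUF you've downloaded, and the flag values are examples to adjust for your hardware:

```shell
# Launch llama.cpp's OpenAI-compatible server on a headless box.
llama-server \
  -m ./models/some-model-q4_k_m.gguf \
  --host 0.0.0.0 \
  --port 8080 \
  -ngl 99 \
  -c 8192
# -ngl 99 offloads all layers to the GPU (fits if the quantized model
# is under your 16 GB of VRAM); -c sets the context window size.
```

Any OpenAI-compatible client can then point at http://<machine>:8080/v1.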

reply
What sort of tasks are you using self-hosting for? Just curious as I've been watching the scene but not experimenting with self-hosting.
reply
Not OP, but one example: recent VL models are more than sufficient for analyzing your local photo albums/images and generating metadata, descriptions, and captions to help better organize your library.
reply
Any pointers on some local VLMs to start with?
reply
The easiest way to get started is probably to use something like Ollama with the `qwen3-vl:8b` 4-bit quantized model [1].

It's a good balance between accuracy and memory, though in my experience it's slower than older model architectures such as Llava. Just be aware that Qwen-VL tends to be a bit verbose [2], and you can't really control that reliably with token limits; it just cuts off abruptly. You can ask it to be more concise, but that's hit or miss.

What I often end up doing (and I admit it's a bit ridiculous) is letting Qwen-VL generate its full detailed output, then passing that to a different LLM to summarize.

- [1] https://ollama.com/library/qwen3-vl:8b

- [2] https://mordenstar.com/other/vlm-xkcd
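The two-step trick (verbose VLM caption, then a second model to condense it) can be sketched against Ollama's default HTTP API. The model names and prompts here are just examples, not anything canonical:

```python
import base64
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_caption_request(image_bytes: bytes, model: str = "qwen3-vl:8b") -> dict:
    """Payload asking the VLM for a (likely verbose) image description."""
    return {
        "model": model,
        "prompt": "Describe this photo for a photo-library caption.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

def build_summary_request(verbose_caption: str, model: str = "llama3.2:3b") -> dict:
    """Payload asking a small text model to condense the VLM's output."""
    return {
        "model": model,
        "prompt": "Summarize this image description in one sentence:\n\n"
                  + verbose_caption,
        "stream": False,
    }

def call_ollama(payload: dict) -> str:
    """POST a payload to a locally running Ollama server, return the text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Usage would be `call_ollama(build_summary_request(call_ollama(build_caption_request(img))))`, i.e. two round trips per image, which is the "a bit ridiculous" part.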

reply
You could try Gemma4 :D
reply
For me, receipt scanning, plus tagging documents and parts of speech in my personal notes. It's a lot of manual labour, and I'd like to automate it if possible.
reply
Have you tried paperless-ngx, a tried-and-tested open source solution that's been filling this niche successfully for years now?
reply
Adding to the Q: any good small open-source model with high accuracy at reading/extracting tables and/or PDFs with more uncommon layouts?
reply
I use local models for autocomplete in simple coding tasks, CLI autocomplete, formatting, a Grammarly replacement, translation (it/de/fr -> en), OCR, simple web research, dataset tagging, file sorting, email sorting, and validating configs or creating boilerplate for well-known tools - basically anything I would have used the old OpenAI mini models for.
reply
I would personally be much more interested in using LLMs if I didn't need to depend on an internet connection or spend money on tokens.
reply