That's actually how vision language models already work, pretty much.
reply
And there's a reason nobody uses them for face recognition.

Vision-language models are an incredible achievement in generality and usability, but they pay a hefty price in fidelity and speed.

reply
Huh? The images are tokenized the same way language is, and everything is fed into one single model, not multiple smaller expert models.

The image gets split into small patches (e.g. 4x4 pixels) and each patch is assigned a token, similarly to how text is broken up into tokens. Then the whole sequence is fed into a single model.
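A toy sketch of that patchification step in NumPy (not any specific model's code; real ViT-style models typically use larger patches, e.g. 14x14 or 16x16, followed by a learned linear projection rather than raw pixel values):

```python
import numpy as np

PATCH = 4  # hypothetical 4x4 patch size from the comment above

def patchify(image: np.ndarray, patch: int = PATCH) -> np.ndarray:
    """Split an (H, W, C) image into a sequence of flattened patches,
    one row per patch -- the 'tokens' the model actually sees."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    return (image
            .reshape(h // patch, patch, w // patch, patch, c)
            .swapaxes(1, 2)                    # (H/p, W/p, p, p, C)
            .reshape(-1, patch * patch * c))   # flatten each patch

img = np.zeros((32, 32, 3), dtype=np.uint8)
tokens = patchify(img)
print(tokens.shape)  # (64, 48): 8x8 patches, each 4*4*3 values
```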

reply
Yes, I'm saying

> Imagine face recognition to work like a text chat, where the PC gets the frame from the camera and writes in the chat: "Who's that? Here's the RGB888 image in hex: ...".

that's pretty much how it works.

reply
But that isn’t a specialized model like the grandparent claimed, but rather a single, multi-modal model.
reply
Yes, the "imagine" was showcasing the opposite of a specialized model to call it a bad idea.
reply
Do you know that MoE is a thing?
reply
The experts in MoEs aren't specialized in any meaningful task sense. At the level of what we'd think of as tasks, experts are selected essentially arbitrarily, per token and per layer.
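A minimal sketch of what "per token" routing means, assuming a standard top-k gating scheme (the expert count, top-k, and dimensions here are arbitrary, not from any particular model): a gate scores every expert for every token, and each token is dispatched to its highest-scoring k experts, with no human-meaningful task label involved.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, D = 8, 2, 16

def route(tokens: np.ndarray, gate_w: np.ndarray, k: int = TOP_K):
    """For each token, return the indices of its top-k experts and
    softmax weights over those k choices."""
    logits = tokens @ gate_w                    # (n_tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]  # expert choice, per token
    chosen = np.take_along_axis(logits, topk, axis=-1)
    weights = np.exp(chosen) / np.exp(chosen).sum(-1, keepdims=True)
    return topk, weights

tokens = rng.normal(size=(5, D))       # 5 tokens in one layer
gate_w = rng.normal(size=(D, N_EXPERTS))
idx, w = route(tokens, gate_w)
print(idx.shape, w.shape)  # (5, 2) (5, 2): 2 experts chosen per token
```

Each MoE layer has its own gate, so the same token can land on entirely different experts in different layers, which is why the selection doesn't line up with anything like "the legal expert".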
reply
It's unsupervised, yes, but "unspecialized in any meaningful task sense" is incorrect; specialization is the whole point. It's just not specialization in the sense of "this is a legal expert, this is a software developer".
reply
Optimal expert separation depends on the goal and can be pretty arbitrary; for example, DeepSeek v4 separates them more or less by domain, if I remember correctly.
reply