> It's a shame this remains a market largely overlooked by Western players, Mistral being the only one moving in that direction.

I said in a recent comment that Mistral is the only one of the current players that appears to be moving toward a sustainable business; all the other AI companies are simply looking for a big payday, not to operate sustainably.

reply
Meta was moving in that direction with the Llama series as well; they just didn't manage to keep upping their game after and with Llama 4.
reply
I play with the small open weight models and I disagree. They are fun, but they are not in the same class as hosted models running on big hardware.

If an organization forbids external models, it should invest in the hardware to run bigger open models. The small models are a waste of time for serious work when more capable models are available.

reply
Most organizations aren't going to need the wide breadth of capabilities of the frontier models. They're risk averse, and LLMs are non-deterministic, so use cases are typically scoped tightly to tasks involving nuanced classification that small models can easily handle, even if it takes a little fine-tuning on your organization's data.
reply
I agree with the sentiment, but these models aren't suited for that. You can run much bigger models on prem with ~$100k of hardware, and those can actually be useful in real-world tasks. These small models are fun to play with, but are nowhere close to meeting the needs of a dev shop working in healthcare or banking, sadly.
reply
I love the idea of building a competitor to open weight models, but damn is this an expensive game to play
reply
It is, but think about how advances in computing technology have made that power available over time. A Raspberry Pi is almost 5 times more powerful than the Cray-1.

Granted, the next couple of years are going to suck because of the AI Component Drought, but progress marches on, and running today's frontier models will become affordable to mere mortals in time. We've obviously hit the wall with Moore's law and other factors, but this will not always be out of reach.

reply
How true is this? How does a regulated industry confirm that the model itself wasn't trained with malicious intent?
reply
Why would it matter if the model is trained with malicious intent? It's a pure function. The harness controls security policies.
reply
Much like a developer can insert a backdoor disguised as a "bug", so can an LLM that was trained to do it.

One way you could probably do it is to identify a commonly used library that can be misused in a way that opens a time-of-check to time-of-use (TOCTOU) window, then train the LLM to consistently use the library in that incorrect way.

reply