Speaking of taking my money, what's the economic model for a company like this? They've published a fair amount about their architecture - enough that I imagine frontier labs could implement it. Patents? Trade secrets? It's hard for me to understand how you'd beat the training compute and know-how at Anthropic/GOOG/oAI/Meta without some sort of legal protection.
I can't wait to see what these model architectures do with 30-40% lower latency and more model intelligence. Very appealing. For reference, these look to be roughly 1/10 the size of the Opus 4.7 / GPT 5.x series -- 275B total parameters, 12B active. So there's lots of room to add intelligence, and lots of hope for lower latency.
i think the real ones know this is the tip of the iceberg. hparam tuning, data recipes, data collection, custom kernels, rl/eval infra -- all immensely deep topics that condense multiple decades of phd lifetimes into producing SOTA performance (in both senses of the word) like this.
i would also calibrate what you are impressed by. simply waiting is a posttrain thing -- the fact that gemini and oai have not prioritized it is not something you should overindex on too hard. what they showed with full duplex is technically far, far harder to achieve.
oh, honey.
> Time-Aligned Micro-Turns. The interaction model works with micro-turns continuously interleaving the processing of 200ms worth of input and generation of 200ms worth of output. Rather than consuming a complete user-turn and generating a complete response, both input and output tokens are treated as streams. Working with 200ms chunks of these streams enables near real-time concurrency of multiple input and output modalities.
That's probably the main thing that distinguishes it from the multimodal models from other frontier labs as far as I can tell.
We can do these things today, but they're "bolted on" as afterthoughts. Yet they work remarkably well. I wonder how well they'd work if trained in this combined regime, from the ground up.
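The quoted micro-turn scheme can be sketched in a few lines. This is purely a toy illustration of the interleaving idea under my own assumptions (the function names, chunk representation, and the way the model re-reads its own output are all made up, not the lab's actual design):

```python
# Toy sketch of time-aligned micro-turns: instead of consuming a full
# user turn and then generating a full reply, we alternate between
# ingesting ~200ms of input and emitting ~200ms of output each step.
# All names here are hypothetical illustrations.
from collections import deque

CHUNK_MS = 200  # each micro-turn covers roughly 200 ms of stream


def micro_turn_loop(input_stream, model, num_turns):
    """Interleave input consumption and output generation per micro-turn.

    input_stream: iterator yielding one ~200ms chunk of input tokens
                  per step (may run dry while the model keeps talking).
    model:        callable taking the running context and returning one
                  ~200ms chunk of output tokens.
    """
    context = deque()   # shared timeline of input and output chunks
    outputs = []
    for _ in range(num_turns):
        in_chunk = next(input_stream, None)  # 200 ms of input, if any
        if in_chunk is not None:
            context.append(in_chunk)
        out_chunk = model(list(context))     # 200 ms of output
        context.append(out_chunk)            # the model "hears" itself
        outputs.append(out_chunk)
    return outputs
```

The point of the sketch is just the control flow: input and output live on one time-aligned stream, so the model can react mid-utterance (interrupt, back-channel) instead of waiting for turn boundaries.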
Local models will catch up soon.
is it separate batches of special "skills" that are added post training? how can they guarantee the models won't eventually lose a skill?
Presumably it will be possible to adjust that behavior with settings, the system prompt, etc. Not that most users will make such adjustments, though.
I'm currently teaching a class on AI-related issues at a university in Tokyo. Many of the students were surprised when I showed them that they can change the response behavior of chatbots to make them more or less verbose, sycophantic, etc. It shifted the direction of our discussions on the possible impacts of AI on the people who use it.
Next gemini releases coming next week though, we will see how that matches up!