Cynics on HN are quick to dismiss AI service wrappers (and many of them are in fact overblown and not worth their own code). But writing a genuinely good harness, with lots of context engineering and solid tool integration, is in fact not that easy. The biggest issue is that model providers also see what the community likes and ship their own offerings tailored to their own models, potentially all the way down to the training stage. So even if you have the best harness for something today, unless you are also a frontier LLM provider, there's zero guarantee you'll still be relevant in the future. More likely the opposite.
reply
> But writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy.

true, but it's not worth $60 billion fucking quid.

reply
it's insanity.

the whole thing is driven by irrational stock market investors who NEED AI to be the thing that saves the world.

they're betting everything on it.

reply
deleted
reply
There are plenty of harder things in the world and very few are worth 60B.
reply
> (...) writing a genuinely good harness with lots of context engineering and solid tool integration is in fact not that easy.

This. They are after the harness engineering experience of the Cursor people; I'd assume they want to absorb all that into Grok's offerings.

The value of, and the room for innovation on, the harness side seems to be underestimated.

Oddly, the harness also affects model training: GLM/Z.ai, for example, train (I suspect) their models on the actual Claude Code harness, so the choices made by harness engineers feed back into the model. Kimi/Moonshot and OpenAI each build their own harness. Alibaba uses Gemini.

Very interesting dynamics.

reply
Isn't Codex TUI available for free though? Besides others like Pi and OpenCode of course.
reply
It can use local/OSS models, but it doesn't make that simple to do (easiest with ollama), and it's not clear what else you 'lose' by making that choice.

If you had a really good (big) local model, maybe it's an option, but on the more common smaller (<32b) models, it will have similar problems in looping, losing context, etc. in my experience.

It's a nice TUI, but the ecosystem is what makes it good.
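For anyone curious what "easiest with ollama" looks like in practice, here's a minimal sketch. It assumes the Codex CLI's TOML provider config and Ollama's OpenAI-compatible endpoint on its default port; the exact keys and the model tag are illustrative, so check the current Codex docs before relying on them:

```toml
# ~/.codex/config.toml — point Codex at a local Ollama server
# (assumes Ollama is running and serving its OpenAI-compatible API)
model_provider = "ollama"
model = "qwen2.5-coder:32b"   # any model tag you've pulled locally

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

Which is roughly the point: the wiring is a few lines, but everything tuned around the hosted models (prompts, tool behavior, context handling) is what you're opting out of.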

reply
Sure, but is it worth 60 billion?
reply
Definitely not if someone frames it as "a shitty IDE with some plugins".

But if someone frames it as "engineering talent that knows how to make LLMs even better at software development than the competition", it might be.

I can see from my own work that it actually works, so it's not like Devin, which was basically a scam valued at $10 billion.

In that framing, yeah, it feels quite possible it's worth $60 billion.

reply
Their annualized revenue run rate is on track to surpass $6 billion by the end of 2026, so it's not ridiculous for them to be valued at $60 billion at some point. Also worth noting that if they do get access to SpaceX compute, they could start pretraining their own model. Composer is good, but it's built on top of Kimi 2.5.
reply
SpaceX thinks so.
reply
SpaceX the space rocket and internet satellite company? Or SpaceX the Elon Musk piggy bank used to buy up all his financial misadventures?
reply
You mean Musk thinks xAI needs to be seen making AI investments to keep getting outside funding.
reply