Those companies at least had somewhat of a moat.

As I see it, the only thing close to a moat is Claude Code for Anthropic, and since it is a big ol' fucking mess that is a) apparently now beyond the ability of any current SOTA LLM to fix, and b) understood by absolutely no human, I'd say it's not much of a moat. The other agents will catch up sooner rather than later.

The other providers? I don't see a moat. We jump ship at the drop of a hat.

reply
Yes, but how many companies succeed without any kind of moat, or without having destroyed the existing incumbents?

I'm still running local LLMs and finding perfectly acceptable code gen.

reply
I think we'll end up with closed models that are fast and near-perfect but expensive, and a lot of cheap open-source models that are good enough for most people.
reply
No moat --> It's basically OpenAI, Google, and Anthropic left at the SOTA. Maybe soon, we'll have 2 left.
reply
> No moat --> It's basically OpenAI, Google, and Anthropic left at the SOTA. Maybe soon, we'll have 2 left.

Yeah, but do we even need them? Non-SOTA is still pretty damn good; remember last year, when the end-2025 models were the SOTA? How many people were boasting 10x - 100x productivity increases using them?

So the non-SOTA models let you do 10 hours of work in 1 hour. Many people would be fine with that. Fine enough that they aren't going to spring for a SOTA model that cuts the 10 hours to 0.5 hours; they'll just use the cheap models to cut the 10 hours down to 1 hour.

reply
Which ones, if you don't mind sharing?
reply