It might settle into a situation where cutting-edge LLMs are a service, while older and smaller LLMs are self-hosted. So the risk isn't being cut off, it's being degraded.
reply
I hope you're right. I played around with a bunch of AI stuff recently and came to roughly that conclusion: use local AI for mission-critical stuff, if you're confident in it, and use the SOTA models for review.

Tap the latest general knowledge by asking "could this be improved?", but make the improvements with local systems and models. The obvious problem then becomes finding new data to keep training the AIs. In my opinion, there's no way their plan doesn't involve stealing from everyone to keep training, so is it really going to be safe to use the cutting-edge models at all?
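For what it's worth, the split described above could be sketched as a simple routing rule (a pure illustration; `route` and the task fields are hypothetical names, not any real API):

```python
# Hypothetical sketch of the workflow above: mission-critical or
# sensitive work stays on a local model; open-ended "could this be
# improved?" review questions go to a hosted SOTA model.

def route(task: dict) -> str:
    """Return which backend a task should go to: 'local' or 'cloud'."""
    # Anything touching private data or critical changes stays local.
    if task.get("sensitive") or task.get("mission_critical"):
        return "local"
    # Review-style questions can tap the hosted frontier model.
    return "cloud"

tasks = [
    {"name": "refactor billing code", "mission_critical": True},
    {"name": "review architecture notes", "sensitive": False},
]
for t in tasks:
    print(t["name"], "->", route(t))
```

The point of the rule is that the degradable, rent-seeking part of the stack only ever sees questions, never the work itself.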

reply
If they manage to build good memory systems, people will stop keeping personal docs and rely on the "AI" for everything. Imagine 20 years from now, when people don't even have a copy of a recipe for baking bread, and then you'll see what the goal is.
reply
And then, in the future, if you try to build something to reverse the situation, your coding LLM becomes stupid and your psychologist LLM recommends you some blue pills.
reply