They are getting better, but that doesn't mean they're good.
We have a magical pseudo-thinking machine that we can run locally, completely under our control, and instead the goal posts have moved to "but it's not as fast as the proprietary cloud".
It's more cost-effective to pay $20 to $100 a month for a Claude subscription than to buy a 512 GB Mac Studio for $10K. We won't even discuss the cost of an Nvidia rig.
I mess around with local AI all the time. It's a fun hobby, but the quality difference is still night and day.
1. It costs $100K in hardware to run Kimi 2.5 for a single session at a decent tok/s, and it's still not capable of anything serious.
2. I want whatever you're smoking if you think anyone is going to spend billions training models capable of outcompeting theirs, make them affordable to run, and then open-source them.
But as it stands right now, the most useful LLMs are hosted by companies that are legally obligated to hand over your data if the US government decides it wants it. That's unacceptable.