Local/open LLMs are a thing though. You can build a server for hosting decent-sized (100-200B parameter) models at home for a few k$. They may not be Opus-level, but hopefully we'll get something matching current SOTA that we can run locally before the megacorps get too greedy.
Alternatively, you could split the HW cost with some other people and run larger models (like Kimi-K2.5 at 1.1T params).
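Rough back-of-envelope on why pooling helps (a sketch; figures assume weight-only quantization and ignore KV cache and activation overhead, which add more on top):

```python
def weight_memory_gb(params_billion, bits_per_weight):
    """Approximate memory needed just to hold the model weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

# A 120B model at 4-bit quantization: ~60 GB of weights,
# i.e. within reach of a home server with a couple of GPUs
# or a large unified-memory machine.
print(round(weight_memory_gb(120, 4)))   # 60

# A 1.1T-param model at 4-bit: ~550 GB of weights,
# which is why you'd want to share the hardware cost.
print(round(weight_memory_gb(1100, 4)))  # 550
```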