It's a question of price, quality, and other factors.

If my company pays for it, I don't care.

If I have a hobby project where it's about turning an idea from my spare time into what I want, I'm happy to pay $20. I just did something like this over the weekend in a few hours. I really enjoy having small tools built as a single HTML page with JavaScript and JSON as the data store (I ask it to also add an import/export feature, so I can literally edit the data in the app, then save it and commit it).
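The export/import part of that pattern can be sketched roughly like this; the function names and data shape here are illustrative, not taken from the actual tool:

```javascript
// Sketch of a single-page tool's JSON data store with export/import,
// so the data file can be edited in the app, saved, and committed to git.

// Serialize the in-memory store to a pretty-printed JSON string.
// In a browser you would wrap this in a Blob and trigger a download link.
function exportStore(store) {
  return JSON.stringify(store, null, 2);
}

// Parse an exported JSON string back into the in-memory store.
// In a browser this would read from an <input type="file"> via FileReader.
function importStore(text) {
  return JSON.parse(text);
}

// Round trip: export, then import, yields the same data.
const store = { todos: [{ id: 1, text: "buy milk", done: false }] };
const restored = importStore(exportStore(store));
console.log(restored.todos[0].text); // → "buy milk"
```

Because the store is plain JSON, the exported file diffs cleanly in version control.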

As for the main agent I'm waiting for, the one that will read my emails and have access to my systems? I would love a local setup, but just buying the hardware today still costs a grand and a lot of energy. It's still significantly cheaper to just use a subscription.

Not sure what you mean regarding speed, though; they are super fast. I don't have a setup at home that can run 200-300 tps.

reply
I don't use local models, I just use the APIs of cloud providers (e.g. Fireworks, Together, Friendli, Novita, even Cerebras or Groq).

You can get subscriptions to use the APIs from Synthetic, Ollama, or Fireworks.

reply
I might be missing it, but does Fireworks actually have a subscription? All I saw was serverless (per-token) and GPU $/hr pricing.

And since I saw a few other comments talking about these: do you have any preference among the cloud providers with ZDR? I look every once in a while, and I want to switch to completely open models and/or at least ZDR so I can start doing things like summarizing email. I'm thinking I can probably split my use between some sort of cloud API and Claude Code for heavier tasks.

reply
What's the big difference then? You can get a lot of tokens for $20, and not everything I'm doing is a state secret.

But if I were to use the API route, it would probably be OpenRouter. Isn't that easier for switching around, and doesn't it also offer zero-knowledge safety?

reply
I think that privacy is good for wellbeing. Maybe that's a dying point of view.
reply
It is for sure, but running your own email is so time-intensive that I gave it up 10 years ago.

I then decided to trust one company with most of my stuff.

Also, as I said, I would use something different for my personal stuff. But I'm waiting for the right hardware, etc.

reply
You are not crazy, you are just waking up from the SaaS delusion. We somehow allowed the industry to convince us that paying $20/month to rent volatile compute, have our proprietary workflows surveilled, and get throttled mid-thought is an 'upgrade'. The pendulum is swinging violently back to local-native tools. Deterministic, privately owned, unmetered—buying your execution layer instead of renting it is the only way to build actual leverage.
reply
I'm quite aware of my dependency, and I've been balancing it in and out regularly over the last 10 years.

Owning is expensive. Not owning is also expensive.

Energy in Germany is at 35 cents/kWh, and it skyrocketed to 60 when we had the Russian problem.

I'm planning to buy a farm and add cheap energy, but this investment will still take a bit of time. Until then, space is scarce.

reply
I don't use local LLMs. It's mostly the closed-source subscriptions that are not private; it really is a choice.

There are many cloud providers offering zero-data-retention LLM APIs, and some even offer cryptographic attestation.

They are not throttled; you can get an agreed rate limit.

reply
Would you mind naming some of your favorite providers?
reply
No one was convinced to spend money to do the things you're saying. That's just disingenuous. People rent models because (a) it moves compute elsewhere and (b) providers offer higher-quality models.
reply
(c) It's turnkey, instead of requiring months or years of custom development and ongoing maintenance.
reply
If I could buy this to run it locally, what's that hardware even look like? What model would I even run on the hardware? What framework would I need to have it do the things Claude Code can do?
reply