I've been relying primarily on deepseek-v4-flash for 90% of my work. It sips tokens. That model will run on 128 GB — not a cheap configuration for a consumer, but within the budget of a developer relying on it for work.

I've only been using kimi 2.5 and deepseek pro for reviewing PRs for security issues. Less than 10% of my workflow requires a full-powered frontier model.

I think the issue is overblown by people who think claude code is a good harness and use opus for everything. opencode is objectively better: it's much more verbose about what it's doing, you have more control when offloading to subagents with targeted context (crucial for working through larger jobs), and I can swap between codex and open-weight models.

And they will.