You can use your GH subscription with a different harness. I'm using opencode with it; that turns GH Copilot into a pure token provider, and the orchestration (compacting, etc.) is left to the harness.

That makes it very good value for money, as far as I'm concerned.
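For reference, the setup is mostly a config concern. A rough sketch of what pointing opencode at Copilot can look like (the provider name, model id, and schema here are illustrative from memory, not authoritative; check opencode's own docs and its auth login flow for the exact details):

```json
{
  "model": "github-copilot/<model-id>"
}
```

Once authenticated with your GitHub account, the harness talks to Copilot as a plain model endpoint and does its own context management on top.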

reply
But you still get charged per turn, right? I don't like that because it impacts my workflow: when I was last using it, I would easily burn through the $10 plan in two days just by iterating on plans interactively.
reply
Honestly I'm not sure; I'm on my company's plan. I just see a progress bar vaguely filling up, with no idea of the costs or billing under the hood.
reply
But you still get the reduced context window.
reply
Disagree entirely.

GHCP at least is transparent about the pricing: hitting enter on a prompt = one request. CC/Codex use an opaque quota scheme where you never really know whether a request will be 1%, 2%, or 10% of your hourly max, let alone your weekly max.

I've never seen much difference from the context ostensibly being shorter in GHCP. All of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that; CC/Codex do it frequently.
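To make "autocompaction" concrete: the basic idea is just a token-budget check, after which older messages get collapsed into a summary while recent turns stay verbatim. A minimal sketch (not any agent's actual implementation; the real ones call the model itself to produce the summary):

```python
def autocompact(messages, token_counts, limit, keep_recent=4, threshold=0.8):
    """Compact the transcript when it exceeds `threshold * limit` tokens.

    messages:     list of message strings, oldest first
    token_counts: token count per message, same order
    """
    total = sum(token_counts)
    if total <= threshold * limit:
        return messages  # still comfortably within the window, leave as-is

    old = messages[:-keep_recent]      # everything except the most recent turns
    recent = messages[-keep_recent:]   # kept verbatim so the model has fresh detail
    # A real agent would ask the model to summarize `old`; stubbed here.
    summary = f"[summary of {len(old)} earlier messages]"
    return [summary] + recent
```

The tuning knobs (`threshold`, `keep_recent`, and how lossy the summary is) are exactly where harnesses differ, and plausibly where the "loses the thread" behavior people observe comes from.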

reply
>I've never seen much difference from the context ostensibly being shorter in GHCP. All of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that; CC/Codex do it frequently.

Then we've had wildly different results. Running CC and GH Copilot with Opus 4.6 on the same task, the results out of CC were just better; likewise for Codex with GPT 5.4. I have to assume it's the aggressive context compaction / limited context loading: tracking what Copilot does, it seems to read far less context and then misses things the other agents pick up automatically.

reply