I'm still paying for the $10 GH Copilot plan, but I don't use it because:

  - context is aggressively trimmed compared to CC, obviously for cost-saving reasons, so performance is worse
  - the request pricing model forces me to adjust how I work

These alone make it not worth the $60/month savings for me.

I like the VSCode integration, and the MCP/LSP usage sometimes surprised me compared to the dumb grep from CC. Ironically, VSCode is becoming my terminal emulator of choice for all the CLI agents - SSH/container access, the automatic port mapping, etc. - it's more convenient than tmux sessions for me. So Copilot would be ideal for me, but it's tuned to be a budget/broad-scope tool rather than a tool for professionals who would pay to get work done.

reply
You can use your GH subscription with a different harness. I'm using opencode with it, which turns GH into a pure token provider. The orchestration (compacting, etc.) is left to the harness.

That makes it very good value for money, as far as I'm concerned.

reply
But you still get charged per turn, right? I don't like that because it impacts my workflow. When I last used it, I would easily burn through the $10 plan in two days just by iterating on plans interactively.
reply
Honestly, I'm not sure; I'm on my company's plan. I see a progress bar vaguely filling up, but I have no idea of the costs or billing under the hood.
reply
But you still get the reduced context window.
reply
Disagree entirely.

GHCP is at least transparent about the pricing: hitting enter on a prompt = one request. CC/Codex use an opaque quota scheme, where you never really know if a request will be 1%, 2%, or 10% of your hourly max, let alone your weekly max.

I've never seen much difference from the context ostensibly being shorter in GHCP. All of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that; CC/Codex do it frequently.

reply
>I've never seen much difference from the context ostensibly being shorter in GHCP. All of the models (from any provider) lose the thread well before their window is full, and aggressive autocompaction seems to be a pretty standard way to help with that; CC/Codex do it frequently.

Then we've had wildly different results. Running CC and GH Copilot with Opus 4.6 on the same task, the results out of CC were just better; likewise for Codex with GPT 5.4. I have to assume it's the aggressive context compaction/limited context loading, because tracking what Copilot does, it seems to read far less context and then misses things other agents pick up automatically.

reply
Is your source code worth only $40 for them to train their models on?

https://www.techradar.com/pro/bad-news-skeptics-github-says-...

reply
Considering how much data they already have from everything that's on GitHub, I doubt you'd make a dent by boycotting their AI product.
reply
And don't you think they'll soon realize it's also pretty good at "doing penetration testing" on your company, given that it's already trained on your company's source code?
reply
It's already more than "pretty good": https://www.anthropic.com/glasswing
reply
Google's $20/mo plan has great usage limits for Claude Opus. Last time I used it, around Feb, it felt basically unlimited.
reply
Agreed, but that was Feb; not now. I cancelled mine on the 7th. Claude Opus via Gemini gives you just a few prompts, then it locks you out for another week.
reply
So, you basically tried it a century ago...
reply