Just use claude code directly with a pro plan instead of copilot for roughly the same cost.

Oh wait, nevermind.

https://news.ycombinator.com/item?id=47855565

reply
The Anthropic Pro plan cost double and gave you, I don't know, a tenth the usage, depending on how efficiently you used your Copilot requests, and no access to a large set of models, including GPT, Gemini, and the free ones.
reply
> Just use claude code directly with a pro plan

Usage limits are/were higher in Copilot. They also charge per prompt, not per token.

reply
Yes, I loved my $10-a-month personal subscription for light coding tasks; it worked great. I'd use Claude Code Max for heavy lifting, but the $10-a-month Copilot plan kept me off Cursor for the IDE-centric things.
reply
Me too. Claude isn't the best option when all you do is ask "what's this error message?" every 10 minutes or so.
reply
Well, they charge per prompt, but with usage limits it is a mix of tokens and prompts. If a model's prompt multiplier is higher, its token usage counts for more too, so you hit the limit sooner.

It is basically token-based pricing, but with a limit on prompts as well: you can't just randomly ask the models questions, you have to optimize so they do the most work for, say, hours at a time without you replying — or ask them to use the question tool.
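
The per-prompt accounting described above can be sketched roughly like this. The multiplier values and model names below are made up for illustration; they are not Copilot's actual rates, just a demonstration of how a flat per-prompt charge scaled by a model multiplier drains a fixed monthly allowance:

```python
# Rough sketch of premium-request accounting with per-model multipliers.
# Names and multiplier values are hypothetical, not actual Copilot pricing.
MODEL_MULTIPLIERS = {
    "base-model": 0.0,      # hypothetical included model, costs nothing
    "standard-model": 1.0,  # hypothetical standard premium model
    "frontier-model": 10.0, # hypothetical high-multiplier frontier model
}

def requests_used(prompts: list[str]) -> float:
    """Each prompt costs one request times its model's multiplier,
    regardless of how many tokens the model consumes answering it."""
    return sum(MODEL_MULTIPLIERS[model] for model in prompts)

# Ten quick questions to the high-multiplier model burn the same
# allowance as a hundred to the standard one:
ten_frontier = requests_used(["frontier-model"] * 10)
hundred_standard = requests_used(["standard-model"] * 100)
```

Under this scheme, asking a frontier model "what's this error message?" every few minutes is the worst-case usage pattern, which is why batching work into fewer, larger prompts stretches the limit.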

reply
Yeah this was me. I just got a message that I hit my limit and now I am looking into what it takes to run Qwen on local hardware.
reply
A suggestion: Don't invest in any new hardware to run an LLM locally until you've tried the model for a while through OpenRouter.

The Qwen models are cool, but if you're coming from Opus you will be somewhere between mildly and very disappointed, depending on the complexity of your work.

reply
OpenRouter-served models are often more heavily quantized than what you can run locally, or try for yourself on generic cloud infrastructure.
reply
Been having a ton of fun with Copilot CLI pointed at a local Qwen 3.6. If you're willing to be more specific in your prompts, delegating from GPT-5.4 or Opus down to a local Qwen has been great so far.
reply
I have to say, this was how I used GitHub Copilot in VS Code. I used Opus 4.6 for most tasks. I'm not sure I want to keep my Copilot plan now.
reply
Opus 4.6 is no longer available, and Opus 4.7 chews through monthly limits with reckless abandon. The value-add of GitHub Copilot is basically gone (at least for individuals on the Pro or Pro+ plans).
reply
Good, I hope Microsoft lost a lot of money in the deal.
reply
From a friend at GitHub: they've been burning so much money because of Opus.
reply