Since we're talking about GitHub Copilot, I'll lodge my biggest complaint about it here! The context window is stuck at 128k for all models (except maybe Codex): https://github.com/microsoft/vscode/issues/264153 and https://github.com/anomalyco/opencode/issues/5993

This absolutely sucks, especially since tool calling sometimes burns through tokens very quickly. It feels like a not-so-gentle nudge toward using their 'official' tooling (read: VS Code), even though there was a recent announcement about how GHCP works with opencode: https://github.blog/changelog/2026-01-16-github-copilot-now-...

No mention of it being severely gimped by the context limit in that press release, of course (tbf, why would they lol).

That said, if you're coming from aider, 128K tokens is a lot, and the same goes for web chat... not a total killer, but I wouldn't spend my money on this particular service when there are better options!

reply
This is the first time I've read about this. Thank you. I never noticed because OpenCode just shows you the context window usage as a %.
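Easy to see how that gets missed. A percentage display hides the absolute ceiling entirely; a minimal sketch of the arithmetic (the 128,000-token limit is from the thread above, the helper name and sample token counts are purely illustrative):

```python
# Hypothetical sketch of how a client could report context usage as a
# percentage rather than a raw token count. The 128,000-token window is
# the limit discussed above; the token figures are made up for illustration.

CONTEXT_WINDOW = 128_000  # tokens

def context_usage_percent(tokens_used: int, window: int = CONTEXT_WINDOW) -> float:
    """Return context usage as a percentage of the window, capped at 100%."""
    return min(tokens_used / window, 1.0) * 100

# 75% of a 128k window and 75% of a 1M window render identically in the UI,
# which is why a small ceiling can go unnoticed.
print(f"{context_usage_percent(96_000):.0f}%")
```

The point being: unless the UI also surfaces the absolute window size, you'd never know whether that percentage is climbing against 128k or something far larger.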
reply
I'm currently using GitHub Copilot via Zed, and tbh I have no idea which of these this relates to. Perhaps a combination of

> GitHub Copilot is a service

and maybe, the api behind

> GitHub Copilot is VSCode extension

???

What an absolute mess.

reply
You want a mess?

Put together a nice and clean price list for your friends in the purchasing department.

I dare you.

reply
Might be a good time to start a Copilot Copilot company that manages all your copilots.
reply