upvote
Geez there should be a big warning on the tin about this. They’re so neatly integrated with copilot that I assumed (and told others) that they had all the privacy guarantees of copilot :(
reply
Also, even when using local models in Ollama or LM Studio, prompts are proxied via their domain, so never submit anything sensitive, even with a fully local setup.

https://old.reddit.com/r/LocalLLaMA/comments/1rv690j/opencod...

They also don't let you run arbitrary local models, only ones whitelisted by another third party: https://github.com/anomalyco/opencode/issues/4232

reply
To be clear, that seems to be about the web UI only; the TUI doesn't seem affected. I haven't fully investigated this myself, but when I run opencode (1.2.27-a6ef9e9-dirty) behind mitmproxy with LM Studio as the backend, starting opencode and executing a prompt produces only two requests, both to my LM Studio instance, and both normal inference requests (one for the chat itself, one for generating the title).
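If you want to repeat that check yourself, here is a rough sketch of the setup. The port and the assumption that opencode honors the standard proxy environment variables are mine, not from the opencode docs; adjust as needed (and install mitmproxy's CA cert if you want to decrypt TLS flows):

```shell
# Terminal 1 (not run here): mitmproxy --listen-port 8080
# Terminal 2: route the app's traffic through it via the standard proxy env vars
export HTTP_PROXY=http://127.0.0.1:8080
export HTTPS_PROXY=http://127.0.0.1:8080
echo "proxying via $HTTPS_PROXY"
# opencode   # then watch mitmproxy's flow list for any hosts besides your local backend
```

Any request that shows up in mitmproxy going somewhere other than your LM Studio/Ollama endpoint is exactly the kind of traffic this thread is worried about.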

Everything you read on the internet seems exaggerated today. That's especially true for reddit, and especially especially true for r/LocalLLaMA, which is a shadow of its former self. Today it's mostly sockpuppets pushing various tools and models, and other sockpuppets spreading misinformation about their competitors' tools/models.

reply
Seems like an anti-pattern to me to run AI models without the user's consent.
reply
? The whole idea of a coding assistant is to send all your interactions with the program to the LLM.
reply
To the provider you select in the UI, I agree. But OpenCode automatically sends prompts to their free "Zen" proxy, even without choosing it in the UI.

Imagine someone using it at work, where they are only allowed to use a GitHub Copilot Business subscription (which is supported in OpenCode). Now they have sent proprietary code to a third party, and don't even know they're doing it.

reply
This is exactly my situation: now I'm wondering what I might have leaked to god knows who via Grok. I was hyped about opencode, but now I'm considering alternatives. A huge red flag… irresponsible at best?
reply
My understanding is that it's best to set a whitelist in enabled_providers, which prevents it from using providers you didn't anticipate.
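For anyone looking for a concrete starting point, a minimal sketch of what that might look like in opencode's JSON config. The `enabled_providers` key is the option named above; the file location, provider name, and surrounding structure are my assumptions, so check the docs for your version:

```json
{
  "enabled_providers": ["lmstudio"]
}
```

With a whitelist like that in place, anything outside the list (including the built-in "Zen" proxy) should be refused rather than silently used; it's still worth verifying with a proxy before trusting it with sensitive code.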
reply
Are you using Grok for the coding? Because I have Copilot connected and I can see the request to Copilot for the summaries - with no "small model" setting even visible in my settings.
reply