To be clear, that seems to concern the webui only; the TUI doesn't appear to be affected. I haven't fully investigated this myself, but when I run opencode (1.2.27-a6ef9e9-dirty) through mitmproxy with LM Studio as the backend, starting opencode and executing a prompt produces only two requests, both to my LM Studio instance, and both normal inference requests (one for the chat itself, one for generating the title).
Everything you read on the internet seems exaggerated these days. That's especially true for Reddit, and doubly true for r/LocalLLaMA, which is a shadow of its former self. Today it's mostly sockpuppets pushing various tools and models, and other sockpuppets trying to spread misinformation about their competitors' tools/models.