Right - setting up LM Studio is not hard. But how do I connect LM Studio to Copilot, or set up an agent?
reply
I tried the Zed editor and it picked up Ollama with almost no fiddling, so I've been able to run Qwen3.5:9B just by tweaking the Ollama settings (which had a few dumb defaults, I thought: assuming I wanted to run three LLMs in parallel, initially disabling Flash Attention, and a very short context window...).
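For anyone wanting to change those same defaults on the server side: Ollama reads most of them from environment variables (names as I remember them from the Ollama docs; double-check against `ollama serve --help` on your version). A sketch:

```shell
# Handle one request at a time instead of several models in parallel
export OLLAMA_NUM_PARALLEL=1
# Turn Flash Attention on (disabled by default on some builds)
export OLLAMA_FLASH_ATTENTION=1
# Raise the default context window (in tokens)
export OLLAMA_CONTEXT_LENGTH=32768
# Then restart the server (`ollama serve`) so the settings take effect
```
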

Having a second pair of "eyes" to read a log error and dig into relevant code is super handy for getting ideas flowing.

reply
It looks like Copilot has direct support for Ollama if you're willing to set that up: https://docs.ollama.com/integrations/vscode

For LM Studio, under server settings you can start a local server that exposes an OpenAI-compatible API; you'd need to point Copilot at that. I don't use Copilot, so I'm not sure of the exact steps there.

reply
Basically LM Studio has a server that serves models over HTTP (localhost). Configure/enable the server and connect OpenCode to it.

Try this article https://advanced-stack.com/fields-notes/qwen35-opencode-lm-s...

I'm looking for an alternative to OpenCode, though; I can barely see the UI.

reply
Codex also supports configuring an alternative API endpoint for the model; you could try that: https://unsloth.ai/docs/basics/codex#openai-codex-cli-tutori...
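From memory, the Codex CLI reads this from `~/.codex/config.toml` via a custom model provider entry; something like the following (key names from the Codex config docs as I recall them — the linked guide has the exact ones, and the model name here is just a placeholder):

```toml
model = "qwen3.5:9b"
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
```

The same pattern should work for LM Studio by swapping in its base URL.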
reply