How "useful" a particular MCP server is depends a lot on its quality, but I've been slowly testing the waters with the GitHub MCP and the Home Assistant MCP.
GH was more of a "go fix issue #10" type deal, where I had spent the better part of a dog walk dictating the problem, the edge cases I could think of, and what a solution would probably entail.
Because I have robust linting and tests on that repo, the first proposed solution was correct.
The Home Assistant MCP server leaves a lot to be desired; there's next to no write support, so it's not possible to have _just_ the LLM produce automations, or even assist with basic organization or dashboard creation based on instructions.
I was looking at Ghidra MCP, but apparently Ghidra plugins must be compiled _for that specific version of Ghidra_, and I was not in the mood to set up a Ghidra dev environment... but I was able to get _fantastic_ results just pasting some pseudocode into GPT and asking "what does this do, given that iVar1 is ...", and the summary I got back was correct. I then asked "given $aboveAnalysis, what bytes would I need to put into $theBuffer to exploit $theorizedIssueInAboveAnalysis" and got back the right answer _and_ a PoC Python script. If I didn't have to manually copy/paste so much info back and forth, I probably would have been blown away by Ghidra + MCP.
This totally reads to me like you're prompting an LLM instead of talking to a person
ChatGPT asks for a host for the MCP server.

All the MCPs I find give a config like:
```
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
```
It feels a little like wizardry to me.
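For what it's worth, there's no wizardry in that blob: each entry under `mcpServers` just tells the client which local subprocess to launch; the client then speaks JSON-RPC to it over the child's stdin/stdout (that's why there's no "host" to fill in for these). A rough Python sketch of what a client does with that config (the parsing here is mine, not any particular client's actual code):

```python
import json
import shlex

# The kind of blob MCP clients ask you to paste (from the comment above).
config = json.loads("""
{
  "mcpServers": {
    "sequential-thinking": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-sequential-thinking"]
    }
  }
}
""")

# All the client really does with it: build a command line per server entry,
# spawn it as a subprocess, and exchange JSON-RPC messages over its pipes.
for name, server in config["mcpServers"].items():
    cmd = [server["command"], *server.get("args", [])]
    print(f"{name}: {shlex.join(cmd)}")
    # A real client would continue with something like:
    #   proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # and then send/receive newline-delimited JSON-RPC over those pipes.
```

So a "host" only comes into play for remote MCP servers served over HTTP; configs like the one above are purely local process launchers.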