"MCP is useful because I can add one in a single click for an external service" Like... a CLI/API? [edit: sorry, not click, single 'uv' or 'brew' command]
"So yeah, the agent could write some those queries manually" Or, you could have a high-level CLI/API instead of a raw one?
"I don’t get why people get worked up over MCP" Because we tried them and got burned?
"to help us get more context into agents in a more standard way than everyone writing a million different markdown files and helper scripts." Agreed it's slightly annoying to add 'make sure to use this CLI/API for this purpose' in AGENTS.md but really not much. It's not a million markdown files tho. I think you're missing some existing pattern here.
Again, I fail to see how most MCPs are not lazy tools that could be well-scoped discoverable safe-to-use CLI/APIs.
I can run an MCP on my local machine and connect it to an LLM frontend in a browser.
I can use the GitHub MCP without installing anything on my machine at all.
I can run agents as root in a VM and give them access to things via an MCP running outside of the VM without giving them access to secrets.
It's an objectively better solution than just giving it CLIs.
The composition argument is compelling though. Instead of CLIs, what if the agent could write code where the tools are made available as functions? tools.get_foo(tools.get_bar())

The "discoverable" part is itself specified in the MCP standard. The "well-scoped" part is accomplished by creating a whitelist of tool resources in the MCP manifest.
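To make the tools-as-functions idea concrete, here's a minimal sketch. The tool names (get_foo, get_bar) and the registry are invented stand-ins; a real version would dispatch each call over an MCP connection rather than a local dict.

```python
# Hypothetical sketch: expose each whitelisted tool as a plain function
# so agent-written code can compose them in one expression instead of
# emitting one JSON tool call per step. All names here are made up.

def make_tool(name, registry):
    """Return a callable that dispatches to a (stand-in) tool registry."""
    def tool(*args):
        return registry[name](*args)
    return tool

# Stand-in registry; a real one would proxy calls to an MCP server.
registry = {
    "get_bar": lambda: "bar-id-123",
    "get_foo": lambda bar: f"foo-for-{bar}",
}

class Tools:
    def __init__(self, registry):
        for name in registry:  # only whitelisted tools get exposed
            setattr(self, name, make_tool(name, registry))

tools = Tools(registry)
print(tools.get_foo(tools.get_bar()))  # composed: foo-for-bar-id-123
```

The point of the wrapper is that the agent writes ordinary code and composition comes for free, while the whitelist still bounds what it can reach.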
In contrast, a general-purpose CLI/API that wasn't specialized for AI usage is more open-ended and less standardized than the explicit MCP protocol.
Yes, Claude can already use existing CLI/API tools without any MCP server, but that doesn't make MCP redundant. (But I'm not saying this thread's article about fetching the latest Google Android docs needs an MCP server.)
E.g. you mentioned CLIs already have discoverability via "clitool --help". That's somewhat true, but it's still not as standardized as the MCP specification for discovering available resources and actions. The problem is the CLI's way of displaying syntax and help could be spelled differently, such as "clitool -help" (1 hyphen, not 2) or "clitool -h" or "/help" or "/?". Some CLI tools also have extra detailed help that requires a modifier such as "clitool -help all". Or the critical CLI usage needed to solve a random AI user prompt is actually described in the "README.MD". The text output from the CLI's help option is not standardized either. The point is none of that is standardized among the developers of various CLI tools.
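A rough sketch of what that non-standardization forces a client to do: guess among help-flag spellings until one produces output. The list of candidate flags below is illustrative, not exhaustive, and the whole approach is brittle by design.

```python
# Sketch of the problem: to "discover" a CLI's capabilities you must
# probe several possible help spellings and hope one of them works.
import subprocess

HELP_SPELLINGS = ["--help", "-help", "-h", "/?", "-h all", "-hf"]

def probe_help(tool):
    """Try common help flags until one yields output; brittle by design."""
    for flag in HELP_SPELLINGS:
        try:
            out = subprocess.run(
                [tool, *flag.split()],
                capture_output=True, text=True, timeout=5,
            )
        except (FileNotFoundError, subprocess.TimeoutExpired):
            return None
        text = out.stdout or out.stderr
        if text.strip():
            return flag, text
    return None

# e.g. probe_help("git") would likely match "--help" on most systems.
```

And even when a flag matches, the help text itself still has to be parsed heuristically, which is exactly where the NASM anecdote below went wrong.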
(Random note.. last week, my ffmpeg build with the AV1 codec broke because I upgraded from NASM 2.x to NASM 3.x. Google's libaom AV1 build script[0] did an interesting hack by calling "nasm.exe -hf" (2.x syntax) and parsing that output to "discover" the available NASM optimization flags... but the new NASM 3.x changed the help syntax to "nasm.exe -h all" and removed the old "-hf", causing stdout to be empty. Thus, the build script couldn't "discover any optimization flags" and aborted the build. That's what happens when the discoverability mechanism itself is not standardized across all CLI tools.) (I'm not saying nasm needs an MCP server to avoid the "-hf" vs "-h all" problem. It's just an anecdote about CLI tools being inconsistent.)
Same variety of "no common standards" applies to existing APIs that weren't originally intended for AI consumption.
MCP makes things less open-ended and more structured by listing specific integration points. This makes it much easier to quickly add capabilities to Claude Desktop and other AI tools, etc.
Yes, an AI tool could certainly burn a bunch of tokens and electricity by running random "clitool --help" invocations, parsing that output through the LLM context, and "learning" the likely actions that can do/answer whatever the AI user prompted in English. Or skip all that inefficiency and have people agree on how to specify the exact integration points (the "tools & resources" in MCP lingo). Everybody agreeing on how to do that is basically what MCP is.
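For contrast with the help-flag guessing above, here's roughly what the standardized discovery step looks like. The "tools/list" method name and the name/description/inputSchema fields come from the MCP spec; the get_weather tool and its schema are invented for illustration, and this is just the JSON shapes, not a working client.

```python
# Rough illustration of MCP's "tools/list" discovery step: one
# standardized JSON-RPC request replaces guessing at help flags.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# What a server's response might look like: each tool carries a name,
# a description, and a JSON Schema describing its inputs.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "get_weather",  # hypothetical example tool
                "description": "Get current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}

# The client enumerates capabilities without any per-tool text parsing.
names = [t["name"] for t in response["result"]["tools"]]
print(json.dumps(names))  # ["get_weather"]
```

Every MCP server answers this same request the same way, which is the whole "everybody agreeing" point.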
Or put another way, the MCP protocol is itself "a well-scoped and discoverable API" that you're suggesting.
E.g. rather than give an AI tool raw access to the Python API of the Home Assistant smart-home hub, people created MCP servers that give AI chatbots a whitelist of available actions. The AI doesn't "need" an MCP server in this case, but it makes things easier and more predictable than the raw Python API.
MCP's ideas of standardizing the discovery and whitelisting of available actions are another implementation of the decades-long quest in computer science to "catalog a universe of services/capabilities and have tools discover them". It overlaps with previous ideas such as CORBA RPC's IDL (interface definition language) for describing available functions, Microsoft COM's IUnknown::QueryInterface, and Roy Fielding's original REST HATEOAS, where an entrypoint URL (say, a banking URL) uses a standardized field to specify the next set of useful URLs (create an account, make a deposit, etc.)
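To illustrate the HATEOAS parallel: the entrypoint response itself advertises the next available actions as links, much like an MCP server advertises its tool list. The URLs and link names below are invented for illustration.

```python
# Hedged sketch of the HATEOAS idea: a banking entrypoint response
# carries a standardized field ("_links" here) listing what the client
# can do next. All URLs and relation names are made up.
entrypoint = {
    "account": {"balance": 100.00},
    "_links": {
        "self":    {"href": "/accounts/12345"},
        "deposit": {"href": "/accounts/12345/deposits"},
        "open":    {"href": "/accounts"},
    },
}

# A generic client discovers its options by reading the links,
# just as an MCP client reads the tools/list result.
actions = sorted(entrypoint["_links"])
print(actions)  # ['deposit', 'open', 'self']
```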
MCPs are not "better" than CLI/API tools. They solve a different problem: standardized instead of ad hoc discovery of capabilities, N+M instead of NxM combinatorial integrations, security, etc.
[0] https://aomedia.googlesource.com/aom/+/refs/tags/v3.13.1/bui...