You could argue that they could just let the agent curl an agent-optimized API, and that is what MCP is.
1. A CLI script or a small collection of scripts.
2. A very short markdown file explaining how it works and when to use it.
3. Optionally, some other reference markdown files.
Context use is tiny; nearly everything is loaded on demand.
And as I'm writing this, I realize it's exactly what skills are.
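Concretely, the setup looks something like this (a sketch, not any particular skill; the CI-logs example and every name in it are made up):

    # skills/ci-logs/SKILL.md -- a few lines of markdown:
    #   "Use scripts/ci_logs.py to fetch logs for a CI run.
    #    Run `python scripts/ci_logs.py --help` for usage."
    #
    # skills/ci-logs/scripts/ci_logs.py:
    import argparse

    parser = argparse.ArgumentParser(
        description="Fetch trimmed logs for a CI run (hypothetical).")
    parser.add_argument("run_id", help="CI run identifier")
    parser.add_argument("--failed-only", action="store_true",
                        help="only print output from failed steps")
    args = parser.parse_args()

    # A real script would call the CI provider's API here; the point is
    # just the shape: one small script, discoverable on demand via --help.
    print(f"(would fetch logs for run {args.run_id})")

The agent only pays for the short markdown file up front; the script's details stay out of context until it actually runs it.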
Can anyone give an example of something that this wouldn't work for, and which would require MCP instead?
MCP is useful because I can add one in a single click for an external service (say, my CI provider). And it gives the provider some control over how the agent accesses resources (for example, more efficient/compressed, agent-oriented log retrieval vs the full log dump a human wants). And it can set up the auth token when you install it.
So yeah, the agent could write some of those queries manually (might need me to point it to the docs), and I could write helpers… or I could just one-click install the plugin and be done with it.
I don’t get why people get worked up over MCP; it’s just a (perhaps temporary) tool to help us get more context into agents in a more standard way than everyone writing a million different markdown files and helper scripts.
"MCP is useful because I can add one in a single click for an external service" Like... a CLI/API? [edit: sorry, not click, single 'uv' or 'brew' command]
"So yeah, the agent could write some those queries manually" Or, you could have a high-level CLI/API instead of a raw one?
"I don’t get why people get worked up over MCP" Because we tried them and got burned?
"to help us get more context into agents in a more standard way than everyone writing a million different markdown files and helper scripts." Agreed it's slightly annoying to add 'make sure to use this CLI/API for this purpose' in AGENTS.md but really not much. It's not a million markdown files tho. I think you're missing some existing pattern here.
Again, I fail to see how most MCPs are not lazy tools that could be well-scoped discoverable safe-to-use CLI/APIs.
I can run an MCP server on my local machine and connect it to an LLM front end in a browser.
I can use the GitHub MCP without installing anything on my machine at all.
I can run agents as root in a VM and give them access to things via an MCP running outside of the VM without giving them access to secrets.
It's an objectively better solution than just giving it CLIs.
The composition argument is compelling though. Instead of CLIs, what if the agent could write code where the tools are made available as functions? Something like:

    tools.get_foo(tools.get_bar())
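A sketch of how that could work (all names hypothetical; this assumes some bridge layer that exposes each MCP tool as a plain function):

    # Hypothetical bridge: each MCP tool becomes a Python function, so
    # the model composes calls in code instead of emitting one tool-call
    # message per hop.

    def get_bar() -> str:
        """Stub for an MCP tool; a real bridge would proxy the RPC."""
        return "bar-id-123"

    def get_foo(bar_id: str) -> str:
        """Another stub tool; consumes get_bar's output directly."""
        return f"foo-for-{bar_id}"

    # One line of agent-written code replaces two round trips through
    # the model's context:
    print(get_foo(get_bar()))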
The "discoverable" action itself is specified in the MCP standard. The "well-scoped" part is accomplished by creating a whitelist of tool resources in the MCP manifest.
In contrast, a general-purpose CLI/API that wasn't specialized for AI usage is more open-ended and less standardized than the explicit MCP protocol.
Yes, Claude can already use existing CLI/API tools without any MCP server, but that doesn't make MCP redundant. (But I'm not saying this thread's article about fetching the latest Google Android docs needs an MCP server.)
E.g. you mentioned CLIs already have discoverability via "clitool --help". That's somewhat true, but it's still not as standardized as the MCP specification for discovering available resources and actions. The problem is that the way a CLI spells its syntax and help output can differ: "clitool -help" (1 hyphen, not 2), or "clitool -h", or "/help", or "/?". Some CLI tools also hide the detailed help behind a modifier such as "clitool -help all". Or the critical CLI usage needed to solve a random user prompt is actually described in the README.md. The text output from the CLI's help option is not standardized either. The point is that none of this is standardized among the developers of various CLI tools.
(Random note: last week, my ffmpeg build with the AV1 codec broke because I upgraded from NASM 2.x to NASM 3.x. Google's libaom AV1 build script[0] does an interesting hack: it calls "nasm.exe -hf" (2.x syntax) and parses the output to "discover" the available NASM optimization flags... but NASM 3.x changed the help syntax to "nasm.exe -h all" and removed the old "-hf", so stdout was empty. The build script couldn't "discover" any optimization flags and aborted the build. That's what happens when the discoverability mechanism itself is not standardized across CLI tools. I'm not saying NASM needs an MCP server to avoid the "-hf" vs "-h all" problem; it's just an anecdote about CLI tools being inconsistent.)
The same lack of common standards applies to existing APIs that weren't originally intended for AI consumption.
MCP makes things less open-ended and more structured by listing specific integration points. This makes it much easier to quickly add capabilities to Claude Desktop and other AI tools.
Yes, an AI tool could certainly burn a bunch of tokens and electricity by running random "clitool --help" commands, parsing the output through the LLM context, and "learning" the likely actions that can do/answer whatever the user prompted in English. Or it can skip all that inefficiency because people agreed on how to specify the exact integration points (the "tools & resources" in MCP lingo). Everybody agreeing on how to do that is basically what MCP is.
Or put another way, the MCP protocol is itself "a well-scoped and discoverable API" that you're suggesting.
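Concretely, discovery in MCP is one standardized JSON-RPC call, identical for every server. A sketch of the shape (the tool name and schema are invented for illustration):

    # The client asks any MCP server the same question:
    request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

    # ...and gets back tool definitions in one fixed format:
    response = {
        "jsonrpc": "2.0",
        "id": 1,
        "result": {
            "tools": [{
                "name": "get_ci_logs",            # invented example tool
                "description": "Fetch trimmed logs for a CI run",
                "inputSchema": {                  # JSON Schema for the args
                    "type": "object",
                    "properties": {"run_id": {"type": "string"}},
                    "required": ["run_id"],
                },
            }]
        },
    }

No "-h" vs "--help" vs "/?" guessing; the method name and the response shape are fixed by the spec.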
E.g. rather than give an AI tool raw access to the Python API of the Home Assistant smart-home hub, people created MCP servers that give AI chatbots a whitelist of available actions. The AI doesn't "need" an MCP server in this case, but it makes things easier and more predictable than the raw Python API.
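A toy version of that whitelisting idea (nothing here is the actual Home Assistant API; all names are made up):

    # Expose only a fixed table of vetted actions instead of a whole API.
    def turn_on_light(room: str) -> str:
        return f"light in {room}: on"      # stand-in for a real hub call

    def set_thermostat(temp_c: float) -> str:
        return f"thermostat set to {temp_c}C"

    ALLOWED_TOOLS = {
        "turn_on_light": turn_on_light,
        "set_thermostat": set_thermostat,
    }

    def call_tool(name: str, **kwargs):
        # The server only dispatches through this table, so the agent
        # can never reach arbitrary methods on the underlying hub.
        if name not in ALLOWED_TOOLS:
            raise ValueError(f"tool {name!r} is not whitelisted")
        return ALLOWED_TOOLS[name](**kwargs)

    print(call_tool("turn_on_light", room="kitchen"))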
MCP's ideas of standardizing the discovery and whitelisting of available actions are another iteration of the decades-long quest in computer science to "catalog a universe of services/capabilities and have tools discover them". It overlaps with previous ideas such as CORBA RPC describing available functions (IDL, the interface description language), Microsoft COM's IUnknown::QueryInterface, and Roy Fielding's original REST HATEOAS, where an entry-point URL (say, a banking URL) uses a standardized field to specify the next set of useful URLs (create an account, make a deposit, etc.).
MCPs are not "better" than CLI/API tools. They solve a different problem: standardized instead of ad hoc discovery of capabilities, N+M instead of N×M combinatorial integrations, security, etc.
[0] https://aomedia.googlesource.com/aom/+/refs/tags/v3.13.1/bui...
Has this changed?
My uncharitable interpretation is that MCP servers are NJ-style ("worse is better") design for agents, and high-quality APIs and CLIs are MIT-style design.
But at the end of the day, MCP is about making it easy/standard to pull in context from different sources. For example, to get logs from a CI run for my PR, or to look at jira tickets, or to interact with GitHub. Sure, a very simple API baked into the model’s existing context is even better (Claude will just use the GH CLI for lots of stuff, no MCP there.)
MCP is literally just a way for end users to be able to quickly plug in to those ecosystems. Like, yeah, I could make some extra documentation about how to use my CI provider’s API, put an access token somewhere the agent can use… or I could just add the remote MCP and the agent has what it needs to figure out what the API looks like.
It also lets the provider (say, Jira) get some control over how models access your service instead of writing whatever API requests they feel like.
Like, MCP is really not that crazy. It’s just a somewhat standard way to make plugins for getting extra context. Sure, agents are good at writing API requests, but they’re not so good at knowing why, when, or what to use.
People get worked up over the word “protocol” like it has to mean some kind of super advanced and clever transport-layer technology, but I digress :p
You say "a very simple API baked into the model's existing context is even better". So we agree? MCP's design actively discourages that better path.
"Agents are good at writing API requests, but not so good at knowing why, when, or what to use". This is exactly what progressive discovery solves. A good CLI has --help. A good API has introspection. MCP's answer is "dump all the tool schemas into context and let the model figure it out," which is O(N) context cost at all times vs O(1) until you actually need something.
"It's just a standard way to make plugins" The plugin pattern of "here are 47 tool descriptions, good luck" is exactly the worse-is-better tradeoff I'm describing. Easy to wire up, expensive at runtime, and it gets worse as you add more servers.
The NJ/MIT analogy isn't about complexity, it's about where the design effort goes. MCP puts the effort into easy integration. A well-designed API puts the effort into efficient discovery. One scales, the other doesn't.
    fabien@debian2080ti:~$ du -sh /usr/share/man/ #all lang
    52M /usr/share/man/
Yep... in fact there is already a lot of tooling for that, e.g. man obviously, but also apropos.

I’ve been wrapping the agent’s curl calls in a small CLI that handles the auth, but I’m wondering if other people have come up with something lighter/more portable.
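For reference, mine is roughly this shape (a minimal sketch; the env var name, base URL, and path are all made up):

    #!/usr/bin/env python3
    # Wrap curl so the token comes from the wrapper's environment and
    # never appears in the command line the agent writes.
    import os
    import subprocess
    import sys

    BASE_URL = "https://ci.example.com/api"  # made-up endpoint

    token = os.environ["CI_TOKEN"]           # made-up variable name
    path = sys.argv[1]                       # e.g. "runs/123/logs"
    subprocess.run(
        ["curl", "-sS",
         "-H", f"Authorization: Bearer {token}",
         f"{BASE_URL}/{path}"],
        check=True,
    )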
Job security.