MCPs as a thin layer over existing APIs have lost their utility. Where MCPs shine is in custom servers built for teams: they reduce redundant thinking and token consumption, give the agent more useful context, and decrease mean time to decision.
Something as simple as correlating a git SHA to a CI build takes tens of seconds and a nontrivial number of tokens if Claude is using skills (making API calls to the CI server and GitHub itself). If you have an MCP server that Claude feeds a SHA into and gets back a bespoke, organized payload that adds relevant context to its decision-making process (such as a unified view of CI, diffs, et al.), then MCP is a win.
MCP shines as a bespoke context engine and fails as a thin API translation layer, basically. And the beauty/elegance is you can use AI to build these context engines.
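To make the "context engine" idea concrete, here's a minimal sketch of the payload-assembly step. The fetchers are stand-ins (a real server would call the CI API and GitHub, and would expose `build_context` as an MCP tool); the function names and fields are assumptions for illustration:

```python
import json

def fetch_ci_status(sha: str) -> dict:
    # Stand-in for a CI API call (e.g. look up the build for this SHA).
    return {"build": "demo-123", "status": "passed"}

def fetch_diff_summary(sha: str) -> dict:
    # Stand-in for a GitHub compare/diff call.
    return {"files_changed": 3, "additions": 42, "deletions": 7}

def build_context(sha: str) -> str:
    """Assemble one organized payload so the agent gets a unified view
    in a single tool call, instead of chaining raw API calls and
    stitching the results together itself."""
    payload = {
        "sha": sha,
        "ci": fetch_ci_status(sha),
        "diff": fetch_diff_summary(sha),
    }
    return json.dumps(payload, indent=2)

print(build_context("abc1234"))
```

The point isn't the code, which is trivial; it's that the stitching happens once, in the server, rather than on every agent run.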
In your example, you could achieve a similar outcome with a skill that included a custom command-line tool and a brief description of how to use it.
MCPs are especially well suited to cases that need a permanent instance running alongside the coding agent, for example to handle authentication or a long-lived service that is too cumbersome to launch every time the tool is called.
I mention mean time to decision, and that's one of the rationales for the MCP. A skill could call a script that does the same thing, but at that point aren't we just splitting hairs? We're both talking about automating repetitive thinking and actions the agent takes. And if the skill requires authentication, you have to encode passing that auth into the prompt. MCP servers can just read tokens from the filesystem at call time and don't require thinking at all.
Exactly. The way it’s mostly been used so far is a poor abstraction over stuff you can just put in the context and have the agent run commands.
It really shines in custom implementations coupled to projects. I’ve got a Qt desktop app, and my MCP server lets agents run the app in headless mode, take screenshots, execute code as in Playwright, inspect widget trees, and send clicks/text/etc., with only six tools and a thousand tokens or so of instructions. It took an hour to build with Claude Code, and now it can run end-to-end acceptance tests before committing them.