I find the opposite after reading the spec. Did you read the spec? I mean the actual spec, not Python API documentation and such. :)
It’s just JSON-RPC between a client and one or more servers. The AI agent interaction is not part of what the protocol is designed for, except for re-prompting requests made by tools. It has to be AI agnostic.
For the tool call workflow: (a) the client requests the list of tools from the known servers, then forwards those (possibly after translating them to API calls like the OpenAI tool-call API) to any AI agents it wants; (b) when the AI then wants to call a tool, it returns a request that needs to be forwarded to the MCP server for handling; and (c) the client returns the result back to the AI.
The spec is actually so simple that no SDK is even necessary; you could just write a script in anything with an HTTP client library.
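To make the (a)/(b)/(c) workflow above concrete, here is a minimal sketch of the JSON-RPC messages involved. The `tools/list` and `tools/call` method names follow the MCP spec; the `get_weather` tool, its schema, and the OpenAI-style translation are illustrative assumptions, and no actual network call is made:

```python
import json

def jsonrpc(method, params, req_id):
    # MCP's wire format is plain JSON-RPC 2.0
    return {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}

# (a) ask a server for its tool list
list_req = jsonrpc("tools/list", {}, 1)

# a hypothetical tool descriptor, as a server might return it
tool = {
    "name": "get_weather",
    "description": "Get current weather for a city",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

# translate the descriptor into an OpenAI-style function-calling
# entry before forwarding it to the AI agent
openai_tool = {
    "type": "function",
    "function": {
        "name": tool["name"],
        "description": tool["description"],
        "parameters": tool["inputSchema"],
    },
}

# (b) when the AI asks to call the tool, forward it to the MCP server
call_req = jsonrpc(
    "tools/call",
    {"name": "get_weather", "arguments": {"city": "Berlin"}},
    2,
)

# (c) the server's JSON-RPC response would then be handed back to the AI
print(json.dumps(call_req))
```

Each of these dicts would just be POSTed as a JSON body with any HTTP client, which is the whole point: there is nothing here an SDK is strictly needed for.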
Oh, there's a spec! Something concrete, with definitions?! I'm starting to read now, and for the first time I understand something concrete, even if it's still somewhat verbose.
I've spent so much time clicking through pages and reading and not understanding, but without finding the spec. Thanks so much!
Yeah, it is not a well-thought-out spec. There is big confusion about what an MCP Client is versus an MCP Host, which is a useless separation: what the spec calls a client is just a connection to a server, while the MCP host is the real client (the apps using MCP, like Claude Desktop, CLI tools, etc.).
I had that same initial reaction, but I think it makes sense if you think of it in terms of software development. There are many different MCP clients being developed that are just that - clients. They don't take care of hosting the LLM or any other functionality of the application but are meant to be plugged into existing applications to enable MCP support. So from that perspective, the difference is useful, as it refers to code developed by different people.
But the host application does much more than just connect to MCP servers, as the host manages one-to-many client connections. The host application also has OTHER client connections to AI agents and so forth.
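The host/client split being debated can be sketched in a few lines. Everything here is illustrative (the class and attribute names are not from the spec); the point is just that the host owns one MCP "client" per server connection, plus separate connections to AI agents:

```python
class MCPClient:
    """In spec terms: one 1:1 connection from the host to a single MCP server."""
    def __init__(self, server_url):
        self.server_url = server_url

class Host:
    """The actual application (Claude Desktop, a CLI tool, ...)."""
    def __init__(self):
        self.mcp_clients = []   # one MCP client per server connection
        self.agents = []        # separate connections to AI agents

    def connect_server(self, server_url):
        self.mcp_clients.append(MCPClient(server_url))

host = Host()
host.connect_server("http://localhost:8001")
host.connect_server("http://localhost:8002")
print(len(host.mcp_clients))  # 2
```

Seen this way, the spec's "client" is closer to a connection object than to what people colloquially call a client application.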
I think it can be confusing in general; it’s like understanding X11, where the client-server relationship is conceptually flipped. :)
> But the host application does much more than just connect to MCP servers, as the host manages one-to-many client connections. The host application also has OTHER client connections to AI agents and so forth.
Yes, that is the case, but they could have just called those connections; an MCP client (host) keeps multiple open connections to MCP servers. Everywhere in the docs and on the internet, people call the app connecting to MCP servers the MCP Client, not the MCP Host.
This blog post is miles better than the MCP spec, which, yes, describes what you should do but doesn't really differentiate it from anything beyond JSON-RPC + auth. I think that's the point, though. It is really just an RPC layer for LLMs, and by keeping it "generic", an LLM can do anything with it.