
I feel like I need the opposite, a cursory view, or at least a definition.

Most of the material on MCP is either too specific or too in depth.

WTF is it?! (Other than a dependency by Anthropic)



It's a vibe-coded protocol that lets LLMs query external tools.

You write a wrapper ("MCP server") over your docs/apis/databases/sites/scripts that exposes certain commands ("tools"), and you can instruct models to query your wrapper with these commands ("calling/invoking tools") and expect responses in a certain format that they can then use.

That is it.
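A minimal sketch of that wrapper idea, with a made-up `search_docs` backend and none of the real SDK or transport machinery (the `tools/list` and `tools/call` method names follow the JSON-RPC convention the protocol uses; everything else here is hypothetical):

```python
# Hypothetical backend capability we want to expose to a model.
DOCS = {"mcp": "Model Context Protocol: a JSON-RPC based tool protocol."}

def search_docs(query: str) -> str:
    """The actual capability being wrapped."""
    return DOCS.get(query.lower(), "no match")

# The "MCP server" part: advertise tools, then dispatch calls to them.
TOOLS = {
    "search_docs": {
        "description": "Search the documentation for a query string.",
        "handler": search_docs,
    }
}

def handle(request: dict) -> dict:
    """Dispatch a JSON-RPC-shaped request to the right tool."""
    if request["method"] == "tools/list":
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif request["method"] == "tools/call":
        tool = TOOLS[request["params"]["name"]]
        result = tool["handler"](**request["params"]["arguments"])
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

print(handle({"jsonrpc": "2.0", "id": 1, "method": "tools/call",
              "params": {"name": "search_docs",
                         "arguments": {"query": "mcp"}}}))
```

The model never talks to your database or docs directly; it only ever sees the tool list and the results of `tools/call`.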

Why vibe-coded? Because instead of bi-directional websockets the protocol uses unidirectional server-sent events (SSE), so you need to send requests to a separate endpoint and then listen to the SSE stream hoping for an answer. There's also the non-existent authentication.
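To illustrate the split being complained about (this is a toy, not real MCP code; in-memory queues stand in for the HTTP endpoint and the SSE stream): the request goes one way, and the answer arrives later on a separate one-way channel, correlated by id.

```python
import queue

sse_stream = queue.Queue()  # stands in for the one-way SSE channel

def post_message(request: dict) -> None:
    """Stands in for POSTing a JSON-RPC request to the messages endpoint.
    Note: nothing useful comes back on this call itself."""
    result = {"pong": True} if request["method"] == "ping" else None
    # The server answers later, out of band, on the event stream.
    sse_stream.put({"jsonrpc": "2.0", "id": request["id"], "result": result})

def await_response(request_id: int) -> dict:
    """The client listens to the event stream hoping to see its id."""
    while True:
        event = sse_stream.get(timeout=1)
        if event["id"] == request_id:
            return event

post_message({"jsonrpc": "2.0", "id": 7, "method": "ping"})
print(await_response(7)["result"])  # {'pong': True}
```

With websockets, the request and its response would travel over the same connection and the correlation bookkeeping largely disappears.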


You are complaining about the transport aspect of the specification.

The protocol could easily be transported over websockets. Heck, since stdio is one transport, you could simply pipe that over websockets. Of course, that leaves a massive gap around authn and authz.

The Streamable HTTP transport includes an authentication workflow using OAuth. Of course, that only addresses part of the issue.

There are many flaws that need improvement in MCP, but railing against the current transports by using a presumably denigratory term ("vibe-coded") isn't helpful.

Your "that is it" stops at talking about one single aspect of the protocol. On the server side you left out resources and prompts. On the client side you left out sampling, which I find to be a very interesting possibility.

I think MCP has many warts that need addressing. I also think it's a good start on a way to standardize connections between tools and agents.


The choice of transport is just one, quite telling, aspect of this mess.

Could these commands be executed over websockets? Yes, they could. Will they? No, because the specification literally only defines two transports, and all of the clients only support those.

As with any hype, the authors drink their own Kool-Aid, invent their own terminology, and ignore literally everything that came before them.

Even reading through explanations on the once again vibe-coded https://modelcontextprotocol.io/ you can't help but wonder why.

"tools" are nothing but RPC calls (that's why the base of this is JSON RPC)

"resources"? PHP could do an fopen on remote URLs in the 90s. It literally is just that: "Each resource is identified by a unique URI and can contain either text or binary data." You don't say.

"sampling"? It literally is just bi-directional communication. "servers request data from the client by sending commands". What a novel idea, must have a new name and marketing blurb about "powerful MCP feature, enabling sophisticated agentic behaviors while maintaining security and privacy."

As for auth, again, MCP doesn't have it, and expects you to just figure it out yourself. The entirety of the "spec" on it is just "MCP provides an Authorization framework for use with HTTP and you're expected to conform to this spec". There's no spec. Edit: to be clear, at the time of writing all mentions of an "MCP Auth Spec" on the internet link to https://modelcontextprotocol.io/specification/2025-03-26 which at the time of writing contains zero mentions of OAuth and says nothing about auth (and is not a spec to begin with) [1]

And so on.

It's hype-driven vibe-coded development at its finest.

[1] The auth spec is here: https://modelcontextprotocol.io/specification/2025-03-26/bas... I don't think anything on the site links to this directly. I found the link from some github discussion. See issues with it here: https://blog.christianposta.com/the-updated-mcp-oauth-spec-i...


Maybe calmly look at the spec and notice that the site does actually have a clear navigation to authorization (2025-03-26 > Base Protocol > Authorization).

You clearly have no desire to objectively evaluate what the specification is trying to do and are simply disregarding all aspects of the specification as trite or pointless. As such, this will be my last response on the subject.

I encourage you to take a breath and maybe try to understand why the specification was created in the first place before dismissing it fully.


> Maybe calmly look at the spec and notice that the site does actually have a clear navigation to authorization (2025-03-26 > Base Protocol > Authorization).

Not on mobile. At least I couldn't see any obvious link to a very important part of the spec. There are circular links everywhere, none to auth.

> You clearly have no desire to objectively evaluate what the specification is trying to

I'm not questioning what it is trying to do. I'm questioning how it's doing that.

> try to understand why the specification was created in the first place before dismissing it fully.

I'm not questioning why it's trying to do that. I'm questioning how it's doing it.

Stop buying into hype and marketing wholesale, and actually read what it is, and actually understand what your opponents are talking about.


More than vibe-coded, it feels vibe-conceived.

But that doesn't necessarily have to be negative.


I see zero reason they couldn't have used standard websockets and made it simpler and more robust.

Awful case of "not invented here" syndrome

I'm personally interested in whether WebTransport could be the basis for something better.


look at the <client> implementation here, https://modelcontextprotocol.io/quickstart/client

that's the missing piece in most of these descriptions.

You send off a description of the tools, the model decides if it wants to use one, then you run it with the args, send the result back into the context, and loop.
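That loop, with a stubbed-out model standing in for the real chat-completion API and a trivial `add` tool standing in for an MCP server (all names here invented), is roughly:

```python
def fake_model(messages, tool_descriptions):
    """Pretend model: asks for the `add` tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "arguments": {"a": 2, "b": 3}}}
    return {"text": f"The answer is {messages[-1]['content']}"}

tools = {"add": lambda a, b: a + b}
tool_descriptions = [{"name": "add", "description": "Add two numbers"}]

def run(prompt: str) -> str:
    messages = [{"role": "user", "content": prompt}]
    while True:
        # 1. Send the tool descriptions plus the conversation so far.
        reply = fake_model(messages, tool_descriptions)
        if "tool_call" not in reply:
            return reply["text"]          # 3. Model is done; return its text.
        call = reply["tool_call"]
        result = tools[call["name"]](**call["arguments"])  # 2. Run the tool...
        messages.append({"role": "tool", "content": result})  # ...and loop.

print(run("What is 2 + 3?"))  # The answer is 5
```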


I found that the other day and finally got what MCP is. Kinda just a convenience layer for hooking up an API via good "old" tool use.

Unless I'm missing something major, it's just marginally more convenient than hooking up tool calls for, say, OpenAPI. The power is probably in the hype around it more than in its technical merits.


Except in practice it is far less convenient because it constantly breaks, with terrible error handling


I had a fun one yesterday. The `mcp-atlassian` server failed trying to create multiple Jira tickets. The error response (and error logs) was just a series of newlines (one for each ticket we wanted to create). Turned out the issue was the LLM decided to mis-capitalize the project code. My best guess is it read the product name, which has the same letters but not fully uppercase, and used that instead of the Jira project code which was also provided in the context.


The ideal is that you can simply connect to whatever MCP Server endpoint you need, without needing to code your own tools.

The reality is that the space is still really young and people are figuring things out as they go.

The number of people jumping in with no real clue what they are doing is shocking. Relatedly, the number of people who can't see the value in a protocol specifically designed to work with LLM Tool Calling is equally shocking.

Can you write code that glues an OpenAPI Server to an LLM-based Tool Calling Agent? 100%! Will that setup flood the context window of the LLM? Almost certainly. You need to write code to distill those OpenAPI responses down to some context the LLM can work with, respecting the limited space for context. Great, now you've written a wrapper on that OpenAPI server that does exactly that. And you've written, in essence, a basic MCP Server.
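That distilling step might look like this in miniature (field names invented; a real OpenAPI response would be far larger and noisier):

```python
# A verbose API payload like this eats context-window space fast...
raw_response = {
    "id": "evt_9f2",
    "links": {"self": "/events/evt_9f2", "related": "/users/u_1"},
    "metadata": {"trace_id": "ab12", "region": "eu-west-1"},
    "status": "failed",
    "error": {"code": 422, "message": "missing field: email"},
}

def distill(response: dict) -> str:
    """Keep only what the model needs to act on; drop the plumbing fields."""
    summary = f"status={response['status']}"
    err = response.get("error")
    if err:
        summary += f", error {err['code']}: {err['message']}"
    return summary

print(distill(raw_response))  # status=failed, error 422: missing field: email
```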

Now, if someone were to write an MCP Server that used an LLM (via the LLM Client 'sampling' feature) to consume an OpenAPI Server Spec and convert it into MCP Tools dynamically, THAT would be cool. Basically a dynamic self-coding MCP Server.


Terraform for LLMs


A standard protocol that allows many different Applications to provide context to many different LLMs.

Conversely, it allows many different LLMs to get context via many different Applications using a standard protocol.

It addresses an M×N problem.
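Concretely: with M agents and N tool providers you need M×N bespoke integrations without a shared protocol, but only M+N adapters with one.

```python
def integrations(m_agents: int, n_providers: int) -> tuple[int, int]:
    """Pairwise glue code vs. one adapter per side of a shared protocol."""
    return m_agents * n_providers, m_agents + n_providers

bespoke, with_protocol = integrations(5, 8)
print(bespoke, with_protocol)  # 40 13
```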


https://youtu.be/74c1ByGvFPE?si=S-5oBO8ptL_7WmQ9

I like this succinct explanation.


It's an API to expose tools to LLMs.


Or... it's a tool to expose APIs to LLMs.


Functions that an LLM can use in its reasoning are called "tools", so the former is probably more correct, in the sense that an API can be used to provide the LLM with tools.


I just thought the inversion was fun. A lot of MCPs are basically wrappers around APIs, hence the comment. But certainly not all of them.


My eye twitches every time I see something like "a lot of MCPs are". It's probably a lost cause at this point, but it's an MCP Server, not an MCP. And the other side of that connection would be an MCP Client that lives in an MCP Host which almost certainly could simply be called an Agent.


Are you sure it's not the primary antagonist from Tron (1982)?

https://en.wikipedia.org/wiki/List_of_Tron_characters#Master...


Hah! I'd totally forgotten about that. Thanks! Now I need to go re-watch the movie.


You're not wrong, but you are being pretty pedantic about it. I consider myself pedantic in most circumstances but this one clearly doesn't bother me.


It also supports Resources and Prompts, not just Tools.


This is a VFAQ https://hn.algolia.com/?q=what+is+mcp

But to save you the click & read: it's OpenAPI for LLMs


OpenAPI for LLMs is such a good way to describe it!


Seems apt to me as well.

Before the whole "just use OpenAPI" crowd arrives, the point is that LLMs work better with curated context. An OpenAPI server not designed for that will quickly flood an LLM context window.


So.. why not use OpenAPI?


I guess there’s not really a good reason. Maybe there are specific constraints when working with LLMs? OpenAPI is quite verbose

Anyway, the technical merits don’t really matter. MCP (and any standard really) are only useful because they’re widely adopted. OpenAPI isn’t used for this, but MCP is. So, in practice, MCP is better for AI agents


It's so apt that one of the most common questions/statements I hear is "why not use OpenAPI?" I don't know the answer. Or "WTF is Streamable HTTP?" It sure feels like we're trying to reinvent websockets. It must be either #notinventedhere, or while the genius devs build the LLMs, the interns do the documentation and SDKs.



