
But that's not what MCP does. It is a tool created by Anthropic (the 2nd most used LLM) to provide portability and vendor neutrality between different LLMs. It's like Terraform for LLMs.

Also, providing data through function calls/tool use is not context; you are overloading the term. Context is the LLM context. If you fetch from a DB, it's something else.




> But that's not what MCP does. It is a tool created by Anthropic (the 2nd most used LLM) to provide portability and vendor neutrality between different LLMs. It's like Terraform for LLMs.

Given that your only contributions in this thread are to acknowledge your ignorance of MCP [0] and to post the summary dismissal upthread that shows that ignorance, it would probably behoove you to learn more about MCP before confidently making assertions about it. Suffice it to say that this is inaccurate, and others have already explained in your "WTF is it" thread what MCP actually is.

> Also, providing data through function calls/tool use is not context; you are overloading the term. Context is the LLM context. If you fetch from a DB, it's something else.

If you believe this then you don't understand how tool use is implemented. It's literally accomplished by injecting a tool's response into the context [1].
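To make the mechanism concrete, here is a minimal sketch of a tool-use loop. The `llm()` and `get_weather()` functions are stand-ins I made up for illustration, not a real API; the point is that the tool's output is appended to the message list and therefore becomes part of the context the model sees on its next call:

```python
import json

def get_weather(city: str) -> str:
    # Hypothetical tool; a real one would hit an external service or DB.
    return json.dumps({"city": city, "temp_c": 21})

def llm(messages):
    # Stand-in for a chat-completions call. Returns a tool call the first
    # time, then answers once a tool result is present in the context.
    if any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "It's 21C in Paris."}
    return {"role": "assistant",
            "tool_call": {"name": "get_weather", "arguments": {"city": "Paris"}}}

messages = [{"role": "user", "content": "What's the weather in Paris?"}]
reply = llm(messages)

if "tool_call" in reply:
    call = reply["tool_call"]
    result = get_weather(**call["arguments"])
    # The tool's response is injected into the context:
    messages.append({"role": "tool", "name": call["name"], "content": result})
    reply = llm(messages)
```

The second `llm()` call only "knows" the weather because the tool result now sits in its context window as ordinary tokens.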

As a general life tip: most pedants are wrong most of the time. If you find yourself being pedantic, take a few steps back and double check that you're not just wrong.

[0] https://news.ycombinator.com/item?id=44011320

[1] https://platform.openai.com/docs/guides/function-calling


> If you believe this then you don't understand how tool use is implemented. It's literally accomplished by injecting a tool's response into the context [1].

I was doing tool use before ChatGPT released an official API for function calls. You literally give ChatGPT API specs and ask it to generate call parameters.

The API spec is fed into the LLM as context; the response is part of the output. Whether you pass that output through another layer of LLM is trivial. And even if you do, the "context" in that case would be only the response, not the whole database. You are confusing even yourself: by accepting the overloading of the word "context" (pushed by a company for commercial purposes), you are now unable to distinguish between LLM context in terms of tokens, an external data source, and a response fetched by the tool.
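The pre-function-calling approach described above can be sketched roughly like this; the `llm()` stub and the spec format are illustrative assumptions, but the shape is the same: put the API spec in the prompt, ask for JSON, parse the generated call parameters yourself:

```python
import json

# Hypothetical API spec embedded directly in the prompt text.
API_SPEC = 'get_weather(city: string) -> {"temp_c": number}'

def llm(prompt: str) -> str:
    # Stand-in for a completion call; a real model would generate this JSON
    # after reading the spec and the user question in the prompt.
    return '{"function": "get_weather", "arguments": {"city": "Paris"}}'

prompt = (
    "You can call this API:\n" + API_SPEC + "\n"
    "User: What's the weather in Paris?\n"
    "Respond with a JSON object naming the function and its arguments."
)

# Parse the call parameters the model generated as plain text.
call = json.loads(llm(prompt))
```

No special API surface is needed for this; structured function calling just formalizes the same spec-in, parameters-out contract.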

It's not that I am ignorant of what Anthropic claims "context" means; I'm contesting it. If Microsoft releases a new product and claims that "intelligence" means the parameters of their Microsoft product, then it pays to be a bit cynical instead of parroting whatever they say like some unpaid adman.


I couldn't care less about Anthropic or MCP—as I noted, I'm a critic of MCP—but pedants bug me quite a bit, especially when they're wrong.

> The API is fed into the LLM as context, the response is part of the output.

So you implement tool use by feeding an API into the LLM as context in order to get it to produce call parameters. Got it.

> Whether you pass that output through another layer of LLM is trivial. And even if you do, the "context" in that case would be only the response

So the output of the tool when called with those parameters can be fed back into the LLM as further context. Got it.

Given the above, it seems that we agree that tool use is implemented entirely by giving selected bits of context to the model.

With that in mind, if one were to design a protocol that makes tool use plug-and-play instead of something that has to be coded by hand for each tool—a protocol designed to allow a model to discover tool APIs that it might want to bring into context and then use those APIs to bring their outputs into context—it would be reasonable to call said protocol the Model Context Protocol, because it's all about getting specific bits of Context into a Model.
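A toy sketch of what "plug-and-play" means here (names and registry shape are my own invention, not the actual MCP wire format): tools advertise a name and schema, the client lists them so the model can discover what exists, and calls are dispatched generically with no per-tool glue code:

```python
# Toy tool registry: tools self-describe, the client discovers and dispatches.
REGISTRY = {}

def tool(name, schema):
    """Decorator that registers a function as a discoverable tool."""
    def register(fn):
        REGISTRY[name] = {"schema": schema, "fn": fn}
        return fn
    return register

@tool("get_time", {"args": {}})
def get_time():
    # Hypothetical tool; a real one would read a clock or call a service.
    return "12:00"

def discover():
    # What gets placed into the model's context so it knows which tools exist.
    return {name: entry["schema"] for name, entry in REGISTRY.items()}

def dispatch(name, **kwargs):
    # Generic dispatch: the return value is fed back into the model's context.
    return REGISTRY[name]["fn"](**kwargs)
```

Both halves of the protocol — the discovery listing and the dispatched result — end up as context given to the model, which is the naming argument above.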

I'm not sure why the word "context" is the hill you decided to die on here when there is so much else to pick on with MCP, but it's time to get off the hill.


That something counts as context if you feed it as input to the LLM, and that output becomes input, is true of everything in an LLM, so that definition of MCP doesn't really convey any meaning. MCP is an API layer between LLMs and SaaS applications, designed to provide vendor neutrality for the LLMs. It has nothing to do with the context window, which is a specific variable measured in kTokens.

It pays to be precise when speaking and studying, and it pays to develop a precise language around nascent technologies when we communicate about them.

This reminds me of when I was studying chemistry and thought they were pedantic about the way they used the word "salt". Or when I studied chess and called every bishop-and-knight attack on the f7 pawn the Fried Liver, instead of the specific sequence of moves that we call the Fried Liver. Or when I thought the arm/forearm distinction in medicine was pedantic.

Science demands precision in communication. Feel free to steal a well-defined term and use it to mean something else that already has a different sign to denote it, but I'm not playing.



