
To really understand MCP you need to think about application design in a different way.

In traditional applications, you know at design-time which functionality will end up in the final product. For example, you might bundle AI tools into the application (e.g. by providing JSON schemas manually). Once you finish coding, you ship the application. Design-time is where most developers operate, and it's not where MCP excels. Yes, you can add tools via MCP servers at design-time, but you can also include them manually through JSON schemas and code (giving you more control, because you're not restricted by the abstractions that MCP imposes).

MCP-native applications, on the other hand, can be shipped, and then the users can add tools to the application — at runtime. In other words, at design-time you don't know which tools your users will add (similar to how browser developers don't know which websites users will visit at runtime). This concept — combined with the fact that AI generalizes so well — makes designing this kind of application extremely fascinating, because you're constantly thinking about how users might end up enhancing your application as it runs.
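To make the contrast concrete, here is a minimal sketch (the get_weather tool is made up; the tools/list method and tool shape follow the MCP spec): at design-time you hardcode the schema yourself, while an MCP-native client discovers whatever schemas the user's servers announce at runtime.

    // Design-time: the developer bundles the tool schema into the app.
    const bundledTools = [{
      name: "get_weather",                     // hypothetical tool
      description: "Get the current weather for a city",
      inputSchema: {
        type: "object",
        properties: { city: { type: "string" } },
        required: ["city"],
      },
    }];

    // Runtime (MCP-native): the app asks whatever server the *user*
    // connected for its tools; the developer never saw these schemas.
    const listRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };
    // The response carries tool definitions of the same shape as above,
    // which the client then hands to the LLM alongside the prompt.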

As of today, the vast majority of developers aren't building applications of this kind, which is why there's confusion.




I think this is a good explanation of the client side of MCP. But most developers are not building MCP clients (I think?). Only a few companies like OpenAI, Anthropic, Cursor and Goose are building MCP clients.

Most developers are currently building MCP servers that wrap a 3rd-party service or their own. And in that case, they are still deciding on the tools at design-time, not runtime.
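For what it's worth, this is roughly what that looks like on the server side. A minimal sketch, assuming the official TypeScript SDK's McpServer API, with a made-up search_tickets tool:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "my-service-wrapper", version: "0.1.0" });

    // The tool set is decided here, by the developer, before shipping.
    server.tool(
      "search_tickets",                        // hypothetical tool wrapping your service
      { query: z.string() },
      async ({ query }) => ({
        content: [{ type: "text", text: `Results for "${query}": ...` }],
      })
    );

    await server.connect(new StdioServerTransport());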

Also, I want to mention that both Cursor and Claude Desktop don't support dynamically toggling tools on/off within an MCP server, which means users can't really pick which tools to expose to the AI. The current implementations expose all tools within an MCP server.


The concept of design-time vs. runtime applies to both clients and servers.

I believe you're implying that server developers can focus less on this concept (or sometimes even ignore it) when building a server. This is true.

However, the fact that end-users can now run MCP servers directly — rather than having to wait for developers to bundle them into applications — is a significant paradigm shift that directly benefits MCP server authors.


I see what you mean. It is a paradigm shift indeed if you look at it from the user's perspective.


This is a good characterisation of the functionality MCP might enable. Thanks.

In your opinion, what percentage of apps might benefit from this model where end users bring their own MCP tools to extend the capabilities of your app? What are some good examples of this - e.g., development tools like Cursor and Windsurf likely apply, but are there others, preferably ones with end users?

How is the user incentivized to upskill towards finding the right tool to "bring in", installing it, and then using it to solve their problem?

How do we think about the implications of bring-your-own-tools, knowing that unlike plugin-based systems (e.g., Chrome extensions), MCP servers can be unconstrained in behaviour - all running within your app?


> In your opinion, what percentage of apps might benefit from this model where end users bring their own MCP tools to extend the capabilities of your app?

Long term, close to 100%. Basically all long-running, user-facing applications. I'm looking through my dock right now and I can imagine using AI tools in almost all of them. The email client could access Slack and Google Drive before drafting a reply, Linear could access Git, email and Slack in an intelligent manner, and so on. For Spotify I'm struggling right now, but I'm sure there'll soon be some kind of Shazam MCP server you can hum some tunes into.

> How is the user incentivized to upskill towards finding the right tool to "bring in", installing it, and then using it to solve their problem?

This will be done automatically. There will be registries that LLMs will be able to look through. You just ask the LLM nicely to add a tool; it then looks one up and asks you for confirmation. Running servers locally is an issue right now because local deployment is non-trivial, but this could be solved via something like WASM.

> How do we think about the implications of bring-your-own-tools, knowing that unlike plugin-based systems (e.g., Chrome extensions), MCP servers can be unconstrained in behaviour - all running within your app?

There are actually 3 different security issues here.

#1 is related to the code the MCP server is running, i.e. the tools themselves. When running MCP servers remotely this obviously won't be an issue; when running locally, I hope WASM can solve it.

#2 is that MCP servers might be able to extract sensitive information via tool call arguments. Client applications should thus ask for confirmation for every tool call (see the sketch after this list). This is the hardest to solve because in practice, people won't bother checking.

#3 is that client applications might be able to extract sensitive information from local servers via tool results (or resources). Since users have to set up local servers themselves right now, this isn't a huge issue yet. Once LLMs set them up, they will need to ask for confirmation.
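To illustrate #2, here is a minimal sketch of a confirmation gate a client could put in front of every tool call. The readline wiring and the confirmToolCall name are illustrative, not part of MCP:

    import * as readline from "node:readline/promises";

    // Show the user exactly what would leave the machine: the tool name
    // and the full arguments the model chose.
    async function confirmToolCall(name: string, args: unknown): Promise<boolean> {
      const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
      const answer = await rl.question(
        `Allow tool call "${name}" with arguments ${JSON.stringify(args)}? [y/N] `
      );
      rl.close();
      return answer.trim().toLowerCase() === "y";
    }

    // In the client's dispatch loop (sketch):
    // if (await confirmToolCall(call.name, call.arguments)) { /* forward tools/call */ }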


> local deployment is non-trivial, but this could be solved via something like WASM.

This is why I started working on hyper-mcp, which uses WASM for plugin development & an OCI registry for hosting. You can write a Dockerfile for plugin packaging.

You can develop plugins in any language you want, as long as it supports WASM.

https://github.com/tuananh/hyper-mcp


I can’t express how much I agree with your perspective. It’s a complete shift in how we might deliver functionality and… composability to users.

Well said.


Oh, it’s the new HATEOAS? A pluggable framework for automatic discoverability of HTTP APIs is incredibly useful, and not just for AI :)


Unfortunately, MCP is not HATEOAS. It doesn't need to be, because it's not web-like. I wish it were.

HATEOAS is great for web-like structures because each response includes not only the content but also all actions the client can take next (usually via links). This is critical for architectures without built-in structure — unlike Gopher, which has menus, or FTP and Telnet, which have stateful connections — because otherwise a client arriving at some random place has no indication of what to do next. MCP tackles this by providing a stateful connection (similar to FTP) and is now moving toward static entry points similar to Gopher menus.
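For anyone who hasn't seen it, a HATEOAS response embeds the available next actions in the representation itself. An illustrative sketch with a made-up resource:

    // The client needs no out-of-band API knowledge: the content and the
    // follow-up actions travel together. Resource and links are made up.
    const response = {
      orderId: 42,
      status: "processing",
      _links: {
        self:   { href: "/orders/42" },
        cancel: { href: "/orders/42/cancel", method: "POST" },
        items:  { href: "/orders/42/items" },
      },
    };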

I specifically wrote about why pure HATEOAS should come back instead of MCP: https://www.ondr.sh/blog/ai-web


No, you can't understand it until you understand that the world isn't all webshit and not everything is best used via REST.

(Not even webshit is best used by REST, as evidenced by approximately every "REST" API out there, designed as RPC over HTTP pretending it's not.)


Nevertheless, MCP is a “webshit” protocol (even in stdio mode), so if web protocols are unsuitable for your problem, MCP would be as well.


Isn't this just the same paradigm as plugins?


Similar, but one level higher.

Plugins have pre-defined APIs. You code your application against the plugin API and plugin developers do the same. Functionality is consumed directly through this API — this is level 1.

MCP is a meta-protocol. Think of it as an API that lets arbitrary plugins announce their APIs to the application at runtime. MCP thus lives one level above the plugin's API level. MCP is just used to exchange information about the level 1 API so that the LLM can then call the plugin's level 1 API at runtime.

This only works because LLMs can understand and interpret arbitrary APIs. Traditionally, developers needed to understand an API at design-time, but now LLMs can understand an API at runtime. And because this can now happen at runtime, users (instead of developers) can add arbitrary functionality to applications.
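A sketch of the two levels on the wire (the message shapes follow the MCP spec; the create_issue tool is made up):

    // Level 2 (meta): the server announces its level-1 API at runtime
    // via a tools/list result.
    const listResult = {
      tools: [{
        name: "create_issue",                  // hypothetical plugin capability
        description: "Create an issue in the tracker",
        inputSchema: {
          type: "object",
          properties: { title: { type: "string" }, body: { type: "string" } },
          required: ["title"],
        },
      }],
    };

    // Level 1: the LLM, having read that schema at runtime, calls the
    // plugin's own API through tools/call.
    const callRequest = {
      jsonrpc: "2.0",
      id: 2,
      method: "tools/call",
      params: { name: "create_issue", arguments: { title: "Fix login bug" } },
    };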

I hate plugging my own blog again but I wrote about that exact thing before, maybe it helps you: https://www.ondr.sh/blog/thoughts-on-mcp


> And because this can now happen at runtime, users (instead of developers) can add arbitrary functionality to applications.

I don't understand what you mean by this. Currently, without MCP, a server has an API that's documented, and to interact with it (thus providing "arbitrary functionality") you call those APIs from your own application code (e.g. a Python script).

With MCP, an LLM connected to your application code calls an API that's documented via MCP to provide "arbitrary functionality".

How are these different, and how does MCP allow me to do anything I couldn't before with API access and documentation? In both cases the application code needs to be modified to account for the new functionality, unless you're also using the LLM to handle the logic, which will have very unpredictable results.


>In both cases the application code needs to be modified to account for the new functionality, unless you're also using the LLM to handle the logic, which will have very unpredictable results.

In the case of MCP, no application code is modified. You first ship the application and then functionality is added. Using plain APIs, it's the other way around. That's the difference.


I don't understand this at all.

If my application performs some function dependent on data from an API (e.g. showing tax information, letting a user input tax information, and performing tax calculations and autocomplete), how do I extend that UI more easily with MCP than with an HTTP REST API?

Even with MCP I need to update my application code to add UI elements (inputs, outputs) for a user to interact with this new functionality, no?


No, MCP does not include any concept of UI (yet). Tool results are usually text only, although there is also the abstraction of an image (which clients can display however they decide to, e.g. inline).
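For reference, a tool result on the wire is just a list of content blocks which the client decides how to render. The shape follows the MCP spec; the payload is made up:

    const callResult = {
      content: [
        { type: "text", text: "Here is the chart you asked for." },
        // Image content is base64 data plus a MIME type; the client
        // chooses whether and how to display it (e.g. inline).
        { type: "image", data: "<base64-encoded PNG>", mimeType: "image/png" },
      ],
    };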


So no application code needs to be changed because no application code exists.

Isn't that like saying you don't need to modify application code with a REST API if your "application" is just a list of instructions on how to use wget/bash to accomplish the task?


This sounds like a security nightmare.


As it currently stands, MCP is absolutely a security nightmare. Combine this with a general lack of appreciation for security culture amongst developers, and the emerging vibe coding paradigm where non-security-minded people automatically generate and fail to properly audit production-facing code, and it's a disaster waiting to happen.

Feels like we've slid back into the 90s in this regard. Great time to be a security researcher!


> Feels like we've slid back into the 90s in this regard.

Thank $deity. The '90s and early 2000s were times when software was designed to do useful work and empower users, as opposed to locking them into services and collecting telemetry, both of which are protected by the best advancements in security :).

I'm only half-joking here. Security is always working against usefulness; MCP is designed to be useful first (like honest to $deity useful, not "exploit your customers" useful), so it looks like a security nightmare. Some of that utility will need to go away, because a complete lack of security is also bad for the users - but there's a tradeoff to be made, hopefully one that doesn't just go by the modern security zeitgeist, because that is already deep into protecting profits by securing services against users.

> a general lack of appreciation for security culture amongst developers, and the emerging vibe coding paradigm where non-security-minded people automatically generate and fail to properly audit production-facing code

There is also, in the security culture, a general lack of consideration of who is being protected from whom, and why. MCP, vibe coding, and LLMs in general are briefly giving end-users back some agency, bringing back the whole idea of "bicycle for the mind" that was completely and intentionally destroyed when computing went mainstream. Let's not kill it so eagerly this time.


A non-exhaustive list of concerns:

- How does a consumer of a remote MCP server trust that it is not saving/modifying their data, or that it is doing something other than what it said it would?

- How does a consumer of a local MCP server trust that it won't wreck their machine or delete data?

- How do servers authorize and authenticate end users? How do we create servers which give different permissions to different users?

These are examples of things which must be done right, and sacrificing user security in order to achieve market dominance is ethically bankrupt. Pedestrians don't know exactly which regulations serve them when a bridge is built, so we don't expect pedestrians to be able to stop corruption and laziness in civil engineering. The same should be true for mass infrastructure; we have a duty as engineers to make the right call.

> MCP, vibe coding, and LLMs in general are briefly giving end-users back some agency, bringing back the whole idea of "bicycle for the mind"

I love what software might look like in 15 years. I don't plan to kill that. I want to protect it, and also protect everyone involved.



It’s pretty astounding to me that this aspect of MCP is not mentioned more. You’re putting a LOT of trust in both the model and the system prompt when you start attaching MCPs that provide unfettered access to your file system, or connect up to your REST API’s POST endpoints.

(That being said, I have to admit I’ve been writing my own powerful but extremely dangerous tools as an experiment (e.g. running arbitrary Python code on my machine, unsandboxed), and the results have been incredibly compelling.)


I tend to agree with this.

No, MCP's have NOT Won (Yet) https://newsletter.victordibia.com/p/no-mcps-have-not-won-ye...


agreed. this sounds useless at the moment unless you’re sandboxing it in a throw-away VM lol. Scary!


I really enjoyed both your blog posts. You've clearly thought about this a lot and explained things well. I'd love to subscribe to be updated on your next post (even if it's not for months/years). Any chance you could add an RSS feed to your blog?


Thanks. Added RSS, but the W3C validator shows some errors. I'll move to plain markdown when I have more time; then this will be easier.


the blog is hosted on Substack, which supports feeds.

https://newsletter.victordibia.com/feed


You might be able to say the user could "plug in" the new functionality. Or it allows them to "install" a new "application"?


So MCP is to an application what a WebDriver interface is to a web browser?



