
Probably my judgement is a bit fogged, but if I get asked about building AI into our apps just one more time, I am absolutely going to drop my job and switch careers.





That's likely because OG devs have been seeing the hallucination stuff, unpredictability, etc., and questioning how that fits with their carefully curated, perfect system.

What blocked me initially was watching NDA'd demos a year or two back from a couple of big software vendors on how agents were going to transform the enterprise. What they were showing was a complete non-starter to anyone who had worked in a corporate environment, because of security, compliance, HR, silos, etc., so I dismissed it.

This MCP stuff solves that: it gives you (the enterprise) control in your own walled garden while still getting the gains from LLMs, voice, etc. The sum of the parts is massive.

It more likely wraps existing apps than integrates directly with them, with the legacy systems becoming data or function providers (I know you've heard that before, but so far this feels different when you work with it).
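To make the "legacy app as function provider" idea concrete, here is a minimal sketch in plain Python. Everything here is hypothetical (`LegacyHRSystem`, `get_employee`, the tiny decorator registry); a real setup would use the actual MCP SDK and the vendor's real API, but the shape is the same: the model only ever sees a small set of named tools that the enterprise chooses to expose.

```python
import json

class LegacyHRSystem:
    """Stand-in for an existing enterprise app we wrap rather than replace."""
    _records = {"E100": {"name": "A. Example", "department": "Finance"}}

    def lookup_employee(self, employee_id: str):
        return self._records.get(employee_id)

# Tool registry: the only surface the LLM is allowed to touch.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool (decorator)."""
    TOOLS[fn.__name__] = fn
    return fn

hr = LegacyHRSystem()

@tool
def get_employee(employee_id: str) -> str:
    """Expose one narrow, auditable read path into the legacy system."""
    record = hr.lookup_employee(employee_id)
    return json.dumps(record if record else {"error": "not found"})

# The model never queries the system directly; it requests a named tool call,
# and the enterprise decides which tools exist and what they may return.
print(TOOLS["get_employee"]("E100"))
```

The walled-garden property comes from the registry: anything not decorated with `@tool` simply does not exist from the model's point of view.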


There are two kinds of use cases that software automates: 1) those that require accuracy, and 2) those that don't (social media, ads, recommendations).

Further, there are two kinds of users that consume the output of software: a) humans, and b) machines.

Where LLMs shine is in the 2a use cases, i.e., use cases where accuracy does not matter and humans are the end users. There are plenty of these.

The problem is that LLMs are being applied to 1a and 1b use cases, where there is going to be a lot of frustration.



How does MCP solve any of the problems you mentioned? The LLM still has to access your data, still doesn't know the difference between instructions and data, and still gives you hallucinated nonsense back – unless there's some truly magical component to this protocol that I'm missing.

The information returned by the MCP server is what makes it not hallucinate. That's one of the primary use cases.
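The grounding claim above can be illustrated with a hypothetical sketch: instead of letting the model answer from its own memory, the tool result is placed in the prompt and the model is instructed to answer only from it. The function name `build_grounded_prompt` and the example invoice data are made up for illustration; this is the general retrieval-grounding pattern, not a specific MCP API.

```python
def build_grounded_prompt(question: str, tool_result: str) -> str:
    """Assemble a prompt that constrains the model to the retrieved data."""
    return (
        "Answer using ONLY the data below. If the data does not contain "
        "the answer, say so.\n\n"
        f"DATA (from MCP server): {tool_result}\n\n"
        f"QUESTION: {question}"
    )

prompt = build_grounded_prompt(
    "What is invoice INV-42's status?",
    '{"invoice": "INV-42", "status": "paid"}',  # hypothetical tool output
)
print(prompt)
```

This doesn't make hallucination impossible, but it shifts the model's job from recalling facts to restating retrieved ones, which is where the reduction comes from.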

> That's likely because OG devs have been seeing the hallucination stuff, unpredictability, etc., and questioning how that fits with their carefully curated, perfect system

That is the odd part. I am far from being part of that group of people: I'm only 25, and I joined the industry in 2018 as part of a training program at a large enterprise.

The odd part is, many of the promises feel like déjà vu even for me. "Agents are going to transform the enterprise" and similar promises do not seem that far off from the ones made during the low-code hype cycle.

Cynically, the more I look at AI projects as an outsider, the more I think AI could fail in enterprises largely for the same reason low code did: organizations are made of people, and people are messy; as a result, the data is often equally messy.


Rule of thumb: the companies building the models are not selling hype, or at least their hype is mostly justified. Everyone else, treat with extreme skepticism.




