MCP is great. But what I'd like to understand is: what's the difference between MCP and manually prompting the model with a list of tools and descriptions, then calling the specific function based on the LLM's response?
1. it makes tool discovery and use happen elsewhere so programs become more portable
2. it standardizes the method so every LLM doesn't need to do it differently
3. it creates a space for further shared development beyond tool use and discovery
4. it begins to open up hosted tool usage across LLMs for publicly hosted tools
5. for better or worse, it continues to drive the opinion that 'everything is a tool' so that even more functionality like memory and web searching can be developed across different LLMs
6. it offers a standard way to set up persistent connections to things like databases instead of handling them ad-hoc inside of each LLM or library
If you are looking for anything more, you won't find it. This just standardizes the existing tool use / function calling concept while adding minimal overhead. People shouldn't be booing this so much, but nor should they be dramatically cheering it.
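To make the comparison concrete, here is roughly what the manual version looks like. This is just a sketch; every name in it (the Tool type, the tools map, the dispatch function, the JSON reply shape) is invented for illustration and isn't from any SDK. MCP essentially standardizes steps like these behind a shared protocol:

```
// Sketch of ad-hoc tool calling, i.e. the thing MCP standardizes.
// Every name here (Tool, tools, dispatch, the JSON reply shape) is
// invented for illustration; none of it comes from an SDK.

type Tool = {
  description: string;
  run: (args: Record<string, string>) => Promise<string>;
};

// 1. You hand-maintain the tool list...
const tools: Record<string, Tool> = {
  get_weather: {
    description: "Get the current weather for a city. Args: { city }",
    run: async (args) => `Sunny in ${args.city}`, // stubbed implementation
  },
};

// 2. ...you hand-roll the prompt text that advertises it...
const toolPrompt = Object.entries(tools)
  .map(([name, t]) => `- ${name}: ${t.description}`)
  .join("\n");

// 3. ...and you hand-parse the model's reply and dispatch on it,
// assuming you told the model to answer {"tool": "...", "args": {...}}.
async function dispatch(modelReply: string): Promise<string> {
  const call = JSON.parse(modelReply) as {
    tool: string;
    args: Record<string, string>;
  };
  const tool = tools[call.tool];
  if (!tool) throw new Error(`Unknown tool: ${call.tool}`);
  return tool.run(call.args);
}
```

All three steps are bespoke per app and per model vendor today; the protocol's contribution is turning them into a shared contract, not adding capability.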
You can build all the tools yourself, or you can just go to a "tools store", install it and use it. MCP is just the standard everyone can use to build, share and use these tools.
Just like an app store or a Chrome extension store, we can have an LLM tools store.
I totally agree, but what I hate about React is that in a big app the route definitions become a nightmare. I would love to see a native file-based routing system inside React.
> what I hate about React is that in a big app the route definitions become a nightmare
I'm curious how you define your routes, because I've got several big apps with lots of routes, nested routes, and complex access permissions for each route, and it never crossed my mind that there was anything complicated about it.
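For reference, this is roughly how I'd sketch nested routes with per-route access checks, assuming React Router v6.4+'s createBrowserRouter; the page components and the requireRole loader are placeholders I made up:

```
import React from "react";
import { createBrowserRouter, Outlet, redirect } from "react-router-dom";

// Placeholder pages (plain createElement so this stays JSX-free).
const Layout = () => React.createElement(Outlet);
const Dashboard = () => React.createElement("h1", null, "Dashboard");
const Settings = () => React.createElement("h1", null, "Settings");

// Hypothetical permission check, standing in for real auth logic.
const requireRole = (role: string) => async () => {
  const userHasRole = true; // look up the current user here
  return userHasRole ? null : redirect("/login");
};

// Nested routes: children render inside Layout's <Outlet>, and each
// route carries its own access check via a loader.
export const router = createBrowserRouter([
  {
    path: "/",
    Component: Layout,
    children: [
      { path: "dashboard", Component: Dashboard, loader: requireRole("user") },
      { path: "settings", Component: Settings, loader: requireRole("admin") },
    ],
  },
]);
```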
lol, this is exactly what I'm trying to understand; you hear this phrase sometimes in YouTube videos. I mean, it doesn't make sense unless you're selling a t-shirt for your community and posting a pic of the product to see their reaction, but in the tech industry it seems impossible.
If it's a child sitting on her father's lap, and the father is affiliated with Hezbollah (and as such carrying a pager), this sort of thing could happen, I think.
Apparently a message was sent to the pagers right before they exploded. In a couple of videos I saw, the victim looked down at their hip and angled the pager to see the screen. It makes sense as a trick to ensure the target is close by when it goes off, but a kid could just as easily pick one up off a table after hearing it buzz.
I don't get it; you can already get the user's time zone with JavaScript:
```
Intl.DateTimeFormat().resolvedOptions().timeZone
// e.g. "America/New_York"
```
and a good library like dayjs can handle time zones.
It's quite annoying to deal with different library dependencies that all picked a different userspace library for something that should be standardized. For example, you might ship a date picker component using dayjs, but I already adopted Luxon in my application, so now I need to write some janky adapter or end up using both.
I would love to not have to import dayjs or another userspace library, so that I can pass datetime-with-timezone to other libraries and instead use an IETF standard way of handling time that's backed by performant native runtime code.
In my experience, userspace timestring parsing and timezone conversion can also be a surprising CPU bottleneck. At Notion, we got a 15% speedup on datetime queries by patching various inefficiencies in the library we selected. The runtime can be more efficient as well as saving page weight on tzdata and locale stuff.
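For anyone wondering, the standard being referred to here is presumably the Temporal proposal, whose ZonedDateTime round-trips through the IETF RFC 9557 string format. A minimal sketch, assuming a runtime or polyfill where Temporal is available:

```
// Assumes a runtime (or polyfill) where the Temporal proposal is available.

// The user's current zoned date-time, with no userspace library.
const now = Temporal.Now.zonedDateTimeISO();

// Serializes to the IETF RFC 9557 format, e.g.
// "2024-05-01T12:00:00-04:00[America/New_York]", a string any other
// library or service can round-trip without agreeing on dayjs vs Luxon.
const wire = now.toString();

// Parse it back and convert time zones natively.
const parsed = Temporal.ZonedDateTime.from(wire);
console.log(parsed.withTimeZone("Asia/Tokyo").toString());
```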
I really like that you are using NestJS. idk why some devs hate it; IMHO it's the best Node framework for building production-ready apps. I started using it a month ago at work, it was my first time using it, and it has already made me so much more productive.
I'm literally in the middle of spending my evening, outside of work, gutting NestJS from a project I've inherited at work. I would literally consider changing jobs if I couldn't remove it.
There is so much to unpack as to why I have such an issue with it. But time and again I have been frustrated with its design philosophy, implementation, scope of what it covers, bloat, recommended implementation approaches, etc.
I don't understand how a single framework can think it should cover message/request handling, logging, config management, dependency inversion, persistence, and IO. These things have almost no crossover (i.e. if they are well designed, they should be easily composable with any other component), but time and again framework developers attempt to bundle them into a "one size fits all" solution.
To sum it up: I think any package I use should be secondary to my application, but this package makes it so that my application is secondary to the framework.
I recently migrated my API from Lambda functions to a dockerized Node API, and I evaluated NestJS, though I ended up using Fastify. Like others have mentioned, it's great for devs who come from Angular or Java, but I didn't like that it used decorators all over the place, and I preferred something more "Express-like".
This is precisely my experience. Classes are painful to deal with. Decorators are not only unergonomic, they also throw away any type safety. Nest also shoves class-transformer and class-validator down your throat, which are likewise a pain in the ass.
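For readers who haven't seen it, this is roughly the style in question: a stripped-down controller in the shape NestJS's docs recommend, with routing, parameter extraction, and dependency injection all driven by decorators:

```
import { Controller, Get, Injectable, Param } from "@nestjs/common";

@Injectable()
export class CatsService {
  findOne(id: string) {
    return { id, name: "Whiskers" };
  }
}

// Routing, parameter extraction, and DI all hang off decorators rather
// than plain functions; add class-validator and the DTOs grow decorators too.
@Controller("cats")
export class CatsController {
  constructor(private readonly catsService: CatsService) {}

  @Get(":id")
  findOne(@Param("id") id: string) {
    return this.catsService.findOne(id);
  }
}
```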
Yes I noticed v5! I love it so much. The great thing about itty is you can integrate anything really easily.
I'm in the process of making a simple middleware based on zod to parse not only the request body but also params, headers, etc. Zod is really powerful; you could even use it to parse JWT tokens and get complete type inference.
Perhaps my only issue with this approach is that you rely on a wrapper function to correctly pass the generics from the middleware to the main handler.
Another possible approach is using types-per-route, but then it's hard to enforce that the validator agrees with the handler itself.
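A rough sketch of the wrapper-function problem, assuming itty-router v5 and zod; withValidated and the validated property on the request are names I made up:

```
import { Router, IRequest } from "itty-router";
import { z } from "zod";

// Hypothetical middleware: validate the JSON body against a zod schema
// and stash the typed result on the request.
const withValidated =
  <T extends z.ZodTypeAny>(schema: T) =>
  async (request: IRequest) => {
    const result = schema.safeParse(await request.json());
    if (!result.success) {
      // Returning a Response short-circuits the itty-router chain.
      return new Response(JSON.stringify(result.error.issues), { status: 400 });
    }
    request.validated = result.data; // returning nothing continues the chain
  };

const CreateUser = z.object({ name: z.string(), email: z.string().email() });

const router = Router();
router.post("/users", withValidated(CreateUser), (request: IRequest) => {
  // The type produced by withValidated doesn't flow here on its own;
  // without a wrapper that threads the generic through, a cast is needed.
  const user = request.validated as z.infer<typeof CreateUser>;
  return Response.json({ created: user.name });
});
```

Note how the final handler can't automatically see the type produced by withValidated; that cast is exactly the generics-plumbing problem mentioned above.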
Same experience here. Admittedly it's been a few years since I last used it, but there was so much boilerplate coupled with a layer of "magic" that was too thick for my liking.
Provider initialization (dependency injection) failed on me on a few occasions and it always wasted hours of productivity. It would break in some obscure way that wouldn't log any errors to the console, so there was nothing to go on besides attaching a debugger and stepping through layers of framework code. It was quite infuriating because it always happened when I was in the middle of something else.
If your specific use case wasn't covered by their docs (which were very barebones and "hello-world" oriented at the time), it was painful to figure out and use.
nestjs is nice if you’re coming from Angular. It’s basically Angular for the backend.
But like Angular, there is a very wide range of use cases where it is totally overkill and like Angular, companies are throwing it at each and every project.
I don’t find it bad, but it’s in a strange spot: more bloated than other JS frameworks while still being way less "batteries included" than more classical corporate frameworks.
Like Angular, I don’t hate it; it’s just that I still haven’t figured out a project where it’s better suited than something else.
NestJS is Node.js for Java people. It's like Angular in that sense.
So some people will feel like it's overengineered.
I mean, it is overengineered. Why do I have to register all these things, and why does it keep crashing without any understandable error message if I register something the wrong way? It has a little bit of an OCD relationship with dependency injection, when the normal import system can handle most of those cases.
But there are a few nice things: resolvers, auto-generated Swagger docs. And TypeORM is lovely.
But yeah, it's a bit too demanding. I'm okay with an opinionated framework if it gives a lot of features out of the box (like Laravel or Next.js), but NestJS tells me how to do things without giving me enough in return (auth, sockets, etc. are still quite a lot of work).
Yeah, I don't know; most of the time, the only time you actually need dependency injection is for tests, and at that point why not just mock with Jest? It feels like too much work to complicate your dependencies and make them harder to move around just for tests, when a much simpler solution is available.
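In other words, something like this, where ./storage and ./upload are hypothetical modules and the import graph itself is the seam:

```
// Assumed layout: ./storage exports save(), and ./upload exports
// handleUpload(), which calls save() internally. Both are hypothetical.
import { save } from "./storage";
import { handleUpload } from "./upload";

// Jest swaps the real module for an auto-mock: no container, no tokens,
// no @Injectable().
jest.mock("./storage");

test("uploads are persisted", async () => {
  await handleUpload("avatar.png", "bytes");
  expect(jest.mocked(save)).toHaveBeenCalledWith("avatar.png", "bytes");
});
```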
Just wanted to chime in and say that if you use the CLI to generate things, the experience is much nicer. However, you do still need to stay in their playground. If you have a large team or org and don't want to have to document extra about the guts, I love having something proven. If you need to mix protocols, I believe FeathersJS was a little simpler to get into, last I looked.
Dependency injection was not really meant to help with testing, but to keep code decoupled. It can be a nice pattern, but even if NestJS forces us to use it, we developers still find creative ways to nullify any attempt to decouple the code we write :).
It wasn't meant to, but realistically it feels useful only in those cases.
To me, in most cases where it is used, it seems to just overengineer and obfuscate things unnecessarily, when much simpler code would be easier to understand.
It shouldn't be a thing that is done by default, only when it really makes sense.
Eventually, with all those injections, you are going to end up in a situation where making any change becomes really complex if there are use cases you didn't foresee.
In 90%+ of cases you don't need interfaces or DI; you should just be able to follow the logic with your IDE. It makes no sense to obfuscate that.
If the impl truly must vary for whatever case, then sure, you can use it.
But I would also say: don't write interfaces and implementations before you actually need to switch them out dynamically depending on the context (and not just for testing).
If you have something that does Storage, and the Storage drivers could differ (FileSystem, GoogleDrive, whatever), then sure, use an interface (sketched below), but not otherwise.
It's like DRY: unless you actually use something 2-3 times, don't unnecessarily turn everything into a reusable function.
If you don't foresee having multiple storage methods in the near future, just use a concrete class for storage; anyone can jump to it to see how it does file storage or similar.
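Something like the following, with all names illustrative; the interface earns its keep only because the driver genuinely varies at runtime:

```
// Worth an interface: the implementation genuinely varies at runtime.
interface StorageDriver {
  put(path: string, data: Uint8Array): Promise<void>;
  get(path: string): Promise<Uint8Array>;
}

class FileSystemStorage implements StorageDriver {
  async put(path: string, data: Uint8Array) {
    // write to local disk...
  }
  async get(path: string) {
    // read from local disk...
    return new Uint8Array();
  }
}

class GoogleDriveStorage implements StorageDriver {
  async put(path: string, data: Uint8Array) {
    // call the Drive API...
  }
  async get(path: string) {
    // call the Drive API...
    return new Uint8Array();
  }
}

// Chosen once, by configuration, not an interface for its own sake.
export const storage: StorageDriver =
  process.env.STORAGE_DRIVER === "gdrive"
    ? new GoogleDriveStorage()
    : new FileSystemStorage();
```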
Yeah, agreed. There was a time when everyone only wanted to code in patterns. I do think they're nice, but most of us hardly understood the original reasons they were created.
I use nestjs in my open source no-code database https://github.com/teableio/teable, and I really like it, especially the dependency injection capability.
I wouldn't really recommend TS anymore. I would just go with a compiled language that actually has runtime type safety and a good ecosystem and devx. Although it looks like Deno/Bun will improve things. After working with it for 5 years, I just don't want to deal with the TypeScript compiler and ecosystem anymore; it's more of a headache than it's worth when Java (or Kotlin), Go, and (because HN) Rust are great.
I remember using this language in my AI class. I thought it was an old, deprecated language, as people now mainly use Python. Really cool to see it again.
I forget who said it, but a good programming language should have two features:
1. Make simple problems easy.
2. Make difficult problems possible.
Python is an outstanding language on both scores.
Prolog, alas, is much better at #2 than at #1. For example, Prolog is a great choice for writing domain-specific languages, and modern implementations (like Scryer Prolog, mentioned in a comment above) can generate very efficient code for them.
Prolog's "killer app" is how well it does on a third feature:
3. It makes virtually impossible tasks "merely" very difficult ;-)
And it has a steep learning curve; it took me years to really "get" it, and I only persevered because I was facing one of those category #3 problems.
It's not a deprecated language, but alas, it is destined to be a niche language. For those niches, though, it is still the best, hands-down.
I guess this is rather category #2 than #3, but my favorite example is Advent of Code 2021, day 24: https://adventofcode.com/2021/day/24. I think many people consider this one of the hardest AoC problems ever, and at the time many didn't even code up a complete solution but solved it manually (me included). With Prolog, however, it's almost trivial. So many of Prolog's strengths come together here: writing parsers, writing interpreters, solving integer constraints, and in particular "reasoning backwards".
I was writing a theorem prover for higher-order modal logic. All the theorem provers written in C or C++ were tens of thousands of lines, and even then they had only a fraction of what I needed. What's more, they were all something like 100 times *slower* at proving theorems that Prolog could prove out of the box.
So I decided to try to implement the features I needed on top of the theorem prover which prolog already is.
Took me forever, but eventually it all came together like a thunderclap, and I was able to implement a theorem prover for quantified, higher-order modal logic that was amazingly, blazingly fast, in 67 lines of Prolog.
In terms of lines of code per day, it's the least productive I've ever been :-)
While I wouldn’t want to do many tasks in Prolog, I’ve always wished for good interop with some other language, or a language with some kind of async Prolog call. There are so many things that are a couple dozen lines of Prolog but would be hundreds of lines in other languages.
I think it is undergoing a resurgence due to LLMs proliferating, but I haven't seen how Prolog is being used in LLM creation. Of course, I don't really understand the whole creation process. I do remember someone on an academic/programmer forum stating that Prolog was big in European academic research, whereas Lisp was more popular in American academia. That would make sense, as Richard Stallman was big into Lisp AI work as a consultant.