Hacker News | Beefin's comments

Super cool direction. Making agents first-class MCP servers feels like a natural next step—especially for scaling multi-agent coordination across infra boundaries. Curious how you’re handling observability at the server level—do you expose structured logs or telemetry for workflows running across agents? This could be huge for debugging large-scale agentic chains.


This is exactly what we're working on at the moment! (If you're curious about following along, check out the feature/distributed_tracing branch -- https://github.com/lastmile-ai/mcp-agent/tree/feature/distri...)

The nice thing about representing agents as MCP servers is we can leverage distributed tracing via OTEL to log multi-agent chains. Within the agent application, mcp-agent tracing follows the LLM semantic conventions from OpenTelemetry (https://opentelemetry.io/docs/specs/semconv/gen-ai/). For any MCP server that the agent uses, we propagate the trace context along.
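To make the propagation concrete, here is a minimal stdlib-only sketch of the W3C trace-context mechanism that OpenTelemetry uses under the hood. This is not mcp-agent's actual implementation, and `make_traceparent`/`propagate` are hypothetical names; in a real deployment you'd use the OTEL SDK's `inject()` helper rather than building headers by hand.

```python
# Sketch of W3C trace context propagation across agent -> MCP server hops.
import secrets

def make_traceparent(trace_id=None):
    """Build a W3C `traceparent` header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)   # 16-byte trace id, hex
    span_id = secrets.token_hex(8)                 # 8-byte span id, hex
    return f"00-{trace_id}-{span_id}-01"

def propagate(headers, trace_id=None):
    # Each hop keeps the trace id and mints a new span id, so all spans
    # from agents and MCP servers join the same distributed trace.
    headers["traceparent"] = make_traceparent(trace_id)
    return headers

# The agent starts a trace, then forwards the same trace id downstream.
h1 = propagate({})
tid = h1["traceparent"].split("-")[1]
h2 = propagate({}, trace_id=tid)
assert h2["traceparent"].split("-")[1] == tid
```

Because every hop shares one trace id, a tracing backend can stitch the agent's spans and each MCP server's spans into a single multi-agent trace.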


thank you thank you :) feel free to star it haha


what's the calculus here? if i'm a developer choosing a low-level primitive such as a database, i'm likely quite opinionated on which models i use.


If I had to guess, they might see embedding models becoming small and optimised enough that they can be pulled into the DB layer as a feature, rather than something devs need to actively think about and build into their app.

Or it could just be an expansion of their cloud offering. In a lot of cases embedding models just need to be 'good enough', and cheap and/or convenient is a winning GTM approach.
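The "embeddings as a DB feature" idea can be sketched as a toy store that computes vectors on write, so the app never calls an embedding model directly. `AutoEmbedStore` and `embed` are hypothetical; `embed` here is a deterministic hash-based stand-in, where a real system would run an actual embedding model inside the database layer.

```python
import hashlib
import math

def embed(text, dim=8):
    # Deterministic stand-in for a small built-in embedding model:
    # hash the text, take `dim` bytes, and L2-normalize.
    h = hashlib.sha256(text.encode()).digest()
    v = [b / 255 for b in h[:dim]]
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

class AutoEmbedStore:
    """Toy store that embeds documents transparently on insert."""

    def __init__(self):
        self.docs = []

    def insert(self, text):
        # Embedding happens inside the "DB layer", not in app code.
        self.docs.append((text, embed(text)))

    def search(self, query, k=1):
        # Rank by dot product (cosine similarity, since vectors are unit norm).
        q = embed(query)
        scored = sorted(self.docs,
                        key=lambda d: -sum(a * b for a, b in zip(q, d[1])))
        return [t for t, _ in scored[:k]]

store = AutoEmbedStore()
store.insert("hello world")
store.insert("goodbye")
assert store.search("hello world")[0] == "hello world"
```

The point of the sketch is the API shape: `insert` and `search` take plain text, and the model choice disappears behind the database, which is exactly the "devs don't have to think about it" bet.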


> We firmly believe that the next generation of AI applications will be built on MongoDB, making it the ideal foundation for AI-driven systems.

I wonder about this. I've been working on ClickHouse support and cloud management for over 6 years. When I first started, I thought we would focus on integrating ML workloads, pretty much like the quote above. Over that time maybe 2 customers asked about ML. Everyone else (literally hundreds) wanted visualization and the ability to load data fast. After a while, it became clear why.

Databases tend to be chosen and operated by groups with very different skillsets from AI. They solve different problems, and the workloads are completely different. AI depends on GPUs and often on datasets far beyond the storage capacity of databases. Databases, on the other hand, optimize hardware for fast response, which means loads of RAM and fast I/O. When people used to ask me about our AI integration strategy, I would reply "fix bugs in Parquet." It's not a flip answer: it enables databases and AI services to use a single copy of the source data. That's one example of how AI and databases actually interoperate in industrial deployments.



anyone who can read and understand regex, i have concerns for


congestion pricing doesn't work. it's simply a shrug for the wealthy, and it takes money from lower-income drivers.



Mixpeek | Founding Engineers | NYC | Onsite, West Village

Our team is composed of Computer Vision and NLP engineers, and we're building "palantir for video". Said differently: the ability to pull any data out of a video and perform search, clustering, analysis, etc.

We're looking for founding engineers. Generally, if you're a strong and resourceful developer, you'd probably be a good fit.

http://mixpeek.com

email the founder: ethan at mixpeek dot com


TL;DR SQL is king


