I built an MCP server that solves this, actually. It works like a tool-calling proxy: it connects to child servers, but instead of exposing their tools as direct tool calls, it presents them as TypeScript definitions, asks your LLM to write code that invokes them all together, and then executes that TypeScript in a restricted VM to do the tool calling indirectly. If your tools pass data between each other or need some kind of parsing or manipulation of output, like a tool call that returns JSON, it's trivial to transform it in code. https://github.com/zbowling/mcpcodeserver
It works as an MCP proxy of sorts: it converts all the child MCP tools into TypeScript annotations, asks your LLM to generate TypeScript, then executes that code in a restricted VM to make the tool calls that way. It allows parallel processing, passing data between tools without coming back to the LLM for a full loop, etc. The agents are pretty good at debugging the issues they create and trying again, too.
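For a sense of what that generated code can look like, here's a rough sketch; the tool names (`github.searchIssues`, `slack.postMessage`) and their shapes are made up for the example, not actual servers shipped with the project:

```typescript
// Hypothetical generated code: call one tool, filter its JSON result,
// and pass the transformed data to a second tool without another LLM round trip.
const issues = await github.searchIssues({ query: "label:bug state:open" });

// Plain TypeScript handles the parsing/filtering that would otherwise burn tokens.
const urgent = issues.items
  .filter((i) => i.comments > 10)
  .map((i) => `#${i.number}: ${i.title}`);

await slack.postMessage({
  channel: "#triage",
  text: `Top noisy bugs:\n${urgent.join("\n")}`,
});
```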
I specifically built this as an MCP server. It works as an MCP server that proxies to other MCP servers, converts the tool definitions into TypeScript annotations, and asks your LLM to generate TypeScript that runs in a restricted VM to make the tool calls that way. It's based on the Apple white paper on this topic from last year. https://github.com/zbowling/mcpcodeserver
I hacked together a new MCP server this weekend that can significantly cut down the overhead of direct tool calling with LLMs inside different agents, especially when making multiple tool calls in a more complex workflow.
Inspired by the recent Cloudflare blog post on their Code Mode MCP server and the original Apple white paper, I hacked together a new MCP server that improves on the Cloudflare one in several ways: it doesn't rely on their backends to isolate execution of the tool calls, it has generally better support for all the features in MCP, and it does significantly better interface generation and LLM tool hinting to save context-window tokens. This implementation also scales more cleanly to a lot more child servers.
Most LLMs are naturally better at code generation than at tool calling, since code understanding is more foundational to their knowledge while tool calling is pounded into models in later fine-tuning stages. These agent orchestrators can also burn an excessive number of tokens passing data between tools through the LLM. But if you move the tool calling into code rather than having the LLM make the calls directly, and have the LLM generate that code, you get significantly better results for complex cases and less overhead passing data between tool calls.
This implementation works as an MCP server proxy, basically. As an MCP server, it is also an MCP client to your child servers. In the middle it hosts a Node VM to execute code generated by the LLM to make tool calls indirectly. By introspecting the child MCP servers and converting their tool interfaces into small, condensed TypeScript API declarations, your LLM can generate code that invokes those tools in the provided Node VM instead of invoking them directly, and do the complex response and error handling in code. This can be really powerful when doing multiple tool calls in parallel or with logic around processing. And since it's a Node VM, the code also has access to Node's built-in standard library modules.
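As a rough illustration of the declaration step (the real generated output may look different), a child server's JSON Schema tool definition gets condensed into something like a TypeScript signature the LLM can code against; the `github.searchIssues` tool here is hypothetical:

```typescript
// Hypothetical MCP tool definition from a child server (JSON Schema):
// { name: "search_issues", inputSchema: { properties: { query: { type: "string" },
//   limit: { type: "number" } }, required: ["query"] } }

// ...condensed into a short TypeScript declaration for the LLM to code against:
declare namespace github {
  /** Search issues in the configured repository. */
  function searchIssues(args: {
    query: string;
    limit?: number;
  }): Promise<{ items: { number: number; title: string; comments: number }[] }>;
}
```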
One issue: if your tool calls are actually simple, like a basic web search or a single tool call, this can add a bit of unnecessary overhead. But the more complex the prompt, the more this approach can improve the quality of the output and lower your inference costs.
Synology doesn't even compete with Synology anymore, because all the new hardware requires locked-in Synology drives now.
It's creating a void that's being filled by Ugreen, Minisforum, Beelink, and Aoostar with innovative platforms from China, and by classic competitors like QNAP, Asustor, TerraMaster, etc. for small to mid-tier needs. 45Drives covers the larger space for folks who want to manage things more on their own but have enterprise-scale needs. Dell and HP have always competed in the high-end enterprise space and are also becoming a better option, even though Synology is so easy as an appliance.
Yes. Synology introduced the requirement to use first-party drives earlier this year and it was such an unmitigated disaster for them that they rolled it back just a couple of days ago.
Check out Pixi! Pixi is an alternative to the common conda and PyPI frontends with a better system for hardware feature detection, so it gets you the best version of Torch for your hardware that is compatible across your packages (except for AMD at the moment). It can pull in the conda-forge or PyPI builds of PyTorch and help you manage things automagically across platforms. https://pixi.sh/latest/python/pytorch/
It doesn't solve how you package your wheels specifically; that problem is still pushed onto your downstream users because of boneheaded packaging decisions by PyTorch themselves. But as a consumer, Pixi softens the blow. The conda-forge builds of PyTorch are also a bit more sane.
Try Pixi! Pixi is a much saner way to build with conda + PyPI packages in a single tool, which makes this so much easier for Torch development, regardless of whether you use the conda-forge or PyPI builds of PyTorch. https://pixi.sh/latest/
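For a rough idea of what this looks like in practice (a minimal sketch from memory, assuming the conda-forge PyTorch build on a CUDA machine; check the linked docs for the exact current manifest syntax), the project manifest declares the CUDA system requirement and lets Pixi resolve a matching Torch build per platform:

```toml
# pixi.toml — minimal sketch, not a complete project manifest
[project]
name = "torch-example"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64", "win-64"]

[dependencies]
python = "3.12.*"
pytorch = ">=2.4"

# Tells the resolver a CUDA 12 driver is available, so CUDA builds of
# pytorch get selected on linux-64; CPU builds are used on the other platforms.
[system-requirements]
cuda = "12.0"
```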
Though I'm not sure who decided the ʻokina needed its own character rather than the traditionally used apostrophe. It's a pain to type without a Hawaiian keyboard.
Besides, the Hawaiian diacritics are not part of English orthography, so the name of the state (and the big island) is just "Hawaii" in English. In Hawaiian, it's Hawaiʻi.
> Though I'm not sure who decided the ʻokina needed its own character rather than the traditionally used apostrophe. It's a pain to type without a Hawaiian keyboard.
I dunno, the glottal stop sounds pretty different from normal English usage of apostrophe. If anything it's closer to - than ', like in uh-oh.
French uses both grave and acute accent marks, and they sound very different.
It's not just the shape of the glyph. An apostrophe is a punctuation mark. An ʻokina is a letter. In Unicode, U+0027 is marked "Other Punctuation". U+02BB is "Modifier Letter". This matters to software.
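For example (a quick TypeScript/JavaScript illustration of my own, not anything specific to Hawaiian tooling), Unicode-aware regexes already treat the two characters differently because of those categories:

```typescript
// U+02BB (ʻokina) has General_Category Lm, so it counts as a letter;
// U+0027 (apostrophe) is Po, so it does not.
const isLetter = (ch: string) => /\p{L}/u.test(ch);

console.log(isLetter("\u02BB")); // true  — ʻokina matches \p{L}
console.log(isLetter("'"));      // false — apostrophe is punctuation

// This affects word segmentation: the ʻokina keeps "Hawaiʻi" one word,
// while a plain apostrophe splits it in letter-based tokenizers.
console.log("Hawai\u02BBi".match(/\p{L}+/gu)); // [ "Hawaiʻi" ]
console.log("Hawai'i".match(/\p{L}+/gu));      // [ "Hawai", "i" ]
```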
In Unicode, U+0027 is marked "ASCII punctuation and symbols" and described as "neutral (vertical) glyph with mixed usage."
While in English the apostrophe is usually a punctuation mark, it is used as a letter (typically for a glottal stop, like the ʻokina) in dozens of languages, as well as when writing certain English accents phonetically, like Glasgow or Cockney.
Software does not particularly care which Unicode character you use, and the switch to the inverted-comma ʻokina began before Unicode (or software) was a thing.
Need to write a web extension to inject some javascript to show a loading screen for a few seconds and download a few MB of js so it feels like a modern website. Should probably wrap the whole thing in a SPA too so we have options in the future