This is already supported via listChanged. The problem is that >90% of clients currently don't implement it, including Anthropic's own (https://modelcontextprotocol.io/clients).
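For reference, the moving parts are small. Per the MCP spec, the server advertises the capability at initialize time and later emits a standalone notification, which the client is expected to answer by re-fetching tools/list (sketched here as TypeScript literals):

    // Server's initialize response advertises that its tool list can change:
    const serverCapabilities = {
      tools: { listChanged: true },
    };

    // Sent whenever tools are added or removed; a compliant client should
    // re-request tools/list when it sees this:
    const toolListChangedNotification = {
      jsonrpc: "2.0",
      method: "notifications/tools/list_changed",
    };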
I was considering writing an MCP SEP (specification enhancement proposal, https://modelcontextprotocol.io/community/sep-guidelines), though I'm curious whether other MCP tinkerers feel the issue exists, whether it should be solved this way, and so on. What do you think?
In this situation, I would have a tool called "request ability to edit GLTF". Calling it would trigger an addition to the tool list specifically for your desired GLTF. The server would then send the "tool list changed" notification, and now the LLM would have access.
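Roughly, with the TypeScript SDK that could look like the sketch below. Tool names are hypothetical, and I'm assuming a recent SDK version, where enabling/disabling a registered tool emits the list-changed notification for you:

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { z } from "zod";

    const server = new McpServer({ name: "gltf-server", version: "1.0.0" });

    // Registered up front but disabled, so it's hidden from tools/list.
    const editGltf = server.tool(
      "edit_gltf",
      { patch: z.string() }, // placeholder; swap in the asset-specific schema
      async ({ patch }) => ({
        content: [{ type: "text", text: `applied ${patch}` }],
      }),
    );
    editGltf.disable();

    // The "request ability" tool flips it on; the SDK then notifies the client.
    server.tool("request_gltf_edit_access", async () => {
      editGltf.enable();
      return { content: [{ type: "text", text: "edit_gltf is now available" }] };
    });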
If you want to do it without the tool-list-changed notification, I'd have two tools: get schema for GLTF, and edit GLTF with schema. If you note that get schema is a dependency of edit, the LLM could probably plumb that together on its own fairly well.
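Continuing the sketch, the dependency can live in the tool descriptions (again, hypothetical names; the validation step is the important part, since a hard error is what lets the model self-correct):

    // currentSchema would be derived from the loaded asset; illustrative only.
    let currentSchema: object = {};

    server.tool(
      "get_gltf_schema",
      "Returns the JSON Schema for valid edits to the loaded GLTF. Call this before edit_gltf_with_schema.",
      async () => ({
        content: [{ type: "text", text: JSON.stringify(currentSchema) }],
      }),
    );

    server.tool(
      "edit_gltf_with_schema",
      "Applies an edit. The edit MUST conform to the schema from get_gltf_schema.",
      { edit: z.string() },
      async ({ edit }) => {
        // Validate `edit` against currentSchema here and return an error
        // result on mismatch, so the model gets immediate, actionable feedback.
        return { content: [{ type: "text", text: "ok" }] };
      },
    );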
You could probably also support this workflow using sampling.
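A sampling sketch, continuing the same example (untested; createMessage is the SDK's sampling request, and it requires a client that declares the sampling capability):

    server.tool("edit_gltf_sampled", { request: z.string() }, async ({ request }) => {
      // Ask the client's own model for a schema-conformant edit, rather than
      // trusting the original tool-call arguments.
      const sampled = await server.server.createMessage({
        messages: [{
          role: "user",
          content: {
            type: "text",
            text: `Emit an edit conforming to ${JSON.stringify(currentSchema)} for: ${request}`,
          },
        }],
        maxTokens: 1000,
      });
      // Validate sampled.content before applying it to the asset.
      return { content: [{ type: "text", text: "ok" }] };
    });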
Sure, but the need for accuracy will only increase; there's a difference between suggesting that an LLM put a schema in its context before calling a tool vs. forcing the LLM to use a structured output returned dynamically from a tool.
We already have 100% reliable structured outputs when we build chatbots that integrate with LLM APIs directly; I don't want to lose that.
And LLMs will get more accurate. What happens when the LLM uses the wrong parameters? If it gets an immediate error, it will just try again; no protocol changes needed, just better LLMs.
Last time I used Gemini CLI it still couldn’t consistently edit a file. That was just a few weeks ago. In fact, it would go into a loop attempting the same edit, burning through many thousands of tokens and calls in the process, re-reading the file, attempting the same edit, rinse, repeat until I stopped it.
The Arazzo specification[0] (from OpenAPI contributors) aims to solve the dependent-arguments issue by introducing "runtime expressions"[1] within a series of independent tool calls that compose a workflow.
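For a flavor of it, here's a hypothetical workflow (shown as a TypeScript literal rather than Arazzo's usual YAML): step two's input references step one's output through a runtime expression, which makes the dependency explicit in the workflow itself instead of relying on the model to plumb it.

    const workflow = {
      workflowId: "editGltf",
      steps: [
        {
          stepId: "getSchema",
          operationId: "getGltfSchema",
          outputs: { schema: "$response.body" }, // runtime expression
        },
        {
          stepId: "applyEdit",
          operationId: "editGltf",
          parameters: [
            // Pulls the prior step's output at run time:
            { name: "schema", in: "query", value: "$steps.getSchema.outputs.schema" },
          ],
        },
      ],
    };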
Not true: structured outputs enforce output formats with 100% reliability. E.g., https://platform.openai.com/docs/guides/structured-outputs says, "Structured Outputs is a feature that ensures the model will always generate responses that adhere to your supplied JSON Schema, so you don't need to worry about the model omitting a required key, or hallucinating an invalid enum value."
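For instance, a minimal call with the OpenAI SDK (TypeScript; the model name and schema here are illustrative):

    import OpenAI from "openai";

    const client = new OpenAI();

    // With strict: true, the API constrains decoding so the reply always
    // parses against this schema: no missing keys, no invented enum values.
    const completion = await client.chat.completions.create({
      model: "gpt-4o-2024-08-06",
      messages: [{ role: "user", content: "Move the chair 2 meters along x." }],
      response_format: {
        type: "json_schema",
        json_schema: {
          name: "gltf_edit",
          strict: true,
          schema: {
            type: "object",
            properties: {
              node: { type: "string" },
              translation: { type: "array", items: { type: "number" } },
            },
            required: ["node", "translation"],
            additionalProperties: false,
          },
        },
      },
    });

    console.log(JSON.parse(completion.choices[0].message.content ?? "{}"));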