No. You are giving textual instructions to Claude in the hope that it correctly generates a shell command for you, versus giving it a tool definition with a clearly defined parameter schema, where your MCP server is, presumably, enforcing adherence to those parameters BEFORE anything hits your shell. You would be helping Claude in this case, as you're giving it a clearer set of constraints to operate within.
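To make that concrete, here's roughly the shape of what I mean. The tool name, schema, and handler below are made up for illustration, and I'm using the generic jsonschema library rather than any particular MCP SDK:

    import subprocess

    import jsonschema  # pip install jsonschema

    # Hypothetical tool definition: the schema is the contract.
    LIST_FILES_TOOL = {
        "name": "list_files",
        "description": "List files in a directory",
        "inputSchema": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "show_hidden": {"type": "boolean", "default": False},
            },
            "required": ["path"],
            "additionalProperties": False,
        },
    }

    def handle_list_files(arguments: dict) -> str:
        # Reject anything that doesn't match the schema BEFORE building a command.
        jsonschema.validate(instance=arguments, schema=LIST_FILES_TOOL["inputSchema"])
        cmd = ["ls"]
        if arguments.get("show_hidden"):
            cmd.append("-a")
        cmd.append(arguments["path"])
        # argv list, no shell=True: arguments can't be reinterpreted as extra commands.
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout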
Well, with MCP you’re giving textual instructions to Claude in hopes that it correctly generates a tool call for you. It’s not like tool calls have access to some secret deterministic mode of the LLM; it’s still just text.
To an LLM there's not much difference between the list of sample commands above and the list of tool definitions it would get from an MCP server. JSON and GNU-style args are very similar in structure. And presumably the command itself is enforcing its constraints even better than the MCP server would.
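To make the comparison concrete (tool and flag names are invented, echoing the hypothetical list_files example above):

    import json
    import shlex

    # The same request rendered both ways. To the model each is just text it
    # has to emit token by token; structurally they carry the same information.
    args = {"path": "/var/log", "show_hidden": True}

    tool_call = json.dumps({"name": "list_files", "arguments": args})
    shell_cmd = shlex.join(["ls", "-a", args["path"]])

    print(tool_call)  # {"name": "list_files", "arguments": {"path": "/var/log", "show_hidden": true}}
    print(shell_cmd)  # ls -a /var/log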
Not strictly true. The LLM provider should be running constrained token selection based on the JSON schema of the tool call. That alone makes a massive difference, as you're already discarding invalid tokens during the completion at a low level. Now, if they had a BNF grammar for each CLI tool and enforced token selection based on that, you'd be much better off than with unconstrained token selection.
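For a sense of what that looks like mechanically, here's a toy of my own (nothing like a provider's actual implementation, which compiles the schema into a grammar/automaton over real token ids and masks logits), just to show the idea of discarding invalid tokens at each step:

    # Toy automaton for the single-parameter tool call {"command": "<string>"}.
    # Whole JSON fragments stand in for tokens, purely for readability.
    AUTOMATON = {
        "start":      {'{': "obj_open"},
        "obj_open":   {'"command"': "key_done"},
        "key_done":   {':': "expect_val"},
        "expect_val": {"<string>": "val_done"},
        "val_done":   {'}': "accept"},
        "accept":     {},
    }

    def allowed_tokens(state: str, vocab: list[str]) -> list[str]:
        """Mask the vocabulary down to tokens the current grammar state permits.
        A real constrainer does this over the model's token ids, setting the
        logits of everything else to -inf before sampling."""
        transitions = AUTOMATON[state]
        allowed = []
        for tok in vocab:
            if tok in transitions:
                allowed.append(tok)
            elif "<string>" in transitions and tok.startswith('"') and tok.endswith('"'):
                allowed.append(tok)  # any JSON string is fine as the value
        return allowed

    vocab = ['{', '}', ':', '"command"', '"ls -la"', 'DROP TABLE students;']
    print(allowed_tokens("start", vocab))       # ['{']
    print(allowed_tokens("expect_val", vocab))  # ['"command"', '"ls -la"'] -- strings only

A BNF grammar for a CLI would be the same mechanism with a different automaton; the hard part is that shell syntax is much looser than JSON.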
Yeah, that's why I said "not much" difference. I don't think it's much, because LLMs do quite well generating JSON without turning on constrained output mode, and I can't remember them ever messing up a bash command line unless the quoting got weird.
Either way it's textual instructions used to call a function (via a JSON object for MCP or a shell command for scripts). What works better depends on how the model you're using was post-trained and where in the prompt that info gets injected.