IANA{LLM}, but if you're only sampling from a "correct" grammar, you're potentially forgoing what would otherwise have been a more desirable, more semantically useful token. Most of these models were trained overwhelmingly on human language, not structured data, so I'd opt for a more semantically rich format (e.g. XML or YAML), since those are designed to be ~more human readable. Or, better yet: have the boss LLM produce what it excels at (prose, most of the time) and have a secondary model with a stricter grammar convert that to JSON.
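
A rough sketch of that two-pass idea, assuming the OpenAI Python client; the model names, prompt wording, and target schema are just placeholders, and the second call's JSON mode stands in for whatever stricter grammar you'd actually enforce:

```python
from openai import OpenAI

client = OpenAI()

def draft_prose(question: str) -> str:
    # Pass 1: let the "boss" model answer in ordinary prose,
    # with no structural constraints getting in its way.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def prose_to_json(prose: str) -> str:
    # Pass 2: a second, cheaper call whose only job is transcription.
    # JSON mode constrains the output to valid JSON; a full grammar or
    # schema-constrained decoder could be swapped in here instead.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Convert the user's text into a JSON object "
                           "with keys 'summary' and 'key_points'. "
                           "Output JSON only.",
            },
            {"role": "user", "content": prose},
        ],
        response_format={"type": "json_object"},
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    answer = draft_prose("Why do interpreters often compile to bytecode?")
    print(prose_to_json(answer))
```

The point being: the expensive model never has to "think" in JSON, and the cheap second pass is doing a near-mechanical reformatting job where a tight grammar costs you very little.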