I'm not saying use placeholder keys: the actual keys themselves serve as guidance.
Naming a key "nameBasedOnLocationIGaveYou" instead of "name", or "oneSentenceSummary" vs "summary", results in a meaningful difference.
You can even use that for formatted single-response chain of thought, like {"listOfStuff":[...], "whatDoTheyHaveInCommon": "", "whichOneIsMostImportant": ""}
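Key order is doing the work there, since the model writes left to right: put the answer field last so it only gets generated after the "reasoning" fields. A made-up sketch of how you might spell that schema out in the prompt, with the expected content described in the values:

```json
{
  "listOfStuff": ["every item you found"],
  "whatDoTheyHaveInCommon": "forces a pass over the list before committing to an answer",
  "whichOneIsMostImportant": "the actual answer, written only after the fields above"
}
```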
Also remember, the LLM doesn't need valid JSON: I just straight up insert comments into the JSON in a non-compliant way in some of my prompts, and GPT-4 and Claude are both smart enough not to hallucinate the comments back at you. 3.5 might be pushing it if the temp is too high (although even the nerfed API logit bias should fix that, now that I think about it).
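For example, a hypothetical prompt snippet (the // comments make it invalid JSON, which is the point):

```json
{
  // use the location from the context above, not whatever the user typed
  "nameBasedOnLocationIGaveYou": "",
  "oneSentenceSummary": "" // hard cap of one sentence
}
```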
And sometimes, to save tokens, I describe a JSON object without using JSON at all: just structure the description in neatly formatted markdown, and even 3.5 can follow along.
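Something along these lines instead of a JSON schema (again, a made-up example):

```markdown
Reply with a JSON object containing:
- nameBasedOnLocationIGaveYou: string
- oneSentenceSummary: string, one sentence max
- listOfStuff: array of strings
```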
Oh, I see! I misunderstood: I thought you meant using dummy keys to hold comments in their values, which some people have suggested as a workaround for JSON not supporting comments.