Hamel wrote a whole lot more about the "LLM as a judge" pattern (where you use LLMs to evaluate the output of other LLMs) here: https://hamel.dev/blog/posts/llm-judge/
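For anyone who hasn't seen the pattern before, here's a minimal sketch of the basic shape: one model produces an answer, a second "judge" model critiques it and returns a verdict. The prompt wording, the `call_llm` placeholder and the binary PASS/FAIL format are all illustrative assumptions on my part, not taken from either linked post.

```python
# Minimal LLM-as-judge sketch: a second model grades the output of the first.
# `call_llm` is a hypothetical stand-in for whatever client you actually use
# (OpenAI, Anthropic, a local model, etc.).

JUDGE_PROMPT = """\
You are grading an AI assistant's answer.

Question:
{question}

Answer to grade:
{answer}

First write a short critique, then on the final line output exactly
"VERDICT: PASS" or "VERDICT: FAIL".
"""


def call_llm(prompt: str) -> str:
    """Placeholder: replace with a real call to your model provider."""
    raise NotImplementedError


def judge(question: str, answer: str) -> tuple[bool, str]:
    """Ask the judge model for a critique and a binary verdict."""
    response = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    last_line = response.strip().splitlines()[-1].strip().upper()
    return last_line.endswith("PASS"), response


if __name__ == "__main__":
    ok, critique = judge(
        question="What is the capital of France?",
        answer="The capital of France is Paris.",
    )
    print("PASS" if ok else "FAIL")
    print(critique)
```

Note that everything interesting lives in that judge prompt: small changes to its wording can swing the verdicts, which is exactly what the next link is about.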
I really recommend studying the measurement weaknesses and prompt sensitivity of LLM judges before employing them. They're valuable, but should be used with a clear understanding of the risks: https://www.cip.org/blog/llm-judges-are-unreliable