
Hamel wrote a whole lot more about the "LLM as a judge" pattern (where you use LLMs to evaluate the output of other LLMs) here: https://hamel.dev/blog/posts/llm-judge/
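For anyone who hasn't seen the pattern before, a minimal sketch looks something like the following. This assumes the `openai` Python package and an `OPENAI_API_KEY` in the environment; the model name, rubric, and PASS/FAIL scheme are just illustrative choices, not anything prescribed by Hamel's post.

    # Minimal LLM-as-judge sketch: one model grades another model's answer.
    from openai import OpenAI

    client = OpenAI()

    JUDGE_PROMPT = """You are grading an answer produced by another model.

    Question:
    {question}

    Answer:
    {answer}

    Reply with exactly one word, PASS or FAIL, judging whether the answer
    is factually correct and actually addresses the question."""


    def judge(question: str, answer: str) -> bool:
        """Ask a judge model to grade an answer; returns True on PASS."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # any capable model can act as the judge
            messages=[{
                "role": "user",
                "content": JUDGE_PROMPT.format(question=question, answer=answer),
            }],
            temperature=0,  # keep grading as deterministic as possible
        )
        verdict = response.choices[0].message.content.strip().upper()
        return verdict.startswith("PASS")


    if __name__ == "__main__":
        print(judge("What is 2 + 2?", "4"))  # expected: True

In practice you'd version the judge prompt, spot-check its verdicts against human labels, and treat the judge itself as something that needs evaluation, which is much of what Hamel's post covers.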


I really recommend people study the measurement frailties and prompting sensitivities of LLM judges before employing them. They're valuable, but they should be used with a clear understanding of the risks: https://www.cip.org/blog/llm-judges-are-unreliable


Appreciate it, Simon! I've now edited my post to include "intro to evals" links for those who aren't familiar.

