Even better, if they could leverage LLMs, they could automate a large part of the initial moderation. They have already acknowledged that these models were likely trained on their data. Why not feed new posts back into one and classify them for relevance (on- vs off-topic), attitude (compassion vs hate), or quality (written by a bot)? A first-pass triage could look something like the sketch below.
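
A minimal sketch of the idea, assuming the OpenAI Python client; the model name, category labels, and prompt wording are illustrative assumptions, not anything HN has actually deployed:

    # Hypothetical first-pass moderation triage using an LLM.
    # Assumes OPENAI_API_KEY is set; model and labels are placeholders.
    from openai import OpenAI

    client = OpenAI()

    PROMPT = (
        "You are a forum moderation assistant. Classify the post below on three axes:\n"
        "- relevance: on-topic | off-topic\n"
        "- attitude: compassionate | hostile\n"
        "- quality: human-written | likely-bot\n"
        "Reply as JSON with keys relevance, attitude, quality."
    )

    def triage(post: str) -> str:
        """Return the model's JSON classification for a single post."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable model would do
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": post},
            ],
            temperature=0,  # keep labels as deterministic as possible
        )
        return response.choices[0].message.content

    print(triage("First! Check out my crypto giveaway at example.com"))

The output would only flag posts for a human moderator to review, not act on its own; the low temperature is meant to keep the labels reproducible across reruns.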

