
Agreed, we shouldn't trust the system, but using it as a Bloom filter (false positives possible, false negatives not) to flag submissions for manual review seems warranted.

If all we're getting is false positives, then it can safely be used to reduce the workload: anything the detector doesn't flag can be skipped, and everything it does flag still gets a human decision.

If we also get false negatives, then LLM text slips through unreviewed, and we'd be better off using existing techniques (manual or otherwise).
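
To make the Bloom-filter framing concrete, here's a minimal sketch of that triage in Python. The detector interface (detector_score) and the 0.2 threshold are made up for illustration, not a real API. The key property is that flagged items only enter a review queue, never an automatic penalty, while unflagged items skip review entirely; that second part is exactly what a nonzero false-negative rate breaks.

    # Hypothetical triage sketch; detector_score and the threshold are assumptions.
    REVIEW_THRESHOLD = 0.2  # assumed: scores below this are treated as human-written

    def triage(submissions, detector_score):
        """Split submissions into an auto-pass set and a manual-review queue."""
        auto_pass, needs_review = [], []
        for text in submissions:
            if detector_score(text) < REVIEW_THRESHOLD:
                auto_pass.append(text)     # safe only if false negatives are ~never
            else:
                needs_review.append(text)  # false positives land here; a human decides
        return auto_pass, needs_review

    # Usage with a stub scorer that flags nothing:
    clean, flagged = triage(["essay one", "essay two"], detector_score=lambda t: 0.1)

Note that the detector never issues a grade here; it only decides who a reviewer looks at.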

How do you do this manual review? How can a human spot LLM-generated text? The internet is full of horror stories of good students getting failing grades due to false positives from LLM detectors, where the manual review was cursory at best.