This is a neat idea but gives me pause. Thinking about how it would work in projects I maintain, it would either:

- be over-confident, providing negative value: the proportion of PRs that genuinely warrant an "LGTM" is extraordinarily low, and my increasingly deep familiarity with the code and its areas of risk makes me even more suspicious when something looks that safe

- never gain confidence in any PR, providing no value

I can’t think of a scenario where I’d use this for these projects. But I can certainly imagine it in the abstract, under circumstances where baseline safety of changes is much higher.

