Hacker News

>For one, we can test the software by running it

As long as the tests are not also written by ChatGPT...

Many critical security issues require a deep understanding of the code, or some intense fuzzing, to discover; it's not enough to ask ChatGPT "write me X" and then superficially glance at the output to check that it looks correct. That's the part that worries me. Completely broken code will be caught immediately, but subtly broken code may linger for a long time and make it to production.

And from my limited experience with ChatGPT, it seems very good at making up broken things that look superficially correct.
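To make the "superficially correct" failure mode concrete, here is a hypothetical Python sketch (the function names are mine, not from the comment): a token check that agrees with the safe version on every input and passes any functional test suite, yet leaks timing information, which is exactly the kind of subtle flaw a glance at the output won't catch.

```python
import hmac

# Superficially correct token check an LLM might produce:
# `==` short-circuits on the first mismatched byte, leaking
# timing information about how much of the secret matched.
def check_token_naive(supplied: str, expected: str) -> bool:
    return supplied == expected

# Robust version: constant-time comparison.
def check_token_safe(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied.encode(), expected.encode())

# Both functions return identical results for every input, so
# ordinary tests (or a superficial review) cannot tell them apart;
# only deeper analysis or side-channel-aware fuzzing would.
for guess in ("secret", "secrex", "s", ""):
    assert check_token_naive(guess, "secret") == check_token_safe(guess, "secret")
```

This is the worrying case: the buggy variant is not wrong on any test vector, so "run it and see" never flags it.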


