As long as the tests are not also written by ChatGPT...
Many critical security issues require a deep understanding of the code or some intense fuzzing to discover; it's not enough to ask ChatGPT "write me X" and then superficially glance at the output to check that it looks correct. That's the part that worries me. Completely broken code will be caught immediately, but subtly broken code may linger for a long time and make it to production.
And from my limited experience with ChatGPT, it seems very good at making up broken things that look superficially correct.
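To make that concrete, here's the kind of thing I mean (a contrived Python sketch of my own, not something ChatGPT actually produced, and verify_token is just an illustrative name): a token check that passes every functional test yet is still subtly broken from a security standpoint.

    import hmac

    def verify_token(provided: str, stored: str) -> bool:
        # Looks correct, and every unit test with right/wrong tokens passes.
        # But the early-exit string comparison can leak timing information,
        # the classic setup for guessing a secret one byte at a time.
        return provided == stored

    def verify_token_safe(provided: str, stored: str) -> bool:
        # Constant-time comparison: identical behaviour in tests, no timing leak.
        return hmac.compare_digest(provided.encode(), stored.encode())

A quick glance, or a plain assert-based test suite, treats both versions as identical. Only someone who already knows what to look for, or a dedicated security review, tells them apart.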