Sometimes the AI is all too good at writing tests.
I agree with the idea, I do it too, but you need to make sure the tests don't just validate the incorrect behavior, or that the code isn't updated to pass the test in a way that actually "misses the point".
I've had this happen to me on one or two tests every time.
Even more important, those tests need to be useful. Often unit tests simply verify that the code works as written, which generally does more harm than good.
To give some further advice to juniors: if somebody tells you writing unit tests is boring, they haven't learned how to write good tests. There appears to be a large intersection between devs who think testing is a dull task and devs who see a self-proclaimed speed-up from AI. I don't think this is a coincidence.
Writing useful tests is just as important as writing app code, and should be reviewed with equal scrutiny.
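To make the distinction concrete, here is a minimal Python sketch (the function and test names are hypothetical, just for illustration) of a test that merely restates the implementation versus one that pins down independently known behavior:

```python
# A hypothetical discount function and two tests for it, illustrating
# the difference between a tautological test and a useful one.

def apply_discount(price: float, rate: float) -> float:
    """Apply a fractional discount (e.g. rate=0.2 means 20% off)."""
    return price * (1 - rate)

# Tautological: the expectation mirrors the implementation line for line,
# so a bug in the formula would be faithfully reproduced in the test.
def test_discount_mirrors_code():
    assert apply_discount(100.0, 0.2) == 100.0 * (1 - 0.2)

# Useful: the expected values come from outside the code under test,
# so a broken formula actually fails.
def test_discount_behavior():
    assert apply_discount(100.0, 0.2) == 80.0    # 20% off 100 is 80
    assert apply_discount(100.0, 0.0) == 100.0   # no discount, no change
    assert apply_discount(0.0, 0.5) == 0.0       # free stays free
```

The first test would happily keep passing if someone "fixed" the formula to match a wrong expectation, which is exactly the failure mode being described here.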
For some reason, Gemini seems to be worse at this than Claude lately. Since mostly moving to 3, I've had it go back and change the tests rather than fix the bug on what seems like a regular basis. It's as if it's gotten smart enough to "cheat" more. You really do still have to verify that the tests are valid.
Yep. It's incredibly annoying that these AI companies are obviously turning the "IQ knob" on these models up and down without warning or recourse. First OpenAI, then Anthropic, and now Google. I'm guessing it's a cost optimization. OpenAI even said that part out loud.
Of course, for customers it's just one more reason you need to be looking at every AI output. Just because they did something perfectly yesterday doesn't mean they won't totally screw up the exact same thing today. Or you could say it's one more advantage of local models: you control the knobs.