Sure, writing tests can find design bugs. But that's not really the question at hand here. The specifics are that we have a clear and obvious requirement ("torpedo should self-destruct if it turns a full 360 degrees") that turns out to be missing an important point ("EXCEPT IF IT IS ON THE BOAT").
No amount of testing the former will lead you to realize the latter. Sure, you might happen to come to the realization while writing the test, but you might do so over breakfast too.
I'm not saying "don't do testing". I'm trying to point out that it has limits. The fact that you've written tests and they pass doesn't get you off the hook for design bugs.
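To make the torpedo example concrete, here's a minimal sketch, with the class and test names invented purely for illustration. The test of the stated 360-degree rule passes, and nothing in it ever raises the question that actually matters:

    # Hypothetical sketch of the requirement as written, not any real system.
    class Torpedo:
        def __init__(self):
            self.cumulative_turn_deg = 0.0
            self.destroyed = False

        def turn(self, degrees):
            # Stated requirement: self-destruct after turning a full 360 degrees.
            self.cumulative_turn_deg += abs(degrees)
            if self.cumulative_turn_deg >= 360.0:
                self.self_destruct()

        def self_destruct(self):
            self.destroyed = True

    def test_self_destructs_after_full_circle():
        t = Torpedo()
        for _ in range(8):
            t.turn(45)      # eight 45-degree turns = a full circle
        assert t.destroyed  # passes: the requirement as written is met

    # The test passes, yet neither it nor the spec ever asks:
    # what if the torpedo is still on the boat?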
I think his point is that the test itself isn't the useful bit. The useful part is that thinking about what to test can uncover bugs.
This is what FMEA (Failure Modes and Effects Analysis) does: you assume failures of every part of the system, rank each by likelihood and end effect, and see how your design handles it. A good FMEA assumes everything will fail and analyzes the impact. Unfortunately, a comprehensive FMEA is expensive and time-consuming, so it's usually done only for critical subsystems.
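For a flavor of what that looks like in practice: a classic FMEA scores each failure mode on severity, occurrence, and detectability, multiplies them into a Risk Priority Number (RPN), and ranks modes so the worst get mitigated first. A minimal sketch, with the failure modes and 1-10 scores entirely made up for illustration:

    # Toy FMEA ranking: RPN = severity * occurrence * detection
    # (all three on a 1-10 scale; higher detection score = harder to detect).
    failure_modes = [
        # (component, failure mode, severity, occurrence, detection)
        ("guidance",   "gyro drift",              7,  4, 3),
        ("arming",     "arms while still aboard", 10, 2, 8),
        ("propulsion", "motor stall",             5,  3, 2),
    ]

    ranked = sorted(
        ((s * o * d, comp, mode) for comp, mode, s, o, d in failure_modes),
        reverse=True,
    )
    for rpn, comp, mode in ranked:
        print(f"RPN {rpn:4d}  {comp}: {mode}")

    # 'arms while still aboard' tops the list precisely because it is
    # severe and hard to detect -- the kind of case FMEA is built to surface.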
> No amount of testing the former will lead you to realize the latter. Sure, you might happen to come to the realization while writing the test, but you might do so over breakfast too.
You're a bit more likely to do it during a time you've set aside to fully consider potential failure scenarios.
I guess I can't argue with that, as it's essentially unfalsifiable. But it's worth pointing out that writing a unit test, which is the subject that started this discussion, is definitely not a time when you're "fully considering potential failure scenarios". Unit tests are narrow and focused, and aimed at verifying features.
What you're talking about is something I'd call white box QA. Which is valuable, though it's essentially just an extension of design, and has the same limits.
My broad point still stands: you can't "process" your way out of this with extra testing. Some bugs are just inherent, and stem from the fact that we're human.
> Unit tests are narrow and focused, and aimed at verifying features. What you're talking about is something I'd call white box QA. Which is valuable, though it's essentially just an extension of design, and has the same limits.
It seems like most negative arguments regarding "unit tests" start by defining them as a small subset of what can be usefully tested, and then arguing against the value of testing such an incomplete subset.
I don't see the point of drawing such an arbitrary line. I leverage "unit" tests to automatically test units of code as completely as possible (not just 'verify features'), and I expect the same of tests that others write.
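To illustrate the distinction with a hypothetical unit (the function and tests here are invented for the example): the first test verifies a single feature; the second sweeps the input space and asserts the unit's defining property, which is closer to testing the unit "as completely as possible".

    # Hypothetical unit under test.
    def wrap_heading(deg):
        """Normalize a heading to the range [0, 360)."""
        return deg % 360.0

    # Narrow, feature-verifying test: one example.
    def test_wrap_one_case():
        assert wrap_heading(370) == 10

    # Testing the unit as completely as practical: sweep the input
    # space, including negatives, and assert the defining property.
    def test_wrap_property():
        for deg in range(-1080, 1081):
            wrapped = wrap_heading(deg)
            assert 0 <= wrapped < 360
            assert (wrapped - deg) % 360 == 0  # same heading, normalized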