I know it's a joke, but the truth is that this is exactly the kind of bug that unit tests won't find. The unit tests would have simulated a bunch of gyro settings, with and without a single gyro failure (but not two gyro failures, as that was an accepted design limitation).
Running the gyros with the torpedo still on the boat, however, would not have been tested because the designers didn't think of it. If they had thought of it, the failure wouldn't have happened in the first place.
Testing can verify that your software works within the space of behavior that you already know about. It can't make up for your failure to understand the problem fully.
It's the unknown and unanticipated failure modes that cause the worst problems. Predicted failure modes at least have code to deal with them, even if that code is buggy. Unit tests reduce those bugs, but the code for unanticipated failures doesn't exist at all; non-existent code has no bugs, and it also solves nothing.
> Testing can verify that your software works within the space of behavior that you already know about. It can't make up for your failure to understand the problem fully.
Thinking about correctly testing software is generally one of the best ways to improve your understanding of the problem.
I quite often find bugs during unit testing simply because I'm forced to think about how the software will break, rather than thinking about how it will (or should) work.
I think of it as being quite similar to waiting overnight to proof-read your own paper. You need to be in the context of the reader, not the writer, or your brain will skim over most mistakes.
Sure, writing tests can find design bugs. But that's not really the question at hand here. The specifics are that we have a clear and obvious requirement ("torpedo should self-destruct if it turns a full 360 degrees") that turns out to be missing an important point ("EXCEPT IF IT IS ON THE BOAT").
No amount of testing the former will lead you to realize the latter. Sure, you might happen to come to the realization while writing the test, but you might do so over breakfast too.
I'm not saying "don't do testing". I'm trying to point out that it has limits. The fact that you've written tests and they pass doesn't get you off the hook for design bugs.
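To make that limitation concrete, here is a hypothetical sketch (the function name and signature are invented for illustration): a guard implementing the stated requirement, with tests that exercise it thoroughly yet cannot possibly catch the missing exception, because "still on the boat" isn't part of the interface at all.

```python
def should_self_destruct(total_rotation_deg):
    """Stated requirement: self-destruct after a full 360-degree turn."""
    return abs(total_rotation_deg) >= 360

# Tests for the stated requirement -- all pass.
assert not should_self_destruct(0)
assert not should_self_destruct(359)
assert should_self_destruct(360)
assert should_self_destruct(-360)

# The design bug is invisible here: the function has no notion of
# "launched vs. still on the boat", so no test written against this
# interface can express the missing "EXCEPT IF IT IS ON THE BOAT" rule.
```

The tests are complete with respect to the spec as written; the defect lives in the spec, not in the code under test.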
I think his point is that the test itself isn't the useful bit. The useful part is that thinking about what to test can uncover bugs.
This is what FMEA (Failure Modes and Effects Analysis) does: you assume failures of every part of the system, rank their likelihood and end effect, and see how your design handles them. A good FMEA assumes everything will fail and analyzes the impact. Unfortunately, comprehensive FMEA is expensive and time-consuming, so it's usually only done for critical subsystems.
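As a toy illustration (the failure modes and scores below are invented, not from any real analysis), the ranking step of an FMEA often reduces to scoring each failure mode 1-10 on severity, occurrence, and detectability, then sorting by the risk priority number:

```python
# Hypothetical FMEA worksheet rows: (name, severity, occurrence, detection),
# each scored 1-10, where detection=10 means "very hard to detect".
failure_modes = [
    ("single gyro failure",            7, 4, 3),
    ("double gyro failure",            9, 2, 5),
    ("gyros run while still on boat", 10, 3, 8),
]

# Risk priority number (RPN) = severity * occurrence * detection.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:4d}  {name}")
```

Hard-to-detect, high-severity modes float to the top of the list, which is exactly where design attention (and testing budget) should go first.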
> No amount of testing the former will lead you to realize the latter. Sure, you might happen to come to the realization while writing the test, but you might do so over breakfast too.
You're a bit more likely to do it during a time you've set aside to fully consider potential failure scenarios.
I guess I can't argue with that, as it's essentially unfalsifiable. But it's worth pointing out that writing a unit test, which is the subject that started this discussion, is definitely not a time when you're "fully considering potential failure scenarios". Unit tests are narrow and focused, and aimed at verifying features.
What you're talking about is something I'd call white box QA. Which is valuable, though it's essentially just an extension of design, and has the same limits.
My broad point still stands: you can't "process" your way out of this with extra testing. Some bugs are just inherent, and stem from the fact that we're human.
> Unit tests are narrow and focused, and aimed at verifying features. What you're talking about is something I'd call white box QA. Which is valuable, though it's essentially just an extension of design, and has the same limits.
It seems like most negative arguments regarding "unit tests" start by defining them as a small subset of what can be usefully tested, and then arguing against the value of testing such an incomplete subset.
I don't see the point of drawing such an arbitrary line. I leverage "unit" tests to automatically test units of code as completely as possible (not just 'verify features'), and I expect the same of tests that others write.
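As an example of what "as completely as possible" can mean in practice (the guard function and its signature are invented for illustration), a small unit can often have its entire input space swept, rather than a handful of feature checks:

```python
def should_self_destruct(total_rotation_deg, launched):
    """Hypothetical guard: self-destruct on a full turn, but only after launch."""
    return launched and abs(total_rotation_deg) >= 360

# Sweep the whole (rotation, launch-state) space rather than spot-checking.
for deg in range(-720, 721):
    # Property: never self-destruct while still on the boat.
    assert not should_self_destruct(deg, launched=False)
    # Property: once launched, destruct exactly on a full turn or more.
    assert should_self_destruct(deg, launched=True) == (abs(deg) >= 360)
```

Of course, this only works once "launched" is part of the unit's interface; exhaustive testing covers the states you modeled, not the ones you didn't.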
> Running the gyros with the torpedo still on the boat, however, would not have been tested because the designers didn't think of it. If they had thought of it, the failure wouldn't have happened in the first place.
> Testing can verify that your software works within the space of behavior that you already know about. It can't make up for your failure to understand the problem fully.