I like the pilot checklist concept as an example of a way to address a problem when there can be no magic automatic way.
It's a different way to address the problem of the unreliable component than just demanding it be magically reliable, or replacing it with something else that's reliable (automation, i.e. a magic compiler or runtime): you add procedure, redundancy, and cross-verification so that the final output is reliable even though the individual worker parts are not. It's just ECC in human scope.
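To make the ECC analogy concrete, here's a toy sketch (my own illustration, not anything from the article): three independent checks that are each wrong about 10% of the time, combined by majority vote, give a result that's wrong only about 3% of the time.

    import random

    # Toy model: each independent "check" gets the right answer only 90% of the time.
    def unreliable_check(truth, accuracy=0.9):
        return truth if random.random() < accuracy else not truth

    # Redundancy + cross-verification: take the majority of three independent checks.
    def majority_vote(truth):
        votes = [unreliable_check(truth) for _ in range(3)]
        return sum(votes) >= 2

    trials = 100_000
    errors = sum(majority_vote(True) is False for _ in range(trials))
    print(errors / trials)  # ~0.028 for the ensemble vs ~0.10 for a single check

None of the individual checks got any better; the reliability comes entirely from the structure wrapped around them, which is the checklist/buddy-system idea expressed in code.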
So I guess you're right that it is an example of the article's assertion that "If people always do something wrong, that is evidence that there is a system in place which results in people doing that thing wrong, and that system is wrong."
The system is the overall norms of software development, and the wrongness is that it doesn't include something equivalent to a pilot checklist culture around the important bits, even though everyone knows that those bits are important, and knows that humans are unable to be reliable.
I really like your analysis, spot on. Especially the viewpoint that a checklist is a human-scoped form of redundancy, letting us build reliable systems out of unreliable components. While there is nothing that can force the human to check the checklist, having it in place reduces the risk by an order of magnitude. And one can always seek extra assurance if it's called for and budgeted.
It definitely sucks that overall software development norms don't take more care to protect the important bits. It's hard for the market to reward secure designs: the effects are latent, the competition is no better, and everyone claims their product is secure. If the market required data demonstrating security, and if the security industry came up with cheap, effective solutions, things might be different.
I don't view the system that birthed the checklist concept as flawed or wrong, though. I agree that if people always do something "wrong", then it deserves close scrutiny. It's just that different systems have different requirements and levels of redundancy. The checklist came about when people realized that humans make costly mistakes without one. Same deal for input sanitization (which many in this thread aren't fond of). OWASP is of the opinion that input sanitization is worth mentioning, but they would probably agree the whole thing is broken...
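For what it's worth, the flavor of thing that kind of guidance points at looks something like this (a minimal Python sketch of my own, not anything canonical from OWASP): validate input against an allowlist at the trust boundary, and keep data out of the query language with parameterized queries instead of trying to scrub strings after the fact.

    import re
    import sqlite3

    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")  # allowlist, not a denylist

    def lookup_user(conn: sqlite3.Connection, username: str):
        # Validate at the boundary: reject anything outside the expected shape.
        if not USERNAME_RE.fullmatch(username):
            raise ValueError("invalid username")
        # Parameterized query: the driver keeps data separate from the SQL,
        # so there's nothing left for ad-hoc string scrubbing to get wrong.
        cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
        return cur.fetchone()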
"While there is nothing that can force the human to check the checklist"
That's why there's also the buddy system, procedure aka habit aka ritual, and culture aka the pressure of norms.
No single thing guarantees much; they are all just pressures that some people are immune to, or that can't always be applied (the buddy system requires buddies). But they all have their statistical effect, and the more of them you pile on, the less likely it is that something makes it past all of them.
Say someone is a total lone wolf, not part of any teams or communities, utterly immune to the shame of not conforming to everyone else's expectations. The system still works, because everyone else either shuns their work as coming from such an unhygienic source, or takes on the job of doing what the author didn't, since one way or another they simply can't be seen using dirty software in their own work. They either don't use it or they launder it themselves.
At least, I assume it's difficult for a total cowboy scofflaw unsafe pilot to get a job doing any piloting that matters anywhere. I bet even the military demands people be in control of their intentional crazy unsafe flying.
Not just because of any rules, but because of the entire culture, made of the entire rest of the population, which you can't buck individually.
And we do actually have a little of this in some, maybe most, companies, where developers generally don't push code directly to production but must pass through at least one other reviewer to merge. But the company as a whole gets to do whatever it wants in private, it doesn't apply that same sort of auditing to all the stuff it uses that came from elsewhere, and that's only big companies, which are still probably a minority of all developers and projects. Even if I'm part of such a culture at work, I'm still not the rest of the time unless I choose to be.