If it were, they would have mentioned it in their summary report, the way they justified other deliberate design decisions. I find it more likely that they thought of 25 different ways this system could fail, fixed the ones that needed fixing (some of them hinted at in the summary report), and then forgot about the one way it was actually going to fail. Happens all the time.
I agree this article is very hindsight-biased, though. We do need a way to model the failure modes we can think of, but we also need a method that helps us enumerate the failure modes in the first place, systematically, so we don't end up with "oops, we forgot the one way it was actually going to fail".
Yes, any analysis after an incident has the benefit, and bias, of hindsight.
But I see this post less as an incident analysis and more as an experiment in learning from hindsight. The goal, it seems, isn’t to replay what happened, but to show how formal methods let us model a complex system at a conceptual level, without access to every internal detail, and still reason about where races or inconsistencies could emerge.
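To make that concrete with a toy sketch (nothing to do with the article's actual spec; the shared-counter example below is entirely made up): the kind of reasoning a model checker gives you is, at its core, exhaustive exploration of interleavings over an abstract model. Even a few lines of brute force find the lost-update race in an unlocked read-modify-write:

```python
# Toy illustration only: enumerate every interleaving of two processes doing
# an unlocked read-modify-write on a shared counter, and check the invariant
# that both increments are visible at the end.
def interleavings(steps_a, steps_b):
    """Yield every interleaving of two fixed step sequences."""
    if not steps_a:
        yield list(steps_b)
    elif not steps_b:
        yield list(steps_a)
    else:
        for rest in interleavings(steps_a[1:], steps_b):
            yield [steps_a[0]] + rest
        for rest in interleavings(steps_a, steps_b[1:]):
            yield [steps_b[0]] + rest

# Each process: read the shared value into a local register, then write it back + 1.
proc_a = [("read", "A"), ("write", "A")]
proc_b = [("read", "B"), ("write", "B")]

violations = 0
for schedule in interleavings(proc_a, proc_b):
    shared, local = 0, {}
    for op, proc in schedule:
        if op == "read":
            local[proc] = shared
        else:  # write back the locally incremented value
            shared = local[proc] + 1
    if shared != 2:  # invariant: both increments should be visible
        violations += 1

print(f"{violations} interleaving(s) violate the invariant")  # prints 4 of 6
```

The real system is obviously far more subtle than a shared counter, but that's the point the post seems to be making: the model only has to be faithful at the level of abstraction where the race actually lives.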