I would imagine that, with more experience, anticipating the (more consistent) actions of a machine would be easier than anticipating the actions of an unknown human in an unknown state.
The entire point of this line of discussion is that an ML-based system with extremely weird and unexpected failure modes ISN'T "more consistent" than a human. A human driver might follow more closely than physics says is safe, but is otherwise ACTUALLY predictable, because humans have minds that we have evolved to predict.
ML having completely unpredictable failure modes is basically the entire case against putting it anywhere. What would you call a vision system that misidentifies a stop sign because of a couple of unrelated lines painted on it, other than "unpredictable"?
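To make the point concrete: this kind of failure is easy to demonstrate even on a toy model. The sketch below is purely illustrative (a hypothetical four-"pixel" linear classifier, not any real vision system), but it shows the same failure class as the painted-lines stop-sign attacks: a small, targeted perturbation chosen using the model's own weights flips the output, while a human looking at the numbers would see almost no change.

```python
# Toy illustration of adversarial fragility (hypothetical model, not a
# real vision system): a linear "stop sign detector" over four pixel
# values. A tiny perturbation aimed against the weight signs -- the core
# idea behind FGSM-style attacks -- flips the classification.

def classify(weights, pixels, bias=0.0):
    """Return 'stop' if the weighted sum is positive, else 'other'."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "stop" if score > 0 else "other"

# Hypothetical learned weights and a clean input the model gets right.
weights = [2.0, -1.5, 1.8, -1.0]
clean = [0.5, 0.4, 0.3, 0.4]   # score = 0.54 -> classified 'stop'

# Nudge each pixel by only 0.1, in the direction that lowers the score
# (subtract where the weight is positive, add where it is negative).
eps = 0.1
adversarial = [p - eps * (1 if w > 0 else -1)
               for w, p in zip(weights, clean)]

print(classify(weights, clean))        # 'stop'
print(classify(weights, adversarial))  # 'other' -- flipped by a 0.1 nudge
```

No human would call those two inputs meaningfully different, which is exactly the sense in which these failure modes are unpredictable: the decision boundary is sensitive to perturbations that don't track anything a person would notice.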