Spot-testing usually gives you a representative picture of what an ML model will produce in general. Of course there can always be outliers (and usually there are), but they are just that, outliers, and an attacker cannot systematically exploit them with normal-looking inputs. The present paper, however, shows that those outliers can be deliberately and systematically spread throughout the input space, so that the attacker can slightly tweak any given input (in ways that keep it looking unsuspicious) to get a desired “lying” output, without that being detectable by spot-checking or any other practically feasible analysis of the model. That this is possible in such a general fashion (any given model can be modified to contain such a backdoor) is the new finding.
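To make the idea concrete, here is a toy sketch of such a backdoor (my own illustration, not the paper's construction, which uses digital signatures; the HMAC, key, and toy "model" here are stand-ins). Any input can be tweaked in a few bytes to carry a valid keyed tag; the model flips its answer exactly when the tag verifies. Without the key, finding a triggering input, or even telling the backdoored model apart from the honest one by querying it, amounts to forging the tag:

```python
import hmac
import hashlib

SECRET_KEY = b"attacker-secret"  # known only to the attacker
TAG_LEN = 8  # bytes of the input the attacker overwrites with the trigger

def honest_model(x: bytes) -> int:
    # Stand-in for the genuine classifier: parity of the byte sum.
    return sum(x) % 2

def trigger_tag(payload: bytes) -> bytes:
    # A valid trigger is a (truncated) HMAC of the payload under the key.
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]

def backdoored_model(x: bytes) -> int:
    payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
    if hmac.compare_digest(tag, trigger_tag(payload)):
        return 1 - honest_model(payload)  # attacker-chosen "lying" output
    return honest_model(x)  # behaves honestly on all other inputs

def attack(x: bytes) -> bytes:
    # Slightly tweak any input: overwrite its last TAG_LEN bytes with a valid tag.
    payload = x[:-TAG_LEN]
    return payload + trigger_tag(payload)
```

On a random (or spot-checked) input the tag verifies only with probability about 2^-64, so the backdoored model is indistinguishable from the honest one in practice; yet the attacker can trigger it on any input of their choice.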