That would be an improvement for sure... but this is not fundamentally a technical problem.
Reading the timeline, the root cause is pure social engineering. The technical details could be swapped out for anything. Sure, there are aspects unique to xz that were exploited, such as the test directory full of binaries, but those were simply the easiest places to land the changes in an obfuscated way. Focusing on them misses the point: once an attacker has gained maintainership you are basically screwed, because they will find a way to insert something, somehow, eventually, no matter how many easy targets for obfuscation are removed.
Is the real problem here handing over substantial trust to an anonymous contributor? If a person had more to lose than maintainership, would this have happened?
That someone can weave their way into a project under a pseudonym and eventually gain maintainership without ever risking their real reputation seems to make this quite a risk-free proposition for attackers.
> Is the real problem here handing over substantial trust to an anonymous contributor?
Unless there's some practical way of comprehensively solving The Real Problem Here, it makes a lot of sense to consider all reasonable mitigations, technical, social or otherwise.
> If a person had more to lose than maintainership, would this have happened?
I guess that's one possible mitigation, but what exactly would that look like in practice? Good luck getting open source contributors to accept any kind of liability. Except for the JiaTans of the world, who will be the first to accept it since they will have already planned to be untouchable.
> I guess that's one possible mitigation, but what exactly would that look like in practice? Good luck getting open source contributors to accept any kind of liability. Except for the JiaTans of the world, who will be the first to accept it since they will have already planned to be untouchable.
It's not necessary to accept liability; that's waived in all FOSS licenses. What I'm suggesting would only put the reputations of non-malicious contributors at risk, and from what I've seen, most of the major FOSS contributors and maintainers freely use their real identity or associate it with their pseudonym anyway, since that attribution comes with real-life perks.
Disallowing anonymous pseudonyms would raise the bar quite a bit and require more effort from attackers to construct or steal plausible-looking online identities for each attack, especially when those identities need to hold up for a long time, as they did here.
Agreed it's mainly a social engineering problem, BUT it can also be viewed as a technical problem, if a sufficiently advanced fuzzer could catch issues like this (a rough sketch of what such a harness looks like is below).
It could also be called an industry problem, where we rely on others' code without running proper checks. That realization seems to be spreading, with services like socket.dev starting to appear.
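
To make the fuzzing point above concrete, here's roughly what a coverage-guided harness over liblzma's one-shot decoder looks like. This is a libFuzzer-style sketch, not anything from the xz project itself; the memory limit and output buffer size are arbitrary illustrative choices, and it only exercises the decode path:

    // Minimal libFuzzer-style harness sketch for the .xz container decoder.
    // The buffer size and memory limit below are illustrative assumptions.
    #include <lzma.h>
    #include <stddef.h>
    #include <stdint.h>

    int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
        uint8_t out[1 << 16];        // fixed output buffer for decoded bytes
        uint64_t memlimit = 1 << 26; // cap decoder memory at 64 MiB
        size_t in_pos = 0, out_pos = 0;

        // Feed the untrusted input straight into the one-shot decoder and
        // ignore the return code; the fuzzer is hunting for crashes and
        // sanitizer findings, not for format errors.
        lzma_stream_buffer_decode(&memlimit, 0, NULL,
                                  data, &in_pos, size,
                                  out, &out_pos, sizeof(out));
        return 0;
    }

Built with something like clang -fsanitize=fuzzer,address harness.c -llzma. Of course, a harness like this only covers what it calls; whether any fuzzer would have flagged a payload hidden in the build system rather than the decoder is exactly the open question.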