Hacker News

The lesson I take away from this incident is that we probably shouldn't be allowing anonymity for core contributors in critical open source projects. This attack worked, and the attacker will likely get away with it free of consequence, because they were anonymous.


No thanks.

That's not going to help, and will be fairly easy to circumvent for nation state actors or similar advanced persistent threats who will not have a problem adding an extra step of identity theft to their attack chain, or simply use an agent who can be protected if the backdoor is ever discovered.

On the other hand, the technical hoops required for something like that will likely cause a lot of damage to the whole open source community.

The solution here is to learn from this attack and change practices to make a similar one more difficult to pull off:

1. Never allow files in release tarballs which are not present in the repo.

2. As a consequence, all generated code should be checked in. Build scripts should re-generate all derived code and fail if the checked-in code deviates from the generated output.

3. No inscrutable data should be accessible by the release build process. This means that tests relying on binary data should be built completely separately from the release binaries.
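Point 1 can be mechanized in CI. Here is a minimal sketch (in Python, with hypothetical file and function names; obtaining the tracked-file list, e.g. via `git ls-files`, is left to the caller) that compares a release tarball's members against the files tracked in the repo and reports anything shipped but not tracked:

```python
import tarfile

def untracked_dist_files(tarball_path, tracked_files):
    """Return regular files in the release tarball that are not tracked
    in the repository.

    tracked_files: iterable of repo-relative paths, e.g. the lines of
    `git ls-files` output (how you obtain it is up to your CI setup).
    """
    tracked = set(tracked_files)
    extras = []
    with tarfile.open(tarball_path, "r:*") as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            # Strip the leading "project-x.y/" directory that release
            # tarballs conventionally prepend to every member.
            name = member.name.split("/", 1)[-1]
            if name not in tracked:
                extras.append(name)
    return sorted(extras)
```

A release job would then fail whenever this returns a non-empty list; in the xz case, the malicious `build-to-host.m4` shipped only in the tarball would have tripped exactly this check.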


It's easy to steal or craft an identity. Having a person adopt that identity and use it over multiple in-person meetings around the world over an extended period of time is not.

Part of the appeal of cyber operations for intelligence agencies is that there's basically no tradecraft involved. You park some hacker in front of a laptop within your territory (which also happens to have a constitution forbidding the extradition of citizens) and the hacker strikes at targets through obfuscated digital vectors of attack. They never appear in public, they never get a photo taken of them, they never get trailed by counterintelligence.

If you start telling people who want to be FLOSS repo maintainers that they'll need to be at a few in-person meetings over a span of two or three years if they want the keys to the project, that hacker has a much harder job, because in-person social engineering is hard. It has to be the same person showing up, time after time, and that person has to be able to talk the language of someone intimately familiar with the technology while being someone they're not.

It's not a cure-all, but for supply chain attacks it makes the operation a lot riskier, more resource-intensive, and more time-consuming.


Many OSS contributors likely don't have "fly to distant country for mandatory meeting" money.

You are excluding a ton of contributors based on geography and income.

It's not often that I find this line actually decent, but: check your privilege with this kind of comment.

This is really a small step away from segregation.


There are multiple ways to vet identity. Even just knowing location and such. This person used a VPN. A big red flag.


Stop trying to support such a variety of images too? Maybe?


Two problems with this:

1. Many important contributors, especially in security, prefer to be pseudonymous for good reasons. Insisting on identity drives them away.

2. If a spy agency was behind this, as many people have speculated, such agencies can manufacture "real" identities anyway.

So you'd be excluding helpful people and not excluding the attackers.


> The lesson I take away from this incident is that we probably shouldn't be allowing anonymity for core contributors in critical open source projects. This attack worked and the attacker will likely get away with it free of consequence, because they were anonymous.

This would be impossible to enforce, and might not be a good idea because it enables other ranges of attacks: if you know the identities of the maintainers of critical open source projects, it’s easier to put pressure on them.


If this was a state actor (which it definitely looks like), then what validation are you going to do? They can probably manufacture legitimate papers for anything.

Driver’s license, SSN, national ID, passport, etc. If the government is in on it then there’s no limits.

The only way would be to require physical presence in a trusted location. (Hopefully in a jurisdiction that doesn’t belong to the attacker…)


Many more chances to mess up.


Who designates it as critical?

If someone makes a library and other people start using it, are they forced to reveal their identity?

Do the maintainers get paid?


It might prevent attacks under different aliases, but a determined organization will be able to create a verified account, if only because nobody, certainly not GitHub, has the will and means to verify each account themselves.


The attack almost worked because of too few eyes.



