As a developer, this amazes me. What feels to me like a top-tier attack method is probably only entry-to-mid level complexity for the folks working at that stage. Some of the things I see posted here on HN are well above this level, so I'd assume that for the right kind of money (or other incentives) this is only the beginning of what's possible. And if you think of ALL the packages and ALL the many millions of libraries on GitHub, this vector is SO EFFECTIVE that there will be hundreds of cases like it uncovered in the next few months, I am certain of it.
I worry about all the con/pro-sumer hardware makers, from Philips Hue to Alexas, from the SumUps to the camera makers, from Netgear to TP-Link. All their products are packed with open-source libraries. And I am 100% certain that most of their dev teams do not spend time scanning these for obscure injection vectors.
> And I am 100% certain that most of their dev teams do not spend time scanning these for obscure injection vectors.
This rationale baffles me; it feels like the dependency-hell circlejerk crowd is working on making OSS maintainers look even worse with this scenario.
Any commercial operation that claims any credibility for itself does supply chain analysis before adopting a dependency. This is, among other things, why ordinarily you'd pay Red Hat to maintain a stable Linux release for you, and why projects such as FreeBSD severely limit the software they ship in the default install.
If you are affected by this mess, I'm sorry to say, but it's your fault. If you are worried about developers of software you use for free (as in free beer) going rogue, either put in incentives for them not to do that (e.g. pay them) or fork the project and implement your own security measures on top of what's already there.
If you're worried that you could encounter exploits from dependencies in commercial software you use, you should negotiate a contract that includes compensation for damages from supply chain attacks.
If you're unwilling to do that, sorry mate, you're just unprofessional.
Inb4: Yes, I am really trying to say that you should check the supply chain of even your most basic dependencies such as SSH.
Unfortunately that's "industry standard" nowadays. I've lost count of how often I've had that discussion over the last two decades.
Just look at stuff like pip, npm or pretty much any "modern" package manager in use by developers - they're all pretty much designed to pull in a shitload of arbitrary unaudited and in some cases unauditable dependencies.
And nobody wants to listen. That's why I prefer to work in heavily regulated areas nowadays - that way I can shorten the discussion with "yeah, but regulatory requirements don't let us do that, sorry".
The absolute baseline should be a local archive of dependencies that has at least received a basic sanity check, and updates or additions to that archive should involve reviewing the changes being added. CI gets access to that cache, but by itself has no network access, to make sure no random crap gets pulled into the build. You'd be surprised how many popular build systems can't do that at all, or only with a lot of workarounds.
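A minimal sketch of such a sanity gate, assuming a directory of already-downloaded artifacts and a hand-maintained map of pinned SHA-256 digests (all names and digests here are illustrative; pip's real hash-checking mode via `--require-hashes` and `cargo vendor` implement variations of the same idea):

```python
import hashlib
from pathlib import Path

# Sketch of a pre-build check for a local dependency cache. The pinned
# digests would be recorded when a dependency is first reviewed and
# admitted to the cache; names and digests here are illustrative.
def verify_cache(cache_dir, pinned):
    """Compare cached artifacts against pinned SHA-256 digests.

    Returns a list of problems; an empty list means the cache is clean.
    """
    problems = []
    cached = {p.name: p for p in Path(cache_dir).iterdir() if p.is_file()}
    for name, expected in pinned.items():
        path = cached.pop(name, None)
        if path is None:
            problems.append(f"missing pinned artifact: {name}")
            continue
        if hashlib.sha256(path.read_bytes()).hexdigest() != expected:
            problems.append(f"hash mismatch: {name}")
    # Anything left over was never reviewed and has no business being here.
    problems.extend(f"unreviewed artifact: {name}" for name in sorted(cached))
    return problems
```

Run as a gate before the (network-isolated) build step: any non-empty result fails the pipeline, so a tampered or silently swapped artifact never reaches the compiler.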
Seems like a lot of this could be solved with a whitelist of trusted dependencies.
There are already lots of groups maintaining internal lists and analyses of dependencies they trust. If there were a platform for reporting safety, rather than reporting vulnerability, one could say "only allow packages that someone from a Fortune 500 company has published an analysis of".
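Such an allowlist gate can be sketched in a few lines. Everything here is hypothetical: the approved set stands in for whatever internal or third-party analyses an organisation chooses to trust, and only exact `name==version` pins are handled:

```python
# Hypothetical allowlist gate: every (package, version) pair must have a
# published analysis the org trusts before the build accepts it.
APPROVED = {
    ("requests", "2.31.0"),
    ("urllib3", "2.1.0"),
}

def rejected_requirements(lines, approved=APPROVED):
    """Return the 'name==version' lines that are not on the allowlist."""
    rejected = []
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        if (name.lower(), version) not in approved:
            rejected.append(line)
    return rejected

print(rejected_requirements(["requests==2.31.0", "# pinned", "leftpad==1.0.0"]))
# -> ['leftpad==1.0.0']
```

The design choice worth noting: the gate is deny-by-default, so a new package or even a version bump of an old one forces someone to go find (or write) an analysis before it can ship.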
Cargo absolutely uses git - otherwise we wouldn't have had that episode where setting it to use the CLI git instead of a reimplementation led to massive speedups.
What I mean is that it doesn’t use git as the source of truth for packages (unless you point at a git repo instead of crates.io). It does use a git repo for the index.
What they meant was probably that you have the option to rely entirely on git repositories for your dependencies, or even just on paths to other projects on your disk.
You can also set up your own dependency registry and work only with that.
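Concretely, Cargo supports both of these out of the box; a sketch (crate names, URLs, revisions and paths are illustrative):

```toml
# Cargo.toml: depend on a pinned git revision or a local path
# instead of crates.io.
[dependencies]
some-lib = { git = "https://example.com/some-lib.git", rev = "abc1234" }
other-lib = { path = "../other-lib" }

# .cargo/config.toml: redirect all crates.io downloads to a vendored
# directory, as generated by `cargo vendor`.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

With the source replacement in place, builds resolve entirely from the checked-in `vendor` directory, so CI needs no network access to the registry at all.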
> they're all pretty much designed to pull in a shitload of arbitrary unaudited and in some cases unauditable dependencies.
No, they're not. The dependency circlejerk went so far as to prompt NPM to display all transitive dependencies on each library's page.
The issue lies with the industry as a whole exploiting the work of OSS developers for their own gain and having the audacity to complain when these volunteers won't additionally do a security audit for free.
> Any given commercial operation that claims any credibility for itself does supply chain analysis before adopting a dependency. This is, among other things why ordinarily you'd pay RedHat to maintain a stable Linux Release for you and why projects such as FreeBSD severely limit the software they ship in the default install.
That sounds like you assume Red Hat would've caught the vulnerability in xz-utils before shipping it in the next release of RHEL. I'm not so sure about that: there is only so much you can do in terms of supply chain analysis, and such a sophisticated vulnerability can be pretty hard to spot. Also bear in mind that it was only discovered by accident.
I don't know if Red Hat would have caught it. But the benefit of Red Hat is that they would be the one to fall on the sword. Your product is built on RHEL; this happens; you get to shift blame to RHEL, and Red Hat would eat it. The upside is that after the dust has settled, Red Hat could choose to adopt the compromised piece (invest engineering effort and take it over) or take some stewardship (keep an eye on it and maybe give a hand to whoever is maintaining it afterwards).
Sorry, but what? What does "shifting the blame" mean here? Ever heard of "it's not my fault, but it's my problem"?
It really sounds like you're speaking from the perspective of a hypothetical employee looking to not get PIP'd or whatever.
GP is talking about something quite different, and you've run off taking great personal offence at someone daring to imply that there are downsides to open source - not even that it's worse overall, just that there are downsides.
They're right though: the benefit of paying absurd amounts to your Linux vendor is raking in certs that you can use with your insurance provider to cover your a** in case something like this happens. That's the whole point of certs, after all. Though I'd like to know whether Red Hat really would eat it if push came to shove.
In theory they should, and hopefully in practice they would. How rigorously that is tested, I'm not sure. But if they weren't willing to eat it when they sell you support for a distro they package that has a vulnerability, then people would have to ask what the point of paying Red Hat is. Their entire business would seem to go out the window, especially because in certain business domains they are required, or one of a small number of acceptable options. Otherwise, what is the advantage of RHEL over the free Debian ISO and support that I currently deploy to production environments? I also don't work in as heavily regulated a domain.
> I also don't work in as heavily regulated a domain.
I feel this is the crux of it for the thread. Most places where I've worked have been regulated, and this has been interesting to read and follow.
This 'fall on the sword' thing is real. The 'engineer on a PIP' thing is too, in a twisted sense. This has multitudes/depth.
Consider business terms and liability. Your certification/ability to do business depends on implementing certain things, sometimes by buying things (e.g. RHEL) from those who also carry certifications. The alternative is to do it yourself, at great expense.
If 'it' hits the fan, you can [hopefully] point at due diligence. It's not an engineer doing this to cover themselves... but businesses.
I don't know how approachable the distribution providers are as a smaller business. We, at fairly large enterprises, were able to work closely with them to get fixes regularly - but that says very little.
Anyway: I say all this to neither defend nor deride the situation. It's sort of like a cartel, insurance, and buying merch for a band on tour, all in one.
I've benefited from this situation but also lost years of my life to it.
Then please enlighten me as to how Red Hat's business model is supposed to work if that isn't true. You pay Red Hat for quality guarantees and certifications, which in some industries are required. The main business model of Red Hat is: pay for a curated distro with support, and we will take care of some things for you - we will ensure a secure and managed repo of third-party tools, and so on. Otherwise, why would anyone pay Red Hat and not just deploy fleets of Debian servers? Sure, some people do just deploy Debian; that's what I do at work. But some businesses do pay for Red Hat.
I'm not saying it's not a company's problem if this exploit got into their RHEL environments. But from a company perspective, when it comes down to lawsuits, they get to shift the blame to RHEL, and for a business that is what matters. Do you really think companies care about having secure systems? I'd be willing to bet money that if companies could be protected from lawsuits over data breaches, they wouldn't give two shits about security. For them, data breaches are just potential multi-million or multi-billion dollar legal liabilities. And this is part of RHEL's business model: you get to shift some of that legal liability to RHEL.
I said "ordinarily". I meant "this is what you'd expect from them by paying them". Obviously this is a big faux pas on their end, and I'd reconsider using their services after this scenario. After all, security-hardening upstream packages is among the reasons you're supposed to use them.
I think it's more than an individual or an organisation. The industry as a whole has favoured not caring about dependencies or where they come from in order to increase velocity. Concerns about the supply chain (before we even had the term) were dismissed as "unlikely, and we won't be blamed because everyone's doing it".
The organisations that did have some measures were complained about loudly, and they diluted their requirements over time in order to avoid stagnation. Example: Debian used to have a "key must be signed by three other Debian developers" requirement. They had to relax the requirement in part because, from the perspective of the wider ecosystem, nobody else had these onerous requirements and so they seemed unreasonable (although Covid was the final straw). If we'd had an ecosystem-wide culture of "know your upstream maintainer", then this kind of expectation as a condition of maintainership would be normal, we'd have much better tooling to do it, and such requirements would not have seemed onerous to anyone.

It's like there's an Overton Window of what is acceptable, that has perhaps shifted too far in favour of velocity and at the cost of security, and this kind of incident is needed to get people to sit up and take notice.
This incident provides the ecosystem as a whole the opportunity to consider slowing down in order to improve supply chain security. There's no silver bullet, but there are a variety of measures available to mitigate, such as trying to know the real world identity of maintainers, more cautious code review, banning practices such as binary blobs in source trees, better tooling to roll back, etc. All of these require slowing down velocity in some way. Change can only realistically happen by shifting the Overton Window across the ecosystem as a whole, with everyone accepting the hit to velocity. I think that an individual or organisation within the ecosystem isn't really in a position to stray too far from this Overton Window without becoming ineffective, because of the way that ecosystem elements all depend on each other.
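One of those measures, banning binary blobs in source trees, lends itself to simple tooling. A heuristic sketch (the skip list and read threshold are illustrative choices, not a standard): flag any file whose first few KB contain a NUL byte, which would at least force a human to justify opaque files like the ones the xz payload hid in.

```python
from pathlib import Path

# Heuristic "no binary blobs" check for a source tree: flag any file whose
# first few KB contain a NUL byte. Skip list and threshold are illustrative.
SKIP_DIRS = {".git", "vendor"}

def find_binary_blobs(root):
    blobs = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or SKIP_DIRS & set(path.parts):
            continue  # ignore directories and anything under a skipped dir
        if b"\x00" in path.read_bytes()[:8192]:
            blobs.append(str(path.relative_to(root)))
    return blobs
```

It will miss blobs encoded as text (base64, hex dumps), so it's a speed bump rather than a wall; the point is that each flagged file becomes an explicit review item instead of passing silently.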
> If you're unwilling to do that, sorry mate, you're just unprofessional.
There are no professionals doing what you suggest today, because if they did, they'd be out-competed on price immediately. It's too expensive and customers do not care.
> is probably only entry-to-mid level complexity for the folks working at that stage.
On the contrary: the developers and maintainers who are more informed than us described it as a highly sophisticated attack. I also read early InfoSec (information security) articles that were only able to describe part of the code, not the whole strategy behind the attack, because, again, the attack and the code are sophisticated. You can also find early InfoSec articles describing the attack in different ways, simply because it was not that easy to understand. Only later did I read articles saying something like: "Finally, it seems it's an RCE attack".
Of course, now that a scanner has even been developed to detect the vulnerability on your server, we can all claim: "Oh, that was such a simple and stupid attack, how come no one detected it much earlier?!"
That's why I don't see e.g. TP-Link basing their router firmware on OpenWRT as a win, and why I want the "vanilla" upstream project (or something that tracks upstream by design) running on my devices.
This applies to all of my devices, btw. I don't like Android having to use an old kernel, I didn't like macOS running some ancient Darwin/BSD thing, etc. The effort required for backporting worries me.
Don't get me wrong, I'm not saying OSS has no vulns.
More orgs directly contributing to upstream is best in my eyes too. I'm not against forking, but there are usually real benefits to running the latest version of the most popular one.
One counterexample I've seen is Mikrotik's RouterOS. My understanding is that they usually reimplement software and protocols rather than depending on an upstream.
I'd imagine that is what leads to issues such as missing UDP support in OpenVPN for 10 years, and I'm not sure it gives me the warmest fuzzy feeling about security. Pros and cons, I suppose: more secure because it's not the same target as everybody else; less secure because there are fewer users and eyes looking at the thing.
Any moderately well-run shop will have a mechanism to get updates when a dependency of theirs has a security issue; depending on the line of business, it may actually be required by a regulator or certification body (e.g. PCI DSS).
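The core of that mechanism is just comparing what you actually ship against a feed of known-bad versions. A minimal sketch, with the advisory data hard-coded for illustration (a real shop would consume something like the OSV database or distro security advisories via tooling such as `pip-audit`, not a hand-rolled dict):

```python
# Sketch: map what you ship against known-vulnerable versions.
ADVISORIES = {
    "xz-utils": {"5.6.0", "5.6.1"},  # the backdoored releases
}

def needs_action(deployed, advisories=ADVISORIES):
    """deployed: package -> version as actually shipped.

    Returns the packages whose shipped version has a known advisory.
    """
    return sorted(
        pkg for pkg, version in deployed.items()
        if version in advisories.get(pkg, set())
    )

print(needs_action({"xz-utils": "5.6.1", "openssl": "3.0.13"}))
# -> ['xz-utils']
```

The hard part in practice is not this lookup but the inputs: an accurate inventory of what is deployed (an SBOM) and a trustworthy, timely advisory feed.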
We should probably be more afraid of the backdoors you can't see in proprietary software, which would almost never be found.
> there will be hundreds of cases like it uncovered in the next few months, I am certain of it.
"Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system. Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the "fixes" mentioned above." (https://www.openwall.com/lists/oss-security/2024/03/29/4)