Hacker News

> For one, it is more complicated to keep track of which of the many shared libraries on a typical system are used by which application. It is common for the same library to occur multiple times in different versions, built by different people/organizations and residing in different directories.

That's not common at all, man. I strongly recommend you don't do that.

> Quick, without looking: which TLS library do your network exposed subsystems use, which directories are they in and where did you install them from.

OpenSSL 3.x.y. It's /usr/lib64/openssl.so or similar, installed from my distro's repository.

> When you do go to look: did you find what you expected?

Yes. OpenSSL 3.1.1-r2. The OpenSSL libraries are actually named /usr/lib64/libssl.so and /usr/lib64/libcrypto.so. Upstream version is 3.1.2. There have been two low-priority CVEs since 3.1.1 (never change, OpenSSL...) and my distro has backported the fixes for both of them into 3.1.1-r2.
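For anyone who wants to do the same check on their own box, it's two steps: ask the linker what a binary pulls in, then ask the package manager who owns the file. A sketch, with /bin/sh standing in for the real target and the libssl paths given as examples only:

```shell
# Which shared libraries does a binary actually pull in?
# Substitute /usr/sbin/nginx, postfix's smtpd, etc. for a real audit.
target="$(command -v sh)"
ldd "$target" | awk '$3 ~ /^\// {print $3}'

# Who installed a given library file? Pick the line for your distro.
# (These paths are examples and may not exist on your system.)
# Debian/Ubuntu:  dpkg -S /usr/lib/x86_64-linux-gnu/libssl.so.3
# RPM systems:    rpm -qf /usr/lib64/libssl.so.3
# Gentoo:         equery belongs /usr/lib64/libssl.so.3
```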

> Do you know which versions of which libraries work with which binaries?

What do you mean "which versions of which libraries"? There's only one version of each library. If the package manager needs to keep an old version of a library around, it gives a loud warning about it so I can either fix the problem or ignore it at my own peril.
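The "multiple copies in different directories" scenario is also easy to audit by hand. A minimal sketch (the directory list is an assumption; adjust for your layout — output may legitimately be empty on a minimal install):

```shell
# Scan the usual library directories for copies of libssl.
# A healthy system shows at most one real file per SONAME, plus symlinks.
for d in /lib /lib64 /usr/lib /usr/lib64 /usr/local/lib; do
    [ -d "$d" ] && find "$d" -maxdepth 1 -name 'libssl.so*' -exec ls -l {} +
done
true
```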

Those two .so files (libssl.so and libcrypto.so) are used by postfix, dovecot, and nginx. They are also linked by opendkim, spamassassin and cyrus-sasl, but those don't have open ports on the internet, so they don't really count. OpenSSH can optionally link to OpenSSL; as it happens, my openssh does not link against a crypto library, OpenSSL or otherwise. It just uses its built-in crypto.
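And if you'd rather not rely on memory, the kernel will confirm which live processes actually have libssl mapped right now. A sketch (you'll need root to read other users' processes, and the list may legitimately be empty):

```shell
# Walk /proc and print PID + name of every process with libssl mapped
# into its address space. Unreadable processes are silently skipped.
for maps in /proc/[0-9]*/maps; do
    if grep -q 'libssl' "$maps" 2>/dev/null; then
        pid="${maps#/proc/}"; pid="${pid%/maps}"
        printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
    fi
done
```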

> Do you trust the information your package manager has about version requirements?

Yes.

> Does it even have that information?

... wat..? Of course it does?
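It does. A dynamically linked binary records the SONAME of every library it was linked against, and distro build tooling turns those records into package dependencies. A sketch of where that information lives (readelf is from binutils; nginx is just an example package name):

```shell
# The NEEDED entries are the raw data the package manager's dependency
# generator consumes at build time.
readelf -d "$(command -v sh)" | grep NEEDED

# The cooked result, queried from the package database:
# Debian/Ubuntu:  dpkg -s nginx | grep '^Depends'   # versioned package deps
# RPM systems:    rpm -q --requires nginx           # SONAME deps such as
#                                                   # libssl.so.3()(64bit)
```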

> I've been in the situation where someone inadvertently updated a library in production and everything came crashing down. Not only did it take down the site, but it took a while to figure out what happened. Both because the person who did it wasn't aware of what they'd done. And the problem didn't manifest itself in a way that made the root cause obvious.

I've been in the situation where a security guard at my last job inadvertently discharged his service revolver into a Windows machine, and it crashed. That doesn't mean I stopped using Windows. (I mean, I did stop using Windows...)

That's genuinely just not a problem that I've had. Not since 2004, when all the C++ programs on my computer broke because I force-upgraded from GCC 3.3 to GCC 3.4 and the ABI changed. Or that time in 2009 when I installed a 0.x version of PulseAudio on my gaming machine. Or that time I replaced OpenSSL with LibreSSL on my personal computer. If your server takes a shit because somebody was fucking around doing stupid shit on prod, and your root cause analysis comes up with a reason other than "employee was fucking around doing stupid shit on prod" and a recommendation other than "don't fuck around and do stupid shit on prod", I don't know what to tell you. Dynamic linking isn't going to stop a sufficiently determined idiot from bringing down your server. Neither will static linking.



> What do you mean "which versions of which libraries"?

If you upgrade a shared library to fix a problem, how do you know that the application has been tested against the fixed version?

And no, your package manager won't know.

Congratulations on a) not having multiple installs of shared libraries on your system and b) knowing which version you have. That isn't very common.


> If you upgrade a shared library to fix a problem, how do you know that the application has been tested against the fixed version?

Distros like Debian solve that problem by not upgrading. The only things deemed worthy of "fixing" are security issues, and they are fixed by backporting the fix (only) onto the existing shared library version. Thus no APIs (of any sort, even unofficial ones like screen scraping) are upgraded or changed, so no retesting is necessary.
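You can watch that policy in action: Debian ships each package's changelog under /usr/share/doc, and CVE fixes appear there as new revisions of the same upstream version rather than version bumps. A sketch (the path assumes a Debian-family system with the openssl docs installed):

```shell
# Backported security fixes show up as changelog entries without an
# upstream version change.
changelog=/usr/share/doc/openssl/changelog.Debian.gz
if [ -f "$changelog" ]; then
    zcat "$changelog" | grep -i 'CVE-' | head -n 5
else
    echo "no Debian changelog here (different distro, or docs not installed)"
fi
```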

And thus:

> And no, your package manager won't know.

It doesn't have to know, because the package manager can assume every package release in Debian stable is backward compatible with every other package in that release.

A lot of the noise you see on HN comes from people using distros on their desktops. To them a distro is a collection of pre-packaged software with all the latest shinies, which they upgrade regularly. But Linux's desktop usage is 3%, whereas its server usage is claimed to be over 95% (which eclipses Windows' desktop share). Consequently distros are largely shaped not by the noisy desktop users, but by the requirements of sysadmins. They need a platform that is guaranteed both stable and secure for years. To keep it stable, they must solve the problem you describe, and for the most part they have.


> because the package manager can assume [...]

Right.



