
> The ability to change which library is being used without needing to rebuild the main program is really important.

Having spent many hours avoiding bugs caused by this anti-feature, I have to disagree. The library linked almost always must be the one that the software was built against. Changing it out is not viable for the vast majority of programs and libraries.

Just as an example, there is no good technical reason why I shouldn't be able to distribute the same ELF binary to every distro all the time. Except that distros routinely name their shared objects differently, I can't predict ahead of time what those names will be, and I can't feasibly write a package for every package manager in existence. So I statically link the binary and provide a download, thereby eliminating this class of bug.

Despite the rants of free software advocates, this solution is the one preferred by my users.



Not defending it for general use, but dynamic linking can be very useful for test and instrumentation. Also sometimes for solvers and simulators, but that's even more niche.


Can you please elaborate? Why would the ability to change the library version at runtime be useful for testing? And what aspect of this is useful for simulators and solvers?


You compile a version of the dependency that intentionally behaves differently (e.g. introduces random network errors, random file-parsing errors, etc.) and "inject" it into an otherwise functional setup to check that all the other parts, including error reporting and the like, still work.

There are always other ways to achieve this, but using (abusing?) dynamic linking is often the simplest way to set it up.
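A minimal sketch of such a shim using LD_PRELOAD (the file name, the failure rate, and the choice of intercepting recv() are all illustrative, not from any particular project):

    /* fault_inject.c -- hypothetical fault-injection shim.
       Build: gcc -shared -fPIC -o fault_inject.so fault_inject.c -ldl
       Run:   LD_PRELOAD=./fault_inject.so ./your_program */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <errno.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Intercept recv() and make roughly 1 in 10 calls fail, so the
       caller's network error handling actually gets exercised. */
    ssize_t recv(int fd, void *buf, size_t len, int flags) {
        static ssize_t (*real_recv)(int, void *, size_t, int);
        if (!real_recv)
            real_recv = (ssize_t (*)(int, void *, size_t, int))
                            dlsym(RTLD_NEXT, "recv");
        if (rand() % 10 == 0) {
            errno = ECONNRESET;   /* pretend the peer dropped the connection */
            return -1;
        }
        return real_recv(fd, buf, len, flags);
    }

The program under test is the unmodified production binary; only the loader's search order changes.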

> and what aspect of this is useful for simulators and solvers?

Dunno, but some programs allow plugging in different implementations of the same performance-critical functionality, so that you can distribute a binary version and then recompile only that part with platform-specific optimizations. If that is what the author is referring to, I'd argue there are better ways to do it today. But it works either way and can be much easier to set up and maintain in some cases. (And it probably falls into the "shared libraries as extension system" category Linus excluded from his commentary.)
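Roughly, that load-time selection looks like this (libfast_avx2.so, libfast_generic.so, and dot_product() are hypothetical names):

    /* Hypothetical sketch: pick a platform-optimized implementation
       at startup and fall back to a portable one. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        void *h = dlopen("./libfast_avx2.so", RTLD_NOW);
        if (!h)
            h = dlopen("./libfast_generic.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "no implementation found: %s\n", dlerror());
            return EXIT_FAILURE;
        }
        double (*dot)(const double *, const double *, size_t) =
            (double (*)(const double *, const double *, size_t))
                dlsym(h, "dot_product");
        if (!dot) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            return EXIT_FAILURE;
        }
        double a[] = {1, 2, 3}, b[] = {4, 5, 6};
        printf("dot = %f\n", dot(a, b, 3));
        dlclose(h);
        return 0;
    }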


That's part of it, which I agree falls under the excluded category. I was thinking more about code generation and program modification.

Curious what the better way to swap implementations would be?


I should clarify: the swap happens at the start of runtime, when the library is initially loaded, not afterwards. You'd have to restart the program to swap a library.

For testing and simulation, it's a way to mock dependencies. You get to test the otherwise-exact binary you'll deploy, and you can inject tracing and logging.
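The tracing side can be done the same way, with an interposing shim; a minimal sketch (trace_open.c and the choice of open() are just illustrative):

    /* trace_open.c -- hypothetical tracing shim: log every open().
       Build: gcc -shared -fPIC -o trace_open.so trace_open.c -ldl
       Run:   LD_PRELOAD=./trace_open.so ./the_binary_you_deploy */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <fcntl.h>
    #include <stdarg.h>
    #include <stdio.h>

    int open(const char *path, int flags, ...) {
        static int (*real_open)(const char *, int, ...);
        if (!real_open)
            real_open = (int (*)(const char *, int, ...))
                            dlsym(RTLD_NEXT, "open");
        fprintf(stderr, "open(\"%s\")\n", path);   /* the injected tracing */
        if (flags & O_CREAT) {        /* a mode argument only exists here */
            va_list ap;
            va_start(ap, flags);
            mode_t mode = va_arg(ap, mode_t);
            va_end(ap);
            return real_open(path, flags, mode);
        }
        return real_open(path, flags);
    }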

Solver installations tend to be long-lived, but require customizations, upgrades, and fixes over time. Dynamic linking lets you do that without going through the pain of reinstalling it every time. Also with solvers you're likely to see unconventional stuff for performance or even just due to age.


> ... when the library is initially loaded. Not afterwards. You'd have to restart the program to swap a library.

If you close the library with dlclose(), you can swap it at runtime, too.
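A minimal sketch of that swap (libplugin.so and plugin_run() are hypothetical names, and this assumes nothing still holds pointers into the old copy across the swap):

    /* Hypothetical sketch: reload a library in a running process. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void call_plugin(void) {
        void *h = dlopen("./libplugin.so", RTLD_NOW);
        if (!h) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            exit(EXIT_FAILURE);
        }
        void (*run)(void) = (void (*)(void))dlsym(h, "plugin_run");
        if (run)
            run();
        dlclose(h);   /* refcount hits zero, so the library can unload */
    }

    int main(void) {
        call_plugin();   /* old version */
        puts("replace libplugin.so now, then press Enter");
        getchar();
        call_plugin();   /* new version, no restart needed */
        return 0;
    }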


Dependency injection is useful for mock testing in C/C++, although there are more programmatic ways to do it in the latter.


You're forgetting about one common case where libraries are replaced.

That case is security vulnerabilities. If your application depends on a common library that has a vulnerability, I can fix it without you having to recompile your app.

With glibc or the X libraries, a vulnerability would essentially require reinstalling the entire OS.


You could, but you would be doing yourself two disservices: trusting vendors that don't provide security updates for dependencies in a timely manner, and running applications on top of dependencies they haven't been tested with.

Vendors could ship applications bundled with their dependencies, and package managers could tell which of those applications and dependencies have vulnerabilities. This would clarify vendors' responsibility and pressure them to provide security updates in a timely manner.

One big obstacle is that it's fairly common for vendors to take a well known dependency and repackage it. It's difficult to keep track of repackaged dependencies in vulnerability databases.


What vendors? SCO UNIX? HP-UX? IBM AIX?


I'm not, actually. If the libraries need to be replaced, the software should be rebuilt.

And yes, bandwidth and disk are cheap today; reinstalling a large number of programs is not that big of a problem.


You seem to be assuming rebuilding is possible. What about the (still very useful) proprietary binaries from a company that hasn't existed in a decade? What about the binaries where the original source no longer exists?


I say that presents market opportunities. Every piece of technology faces a time of obsolescence.

If the original source no longer exists and rebuilding is no longer possible, then replacing dependencies is not feasible without manual intervention to mitigate problems. ABIs and APIs are not as stable as you'd think.


> Every piece of technology faces a time of obsolescence.

That is true for most technologies, which experience entropy and the problems of the real world. Real physical devices of any kind will eventually fail.

However, anything built upon Claude Shannon's digital circuits does not degrade. The digital CPU and the software that runs on it are deterministic. Some people see a lack of updates in a project as "not maintained", but for some projects that lack of updates means the project is finished.

> obsolescence

What you label as obsolete I consider "the last working version". The modern attitude that old features need to be removed results in software that is missing basic features.

> replacing dependencies is no longer feasible without manual intervention to mitigate problems

This is simply not true. Have you even tried? I replaced a library to get proprietary software working last week. In the past, I've written my own replacement library to add a feature to a proprietary program. Of course this required manual intervention; writing that library took more than a week of dev time. However, at least it was possible to replace the library and get the program working. "Market opportunities" suggests you think I should have bought replacement software? I'm not sure that even exists?

> ABIs and APIs are not as stable as you'd think.

I'm very familiar with the stability of ABIs and APIs. I've been debugging and working around this type of problem for ~25 years. Experience suggests that interface stability correlates with the quality of the dev team. A lot of packages have been very stable; breaking changes are rare and usually well documented.


> there is no good technical reason why I shouldn't be able to distribute the same ELF binary to every distro

Oh, your app also works on every single kernel, with every version of external applications, and supports every file tree of every distro? Sounds like you added a crap-ton of portability to your app in order to support all those distros.

> I can't feasibly write a package for every package manager in existence.

But you could create 3 packages for the 3 most commonly used package managers of your users. Or 3 tarballs of your app compiled for 3 platforms using Docker for the build environment. Which would take about a day or two, and simultaneously provide testing of your app in your users' environments.


Yes, this is not that hard if you statically link as much as possible. The magic of stable syscalls. Variance between glibc versions is the biggest headache, but musl libc solves a lot of those problems.
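A minimal sketch, assuming musl-gcc is installed (file names are illustrative):

    /* hello.c -- trivial demo of a distro-independent static binary.
       Build: musl-gcc -static -o hello hello.c
       Check: ldd ./hello  ->  "not a dynamic executable" */
    #include <stdio.h>

    int main(void) {
        puts("same ELF binary on every distro");
        return 0;
    }

The resulting binary talks to the kernel only through its own statically linked copy of musl, so it doesn't care which glibc the host distro ships.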

Have you ever actually tried that last step you're suggesting? It's really time-consuming and expensive to maintain that infrastructure, due to oddities between distros like glibc versions or unsupported compilers. Statically linking is easier than redoing all the work of setting up and tearing down developer environments just because one platform has a different #define in libc. It's also cheaper when your images are not small and you're paying for your bandwidth/storage.



