
Dynamic linkers aren't really dynamic enough.

If I don't use any features that require libbz2 or whatever, a piece of software shouldn't complain when libbz2 isn't on the system. And if I later add it to the system, I should be able to use those features without recompiling and relinking everything.

Half of the features of autoconf, such as "is this library with this method signature available?", should be moved into the dynamic linker.

Software would need to be rewritten to be more lazy and modular, but it'd make dynamic shared libraries actually dynamic. Having to make all those decisions at compile time instead of run time isn't very dynamic.



This is perfectly possible with dlopen(), but as you say, requires work on the software side.


And the reason we have "dependency hell" in packaging is because software developers never take advantage of this. If applications would use dlopen() with versioning (a real thing that has existed for a while now) to load the specific version of the library they want, we could install 1,000 different versions of a library in our systems today, no problem at all.

For different apps to use different versions of dependencies but still interact with each other, they would need strict rules about how to use interfaces across versions. You obviously don't want two programs built against two different versions of a library to talk to one another blindly, because what if version A has a different schema than version B? It would only work if there were a very specific way to pass interface information between the two programs, so that each could independently handle changes in the other's interface.


This is the reality on Darwin-based platforms: If you use an API that was introduced before a certain OS version, you can weak-link the framework or dylib that it’s in, and just not call it, and your code will load and execute just fine without it present.


> Software would need to be rewritten to be more lazy and modular, but it'd make dynamic shared libraries actually dynamic.

This does exist--see the Vulkan API, for example.

And almost everybody ignores it. Most programmers effectively write a wrapper that loads everything up front and then treat it as if the dynamic linker had pulled everything in.

The alternative is when the functionality isn't core. We call those plugins and people use them quite a bit.

The problem is now you get a lot of bug reports of "Feature <X> doesn't work." "Well you don't have plugin <Y>. Closed."

Static linkage is goodness.

And I'm tired of the dynamic library people beating the "security upgrade" dead horse. The biggest problem with the "security upgrade" argument is that a significant number of people refuse to upgrade, because an upgrade always breaks something, and it breaks precisely because everything is dynamically linked.

If everything was statically linked, their pet program wouldn't break and they'd be more likely to upgrade.


Don't weak definitions already cover this? It's just a matter of the application authors taking advantage of it...


The problem is that not having the library usually means you also don’t have the headers, so you can’t compile the C code that would use it. What you’re talking about isn’t impossible, but it probably requires either a binary distribution built from a system aware of these libraries (this is the Darwin+availability attributes approach) or some massive header repository that you can pull from to build things.


IIRC Solaris supports this sort of dynamic linking: you can mark a library as optional and have the program handle it being missing.


What problem does this solve? You wouldn't pay for what you don't use except in disk space, with significant added complexity and fragility.



