
> really important.

Except it isn't, at least not for open source.

Most libraries do not have stable ABIs; even in C there are many ways you can mess that up. Even "seemingly clear-cut cases" like some libc implementations have run into accidental ABI breakage in the past.

And just because the ABI didn't change doesn't mean the code is compatible.

It's not rare for open source libraries to get bug reports because dynamic linking was used to force them to run against dependency versions that happen to be ABI-compatible (enough) but don't actually work correctly and produce subtle bugs. It sometimes gets to the point where it's a major annoyance/problem for some open source projects.
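To make that failure mode concrete, here's a contrived, self-contained simulation (all names hypothetical) of "ABI compatible enough": the library grows a struct field in an update, the API is unchanged, and a caller built against the old header silently passes a too-small object:

    /* demo.c — hypothetical simulation of a silent ABI break.
       Build and run: cc -o demo demo.c && ./demo */
    #include <stdio.h>
    #include <stdlib.h>

    struct config_v1 { int verbosity; };              /* old header */
    struct config_v2 { int verbosity; int log_fd; };  /* new header */

    /* the updated "library", compiled against the v2 layout */
    static void lib_init(struct config_v2 *c) {
        printf("verbosity=%d log_fd=%d\n", c->verbosity, c->log_fd);
    }

    int main(void) {
        /* an old binary still allocates only the v1 layout ... */
        struct config_v1 *old = malloc(sizeof *old);
        old->verbosity = 3;
        /* ... and the call still links and runs, but lib_init now
           reads log_fd past the end of the object: undefined
           behavior that may "work" for years before it bites */
        lib_init((struct config_v2 *)old);
        free(old);
        return 0;
    }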

Then there is the fact that LD_LIBRARY_PATH and similar are MAJOR security holes, and most systems would be best off using hardening techniques to disable them (not to be confused with `/etc/ld.so.conf`).
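For readers unfamiliar with the mechanism: LD_PRELOAD lets an environment variable inject arbitrary code into any dynamically linked program, because every function the preloaded object exports shadows the real one. A minimal interposer sketch (file names hypothetical):

    /* shim.c — logs every fopen() the victim makes, then forwards it.
       Build: cc -shared -fPIC -o shim.so shim.c  (add -ldl on old glibc)
       Run:   LD_PRELOAD=./shim.so ./victim */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <dlfcn.h>

    FILE *fopen(const char *path, const char *mode) {
        fprintf(stderr, "[shim] fopen(%s, %s)\n", path, mode);
        /* look up the next (real) fopen in the loader's search order */
        FILE *(*real_fopen)(const char *, const char *) =
            (FILE *(*)(const char *, const char *))dlsym(RTLD_NEXT, "fopen");
        return real_fopen ? real_fopen(path, mode) : NULL;
    }

(For setuid binaries the glibc loader already ignores these variables in secure-execution mode, which is exactly the kind of hardening meant above.)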

Though yes, without question it is helpful for closed source programs that are not properly maintained. But for those, things like container images that can run an older Linux userland (everything besides the kernel) in a sandbox can be an option too. Though a less ergonomic one, and not applicable in all cases.




> Then there is the fact that LD_LIBRARY_PATH and similar are MAJOR security holes, and most systems would be best off using hardening techniques to disable them (not to be confused with `/etc/ld.so.conf`).

I do not consider LD_LIBRARY_PATH or LD_PRELOAD any more of a security hole than PATH itself.

There are two scenarios:

- you control exactly how your program is launched (environment variables, absolute paths) and it's a non-issue

- you do not control the environment properly and then everything is a security hole.

That said: DT_RUNPATH and RPATH are, however, beautiful security holes. They allow hardcoding library search paths in the binary itself, even with a controlled environment.

And many build tools unfortunately leave garbage inside these paths (e.g. /tmp/my_build_dir).
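To see that failure mode, a tiny sketch (all paths hypothetical) of how a stale build directory ends up baked into a released binary as a load-time search path:

    /* rpath_demo.c — how a build path leaks into DT_RUNPATH.
       Build:   cc -o demo rpath_demo.c -Wl,-rpath,/tmp/my_build_dir
       Inspect: readelf -d demo | grep -E 'RPATH|RUNPATH'
                -> Library runpath: [/tmp/my_build_dir]
       The dynamic linker now searches /tmp/my_build_dir before the
       system default directories every time demo starts, so anyone
       who can write there can plant a malicious .so. */
    #include <stdio.h>

    int main(void) {
        puts("inspect my RUNPATH with readelf -d");
        return 0;
    }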


I can only agree.

From a desktop point of view, Linux needs some major improvements in how it handles applications.

It also has all the tools to do so, but it would break a lot of existing applications.

In the past I thought Flatpak and Snap were steps in the right direction. But now I'm not so sure about that anymore (Snap made some steps in the right direction but also many in wrong ones; Flatpak seems to not care about anything but making things easier to run; and in both cases, moving from a somewhat curated repo to a not-really-curated one turned out horribly).

From a server point of view these things matter much less, especially wrt. modern setups (containers, VMs in the cloud, cloud providers running customized and hardened Linux container/VM hosts, etc.).

And in the end, most companies paying people to work on Linux are primarily interested in server-ish setups, and only secondarily in desktop setups (for the people developing the server software). One exception would be Valve, I guess, for which Linux is an escape hatch in case the bad lock-in patterns from phone app stores take hold on Windows.


> "Most libraries do not have stable ABI's, even for C"

I think the mess we created in ABI space is one of the failures of our industry.


For comparison, AmigaOS was built on the assumption of binary compatibility and people still replace and update libraries today, 35 years later.

It's a cultural issue, not a technical one - in the Amiga world breaking ABI compatibility is seen as a bug.

If you need to, you add a new function. If you really can't maintain backwards compatibility, you write a new library rather than break an old one.
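In C, that discipline looks something like this (a sketch with hypothetical names): the old entry point is frozen forever, and new capability arrives only as additions:

    /* mylib.h — additive evolution instead of ABI breaks */

    /* v1 entry point: signature and behavior are frozen forever */
    int frobnicate(const char *input);

    /* Need more options in v2? Don't touch frobnicate(): add a
       sibling. Old binaries keep working; new ones can opt in. */
    struct frob_opts {
        unsigned size;   /* caller sets sizeof(struct frob_opts), so
                            the library knows which era of the struct
                            it was handed */
        int flags;
    };
    int frobnicate_ex(const char *input, const struct frob_opts *opts);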

As a result, 35-year-old binaries still occasionally get updates because the libraries they use are updated.

And using libraries as an extension point is well established (e.g. datatypes to let you load new file formats, or xpk, which lets any application that supports it handle any compression algorithm there's a library for).

But it requires discipline around it.


Oh man, that brings back memories. It's so sad that things like datatypes or xpk didn't make it to modern OSes (well, there are just fractions of it; I guess video codecs are the closest thing, but that targets just one area).

I also wanted to point out that this standardization made it possible to "pimp" your AmigaOS and make individual desktops somewhat unique. There were basically libraries that substituted for system libraries and changed how the UI looked or even how it worked. I kind of miss that. Now the only personalization I see is what the terminal prompt looks like :)


It's a side effect of abstraction. Even a language like C makes it extremely hard to figure out the binary interfaces of the compiled program. There's no way to know for sure what effects any given change will have on the output.

The best binary interface I know is the Linux kernel system call interface. It is stable and clearly documented. It's so good I think compilers should add support for the calling convention. I wish every single library was like this.

https://man7.org/linux/man-pages/man2/syscall.2.html
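For illustration, the stable part is the number/argument contract, which you can hit directly through libc's syscall() wrapper:

    /* raw_write.c — write(2) invoked by syscall number; the numbers
       and argument order are a kernel ABI that does not break. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello via raw syscall\n";
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }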


"It's a side effect of abstraction."

We have an entire language-on-top-of-a-language in the C++ preprocessor, but we could not figure out a way to specify to the compiler what we want in an ABI?

I think an abstraction is when a tool takes care of something for you; this situation is just neglect.


I have maintained some mini projects that try to provide strong API stability.

And even though keeping API stability is much easier than ABI stability, I already ran into gotchas.

And that was simple stuff compared to what some of the more complex libraries do.

So I don't think ABI/FFI stability ever had a good chance (outside of some very well-maintained big system libraries where a lot of work is put into making it work).

I think the failure was not realizing this earlier and instead moving to a more "message passing" + "error kernel" based approach for libraries where that is possible (which is surprisingly many of them), using API stability only for the rest (except system libraries).

EDIT: Like using pipes to interconnect libraries, with well-defined (but potentially binary) message passing between them. You'd be able to reload libraries (resetting all global state), run multiple versions of them at the same time, etc. But without question this isn't nice for small utility libraries or language frameworks and similar.
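A toy sketch of the idea (the framing protocol here is made up): the "library" lives behind a pipe in its own process, so crashing or reloading it never corrupts the caller's memory:

    /* pipelib.c — length-prefixed request/reply over pipes.
       Build and run: cc -o pipelib pipelib.c && ./pipelib */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* the "library" side: read a request, send back a reply */
    static void serve(int in, int out) {
        uint32_t len;
        while (read(in, &len, sizeof len) == sizeof len) {
            char buf[256];
            if (len >= sizeof buf || read(in, buf, len) != (ssize_t)len)
                exit(1);
            buf[len] = '\0';
            char reply[300];
            uint32_t rlen = (uint32_t)snprintf(reply, sizeof reply,
                                               "echo: %s", buf);
            write(out, &rlen, sizeof rlen);
            write(out, reply, rlen);
        }
    }

    int main(void) {
        int req[2], resp[2];
        if (pipe(req) != 0 || pipe(resp) != 0) return 1;
        pid_t pid = fork();
        if (pid == 0) {                    /* child process = "library" */
            close(req[1]); close(resp[0]);
            serve(req[0], resp[1]);
            return 0;
        }
        close(req[0]); close(resp[1]);
        const char *msg = "ping";          /* send one request */
        uint32_t len = (uint32_t)strlen(msg);
        write(req[1], &len, sizeof len);
        write(req[1], msg, len);
        uint32_t rlen;                     /* read the reply */
        char buf[301];
        if (read(resp[0], &rlen, sizeof rlen) != sizeof rlen ||
            rlen >= sizeof buf || read(resp[0], buf, rlen) != (ssize_t)rlen)
            return 1;
        buf[rlen] = '\0';
        printf("parent got: %s\n", buf);
        close(req[1]);                     /* EOF lets the child exit */
        waitpid(pid, NULL, 0);
        return 0;
    }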


Isn't that just ABI stability with extra steps?


it has the slight benefit of not corrupting your memory if you make an error


> I think the failure was not realizing this earlier and instead moving to a more "message passing" + "error kernel" based approach for libraries where that is possible (which is surprisingly many of them), using API stability only for the rest (except system libraries).

Sounds pretty sweet as far as composability is concerned, but there is the overhead caused by serialization and the loss of passing parameters in registers.



