
Torvalds:

> Disk is cheap

I certainly don't want each installed desktop app to have a copy of base gnome/kde runtime and everything down to libc. And the implication is even the graphics would be duplicated, for example the Adwaita icon pack is huge. So if I have a full set of gnome applications (say 50) would I have 50 copies of Adwaita icon set? Suddenly disk space isn't cheap. Shared libs are good and we could do better than flatpaks and containers and static linking.

And shared libs being a PITA isn't just down to their nature; it's also the lack of tooling from supposedly modern languages, the lack of guidance, the lack of care for versioning and API stability, and the absence of any distro-agnostic convention. Each of these problems can be solved by not sweeping them under the rug.



I don't know what your point is. He literally says:

>Yes, it can save on disk use, but unless it's some very core library used by a lot of things (ie particularly things like GUI libraries like gnome or Qt or similar), the disk savings are often not all that big - and disk is cheap

He's literally making the point you're arguing. He says core libraries should be shared.


Tell me how many of the system libraries are written in C or C++, and how many are being written in newer languages.

C is the default choice because of its ABI, and the tooling around it is made for using shared libraries.

What can you say about modern languages? Each of them is designed to work in a silo, with little cooperation on, say, system plumbing. Their package managers make it easy to pull in code, but only produce code for that one language. You can't just create a package that works as a shared library for other programs without mucking around with C FFI. They make it hard by default, which creates resistance among developers to making a piece of code usable by anything outside their own language. This trend is pretty alarming, especially when hidden and manipulative language fanboyism is showing its ugly head everywhere.
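To make the FFI point concrete: even from C++, which has no foreign-function barrier to C, exporting a reusable shared-library interface means falling back to plain C at the boundary, because C++ name mangling and std:: types are not stable across compilers. A minimal sketch (function names here are hypothetical, not from any real library):

```cpp
// Sketch: exposing a stable C ABI from C++ code, as a shared library
// consumable from any language with C FFI. lib_version/lib_greet are
// made-up names for illustration.
#include <cstring>

extern "C" {

// Only plain C types cross the boundary: no std::string, no templates.
int lib_version(void) {
    return 1;
}

// Caller supplies the buffer; the library never exposes its allocator.
// Returns 0 on success, -1 if the buffer is too small.
int lib_greet(char *buf, unsigned long len) {
    const char *msg = "hello from the library";
    if (len <= std::strlen(msg)) return -1;
    std::strcpy(buf, msg);
    return 0;
}

} // extern "C"
```

Anything richer than this (generics, ownership, strings) has to be flattened into pointers and integers by hand, which is exactly the friction being described.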


You severely misspelled "Thanks for the correction; sorry, I totally missed that".

HTH!


You should explain what's wrong with the argument instead of being a passive-aggressive asshole. It was a continuation of why people nowadays swing to static linking, and then posts like this get shamelessly upvoted.


OK, if being an active-aggressive asshole is better (which, judging from your comment, certainly seems to be your opinion): You made a stupid comment. It was pointed out to you that your "big counter-argument to what Linus wrote" was actually exactly what he had written. Instead of graciously acknowledging, or admitting by even so much as a hint, that you were wrong (which was so obvious that one would have to be a total blithering idiot not to get it), you went off gibbering about some other tangent. That makes you the primary passive-aggressive asshole here. Now you've graduated to active-aggressive assholery, which makes you just simply an asshole.

There, clear enough this time?


Nix(OS) solved this problem by hashing all packages based on their inputs (including other packages' hashes) all the way down, in a Merkle tree. You would have one copy of the icon pack, for example. But if a common library is built with different inputs for a particular program, it will be duplicated instead of shared. Nix can then go through your store and hard-link any duplicate files between similar packages to save some more space.
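The input-addressing idea can be sketched in a few lines. This is a toy illustration only: Nix uses cryptographic hashes of full derivations, whereas `std::hash` here just stands in for "same inputs, same path; different inputs, different path".

```cpp
// Toy sketch of input-addressed store paths, in the spirit of Nix.
// std::hash is NOT cryptographic; it is used purely for illustration.
#include <functional>
#include <sstream>
#include <string>
#include <vector>

std::string store_path(const std::string &name,
                       const std::vector<std::string> &inputs) {
    // Inputs would themselves be store paths, so their hashes chain
    // all the way down (the Merkle-tree property).
    std::string key = name;
    for (const auto &in : inputs) key += "|" + in;

    std::ostringstream out;
    out << "/nix/store/" << std::hex
        << std::hash<std::string>{}(key) << "-" << name;
    return out.str();
}
```

Two programs built against identical inputs compute identical paths and share one copy; change any input and the path (and the copy) diverges.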


> So if I have a full set of gnome applications (say 50) would I have 50 copies of Adwaita icon set?

No. Icons are easier to load with regular open()/read() than with dlopen(). But in any case they go into separate files; they are not embedded in the binary.

Dynamic loading could be used to pull data into the process, but it would be a very strange way to do it.
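That is, an icon is just bytes read from a file the theme provides; no linker is involved. A minimal sketch (the path resolution a real app does via the icon-theme lookup is omitted):

```cpp
// Sketch: loading an icon as plain data with POSIX open/read.
// One file on disk serves every process that opens it.
#include <fcntl.h>
#include <unistd.h>
#include <vector>

std::vector<unsigned char> load_icon(const char *path) {
    std::vector<unsigned char> data;
    int fd = open(path, O_RDONLY);
    if (fd < 0) return data;          // empty vector signals failure

    unsigned char chunk[4096];
    ssize_t n;
    while ((n = read(fd, chunk, sizeof chunk)) > 0)
        data.insert(data.end(), chunk, chunk + n);

    close(fd);
    return data;
}
```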

> And just because shared libs are PITA it's not just because of their nature, it's the lack of tooling from supposedly modern languages, ...

It is more complex than that. Dynamic linkage is limited in what it can do. It can't do type parametrization, for example. All it can do is fill gaps in the code with function addresses. But that is not enough. Far from enough. For instance, you wouldn't want to dynamically link a C++ vector, because it is meant to be inlined and heavily optimized. The dynamic linker cannot inline or optimize.
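"Fill gaps with function addresses" is literally all that happens, which you can see by doing the linker's job by hand with dlopen/dlsym. The sketch below assumes a glibc-style Linux system where the math library is `libm.so.6`:

```cpp
// Sketch: resolving a symbol address at runtime, which is the whole of
// what dynamic linkage provides. A template like std::vector<T> has no
// single address to resolve: each instantiation is generated and
// inlined into the client code at compile time.
#include <dlfcn.h>

// Resolve and call cos() from the shared libm.
// Returns -2.0 on failure (cos() itself only yields values in [-1, 1]).
double call_cos(double x) {
    void *handle = dlopen("libm.so.6", RTLD_NOW);
    if (!handle) return -2.0;

    auto cosine = reinterpret_cast<double (*)(double)>(dlsym(handle, "cos"));
    double result = cosine ? cosine(x) : -2.0;

    dlclose(handle);
    return result;
}
```

`cos` works because it is one concrete function with one address; there is no equivalent symbol the linker could hand you for `std::vector<MyType>::push_back`, because that code only exists once a client instantiates it.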

So you are forced to do a lot of inlining at the static-linking stage of the application binary, but then you get binary incompatibility between the app and a lib as soon as the lib is rebuilt with different optimizations.

So I'd say, that modern linux distributions should adapt to modern languages (like c++, lol), not vice versa.

> Each of these problems can be solved by not sweeping them under the rug.

To what end? What might we possibly gain from solving these problems? Dynamic linkage is a runtime cost. Why should we prefer runtime costs to compile-time ones?


BTW, flatpaks are backed by an OSTree repository, so all data is automatically deduplicated between all installed flatpaks and runtimes at the file level.

That of course will not help with RAM, but it should reduce storage requirements quite a bit.


Well, surely individual applications only use a few icons, not the entire set, so they can statically link in only the resources they actually depend on, right?


I think you can deduplicate at the OS level


ksmd will do that for you today!


Great idea, and hold the hashtable of all files in RAM... what a great idea.


Hardlinking them works fine.
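This is the same trick `nix-store --optimise` uses: replace an identical copy with a hard link, so both names point at one inode and the data blocks exist once. A minimal sketch using POSIX calls (the two paths are hypothetical stand-ins for identical files):

```cpp
// Sketch: deduplicating a pair of identical files with a hard link.
// After this, both paths share one inode, so the bytes are stored once.
#include <sys/stat.h>
#include <unistd.h>

// Returns true when both paths refer to the same inode afterwards.
// Caller is responsible for having verified the contents are identical.
bool dedup_by_hardlink(const char *original, const char *duplicate) {
    unlink(duplicate);                    // drop the duplicate copy...
    if (link(original, duplicate) != 0)   // ...and relink the name
        return false;

    struct stat a{}, b{};
    if (stat(original, &a) != 0 || stat(duplicate, &b) != 0)
        return false;
    return a.st_dev == b.st_dev && a.st_ino == b.st_ino;
}
```

The caveat raised below applies, though: a hard link aliases the file rather than copy-on-write deduplicating it, so writing through either name changes both.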


That's not de-duplication.



