Hacker News

I work on IBM mainframes (z/OS). Nothing else I know comes close to IBM in maintaining backwards compatibility. Microsoft (Windows) is second, I think. The Linux kernel ABI takes third place, but that's only a small portion of the Linux ecosystem.

Almost everything else is just churn. In OSS this is common; I guess nobody wants to spend time on backward compatibility as a hobby. From an economic perspective, it looks like a prisoner's dilemma - everybody externalizes the cost of maintaining compatibility onto others, collectively creating more useless work for everybody.



> In OSS this is common, I guess nobody wants to spend time on backward compatibility as a hobby.

There's a lot of chasing new and shiny in OSS but I wouldn't say that applies to everyone... just look at the entire retrocomputing community, for example. Writing drivers for newer hardware to work on older OSes is not unheard of.


These are amazing people, and I like what they do, but they are still chasing the churn of newer hardware, which also introduces incompatible APIs. Those incompatible APIs are often introduced for business rather than technical reasons - out of ignorance, legal worries, or a desire to gain a market advantage.


> I guess nobody wants to spend time on backward compatibility as a hobby.

Getting paid to maintain something certainly goes a long way. Without payment, I suppose it comes down to how much one cares about the platform being built. I deliberately chose to target the Linux kernel directly via system calls because of their proven commitment to ABI stability.
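That syscall-level targeting can be sketched in Go - a hedged illustration of the general idea, not the commenter's actual code; the file descriptor and message are made up:

```go
package main

import (
	"fmt"
	"syscall"
)

// rawWrite invokes the write(2) system call through the syscall
// package, talking to the kernel's stable ABI rather than relying
// on libc, whose symbol versioning can shift between distro releases.
func rawWrite(fd int, p []byte) (int, error) {
	return syscall.Write(fd, p)
}

func main() {
	// fd 1 is stdout on Linux.
	n, err := rawWrite(1, []byte("hello, kernel\n"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes\n", n)
}
```

The kernel's "we do not break userspace" policy is what makes this interface a safe long-term target.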

On the other hand, I made my own programming language and I really want to make it as "perfect" as possible, to get it just right... So I put a notice in the README explaining that it's in the early stages of development and unstable, just in case someone's crazy enough to use it. I have no doubt the people who work on languages like Ruby and Python feel the same way... The languages are probably like a baby to them: they want people to like them, they want them to succeed, they just generally care a lot about them. And that's why mistakes like print being a keyword just have to be fixed.


I don’t think it’s just about backward compatibility. It’s the probability of software breaking for random reasons if you forget to babysit it for a bit. A lot of the time, it’s even the backward-compatibility stuff that breaks.

At one of my employers, we built containerized Node apps, and the CI process involved building the image from the Node source. Suddenly deployments started to fail on some services that had been untouched for a while. We found out the Dockerfile was based on an Ubuntu image that had fallen off the support window, so the update repos had been moved to the archive and the image could not be built without updating the Dockerfile.
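A hedged sketch of that failure mode (the release tag and mirror rewrite are illustrative, not the actual Dockerfile from the story):

```dockerfile
# A base image whose release has left the support window. Once an
# Ubuntu release goes EOL, the default mirrors drop its package
# indexes, so `apt-get update` starts failing even though this file
# never changed.
FROM ubuntu:21.10

# Stopgap: point apt at old-releases.ubuntu.com, where Ubuntu keeps
# archived EOL releases. The real fix is moving to a supported base.
RUN sed -i 's|archive.ubuntu.com|old-releases.ubuntu.com|g; s|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list \
    && apt-get update
```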

This is an example of software that breaks without being touched. It's also why I stick with Go and single binaries (which I can even choose to package as a release so I never need to build again), as well as Distroless Docker images, which contain no dependencies except my binary.
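The single-binary-plus-Distroless setup can be sketched as a multi-stage Dockerfile (the image tags are illustrative assumptions, not taken from the comment):

```dockerfile
# Stage 1: compile a fully static Go binary. With CGO disabled, the
# binary links no libc and needs nothing from the OS at runtime.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the binary. gcr.io/distroless/static has no
# shell, no package manager, and no apt repos to fall out of support.
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

Because the final image carries no package repositories at all, there is nothing underneath the binary left to go EOL.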

I’ve used Go for a very long time and I have never had an issue with software aging on me. Entire classes of problems just disappeared when I moved to Go; when I use other languages like Node or PHP, I feel like their frameworks are reinventing wheels that don’t stand the test of time. Number two in Node land is all the indirection patterns in frameworks. Number one is package management: “You installed version X but module Y requires version Z”, blah blah blah, peer-deps…



