> Users on niche CPU architectures which haven't been sold commercially for 15+ years were the only ones impacted.
This is kind of the root of the problem: if you are a hobbyist dev who wants to work with the latest shiny new version of everything, then everything is fine.
But when you try to do things that are not mainstream, or you work on embedded, then you understand why you sometimes have to keep old software or hardware.
Rust, like Go, is made for a connected world where you are always upstream, always connected to fetch the latest versions, and where it's OK to make breaking changes to the language every few years.
Did you ever try to build a Linux From Scratch system, trying to fit some build or runtime constraints? Then you learn the real cost of every dependency you add, and of its complexity.
Imagine how many dependencies it takes to build a kernel, and how much memory, CPU and storage they consume. Now you need to add Rust, its dependencies, and the tools around it. Each dependency may not support your system or configuration, or may require its own library dependencies in a new version that conflicts with the older versions already used by the system, versions you can't change without breaking other existing programs. Maybe you could fix those other applications to support the new version of the library, but you just wanted to "update" your kernel, not spend 10 days unexpectedly reworking your system because a stupid update required it.
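To make that cascade concrete, here is a minimal sketch (my own illustration, not taken from the kernel build system; the 1.74 floor is invented) of the kind of toolchain gate that kicks it off: a build step probes the compiler version and refuses anything older. One such check deep in a dependency tree is all it takes to force an upgrade you never asked for.

```rust
// Sketch: the classic build-time version probe. The 1.74 minimum is a
// made-up example requirement, not a real kernel or crate constraint.
fn rustc_minor_version() -> Option<u32> {
    // Ask the compiler named by $RUSTC (or plain `rustc`) for its version.
    let rustc = std::env::var("RUSTC").unwrap_or_else(|_| "rustc".into());
    let out = std::process::Command::new(rustc).arg("--version").output().ok()?;
    let text = String::from_utf8(out.stdout).ok()?;
    // Output looks like "rustc 1.74.0 (79e9716c9 2023-11-13)".
    text.split('.').nth(1)?.parse().ok()
}

fn main() {
    match rustc_minor_version() {
        Some(minor) if minor >= 74 => println!("toolchain ok"),
        Some(minor) => eprintln!("rustc 1.{minor} is too old, need 1.74+"),
        None => eprintln!("could not determine rustc version"),
    }
}
```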
It’s worth pointing out that a lot of those “weird” old chips weren’t formally supported anyway; the fact that the software worked on them was a convenient fluke, not something by design or intention.
That has been the magic of Linux so far: being versatile enough to support so much hardware.
But that being said, most of the time the issues don't come from really exotic chips; they come from small variations in configuration or in system library versions.
Let's say you want to cross-compile from architecture X to Y, and you want the code to use a specific memory region. That is usually when things start to get messy.
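A hedged sketch of what that looks like in Rust (the `.sram2` section name and the target triple are illustrative assumptions, not from any specific project):

```rust
// Sketch: pinning a buffer into a named linker section so it lands in
// a specific memory region. ".sram2" is an illustrative section name;
// on a real embedded target it must be declared in the linker script
// (e.g. memory.x), and you would build with something like
// `cargo build --target thumbv7em-none-eabihf`.
#[link_section = ".sram2"]
static DMA_BUFFER: [u8; 1024] = [0; 1024];

fn main() {
    // On a hosted build this merely shows the symbol exists; on the
    // real target the address should fall inside the SRAM2 range.
    println!("DMA_BUFFER at {:p}", DMA_BUFFER.as_ptr());
}
```

Getting the target triple, the linker script and the section names to all agree is exactly the three-way negotiation where the time goes.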
Here's an example (illustrative, not a specific incident) of how you can lose a lot of time and go crazy when you just wanted to compile some code for your case:
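Say the upstream code uses `let ... else`, which was only stabilized in Rust 1.65. The older rustc shipped by your distro rejects it with a syntax error, and your "just compile it" afternoon turns into a toolchain upgrade.

```rust
// Builds fine on Rust 1.65+, fails with a syntax error on anything
// older, because `let ... else` did not exist yet.
fn port_from_env() -> u16 {
    let Ok(raw) = std::env::var("PORT") else {
        return 8080; // default when PORT is unset
    };
    raw.parse().unwrap_or(8080)
}

fn main() {
    println!("listening on port {}", port_from_env());
}
```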