In the Node & JavaScript ecosystem, there is the web framework Express. The current major version 4.x.x branch is over 10 years old [1]. And yet it powers an enormous number of apps in the ecosystem (over 17M downloads every week [2]). It lacks some features and is not the most performant [3]. But my coworkers and I like it because it allows for quick, stable development and long-term planning, without worrying about drastic API changes or a lack of security patches for older versions. Go provides even better stability: we can run programs that are over 10 years old thanks to a mix of a wide stdlib and the compatibility promise. [4]
This gave me a pleasant shock. I'd forgotten that Express has been around for 13 years now. It was considered a super shoddy, pretend-programmer piece of junk by many when it first arrived (largely by virtue of being written in JavaScript). Since then I've helped a lot of companies build cool stuff that made real money with it. It's probably serving a crazy number of requests these days.
I write a lot of things with Go now instead, but I'm still totally content to build things with Express. It's good software, generally speaking.
I find the Express 5 situation hilariously wonderful.
It’s basically done. Any other project would’ve slapped a major release on it and fixed any issues that came up in a patch. Everyone who is using it says it works great.
But the maintainer won’t release it because they don’t feel it’s gotten enough testing. So they’re just waiting. And no one really cares because Express 4 is great and works fine.
It’s a beautiful example of mature software development.
CakePHP also does this, and this is one of the reasons why I dislike RoR. I don't actually dislike it, but I would never choose it because of the hamster wheel of constant version upgrades.
Python is a really bad example of cold-blooded software. There are constant breaking changes (both runtime and tooling). So much so that the author still has to use Python 2, which has been EOL'd for quite a while.
A much better example would be something like Go or Java, where 10-year-old code still runs fine with modern tooling. Or an even better example, Perl, where 30-year-old code still runs fine to this day.
As an author of software, sometimes you make mistakes, and those mistakes are often of the form, "I permitted the user to do something which I didn't intend." How do you correct something like that? In the Java world, the answer is "add newer & safer & more intentional capabilities, and encourage the user to migrate."
In the Python world, this answer is the same, but it also goes further to add, "... and turn off the old capability, SOON," which is something that Java doesn't do. In the Java world, you continue to support the old thing, basically forever, or until you have no other choice. See, for example, the equals method of java.net.URL: known to be broken, strongly discouraged, but still supported after 20+ years.
Here's an example of the difference which I'm talking about: Python Airflow has an operator which does nothing -- an empty operator. Up through a certain version, they supported calling this the DummyOperator, after an ordinary definition for "dummy." But also -- the word "dummy" has been used, historically & in certain cultures & situations, as a slur. So the Airflow maintainers said, "that's it! No more users of our software are permitted to call their operators DummyOperator -- they now must call it EmptyOperator instead!" So if you tried to upgrade, you would get an error at the time your code was loaded, until you renamed your references.
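(Concretely, the rename was a one-line import change. A sketch from memory - the exact module paths and the Airflow version where the old name stops loading may be slightly off:)

    # Old import, eventually rejected when the DAG file is loaded:
    from airflow.operators.dummy import DummyOperator

    # New import (Airflow 2.3+):
    from airflow.operators.empty import EmptyOperator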
This decision has its own internal consistency, I suppose. (I personally would not break my users in this way). But in the Java world it wouldn't even be a question -- you'd support the thing until you couldn't. If the breakage in question is basically just a renaming, well, the one thing computers are good at is text substitution.
So overall & in my opinion anyway, yes, it's very much true that you can upgrade Java & Java library dependencies much more freely than you can do the same sorts of things with Python.
> So the Airflow maintainers said, "that's it! No more users of our software are permitted to call their operators DummyOperator -- they now must call it EmptyOperator instead!"
Man, some companies and people have far too much time to waste.
Not to detract from your point (which I agree with), but rather as a side note, Airflow's developers publish top-notch migration and upgrade documentation and tools which hold your hand through the process of updating your DAGs when upgrading Airflow. Which IMO is the next best thing to do when you break backwards compatibility.
English isn't my first language but I haven't seen "dummy" being used as a slur, in any conversations I've engaged in or any books I've read. For me its connotation is more of a playful nature. When I think of slur I don't think of "dummy", I think of the r word and the like.
At least I can get the reasons for GitHub's change to "main" for the default branch in a git repo. Maybe I don't agree with it, but I can at least see how some people would interpret the word "master" in a negative way. I can't say the same for the word "dummy" though.
Yes your understanding is how almost everyone treats the word today. The slur is from a different meaning & different context. You'd never come across it in regular life, unless you were, like, into the history of baseball:
Maven is fantastic. As long as you stick to an LTS Java version and pick good dependencies you can always get things up and running. With Python I remember a ML class I took where one of the dependencies had introduced breaking API changes overnight and the lecturer hadn’t noticed because he was just using whatever version was available a few weeks ago when he first started prepping for the class.
I assumed you could specify versions in python. It’s not the first time I’ve run into issues building other people’s python projects, so I hope someone makes it a requirement to specify a version number when adding dependencies.
And I really wish ML and AI was as popular in Java as it is with Python, but I don’t see that happening anytime soon :(
> There is constant breaking changes with it (both runtime and tooling).
I'm not sure what you mean. Python 2 to 3 was a breaking change, but that was just one change, not "constant breaking changes".
If you stick with one major version no old code breaks with a new minor version (e.g., you can run old 2.x code under 2.7 just fine, and you can run old 3.x code under 3.12 just fine). The minor version changes can add new features that your old code won't make use of (for example, old 3.x code won't use the "async" keyword or type annotations), but that doesn't make the old code break.
The Python 3.11 release notes have a pretty lengthy list of removed or changed Python and C APIs followed by guidance on porting to 3.11, which implies potentially breaking changes to me.
It's a fair point that Python minor version changes can and do involve removal of previously deprecated APIs, which would break old code that used those APIs.
That said, when I look through the 3.11 release notes you refer to, I see basically three categories of such changes:
- Items that were deprecated very early in Python 3 development (3.2, for example). Since 3.3 was the first basically usable Python 3 version, I doubt there is much legacy Python 3 code that will be broken by these changes.
- Items related to early versions of new APIs introduced in Python 3 (for example, deprecating early versions of the async APIs now that async development has settled on later ones that were found to work better). These sorts of breaking changes can be avoided by not using APIs that are basically experimental and are declared to be so (as the early async APIs were).
- Items related to supporting old OSs or data formats that nobody really uses any more.
So, while these are, strictly speaking, breaking changes, I still don't think that "constant breaking changes" is a good description of the general state of Python development.
Python's changes between releases are not limited to removing deprecated APIs. Sometimes semantics changes in breaking ways, or new reserved words crop up, etc. etc. It certainly is Russian roulette trying to run python code on any version other than the one it was written for.
I know this specific example because the addition of the reserved word `async` was one of a handful of reasons that kept my workplace from upgrading past Python 3.5 (I think) for quite a while.
For me switching to Python 3.11 was really tough because of various legacy stuff removals (like coroutine decorators etc). While my code did not use these, the dependencies did. For some dependencies I had to switch to different libraries altogether - and that required rewriting my code to work with them.
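For reference, the kind of removal meant by "coroutine decorators": generator-based coroutines were dropped in Python 3.11, so any dependency still written in the old style breaks. A minimal before/after:

    import asyncio

    # Old generator-based style; @asyncio.coroutine was removed in Python 3.11:
    @asyncio.coroutine
    def old_sleep():
        yield from asyncio.sleep(1)

    # The replacement, available since Python 3.5:
    async def new_sleep():
        await asyncio.sleep(1)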
There was also some time in the past when async became a keyword. It turned out many packages had variables named async and that caused quite a bit of pain too.
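That breakage is easy to reproduce, since `async` became a full keyword in Python 3.7. The function below is hypothetical, but the mechanics are real:

    # Fine on Python 3.5/3.6; a SyntaxError on 3.7+ because "async" is now reserved:
    def fetch(url, async=True):
        pass

    # The fix is just a rename:
    def fetch(url, is_async=True):
        pass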
The problem is `requirements.txt` doesn't do anything with downstream dependencies. There's nothing like a shrinkwrap/lockfile in Python. Even if you pin dependencies to exact versions, if you check your project out in a new environment and run pip install -r requirements.txt, you can end up with different, broken downstream dependencies.
If you want to stick with using `pip` over any of the newer tools that build on top of it (Poetry - my favourite, pdm, pipenv, rye, ...) the simplest way I used in the past was to use a `requirements.human.txt` to set my dependencies, then install them in a venv and do `pip freeze > requirements.txt` to lock all of the transitive dependencies.
That's an awareness problem. requirements.txt was invented... a long time ago, I think before the much more sane (but still not perfect) dependencies/lockfile split got popular. requirements.txt tries to be both - and it can be both, just not at the same time.
In short, you want your deployed software to use pip freeze > requirements.txt and libraries to only specify dependencies with minimal version conditions.
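A minimal sketch of that split (the `requirements.human.txt` name follows the convention mentioned above; the package names and pinned versions are purely illustrative):

    # requirements.human.txt - the dependencies you actually chose, loosely constrained
    flask>=2.3
    requests>=2.31

    # requirements.txt - generated inside the venv with `pip freeze > requirements.txt`,
    # so transitive dependencies (werkzeug, urllib3, ...) get locked as well
    flask==2.3.3
    werkzeug==2.3.7
    requests==2.31.0
    urllib3==2.0.7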
I did not know about pip freeze, doh. Thanks, will check that out!
Edit: so if I understand it, this just lists all packages in the current Python env and writes them to a file. Hm, requires more discipline than the npm equivalent. But that's a natural consequence of pip defaulting to installing packages globally (vs npm, which installs into local node_modules by default). Better, but still not awesome IMO.
Why would you bluntly assume my comment lacks any foresight? I was simply recommending a tool that I used, albeit briefly, that solves the exact same problem for which you are claiming no solution exists.
Nobody is denying that it would be ideal if there were one best solution to every problem in the ecosystem. But at the end of the day all software, including core and third-party libs, is just code written by people, and it is too much to expect that any person (or group of them) gets everything right the first time. Change, breaking or otherwise, is inevitable as people learn from their mistakes - it's not like the core is guaranteed to never have any breaking changes either.
Just like you can pin the version of libraries, you can pin the versions of your tools too, as long as they are not depending on external services with no versioning. The point of the post is not absolute avoidance of change. It is to opt into a workflow and tooling setup so you can deal with the upstream changes at your own time and convenience.
And BTW, looking at their versioning, poetry hasn't yet had any breaking changes in its 4+ years of existence.
That said, I remember all three of those transitions (2.3 to 2.4, 2.4 to 2.5, and 2.5 to 2.6), and I remember changing Python code to make use of new features introduced in those transitions (for example, using with statements and context managers in 2.5), but those aren't breaking changes; the old code still worked, it just wasn't as robust as using the new features.
Something called onnx (all IIRC) requires Python 3.8-3.9 and doesn't work on 3.10+. So for my various AI needs I have three versions of Python 3 installed through different channels. And of course they all have their own multi-gigabyte caches of base libraries and models.
I know it may be more complex or trivial than I think, or tied to very few specific packages, but that’s the point – I have to figure it out where I shouldn’t need to. In contrast, I’m sure that no matter which latest version of Node I have, it will work.
I mean I was, up until Node 19/20, where they broke the loader, so ts-node doesn’t work anymore and the suggestion is to re-learn something called tsx. F that nonsense.
Agreed. This is one of the reasons why I avoid using Python whenever possible. Python code I write today is unlikely to be functional years from now, and I consider that a pretty huge problem.
This really depends on your environment. I've been running legacy Python servers continuously for 4+ years without breaking them or extensively modifying them because I invested in the environment and tooling around it (which I would do for any app I deploy). I can't say I want to bring all of them entirely up to date with dependencies, but they're still perfectly functional. Python is pretty great, honestly. I rarely need to venture into anything else (for the kind of work I do).
> I've been running legacy Python servers continuously for 4+ years
That seems like a large amount of effort to make up for a large language deficiency. My (heartfelt) kudos to you!
I might have been willing to do the same if I used Python heavily (I don't because there are a number of other things that makes it very much "not for me") -- but it would still represent effort that shouldn't need to be engaged in.
I think it depends on which bits of the Python ecosystem you're interacting with. The numerical/scientific parts have been quite stable for at least the past 10 years (new features have been added, but only small amounts of removal), compared with the more "AI" focused parts where I wouldn't trust the code to be working in 6 months. Similarly, some web frameworks are more stable than others. I think also over the last 5 or so years, there's been a change in maintainers of some larger projects, and the new maintainers have introduced more breaking changes than their predecessors.
None of this is implied by the language, I think it's much more driven by culture (though I think the final dropping of support for Python 2 did give some maintainers an excuse to do more breaking changes than was maybe required).
I'm not following. I've put in a total of a couple hours of maintenance over four years for the entire app stack. I think the maintenance issues and processes I use for Python would be the same as any other language. I remember my Java experience a decade back being essentially the same, the JS apps I am responsible for have perpetual churn, and my .NET friends have said they feel behind if they're not keeping up with core changes every 6-12 months. Every asset, whether physical or digital, needs regular maintenance. What is different in your experience?
There are plenty of other removals and deprecations: Thread.stop(), security providers, JDBC interface changes; finalizers no longer do anything (better not depend on them ever running) and might be removed in some future version.
Java does not have perfect backwards compatibility, but it's pretty good.
Thread.stop was deprecated a very long time ago. It was deprecated at least as far back as Java 6 (2006), and I'm too lazy to check earlier versions.
Security providers have very limited usefulness for ordinary applications. And AFAIK they're not even removed in Java 21 yet, so formally old apps should still work for now.
I'm not aware of incompatible JDBC interface changes, and I used to use a very old Oracle driver without much of a problem. Yes, there are some new methods in the JDBC interfaces which are not implemented by old drivers, but you can just not call those methods. That's not a breaking change.
Finalizers were never reliable. But you're not correct about them not running right now; I just checked, and with Java 21 they're still called.
So while there are breaking changes, they're really application issues. If you call Thread.stop, you should not do that; that's a bad application. If you're using security providers, you should not do that; they're not safe enough, and you should use containers or a VM or other means of code isolation. I'm not completely sure about the JDBC issues, but I have experience running a very old JDBC driver and it works just fine. If you rely on finalizers, you should not do that; that's the wrong approach to resource management and it was always wrong.
I maintain Eclipse RDF4J and I noticed this too between Java 9 and 11, after that there haven’t been any breaking things except for having to bump a maven plugin dependency. We make sure to stay compatible with both Java 11 and whatever is the newest version by running our CI on both Java versions.
All the stuff that got removed since Java 9, as the new policy is to actually remove deprecations after a couple of warning releases, instead of supporting them forever.
Additionally, the JDK has become stricter about internal API security, no longer allowing naughty 3rd-party libraries access to JVM internals.
Regarding Python: Really? Obviously v2-to-v3 was an absolute fiasco, but since then, it's been great in my personal experience.
Don't get me wrong: Python hasn't overcome its tooling problem, so there's still that barrier. But once your team agrees on a standardized tool set, you should be able to coast A-OK.
Every time there's a 0.1 Python version increase, it takes months for other libraries to catch up. I still have to install conda envs with Python=3.9 because they are warm-blooded software.
Go is not a good example either. Some time ago we tried compiling some code a few years after it was written, and it did not work. Someone who actually knew the language and tooling tried and said there was a migration to be done and it was complicated. I have not followed the subject up close, but in the end they just abandoned it, IIRC.
I don't think I've ever had that problem -- particularly once they introduced Go modules, which specified a specific version of a library dependency. My experience is like the author's: Even old random packages I wrote 5 years ago continue to Just Work when used by new code.
There are a handful of functions that they've deprecated, which will produce warnings from `go vet`; but that doesn't stop `go build` from producing a usable binary.
Depends if you mean python the interpreter or python the language. e.g. pypy still supports python2 and has "indefinite support" or something along those lines.
Even the cpython2 interpreter is no longer supported by the original authors, but that doesn't stop someone else from supporting it.
The worst example perhaps. I have the unfortunate honor to work on our python projects from time to time, but rarely and every time that I do, something is broken. No other software is as unreliable. Only Ruby comes close and probably for the same reason.
I've had good luck with sveltekit (a framework for js sites). They'll break something with a new version but provide you with very helpful compile errors pointing to a migration script to re-write any old code.
C# has been pretty good as well.
But at some point you're going to need data for your app and that's where you'll get surprised. That Yahoo currency data you used to get for free or Wikipedia's mobile API? Gone ten years later.
What? You can just update EF Core without ever having to do a migration of the schema. It just works. Also, the versions that are EoL today are a really poor choice for Lambda anyway because you really do want to be using Native AOT + Dapper AOT with it instead.
Note that since Java 9 this isn't exactly true: with modules, the removal of private APIs that packages were misusing, and deprecated APIs now effectively being removed instead of kept on the platform forever.
I have yet to have a python versioning issue, but with java I've had tons.
Worst of all, it's never a clear "use the latest version and it will work". With Python, using the latest version almost always works, and you can import the previous functions if you really want to use the new interpreter on old code.
Maybe this is because most of the time with Python you barely have external libraries. Similar with Java, but in Node.js it's like asking for trouble.
I work on IBM mainframe (z/OS). Nothing else I know comes as close in maintaining backwards compatibility as IBM. Microsoft (Windows) is the 2nd, I think. Linux (kernel) ABI has the 3rd place, but that's only a small portion of Linux ecosystem.
Almost everything else, it's just churn. In OSS this is common, I guess nobody wants to spend time on backward compatibility as a hobby. From an economic perspective, it looks like a prisoner's dilemma - everybody externalizes the cost of maintaining compatibility onto others, collectively creating more useless work for everybody.
> In OSS this is common, I guess nobody wants to spend time on backward compatibility as a hobby.
There's a lot of chasing new and shiny in OSS but I wouldn't say that applies to everyone... just look at the entire retrocomputing community, for example. Writing drivers for newer hardware to work on older OSes is not unheard of.
These are amazing people, and I like what they do, but they are still chasing the churn of newer hardware, which also introduces incompatible APIs. The incompatible APIs are often introduced commercially for business and not technical reasons, either out of ignorance, legal worries or in order to gain a market advantage.
> I guess nobody wants to spend time on backward compatibility as a hobby.
Getting paid to maintain something certainly goes a long way. Without payment, I suppose it comes down to how much one cares about the platform being built. I deliberately chose to target the Linux kernel directly via system calls because of their proven commitment to ABI stability.
On the other hand, I made my own programming language and I really want to make it as "perfect" as possible, to get it just right... So I put a notice in the README that explains it's in early stages of development and unstable, just in case someone's crazy enough to use it. I have no doubt the people who work on languages like Ruby and Python feel the same way... The languages are probably like a baby to them, they want people to like it, they want it to succeed, they just generally care a lot about it. And that's why mistakes like print being a keyword just have to be fixed.
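(For anyone who skipped the 2-to-3 era, the print fix referred to really was this small:)

    # Python 2: print is a statement, i.e. part of the keyword set
    print "hello"

    # Python 3: print is an ordinary built-in function
    print("hello")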
I don’t think it’s just about bw compatibility. It’s the probability of it breaking for random reasons if you forget to babysit it for a bit. A lot of times, it’s even the bw compatibility stuff that breaks.
At one of my employers, we build containerized Node apps, and the CI process involves building the image from the Node source. Suddenly deployments started to fail on some services that were untouched for a while. We found out the Dockerfile was based on an Ubuntu image that fell out of the support window, so its update repos were moved to the archive and the image could not be built without updating the Dockerfile.
This is an example of software that breaks without being touched. This is also why I stick with Go and single binaries (which I can even choose to package as a release so I never need to build again) as well as use the Distroless docker images that will contain no dependencies except my binary.
I’ve used go for a very long time and I have never had an issue with software aging on me. There are just entire classes of problems that have disappeared when I moved to go that when I use other languages like Node or PHP I just feel like these frameworks are reinventing wheels that don’t stand the test of time. Number 2 on Node land is all the indirection patterns in frameworks. Number one is package management. “You installed version X but module Y requires version Z” blah blah blah peer-deps…
One thing I've noticed is that many engineers, when looking for a library on GitHub, check the last commit time. They think that the more recent the last commit is, the better supported the library is.
But what about an archived project that does exactly what you need it to do, has 0 bugs, and has been stable for years? That’s like finding a hidden gem in a thrift store!
Most engineers I see nowadays will automatically discard a library that is not "constantly" updated... Implying it's a good thing :)
A library can only stay static if the environment it's used in is also static. And many of the environments in which modern software is developed are anything but static, web frontends are one example where things change quite often.
A library that can stand entirely on its own might be fine if it's never updated. But e.g. a library that depends on a web frontend framework will cause trouble if it is not updated to adapt to changes in the ecosystem.
Also, even a very stable project that is "done" will receive a trickle of minor tweak PRs (often docs, tests, and cleanups) proportional to the number of its users, so the rate of change never falls to zero until the code stops being useful.
I think this is also in inverse proportion to the arcane-ness of the intended use of the code, though.
Your average MVC web framework gets tons of these minor contributors, because it's easy to understand MVC well enough to write docs or tests for it, or to clean up the code in a way that doesn't break it.
Your average piece of system software gets some. The Linux kernel gets a few.
But ain't nobody's submitting docs/tests/cleanups for an encryption or hashing algorithm implementation. (In fact, AFAICT, these are often implemented exactly once, as a reference implementation that does things in the same weird way — using procedural abstract assembler-like code, or transpiled functional code, or whatever — that the journal paper describing the algorithm did; and then not a hair of that code is ever touched again. Not to introduce comments; not to make the code more testable; definitely not to refactor things. Nobody ever reads the paper except the original implementor, so nobody ever truly understands what parts of the code are critical to its functioning / hardening against various attacks, so nobody can make real improvements. So it just sits there.)
I disagree. Tiny libraries can be fine indefinitely. For example this little library which inverts a promise in JavaScript.
I haven’t touched this in years and it still works fine. I could come in and update the version of the dependencies but I don’t need to, and that’s a good thing.
I think total number of commits is probably a good metric too. If the project only has 7 commits to begin with then it's unlikely to get any more updates after it's "done". But a 10 year old project with 1000 commits where the last commit was 3 years ago is a little more worrying.
Even if the environment it's used in is not static, the world it lives in is not static.
I work in industrial automation, which is a slow-moving behemoth full of $20M equipment that get commissioned once and then run for decades. There's a lot of it still controlled with Windows 98 PCs and VB6 messes and PXI cards from the 90s, even more that uses SLC500 PLCs.
But when retrofitting these machines or building new ones, I'll still consider the newness of a tool or library. Modern technology is often lots more performant, and manufacturers typically support products for date-on-market plus 10 years.
There's definitely something to be said for sticking with known good products, but even in static environments you may want something new-ish.
As someone who migrated a somewhat old project to one which uses a newer framework, I agree with this. The amount of time I spent trying to figure out why an old module was broken before realizing that one of its dependencies was using ESM even though it was still using CJS... I don't even want to think about it. Better to just make sure that a module was written or updated within the last 3 years, because that will almost certainly work.
This is a very strange example. Browsers have fantastic backwards compatibility. You can use the same libraries and framework you used ten years ago to make a site and, with very few exceptions, it will work perfectly fine in a modern browser.
Browsers have decent backwards compatibility for regular webpages, but there’s a steady stream of breakage when it comes to more complex content, like games. The autoplay policy changes from 2017-2018, the SharedArrayBuffer trainwreck, gating more and more stuff behind secure contexts, COOP/COEP or other arbitrary nonsense... all this stuff broke actual games out in the wild. If you made one with tools from 10 years ago you would run into at least a couple of these.
Browsers themselves aren't usually the problem. While sometimes they make changes, like what APIs are available without HTTPS, I think you're right about their solid backwards compatibility.
What people really mean when they talk about the frontend is the build system that gets your (modern, TypeScript) source code into (potentially Safari) browsers.
Chrome is highly backwards compatible. Webpack, not so much.
This build system churn goes hand-in-hand with framework churn (e.g. going from Vue 2 to 3, while the team have put heaps of effort into backwards compatibility, is not issue-free), and more recently, the rise of TypeScript and the way the CJS to ESM transition has been handled by tools (especially Node).
The problem arises when you're not using old libraries and frameworks. You're using new stuff, and come across an old, unmaintained library you'd like to use.
Hey, it uses the same frameworks you're using --- except, oh, ten years ago.
Before you can use it, you have to get it working with the versions of those frameworks you're using today.
Someone did that already before you. They sent their patch to the dead project, but didn't get a reply, so nobody knows about it.
>> web frontends are one example where things change quite often.
There is a world of difference between linux adding USB support and how web front ends have evolved. One of them feels like they are chasing the latest shiny object...
A VM with a fixed spec (a.k.a. the JVM) can delegate OS churn and the like to the VM maintainers and thus protect the authors of managed code.
Does it help the dependency ecosystem churn? No.
Until we get very fine-grained API versioning info (at method/function granularity - and even then, is it good enough, and what OSS author could maintain that info outside of a small API?), library version info will simply remain a coarse-grained thing.
If only there was a super smart AI with great breadth of knowledge with capabilities to infer this relationship graph, but I don't think there's a lot of research into AIs like that these day, right?
Even though it’s not strictly true, checking for recent updates is an excellent heuristic. I don’t know the real numbers, but I feel confident that in the overwhelming majority of cases, no recent activity means “abandoned”, not “complete and bug free”.
I remember seeing a bunch of graphs which showed how programming languages have changed over time, and how much of the original code is still there.
It showed that some languages were basically nothing like the 1.0 versions, while others had retained most of the code written and only stuff on top.
In the end, it seems to also be reflected in the community and ecosystem. I remember Clojure being close/at the top of the list as the language hardly does breaking changes anymore, so libraries that last changed 5 years ago, still run perfectly well in the current version of the language.
I guess it helps that it's lisp-like as you can extend the core of the language without changing it upstream, which of course also comes with its own warts.
But one great change it made for me is that I stopped thinking that "freshness" equals "greatness". It's probably more common that I use libraries today that basically stopped changing some years ago than libraries that were created in the last year. And without major issues.
Some languages have releases every year or two where they will introduce some new, elegant syntax (or maybe a new stdlib ADT, etc) to replace some pattern that was frequent yet clumsy in code written in that language. The developer communities for these languages then usually pretty-much-instantly consider use of the new syntax to be "idiomatic", and any code that still does things the old, clumsy way to need fixing.
The argument for making the change to any particular codebase is often that, relative to the new syntax, the old approach makes things more opaque and harder to maintain / code-review. If the new syntax existed from the start, nobody would think the old approach was good code. So, for the sake of legibility to new developers, and to lower the barrier to entry to code contributions, the code should be updated to use the new syntax.
If a library is implemented in such a language, and yet it hasn't been updated in 3+ years, that's often a bad sign — a sign that the developer isn't "plugged into" the language's community enough to keep the library up-to-date as idiomatic code that other developers (many of whom might have just learned the language in its latest form from a modern resource) can easily read. And therefore that the developer maybe isn't interested in receiving external PRs.
I wonder if anyone ever took it scientifically and A/B tested it on a codebase. A community is fine all these years before a change, but afterwards all that instantly becomes a bad practice and loses legibility. I’m confident that it mostly gets done not for any objective result, but because most developers are anxious perfectionists in need of a good therapist. And that’s plague-level contagious. Some people get born into this and grow up being sick.
By zero bugs do you mean zero GitHub issues? Because zero GitHub issues could mean that there are security vulnerabilities but no one is reporting them because the project is marked as abandoned.
> But what about an archived project that does exactly what you need it to do, has 0 bugs, and has been stable for years? That’s like finding a hidden gem in a thrift store!
Either the library is so trivial to implement myself that I just do that anyway, which doesn't have issues w.r.t maintenance or licensing, or it's unmaintained and there are bugs that won't be fixed because it's unmaintained and now I need to fork and fix it, taking on a legal burden with licensing in addition to maintenance.
Bugs happen all the time for mundane reasons. A transitive dependency updated and now an API has a breaking change but the upstream has security fixes. Compilers updated and now a weird combination of preprocessor flags causes a build failure. And so on.
The idea that a piece of software that works today will work tomorrow is a myth for anything non-trivial, which is why checking the history is a useful smell test.
Consider an at-the-time novel hashing algorithm, e.g. Keccak.
• It's decidedly non-trivial — you'd have to 1. be a mathematician/cryptographer, and then 2. read the paper describing the algorithm and really understand it, before you could implement it.
• But also, it's usually just one file with a few hundred lines of C that just manipulates stack variables to turn a block of memory into another block of memory. Nothing that changes with new versions of the language. Nothing that rots. Uses so few language features it would have compiled the same 40 years ago.
Someone writes such code once; nobody ever modifies it again. No bugs, unless they're bugs in the algorithm described by the paper. Almost all libraries in HLLs are FFI wrappers for the same one core low-level reference implementation.
In practice, this code will use a variety of target-specific optimizations or compiler intrinsics blocked behind #ifdefs that need to be periodically updated or added for new targets and toolchains. If it refers to any kind of OS-specific APIs (like RNG) then it will also need to be updated from time to time as those APIs change.
That's not to say that code can't change slowly, just the idea that it never changes is extremely rare in practice.
I submit math.js and numeric.js. math.js has an incredibly active community and all sorts of commits; numeric.js is one file of JavaScript and hasn't had an update in eight years. If you want to multiply two 30-by-30 matrices, numeric.js works just fine in 2023 and is literally 20 times faster.
I'm checking the zlib changes file [1] and there are regular gaps of years between versions (but there are times where there are a few months between versions). zlib is a very stable library and I doubt the API has changed all that much in 30 years.
Good point. I have also seen Great Endeavor 0.7.1 stay there because the author gave up or graduated or got hired and the repo sits incomplete, lacking love and explanation for dismissal.
The GHC project churns out changes at a quite high rate though. The changes are quite small by themselves, but they add up and an abandoned Haskell project is unlikely to be compilable years later.
One disadvantage of archived repos is that you can't submit issues. For this reason it is hard to assess how bug free the package is. My favorite assessment metric is how long it takes the maintainer(s) to address issues and PRs (or at least post a reply). Sure, it is not perfect and we shouldn't expect all maintainers to be super responsive, but it usually works for me.
Last commit time is a pretty good indicator that the project has someone who still cares enough to regularly maintain it.
I have some projects I consider finished because they already do what I need them to do. If I really cared I'm sure I could find lots of things to improve. Last commit time being years ago is a pretty good indicator that I stopped caring and moved on. That's exactly what happened: my itch's already been scratched and I decided to work on something else because time is short.
I was once surprised to discover a laptop keyboard LED driver I published on GitHub years ago somehow acquired users. Another developer even built a GUI around it which is awesome. The truth is I just wanted to turn the lights off because when I turn the laptop on they default to extremely bright blue. I reverse engineered everything I could but as far as I'm concerned the project's finished. Last commit 4 years ago speaks volumes.
It’s extremely rare for a project to be considered stable for years without any updates. Unless it has no external dependencies and uses only very primitive or core language constructs, there are always updates to be had - security updates and EOLs are common examples. What works in Python 2 might not work in Python 3.
Software needs to be maintained. It is ever evolving. I am one of those that will not use a library that has not been updated in the last year, as I do not want to be stuck upgrading it to be compatible with Node 20 when Node 18 EOLs
I chose a .Net library (Zedgraph) about 10 years ago, partly for the opposite reason. It was already known to be "finished", what you might call dead. It reliably does what I want so I don't care about updates. I'm still using the same version today and never had to even think about updating or breakages or anything. It just keeps on working.
Mind you, it's a desktop application not exposed to the internet, so security is a little lower priority than normal.
I'm sort of confused on where your comment is coming from. In the modern world (2023 in case your calendar is stuck in the 90s) we have a massive system of APIs and services that get changed all the time internally.
If a library is not constantly updated then there is a high likelihood (99%) that it just won't work. Many issues raised on GitHub are that something changed and now the package is broken. That's reality, sis.
A heavily used library, gauged from download stats as reported by package repositories or GitHub star count for example, with a low-to-none open issue count (and even better a high closed issue count), gives me a better feel for the state of a library than its frequency of updates.
If you are asking yourself, "will this do what it says it will do?" and you are comparing a project that hasn't had any updates in the last 3 years vs one that has seen a constant stream of updates over the last 3 years, which one do you think has a greater probability of doing what it needs to do?
Now I do get your point. There is probably a better metric to use. Like for example, how many people are adding this library to their project and not removing it. But if you don't have that, the number of recent updates to a project that has been around for a long time is probably going to steer you in the right direction more often than not.
The only software that can go without updates is software that gets it right the first time. If you're building software for yourself, this is relatively easy. Your tastes probably won't change that much even after a decade. You can probably ignore minor problems like using an O(n^2) function where there exists an O(n) one, because n is small. If you're writing software that other people will use, then that's where the problems come in. Other people don't have the same requirements as you, and may have a large enough n that the O(n) function makes it worth it, for example.
But regardless of if you're writing for yourself or someone else, sometimes you just can't foresee problems. Maybe it crashes processing files larger than a gig, but because you've only ever used files <100KB it's never mattered to you. Then you go in to fix the crash and it turns out you're going to have to rewrite half the thing.
This is, I think, the biggest argument against the idea that software that doesn't change is inherently better than software that changes frequently[0]: it may be that unchanging software was perfect from the first line, or it may be that there's terrors lurking in the deep, and a priori it can be difficult to tell which a particular project is.
[0] This is not to say that rapidly updating software is inherently better than slowly updating software either. There's many factors other than just update speed
I don't think the idea is that software should never change. If requirements change then software obviously has to change as well.
But over the course of 10 years a lot of things can change that have nothing to do with changing requirements.
Open source projects are abandoned or change direction. Commercial software gets discontinued. Companies get acquired. App Store / Play Store rules change. APIs go away or change pricing in ways that render projects economically unviable. Toolchains, frameworks and programming languages, paradigms and best practices change.
I think the point is that you don't want external changes that are unrelated to your requirements to force change on you. It's a good principle but as always there are trade-offs.
There is stable and then there is obsolete. The difference is often security.
And what if an important new requirement is easy to meet, but only if you bump a vendored library by seven major versions causing all sorts of unrelated breakage?
What if there aren't enough people left who are familiar with your frozen-in-time toolset and nobody wants to learn it any longer?
I think careful and even conservative selection of dependencies is a good idea, but not keeping up with changes in that hopefully small set of dependencies is one step too far for me.
I love the sentiment of this post. I absolutely hate that my recent mobile apps from only a couple of years ago now require a dozen hours to patch them up and submit updates.
The author's final point is interesting, wherein they refer to their own static site generator as being cold-blooded even though it runs on Python 2. Python 2 is getting harder to install these days, which will eventually make it a warm-blooded project.
I have a little hobby project (iOS and macOS) that I don't regularly develop anymore, but I use it quite often as a user, and I like to keep it compiling and running on the latest OSes. It's aggravating (and should be totally unacceptable) that every time I upgrade Xcode, I have a few odds and ends that need to be fixed in order for the project to compile cleanly and work. My recent git history comments are all variations of "Get project working on latest Xcode".
I could almost understand if these underlying SDK and OS changes had to be made due to security threats, but that's almost never the case. It's just stupid things like deprecating this API and adding that warning by default and "oh, now you need to use this framework instead of that one". Platforms and frameworks need to stop deliberately being moving targets, especially operating systems that are now very stable and reliable.
I should be able to pull a 10 year old project out of the freezer and have it compile cleanly and run just as it ran 10 years ago. These OS vendors are trillion dollar companies. I don't want to hear excuses about boo hoo how much engineering effort backward compatibility is.
The worst is when your virtualization environments intended to provide long-term support don't even accommodate the "new" mainline hardware. Most frustrating example: VirtualBox doesn't work on Apple M1 or M2 chipsets.
why would it, though? Qemu (probably) works on "M" macs.
Virtualbox is linked intimately with the underlying hardware, it's a translation layer - even though it can do emulation, it's x86 emulating x86.
i always thought i was one of the few people that used virtualbox instead of the more popular ones; i tend to forget that there's probably a subset of developers that still use it for the orchestration software that can use it.
I've been maintaining my own side project. It started 12-13 years ago, with vanilla php, later rewritten with Laravel, later rewritten again with Symfony in 2017-ish. Since then I've had phases from 6-18 months where I had a total of 2-3 tiny commits (I was working full time as a freelancer, so I didn't have energy to work on my side project). But then when I had time, I would focus on it, add features, upgrade and just experiment and learn.
This was super valuable to me to learn how to maintain projects long-term: Update dependencies, remove stuff you don't need, check for security updates, find chances to simplify (e.g. from Vagrant to Docker... or from Vue + Axios + Webpack + other stuff to Htmx). And what to avoid... for me it was to avoid freshly developed dependencies, microservices, complexified infrastructure such as Kubernetes.
And now I just developed a bunch of features, upgraded to PHP 8.2 and Symfony 7 (released a month ago), integrated some ChatGPT-based features and can hopefully relax for 1-3 years if I wanted to.
In the last 4-5 years the project has made about the same revenue as an average freelance year's revenue, so it's not some dormant unknown side project.
I think PHP, as horrible as it feels to go back, is one example of something that’s truly backwards compatible even to its own detriment.
Haven’t worked with it for years, went back to find that the horrible image manipulation functions are still the same mess that I left behind 8 years ago.
Yeah, some things are still a mess, but many things I use constantly have improved so much. Here is an excerpt of a function that shows many of the updates that I use regularly:
    #[AsMessageHandler]
    readonly class JobEditedHandler
    {
        public function __construct(
            private Environment $twig,
            private EmailService $mailer,
            private string $vatRate,
        ) {}

        public function __invoke(JobEdited $jobEdited): void
        {
            $this->sendNotificationToJobPublisher($jobEdited);
        }
    }
You have attributes, much better type-hinting, constructor property promotion, read-only properties / classes. Additionally you have native Enums, named arguments and also smaller things such as match expressions (instead of case switch), array spread operator, null coalescing assignment operator, etc, etc.
Especially in a CRUD-heavy setting like mine (I run a niche jobboard) it reduces so much boilerplate and increases type-safety, thus makes it way less error-prone. Combined with new static analyzers (phpstan, php-cs-fixer, psalm - take your pick), you find possible errors way earlier now.
I think it gets a lot of inspiration from Java. Symfony gets lots of inspiration from Spring Boot. The Twig templating language is heavily related to the Django templating language. So many of the tools and concepts are somewhat battle-tested.
And this is on top of the huge performance improvements in the last years.
So yeah, there's many things that are still fixable. But the improvements have been staggering.
Laravel was easier to get into but once you strayed from "The Laravel Way", it gets quite messy.
I got into Symfony by "accident", because a freelance colleague put me on projects that used Symfony. So for a couple of years I used Laravel and Symfony in parallel, but after a few years I decided to go full Symfony.
Some of the things that were better for my use case:
Many of the Laravel components are "Laravel only". Whereas in Symfony, you can just pick and choose the components you need - it's very modular and extendible without forcing your hand. You don't even need the Symfony framework and just choose the components you want.
That's how Laravel can depend on Symfony modules; but Symfony can't depend on Laravel modules.
Entities vs. Models (Data Mapper vs. Active Record):
The entities in Symfony (equivalent to Models in Laravel) are just simple PHP objects. I can see what properties an entity has, and I can configure them directly there in a simple way. I can add my own functions, edit the constructor, etc. Also: you create the properties, and the migrations are generated based on that. In Laravel, you create the migrations, and the actual model is based on going through the migration steps. This just feels odd to me.
In Laravel, the Models extend the Eloquent Model class and it feels "heavier", and I had more trouble re-configuring some things. Plus, without using an additional "auto-complete" generator, I couldn't just see what the properties / columns of the model / table were.
I also don't like Facades (because they hide too much stuff and I have trouble figuring out the service that it actually represents).
Templating:
I also like that Twig is more restrictive, it forces me to think more about separating logic and the view, whereas Blade allows way more things. You don't have to use it, but I reckon since it's allowed, people will do so.
One thing I still envy from Laravel, though, is the testing suite.
I tried integrating it in Symfony, but it was quite messy and somewhat incompatible. That shows the above point, that it's "Laravel only". It's really nice, but not enough for me to advocate for Laravel over Symfony.
Besides what is stated in the article, it is also important to have an inherently secure threat model. For example, full websites are inherently warm-blooded since you are constantly dealing with attackers, spam bots, etc. However, static pages like Tiddlywiki are a lot better since you can avoid putting it on the web at all and browsers are incredibly stable platforms.
"My third remark introduces you to the Buxton Index, so named after its inventor, Professor John Buxton, at the time at Warwick University. The Buxton Index of an entity, i.e. person or organization, is defined as the length of the period, measured in years, over which the entity makes its plans. For the little grocery shop around the corner it is about 1/2,for the true Christian it is infinity, and for most other entities it is in between: about 4 for the average politician who aims at his re-election, slightly more for most industries, but much less for the managers who have to write quarterly reports. The Buxton Index is an important concept because close co-operation between entities with very different Buxton Indices invariably fails and leads to moral complaints about the partner. The party with the smaller Buxton Index is accused of being superficial and short-sighted, while the party with the larger Buxton Index is accused of neglect of duty, of backing out of its responsibility, of freewheeling, etc.. In addition, each party accuses the other one of being stupid. The great advantage of the Buxton Index is that, as a simple numerical notion, it is morally neutral and lifts the difference above the plane of moral concerns. The Buxton Index is important to bear in mind when considering academic/industrial co-operation."
Not everyone knows it, strangely, many of the (senior or junior) project management-types I work with have to be introduced to the term and concept (and if they listen it can at least resolve confusion, if not conflict, about the different priorities and behaviors of all the parties involved). But yes, they describe the same thing.
What a terrible name for this. Cold-blooded animals are highly dependent on their environment, whereas warm-blooded animals' bodies eliminate the dependency on external temperature via metabolism.
In any case, it's unnecessarily ambiguous. Why not simply say 'software without external dependencies' and eliminate the paragraphs of meandering explanation?
This is literally the only reply that hit the core of the article's problem and of course no one on this site upvoted it lol.
The only thing I dislike more than software development posts that use inappropriate analogy from nature to shallowly jump to conclusion, is software development posts that use inappropriate analogy from nature to shallowly jump to conclusion with absolutely flawed understanding of the supposedly analogous natural phenomenon.
And of course, painted turtles (among a few other species) can survive being frozen not because of their cold-bloodedness, but thanks to special antifreeze protein they have. Other lizards (and cold blooded animals for that matter) would just rupture their own tissues upon thawing.
Counterpoint: some types of software aren’t meant to last long. Even if it still builds and can be worked on later, the usecase itself may have changed or disappeared, or someone has probably come up with a new better version, so that it’s no longer worth it to continue.
This probably doesn’t apply to many types of software over 6 months, but in a couple years or a couple decades. Some online services like CI or package managers will almost certainly provide backwards-compatible service until then.
Another possibility is that developer efficiency improves so much that the code written 10 years ago is easier to completely rewrite today, than it is to maintain and extend.
This is why I’m hesitant to think about software lasting decades, because tech changes so fast it’s hard to know what the next decade will look like. My hope is that in a few years, LLMs and/or better developer tools will make code more flexible, so that it’s very easy to upgrade legacy code and fix imperfect code.
"Another possibility is that developer efficiency improves so much that the code written 10 years ago is easier to completely rewrite today, than it is to maintain and extend."
This seems completely false to me and I'm curious what has caused you to believe this, as I'm a fairly imaginative and creative person yet I cannot imagine a set of circumstances that would lead someone to this conclusion.
In other words, I disagree so very strongly with that statement that I wanted to engage rather than just downvote. (I didn't btw).
I agree with your first statement though and I don't think the op is saying only make cold-blooded projects.
Well. I don't know if I agree or not, but felt like playing devil's advocate.
Take the example of game development. Trying to maintain, say, The Hobbit game from the early 2000s to today would almost certainly take more work than just making a new one from scratch today (GPUs have changed drastically over the past 20 years, making simple 3D platformers with Unreal is so easy, and "asset flips" are a new kind of scam).
Or a tool which lets people visually communicate over vast distances without specialized hardware.
That was a huge lift in the 2000s when Skype was the only major player, but you can find tutorials for it now using webrtc.
I wrote an SDK, in 1994-95, that was still in use, when I left the company, in 2017.
It was a device control interface layer, and was written in vanilla ANSI C. Back when I wrote it, there wasn't a common linker, so the only way to have a binary interface, was to use simple C.
I have written stuff in PHP 5 that still works great in PHP 8.2. Some of that stuff is actually fairly ambitious.
This is why I am trying to switch as many of the projects I'm on as possible to HTMX. The churn involved in all of the frontend frameworks means that there's far too much update work needed after letting a project sit for N quarters.
I googled HTMX, all excited that maybe, just maybe, the browser people got their shit together and came up with a framework we can all live with, something native to the browser with a few new tags, and no other batteries required....
and was disappointed to find it's just a pile of other libraries 8(
htmx is written in vanilla javascript, has zero dependencies and can be included from source in a browser and just works
it doesn't introduce new tags (there are probably enough of those) but it does introduce new attributes that generalize the concept of hypermedia controls like anchors & forms (element, event, request type & placement of response/transclusion)
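for anyone who hasn't seen it, here's roughly what those attributes look like in practice — a sketch only, with the /search endpoint and element ids made up for illustration:

    <!-- active-search sketch: the input issues a GET and the returned
         HTML fragment is swapped into #results (names are illustrative) -->
    <input type="search" name="q"
           hx-get="/search"
           hx-trigger="keyup changed delay:300ms"
           hx-target="#results"
           hx-swap="innerHTML">
    <div id="results"></div>

    <!-- the library itself is one script tag, no build step -->
    <script src="https://unpkg.com/htmx.org"></script>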
You can also use the web platform straight up without transpilation, build tools, post-css compilation and all that jazz.
Just vanilla JavaScript, CSS, HTML, some sprinkles of WebComponents. And you can be pretty sure that you won't have to update that for a decade or more, as compatibility won't be broken in browsers.
Heck, I have vanilla JS projects I wrote 15 years ago that still render and work exactly like how they rendered/worked when I wrote them.
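To illustrate how little that takes, here's a minimal sketch of a Web Component using only standard browser APIs (the element name and attribute are invented for the example):

    // a self-contained custom element: no framework, no build step,
    // only long-stable browser APIs (class syntax + customElements)
    class HelloCard extends HTMLElement {
      // runs when the element is attached to the DOM
      connectedCallback() {
        const name = this.getAttribute('name') ?? 'world';
        this.textContent = `Hello, ${name}!`;
      }
    }
    customElements.define('hello-card', HelloCard);

    // usage in markup: <hello-card name="HN"></hello-card>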
> Anything more than a todo list becomes unwieldy almost instantly.
That's not a fact, just your personal experience
> Taking a small dependency to avoid that is well worth it.
Sometimes, yeah. Sometimes, no.
> Taking a whole “virtual dom” may be overkill though (looking at you, react)
In most cases, probably yeah. React was created to solve a specific problem a specific company experienced, then the community took that solution and tried to put it everywhere. Results are bound to be "not optimal".
It's one small dependency. Worst case, you write the library yourself.
You send a request to the backend, it then sends you HTML back (all rendered in the backend using a templating language such as Django templating engine, Twig or Liquid), you insert it into a div or so.
Htmx used to be Intercooler; worst case, you create your own. But no additional scripts are needed.
I've been able to kick Vue out because Htmx covers my use case.
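To make the "worst case, write it yourself" point concrete, the whole fetch-HTML-and-swap pattern is only a few lines of vanilla JS. This is just a sketch, not how htmx works internally, and the endpoint and element ids are made up:

    // minimal "fetch an HTML fragment and swap it into a target" helper;
    // the server renders the HTML (Django templates, Twig, Liquid, ...)
    async function swapFragment(url, targetSelector) {
      const response = await fetch(url);
      if (!response.ok) throw new Error(`request failed: ${response.status}`);
      document.querySelector(targetSelector).innerHTML = await response.text();
    }

    // usage: refresh the results panel on click
    document.querySelector('#refresh')
      .addEventListener('click', () => swapFragment('/search?q=vue', '#results'));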
> It's one small dependency. Worst case, you write the library yourself.
Every abstraction comes with a cost :) I'm not saying it never makes sense to use React, Vue, Htmx or anything else. But that's beside the point of this conversation.
> You send a request to the backend, it then sends you HTML back
You're just trading doing stuff in the frontend for doing stuff in the frontend + backend. Might make sense in a lot of cases, while not making sense in other cases.
> You're just trading doing stuff in the frontend for doing stuff in the frontend + backend. Might make sense in a lot of cases, while not making sense in other cases.
For me, I can now do everything in the backend; I don't need to switch context (Twig templating vs. JavaScript / SPA / etc.). It's now easier to just keep up to date with PHP / Symfony, instead of also updating Node, npm / yarn, node_modules, and keeping up to date with everything else.
Otherwise the frontend part ends up as messy, not-so-great code, since my JS skills are limited and it's too stressful for me to keep up to date with in addition to PHP / Symfony (I had worked with AngularJS 1, Angular 2-8, Vue 2, some React, played around with Svelte, managing stuff via bower, gulp, grunt, Webpack, parcel, npm, yarn 1/2/3, Node 10-18, nvm, corepack; and if I kept up, I'd probably need to look at Bun soon).
Nothing to be disappointed in here AFAICT; however, it’s shocking that you had to Google HTMX, seeing as it shows up on HN a few times a month at least.
I'm guessing the disappointed feeling comes from the parent saying "Pff, I'm so tired of all these libraries that eventually update their APIs in a breaking way, so now I'm using X" while X is just another library exactly like all the rest, and will surely introduce a breaking change or two down the line.
HTMX is not _exactly_ like the rest. It's far simpler than the others, e.g. by not requiring a build step, being pure JS and just having a smaller scope overall. Hot/cold isn't binary.
You're arguing from the abstract point of view, rather than the practical. The point is that it takes an order of magnitude more time to clone, say, a Vue project from three years ago that nobody has touched since then and try to download your dependencies and build on a new machine, as compared to an HTMX project.
As if "npm/yarn install" wouldn't work for the hypothetical Vue project? A charitable interpretation of what you're saying is that you cannot clone a vue project from three years ago, update all dependencies to the latest version, and expect that to work. But then, how is it different for HTMX, other than for the fact that 1. it's younger 2. you don't have the ecosystem around it to update - but that also means you're doing less or redoing everything yourself.
> As if "npm/yarn install" wouldn't work for the hypothetical Vue project?
I'm not talking in hypotheticals. No, if you do this for a Vue project that hasn't been touched in a few years, it doesn't work. Upon cloning the source and running npm install, you'll run into loads of build errors from incompatible versions of npm dependencies, even after you've used nvm to switch back to an old Node version. A build process, especially one based on npm, intrinsically introduces a great amount of fragility to the project.
Yes, you pay for it by having to invent a lot of things yourself, but limiting the project to HTMX means you've just got one dependency to store and it'll work so long as you do that.
Back to the point of TFA: you can have a cold blooded project with a dependency to HTMX and one or two other JS libs. Once you introduce an npm build, you're squarely out of cold blooded territory due to the constant updates and maintenance required just to keep your build working.
Okay, go ahead. Show me a (serious) project that hasn't been touched in three years and that plain doesn't work if you install packages from the lock file. You made a claim, I said I was skeptical, and your only counterargument was... to reiterate your initial point without adding anything new. So, time for evidence.
Most of the software I write is at least somewhat cold-blooded by this definition. My program to find the dictionary forms of Finnish words is an okay example:
I wrote the initial draft in an afternoon almost a year ago, and from then on endeavored to only make changes which I know play nicely with my local software ecology. I usually have `fzf` installed, so an interactive mode comes as a shell script. I usually have `csvkit`, `jq`, and if all else fails `awk` installed, so my last major update was to include flags for CSV, JSON, and TSV output respectively. Etc, etc.
The build instructions intentionally eschew anything like Poetry and just give you the shell commands I would run on a fresh Ubuntu VirtualBox VM. I hand-test it every couple of months in this environment. If the need to Dockerize it ever arose, I'm sure it would be straightforward, in part because the shell commands themselves are straightforward.
I don't call it a great example because the CLI library I use could potentially change. Still, I've endeavored to stick to only relatively mature offerings.
I built my websites on Drupal 7 and have enjoyed a decade of stability. Now, with D7 approaching EOL in 1 year, I'm looking for a solution that will last another decade. There's no reason for the EOL, either, other than people wanting to force everyone to move on to a newer version. It undoubtedly means more business for some people, as they will be able to reach out to their clients and say, "Your website is about to be a security risk, so you have to pay to update it!" Unfortunately, it means more work for me to support my personal projects.
And why? Because someone somewhere has decided that I should move on to something newer and more exciting. But I don't want new and exciting... I want rock solid!
I'm on vacation this week. Am I learning a new hot language like Rust, Zig, Go, etc.?
Nope.
I have no desire to. I don't trust them to be the same in a decade, anyway.
I'm focusing on C. It's far more enjoyable, and it's stable.
> why? Because someone somewhere has decided that I should move on to something newer and more exciting. But I don't want new and exciting... I want rock solid!
Well, it could also be because someone else decided to move on to something newer and more exciting instead of dutifully maintaining 10-year-old free software just because someone WANTS to have peace of mind on their vacation.
People and companies don't want to pay for maintenance work. I think that this is actually the main reason for all of these complaints about the perceived short longevity of libraries and languages. Unfortunately, entropy is a bitch: one can put in a colossal amount of work up front, the equivalent of building the pyramids, but eventually decay will catch up.
How much interaction do your sites have? If you ran a little program locally that took the sitemap and generated a static site, you would be immune for life from those security and maintenance arguments.
You could probably pin the PHP, SQL, and webserver versions and compile them from source so that you will always have the binaries at hand. Then it will last another 1000 years.
However, if you need user interaction, then you are stuck in an eternal rat race of security updates and deprecation, leading to major upgrades, leading to more security updates!
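Concretely, here's a rough sketch of what that "little program" could look like — assuming Node 18+ for the built-in fetch and a conventional sitemap.xml; the site URL and output path are invented:

    // crude static-site snapshotter: read the sitemap, fetch every page,
    // and write the rendered HTML to disk (Node 18+ for built-in fetch)
    const fs = require('node:fs');
    const path = require('node:path');

    const SITEMAP_URL = 'https://example.com/sitemap.xml'; // hypothetical site
    const OUT_DIR = 'static-snapshot';

    async function main() {
      const xml = await (await fetch(SITEMAP_URL)).text();
      const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map(m => m[1]);

      for (const url of urls) {
        const html = await (await fetch(url)).text();
        // map https://example.com/about/ -> static-snapshot/about/index.html
        const { pathname } = new URL(url);
        const file = path.join(OUT_DIR, pathname, 'index.html');
        fs.mkdirSync(path.dirname(file), { recursive: true });
        fs.writeFileSync(file, html);
        console.log('saved', file);
      }
    }

    main().catch(err => { console.error(err); process.exit(1); });

Serve the output from any static file server and the read-only pages are out of the security-update treadmill entirely.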
Haha... I agree about it being subjective! I find that I enjoy the process as much as the result. It's like bringing order to a chaotic universe. :)
The thing is, I don't have many segfaults in C, and I find C much easier to debug and hunt down issues in than even C++ (which I also enjoy). Also, because C uses very little "magic", and I also know exactly what I'm getting with my code, I find it much easier to reason about.
I heard a quote the other day while watching a presentation "When you're young you want results, when you're old you want control." I think I'm on the old side now.
As for Go, I genuinely don't have anything against it, but I don't see why I need it either. I don't doubt that others have stellar use cases and impressive results with Go, and that's fine, too, but I don't sense any lack which prompts me to investigate further. I would love to learn more about it, but most of what I see online is either over-the-top (and therefore vomit-inducing) fanboyism, or otherwise unspectacular, which makes me ask "why bother?"
I really appreciate this idea after rewriting my blog engine three times because the frameworks I was using (Next, Remix) had fundamental changes after a year and I was multiple major versions behind. Though it depends on what you are after: if the goal is to be able to blog, time spent upgrading and rewriting code because the framework is evolving is wasted time, unless you want to stay up to date with that framework. Think about how we view physical goods today; they aren’t built to last. In certain situations, like a personal blog, you want reliable software that works for years without the need to change. It also helps to have software that uses common data formats that are exportable to another system, like a blog based on Markdown files rather than JSX.
Worth mentioning the Hare language, designed to be stable for 100 years. After they release 1.0 they don't plan to change it beyond fixes. It's Drew DeVault's project.
A link to this article would be an effective curt reply to the "is this project dead?" GitHub issues that have been known to enrage and discourage cold-blooded project owners.
> I want yesterday's technology tomorrow. I want old things that have stood the test of time and are designed to last so that I will still be able to use them tomorrow. I don't want tomorrow's untested and bug-ridden ideas for fancy new junk made available today because although they're not ready for prime time the company has to hustle them out because it's been six months since the last big new product announcement. Call me old-fashioned, but I want stuff that works.
The same thing is true with free software: I prefer to use the terminal. In the terminal, I prefer to run bash and vim, not zsh and neovim.
When I write code, I've found C (and perl!) to be preferable, because "You can freeze it for a year and then pick it back up right where you left off."
There are rare exceptions, when what's new is so much better than the previous solution (ex: Wayland) that it makes sense to move.
However, that should be rare, and you should be very sure. If you think you made the wrong choice, you can always move back to your previous choice: after playing with ZFS for a few years, I'm moving some volumes back to NTFS.
Someone mentions that the author's choice (Python 2) is getting harder to install. Cold-blooded software works best when done with multiplatform standards, so I'd suggest the author do the bare minimum amount of fixes necessary to run with https://cosmo.zip/pub/cosmos/bin/python and call it a day.
With self-contained APEs, and the eventual emulator when, say, 20 years from now we move to RISC-V, you don't have to worry about dependencies, updates or other forms of breakage: compile it once in APE form (statically linked for Windows/Linux/BSD/macOS) and it will run forever by piggybacking on the popularity of the once-popular platform.
Wine lets you run Windows 95 binaries about 30 years later; I'd bet that Wine + the Windows part of the APE will keep running long after the kernel breaks the ABI.
I’ve got a similar one, yet to be written, about “cold computing”. How do you compute if you’re on a limited solar+battery installation? what if your CPU wakes up and you have only a couple of hours of runtime? What if you only can turn on wifi for 20 minutes a day?
In my mind this is a lot more about tooling and platform than language, library, architecture, etc.
I have a project that’s quite complicated and built on fast-moving tech, but with every element of the build locked down and committed in SCM: Dockerfiles, package sets, etc.
Alternatively, one of my older projects uses very stable slow-moving tech. I never took the time to containerize and codify the dependencies. It runs as an appliance and is such a mess that it’s cheaper to buy duplicates of the original machine that it ran on and clone the old hard drive rather than do fresh installs.
I kept making CMSes as a hobby, starting with flat files and PHP, moving to MySQL... simple things. I did it precisely because I figured that if I modified and wrote plugins for Wordpress, I would have to keep updating them on their schedule. Especially since even back then I really liked removing things I don't want, and while carrying the additions I created over to new versions might be easy enough, maintaining a stripped-down version of something like Wordpress (even 20 years ago...) would have been impossible.
I felt like a stubborn dumbass in the early 2000s (and there was also this constant mockery of "NIH syndrome" in the air), but by now I'm so glad I basically disregarded a lot of stuff and just made my own things out of the basics. And coincidentally, the last one I made has also lasted me over 12 years by now. I still love it, actually; it's just the code that is terrible. So I started a new one, to fix all the mistakes of the previous one, which mostly means cutting fewer corners, because now I know that I'll use this for way longer than I can reasonably estimate right now, so I try to be kind(er) to future me.
(But I'll also make fascinating new mistakes, because I decided to de-duplicate more or less everything at the DB level, on a whim, without prior experience or reading up on it. And then I'll write some monstrosity to pipe 12 years of content from the old CMS into the new one, and I will not break a single link even though nobody would really care. Just because I can.)
If you limit your dependencies to what’s available in your distro’s LTS or stable release, breaking changes are much less common. Living on the bleeding edge has a cost.
This got me thinking about whether any of my side projects or work projects that are in maintenance mode could qualify as "cold blooded". Conceptually, they can - I have many projects written in Go, Typescript, and Python where I could cache my dependencies (or at least the SHAs) and do what this is implying. The problem is that it stops being useful beyond proving the concept. In reality, all my projects have a slow churn that usually has to do with vulnerability updates. Maybe more aptly put: "Can I take this Go repository off the shelf, rebuild the binary, and let it run?" The answer is of course - assuming HTML and web standards haven't changed too much. The problem is that then some old vulnerability could be immediately used against it. The assumption I also made, that HTML and web standards haven't changed too much, will almost assuredly be falsy. They may not have changed enough to be breaking, but they'll certainly have changed to some degree; the same can be said for anyone who's developed desktop applications for any OS. The one constant is change. Either side of that coin seems to be a losing proposition.
I had to read this article a couple of times before I got it. I guess dependencies can make an app warm blooded, but Docker or containerization can also paper over some of these issues. However, whenever I choose libraries for a project I do a lot of research to make sure that the libraries themselves are "cold blooded" too, as even one badly chosen library can cause your project to fail in 10 years' time.
I had this experience making an iOS game. After a few years of making the game, I went back to it, and found that I was unable to get it to compile. I guess iOS games are very warm blooded. Perhaps if I had stuck with a desktop platform or web it would have remained fine? Not entirely sure.
Mobile in general is this way. For instance, on Android, if your app isn't targeting a high enough sdk version, Google will remove it after some time. If you have to upgrade your target sdk, you may find many libraries are broken (or not supported), and it also can lead to other cascades of upgrades, like having to upgrade gradle or the NDK if you use it.
I think Go's backward compatibility promise – https://go.dev/blog/compat – would make much Go software 'cold blooded' by this definition (so long as you vendor dependencies!)
Cold-blooded software seems like a great idea in spaces where the security risk and business impact are low. I can think of a lot of great hobbyist uses for this approach, like a handmade appliance with Arduino or Raspberry Pi.
The ever-evolving threat landscape at both the OS and application level makes this unviable for projects with any amount of money or sensitivity behind them. Imagine needing to handle an OS-level update and learning that you can no longer run Python 2 on the box you're running that project on. Fine for a blog, but calamitous for anything that handles financial transactions.
> Cold-blooded software seems like a great idea in spaces where the security risk and business impact are low. I can think of a lot of great hobbyist uses for this approach, like a handmade appliance with Arduino or Raspberry Pi.
I think it would be the other way around. A low-impact hobby project can use exciting, fast-moving technology because if it breaks, there is not much damage (move fast and break things). But something with high business impact should use boring, tried-and-tested technologies and have no external network dependencies (e.g. a package being available in a third-party repository at compile time or runtime). For something like that, OS updates (on an LTS branch if Linux) would be planned well ahead, and there would be no surprises like the Python 2 interpreter suddenly breaking.
If you are a bank or a store, or you handle PHI, you will have contractual obligations to maintain it. However, I still think that can be "cold-blooded" maintenance. When I update a Go project after running `govulncheck ./...`, it is generally easy. I vendor; builds and runtime only rely on systems I control.
Many large companies and business like banks and manufacturers run legacy code in ancient runtimes. The projects can be so frozen in time that nobody has the courage to touch them.
Meh, just keep a container around with py2 in it, maybe just containerize the whole app. The ultimate in vendored dependencies, short of a whole VM image.
Great article. I try to follow this advice as much as I can. My personal website (https://www.jviotti.com) runs almost purely on well established UNIX tools like Make, Pandoc, Sed, etc. Repository here: https://github.com/jviotti/website.
I have some Windows binaries from the mid 90s that I still use today. Mainly small utilities for various calculations/conversions, filesystem organisation, and the like.
One interesting thing about learning Elixir and its (+Erlang) ecosystem after 5+ years of JS/TS is that half of the most popular libraries seemed abandoned.
However, when you look closer, it turns out that most of them have just finished the work within their scope, fixed all reproducible bugs, and there's simply not much left to do.
If there's a JS dependency with the last commit in 2022, it probably won't even build. (I'm half-joking of course, but only half)
I love cold-blooded software and avoid unstable, ever-changing software as much as possible. Why? Because I can then focus 99% on delivering customer value instead of constantly rewriting code that used to work just fine but now doesn’t because some inexperienced, snowflake, narcissistic developer decided to make arbitrary changes to an API that worked perfectly fine for years!
Sorry a bit of a rant there. Unstable software makes me want to throw my PC out of the nearest window.
For many use cases, cold-blooded software is not viable. We need better tools to automate away the tedium involved in upgrading dependencies or modernizing codebases to protect against ever-evolving threats and adapt to changes in the ecosystem.
[1] https://www.npmjs.com/package/express?activeTab=versions
[2] https://www.npmjs.com/package/express
[3] https://fastify.dev/benchmarks/
[4] https://go.dev/doc/go1compat