I'm going to make a conscious effort not to come off sounding like an asshole, but please excuse me if I slip up. I find many of the ideas in the post to be fundamentally at odds with the direction that "good software development" should be travelling in. The core of my feeling is best captured by the following quote from the post:
>As a system grows in complexity we don’t necessarily care about how old b-threads have been written, hence we don’t care about maintaining them.
This post is essentially formalizing the process of creating a Big Ball of Mud[0] that is so complex and convoluted that it is impossible to understand. The motivation for formalizing this process seems sane and well-intentioned: to add functionality quickly to code you don't really understand. Normally, doing something like this is considered cutting corners and incurring explicit technical debt, and it should be done sparingly and responsibly. However, "append-only development" embraces that corner cutting and technical debt as a legitimate development process. I can't get on board with this.
To be more specific, with an example (and maybe I am wrong in understanding the post; this would be the time to point that out to me): let's suppose you have a massive, complex software system that was built over the years with this "append-only" style of development. One day you find a nasty bug in one of the lower layers, and to correct it you have to change some functionality, which moves/removes some events that subsequent layers depend on for their own functionality. Suddenly you are faced with rewriting all of those layers in order to adapt to your bugfix. What you're left with is a nightmare of changes that disturb many layers of functionality, because they're all based on this append-only diffing concept: each layer depends on the functionality of the previous one.
This is what programming APIs are for: to change functionality in lower layers with minimal influence on subsequent layers. This post and process seem to be imagining a world without APIs.
[0] http://www.laputan.org/mud/
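To make the coupling concrete, here is a toy sketch of the kind of system I'm picturing (my own reading of the b-thread idea, not the post's actual library; run() and all the event names are invented). Each b-thread is a generator that declares which events it requests, waits for, or blocks, and each later "layer" only works because it waits on events emitted by an earlier one:

    # Minimal b-thread scheduler sketch: pick a requested, non-blocked event,
    # then advance every b-thread that requested or waited for it.
    def run(bthreads):
        pending = {bt: next(bt) for bt in bthreads}      # current statement per b-thread
        while pending:
            blocked = {e for s in pending.values() for e in s.get("block", ())}
            candidates = [e for s in pending.values() for e in s.get("request", ())
                          if e not in blocked]
            if not candidates:
                return                                   # nothing selectable: done
            event = candidates[0]                        # naive selection policy
            print("selected:", event)
            for bt, s in list(pending.items()):
                if event in s.get("request", ()) or event in s.get("wait", ()):
                    try:
                        pending[bt] = bt.send(event)     # resume the b-thread
                    except StopIteration:
                        del pending[bt]

    def layer_one():                       # oldest layer: emits the base event
        yield {"request": ["item_added"]}

    def layer_two():                       # appended later: built on layer_one's event
        yield {"wait": ["item_added"]}
        yield {"request": ["count_updated"]}

    def layer_three():                     # appended later still
        yield {"wait": ["count_updated"]}
        yield {"request": ["ui_refreshed"]}

    run([layer_one(), layer_two(), layer_three()])
    # selected: item_added, then count_updated, then ui_refreshed.
    # If a bugfix renames or drops "item_added", layer_two and layer_three
    # silently never run -- exactly the rewrite cascade described above.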
The thing is, I might be able to see this "working" in very simple and limited processes[0]... But going back to your example of the nasty bug in complex software, I believe the post's idea is that you shouldn't fix the bug in the layer where it's happening, but instead write a new b-thread that corrects that behavior.
Which might sound nice in theory, but I feel it's much more likely that either you won't be able to fix it there (the info you need might be lost in a previous b-thread) or the fix will become a piece of code so complex that it negates any supposed benefit this system had in the first place.
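For what it's worth, here is roughly what I understand the "append a fix" idea to look like, reusing the toy run() scheduler sketched upthread (again, invented names, not the post's code):

    def buggy_feature():
        # old layer we are not supposed to edit: emits a wrong value as an event
        yield {"request": ["total_is_90"]}

    def hotfix():
        # appended later: suppress the wrong event and emit the corrected one
        yield {"request": ["total_is_100"], "block": ["total_is_90"]}
        while True:
            yield {"block": ["total_is_90"]}     # keep the bad event suppressed

    run([buggy_feature(), hotfix()])
    # selected: total_is_100

This only works if the corrected value can be recomputed inside the hotfix; if the data needed to compute it lives inside an earlier b-thread and was never exposed as an event, the appended fix has nothing to work with, which is exactly my worry.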
[0] It might be just me and completely off-topic, but this reminds me a bit of rule-based expert systems, where rules get activated by certain conditions and produce effects (which can activate other rules). The idea was always that you could model very complex (and emergent) behaviors with very simple, human-readable rules. The thing is, you could definitely add/tweak/remove the rules as needed.
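A toy forward-chaining loop shows the flavor I mean (completely made up, nothing to do with the post): firing one rule adds facts that can enable other rules, yet every rule stays editable.

    # Toy forward-chaining rule engine: each rule is (conditions, effect).
    facts = {"order_placed"}
    rules = [
        ({"order_placed"}, "payment_captured"),
        ({"payment_captured"}, "receipt_sent"),
    ]

    changed = True
    while changed:
        changed = False
        for conditions, effect in rules:
            if conditions <= facts and effect not in facts:
                facts.add(effect)        # firing this rule may activate others
                changed = True

    print(facts)   # {'order_placed', 'payment_captured', 'receipt_sent'}
    # Unlike the append-only proposal, a wrong rule can simply be edited or removed.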
"Good software development"... in 15 years of programming at 7 different companies I've never seen a good manageable piece of software. Our current paradigms do not work. I for one welcome anything that offers an improvement on the current clusterfuck.
Many more years, many more companies; there are good examples, but they are not 'in companies' (I am thinking Redis, SQLite, etc.). Especially the 'Fortune 1000' (local or global) have the most terrible software imaginable, in my experience. Yet it works and, well, they belong to the Fortune 1000, so apparently it is not that bad. But it is very badly written software, and I agree that we must explore better ways of writing software. I just do not think this is one of them. It cannot hurt to explore, though.
This is really due to software engineering being treated as a cost center. When viewed this way, the business always attempts to drive cost to rock bottom, which then means software that only accrues technical debt, because the business will never pay to reduce the debt, only to get new features. And yes, this is a totally false dichotomy, because eventually that debt means all those future features are more costly. But something about boiling frogs...
The only alternative I've seen, short of completely rethinking company structure, is engineering management rebuking the business and pushing for these initiatives. Which can be inadvisable from a career perspective, so it usually does not happen. It's more politically savvy to push for a "new project" that will fix all the issues of the existing systems.
You are comparing a downside of one approach with the upside of another. You can just as well do the opposite: when adding a new feature or fixing a bug in a feature using mainstream programming styles, you need to touch many code units that also play a role in other features. Behavioral Programming seeks to organize the code by features. It is somewhat similar to the expression problem[1], which is also about maintainability in the face of different kinds of changes: some would be easier in one approach and harder in the other, and some would be the opposite. Without any real data you cannot possibly determine which of those would be preferable in the "common case", because you don't know what the common case is. Here the question is: which is more common, a change that affects few code units but many features (a win for mainstream styles), or a change that affects many code units but few features (a win for behavioral programming)?
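To illustrate the trade-off with a toy example of my own (reusing the run() scheduler sketched upthread): in a mainstream layering, a "pause playback during a phone call" feature means edits to the player, the call monitor and probably the UI; in the behavioral style the whole feature is one appended b-thread.

    def phone():                           # existing unit: untouched by the new feature
        yield {"request": ["call_started"]}
        yield {"request": ["call_ended"]}

    def player():                          # existing unit: untouched by the new feature
        for _ in range(3):
            yield {"request": ["play_chunk"]}

    def pause_during_calls():              # the entire new feature lives here
        while True:
            yield {"wait": ["call_started"]}
            yield {"wait": ["call_ended"], "block": ["play_chunk"]}

    run([phone(), player(), pause_during_calls()])
    # No "play_chunk" is selected between "call_started" and "call_ended".

Conversely, a change that cuts across many such b-threads is now the expensive kind of change, which is the point about not knowing the common case.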
Your example doesn't really make much sense -- the whole point of append-only development is that you never have to make changes to the inner layers of your software. The bug would simply be fixed by appending new code that corrects the undesired behaviour.