What are the institutional reasons that "frequent, small" software releases are never the first thing companies turn to? I mean, iterative development and TDD have been around since the 50s, but it seems that every single company has to rediscover the idea before trying it.
I think it's because more "control" and "testing" over changes sounds better and safer at first glance than speed-to-deploy does. It's easier to choose the option that sounds stable.
But then over time, companies realize that their tendency to stack bureaucratic gate on top of gate has begun to harm them: they can't ship a simple update in anything less than a few months.
Frequent and small deployments sound riskier, and tougher, at first.
Helping people re-think this perverse risk assessment has become a sort of personal mission for me. One of my most rewarding experiences was setting up automated testing and deployment from day one for a new startup.
Hmmm...that last sentence doesn't totally jibe with me. As a business guy, I'd rather diversify my risk over many deployments than over one deployment every six months.
The word "control" probably has more to do with it. The option that makes management more important (big requirements documents, for example) is probably the one management will choose.
I can tell you what the problem is: overzealous postmortems.
The head of engineering wants to be able to say what went wrong and how to fix it next time. That's probably fine if the fix is better automation and failure detection. But once you start postmorteming a situation where you dropped a few requests to a minor service, the whole process turns into a pile of shit: the engineers and operations team are scared, management feels in control, and the product managers are left screwed because engineering and operations no longer take risks and everything takes forever.
Engineering management doesn't stop to look at the cost of what they're doing, and when software takes forever to release, they don't look in the mirror for the blame. I once received a weekly technical email reporting that 10 requests were "throttled" during a deployment. That's great! We don't need any "next steps"; our failover worked fine.
Control is part of it, but it's also that it's genuinely hard to make features work in a heterogeneous environment. The temptation is to develop and test against a fully deployed environment, when really you should test probable configurations with several pieces either missing or performing badly.
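A cheap way to start on that is fault injection in the test suite. A minimal sketch of the pattern (every name here - render_homepage, fetch_recommendations - is invented for illustration; the "performing badly" case is the same pattern with a slow side_effect instead of an error):

    # Minimal sketch: exercise a feature with a dependency missing.
    # All names here are invented for illustration.
    import unittest
    from unittest import mock

    def render_homepage(fetch_recommendations):
        # Build a page, degrading gracefully if recommendations fail.
        try:
            recs = fetch_recommendations()
        except ConnectionError:
            recs = []  # degrade: render without that section
        items = "".join(f"<li>{r}</li>" for r in recs)
        return f"<html><ul>{items}</ul></html>"

    class DegradedDependencyTest(unittest.TestCase):
        def test_survives_recommendations_outage(self):
            # Stand in for the dependent service being down entirely.
            down = mock.Mock(side_effect=ConnectionError("service down"))
            page = render_homepage(down)
            self.assertIn("<html", page)             # page still renders
            self.assertEqual(page.count("<li>"), 0)  # just without recs

    if __name__ == "__main__":
        unittest.main()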
Frequent, smaller releases take more work upfront. You have to build the process first - lots of automated testing, a way to deploy automatically, etc.
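Even a single gating script is the seed of that process. A sketch of the idea, assuming pytest for the test suite and a deploy.sh that does the actual rollout (both are stand-ins for whatever your stack uses):

    #!/usr/bin/env python3
    # Sketch: one script that gates every deploy on the test suite.
    # "pytest" and "./deploy.sh" are placeholders, not a real setup.
    import subprocess
    import sys

    def run(cmd):
        # Echo the command, run it, and report its exit status.
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode

    def main():
        if run(["pytest", "-q"]) != 0:               # automated testing first
            sys.exit("tests failed; not deploying")
        if run(["./deploy.sh", "production"]) != 0:  # then automated deploy
            sys.exit("deploy failed; roll back and investigate")
        print("deployed")

    if __name__ == "__main__":
        main()

Once something like that exists, a release stops being an event, and the small-and-frequent option stops sounding risky.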
dBase II is the classic example - sales people don't want $SOFTWARE v1.05, they want The New Version. They can then sell that upgrade and the extra features, and PHBs get caught up in that. Kaizen and iteration don't feel exciting; you don't get a release buzz.