"There might be an outage" is the stupidest reason I have ever heard to advocate not applying security patches.
There's plenty of companies that are able to regularly patch and upgrade software, incident free. If your team isn't able to do this, they need to fix their process instead of putting their head in the sand.
I am not defending the totality of EFX's process by any means. I am just pointing out that "install every high severity patch right away" is not as easy as it looks.
Others have pointed out, for example, that this patch was not available for all current versions. So, to install the patch you need to upgrade. Oh, that breaks <dependency>. So we need to upgrade <other thing>. But that regresses a feature we use...
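To make the cascade concrete, here is a minimal sketch of the version check that starts it. The framework version and the "<2.4" pin are made-up stand-ins, not Equifax's actual stack:

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    # Hypothetical pins, just to illustrate the cascade described above.
    patched_framework = Version("2.5.1")       # the release that carries the security fix
    plugin_constraint = SpecifierSet("<2.4")   # what some other component is pinned against

    # The patched release falls outside the pin, so taking the patch
    # means upgrading (or replacing) the other component too.
    print(patched_framework in plugin_constraint)  # False

Every False like that is another team, another regression test, another change request before the "simple" patch can ship.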
The major issue with EFX was the fragility of their overall architecture. A single weakness should not be enough to lay the whole DBMS open.
The tech industry has to move more towards declarative systems, rather than procedural ones. With Docker, Kubernetes, and similar tech, updates can be rolled out in the background without downtime. It's impossible to maintain thousands of VMs without some sort of declarative system.
It actually decreases complexity. Updating 1000 VM servers using these technologies can be made automatic. True, it provides an additional attack surface, but the gains of having everything always patched are well worth it.
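As a rough illustration of what "declarative" buys you: you tell the orchestrator the new desired state and it performs the rollout. A minimal sketch using the Kubernetes Python client, assuming a hypothetical "web" Deployment in a "prod" namespace with a rolling-update strategy already configured:

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (or use load_incluster_config()).
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Declare the new desired state: same Deployment, newer (patched) image.
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        {"name": "web", "image": "registry.example.com/web:1.2.4"}
                    ]
                }
            }
        }
    }

    # Kubernetes reconciles toward this state with a rolling update,
    # replacing pods gradually so the service stays up throughout.
    apps.patch_namespaced_deployment(name="web", namespace="prod", body=patch)

The same one-line change in desired state rolls out to one replica or a thousand; that's the point being made about scale.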
> Alternatively, there are sometimes political reasons. A director may not want to risk an outage for fear of being dressed down by their VP.
This is such a weird thing to me. I've never worked anywhere like this, so I have a hard time really imagining it. Wouldn't that same director get more than a dressing-down from the VP if a _known security vulnerability_ was ignored on their watch and the system was breached? I just can't understand this way of thinking.
It's easy to understand once you get just how simplistic it is. The VP will deliver a dressing down if the company loses money and the blame can be placed on the director.
The cost of an outage is very easy to quantify (revenue per minute the system is down), and the probability that something will go wrong while applying the patch is also somewhat easy to predict, and usually greater than zero. If the patch causes an outage, the director will be blamed for it with certainty, since he approved the change.
The cost of a security breach is difficult to quantify; it depends on what gets breached and how badly. Note here that I say breach, not vulnerability. Even with a known security vulnerability, it's not immediately obvious in all systems what the consequences would be; there may be mitigations in place outside the software that reduce the potential damage, or there may be unknown vulnerabilities, exploitable via the known one, that would make a breach worse. The lack of certainty about the consequences also means the director may be able to avoid blame if the breach is minor ("how was I supposed to know that other team is still using MD5?"). And if there is no breach, there is nothing for the director to be blamed for.
Given that the director would like to avoid being dressed down, he will be more inclined to delay patching than to risk causing an outage: the costs of an outage are easy to predict and he will take all the blame for it, whereas the breach may never happen, and even if it does, it may cost him personally less than the outage would.
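Put numbers on it and the perverse incentive is obvious. These figures are invented purely for illustration; only the asymmetry in what the director personally expects to pay matters:

    # All numbers below are made up; only the asymmetry matters.
    revenue_per_minute = 5_000          # outage cost is easy to quantify
    expected_outage_minutes = 90        # if the patch goes wrong
    p_patch_goes_wrong = 0.05           # known from past maintenance windows

    p_breach_while_unpatched = 0.01     # hard to estimate, feels small
    breach_cost_to_director = 50_000    # blame is diffuse, may land elsewhere

    expected_cost_of_patching = p_patch_goes_wrong * revenue_per_minute * expected_outage_minutes
    expected_cost_of_waiting = p_breach_while_unpatched * breach_cost_to_director

    print(expected_cost_of_patching)    # 22500.0
    print(expected_cost_of_waiting)     # 500.0

The company's true breach cost may be orders of magnitude higher, but that isn't the number the director is optimizing.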
If this still seems weird, it might be because you are someone who views patching as an easy thing to do, because you probably work for a software company. Software companies are used to managing changing software, and have all kinds of practices around minimizing the risks of doing so. Non-software companies typically find patching to be hard and costly because their core business is something else; changes can disturb the "something else."