Seems like a possible plan would be duplicate computer systems that are using last week's backup and not set to auto-update. Doesn't cover you if the databases and servers go down (unless you can have spares of those too), but if there is a bad update, a crypto-locker, or just a normal IT failure, each department can switch to a slightly stale computer instead of very stale paper. A sketch of the idea follows.
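To make that concrete, here's a minimal sketch in Python of the "standby stays a week behind" idea. The ring model, names, and the seven-day soak window are all my own assumptions for illustration, not any vendor's feature:

```python
# Illustrative sketch only: a standby ring that applies updates a week late,
# so a bad update caught during the soak period never reaches the standby.
from datetime import datetime, timedelta

SOAK_DAYS = 7  # hypothetical delay before the standby takes an update

class UpdateRing:
    def __init__(self, name: str, delay: timedelta):
        self.name = name
        self.delay = delay
        self.applied: list[str] = []

    def apply_pending(self, updates: list[tuple[str, datetime]], now: datetime):
        # Only apply updates older than this ring's delay window.
        for update_id, released in updates:
            if now - released >= self.delay and update_id not in self.applied:
                self.applied.append(update_id)

production = UpdateRing("production", delay=timedelta(0))
standby = UpdateRing("standby", delay=timedelta(days=SOAK_DAYS))

updates = [("patch-2024-07-19", datetime(2024, 7, 19))]
now = datetime(2024, 7, 20)
production.apply_pending(updates, now)  # takes the patch immediately
standby.apply_pending(updates, now)     # still clean; safe to fail over to
```

The whole scheme only works if failing over to the standby ring is genuinely routine, which is exactly where the replies below push back.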
We have "downtime" systems in place, basically an isolated Epic cluster, to prevent situations like this. The problem is that this wasn't a software update that was downloaded by our computers, it was a configuration change by Crowdstrike that was immediately picked up by all computers running its agent. And, because hospitals are being heavily targeted by encryption attacks right now, it's installed on EVERY machine in the hospital, which brought down our Epic cluster and the disaster recovery cluster. A true single point of failure.
Can only speak for the UK here, but having one computer system that is sufficiently functional for day-to-day operations is often a challenge, let alone two.
There are often such plans, from DR systems to isolated backups to secondary systems, as far as the risk-management budget allows at least. Of course it takes time to switch over to these and back, and the missing records cause chaos in both directions (both inside synced systems and with patient data), as sketched below. On top of that, not every system will be covered, so you're still operating in a degraded state.
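A rough sketch of that record-reconciliation problem after switching back. The record shape and the conflict rule are illustrative assumptions, nothing Epic-specific:

```python
# Hypothetical merge of records entered on the downtime system back into
# the primary; anything changed on both sides gets flagged for human review.
from dataclasses import dataclass

@dataclass
class Record:
    patient_id: str
    field: str
    value: str
    updated_at: int  # epoch seconds

def merge(primary: list[Record], downtime: list[Record]):
    merged = {(r.patient_id, r.field): r for r in primary}
    conflicts: list[tuple[Record, Record]] = []
    for r in downtime:
        key = (r.patient_id, r.field)
        if key in merged and merged[key].value != r.value:
            conflicts.append((merged[key], r))  # changed on both sides
        else:
            merged[key] = r
    return list(merged.values()), conflicts
```

Even a simple rule like this leaves a pile of conflicts that someone has to resolve by hand, which is where much of the post-switchover chaos comes from.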