Crowdstrike is on every machine in the hospital because hospitals and medical centers became a big target for ransomware a few years ago. That pushed medical centers to get insured against loss of business and the cost of recovering their data. The insurance companies that insure against ransomware insist on putting host-based security systems onto every machine or they won't cover losses. So Crowdstrike (or one of its competitors) has to run on every machine.
I wonder why they put software on every machine instead of relying on a good firewall and network separation.
Granted, you are still vulnerable to physical attacks (e.g. someone walking in with a USB stick), but I would say those are much more difficult, and if you also put firewalls between compartments of the internal network, more difficult still.
Also, I think using Windows in critical settings is not a good choice, and to me this was a demonstration of that. To those who say the same could have happened on Linux: yes, but you could have mitigated it. For example, in my view a Linux system used in critical settings should have a read-only root filesystem, which you can't do on Windows. Then the worst you would have had to do is reboot the machine to restore it.
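To make that concrete, here is a minimal sketch of the kind of check I mean, just reading /proc/mounts on a standard Linux box (purely illustrative, not a hardening guide):

```python
#!/usr/bin/env python3
"""Sketch: report whether the root filesystem is mounted read-only."""

def root_is_readonly(mounts_path="/proc/mounts"):
    # Each line of /proc/mounts is: device mountpoint fstype options dump pass
    with open(mounts_path) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "/":
                return "ro" in fields[3].split(",")
    return False  # root mount not found; treat as unverified

if __name__ == "__main__":
    if root_is_readonly():
        print("root filesystem is read-only")
    else:
        print("WARNING: root filesystem is writable")
```

The point isn't this particular script, it's that on Linux you can enforce and verify that state at all; state that survives a bad update only needs a reboot to come back clean.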
A common attack vector is phishing, where someone clicks on an email link and gets compromised or supplies credentials on a spoofed login page. External firewalls cannot help you much there.
Segmenting your internal network is a good defence against lots of attacks, to limit the blast radius, but it's hard and expensive to do a lot of it in corporate environments.
Yup, as you say, if you go for a state-of-the-art firewall, then that firewall also becomes a point of failure. Unfortunately, complex problems don't go away by saying the word "decentralize".
> I wonder if those same insurance policies are going to pay out due to the losses from this event?
They absolutely should be liable for the losses, in every case where they caused them.
(Which is most of them. Most companies install crowdstrike because their auditor wants it and their insurance company says they must do whatever the auditor wants. Companies don't generally install crowdstrike out of their own desire.)
But of course they will not pay a single penny. Laws need to change for insurance companies, auditors and crowdstrike to be liable for all these damages. That will never happen.
Depends on what the policy (contract) says. But there's a good argument that your security vendor is inside the wall of trust at a business, and so not an external risk.
In a sense, it looks like these insurance companies' policies work a little bit like regulation. Except that it's not monopolistic (different companies are free to have different rules), and when shit hits the fan, they actually have to put their money where their mouth is.
Despite this horrific outage, in the end it sounds like a much better and more anti-fragile system than a government telling people how to do things.
A little bit, probably slightly better. But insurance companies don't want to eliminate risk (if they did that, no one would buy their product). They instead want to quantify, control and spread the risk by creating a risk pool. Good, competent regulation would be aimed at eliminating, as much as reasonably possible, the risk. Instead, insurance company audits are designed to eliminate the worst risk and put everyone into a similar risk bucket. After spending money on an insurance policy and passing an audit, why would a company spend even more money and effort? They have done "enough".
> The insurance companies that insure companies against ransomware insist on putting host based security systems onto every machine or they won't cover losses.
This is part of the problem too. These insurance/audit companies need to be made liable for the damage they themselves cause when they require insecure attack vectors (like Crowdstrike) to be installed on machines.
Crowdstrike and its ilk are basically malware. There have to be better anti-ransomware approaches, such as replicated, immutable logs for critical data.
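For illustration, an "immutable log" can be as simple as an append-only file where each record chains a hash of the previous one, so later tampering is detectable; the real protection comes from replicating it to storage the attacker can't rewrite (e.g. WORM/object-lock media). A rough sketch, with made-up names:

```python
import hashlib
import json
import time

def _entry_hash(entry):
    payload = json.dumps({k: entry[k] for k in ("ts", "data", "prev")}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_record(log_path, record):
    """Append a record whose hash chains to the previous entry."""
    prev_hash = "0" * 64
    try:
        with open(log_path, "rb") as f:
            lines = f.read().splitlines()
        if lines:
            prev_hash = json.loads(lines[-1])["hash"]
    except FileNotFoundError:
        pass
    entry = {"ts": time.time(), "data": record, "prev": prev_hash}
    entry["hash"] = _entry_hash(entry)
    with open(log_path, "a") as f:          # append-only by convention;
        f.write(json.dumps(entry) + "\n")   # replication is what makes it durable
    return entry

def verify_chain(log_path):
    """Walk the log and confirm every entry's hash and back-pointer match."""
    prev_hash = "0" * 64
    with open(log_path) as f:
        for line in f:
            entry = json.loads(line)
            if entry["prev"] != prev_hash or entry["hash"] != _entry_hash(entry):
                return False
            prev_hash = entry["hash"]
    return True
```

None of that stops an intrusion, but it changes the economics: if the critical data can always be reconstructed from a log the attacker can't silently rewrite, encryption-for-ransom loses most of its leverage.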
2. Why would anyone trust a ransomware perpetrator to honor a deal to not reveal or exploit data upon receipt of a single ransom payment? Are organizations really going to let themselves be blackmailed for an indefinite period of time?
3. I'm unconvinced that crowdstrike will reliably prevent sensitive data exfiltration.
1. Double extortion is the norm; some groups don't even bother with the encryption part anymore, they just demand a ransom for not leaking the data.
2. Apparently yes. Why do you think calls to ban payments exist?
3. At minimum it raises the bar for the hackers - sure, it's not like you can't bypass edr, but it's much easier if you don't have to bypass it at all because it isn't there.
I agree edr is not a DLP solution, but edr is there to prevent* an attack from getting to the point where staging the data exfil happens... in which case, yes, I would expect web/volumetric DLP to kick in as the next layer.
*Ok ok I know it's bypassable but one of the happy paths for an attack is to pivot to the machine that doesn't have edr and continue from there.
By "decentralized" I think you mean "doesn't auto-update with new definitions"?
I have worked at places which controlled the roll-out of new security updates (and Windows updates) for this very reason. If you invest enough in IT, it is possible. But you have to have a lot of money to invest in IT to have people good enough to manage it. If you can get SwiftOnSecurity to manage your network, you can have that. But can every hospital, doctor's office, pharmacy, scan center, etc. get top-tier talent like SwiftOnSecurity?
I used to work for a major retailer managing updates to over 6000 stores. We had no auto updates (all Linux systems in the stores) and every update went through our system.
When it came to audit time, the auditors were always impressed that our team had better timely updates than the corporate office side of things.
I never really thought we were doing anything all that special (in fact, there were always many things I wanted to improve about the process), but reading about this issue makes me think that maybe we really were just that much better than the average IT shop?
If, for example, they were doing slow rollouts for configs in addition to binaries, they could have caught the problem in their canary/test envs and not let it proceed to a full blackout.
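A staged rollout for config/definition pushes doesn't have to be fancy. A hand-wavy sketch of the ring idea (the ring names and the deploy_update/host_healthy hooks are hypothetical placeholders, not any vendor's real pipeline):

```python
import time

# Hypothetical rings: each one has to stay healthy before the next one gets the update.
ROLLOUT_RINGS = [
    ("canary",   ["test-lab-01", "test-lab-02"]),
    ("early",    ["site-0001", "site-0002", "site-0003"]),
    ("everyone", [f"site-{i:04d}" for i in range(4, 6001)]),
]

def deploy_update(host, payload):
    """Placeholder: push the config/definition update to one host."""
    ...

def host_healthy(host):
    """Placeholder: whatever health signal you trust (heartbeats, boot loops, etc.)."""
    return True

def staged_rollout(payload, soak_seconds=3600, max_failure_rate=0.01):
    for ring_name, hosts in ROLLOUT_RINGS:
        for host in hosts:
            deploy_update(host, payload)
        time.sleep(soak_seconds)  # let problems surface before widening the blast radius
        failures = sum(1 for h in hosts if not host_healthy(h))
        if failures > max_failure_rate * len(hosts):
            raise RuntimeError(
                f"halting rollout: {failures} unhealthy hosts in ring '{ring_name}'"
            )
```

If the canary ring had blue-screened, the rollout would have stopped there instead of proceeding to everyone at once.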
When I say decentralized, I mean security measures and updates taken locally at the facility. For example, MRI machines are local, and they get maintained and updated by specialists dispatched by the vendor (Siemens or GE).
Siemens or GE or whoever built the MRI machine aren't really experts in operating systems, so they just use one that everyone knows how to work: MS Windows. It's unfortunate that to do the things necessary for modern medicine these machines need to be networked together with other computers (to feed the EMRs, most importantly), but it is important in making things safer. And these machines are supposed to have 10-20 year lifespans (depending on the machine)! So now we have a computer sitting on the corporate network, attached to a 10-year-old machine, and that is a major vulnerability if it isn't protected, patched, and updated. So is GE or Siemens going to send out a technician to every machine every month when the new Windows patch rolls out? If not, for how long is the computer sitting on the network left vulnerable?
Healthcare IT is very important, because computers are good at record-keeping, retrieval and storage, and that's a huge part of healthcare.
A large hospital takes in power from multiple feeds in case any one provider fails. It's amazing that we're even thinking in terms of "a security company" rather than "multiple security layers."
The fact that ransomware is still a concern is an indication that we've failed to update our IT management and design appropriately to account for it. We took the cheap way out and hoped a single vendor could just paper over the issue. Never in history has this ever worked.
Also, speaking of generators, a large enough hospital should be running power-failure test events periodically. Why isn't a "massive IT failure test event" ever part of the schedule? Probably because they know they have no reasonable options and any scale of catastrophe would be too disastrous to even think about testing.
It's a lesson on the failures of monoculture. We've taken the 1970s design as far as it can go. We need a more organically inspired and rigorous approach to systems building now.
This. The 1970s design of the operating system and the few companies that deliver us the monoculture are simply not adequate or robust given the world of today.