I do not think Windows is the problem here. The problem, imo, is that equipment that is critical infrastructure is being connected to the internet. There is little reason for a lot of computers in some settings to be connected to the internet, except for convenience or negligence. If data transfer needs to be done, it can happen through another computer. Some systems should exist on a (more or less) isolated network at most. Too often we do not really understand the risk of a device being connected to the internet until something like this happens.
Why would a machine that is required for an MRI machine to work (as one of the examples given in the thread here) need to be online? I understand about logging, though even then I think it is too risky. Do all these machines _really_ need to be online, or has nobody bothered even after all the times something has happened, or, even worse, do software companies profit in certain ways and not want to change their models? Can we imagine no other way to do things apart from connecting everything to some server, wherever that is?
MRI readouts are 3D, so they can't be printed for analysis. They are gigabytes in size, and the units are usually in a different part of the building. So you could sneakernet CDs every time an MRI is done, then sneakernet the results back. Or you could batch it, and then analysis is done slowly and all at once. OR you could connect it to a central server and results/analysis can be available instantly.
Smarter people than us have already thought through this and the cost-benefit analysis said "connect it to a server"
So in that case you set up a NAS server that it can push the reports to, with everything else firewalled off.
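To make "push the reports" concrete: a rough sketch in Python, assuming the NAS speaks SFTP with key-based auth. The hostname, paths and key file are made up for illustration, not how any particular modality vendor actually does it.

  #!/usr/bin/env python3
  """Rough sketch: push finished study exports to the one NAS host the
  firewall allows, over SFTP. Hostname, paths and key file are placeholders."""
  from pathlib import Path

  import paramiko

  NAS_HOST = "nas.imaging.internal"       # the only outbound destination permitted
  EXPORT_DIR = Path("/data/mri/exports")  # where the modality software drops results
  REMOTE_DIR = "/studies/incoming"

  def push_exports() -> None:
      client = paramiko.SSHClient()
      client.load_system_host_keys()
      # Refuse hosts whose keys we haven't pinned ahead of time.
      client.set_missing_host_key_policy(paramiko.RejectPolicy())
      client.connect(NAS_HOST, username="mri-export",
                     key_filename="/etc/mri/export_key")
      sftp = client.open_sftp()
      try:
          for f in sorted(EXPORT_DIR.glob("*.zip")):
              sftp.put(str(f), f"{REMOTE_DIR}/{f.name}")
              f.unlink()  # drop the local copy once it has landed on the NAS
      finally:
          sftp.close()
          client.close()

  if __name__ == "__main__":
      push_exports()

The point is that the scanner-side box never needs a default route to the internet; the NAS (or whatever sits in front of PACS) is the only thing reachable from that segment.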
It's just laziness, and to be honest, an outage like this has no impact on their management's reputation, as a lot of other poorly run companies and institutions were also impacted, so the focus is on CrowdStrike and Azure, not them.
I admit I'm not a medical professional, but these sound like problems with better solutions than lots of internet-connected terminals that can be taken down by EDR software.
Why not an internal-only network for all the terminals to talk to a central server, then disable any other networking for the terminals? Why do those terminals need a browser, which is where pretty much any malware is going to enter from? If hospitals are paying out the ass for their management software from Epic etc., they should be getting something with a secure design. If the central server is the only thing that can be compromised, then when EDR takes it down you at least still have all your other systems, presumably with cached data to work from.
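The "disable any other networking" part is genuinely small. Here's a rough sketch of the idea for a Linux box using nftables (placeholder addresses; in practice the ruleset would be pushed by management tooling, and on Windows terminals the equivalent would be firewall policy via GPO):

  #!/usr/bin/env python3
  """Sketch: restrict a terminal's outbound traffic to the internal EHR server
  and internal DNS only. Addresses are placeholders; this is illustrative,
  not a hardening guide."""
  import subprocess
  import tempfile

  RULESET = """
  table inet terminal_lockdown {
    chain output {
      type filter hook output priority 0; policy drop;
      oifname "lo" accept
      ct state established,related accept
      ip daddr 10.20.0.10 tcp dport { 443, 8443 } accept comment "central EHR server"
      ip daddr 10.20.0.53 udp dport 53 accept comment "internal DNS"
    }
  }
  """

  def apply_ruleset() -> None:
      # Write the ruleset to a temp file and load it with nft -f (needs root).
      with tempfile.NamedTemporaryFile("w", suffix=".nft", delete=False) as f:
          f.write(RULESET)
          path = f.name
      subprocess.run(["nft", "-f", path], check=True)

  if __name__ == "__main__":
      apply_ruleset()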
Many X-rays (MRIs, CT scans, etc.) are read and interpreted by doctors who are remote. There are firms for whom that's all they do: provide a way to connect radiologists and hospitals, and handle the usual back-end business work of billing, HR, and so on. Search for "teleradiology".
Same goes for electronic medical records. There are people who assign ICD-10 codes (insurance billing codes) to patient encounters. Often this is a second job for them, and they work remotely, typically at odd hours.
A modern hospital cannot operate without internet access. Even a medical practice with a single doctor needs it these days to file insurance claims, access medical records from referred patients, and for the myriad other reasons we all use the internet today.
Okay, so (as mentioned elsewhere in this thread), connect the offline box to an online NAS with the tightest security humanly possible between the two. You can get the relevant data out to those who need it.
This stuff isn't impossible to solve. Rather, the incentives just aren't there. People would rather build an apparatus for blame-shifting than actually build a better solution.
Do you think everyone involved is physically present? The GP was absolutely right that you guys have no idea how modern healthcare works, and this had nothing to do with externally introduced malware.
This sounds a bit like someone just got run over by a truck because the driver couldn't see them, so people ask why trucks are so big that they're dangerous, and the response is "you just don't know how trucks work" rather than "yeah, maybe drivers should be able to see pedestrians".
If modern medicine is dangerous and fragile because of network-connected equipment, then that should be fixed, even if the way it currently works doesn't allow it.
This is a completely different discussion. They absolutely should be reliable. The part that is a complete non-starter is not being networked, because it ignores that telemedicine, PACS integration, and telerobotics exist.
If you don't understand why it has to be networked, with paper being an extremely bad fallback, then I suggest working in healthcare for a bit before pontificating on how everything should just go back to the stone age.
Networking puts their reliability at risk, as shown here and as shown in ransomware cases. It is not the first time something like this has happened.
The question is not whether hospitals need internet at all, or whether they should go back to printing things on paper or whatever; nobody ever said that. The question is whether everything in the hospital should be connected to the internet. Again, the example used was simple: having the computer that processes and exports the data from an MRI machine connected to the internet in order to transfer the data, vs. using a separate computer to transfer the data while the first computer stays offline. This is how we are supposed to transfer similar data at my work, for security reasons. I am not sure why it cannot happen there. If you cannot transfer data through that computer, there could be an emergency backup plan. But you only need to solve the data-transfer part. Not everything.
You don't print the images an MRI produces; you transmit them to the people who can interpret them, and they are almost never in the same room as the big machine, and sometimes they need to be called up in a different office altogether.
The comment [0] mentioned that they could not get at the MRI outputs at all, even with the radiologist coming on site. Obviously, the software that was processing/exporting the data was running on a computer that was connected to the internet, if it did not require an internet connection itself. Data transfer can happen from a different computer than the one the data is processed/obtained on. Less convenient, but this is common practice in many other places for security and other reasons.
I mean, this is incentivized by current monetization models. Remove the need to go through a payment-based -aaS infra, and all the libraries to do the data visualization could be running on the MRI dude's PC.
-aaS by definition requires you to open yourself to someone else to let them do the work for you. It doesn't empower you, it empowers them.
Yeah, I suspect -aaS monetisation models are one of the reasons for the current connect-everything-to-the-internet mess. However, such software running on the machine, using a hardware USB key for authentication, is not unheard of either. I wish that decisions on these subjects were made based on the specific needs of the users rather than on the finance people of -aaS companies.
Is that an ironic question, or a serious one? I fail to detect the presence or absence of irony sometimes online. I just hope that my own healthcare system has some backup plans for how to do day-to-day operations, like transferring my scan results to a specialist, in case the system they normally use fails.
"It seems like you’ve never worked with critical infra."
My entire career has been spent building, and maintaining, critical infra.[1]
Further, in my volunteer time, I come into contact with medical, dispatch and life-safety systems and equipment built on Windows and my question remains the same:
Why is Windows anywhere near critical infra?
Just because it is common doesn't mean it's any less shameful and inadequate.
I repeat: We've fully understood these risks and frailties for 25 years.
[1] As a craft, and a passion - not because of "exciting career opportunities in IT".
Is this the rsync.net HN account? If so, lmao @ the comment you replied to.
> As a craft, and a passion
I believe you've nailed the core problem. Many people in tech are not in it because they genuinely love it, do it in their off time, and so on. Companies, doubly so. I get it, you have to make money, but IME, there is a WORLD of difference in ability, and in the ability to solve problems on your own, between those who love this shit and those who just do it for the money.
What's worse is that actual fundamental knowledge is being lost. I've tried at multiple companies to shift DBs off of RDS / Aurora and onto, at the very least, EC2s.
“We don’t have the personnel to support that.”
“Me. I do this at home, for fun. I have a rack. I run ZFS. Literally everything in this RFC, I know how to do.”
“Well, we don’t have anyone else.”
And that’s the damn tragedy. I can count on one hand the number of people I know with a homelab who are doing anything other than storing media. But you try telling people that they should know how to administer Linux before they know how to administer a K8s cluster, and they look at you like you’re an idiot.
The old-school sysadmins who know technology well are still around, but there are increasingly fewer of them, while demand skyrockets as our species gives computers an increasing number of responsibilities.
There is tremendous demand for technology that works well and works reliably. Sure, setting up a database running on an EC2 instance is easy. But do you know all of the settings to make the DB safe to access? Do you maintain it well, patch it, replicate it, etc.? This can all be done by one of the old-school sysadmins. But they are hard to find, and not easy to replace. It's hard to judge from the outside, even if you are an expert in the field.
So when the job market doesn't have enough sysadmins/devops engineers available, the cloud offers a good replacement. Even if you, as an individual company, can solve it by offering more money and having a tougher selection process, this doesn't scale across the entire field, because at that point the total number of available experts becomes the limit.
Aurora is definitely expensive, but there are cheaper alternatives to it. Full disclosure: I'm employed by one of these alternative vendors (Neon). You don't have to use it, but many people do, and it makes their life easier. The market is expected to grow a lot. Clouds seem to be one of the ways our industry is standardizing.
I’m not even a sysadmin, I just learned how to do stuff in Gentoo in the early ‘00s. Undoubtedly there are graybeards who will laugh at the ease of tooling that was available to me.
> But do you know all of the settings to make the db safe to access? Do you maintain it well, patch it, replicate it, etc?
Yes, but to be fair, I’m a DBRE (and SRE before that). I’m not advocating that someone without fairly deep knowledge attempt to do this in prod at a company of decent size. But your tiny startup? Absolutely; chuck a default install of Postgres or MySQL onto Debian, and optionally tune 2–3 settings (shared_buffers, effective_cache_size, and random_page_cost for Postgres; innodb_buffer_pool_* and sync_array_size for MySQL – the latter isn’t necessary until you have high concurrency, but it also can’t be changed without a restart, so may as well). Pick any major backup solution for your DB (Barman for Postgres, XtraBackup for MySQL, etc.), and TEST YOUR BACKUPS. That’s about it. Apply any security patches (or use unattended-upgrades, just be careful) as they’re released, and don’t do anything outside of your distro’s package management. You’ll be fine.
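If it helps, this is roughly what I mean by "tune 2–3 settings", as a sketch that derives them from the box's RAM. The fractions are the usual rules of thumb, not gospel, and the exact numbers matter far less than having backups you've actually tested:

  #!/usr/bin/env python3
  """Sketch: derive the handful of Postgres settings mentioned above from the
  machine's RAM (Linux only). Rules of thumb, not hard requirements."""

  def mem_total_mib() -> int:
      # MemTotal in /proc/meminfo is reported in KiB.
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith("MemTotal:"):
                  return int(line.split()[1]) // 1024
      raise RuntimeError("MemTotal not found")

  def suggested_settings() -> dict:
      ram_mib = mem_total_mib()
      return {
          # ~25% of RAM is the common starting point for shared_buffers.
          "shared_buffers": f"{ram_mib // 4}MB",
          # effective_cache_size is a planner hint, not an allocation; ~75% of RAM.
          "effective_cache_size": f"{ram_mib * 3 // 4}MB",
          # On SSDs, random reads are nearly as cheap as sequential ones.
          "random_page_cost": "1.1",
      }

  if __name__ == "__main__":
      # Paste the output into postgresql.conf (or apply via ALTER SYSTEM SET).
      for name, value in suggested_settings().items():
          print(f"{name} = {value}")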
Re: Neon, I’ve not used it, but I’ve read your docs extensively. It’s the most interesting Postgres-aaS product I’ve seen, alongside postgres.ai, but you’re (I think) targeting slightly different audiences. I wish you luck!
Also, a lot of the passionate security people, such as myself, moved on to other fields, as it has just become bullshit artists sucking on the vendors' teat and filling out risk matrix sheets, with no accountability when their risk assessments invariably turn out to be wrong.
In the past, old versions of Windows were often considered superior because they stopped changing and just kept working. Today, that strategy is breaking down because attackers have a lot more technology available to them: a huge database of exploits, faster computers, IoT botnets, and so on. I suspect we're going to see a shift in the type of operating system hospitals run. It might be Linux or a more hardened version of Windows. Either way, the OS vendor should provide all security infrastructure, not a third party like CrowdStrike, IMHO.
> I suspect we're going to see a shift in the type of operating system hospitals run. It might be Linux or a more hardened version of Windows.
Why? "Hardening" the OS is exactly what CrowdStrike sells, and exactly what bricked the machines.
Centralization is the root cause here. There should be no by-design way for this to happen. That also rules out Microsoft's auto-updates. Only the IT department should be able to brick the hospital's machines.
Hardening is absolutely not what CrowdStrike sells. They essentially sell OS monitoring and anomaly detection. Hardening involves minimizing the attack surface, usually by minimizing the number of services running and limiting the ability to modify the OS.
Nothing wrong with that. Windows XP x64 supports up to 128 GB of physical RAM; it could be 5 years until that is available on laptops. Windows 7 Pro supports up to 192 GB of RAM. Now, if you were to ask me what you would run on those systems with maxed-out RAM, I wouldn't know. I also don't think the Excel version that runs on those versions of Windows allows partially filled cells for Gantt charts.
>Most of it runs on 6 to 10 year old unpatched versions of Windows…
Well, that's a pretty big problem. I don't know how we ended up in a situation where everybody is okay with the most important software being the most insecure, but the money needed to keep critical infra totally secure is clearly less than the money (and lives!) lost when the infra crashes.
Well, you can use stupid broken software with any OS, not just Windows. Isn't CrowdStrike Falcon available on Linux? Is there any reason why they couldn't have introduced a similar bug, with similar consequences, there?
None. There are a bunch of folks here who clearly haven't spent a day in enterprise IT proclaiming Linux would've saved the day. 30 seconds of research would've led them to discover CrowdStrike also runs on Linux and has caused similar problems on Linux in the past.
It's even better when you get told about the magical superiority of Apple for that...
... Except Apple pretty much pushes you to run such tools just to get reasonable management, let alone things like real-time integrity monitoring of important files (CrowdStrike at $DAYJOB[-1] is how security knew to ask whether it was me or something else that edited the PAM config for sudo on a corporate Mac).
Enterprise Mac always follows the same pattern: users proclaim its superiority while it's off the radar, then it gets McAfee, Carbon Black, Airlock, and a bunch of other garbage tooling installed and runs as poorly as enterprise Windows.
The best corporate dev platform at the moment is WSL2 - most of the activity inside the WSL2 VM isn't monitored by the Windows tooling, so performance is fast. Eventually security will start to mandate agents inside the WSL2 instance, but at the moment most orgs don't.
> Why would Windows systems be anywhere near critical infra?
This is just a guess, but maybe the client machines are Windows. So maybe there are servers connected to phone lines or medical equipment, but the doctors and EMS are looking at the data on Windows machines.
No. The problem isn't expertise; it's CIOs who started their careers in the 1990s and haven't kept up with the times. I had to explain why we wanted PostgreSQL instead of MS SQL Server. I shouldn't have to have that conversation with an executive who should theoretically be a highly experienced expert. We also have CIOs who have MBAs but no actual background in software. (I happen to have an MBA, but I also have 15+ years of development experience.) My point is that CIOs generally know "business" and they know how to listen to pitches from "Enterprise" software companies, but they don't actually have real-world experience using the stuff they're forcing upon the org.
I recently did a project with a company that wanted to move their app to Azure from AWS — not for any good technical reason but just because “we already use Microsoft everywhere else.”
Completely stupid. S3 and Azure Blob don’t work the same way. MCS and AWS SES also don’t work the same way — but we made the switch not even for reasons of money, but because some Microsoft salesman convinced the CIO that their solution was better. Similar to why many Jira orgs force Bitbucket on developers — they listen to vendors rather than the people that have to use this stuff.
> I had to explain why we wanted PostgreSQL instead of MS SQL server.
Tbf, you are giving up a clustering index in that trade. May or may not matter for your workload, but it’s a remarkably different storage strategy that can result in massive performance differences. But also, you could have the same by shifting to MySQL, sooooo…
That’s so infuriating. But, while the people in your story sound dumb, they still sound way more technically literate than 95% of society. Azure is blue, AWS is followed by OME.
Teach a 60-year-old industrial powertrain salesman to use Linux and to redevelop their 20-year-old business software for a different platform.
Also explain why it’s worth spending food, house, and truck money on it.
Finally, local IT companies are often incompetent. You get entire towns' worth of government and business managed by a handful of complacent, incompetent local IT companies. This is a ridiculously common scenario. It totally sucks, and it's just how it is.
Windows servers are “niche” compared to Linux servers. Command line knowledge is not “uncommon expertise,” it’s imo the bare minimum for working in tech.
I’m not wildly opinionated here, I should clarify. I’d love a more Linux-y world. I’m just saying that a lot of small-medium towns, and small-medium businesses are really just getting by with what they know. And really, Windows can be fine. Usually, however, you get people who don’t understand tech, who can barely use a Windows PC, nevermind Linux, and don’t really have the budget to rebuild their entire tech ecosystem or the knowledge to inform that decision. It sucks, but it’s how it is.
Also, OpenOffice blows chunks. Business users use Windows. M365 is easy to get going, email is relatively hands-off, deliverability is abstracted. Also, a LOT of business software is Windows-exclusive. And that also blows chunks.
I would LOVE a more open source, security minded, bespoke world! It’s just not the way it is right now.
> Why would Windows systems be anywhere near critical infra?
Why would computers be anywhere near critical infra? This sounds like something that should fail safe: the control system goes down but the thing keeps running. If power goes down, hospitals have generator backups; it seems weird that computers would not be in the same situation.
Why would Windows systems be anywhere near critical infra?
Heart attacks and 911 are not things you build with Windows-based systems.
We understood this 25 years ago.