Up to 1,500 businesses affected by ransomware attack, Kaseya CEO says (reuters.com)
307 points by babyblueblanket on July 6, 2021 | 301 comments


It's interesting to me that the discussion around ransomware attacks on HN is full of victim blaming.

When stores are forced to close in SF because of rampant theft, nobody suggests that Walgreens or Target should hire armed guards.

If a mall were to be bombed, nobody would suggest that malls should just be built to resist bombing attacks.

The entire point of a government is to provide security and protect the rights of citizens. It is the government's job to solve / prevent / deter crime. If we are willing to put the burden of security on the individual, then why would we organize into states at all?

I think there could be "IT security codes" just as there are "building codes" to enforce good security practices. But "survive impact from a 747" is not part of our building codes, and similarly "be resilient to targeted, state-sponsored cyberwarfare" should not be the responsibility of the individual.


If a bridge collapses, we don't get upset that the police weren't forming ranks by the bridge to keep the water away. Software is more like bridges than stores. It's supporting infrastructure that needs to be able to withstand reasonably anticipated forces of nature.

We expect buildings to be able to resist termite damage by taking reasonable means to block them. We should also expect software to resist self-propagating worms and other attacks. If you build a building in a tough neighborhood (or a warzone), that building has to have security and stability features that match the demands placed on it by its environment. The Internet is basically a termite-infested warzone.

We know that threats exist; we have things like OWASP and other sources of evolving best practices to prevent common entry points for attacks. We have to expect software and networks to do better, just as we expect governments to find and stop the attackers.


There's a key difference: if you build a building to resist termite damage, the termites don't retreat, plan their next attack, and come back with drills and wood saws to try again. Hackers are better modeled as intelligent adversaries than as forces of nature, because as attackers, they actively improve their techniques as the defenders do.

That means they won't be stopped for long by static infrastructure. And in the same way, "best practices" are a moving target, so they'll always be applied unevenly across companies at any given point in time.

In fact, the more economically damaging the hack, the truer this is: the biggest ransoms and the greatest national security risks are mostly caused by actors that employ dozens or hundreds of motivated professionals to find gaps in an organization's infrastructure. And that means the "force of nature" model is especially inaccurate when we weigh incidents by economic impact (which arguably we ought to do).

We know exactly one way of blocking intelligent, motivated adversaries from getting what they want at our expense. And that's to have at least equally motivated, at least equally intelligent folks on the other side who are continually trying to stop them. And that doesn't sound entirely unlike a fairly reasonable line item in a national defense budget.


> It's supporting infrastructure that needs to be able to withstand reasonably anticipated forces of nature.

Hackers are not a force of nature, they are criminals. This is absolutely no different from someone picking the lock on your front door and stealing everything you own. Even if you forgot to lock your door, or failed to install steel bars over your windows, it's still not your fault if your house is broken into.


If I hire you to secure my house and you remove my front door, you are liable.

Kaseya is a security services company, they’re the ones securing the home, they removed the front door.


Most people (at least, that I know) do not hire private security teams to police their homes.

And the ones that do pay companies for security typically stop at surveillance (cameras, motion detectors, door detectors, etc). These only help to deter intrusion with the threat of detection (the intruder's recognizable face on camera).

Most private buildings are not hardened fortresses capable of withstanding state-of-the-art attacks, and I personally don't think it's reasonable to expect them to be.


Sure, people don't, or at least the kind of people I associate with don't: companies do though.

This was a company hiring a security firm.


Sure, but banks have vaults. There's a continuum of what is reasonable against known risks. No one would accept "don't blame the victim" from a bank that had no security at all. We know they are targets.

Unfortunately, enforcement online is an international sovereignty issue, so everyone is a target due to high reward and low risk. Until changes are made we must be responsible for our own security. We can still blame attackers at the same time.


The internet is global, freely accessible WiFi covers every city on the planet, and sub-$50 single-board computers that use so little power they can run off small solar panels are ubiquitous. Hackers are absolutely to be expected like a force of nature, and to think they aren't is pure fantasy.

If you are making millions of dollars but can't manage first year student level security or anything approaching best practices then you get what's coming to you.


A good analogy might be germs. It's probably not reasonable to expect most businesses to have a plan to handle a global pandemic, or to vaccinate the public against seasonal epidemics. But it's probably reasonable to expect most businesses to be aware of germs and take appropriate measures to handle and protect against them.

Hospitals, being very sensitive to germs, should have strong sanitation protocols. Food processing, likewise. The government should regulate this.

A factory making cars, maybe less regulation is required, though a general baseline prohibition on unsanitary working environments makes sense.

I'd say in the current threat environment, hacking/phishing attempts are closely analogous to the baseline level of attacks that our immune systems are subjected to. Countries that harbor hackers could be analogized to dumping effluent into a river up-stream of a city; it's probably the government's job to clean that up. But also, if the river is unsanitary, in the meantime it's reasonable to be critical of companies that obliviously use it for rinsing vegetables.

Under this analogy, it's both reasonable to expect companies to be aware of germs and take precautions against them, since they are a fact of the environment, and also to want the government to take the lead on cleaning up egregious sources of germs, since that's not something any individual actor could do on their own.


It is better to compare digital crime to something more similar: physical crime.

Unlike germs, these attacks are carried out deliberately by people. Not instinctively by some animal or natural force.


Ok, and how would you apply your comparison to the conversation we're having here?


The core difference in the comparisons is that one treats the issue as a natural force that simply exists in the world and acts upon society externally. The other treats the issue as a human aspect of society that acts from within.

Someone accused of "hacking" (Ransomware, spam, stolen credit cards, etc) may be brought to court to explain themselves, bring in (or implicate) their clients, present evidence, be judged, etc. That is the best process we have for dealing with crime and is why comparing hacking to germs is sidestepping an important part of the discussion.

Edit: To answer your actual question, I would say a more apt comparison would be basic breaking-and-entering robbery. In the physical world it doesn't make sense for every building to have 2ft-thick concrete walls, blast doors, iron bars, and complex locks that can defeat the most advanced breaking-and-entering techniques, because most robbers will not have access to those techniques, and the ones that do will either be interested in other targets or deterred by the systems in place that prevent them from bringing these techniques to bear against some random gas station cash register. The problem, as I see it through this analogy, is that the robbers (or "hackers" here) are empowered to be much less discriminating about their targets. To stay within the analogy: on a technical level, the tools one would use to steal from a bank vault work just as easily for stealing from a gas station cash register.


I see what you're getting at, thanks. The germ analogy is perhaps fuzzy if it seems to advocate being fatalistic about solving the crime, as if we should just accept it as a fact of life. That's not what I intended.

> Someone accused of "hacking" (Ransomware, spam, stolen credit cards, etc) may be brought to court to explain themselves, bring in (or implicate) their clients, present evidence, be judged, etc. That is the best process we have for dealing with crime and is why comparing hacking to germs is sidestepping an important part of the discussion.

Agree with this, we should definitely prosecute crime where possible.

Unfortunately it seems to me that a lot of cybercrime is either 1) state sponsored, or 2) state sanctioned (and in either case originating in jurisdictions far from our reach), and there's often no way to bring them to court. Perhaps we could argue for threatening war against China/Russia over failure to prosecute hacking, but that doesn't seem very palatable to me (and of course, with the work the NSA does, we should be careful about holding others to standards we wouldn't want ourselves to be held to; that's probably its own conversation).

I was not really considering how we should treat the criminals, and was trying to make a case for something of a middle road on how we should think about the blameworthiness of companies that are victims of hacking. On one hand, just blaming the companies doesn't seem reasonable, but on the other, saying it's the government's responsibility to prosecute these criminals while giving the companies a pass on liabilities is a sub-optimal position as well, especially in a market economy where price signals are the default way of coordinating. (From my perspective it seems that currently under the law, we're much closer to the latter case, where companies get hacked and suffer very little liability in consequence, e.g. Equifax, Yahoo, etc.)

Recasting what I was trying to get at in your preferred crime/real-world frame, I think that most companies are doing something like being a bank, but keeping cash in the front-of-house instead of in a vault, or leaving the back door open, and then making the customer liable for the loss when they get robbed. I think most companies won't take security seriously unless there's an actual financial penalty for failing to do so. But some companies are getting popped by nation-states with huge engineering resources; in those cases I don't think it's necessarily reasonable to punish companies IF they were doing a good job. (I.e. if they did have a vault, and someone came in with a tank and leveled the building.)

At the same time, I also think that it's the government's job (carrying on the analogy) to 1) train banks on appropriate security measures, 2) invest in new vault technology, and 3) try to capture/reduce the bank robber bandits. But where this analogy breaks down is that these days we're extremely good at catching bank robbers, while it's unclear whether there's a mechanism by which we could arrest most hackers, and that's why I'm pushing for a solution which to some extent deals with the world as it is rather than solving the problem at its source as we would advocate for with most localized crime.


> It's interesting to me that the discussion around ransomware attacks on HN is full of victim blaming.

It is no longer interesting that there's a pattern of comments on HN that attacks some nebulous aspect of other comments on the site and/or article - such as this one. Neither is it interesting that these comments generally try to use emotionally manipulative language (like "victim blaming") and attempt to shame other HN users in place of (or occasionally in addition to) sound logic.

> victim blaming

There's more than one victim here - users, consumers, and other people who use the services of these organizations are also victims. These companies that were compromised had a responsibility to protect their users' data and continue to provide them services - that they failed to uphold due to their own lax IT infrastructure.

> The entire point of a government is to provide security and protect the rights of citizens.

Even given this model, the government still is not responsible for directly administering the IT systems of companies. The companies are responsible for that, and the government penalizes them when they fail.

We're still not under that system (I'm still waiting for a law that penalizes companies for leaking, sharing, or losing user data), but my argument holds anyway.


> It's interesting to me that the discussion around ransomware attacks on HN is full of victim blaming.

This is HN. It's filled with people whose job is building secure systems, or at the very least are aware of best practices to prevent these attacks. Of course you're going to read that they should have done this or that.

> When stores are forced to close in SF because of rampant theft, nobody suggests that Walgreens or Target should hire armed guards. If a mall were to be bombed, nobody would suggest that malls should just be built to resist bombing attacks.

Have you tried asking that in different places?


> But "survive impact from a 747" is not part of our building codes

It basically is for skyscrapers now.

https://global.ctbuh.org/resources/papers/download/1017-evol...


100% with you. Posts like these generally put blinders on the HN crowd, since people who typically post here are sympathetic to software developers. The same software developers who are making money hand over fist. The same software developers who refuse to pay reasonable amounts of money for zero-days.

These hackers are fulfilling a market inefficiency whether users here would like to acknowledge that or not.

It's not the mean hackers' or bitcoin's fault. The blame should be placed squarely at the doorstep of all the brilliant engineers who are responsible for the creation of the system architecture but for whatever reason are nowhere to be found when it starts to degrade.

This is a major issue with software development that is simply not convenient to discuss, because the incentives to leapfrog from job to job massively outweigh the benefits of staying on board for the years the job actually requires.

And yes, people will tell you, especially on here, that airtight code is a pipe dream. And maybe so, but the number of severe deficiencies in code bases that millions of people rely on every day is simply unacceptable. When your earnings reports run upwards of hundreds of millions, it's really hard to play stupid.


This sounds like you're laying the blame at the feet of developers. Unless we're talking about FOSS, most devs don't get to pick what to write; only how to do it, and then in a very limited way on artificially constrained (read: unreasonable) timescales.

Start holding companies responsible for their shitty priorities and then things may change. Until then, this is doing the equivalent of yelling at retail workers for company policy. They are not the responsible party here. That responsibility starts at the C-suite and filters down.


Currently the blame should be on companies and people from the top, as the developers don't really have much choice (without unionizing or whatever). OTOH, it doesn't have to be this way.

As an example, electricians are licensed here (I think), and it is against the law to pressure them to work faster.

Architects and building engineers are licensed (requiring proper education plus practice), and they are required to uphold certain standards. They give a stamp of approval, are liable if at fault (insurance is compulsory), and can lose their license if there are problems with their designs, regardless of any pressure from investors.


This is something I think about a lot when I glance over the constant daily penetration scans on my small self-hosted website's logs.

We would obviously care and do something about people streaming (physically) through neighborhoods to test every door/window/mailbox on every building.

For some reason, when it comes to the metaphorical "buildings" of our digital spaces, the general consensus seems to be a half-sarcastic: "If you can't install and maintain impenetrable state-of-the-art locks on all your stuff, you had better just give up and move into the Facebook highrise."


> We would obviously care and do something about people streaming (physically) through neighborhoods to test every door

Not obviously. Burglary/theft clearance rates are very low (in most countries). At least in developed countries, violent crimes have the highest priority and small property crimes are almost ignored. I doubt the police will do anything meaningful if you report a person going door to door and checking whether they are open.


The disconnect isn't that we require people to have state of the art impenetrable locks, we are requiring them merely to lock the door when they are out.

These standards aren't difficult to implement or put in place. For example, if we look at the PCI-DSS standards, some of them include:

1 - Changing default passwords

2 - Having a firewall

3 - Encrypting PCI information at rest

4 - Using encrypted communication channels for PCI (https).

These are just some of the standards; none of them is very hard, and all of them are trivial to implement.
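For a sense of how small the lift is, here's a minimal sketch of item 3 (encryption at rest) in Python, assuming the third-party `cryptography` package; real deployments hinge on key management, which is elided here:

    from cryptography.fernet import Fernet

    # Generate a key once; keeping it somewhere safer than the database
    # it protects (a secrets manager, an HSM) is the actual hard part.
    key = Fernet.generate_key()
    f = Fernet(key)

    # Encrypt cardholder data before it is ever written to disk.
    token = f.encrypt(b"PAN=4111111111111111")

    # Decrypt only at the point of use.
    plaintext = f.decrypt(token)
    assert plaintext == b"PAN=4111111111111111"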

So sure it's a bad thing if you get robbed while out on errands, but you're going to get a whole lot less sympathy if it turns out you left the front door open with a sign that said, "I'm not home right now."

EDIT:

To be clear I am not talking about SMB mom and pop shops necessarily in this comment, I am talking about the massive Fortune 1000 companies that are getting hit with this over and over again.


Trivial to an infosec professional, sure, but what about the artists, the wordpress cpanel hosters like me who can rig up a site using html/css but don't dare to host anything more dynamic, knowing full well there'll be security holes?

There's a huge gap in technical skill between the front-end-ish "can get an interactive site online" perspective and a back-end professional.


> The disconnect isn't that we require people to have state of the art impenetrable locks, we are requiring them merely to lock the door when they are out.

This is missing the point. My doors are locked and (as far as I can tell) the locks have not been defeated.

My problem is with the constant attempts to defeat the locks on my door, and with people like you telling me things like "you're going to get a whole lot less sympathy if it turns out you left the front door open with a sign that said, 'I'm not home right now'", when what I'm complaining about is that robbers are banging on my locked door while I am behind it watching them on the security camera.


Your neighborhood isn't globally accessible in milliseconds with near complete anonymity. To expect that there won't be attacks is fanciful. To be operating in the 7 figure revenue range without even looking up best practices is negligence.


That is exactly my point.

We have systems in place to deal with people going door-to-door trying to open the windows on each house down my street. The police are eager to fly through red lights and get here as fast as possible to fight crime. (Equity in policing is a different discussion but the point here is that the system exists)

But for the electronic equivalent, we have no system.


> there could be "IT security codes" just as there are "building codes"

The problem is that the threat models for IT and for buildings are very different. Imagine that major buildings were frequent targets of arsonists who could set fires remotely. That would mean buildings need to defend against all possible arsonists, from random amateurs to people comparable to special forces.


Virtually all of the "cyber-attacks" in the past few months have been of the "someone forgot to update Exchange this decade" or "someone left the default username and password configured" variety, not the "newly discovered OS vulnerability results in a drive-by attack" variety. That sort of raw incompetence is already addressed by the many existing standards.


People in society do ask businesses to secure their buildings appropriately. In small towns, that might mean a locked front door. In NYC, that means a metal gate that pulls down to block the storefront.


I don't see how effective government protection of the internet could coexist with arbitrary connections across hostile national borders with no extradition or law-enforcement cooperation. And a lot of people really appreciate the latter, that the internet is not organized into states, and would see it as a great loss if it were.


> I think there could be "IT security codes" just as there are "building codes" to enforce good security practices. But "survive impact from a 747" is not part of our building codes, and similarly "be resilient to targeted, state-sponsored cyberwarfare" should not be the responsibility of the individual.

It's kind of a quandary. "Allow umpteen third parties to update their crap into your system" really is the current "security standard". And it's a standard that's gone along with the entirety of outsourcing as an approach to cost-effectiveness. It's hard to be sympathetic to the organizations that have lived and died by this. On the other hand, you're right. One can't do this company by company; one needs standards.

The question is whether the same companies that are now suffering would be complaining tomorrow if actual standards were imposed.


Thousands of companies have been hit by ransomware this year. If we were to accept your analogy that this is equivalent to thousands of mall bombings, I’d say making malls bomb safe would make a lot of sense.



> It is the government's job to solve / prevent / deter crime.

Network Border Protection Agency, NBPA. A nation level firewall, much like China’s. Plus KYC rules for renting compute inside the country.


Exactly. And if the burden is to be on the individual or company, which in my opinion it should be, then they should be able to take actions to protect themselves.


I've been saying this for a long time.

I'm sure the computer security industry wouldn't like this.


Trying to prevent crime in another country is typically not within a government's mandate.

> victim blaming

That's a frame that carries a negative connotation. Why? Shouldn't builders construct houses that are safe enough? Or are you telling me that the government should prosecute the people responsible for the hacked systems?


Yes the government should prosecute people responsible for building insecure systems.

Just like an electrician is liable if your house burns down and their wiring wasn’t up to code [and caused the fire].

We need similar laws/codes for software. It’s time.


I think the problem here is with the definition of “secure system”. What is “secure enough”? Considering we’re talking about groups that have the resources to buy 0-day exploits, if they want to get in, they eventually will.

Sticking with your analogy, we could probably define a set of standards for baseline IT security for all IT systems…but it probably wouldn’t be very useful. Systems vary so wildly in complexity and scale that coming up with the equivalent of a “code” that fits most systems like we have with electrical installations is impossible.


This is much more akin to a criminal breaking into your house, ripping into your walls, and shorting the wires to cause a fire.


> Shouldn't builders construct houses that are safe enough?

Yes, which is why I mentioned building codes. I suppose "safe enough" is where the disagreement is.

Should you be prosecuted if a thief smashes your window and steals something you borrowed from a friend?

How incredibly irresponsible of you, to have a window in your home! When will we take security seriously? It is your fault that you were a victim of a crime and you should go to jail for it. /s


How much effort do you expect from the police if you gave keys to your house to everyone who ever visited? Or if you taught your butler to follow orders as long as they are prefaced with "dahfizz said ..."?

That is the current level of software security. That's why I have a joyful smile anytime I read of another hack, because maybe people will start caring about security.


> Yes, which is why I mentioned building codes.

Then why use "victim blaming"?


"get doors with locks in them" is better metaphor for most cases.

There is no reason to blame corporate victims if they show that they have locked things up properly and follow good practices.

Governments should also be blamed, of course. Promoting infrastructure with backdoors and weaknesses because of "think of the children and terrorists" rhetoric is not helpful.


The wild thing is that the ransomware operations are making so much money that they can afford to buy multi-million dollar zero day vulns that at one point were only available to nation states or fortune 500 companies. Every successful round of extortion just gives them more ammunition to purchase more of them, hire more engineers, etc.

This kind of illicit capital flow totally makes a mockery of AML regulations. All the rules that were created after 9/11 are out the window - this time it's money to pay for zero days, but it's not a huge leap for this kind of illicit capital flow to end up paying for a huge terror op, paramilitary coup, etc. We'll be reading the next blue ribbon commission's retrospective findings for some horrific event in a few years and it'll be obvious what we failed at.


AML works against the small time criminals but when the sums get large they have access to much larger infrastructure and good old creative accounting. By infrastructure, I mean states.

That said, every once in a while a larger fish is caught. Right now a huge topic in Turkey is a local businessman who laundered about $1B for a US Mormon sect that stole the money from the US by faking business activity and receiving subsidies.

In this case, the Americans are in prison and the Turkish guy is in Austria, awaiting extradition.

Shortly before things got sour, the guy had access to the highest Turkish officials and was the darling of the media.

On the US side of the things, apparently the criminals were partnering with a high ranking CIA official to pull this off.

Here is a video on the topic: https://youtu.be/BPZIX5oBrUc

It's already out of date, as more money and connections have been revealed since then, but if you Google the names, more juicy stuff comes out. Sezgin Baran Korkmaz is the name of the Turkish guy who allegedly laundered their money, now under arrest in Austria. Erdogan scrambled to remove his photos with the guy from the internet.

It has been revealed that they bought old Turkish companies that were in financial trouble and used these to move the money.

Why Turkey? Because Turkey is in economic turmoil, and to motivate people to bring money into the country they passed a law so the state doesn't ask about the origin of the money, and politicians facilitate the bureaucratic process (allegedly for a substantial commission).

Pretty straightforward laundromat.

Once the money is in the Turkish system, they have access to EU, USA and pretty much everywhere because according to the paperwork the money is coming from legit companies, some a century old.

I bet you Turkey is not the only rogue agent here.


It's already obvious where we failed. Companies that don't care about sales won't sell, companies that don't care about R&D won't stay up to date, and companies that address IT liability by getting a bunch of forms filled out that don't have anything to do with information security won't be secure.


"...and companies that address IT liability by getting a bunch of forms filled out that don't have anything to do with information security won't be secure..."

...but won't see any particular repercussions until they get hit with a few million dollars ransom and some short-term bad PR.


> and companies that address IT liability by getting a bunch of forms filled out that don't have anything to do with information security won't be secure.

So glad to not be at that job anymore; this is 100% the approach my previous employer took (who had 99 of the Fortune 100 companies as customers).


This is a significantly false generalization about the TTPs of ransomware operators.

The vast majority of these attacks don't use 0-days. They work via malformed IAM policies, social engineering/phishing, and poor asset registries and cloud visibility.

Kaseya being a 0-day is an outlier in many ways.


I thought this vulnerability had already been reported to Kaseya (privately), and they were working on it. Does that still qualify as a zero-day?


It is an outlier today.

Do you think it will be an outlier in a year? In two years?

If a zero-day can gain you millions, tens of millions, maybe a hundred million dollars? Do you think we can keep operating the way we have been up until now?


> Do you think it will be an outlier in a year? In two years?

I think it will remain an outlier until that becomes the easier/more economical way for them to do business. Right now there is so much low-hanging fruit that I'd see no reason to invest in a bucket truck to get at the stuff further up the tree. I'd say your prediction is at least a few decades away.


I don't think you work in infosec?


Just curious, do we have recorded cases where a paramilitary coup or terror operation happened through funds obtained by hacking? Also any evidence that post-9/11 KYC measures have prevented anything bad from happening would be interesting.


This is kind of a political question, and another thread got touchy about politics.

But with Manafort's pardon, we might not see Deutsche Bank's full culpability in failing to KYC. So, if Manafort's pardon was inevitable, then would you consider the big players in laundering to be sufficiently protected by a political movement/coup inside a major party?


> The wild thing is that the ransomware operations are making so much money that they can afford to buy multi-million dollar zero day vulns that at one point were only available to nation states or fortune 500 companies.

Is it really a wild thing that if you permit a safe mechanism of extortion, a sprawling economy quickly develops around it? I dunno.


This is really interesting.

To visualize the problem, I came up with this: a honeypot.

In this sense, Kaseya or any other Managed Service Provider is a kind of honeypot: let 100+ companies gather around and share the same weak point, then exploit it.

This comparison did not occur to me for Microsoft and their zero-day exploits, because attacking Microsoft did not by itself spread the damage. You still had to find your victims.


I'd much rather keep FSU hackers in Southeast Asian hookers and blow than leave those vulnerabilities unpatched for Big Brother to pwn all the dissidents.

We all gotta do our part.


My favored analogy is piracy and privateering in the 17th and 18th centuries. Sure, the spoils of one attack enable the next.

An end to privateering comes with powerful institutional enforcement via the suppression of movement and committed retribution (an eventual monopoly on the exercise of force). I can only imagine what the analogous effort looks like - it's arguably harder than with traditional privateers operating on the open seas, since the actors in this case are safe on sovereign soil.

Perhaps the analogy to the open seas is the Internet itself. If the solution to privateering was denying bad actors the freedom to operate, the same applied solution on the Internet would be dramatic restriction of who can communicate on it.

Problem is, the supply chain attacks in this analogy are more akin to sailing under false colors - in this case, sailing under false certificates. What do you do when a pirate captures a ship of your fleet, has your flag, your signal flags, and has your latest challenge/response codes? In the age of sail, it would probably mean accepting the loss of the incident, then ruthlessly hunting down the perpetrator directly with the goal of eliminating all actors capable of such sophisticated engagements - basically reducing the talent pool to near zero.

If you aren't allowed in this day and age to address the actor directly, you probably have to deny the host nation itself the freedom of movement until they commit to delivering heads on plates. What does that mean? Cutting Russia off the Internet? Is that even doable these days? You could embargo the Internet for your own country like China tries to do - sad that we're having to consider that. I struggle to imagine other half-way realistic options. Kinetic war and assassination seem imprudent/impractical, to say the least.

I certainly don't think the answer is to "eliminate crypto"; that's the equivalent of thinking that banishing gold coins would have stopped piracy in the age of sail. I also don't think the answer is to demand all companies "do better at security". While everyone needs to do security better, it will always be insufficient. A merchant ship in the age of sail was never expected to have the armaments of a national navy. Their solution was to convoy up and, if lucky, have state actors protect those convoys - a herd defense at worst. The equivalent of "convoying up" in this age would be some sort of massive crowd validation process before updates are released, slowing everything down to an impractical rate. So I struggle to see what's left other than a good offense, as much as I hate to think of what that means for the dream of the open Internet.

The merchants demanded their host nation deliver a safe operating environment, at a pretty steep collective cost.


> I certainly don't think the answer is to "eliminate crypto"; that's the equivalent of thinking that banishing gold coins would have stopped piracy in the age of sail.

You don't need to eliminate crypto, just heavily scrutinize/regulate/license exchanges like any other bank. Generally treat anonymous crypto the same way you'd treat someone who pulls up to a regular bank with a U-Haul full of cash and tries to make a deposit. If a wallet has ever interacted with a mixer (and generally treat mixers the same way you'd treat money launderers), blacklist the wallet. Blacklist any wallet that is linked in any way to a mixer or to wallets involved in extortion. Revoke the license of any bank that accepts funds linked to an unregulated crypto exchange.
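Mechanically, the "blacklist any wallet linked to a mixer" rule is just a reachability computation over the transaction graph. A hypothetical sketch in Python (the wallet names and edges are invented for illustration; real chain analysis typically weights taint by hops and value rather than flagging everything reachable):

    from collections import deque

    def tainted_wallets(transfers, seeds):
        """BFS outward from known-bad wallets (mixers, extortion
        addresses); every wallet reachable via a transfer is flagged."""
        graph = {}
        for src, dst in transfers:
            graph.setdefault(src, set()).add(dst)
            graph.setdefault(dst, set()).add(src)  # taint flows both ways here
        flagged, queue = set(seeds), deque(seeds)
        while queue:
            w = queue.popleft()
            for nxt in graph.get(w, ()):
                if nxt not in flagged:
                    flagged.add(nxt)
                    queue.append(nxt)
        return flagged

    # tainted_wallets([("mixer1", "a"), ("a", "b"), ("c", "d")], {"mixer1"})
    # -> {"mixer1", "a", "b"}; wallets "c" and "d" stay clean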


A ransom payment, while a HUGE incentive, is not the only incentive for these attacks. Like any piracy, there are gains to be had from captured assets and an owner's ransom is only one avenue to monetize those assets.

So you apply a Thor's-hammer solution limited to cooperative countries. Who is going to disproportionately pay that penalty? And what does "just heavily regulate" a decentralized currency even imply when applied globally? Will Russia be cooperative/respectful of this regulation? I'm not sure this would be as effective as you think, but it certainly comes with some downsides for the general population if we imagine how regulating crypto would be done effectively across friendly nations.

I must say that I also think having anonymous currency (cash, crypto, etc) is fundamentally healthy for society on the balance, so I am biased against solutions that would further its knee-capping. It seems more structurally sound/scalable/proportionate to punish the agents rather than the mechanisms, to the degree possible. In a gray-filled and nuanced way, I'm happy that ransomware attacks are still possible (encryption works, anonymous payments work), but I disapprove of their application in this way and believe a penalty should be paid by the actors for what they choose to do with tools that also have legitimate purposes.


> Will Russia be cooperative/respectful of this regulation?

I'd be fine with Russia and any other rogue financial state actors being cut out of the global financial system entirely if they're going to value relatively small-time money laundering over legitimate business. No loss there.

> I must say that I also think having anonymous currency (cash, crypto, etc) is fundamentally healthy for society on the balance, so I am biased against solutions that would further its knee-capping.

I'm honestly not seeing it. Crypto has been around long enough to take a clear-eyed appraisal of its utility, and what I've seen has been primarily speculation (a worthless drag on society), with a smaller but hugely significant role in enabling highly scalable criminal activity (more drag on society), and various other negatives. It has enabled a plethora of scams, shielded bad actors, and for the average person it has created pretty much zero utility.

There could be some good uses! I look at crypto like roads - yeah, it'd be great if we could all just drive anything we like, anywhere we like, at any speed, without regard for anyone else. However, to make roads a safe place for everyone and minimize negative externalities, we have traffic controls, licensing, emissions controls, and safety standards. Right now the crypto world is the equivalent of a small number of people driving around in tanks, occasionally running over a crowd of people, with no highway patrols able to do anything about it.


It should be pretty obvious that as long as ransomware is this profitable, we are absolutely fucked when it comes to computer security. The economic incentives are so completely and utterly skewed, there is no way for society to function.

This is all because of cryptocurrencies. They are the one single factor that enables this economic fiasco. They need to be banned, now, or this will just get worse from here on out.


> It should be pretty obvious that as long as ransomware is this profitable, we are absolutely fucked when it comes to computer security.

Maybe we should turn the tables on the ransomware orgs. I'm sure they're getting big enough that they can't keep tabs on everyone in the org. So why not start offering million dollar prizes for people inside the org to sell out their co-conspirators? I have to imagine that if you're unscrupulous enough to be in the operation, you'd have no problems doing some entrepreneurial activity on the side.


This is aggressively myopic. Banning a cryptocurrency is akin to banning HTTP to prevent an attacker's shell connection.

All cryptocurrencies are just protocols that can send verifiably discrete packets without a central server verifying the discreteness, with some fancy branding on the packet type. It's as if people felt very tribal about POP3 vs. IMAP, the IMAP foundation put out branding, and POP3 was a FOSS project. Protocols in that sense can't really ever be banned. It's like banning a math proof.

Cryptocurrencies play a role, but they are not the single factor that enables ransomware, by any means. For instance, it would be a lot harder to pop a meat-processing plant or a coastal pipeline if they hadn't hooked up IoT to anything and everything OT-related, and if ICS weren't so awful at integrating visibility between IT and OT networks in their plants. Or, for instance, if cyber-insurance companies were forbidden from paying ransoms, then the economic pot would suddenly be dry. And so on...


You don't need to ban a protocol. You're free to play around with protocols all day, nobody cares.

What is absolutely crucial is to cut the link between that protocol and the financial system. And that is very, very simple. There are a few very centralised points where that happens, and those have to follow laws.

They are starting to get cut off already, and these kinds of catastrophes are going to drive the effort to make that happen even quicker.


Are you familiar with DeFi/DEXs? Short of banning and tracking networking and compute at a scale fairly unheard of outside of child-exploitation enforcement, it's significantly more complex than banning Coinbase. Your suggestion, going back to the HTTP simile, is like saying banning Google bans HTTP, or that banning The Pirate Bay bans torrenting.


Very. Absolutely all of it is built on top of a few financial gateways, which are going to be closed.


Banning isn’t enough. They must somehow be destroyed or rendered useless, perhaps through a mass 51% attack or something.


Couldn't someone DDoS the major mining pools?


Isn't one of the main problems with ransomware centralised and locked-down IT administration? I mean, if the computers were not tied together so tightly, the effects would be isolated. Instead there is an admin account encrypting all the machines' HDs remotely.


Yes, but centralization is a symptom of something deeper: incumbent business interests with top-down power structures actively disincentivize the holistic, creative, critical thinking required to mount a meaningful distributed cyber-defense, and incentivize CYA approaches, which leads to weaker centralized solutions. "Well, we spent $10M on defense - put some appliances in a NOC, paid contractors, gave our CTO a bonus - what more could we do?" Ultimately the legal system will accept this as a valid excuse. Whereas, if you take more holistic steps, embrace distributed responsibility and action, incentivize awareness of threats and threat-modeling, and are then sued as liable, a judge could very well say, "But you didn't spend a significant chunk of your revenue to build [out the same kind of solutions I'm used to seeing in cases like this]? Negligence!"

When ignorance becomes so widespread that it is enforced by law, and wise action is actively punished, one cannot really blame rational actors for taking the CYA approach. One can hope that some of them take a principled stand and risk punishment to do more; alternatively, we can expect to collapse and be replaced by a smarter, if more brutal, regime. One way or another, the bleeding always stops.


I think the bigger problem is that you need "holistic, creative, critical thinking" to run an absolutely mundane, workaday Windows domain. Because there are hundreds of thousands, if not millions, of those. Why don't we get the holistic, creative, critical thinkers to work at Microsoft (and the vendors people generally use in concert with Microsoft) to stop shipping this vulnerability-riddled crap?


Well, yes, system administration is, in general, quite broken. It is ironic to me that the selfish weaknesses vendors (hardware and software) build in to give themselves some sort of asymmetric advantage are so often the cause of compromise. It's almost as if, I don't know, simple, symmetrical, fair computing systems are actually more secure by some law of nature.

But yeah. Software needs to become fewer, simpler, easier to understand, verify, and build. Hardware needs to be simpler and easier to understand - and possible to verify. And there needs to be awareness that this isn't some hippy-dippy sentiment of strange neckbeards; rather, it's the only way to get the security we all need. "Every exploit can be turned against its owner" needs to be drilled into engineers, executives, and lawmakers at every level until they hear it in their sleep, and recoil in horror when anyone even suggests knowingly shipping an exploit.


It could perhaps be reasonable for IT departments to ship Windows desktops without all these surveillance and control mechanisms, if normal web browsing and email activities such as clicking links and opening attachments were not so likely to get the enterprise owned.


You need to be creative because the standards are broken. If there were strong, holistic standards, you wouldn't need to be creative.


> "But you didn't spend a significant chunk of your revenue to build [out the same kind of solutions I'm used to seeing in cases like this]? Negligence!"

Is there any evidence that this happens? I feel like there are a lot of these kinds of spooky 21st-century "folk wisdoms" out there, and if you actually trace them back it's like the McDonald's hot coffee case or whatever.


It's a good question. I'm basing this mostly on my impression of recent PR releases, post-intrusion, that make the argument "We spent money on it! It's not our fault it didn't work!" I've noticed that PR releases tend to echo the eventual legal argument. There's at least some evidence it's not a McDonald's hot coffee lawsuit thing (although I really hope I'm wrong).


I agree fully.

This is totally unacceptable and legitimately dangerous. Dialysis machines are hooked up to this trash right now!

We require enforcement, jail, fines, and civil liability for this gross, aggravated negligence.


Yes. It's in large part the security theater that has led to the state of things. Third-party administration and monitoring agents like SolarWinds are an incredible attack vector.

A lot of the "security best practices" just become checklists of what people thought were good ideas 20 years ago, enforced by auditors who only know how to check boxes.


Damn straight, it's even worse than that though.

Blue teams are behind from the start due to the nature of the security landscape. They are further hindered by misguided application of the "move fast and break things" method. You aren't supposed to break your C.I.A. and expose customers and everybody else to huge liabilities.

Security needs to be baked into the infrastructure and IT management practices from the start. This requires enforcement, jail, and civil suits.

Office of Personnel Management, Ashley Madison, Target and countless other retailers, dams and pipelines and water systems, Maersk, LinkedIn, all these supply chain hacks, and schools and hospitals across the country and the world.

This has been going on for decades now, with no accountability at all. It just doesn't seem to be a priority.

What in the hell are we doing? Why do MMORPGs have better security than the hospital??


> security theater

aka sales & marketing.

I wonder if the background of senior leadership is predictive in these sorts of situations. E.g., Equifax had a CISO with a non-technical background at the time of their breach, and Kaseya's leadership is dominated by MBAs and accountants.


Then it just becomes another thing that's gamed. Useless certs are used to cover for a lackluster technical background.


Quarterly password resets, five different "single sign on" services, no admin for devs.

Sometimes I feel like the idea is just to kill productivity.


When I have to work in a "no admin for devs" environment, I just take admin access one way or another.

They kill productivity in exchange for job security.


The lesson here is this: if you make it too inconvenient for people to do their jobs, regardless of why, people will work around you to get things done. Effective security policy must take this into account.


I can't upvote this enough. I have worked IT long enough to see some impressive creativity by users to defeat security policies getting in the way of doing their jobs.

I have argued that the raison d'etre of security policy is to ensure the existence and continuity of an environment in which work can get done. I've been told about the importance of the C.I.A. triad and other things as though they were refutations of my point, often in tones of voice implying a this-not-security|compliance-tech-is-incapable-of-knowing-what-he's-talking-about-and-therefore-can-be-ignored attitude. I counter-argue that C.I.A. et al. are not refutations of my thesis, but in fact support it. If you can't ensure the confidentiality, integrity, and availability of information or systems for yourself or your customers, you do not, and/or will not, have an environment in which work can get done.

So, for the love of getting shit done, stop masturbating with broad and blind application of checklists, and take the time to sit down, really look at what you're trying to do and why, and develop actually useful risk models. And then develop security policies against those risk models. Yes checklists and various standards are useful tools that can help you cover a lot of common stuff, but are not the whole picture.


A bad security policy is an attack on the Availability part of C.I.A. Your security policy definitely shouldn't lead to leaking information or falsifying data, why is it so acceptable to make system availability go to shit?


There's no shortage of privilege escalation bugs in Windows, Mac, and Linux. Hell, if you spend a week with a decent disassembler, even an amateur can probably find a 0-day.


I'm CTO and have an admin account for whatever I need, but… I STILL have an off-domain laptop for emergency or diagnostic or debug use.


According to NIST, quarterly password resets are useless.

If your SSO isn't actually single, use a good password manager.

Admin access for devs should be audited, and devs should understand that now they need some opsec. Like, separate work and personal machines; if not physically, at least use a different account, better yet, a VM.

To say nothing of opening suspicious email / IM attachments.

Remember, devs: you are a potential attack vector, a very efficient one.


What’s the alternative given that most small/medium-sized companies know very little about IT security?

Seems to me like a centralized system is fine, as long as it’s properly designed and implemented. The problem is how a business can know that an IT security system is properly designed and implemented.

The only solution I see is to couple insurance with an IT security system. If you’re certain your system protects against IT threats you should be willing to compensate your customer in case it fails to do so. Otherwise your customer has a very hard time determining whether your IT security system actually works.


> What’s the alternative given that most small/medium-sized companies know very little about IT security?

Removing the "remote" from "remote administration". It's more expensive but probably still not cost-prohibitive -- driving around to client sites and installing updates is not particularly skilled labor. Plus even the worst-case scenarios are far less severe, because you already have a local workforce who can do site visits to manually recover systems locked down by ransomware attacks. Data might get stolen, but at least you have continuity of business.


They could even have a 'reverse shell' type button that the client has to explicitly click to enable temporary remote access.


What if you could somehow rate limit it? e.g. you can access N remote offices per day, but not all of them. Like the time lock on a safe, to prevent these pervasive smash and grabs.
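A hypothetical sketch of what that could look like on the management server (the class and policy here are invented for illustration, not any real RMM feature):

    import time

    class SiteAccessBudget:
        """Permit remote sessions to at most max_sites distinct client
        sites per rolling 24h window, like the time lock on a safe."""
        def __init__(self, max_sites=5, window_secs=86400):
            self.max_sites = max_sites
            self.window_secs = window_secs
            self.opened = {}  # site -> timestamp of last session start

        def request_session(self, site):
            now = time.time()
            # Forget sites whose last session fell out of the window.
            self.opened = {s: t for s, t in self.opened.items()
                           if now - t < self.window_secs}
            if site in self.opened or len(self.opened) < self.max_sites:
                self.opened[site] = now
                return True
            return False  # budget exhausted: a smash-and-grab stalls here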


Yes and no. Not having centralized IT administration is worse: people being able to install their own root-level malware was a dramatic and exciting era that I think everyone is glad to be away from. But IT people are also the worst IT customers: they often think they're too smart to fall for the tricks other employees face, and when they do fall for them, the damage is far worse.

We have narrowed the attack surface of networks drastically, the solution is not to undo that, but to keep narrowing it. There's a lot of room for improvement especially in service accounts, admin accounts, and crucially, more intelligent behavior detection.

Despite Microsoft's best cloud security capabilities, it still doesn't seem to mind if a senior citizen's Outlook.com account is suddenly logged into in Nigeria, and even after "securing the account", it doesn't clear the devices they connected while they were in the account... That's a consumer example, but there's so much room for more intelligent behavior detection, and for it to make it down to base-level products, not expensive add-ons or upgrades. Even the big companies don't do a good job of it on their own systems, much less the systems they sell to other people.

You have to have a certain tier of premium Azure cloud-based subscriptions to get reasonably decent security controls, while if you have a Windows Server-based network, your security options are the same as you had back in 2008.


> Despite Microsoft's best cloud security capabilities, it still doesn't seem to mind if a senior citizen's Outlook.com account is suddenly logged into in Nigeria

Geo-IP services are routinely inaccurate. I'm in the Southern US, and the IP I used to get would tag me as if I were from Quebec City. It was like that for over two years. A friend's house a few neighborhoods over showed up as some small town in Kansas. I could go from my home (network-wise, allegedly in Quebec City) to my cell phone (which showed as a town about 50 miles from my actual location) to Kansas in 10 minutes. If places banned based on these kinds of Geo-IP databases, I'd be banned from most of the internet.


Even then, that's on the same continent. If Microsoft can't tell traffic is traveling across an ocean, what on earth are we doing as an industry?

And there's a difference between "ban everyone reporting outside their city" and "flag unusual behavior and trigger additional protections or checks". Think how your credit card works when you suddenly make a purchase in Las Vegas.

And in my cited example, the logins were reported as malicious to Microsoft, their account panel said the account had been "secured" since then... but Microsoft apparently let an Android phone the attackers linked up in Nigeria remain connected to the account, giving them persistence past password resets.
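The "flag unusual behavior" check can start embarrassingly simple: if two logins imply travel faster than an airliner, step up the checks instead of banning outright. A rough sketch (the 900 km/h threshold is an arbitrary stand-in):

    from math import radians, sin, cos, asin, sqrt

    def impossible_travel(lat1, lon1, t1, lat2, lon2, t2, max_kmh=900):
        """True if the speed implied by two (location, unix-time) login
        events exceeds what an airliner could manage."""
        # Haversine great-circle distance in kilometres.
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        dist_km = 2 * 6371 * asin(sqrt(a))
        hours = max(abs(t2 - t1) / 3600, 1e-9)  # guard divide-by-zero
        return dist_km / hours > max_kmh

Even with sloppy geo-IP data, a US-to-Nigeria jump in an hour clears that bar by a wide margin.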


> Isn't one of the main problems with ransomware centralised and locked-down IT administration?

It seems like it's centralized, locked-down IT combined with security that's mere security theater, while allowing third parties to willy-nilly update their stuff.


Removing centralized IT management comes with other issues, especially rampant email viruses in the org due to poor compliance with update and security policies.


If this were 2003, yes.

Try to get Windows 10 to have an uptime of more than two weeks without a Windows Update and reboot cycle.

Email viruses aren't really a thing like they were back then, either.


Only because the protections for email viruses are already there. Phishing and spearphishing are still popular attack vectors, so we know people would be susceptible to something like ILOVEYOU if it were to get past other defenses.


Those protections are done up-stream at the MTA level and completely passive for the end user.

The "security best practices" are a cargo cult exercise that just lulls organizations into believing they're protected against motivated actors, when instead they're just enforcing a group policy on a good day.


I could be wrong, but the CEO kept mentioning their "Cyberdefense playbook" and how it dictates immediate shutdown of services at any sign of a breach.

Well, didn't their Cyberdefense playbook have anything to say about simple ACLs protecting those internet-facing systems that were vulnerable to SQL injection? I mean, even a very broad ACL allowing only an entire country's geoip block would be better than nothing.
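For the flavor of thing being suggested, a sketch with Python's stdlib ipaddress module (the CIDR ranges are documentation examples standing in for real geoip data, which you'd pull from a provider such as MaxMind):

    from ipaddress import ip_address, ip_network

    # Stand-in ranges for "countries we actually have customers in".
    ALLOWED_NETS = [ip_network("203.0.113.0/24"),
                    ip_network("198.51.100.0/24")]

    def allowed(client_ip: str) -> bool:
        """Gate management-plane requests on a coarse source allowlist."""
        addr = ip_address(client_ip)
        return any(addr in net for net in ALLOWED_NETS)

    # allowed("203.0.113.7") -> True
    # allowed("192.0.2.1")   -> False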


Also, why is it so hard for them to track down ransomware perpetrators? Doesn't the NSA have back doors and gag orders to everything? Are the ransomware people really not using any Google/Microsoft products that can be used to track them down and banish them? Can we not hard fork Bitcoin to make ransomware wallets inactive?


The problem with hacking, and all criminal activity really, used to be “getting your money out”.

Yea. You have money somewhere, but when you go to get it they’ll find you.

Crypto laundering has made that trivial now. So, no, I don’t believe the NSA has back doors in “everything”.

(Nor would I consider the NSA to be unequivocal “good guys” who selflessly help businesses and employees)


It's an interesting point: governments have invested untold billions [1,2,3] into internet surveillance of the entire population, but these crypto teams elude them.

[1] https://en.wikipedia.org/wiki/Utah_Data_Center

[2] https://en.wikipedia.org/wiki/Room_641A

[3] https://en.wikipedia.org/wiki/PRISM_(surveillance_program)


Because the surveillance is nothing but a tool for control and compliance. Solving crimes was never the point. It would not surprise me in the least if we found out in 50 years that these crypto groups were just another government team hoovering up data and getting more off the books funding.


> Also, why is it so hard for them to track down ransomware perpetrators? Doesn't the NSA have back doors and gag orders to everything?

The first thing being hard is evidence that the second thing is not true, and people just like saying it because they enjoy posting the most cynical take they can.


The NSA has actually lost some of its tools (Shadow Brokers) and backdoors (Juniper Networks).


This is bad, but it's going to force all these companies to overhaul their IT systems, period. They can't keep on losing data or paying ransoms eternally, because at that point it cannot just be "the cost of doing business".


Companies in various parts of world deal with extortion as a cost of doing business.

Ransomware companies are companies. They set their fees high but not so high they drive their "customers" out of business, at least not all their customers.

I hope things will change, but the most likely outcome seems like more of the same, which clearly won't change the situation. Meanwhile, the government huffs and puffs about arresting people somewhere.

The whole thing has "fall of the Roman Empire" vibes to it.


That’s very reassuring. I was worried they’d just keep “doing what the other guys do” and not try to invest in info sec as a specific business strategy. However I do wonder if average people won’t ultimately side with the small businesses, their dentist or local mom and pop, over rich young programmers, cyber gangs and their bitcoins.


I’m ignorant as to how these attacks are so successful. Seems like they always start with a phishing email, but how does some malware on one employee’s computer end up encrypting the “source of truth” for the entire company? Sure, some employees have a lot of access, so obtaining the right person’s credentials will get you a lot of the way there; but it seems like categorically preventing this type of attack should be possible with the right internal security approach. Even just the amount of time it takes to encrypt the data confuses me about this. Do the attackers just choose to launch this on a long weekend?


This wasn't a typical "someone clicked a link they shouldn't have" attack.

There was a vulnerability in the RMM server software that allowed remote code execution. The attackers used the RCE to push the ransomware out to all of the endpoints connected to the RMM server.

The attack is still being researched, but it looks like there were two vulnerabilities. The first was an authentication bypass that allowed the attacker to authenticate as if it were an authorised client. That was used to upload the payload. There was also an RCE vulnerability that allowed the attacker to execute the uploaded file. The payload itself modified the SQL database of the RMM software to create a task on the remote endpoints that executed the ransomware.


A lot of the time it's nothing more complex than owning one Windows box, and then moving around the network using regular SMB and Metasploit tactics.


Could a solution be to remove Windows from the box?


If your business doesn't rely on software that only works on Windows, sure. But even companies like Apple use Windows: https://www.businessinsider.com/apple-uses-windows-xp-in-iph...


People with 100% of any OS are likely to be more vulnerable than people with heterogeneous systems.


That'll cost the typical business way more than the cost and headache of ransomware.


> categorically preventing this type of attack should be possible with the right internal security approach

Yeah. But many many small/medium businesses have been left behind in understanding how software works and how to be secure. A lot of that is because big businesses offload the cost of it onto others instead of leading the way like they should.

Where before you'd have humans interacting, which would limit the propagation of bad actors... now a lot of that is automated. So all it takes is one weak link in the chain.


Why don’t small/medium businesses go all in on the cloud? Surely it must be easier for the non-technical SMB owner/manager to use Google Drive than to run their own file servers?


Because that same phishing email will expose those services too. It is not a location issue.


Sure the phishing email could cause a data breach. But this is about ransomware.


Kaseya seems to have prioritized support convenience over safety, i.e., having a backdoor for all their customers. Who could have figured this could happen?


I wonder how long these vulnerabilities have been exploited, with the attackers waiting for specific—possibly political—timing to actually execute the attack and demand ransom. There could be plenty more exploited systems out there, just waiting for the ransom attack to execute when the attacker sees fit.


From the Reuters article: > "Because Voccola's firm was in the process of fixing a vulnerability in the software that was exploited by the hackers when the ransomware attack was executed, some information security professionals have speculated that the hackers might've been monitoring his company's communications from the inside."

They might have rushed to exploit it before it was closed. Plus the long weekend. Or competitive pressure to exploit before someone else gets there first.


> with the attackers waiting for specific—possibly political—timing to actually execute the attack and demand ransom

Your "possibly political" statement is conspiracy theory nonsense. It's entirely unsurprising that they launched the attack at the beginning of a long holiday weekend when there would be fewer eyes monitoring systems and able to pull plugs/remediate in the moment.


> Your "possibly political" statement is conspiracy theory nonsense.

what does this mean exactly? could you (or anyone else who shares this sentiment) elaborate?


To me it means that the main motivator is maximizing financial success (by the perps). That means do it at a time where you get the best return.

On the other hand, if they would try to make an administration look bad, then they would pick other times and targets.

It’s like when people attack old ladies and others known to carry cash: instead of looking first at criminal opportunism, some attribute it to people hating old ladies who go to the ATM.


Simple. Things asserted without evidence can be dismissed without evidence.


right. if the replier above had said "you have no evidence for that claim" I would reply by saying "you are correct, it is however a possibility, surely we can agree on that?" but instead the replier used the term "conspiracy theory nonsense" to characterize my words, which is, to say the very least, a highly loaded term with built-in connotations.

I personally believe that dismissing any sort of theorizing as being "conspiracy theory nonsense" is more harmful than any sort of benign theorizing itself. this line of thinking seems to assert that in reality, nothing ever happens with ulterior motives, people and groups in seats of power wouldn't do anything and everything to maintain said power under any circumstances, everything is just an isolated, random occurrence, geopolitics don't exist, and that looking for patterns and connections is something that only delusional people engage in.

many people have been programmed to react to anything not commonly-accepted in the mainstream "narrative" by immediately labeling it a "conspiracy theory" and then refusing to think any more critically about it. I believe the rapid spread of this way of thinking to be incredibly dangerous to society as a whole. I am not saying that theorizing should be taken as fact, merely that theories shouldn't be dismissed entirely out of hand using a stock phrase. in fact, when discussing such things, or anything really, any sort of kneejerk word/phrase association-type response should give one pause, because these word/phrase associations are learned behavior, and it may be enlightening to determine the source of this learned behavior.


> Your "possibly political" statement is conspiracy theory nonsense.

Back in my day we just called it FUD: Fear, Uncertainty, and Doubt

When did FUD go out of style?


What makes you think this attack was politically motivated?


nothing, especially because I haven't looked into it at all, and there probably is no reported evidence anyway. hence, my comment referring broadly to this wave of attacks, and my use of the qualifying "possibly."

why is everyone (mainstream news, etc.) so quick to refer to any attack originating from a Russian IP address as "Russian" (either implied or directly stated to mean "Russian government")—yet positing that these attacks might have political motivations behind them is a "conspiracy theory"?


Raising the possibility of political timing constitutes conspiracy theory nonsense? That seems extreme given the long history of state-sponsored attacks.


Likely a long time

One look at RDP markets shows that reality.

Computers that you can remote desktop into are listed by location and bandwidth and price. UAS had over a million to choose from, blanketing the globe.


Why do people assume these aren't honeypot operations?

Millions of compromised Windows hosts rented out for $5 each to anybody who shows up at some onion site? Hard to believe. At those prices buckshot ransomware and cryptomining would be profitable.

What's easier to believe is that there are millions of SOCKS5 proxies out there (IoT/router/ancient-android-phone exploits) and a honeypot operation that will gladly spin up a Windows VM and let you pick which SOCKS5 proxy to use as an exit IP. And then observe everything the badguy-wannabe does with that VM.


But the company said, "We don't believe that they were in our network,"

They wouldn’t say that if they weren’t 100% certain right? /s



Wow, that video is exactly how they shouldn't have responded. They assert their procedures worked as intended (they didn't), that only one part of their app was compromised thanks to their "oh-so-great" architecture (didn't matter), and that they were as fast as possible in their response (debatable). They don't do anything to reassure their customers, and they don't take any part of the blame.


> They don't do anything to reassure their customers, and they don't take any part of the blame.

Is there any world where Kaseya isn't dead? This isn't Equifax, who have a monopoly and could just out-live the bad press. Kaseya is an IT security firm in a hyper competitive marketplace selling to unsophisticated clients. Reputation is everything. I imagine they're in "avoid lawsuits so we can return as much capital to ourselves/investors before closing up shop" mode.


I have to say, the PR guy at Kaseya deserves a pat on the back for this headline:

"Kaseya Responds Swiftly to Sophisticated Cyberattack, Mitigating Global Disruption to Customers"


It's clear we need a full security audit of the companies that provide software tools to the companies that provide software tools to IT outsourcing shops. Of course that just leaves things open to the next security breach of companies that provide software tools to the companies that provide software tools to the companies that provide software tools to IT outsourcing shops.


We do security audits for a living.

In a nutshell, here's why things are so screwed up IMHO:

1) Most of these companies have had audits, but they're being done by third-rate or very inexperienced external consultants.

2) The companies limit the scope of the tests. Real hackers don't give a shit about your scope of work, they have no rules, only goals.

3) Even when a test is properly done, exec management looks for silver-bullet product solutions instead of making changes across people/process/technology.

My company solves #1, but we can't do anything about #2 or #3 :-/


Based on my experience on multiple internal Red Teams this is more or less correct.

Add some "funding / IT is a cost center, no value add" language in there as well.


Not to mention some theatre and empire-building.


What audit would have found a zero-day vulnerability?


The entire idea behind modern network security is that zero-days happen regularly. You should design your security controls around this fact: defense in depth, least privilege, etc.


"The attackers were able to exploit zero-day vulnerabilities in the VSA product to bypass authentication and run arbitrary command execution," the Miami-headquartered company noted in the incident analysis. "This allowed the attackers to leverage the standard VSA product functionality to deploy ransomware to endpoints. There is no evidence that Kaseya's VSA codebase has been maliciously modified."

This is very likely not the full story, unless the 0day in VSA was somehow wormable. That "deployment" is doable through overly permissive IAM and everything else that enables privesc.

There are two parts to these vulns. Whatever gets the foothold, and whatever allows privilege escalation. Audits do a great job in catching the misconfigs that allow privesc.

The tragic thing about these attacks is often the blast radius can be contained fairly easily by asking the right questions... If you're someone who has passed these audits, or done these audits, it becomes pretty easy to see how many unforced errors go into these catastrophic attacks.


If https://old.reddit.com/r/msp/comments/ocggbv/crticial_ransom... is correct, a competent web application security review (white box or black box) which was correctly scoped to include the affected files would likely have found the SQLi and authentication bypass issues (mentioned in update 12).

Without seeing the codebase in question, you can't be sure, but having been a web app pentester for 10+ years, these are the kind of issues that were found regularly, and whenever I saw classic ASP in tests, they were the kind of issues I'd be looking for, knowing the inherent weaknesses in the platform.


Did the RMM box really have to be on the open internet? In infra I run, anything with a public IP is behind numerous layers of FWs and VPNs, why not the same here?


Turtles all the way down.

The only way to get secure software is to increase liability of parties involved.

My suggestion: Start with the ability of any customer to return any purchase (hardware/software) which contains software with a disclosed but unfixed CVE after 90 days without a patch. If this doesn't get rid of the Internet of Shit, I don't know what will.

Next, set a minimum damage rate of 100 USD per user for each data-breach that involves personal information and 1000 USD for any special kinds of personal information (credit cards, etc) and 10000 USD for any protected health information breach.


> increase liability of parties involved.

To include the liability of the attackers, which I think will ultimately be more scalable and effective than punishing the victims. Not saying there aren't incentives for the victims to "do better", but I think that will only get us so far.

History paints a picture of societies evaluating the effectiveness of better walls vs. owning the landscape, and deciding on the latter as being a more beneficial approach. It's how we get the saying that "Rome conquered the world in self-defense". I would bet that's where this ends up after enough material losses.


> Start with the ability of any customer to return any purchase (hardware/software) which contains software with a disclosed but unfixed CVE after 90 days without a patch. If this doesn't get rid of the Internet of Shit, I don't know what will.

Does this not also just kill tech? CVEs pop up decades after products have died. Now every tech product is just one unsupported CVE away from losing _all_ lifetime revenue. I just can't see how anyone would ever invest again...

edit: to clarify further, the fact that any CVE triggers this, no matter how small, seems egregious to me. The idea of there being no lifetime on the liability seems wild given how CVEs are often the result of other developers breaking ABIs. Imagine a profitable product that was last sold 10 years ago having its full lifetime revenue refunded because of some change in glibc.


> CVEs pop up decades after products have died.

Are they even covered within the warranty period? I never tried it, but I think I'd have an interesting conversation if I went to a shop and told them I want to return a product because while it works flawlessly it's got a vulnerability. The standard procedure is usually getting a replacement, but this isn't possible here as the whole product range is affected.


Does any software have a warranty period?


Why not try to combine it with some right-to-repair-friendly stuff? If, after the cessation of support, you release any and all source code and documentation needed for any person competent in the relevant sciences and arts to maintain the device and repair any CVEs, you're off the hook for liability.

I had played around with the idea of requiring support for 3/5/7/10/whatever years after the cessation of sale, kind of like how car manufacturers are required to offer parts support for 10 years after sale, but I can see that causing enough overhead that many tech devices simply would never get made.


Apparently if you sell someone an airplane you're on the hook for 18 years of safe operation.


"I just can't see how anyone would ever invest again..."

I think you underestimate the (a) greed and (b) capabilities of the people involved.


Well, some limits of support should be given:

- How about 5 years minimum for hardware? And as much as the vendor wants to promise.

- How about requiring that vendors at least allow customers to pay for extended support for another 5 years, at 20% of the initial price per year?

It is just ridiculous that currently many devices are insecure 3 months out of the gate.


In my view, this isn't really a software problem. Your plan is on the right track though.

We already know it's nigh impossible to ensure software is bug- and vuln-free. We have to reckon with the fact that secure software is possible, but extremely hard and impractical to achieve with any regularity.

Because of well-known computer science problems such as the Halting Problem [0], and more that I am totally ignorant of, the only alternative is extremely thorough verification, like what happens in planes [1]. The rub is that, unlike aerospace and other critical controls, we cannot define the software security problem as narrowly as is possible in those other critical systems. Doing so would undo the benefits of our general-purpose technology solutions.

The reason I claim it is nonetheless not a software problem is: In case after case of these troubling examples of security failures, there are only a few organizational commonalities that link them all together. Time after time, there are shocking misconfigurations, corners cut, best practices eschewed, warnings ignored, processes disregarded or absent and swept under the rug.

C-Suite execs, as has been well noted, opt for the checkbox-style silver-bullet-whiz-bang, because they can understand the value proposition of one-problem-one-fix and it is easy to communicate to stakeholders. They are totally ignorant of and uninterested in the details of actually providing a quality product. Their product is often not what they are selling; they are getting their bonus, and they are gonna exercise those stock options.

The only way to make a dent in this security problem is to require adequate processes for infrastructure the way we require adequate lighting, or fire protection devices, or plumbing. Until it is a business requirement to provide appropriately managed processes, infrastructure, testing, and finally development practices to go along with it all, there is no hope.

Will your school or hospital be shut down next? Will it be your bank that is defrauded when your cardholder details are skimmed at the local gas station or stolen from a multinational retailer? Will it be your government's top secret clearance database, your municipal water, your child's photos from their phone or social media?

Our only real hope is that the next victims of this heinous, Steinbeckian, tragedy are the Congress, Senate, and whoever else it takes to get a fucking grip.

[0] - Computerphile https://www.youtube.com/watch?v=macM_MtS_w4

[1] - Examples of aerospace validation https://scholarworks.sjsu.edu/cgi/viewcontent.cgi?referer=ht...


It does seem that more serious maintenance and auditing of these providers is needed. Looking at this thread on reddit https://old.reddit.com/r/msp/comments/ocggbv/crticial_ransom... which is by a company looking into the incident and how it happened, it seems that the attackers exploited vulnerabilities in Classic ASP files used as part of the solution.

Having classic ASP pages hosted on a production system in 2021 seems like a pretty strong indication of a lack of codebase maintenance and auditing.


raesene9 says >"Having classic ASP pages hosted on a production system in 2021 seems like a pretty strong indication of a lack of codebase maintenance and auditing."<

AFAIK classic ASP pages can be as secure as those in any other framework. The vulnerabilities (most commonly SQL injection) are known and are addressable.
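For instance, the canonical fix is parameter binding, which ADO in classic ASP supports just as modern stacks do; a quick sketch in Python for brevity (the table and the hostile input are made up):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    hostile = "' OR 1=1 --"  # classic injection attempt

    # Vulnerable pattern: concatenation lets input rewrite the query.
    #   "SELECT * FROM users WHERE name = '" + hostile + "'"

    # Addressed pattern: binding keeps input as data, never as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (hostile,)).fetchall()
    print(rows)  # [] -- the injection string matches no user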

I know it chafes at some (Microsoft marketers esp.?) that ASP pages are still around. But classic ASP is yet another example of that old adage "If it ain't broke, don't fix it!"

I watched one organization assign an entire division of programmers to develop a moderately-sized ASP.NET application: they went through orientation and training in ASP.NET and then planned, designed, coded and rolled out ...nothing! After two years there was literally nothing!

A perceptive manager in another division approached his sole ASP developer and asked if she could write some ASP code to "demo" that same project. She looked at the specifications and then quietly wrote an entire system!

Weeks later, when the department heads saw her "demo", they thought the ASP.NET developers had completed it. Imagine their surprise to find that what they were looking at was done, not by the ASP.NET group funded millions of dollars but by someone in another division: a single focused classic ASP developer quietly working downstairs.

Nonetheless, IIRC ASP pages reach end-of-life support from Microsoft in 2025, so it might make sense to migrate. But because classic ASP was written in a manner consistent with early web standards (CGI/Apache), migration of classic ASP to one of the various classic ASP-like frameworks in PHP, Perl, Ruby et al would likely be easier, faster and cheaper. Migration would be mostly translation, and much, if not all, could be automated.

In contrast, moving classic ASP to ASP.NET would be more fraught with problems. The underlying models of the WWW are inconsistent.


The thing about Classic ASP is that it has no in-built protections and the individual developer has to code all defence, which is not great.

Also, how many classic ASP developers are there now, to add new protections for new issues...

And if you look at the linked thread, it doesn't seem like they've done a great job of maintenance...


raesene9 says >"The thing about Classic ASP is that it has no in-built protections and the individual developer has to code all defence..."<

Yes, it is just like PHP or Perl or Ruby with the CGI specification.

raesene9 says >" how many classic ASP developers are there now ..."<

Good question! But IMO a web developer could learn classic ASP in a single day!8-))

raesene9 says >"the individual developer has to code all defence"<

Yes, just like PHP or Perl or Python.


Sure, so they learn classic ASP in a day; now they have to code their own custom routines for XSS protection, SQLi protection, SSRF protection, etc etc etc

or.... they could move the application to a supported platform with a wider developer base and in-built protections against common web application security attacks.

There is a reason why most web app developers do not hand-code their entire stacks; for example, why Ruby web developers use Rails. Web app sec is a relatively complex field to get right, full of edge cases. Having each and every web app development team take the time to understand all those cases and edges probably isn't the best use of scarce developer resources.


> and then wrote an entire system!

Yes yes. And this was of course very maintainable, up to today's standards, easy to onboard new hires in, not falling apart in edge cases that weren't part of the demo, code that was audited by multiple developers, it didn't take any shortcuts in terms of security, etc. etc. etc.

That some business guys are amazed by the "lone wolf dev skills" might be explainable, but on HN we should know better. Yes, there are devs that get a ton done irrelevant of framework, but "getting things done" (in terms of business requirements) is only part of the story.


Once upon a time, there was an application that needed to be replaced. (Apache mod_perl 1!?) My employer put a team together to do it, a team that was very concerned that it be "very maintainable, up to todays standards, easy to onboard new hires in", etc. Unfortunately, they apparently failed to include "working" in there. As the project schedule began to slip, they added more people to the team.

Eventually, after many months and at least a couple million dollars, I and three other people got dragged in: another developer (no one lets me do UI code), a good project manager to coordinate with users and management, and a developer/tech lead/manager who had two unique abilities: good at cutting Gordian knots, and having enough political capital to say "no" to people (most people around couldn't say "no" to save their life; I can and do, but nobody listens to me). The latter promptly booted the previous team off the project.

Within four months, we had the new app up and running and in production. I'm told that it has one of the best issue track records in our area, including never having had a sev 1 outage. On the other hand, I have heard some of its current maintainers complain about it not being "up to today's standards" because we deliberately kept it simple rather than adopting a bunch of complexity for resume-padding reasons.


That some business guys are amazed by the "lone wolf dev skills" might be explainable, but on HN we should know better.

I'm less cynical than you are about the parent comment, largely because I'm watching the same scenario unfold within my own company.

We're on year two of the million-dollar team turning out nothing. I'm in a different department, but my position means I liaise with lots of different people throughout the organization, so I also know that a single dev in a third department is half-way through solving the problem, because an ultra-high-level manager doesn't want to wait anymore and will use it as an excuse to can the other team and that department's director.

And this was of course very maintainable, up to today's standards, easy to onboard new hires in, not falling apart in edge cases that weren't part of the demo, code that was audited by multiple developers, it didn't take any shortcuts in terms of security, etc. etc. etc.

Unless you've seen the person's codebase, it's uncharitable for you to assume that it's deficient, perhaps based on your own experiences.

If this scenario can happen in two companies, it might be more common that any of us realize.


Indeed, I'm personally half of a two-dev team maintaining and developing a system that I'd say falls deep into this category. Our giant legacy codebase has a few warts, but on the whole it follows a few simple design patterns that, once you understand them, makes it very easy to find your way around and change/extend. We also have an extensive domain understanding and direct contact with our user base, so we design, implement, review/test and deploy features very fast compared to any other team I've encountered in the organization. And though the underlying technology is mostly ancient, our product is growing strongly and consistently outcompeting systems with budgets of a different stratosphere in the open market.


Three.


There was user & supervisor security (code that she had developed for another system). It included backup of code and databases.

What was shown wasn't a demo - it was a full implementation. It was rolled into production weeks after the first showing and AFAIK is still in use 20 years later.

What is there other than "getting things done" (that is, other than bitching about it)?


Hey look, I'm not saying that it's not possible. I even believe you - in a lot of big companies, there are so many "bloat"-teams that don't really do all that much, that just drift around in the currents that flow when cash is abundant and will get shafted the moment the company has to tighten its belt (or goes belly-up altogether, if the company is incapable of identifying its inefficiencies).

That being said, we should really take these anecdotes with the appropriate grain of salt. After all, we're here in a thread about countless companies being threatened by security flaws. A dev that "gets things done" (from a business point of view) might actually do that, but do they also think about all those other requirements that business themselves can neither validate nor appreciate if measured by short-term KPIs?

I'm not saying that's happening, but I've seen too many "Why are you spending weeks on this, I can do this in half a day!"-kind-of-devs, that then hack something together that technically works as the requirements demanded, but completely falls flat under any aspects of maintainability and extensibility, let alone security.


> That some business guys are amazed by the "lone wolf dev skills" might be explainable, but on HN we should know better. Yes, there are devs that get a ton done irrelevant of framework, but "getting things done" (in terms of business requirements) is only part of the story.

No, you shouldn't be surprised by this; it's natural that smaller dev teams are actually faster. This is the Mythical Man-Month.


I can build a Twitter clone in a couple of hours. I’ll email my demo to the CEO and tell him he’s wasting millions of dollars on his engineering division.


Please do!

But the best way to do this (and the way consistent with this thread's narrative) is to develop the final system before you tell the CEO. Oh, and best to have a supervisor for you who does the telling.


Old != bad.

Old and well-maintained software can be reliable and secure; it's just rare to encounter it. Maintenance is underappreciated. People don't get raises and promotions for successful maintenance of an old system. But they do get them for new projects, even if the project is a rewrite of an old system in a new language/framework (and even if the rewrite introduces new bugs and vulnerabilities and drops some old features).

So if an organization has money to spare, the software will be rewritten every several years to follow the fashion, and if it doesn't have money, security will suffer too.


Sure, old != bad if they maintain it... Classic ASP was good 20 years ago, but it's harder to secure than a modern framework that helps with security (e.g. XSS helpers, ORMs to help mitigate SQLi, which is one of the issues exploited here).

The problem is that in most codebases I've seen, once you get to 19 years past deprecation, it's not a well-maintained, hardened environment. It's a poorly maintained environment that people don't want to clean up for fear of breaking something.


it's bad if it's running on a deprecated runtime that's no longer being patched for security


If the runtime is no longer maintained then it is of course a problem (I'm from the Unix world and don't know if classic ASP is still supported by the vendor). We cannot say that a system as a whole is well maintained if its dependencies are not.


The thing is, in web appsec world, when Classic ASP was a production platform, we were very early in terms of what attacks would be prevalent and the defences that are generally added to modern web application frameworks were not in use.

In theory you can totally add those protections per application but the effort of doing that, maintaining the knowledge required per application or team and keeping on top of new research, is likely higher than just moving to a new framework which has in-built protection.

Also you have to consider developer availability. At 19 years since it was deprecated, there is a smaller pool of people who are skilled at maintaining that codebase, and the group of people who can do that, and keep on top of web application security attacks is even smaller.


Attacks evolve, but I remember reading around 2005 on the OWASP site about SQL injections and XSS. Database APIs with parameter binding (and auto-escaping of those parameters) were available in most popular languages even back then, and here we are 15 years later, SQL injections and XSS are still in the OWASP top 10.

Though Classic ASP was released much earlier, in 1996, and I don't know whether its SQL libraries offered parameter binding or its template engine offered escaping of strings in template variables.


The first named mention of SQLi was 1998 by Rain.Forest.Puppy (AFAICR), however classic ASP did not have any in-built protection from it.

I started in security in 2000, as an analyst for an org using classic ASP. maintaining security was a pain.

I then started as a pentester in ~2005, and what I can tell you is, in my experience, classic ASP applications rarely had good protections against injection attacks (e.g. XSS, SQLi), for precisely the reason that the framework did not protect against them, leaving developers to make sure they had routines to canonicalise/sanitize/parameterize input correctly, and also that they implemented them universally across all possible user inputs.

Whilst this isn't impossible, in my experience, it was rarely done perfectly.

In comparison, something like ASP.NET, which had in-built protections available, at least had the chance of having good uniform protection.


I believe I can push parameter binding back past 2000.


can't say for sure but I'd wager that it's not


IIRC Classic ASP has the same end-of-life, 2025, as Windows 10.


It's also a matter of the dependencies used: are they running on supported .NET?


I say this quite often, or we get better as a profession or we are going to get related to death. No society will tolerate failures in major businesses including food providers to happen continuously. Or we do better, or the next generation of developers is going to have regulated programming languages, architectures, and tech stack.

Or we show that we can deliver safely and resist business pressure to deliver fast and cut costs, or the government is going to do it for us.


If software engineering were physical, it would be like a non-stop series of Surfside condos and 737 Max crashes.

People just don’t take virtual things as seriously, unless they involve conspiracies.

*edit: when I say "people", I mean the end-users who would otherwise demand change.


Except software is behind the ~40 million successful flights around the world each year.


Sabre and ATC core code are 50+ years old. Lockheed bought a particular Unix source license 15 years ago for ATC systems because the vendor would no longer support it.

Everyone who wrote it is retired or dead.


The vast majority of software is more like hipster react web dev garbage than serious engineering.


It's good to stay away from this sort of dismissive language. I'm a so-called "serious engineer" who's spent most of their career working on compilers and embedded OSes, designed and implemented a production JIT compiler, and a wide array of other stuff some people might characterize as deep magic.

There is significant complexity and depth to front end tooling these days, and I consider my colleagues working in that space to be as talented and experienced as anyone else.

The issues in the frontend space seem to me to arise as an artifact of the fact that the work they produce is far more visible and directly evaluable by non-technical people. The demands placed on them to deliver features quickly are far higher. They're far closer to the user-experience side of the product than most backend devs. And they have to deal with a tools ecosystem that evolves and changes far faster than others.

Trying to describe that complex issue in terms of stratification of developers into "serious engineers" and "not serious ones" does a disservice to the underlying problem, and doesn't help address it.


Most of the complexity in front end development is self-inflicted, and the result of amateurish pseudo-engineering. It isn’t dismissive; it’s accurate. Sometimes things just aren’t very good or have poor quality. That’s a separate issue from how smart people working on the issue may be.


I would have to emphatically disagree. The application architecture of the contemporary web - a distributed system which involves smearing application state and logic by transporting code and data dynamically as needed from one location to another - has only existed for at most 15 years (since the advent of "fast javascript" with V8 and the growth of the set of APIs that enable rich network interaction, which we can summarize crudely as "web 2.0" or "AJAX" or what have you).

The flux and frenzy in the space is much more a symptom of the novel nature of the application architecture, the size and speed of growth of the industry around it, and the rate of experimentation with frameworks and tools to quickly build extensive applications within it. Those are not self-inflicted, but circumstantial.

> That’s a separate issue from how smart people working on the issue may be.

I took particular issue with the original commenter dismissing an entire class of developers on the basis of what tools they worked with. I found it to be an example of gatekeeping that unfortunately is far too common in the industry.


I was the original commenter, and I stand by the comments. It's not gatekeeping. I don't think any particular area or specialty/focus of software development is better, requires more intelligence, or anything like that. It was an observation of the state of affairs, not a commentary on worthiness.


The vast majority of software doesn't need serious engineering either. If Google breaks that isn't a big deal, even though almost everyone in the world will be annoyed.



Data breaches don't kill people in general. Medical devices can.

Your point is well taken; even though things are not as bad, they are still bad.


How much non-aviation software is built to similar quality standards?


We have self regulation available via NIST 800-53, NIST 800-30, SOC2, mandatory in some contexts like FISMA, etc. What is now guidance can become mandatory audits with the stroke of a congressional pen.

I sound like a broken record but it bears repeating: Most of these attacks are successful because companies neglect best practices. Whitelisting, security awareness training, UEBA, etc go a long way.

I would hope the free market would prune companies without proper cybersecurity but regulatory capture means it probably won’t. Equifax and its executives are doing just fine.


> I would hope the free market would prune companies without proper cybersecurity but regulatory capture

Three things.

1. Markets have a slow reaction function, and it really is a reaction function. Let's consider Equifax. Suppose that market were competitive. Then you'd have dozens or even hundreds of firms with all of that data that Equifax leaked. It seems unlikely that having more copies of data floating around more firms would decrease the risk of a breach. By the time the market signals, the damage is already done, and Equifax going out of business does diddly squat for impacted consumers.

2. Markets also have perverse incentives. Data breaches, in particular, are not necessarily expensive. I've been affected by at least a dozen, none of which had a material impact on the company that lost my data. None of those companies except Equifax is subject to any sort of monopolistic forces. Some, like Dropbox, are basically commodities. This might be different in the case of Kaseya and SolarWinds, which are effectively IT security outsourcing firms. Maybe. We'll see. If both of those firms continue to exist at similar scale, then the hypothesis that markets can do literally anything about IT security is completely discredited.

3. Equifax is definitely a monopoly/triopoly, but the situation is much closer to cartel behavior than regulatory capture.


That's the problem: the free market prefers companies that have 100x bigger budgets for marketing than for security teams. Unless this changes, nothing will change.


The markets aren't always rational, but a slew of security flaws that drain revenue, whether through ransom payouts or through losing all of your data, will be a driving force behind improving security.


The true skill of choosing ransoms is to make sure the ransom is high enough to compensate you for the risk you take, but low enough that the other options are unappealing to the victim.

Large businesses can trivially set aside a few million dollars per year for ransoms. Small businesses don't have the ability to change the system.


That's a very good point. In that same vein, I do wonder whether the hostage takers are that aware of the implications of their actions, or whether greed simply guides them, given they're not a cohesive organization themselves.


The system as it exists now runs open-loop. There are no particular consequences for companies, executives, or developers who ignore "best practices", and there are no particular benefits for following them.

Society expects technology to "move fast and break things", at least until the consequences get too visible. And we're not there yet.


Just FYI, the "or ... or ..." construct you are using here sounds odd to a native English speaker. The correct idiom is "either ... or ...", e.g. "Either we get better as a profession, or we are going to get related to death." (And I'm also guessing you meant "regulated" rather than "related".)


If "related" was a typo, it was a happy one. The root problem is that we're getting more and more connected, but those connections have low security. It's too easy for bad stuff to flow across those connections. We're getting related to death.

Much as developers in the past couple of years have really started to grapple with the fact that as wonderful as software libraries are, dependencies have non-zero costs that come with their benefits, connections need to be seen as having non-zero costs as well. I work in a similar situation myself and I know my philosophy over the past 10 years has very much gone from a permissive "well maybe I'll need this later" open-ended protocol design to a super-strict "this connection moves exactly this set of strongly-typed, verified messages over it, and if anybody tries any funny business we slam it shut and scream bloody murder on the monitoring" philosophy. It's especially challenging with the Web's support for messaging, which is great for being a web page but is flabby and bloated for a messaging solution, what with all these headers that do magical things in the web servers or anything in between.
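A minimal sketch of that "exactly this set of messages" stance, with a made-up two-message schema (real ones are of course richer):

    import json

    # Closed-world schema: exactly these message types, exactly these fields.
    SCHEMA = {
        "set_temperature": {"celsius": float},
        "get_status": {},
    }

    def handle(raw):
        msg = json.loads(raw)
        kind = msg.pop("type", None)
        fields = SCHEMA.get(kind)
        if fields is None or set(msg) != set(fields):
            # Slam it shut and scream bloody murder on the monitoring.
            raise ConnectionError("unexpected message shape: %r" % kind)
        for key, typ in fields.items():
            if not isinstance(msg[key], typ):
                raise ConnectionError("bad type for field %r" % key)
        return kind, msg

    print(handle(b'{"type": "set_temperature", "celsius": 21.5}'))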

As our systems get more and more complicated, biological metaphors seems to be ever more useful, and I think we need to look to nature's highly defensive "programming" a bit more for inspiration. Chemistry has some different characteristics from our programming environment, some to nature's advantage and some to its disadvantage, so we can't blindly copy it, but living things take 'message security' very seriously. You don't last long if you just blindly trust everything out there.


Who is “we”, though? I think a lot of devs cut corners, a lot of managers insist on cutting corners (preferably without a paper trail pointing at them), and a lot of businesses hire inexperienced devs that wouldn't even know where to start, because it's cheaper.

I don’t know how you coordinate this without regulation and something like P.Eng-type certification (which comes with its own problems).


I think you should certify systems in place, not engineers.

People without credentials but with experience and care can produce a reasonably secure system. People with credentials but under time pressure and contradictory requirements, and willingness to cut corners, can produce an unsafe system.

It's the system that fails. It can be inspected the way buildings are inspected.


Ideally I agree, but I’m not sure it will be effective. There needs to be concrete consequences, and there’s a lot of room to hide.

If we required something like a P.Eng, which (at least theoretically) means you are responsible, with real consequences (for, say, bad building design), then certified employees would be far more reluctant to attach their names to shoddy practices. This bubbles up to the employer. If a client requires certified P.Eng sign-off, then there's some reassurance it's not just going to fall apart.

I don’t think you need this for everything in a stack, but security wise it might be a good idea.


"I don’t think you need this for everything in a stack..."

The problem there is that everything on the stack is a potential hole. The chain is only as strong as its weakest link.


Certification of people is cheaper than certification of systems built by random people. For fun: Try to get a commercial building built that doesn't have an architect and structural engineer (both of which are certified professionals) somewhere in its background.


This is a really good point. Because the stakes are so high it makes sense. No one would want to live in a building that's yolo designed. Certifying and insuring it after the fact though would be really hard because a lot of critical decisions are hidden away.

We accept that possibility in SaaS though... for now.

Most successful SaaS applications probably weigh in at 250k LOC or more (I know LOC isn't a great metric, but it serves as an idea of an application's complexity) and will use many libraries that themselves could be questionable. Seems like it would be pretty complicated to certify.


I totally agree.


It's the CYA that's preferred over CIA that causes execs, shareholders and customers to accept this dangerous malfeasance.

It's far less the programmers than the business plans that demand minimal investment. These are classic externalities that serve to damage society at large.


Doing the right thing is bad for short-term profits. Thus, government regulation in our industries is inevitable. In fact, I would argue that it's the only way to ensure consistent reliability. There's a reason why Facebook is pushing for regulation. They know that it can't be stopped, and in that case they would prefer to be the ones writing the regulations.


But government regulation very often is preferring short-term profits to doing the right thing. Just the profits are political.

Think of all the security theater in airports, for instance. Think of all the to and fro in many policies as different officials get elected every few years.


I think we’ve lost control of our machines. At least when we attach them to internet.

I’m not sure what the answer is but better security and a rethinking of user authorizations seems to be in order.


We never had it. It was just not really useful to destroy someone else's toys.

The key to the current spree of ransomware is the massively improved ability to monetize digital hostage-taking. I don't really understand how financial watchdogs have let this go through, but cryptos have become a massive loophole in KYC and anti-money-laundering regulations. Recent moves in that sector seem to hint that this party is about to end and will, hopefully, create enough friction to reduce ransomware activity.


Cryptocurrencies have been a disaster. The only place they're actually used as a currency is in criminal activity. The amount of environmental damage caused by proof of work is massive. We need regulation.


> The only place they're actually used as a currency is in criminal activity.

There's actually less illicit activity in cryptocurrency than in USD.

> The amount of environmental damage caused by proof of work is massive.

Not even 1% of global energy consumption. When is this FUD gonna stop?


What would be an acceptable percentage of global energy consumption by cryptocurrency to you?


Your question comes loaded with the assumption that all energy production is equal. In reality, energy harvested from hydro or geothermal may have lots of value for proof of work processes, but they don't have lots of value for someone living far away from the hydro or geothermal plant.


There are other energy consumers that can be located next to cheap energy, like aluminium production. This alone explains why Iceland is a significant exporter of energy-intensive manufactured products.


Well, if you want to consider that an assumption: energy from coal also has lots of value for proof of work, if mining gets profitable enough.

Which... it already has. Coal plants have been spooled up just to mine btc. The idea that we should continue or expand crypto mining is ludicrous, because it already has harmed the environment, even at its small market cap.


A currency should take less than .00001% of global energy consumption to run. Absolutely no reason to burn energy like this. Use proof of stake or just convert some other valuable entity like stock into currency.


Depends on which cryptocurrency you're talking about. Bitcoin? 0%. Monero? +Infinity.

The real question is: why do we still allow oil companies to exist when they're the ones responsible for much of the world's pollution? Because the USD is backed by oil.


> why do we still allow oil companies to exist

Probably because attempting to enforce an oil ban overnight would involve death counts in the 9-10 digits. That is why any attempt is going to be progressive, and we can all agree that it's not going nearly fast enough.

That question is kind of a diversion from the current discussion, though.


> That question is kind of a diversion from the current discussion, though.

It's really not. Cryptocurrencies have no real impact on the environment so discussing that is not really productive. Better to redirect discussion towards real problems which are actually destroying the planet. Problems which will never be properly solved because powerful people depend on their existence. The petrodollar, trade with China, etc.


> > That question is kind of a diversion from the current discussion, though.

> It's really not. [...] Better to redirect discussion

At some points the strings are getting too visible.


Redirection is not distraction. The topic is environmental impact. Discussing cryptocurrency's impact is pointless because it is irrelevant in the grand scheme of things. People apparently believe cryptocurrency is some kind of high priority problem that needs to be dealt with ASAP. In reality, it's responsible for less than 1% of global energy consumption and pollution. Pointing this out ends that discussion.

People insist on posting this FUD. I get it, they're concerned about the environment. So let's talk about real problems instead. Such as the highly polluting Chinese manufacturers which no doubt fabricated the hardware we are using to write our opinions.

The real distraction is this "cryptocurrencies kill the environment" idea.


I wouldn't blame crypto for crime, but a system that makes practically anonymous, large-volume payments overseas possible has helped make these ransomware attacks feasible. The four-figure scams with bank transfers and mailed cash seem kinda quaint.


The same cryptography that protects us also protects criminals. The whole point is to make total government surveillance impossible. That's what this KYC/AML business is all about: surveillance.

People should be able to transact freely without some government demanding explanations. If there's more crime as a result then so be it.


>The only place they're actually used as a currency is in criminal activity.

This discredits your entire post. Your other points may be valid, but how can anyone be sure if this one is provably false?


name a single large company that accepts crypto payments?


I'm not into cryptocurrency at all, but I've read that some of the billion-dollar casinos in Las Vegas take it.

My domain registrar does, and it's not exactly a fly-by-night operation.

I think at least one airline does. Though I might be remembering that wrong.

I won't ever get into crypto because I like money that keeps working when the lights go out. But I don't think it's either as fringe, nor as mainstream, as the two sides present.


The statement was a bit hyperbolic, but not by much. Have you ever used cryptocurrency for any routine transaction? How much total money have you spent in cryptocurrency if so? It's very newsworthy when people like Elon musk say they will accept cryptocurrency for the very reason that most businesses do not. And I would be surprised if more than a handful of Teslas have been sold via cryptocurrency.


Musk has already backtracked on accepting crypto at Tesla, citing environmental concerns. A cynical person might assume that it was just a publicity stunt in order to pump bitcoin.

https://www.bbc.com/news/business-57096305


> And I would be surprised if more than a handful of Teslas have been sold via cryptocurrency.

I personally would be surprised if Tesla made more money from selling those cars than they made by pumping btc with that news line.


Maybe computers shouldn't be talking to strangers in the first place. Why do they accept connections from anyone? These problems would be rare if only authorized persons could connect. Single packet authorization makes the computer ignore all traffic unless a cryptographically signed packet is sent. It's like the computer is not even there. Can't exploit anything without the ability to send payloads.

Of course, the internet would lose its mass appeal. Maybe it wasn't meant to be.
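For the curious, single packet authorization boils down to something like this toy sketch (pre-shared key; fwknop is a real implementation, with proper replay protection and firewall integration):

    import hashlib, hmac, time

    SECRET = b"shared-out-of-band"  # delivered offline, never over the wire

    def make_knock(client_id):
        ts = str(int(time.time()))
        mac = hmac.new(SECRET, ("%s:%s" % (client_id, ts)).encode(), hashlib.sha256).hexdigest()
        return ("%s:%s:%s" % (client_id, ts, mac)).encode()

    def verify_knock(packet, max_age=30):
        try:
            client_id, ts, mac = packet.decode().split(":")
            age = abs(time.time() - int(ts))
        except ValueError:
            return False  # malformed: drop silently, as if nothing is there
        want = hmac.new(SECRET, ("%s:%s" % (client_id, ts)).encode(), hashlib.sha256).hexdigest()
        return age < max_age and hmac.compare_digest(mac, want)

    # The host answers nothing; a valid knock briefly opens the port for its sender.
    print(verify_knock(make_knock("laptop-1")))  # True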


I’m beginning to think this line of advocacy is a red herring. If well-endowed entities like nation-state-backed ransomware attackers want to completely ransack companies with nine-figure budgets, they will always win.

To me, this is more of a foreign policy issue. I’d say the damage caused by an attack like this can be quantified in loss of American life, and we should treat it as such. What should we do if Russia were killing 20 US civilians every few weeks?


Haha it's clear to basically every competent security team that this audit has to occur, but it usually is not clear to product, engineering and operations.


Is that good enough? It seems like we have reached a point where people can perpetually find zero-days in any system and that we need to completely start over - from chip design to the highest level programming languages - with security in mind.


That's not enough either; the humans overseeing the redesign will also make human errors.


It's not like we don't have safe enough CPUs, safe enough languages and programming techniques, or understanding how to architect safe systems and usage practices.

It just costs a very significant amount of money. Many businesses just don't see the lowered risk to be worth the expense.


Why is it not possible to create a storage solution that protects against ransomware?

Of course it is possible to do this, but it requires considerable cost and diligence. External hardware that only takes data from the target machine, for example, with a long timeline and key transactions logged. "Backup can't protect against ransomware" statements seem to be just shorthand for "your piece-of-shit backup doesn't protect against ransomware", which is true, but when shortened it doesn't send the right message.


It is; just move to the cloud.

AWS S3 and Google Cloud Storage have retention policy locks [1], effectively fuse bits for their buckets. E.g., set the retention policy on your backups bucket to 1 year and burn the fuse bit.

Now files cannot be deleted or changed in that bucket for 1 year after their creation. Not even the account root owner contacting support can get it changed. No ransomware, short of breaking into the AWS or GCP control plane is going to compromise those backups.

1. https://cloud.google.com/storage/docs/bucket-lock#policy-loc...

P.S. remember the write protect tab on old floppies/tapes ;)
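For reference, the S3 flavor of this looks roughly like the sketch below (boto3; the bucket name is illustrative, and Object Lock can only be enabled at bucket creation, so check current AWS docs before relying on it):

    import boto3

    s3 = boto3.client("s3")

    # Object Lock must be switched on when the bucket is created.
    s3.create_bucket(Bucket="example-backups", ObjectLockEnabledForBucket=True)

    # COMPLIANCE mode is the "fuse bit": once set, nobody -- root account
    # included -- can shorten or remove retention until it expires.
    s3.put_object_lock_configuration(
        Bucket="example-backups",
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 1}},
        },
    )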


Yeah, how hard is that?

For kicks, add an algorithm that contacts you multiple ways if it hasn't gotten a backup OR if the backup suddenly has a high diff from the previous one.
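A toy sketch of that alarm, treating a backup as a map of path -> content hash (the threshold is made up; tune to taste):

    # Alert when a backup is missing, or when the fraction of changed files
    # spikes: mass encryption by ransomware touches nearly everything at once.
    def check_backup(prev, curr, threshold=0.5):
        if curr is None:
            return "ALERT: no backup received"
        changed = sum(1 for path, digest in curr.items() if prev.get(path) != digest)
        if changed / max(len(curr), 1) > threshold:
            return "ALERT: unusually large diff from previous backup"
        return "ok"

    prev = {"a.txt": "h1", "b.txt": "h2", "c.txt": "h3"}
    print(check_backup(prev, {"a.txt": "x", "b.txt": "x", "c.txt": "h3"}))  # ALERT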


It’s actually a lot easier than that.

Back in the day I was using NetApp filers, which had read-only snapshot volumes mounted under .snapshot.

It was practically impossible to remove those snapshots, as doing so would require root access to the filer head.

ZFS, Btrfs, and LVM have similar functionality.
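
For instance, a cron-driven sketch against a hypothetical ZFS dataset; the snapshots are read-only by design, and destroying them requires root on the host that owns the pool:

    # Take and prune read-only ZFS snapshots (dataset name is
    # hypothetical). Destroying history requires root on the machine
    # that owns the pool, not on any client that mounts it.
    import datetime, subprocess

    DATASET = "tank/data"
    KEEP = 30   # retain the last 30 snapshots

    def snapshot() -> None:
        today = datetime.date.today().isoformat()
        subprocess.run(["zfs", "snapshot", f"{DATASET}@{today}"], check=True)

    def prune() -> None:
        names = subprocess.run(
            ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-s", "creation"],
            capture_output=True, text=True, check=True).stdout.split()
        for snap in [n for n in names if n.startswith(DATASET + "@")][:-KEEP]:
            subprocess.run(["zfs", "destroy", snap], check=True)

    snapshot()
    prune()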


Given current circumstances, it seems like hardware guarantees for numerous things are going to be necessary. Remember, we're building solutions for people who don't know WTF they're doing.


The base mindset around software development, particularly at the lowest levels, needs to change to put security as the #1 priority. Far too many vulnerabilities are due to use of unsafe languages and features.

While there are holes at all layers that lead to these types of attacks, as a runtime-systems and language/compiler person it's clear to me that unsafe languages should be abandoned, even if it costs a few percent of performance. The societal costs are just too great.


Lots to unpack here, and I'm a huge fan of memory-safe languages, but honestly it's quite misguided to assume that security can ever be priority number one.

Security isn't boolean, and the closer to "security = 1" you get the more non-functional the system.

It's always going to be a tradeoff between doing useful things and being secure. I would agree that we need to shift the bar closer to 1, but absolute security is impossible without the world closing down.

W.r.t. unsafe languages: it's not even possible to instrument and operate hardware in a "safe" way; even Rust, which is rather low-level, needs to be wrapped in unsafe in order to interact with hardware.

I believe we need to be better at detection, mitigation, and response -- all things sysadmins traditionally dealt with.

But our industry assumes that sysadmins need not apply.


It's still the case that two-thirds of critical CVEs are due to memory-safety errors, and that ratio has held roughly constant for decades. Those CVEs aren't in hardware drivers but in mundane things like JPEG decoders and mail-processing programs, and that is directly due to their being implemented in unsafe languages.

> Security isn't boolean, and the closer to "security = 1" you get the more non-functional the system.

Security isn't one-dimensional. Security can be phrased as functional requirements like "does not allow remote code execution attacks", which is of course a boolean requirement. You can slice that finer and finer, such as "does not allow remote code execution attacks through X, Y, or Z mechanisms", and start adding other, higher-level requirements such as "does not leak user data through APIs", etc. Security isn't boolean, but it is absolutely chock-full of boolean requirements one can pose.

> I believe we need to be better at detection, mitigation, and response -- all things sysadmins traditionally dealt with.

I don't disagree with that, but this is the last-line-of-defense, the-horse-has-left-the-barn stage, which is basically admitting defeat because of how absolutely riddled with vulnerabilities existing software is.


> W.r.t. unsafe languages: it's not even possible to instrument and operate hardware in a "safe" way; even Rust, which is rather low-level, needs to be wrapped in unsafe in order to interact with hardware.

Like you just said: security isn't boolean. Using more secure languages doesn't mean we have to banish every occurrence of unsafe in Rust. It just means we should avoid languages that force even business logic to be written using pointer arithmetic.


It's not just software development; much more of the responsibility lies at the sysadmin/network/firewall layer, and even more at the management layer.

Networks need to move to zero-trust models, sure, but companies also need to evaluate the risks of all the systems and processes they rely on. The problem is that it's too easy to accept or downplay the risks, while the work to address them is costly.


I don't consider "remote IT management" (outsourcing) software to sit at the lowest levels. They're commercial solutions developed solely for commercial purposes.

This isn't some ancient, underfunded open source library, this is people getting what they're actually paying for.


Part of me suspects this dearth of attacks could have been prevented had information security not been captured by leadership as a purchasing decision constrained by magic quadrants and trade-magazine articles, and instead been returned to IT as a technical process with audits by leadership.


Just FYI, "dearth" means the opposite of what you intended here. "Dearth" means "scarcity".


Most small businesses probably don’t have an IT department at all.


I feel like people don't quite have a grasp of how many small businesses there are in the US.

99.9% of businesses in the US are small businesses.

4 out of 5 businesses in the US are so small, they don't have any employees at all.


The flip side of this is that plenty of enterprises come and go without us noticing. You might have 222k enter and 249k exit in a quarter. My point is that ransomware at the B2B layer could shutter 100k businesses and it would be hard to distinguish from normal churn.


Ransomware isn’t always debilitating. We got hit by an attack a few years back and realized no one had anything relevant on that machine anymore so we just wiped it clean and moved on.


That's old school. Targeted attacks today home in on critical infrastructure before striking.

Many times data is exfiltrated beforehand and backups are deleted. If someone went to the trouble of compromising a third-party software vendor, they know what they are doing.


Something strange: no Iranian firms, no North Korean firms, no Chinese firms, no Russian firms under attack.


None of the crews in Russia dare target a Russia-based company, for fear of elimination the hard way. It's easier to attack another country.


Surely not all ransomware hackers are Russian? At least not for long, given how effective and visible they have been lately. It seems like an easy field to hop into and make millions.


How do we know that? Would we even know if they were?


What we are experiencing is the next wave of Russia vs. the USA. In this go-round, instead of atomic missiles we have (profitable) cyberattacks. While it seems non-critical infrastructure was compromised in this attack, I have coined the phrase "Cold War II" to describe the critical-infrastructure situation. Feel free to use it. Hopefully the weight of a new cold war will help put cyberattacks into the correct perspective for the media.


Here is a story I heard recently on IRC:

This gentleman was working at SBG, a media conglomerate in America. During a troubleshooting session using the Sysinternals tools, specifically TCPView (https://docs.microsoft.com/en-us/sysinternals/downloads/tcpv...), they noticed that a certain address/domain kept showing up regularly, even though no code was supposed to talk to that address. This responsible engineer promptly told his manager, only to never hear it mentioned again. One or two months later, that was one of the addresses listed as part of the SolarWinds fiasco.

Another episode: the same engineer noticed that a fellow engineer, probably unknowingly due to inexperience, was irresponsibly inserting a backdoor into a process via eval on unfiltered input from a command-line parameter, a no-no. He notified the colleague and provided a simple proof-of-concept exploit, only to receive yelling and gaslighting in return, with statements such as "we don't care about these things at this company." Eventually the manager was notified, and his response was, "I have told them so many times about this," yet that also never went anywhere.
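
For the curious, the anti-pattern from that second story looks roughly like this (sketched in Python for concreteness; the story doesn't say what language was involved):

    # The anti-pattern: eval on an unfiltered CLI parameter is remote
    # code execution for whoever controls the argument.
    import sys

    value = eval(sys.argv[1])   # DON'T: the argument could be
                                # "__import__('os').system('rm -rf /')"

    # Safer: parse, don't evaluate.
    import ast
    value = ast.literal_eval(sys.argv[1])   # accepts only literals:
                                            # strings, numbers, tuples, ...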

Security is a layered process, but with stories like these it's no wonder attacks are common. Someone, somewhere, will behave like the characters in these stories, and that is all it takes. Amplify that across every company in business, and the other side has a pretty easy time finding open doors.

As long as management creates an environment where disclosure is considered "rocking the boat," managers and employees will never do the right thing.


Only 1500? Phew.

Please tell me that my most onerous and security-conscious customers weathered this just fine. You know, the folks that lock down ports by jack number, MAC address, and user; the folks that MitM everything and instantly cauterize a port if the traffic becomes suspect. Please tell me they made it through this OK and that all the security theater was for something.


So who’s next? ConnectWise? Atera? Ninja? I’d be shitting my pants if I were running any of those right now.


> One of those tools was subverted on Friday, allowing the hackers to paralyze hundreds of businesses on all five continents.

Aren't there 7 continents? I get ignoring Antarctica in this context, but it's still wrong to say "all five" here. What weird phrasing.


It seems continents are taught differently depending on where you live.

https://en.wikipedia.org/wiki/Continent#Number


Interesting, I didn't know this. Thanks!


I wonder how many of these businesses could still pass their SOC audits while having these kinds of holes.


The headline should be: US firms' CEOs want the taxpayer to fix this too.

It's hard work cutting budgets and outsourcing whatever is possible to the Jamaican bobsled team. So now the government has to hire, train, feed, and stable yet another army to protect these helpless, overfed crybabies.


When I went to college decades ago, we always studied case studies on what not to do: this company did this and almost went out of business, so don't do this. When bailouts happen, the end result of an action is obscured, literally subsidized, and I'm afraid people don't take it as seriously.

Funny though, having gone through the Y2K fix, I'm aggravated that systems are now again storing dates in two-digit format.
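
To see why two-digit years are a time bomb, consider that every parser has to guess the century. Python's %y, for example, follows the POSIX pivot (69-99 map to the 1900s, 00-68 to the 2000s):

    # Two-digit years force the parser to guess the century.
    from datetime import datetime

    print(datetime.strptime("68", "%y").year)   # 2068
    print(datetime.strptime("69", "%y").year)   # 1969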


Given that most people don't know that the Y2K bug was real, I feel that this is a category of bug that will plague us forever. And it's already happening more often than once per century.

https://en.wikipedia.org/wiki/Year_2038_problem
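
You can watch the edge in a couple of lines of Python:

    # A signed 32-bit time_t runs out on 2038-01-19 at 03:14:07 UTC.
    import datetime, struct

    last = 2**31 - 1
    print(datetime.datetime.fromtimestamp(last, datetime.timezone.utc))
    struct.pack("i", last)           # still fits in 32 signed bits
    try:
        struct.pack("i", last + 1)   # one second later...
    except struct.error as err:
        print("overflow:", err)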


The simplest solution remains to ban the formal exchange of crypto for fiat across Western nations.

It's a lot harder to justify giant ransomware campaigns when you're paid in Amazon gift cards instead of easily exchangeable cryptocoins.


So you don’t actually want to ban the exchange of crypto for fiat, you want to ban companies from being able to pay the ransom (with crypto)?

I don’t think yours is a simple solution or the right one (banning cryptocurrency). But I do think bans on payment of the ransom are interesting.


I can’t speak to the parent comment’s intent, but it’s becoming harder and harder to look like an innocent crypto whale. While some can prove that they originated their balance, what if the wallets used in certain transactions are (or must be) confiscatable, say at exploited tumblers?


This works great until some nation-state adversary wants to shut down the entire US infrastructure. Or, even better, some script kiddie decides it would be fun to feel powerful.

And they won't care whether companies pay ransoms or not.

Treat the cause of the sickness, not the symptoms.


I think you'd just get a new category of bad guy--the one who charges you $500 to help you circumvent whatever legal restrictions are preventing you from paying your $10,000 ransom.

Or I guess two new categories, because the victims are all criminals now too.


> Or I guess two new categories, because the victims are all criminals now too.

The victims won't become criminals because you'll never find a senior executive willing to go to prison to pay a ransomware ransom. And no, "pay someone to pay it" or "have a random low-level nobody pay someone to pay it" is not going to work. Judges/juries aren't that stupid and senior leadership typically know judges/juries aren't that stupid.

Criminalizing paying ransoms would work, and this particular "they'd just pay someone to pay the ransom" argument against criminalizing paying ransoms is beyond specious. Criminalizing paying has worked with other, much more serious types of ransoms. Why wouldn't it work here?


The parent comment is not about fiat->crypto but the other way around: a similar effect to stronger KYC on someone suddenly, inexplicably trying to pay for a yacht with crypto.


Ransomware existed before crypto. Also, banning crypto is very hard to do and arguably not legal.


Authoritarian control of currency is only good for the authoritarians.


Brilliant.


Why improve overall cybersecurity, which is at completely garbage levels at most companies, when you can blame crypto instead?

It just seems like the bill on security has come due, and I recommend paying it. Otherwise you leave the economy open to much more serious attacks than someone asking for a few million in crypto.


[flagged]


Please stop posting in the flamewar style to HN, regardless of how strongly you feel about something. It's tedious and repetitive, and usually turns nasty.

https://news.ycombinator.com/newsguidelines.html


I apologize for the short comment.

I do believe this to be the only humane and effective resolution.

Modern computers and networks are impossible to fully secure against hacking, just like it is nearly impossible to fully secure a building. The solution is deterrence, not trying to move computers and buildings into Fort Knox.

A few strong examples of what happens to these criminals will prevent countless more attacks.


Why? IT professionals raise concerns all the time, and those concerns are virtually always dismissed by management because of cost. Someone made that decision, and they should be held to account for it.

You're perpetuating the problem.


I've yet to see any IT person who says this kind of thing perform the actual risk assessment math that shows management made the wrong decision.


Are you implying that somewhere along the way, the math works out such that management shouldn't listen to IT? I'm stretching to see how the math works out when your company gets Colonial Pipeline'd.


Yes, I'm saying it is completely possible that the risk assessment could show that the damages of an attack like this are ultimately less expensive than the cost of mitigating them, once probabilities are accounted for. I don't know that that's true, but I don't know that it is untrue either, because nobody is talking about the risk-assessment math, least of all the people calling for massive infosec increases.
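
To make that concrete with entirely made-up numbers: if a company estimates a 2% chance per year of a $5M ransomware incident, the annualized loss expectancy is 0.02 x $5,000,000 = $100,000 per year. If the proposed mitigations cost $500,000 per year, declining them is the defensible call on paper, however bad it looks after the company actually gets hit. The real question is whether anyone ever wrote those probability and impact estimates down at all.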


I often feel the same way about robocallers. It would only take a few examples to make anyone doing it question their choices.



