I think what people are missing with all these analogies about burglaries and negligence is the funny difference between cyberspace and meatspace: In cyberspace, your attacker can be anywhere on the planet, located in virtually any jurisdiction, and reliably tracing and attributing attacks is a very difficult task. In meatspace, your attacker must be physically present and is generally obvious and thus vulnerable. This difference has dramatic implications for the ability of the enforcement model to reduce the incidence of attacks.
In meatspace, assigning 100% of the burden of blame to the attacker and absolving the victim of any blame at all agrees with our ideas of morality and sort of works because there is a non-negligible chance of holding the attacker accountable. This provides a measure of deterrence to would-be attackers.
In contrast, in cyberspace, the chance of holding attackers accountable is much lower. There is little deterrence to would-be attackers, especially state-sponsored attackers. Here we need to let go of our fantasy that blame must be assigned according to our idea of who is morally at fault.
Of course the attacker is always morally at fault. But legally, we must hold accountable organizations who are breached, because we need them to improve their security posture. An improved security posture is the only realistic path to a future with fewer and less impactful cyberspace attacks.
Strict liability or "victim blaming" for cyberspace attacks goes against our notions of morality but IMO it is essential.
> I think what people are missing with all these analogies about burglaries and negligence is the funny difference between cyberspace and meatspace: In cyberspace, your attacker can be anywhere on the planet, located in virtually any jurisdiction, and reliably tracing and attributing attacks is a very difficult task. In meatspace, your attacker must be physically present and is generally obvious and thus vulnerable. This difference has dramatic implications for the ability of the enforcement model to reduce the incidence of attacks.
The jurisdiction part is the key here, IMHO.
Sure, the likelihood of reliably tracing a single attack to a crew is very low, but a prolific crew has dozens or even hundreds of attacks going on in parallel, so tracing just one of them should be enough to take them down.
However, most crews live in "bulletproof" jurisdictions, where we cannot reach them.
If this wave of ransomware attacks continues, we really need a better solution. This could go as far as cutting the "bulletproof" countries off from the Internet, if it weren't for China and Russia, which we simply cannot disconnect for economic and political reasons.
I guess diplomatic solutions are needed, as well as investing more in IT security, secure OSS etc.
The way you have laid out this problem makes it seem similar to the naval piracy issue in the Age of Exploration. You have small, untraceable actors launching both ad-hoc and privateer-style attacks on large national and corporate entities.
Everything you suggested seems valid, and as you pointed out both the carrot and the stick are needed. The European powers enlarged their navies to absorb the surplus of unemployed sailors and used the enlarged navies to hunt the remaining pirates. British naval dominance (followed by American naval dominance) is what makes naval piracy comparatively rare today. I reckon a similar strategy would work digitally (put the best talent in golden handcuffs and hunt down the rest), but I'm not sure anyone has the resources, political will and the national interest right now.
> but I'm not sure anyone has the resources, political will and the national interest right now.
Well, this will change if/when ransomware attacks become a big enough issue to noticeably impact the economy, health care, or something else that politicians and voters care about.
I'm not an IT security expert, but I do think we are now observing an increased industrialization of ransomware. Some crews specialize in initial attack vectors, and sell them to others who specialize in the lateral movement, and then those resell fully compromised systems to specialists that do the actual ransomware and payment.
If this trend continues, countries will be forced to take this far more seriously than they do now.
Well sure, but all the potential choices have serious problems. American corporations have participated in weakening the government to the point where it's not capable, nor trusted, to do it. The EU may not have the cohesion, and they may not be able to get buy-in, since (I suspect) this will have to be a thing the Germans push hard for. The UK leaving the Union only throws another wrench in Europe being a solution to the problem. China and Russia's interests are aligned with preventing such a thing from happening globally.
Millions for defense, not one cent for tribute. Funny how that makes sense again.
It even occurs to me that like Tripoli of old, a lot of these "bulletproof" locations have a significant chunk of their economies based around this piracy. Romania's got some towns notorious for this, and India has places where scammy call centers are a way of life for thousands of people.
I saw once that the likelihood of a crime is a function of the likelihood of getting caught and the severity of the punishment.
It's difficult enough getting government departments within the same country to cooperate. Tunneling attacks through several international jurisdictions compounds the problem, especially if the attacker chooses to tunnel through states that are adversaries of the victim nation.
In practice, places like the UAE are examples, where there is severe punishment for theft. Diamond shops will leave diamonds in front of you while they tend to other customers.
The above was taught during a lecture at business school. A quick search found this link, which supports the above along with some other interesting points related to the subject.
"The certainty of being caught is a vastly more
powerful deterrent than the punishment"
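For what it's worth, the standard way to formalize that lecture's point is Becker's economics-of-crime model; this is my framing, not necessarily what the lecture used. A minimal sketch:

    % Expected utility of committing a crime, after Becker (1968).
    % p = probability of being caught, f = punishment, g = gain.
    \[
      EU = p \, U(g - f) + (1 - p) \, U(g)
    \]
    % Deterrence lowers EU either by raising p (certainty of being
    % caught) or by raising f (severity of punishment). The quoted
    % finding is that changes in p move behavior far more than
    % changes in f.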
Another solution: plugging attack vectors, like users' ability to run arbitrary non-sandboxed binaries. Server-side systems and thin clients are almost bulletproof. No viruses for Chromebooks.
I don't want to start thinking of corporations as moral actors, like humans are. I just want them to be held legally culpable and made accountable when their behavior has negative consequences. The issue of morality is saved for humans.
Fortunately, companies can already be held accountable if their negligence exposes personal data in security breaches.
Likewise, individual members of the corporation can already be held both morally and legally responsible for their personal actions and negligence. That's enough for me, I don't need to try to shame legal constructs too.
Now, whether companies are actually held accountable in practice is a separate issue. Equifax certainly wasn't, not in any way that matters. The same could be said for morally culpable CEOs (or CISOs for that matter). But, that's a question of how the law is applied, not whether our moral stance should be changed.
I’ve always thought along similar lines. What bothers me is that, if someone were to break into your home, there is risk to them because you are allowed to defend yourself by fighting back.
As far as I know, we aren’t allowed to counter-attack cyber attackers, so our only option is better defenses and then handing things off to the authorities. I used to work for a smaller eBay-for-a-niche-market type site, and dealing with fraud was our biggest issue.
We tracked fraud ourselves and even managed to send a delivery to a PO Box used by someone who had swindled customers out of thousands of dollars. We contacted the authorities, told them everything and exactly where the criminal would be.
They did nothing.
If we aren’t allowed to fight back and the authorities won’t do anything, what deterrent is there?
> But legally, we must hold accountable organizations who are breached
Parent comment is also insisting on the immature idea that we must generically hold organizations accountable when breached. I say immature because this idea keeps popping up once in a while from people who haven't yet realized that it's been debated repeatedly in the past and wasn't applied so generically for good reason.
There are so many nuances OP has ignored, and so many ways this is not only impossible but also a bad way of dealing with the situation. When a private citizen gets breached due to an insecure ISP router, is it just the ISP to blame, or also the user for not buying a better one even though the ISP allowed it? Who's responsible when a company user gets tricked by phishing even after the required training? Is that user personally liable for the breach? When a company Linux server vulnerability is exploited, who gets the blame? The user? The admin? The distro maintainer? The developer who pushed the code? This would kick OSS software to the curb, because most of it does not have an "organization" behind it to take the blame for every vulnerability.
Organizations will be breached. Most of them can't even afford defenses strong enough to stop a moderately determined attacker. Where do you draw the line between who's to blame, attacker or victim? With real-world crime we did a good job of fine-tuning that threshold over centuries.
The best you can do (and we should do it) is come up with a set of rules, regulations, and best practices that are enforced by law, and I think this is coming one way or another. For example: "patch any CVSS 9 or higher within 14 days of publishing", "implement 2FA for x and y access". But even these rules will always be behind the times and never enough to thwart attacks. They raise the bar for a successful attack and create a clearer (not clear) threshold for responsibility.
Sure, some cases are clear cut: you haven't patched for 2 years and have no leg to stand on. But the solution is certainly not blanket blaming the victim because you can fit it in an HN comment.
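To make the kind of rule I mean concrete, here's a minimal sketch in Python of how a "patch any CVSS 9 or higher within 14 days" check could be expressed. All data and names here are invented for illustration; a real version would pull findings from a vulnerability scanner:

    from dataclasses import dataclass
    from datetime import date, timedelta
    from typing import Optional

    @dataclass
    class Finding:
        cve_id: str
        cvss: float              # CVSS base score
        published: date          # when the advisory was published
        patched: Optional[date]  # when we patched, or None if still open

    CVSS_THRESHOLD = 9.0
    PATCH_DEADLINE = timedelta(days=14)

    def violations(findings, today):
        """Return critical findings that were (or still are) open past the deadline."""
        out = []
        for f in findings:
            if f.cvss < CVSS_THRESHOLD:
                continue
            resolved = f.patched or today
            if resolved - f.published > PATCH_DEADLINE:
                out.append(f)
        return out

    findings = [
        Finding("CVE-0000-0001", 9.8, date(2020, 12, 1), date(2020, 12, 10)),  # compliant
        Finding("CVE-0000-0002", 9.1, date(2020, 11, 1), None),                # overdue
    ]
    for f in violations(findings, date(2020, 12, 22)):
        print(f"{f.cve_id}: CVSS {f.cvss}, open past the 14-day window")

Even something this crude gives a bright line, which is the point: a clearer (not clear) threshold for responsibility.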
The crims have obviously worked out that it's much easier to subvert the "users" rather than have a head-to-head battle with IT. If a user (even a careful one) clicks on a link in an email, should they actually be held responsible for what follows, or is it the fault of IT/Security whose security setup allowed an email with a dubious attachment to make it through to the user?
I know many intelligent, conscientious, non-techy users who'd be mortified to think they enabled a ransomware attack - but is it their fault?
I partially agree, but think it depends on the nature of the attack and the types of security procedures/protections that were already in place.
For example, consider seat belts. If you don't wear one and you are involved in a crash, there is a serious likelihood you will die or be seriously injured. Hence we make it the driver's responsibility to ensure passengers wear their seat belts. Now, if everybody was wearing their seat belts, the car was serviced, there were airbags, etc., but out of nowhere a tree hit the car, should we hold the driver accountable for not having installed a cutting-edge anti-tree device on their vehicle? Of course not!
Unfortunately, defenders are always on the back foot. You can have the best security posture and still fall to a zero day. We need a nuanced policy in place which blames victims that have no security posture whatsoever, but properly assigns responsibility to the attacker when the victim did everything they reasonably could. Defining "reasonably could" is the very challenging part.
I think you are mistaking safety for security. The whole discussion is about security - preventing deliberate attacks by attackers. What you described in the seat belt example is safety - preventing accidents that happen without intention/malice.
Overall, it’s a good thing to place obligations on organizations to be diligent about cyber security.
However, I think comparing cyberspace attacks to meatspace burglaries (in the not-Ocean’s 11 sense) and negligence is an unfair comparison.
It’s like a cat and mouse game in actuality. Even with good defenses, determined attackers could still keep banging at the gates trying to get in. There are also attackers that have a good deal of sophistication and ‘cyber arsenals’ to go after these bigger orgs - including nation-states and large crime rings.
In a meatspace analogy: If someone owned a staffing agency, they might require employee ID badges, set 2FA, and have cameras in a building... but probably have no contingency plans for the Russian government attacking them or criminals with a wrecking ball smashing through the walls.
> In meatspace, assigning 100% of the burden of blame to the attacker and absolving the victim of any blame at all agrees with our ideas of morality
There's a lot of victim blaming that goes on for physical/non-cyber attacks of all types because of people's ideas about morality (some valid, some not); where the victim of the attack is generally responsible to a third party for the care of the object of the attack, that also extends into the legal system (mostly validly).
While your argument that there must be some duty to protect online data as well as an obligation not to attack has merit, the distinction between "meatspace" and "cyberspace" you are drawing on this topic seems specious and ill-informed about the way society in general, and law in particular, handles responsibilities outside of cyberspace.
This. I was not expecting the HN crowd to almost universally blame the attackers and fully absolve Funke. It just doesn't make any sense if you have the faintest idea about cyber security in the modern age.
With physical security I can walk around and check it for myself. I can even watch the contractors put it in place. There are several people involved that can spot mistakes.
With cyber security I need to trust that some programmer didn't make a mistake 15 years ago when they wrote the TCP stack in a 12 hour crunch shift because their boss needed to meet a deadline. It's impossible to check for the layman and extremely hard even for experts.
With physical security, you need to trust that the lock designers and manufacturers didn't make material mistakes. It is impossible to check for the layman and extremely hard even for experts. You can watch people install it, but that only offers so much assurance and is limited mostly to their expertise in installation. Further, we know that any lock can be bypassed given enough effort, so we have insurance against theft and maybe additional layers of security (cameras, a fence, watchful neighbors, etc.).
With cyber security your position is similar. You're working with a series of tools, none of which you can trust completely, and most of which have limitations or flaws. You layer them with the goal of raising the amount of effort required to breach all your defenses to a level too high for your adversaries to want to take on.
In both security domains, the basic positions are the same. Non-experts need to layer imperfect defensive systems atop one another to make successful attacks more difficult to achieve. Risk assessments play an important role in helping people decide how much is enough.
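To make "how much is enough" concrete: the textbook move is an annualized-loss-expectancy (ALE) comparison. All numbers below are invented for illustration; real risk assessments are messier:

    # Textbook annualized loss expectancy (ALE) sanity check.
    single_loss_expectancy = 200_000  # cost of one successful breach (EUR), assumed
    annual_rate_of_occurrence = 0.3   # expected breaches per year, assumed

    ale = single_loss_expectancy * annual_rate_of_occurrence  # 60,000 EUR/year

    control_cost = 25_000  # yearly cost of an extra defensive layer, assumed
    risk_reduction = 0.5   # fraction of the ALE the new layer removes, assumed

    # The extra layer is worth buying if it removes more expected
    # loss per year than it costs to run.
    net_benefit = ale * risk_reduction - control_cost
    print(f"net benefit: {net_benefit:,.0f} EUR/year")  # 5,000 EUR/year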
The difference is the scale. While you may have one burglar try to break in, in cyberspace you could have thousands of state-sponsored hackers trying to break in.
A burglar needs to break in quickly, otherwise they risk getting caught. Hackers almost never get caught. There is minimal risk and high reward.
I still blame the company in the second scenario. Pay a multiple for a secure setup or don't store data, even if that means funding new development when no secure solutions exist. I would like people to take user data so seriously that they would go so far as to develop a new operating system to securely handle it. That should be the burden we put on companies that want to collect data on people.
I think there's a strong incentive for a lot of small-business people and software engineers alike to wholly blame attackers. If it's the attacker's fault, you don't have to wonder if your insurance is good enough. You don't have to examine whether you keep your software sufficiently patched. You don't have to examine whether your company's custom internal infrastructure is resilient or if it's one giant shared CIFS drive full of sensitive customer data without backups.
Often, taking security seriously feels like directing a certain amount of resources for uncertain returns at a domain that feels like it should come for free. Software engineering feels like manufacturing, where you produce artifacts and ship them. It's jarring to recontextualize it as actively engaging in an adversarial, human-driven domain.
Between the two, our fellow users are heavily incentivized to find ways that they and people like them are blameless. It's a way to avoid engaging with what can feel like an impossible problem. Without attackers there wouldn't be any cybersecurity issues, right?
Well, just recently we had a story where a company was taken to task for how it implemented a cyber security test: using an email that promised bonus money or some such.
Such is the issue at hand: the attackers know no bounds, and it will take coordination among governments to track them down and hold them or their masters accountable.
This does not excuse the victims of such attacks, but even the best efforts of many can be circumvented by the latest method, a careless employee, or even a malicious one.
I am sure many have experienced having access they routinely expected get yanked, which felt unfair, but have also been on the other side of the issue, trying to lock down users only to get pushback that we went too far. The heartache our support team got from locking down what users could do on their desktops could fill novels.
>There is little deterrence to would-be attackers, especially state-sponsored attackers
At least with state-sponsored attacks there is theoretically the option of striking back, although speaking as a German citizen, we've sadly neglected both our defensive and offensive capacities, and the national infrastructure is simply not up to the task.
That's not 'sadly', that is a direct consequence of the aftermath of World War II when the German offensive capability was purposefully reduced.
And to be fair: this is what allowed Germany to quickly re-emerge as the economic powerhouse of Europe without having to spend a fortune on defense. With the Marshall Plan plus a lot of knowledge about electronics and mechanization, what may look like a disadvantage to you today was historically a huge advantage.
tbh I don't think that's a reason any more. It's not like anyone goes "oh no, the Germans are at it again" if we could actually repel and deter Russian or Iranian attacks in cyberspace. In fact our allies have been asking us to build up our capacities, both analog and digital, for a long time now. It's much more mundane: politics just doesn't care. I know a few people who went to do IT careers in the military, and it's just bad on all fronts. Pay is bad, they recruit the absolute bottom of the barrel, the resources aren't there, and the infrastructure is neglected.
In the US or Israel a lot of highly qualified people go into military service and it's a priority, here it's a third wheel.
I'm of two minds on this. On the one hand I realize that with Russia still very much a presence on the stage (though vastly diminished from the 80's) there will always be a need for vigilance.
On the other hand: Europe has come a long way through peace, much further than it ever did because of war. To see how much of the American budget goes to weaponry and to see what terrible shape the country overall is in is a good warning that a country that gives high priority to its military will have to take that money from other places. Education, healthcare, infrastructure. All of these will suffer if the military takes too large a part of the budget. And once you have such an MIC you can't just get rid of it, after all, those people employed there would like to continue to be employed. And so the beast starts to feed on itself, and before you know it your countrymen and women are going to be dying in wars of conquest because what's the point of having all those toys if you can never use them?
So, it's a dilemma, and not one that is easily resolved. But the fact is that Germany and Europe as a consequence got much further because of peace than it ever did because of war. Let's hope it stays that way, the alternative is not a pretty one.
I think there's a legal analogy to be made with vehicular liability insurance. By choosing to operate a car, you're putting others at some small risk, and you're therefore required to hedge against the scenario where that impacts someone else.
It's annoying how "cyber attacks" are still treated like some kind of natural disaster, instead of calling them by what they are: the result of negligence, both by vendors who sell products that are insecure by default, and customers who fail to invest in proper security measures/mitigations.
Avoiding ransomware is not rocket science.
Ransomware attacks are loud and noisy - imagine the amount of stealthy attacks we don't hear about.
If you leave the door of your home open and I can just enter and walk out with your TV, it's negligence on your side and your insurance most likely won't cover you for the damages. But I'm still the felon here, and your negligence will not absolve me of the charges I will face, since I'm not allowed by law to enter and take private property even if it's unsecured.
The difference is that the TV cannot be used to harm anyone else, unlike computing equipment, which can be (and is) used to attack others.
In your example, let's replace the TV with a weapon, and I bet you will still be found liable for not taking appropriate precautions to secure your weapon, like keeping it in a locked safe.
Similarly, most online crime uses lots of compromised machines as its infrastructure. Making their owners/operators liable would actually add an incentive to not buy/operate insecure devices which in turn would put pressure on manufacturers to make secure products, otherwise customers will be afraid to buy them if it can get them in legal trouble.
I just wanted to +1 the validity of the liability claim. At least where I live, it is absolutely the case[1] that you are liable for crimes committed with a gun you did not properly secure.
Or closer to infosec, I think: I bought a house and it had 'cheap' locks that anyone with 10 minutes and a lockpick can open. Sure, maybe I should have considered some harder-core stuff, but in the end that's what I have, and what is practical for 100 good reasons. In the same way, corporate IT can only be so secure. So no one is really leaving the door open; it's just that with enough effort you can break any lock.
Which puts this further on the attacker. The attacker can't claim that I should have nicer things and he couldn't help himself. There's way too much victim blaming in information security.
The norm in plenty of places where there is a big difference between rich and poor both in degree and in number is that the rich will have private security. Locks are really not going to be worth much as protection at all in those situations.
> it's negligence on your side and your insurance most likely won't cover you for the damages
Fun fact: this isn’t true in the US. For example: you can leave your car unlocked and with keys in it, and if it’s stolen you’ll still be covered by insurance.
Security incidents, viewed generally, are the conjunction of two factors: threats and vulnerabilities. It is a massive over-generalization to view all cyber attacks through a lens that focuses only on one factor.
Sure, there might have been vulnerabilities in the software or processes of Funke group. But it's a dishonest analysis that ignores the threat actor (a criminal in this case - let's not mince words) who exploits the vulnerability.
I mean, it's tricky to draw an analogy between the digital and physical worlds, but I do find it interesting: there is a whole range of possibilities here, and at what point do you stop blaming the victim?
1- The front door was left wide open
2- The front door was closed, but left unlocked.
3- The back door was left unlocked.
4- All doors were locked, but with an easily picked lock.
5- All doors were locked and with a legit key, but a window was left closed, but not locked.
6- All windows and doors were locked, but the windows were easily broken.
7- The windows were reinforced, but a door was broken down or someone drilled the locks, or just forced the window until the lock failed.
8- The doors and window locks were reinforced, but they pried up the garage door and broke through the flimsy door between the house and garage.
9- The exterior entryways were all truly secured, but they found a way in through a chimney, a sewer pipe, or just sawz-alled through the exterior siding...
At what point do you stop calling the resident an idiot and start blaming the perpetrator? Personally I would say around #4 in the meatspace world, but at 2 or 3 I would say they are still being victimized, depending on where they live and whether they routinely left the door unlocked or not.
And then in the digital world, there are things like not setting a password or leaving a default one up, but what about patching known vulnerabilities? How long is too long once a patch is made available? What about zero days, and vectors of attack no one thought of... and maybe line employees who just don't know better?
Just in general, people are very quick to blame the victims in situations like this. I just had an argument about it with someone the other day when I heard the news that a neighbor's bike was stolen, from outside their door on the third floor. It was an exterior door/stairway, but not visible from the street, and the bike was locked, albeit with a cheap lock. The thieves brought bolt cutters and took the bike. People were like "this is on you for using that cheap lock..." and many had no sympathy for the guy, but I think he took reasonable precautions. Personally I can't wrap my head around that mindset.
"Victim blaming" implies that the parent poster is saying that Funke would be responsible for the hackers' actions, that they somehow stuck their necks out and invited an attack. That's not the case.
The parent poster is saying Funke and/or their software vendors are guilty of negligence. You, too, even go on to admit there might have been vulnerabilities in the software or processes of Funke group.
The parent poster is not the one being dishonest...
I read it as blaming the suppliers who don't provide their customers with better tools. Door locks are only modestly secure in absolute terms but it takes some effort or skill to defeat one.
I could probably pick an average lock after only a few hours of training. It would probably take me a good few years to get to the point where I would be able to run a successful hack.
Yeah, but could you do it with confidence on the street, e.g. break into a commercial building? Also, I don't think it's that difficult to run a successful hack; admittedly I have many, many years of computer experience to draw on, but conversely modern tools make it unnecessary to reinvent the wheel.
I am sorry, but a term invented for the purpose of dealing with rape culture is not generalisable to a completely different niche. In this case the victim is the one paying huge bonuses to execs who decided that security is less important than their salaries. And other people are suffering for it.
There has to exist a difference between victim-blaming and people screwing up at their jobs.
In the case of the SolarWinds attack, their management seemed to be quite technical and the attacker was just too strong/sophisticated, so SolarWinds is considered a victim.
But in the case of Equifax, they're considered at fault because their head of security seemed unqualified for the job, and the actual victims were the millions of users whose private data was put at risk of abuse.
The probability in this case is very high that the upper management at Funke was quite incompetent in how they digitized their operations.
Are Hackers criminals? Yes, because the law says they are. Does that matter? Not much outside the realm of law.
I don't think hacking/cracking should be illegal. If I send you a series of electrical pulses and that causes your systems to do something stupid, that's your problem. You are not a "victim".
Laws against physical violence and against stealing physical objects have good reasons to exist: otherwise there would be no point in having any moral conception at all, since everyone would solve their problems through violence.
On the other hand, sending messages through a wire doesn't have good reasons to be outlawed. We could live peacefully sending each other all sorts of bytes, and each entity would make an effort for their own systems to behave correctly.
Most people disagree with me, but their abstraction is leaking, because even they agree with me on a fundamental level, they just don't know it. The moral discussion is so disconnected from concrete actions and formal definitions that it devolves into dumb analogies like "but if you left your door open blah blah".
Having a "right not to be hacked" is of the same nature as having a "right not to be offended".
"[A] series of electrical pulses and that causes your systems to do something stupid" is a hell of a euphemism for e.g. cyber-bullying resulting in homicide and/or suicide; infrastructure attacks; wire fraud and theft; blackmail; conspiracy; etc. Your definition could even cover a remotely detonated bomb.
It's like saying no, I didn't shoot that person, I just gave momentum to a bullet that caused their body to do something stupid and they died.
> If I send you a series of electrical pulses and that causes your systems to do something stupid, that's your problem.
I think these reductionist arguments are naive. It is the information and intent that matters. Many times, racist and sexist harassment only consists of “making pressure changes in the air with vocal cords.” However, those pressure waves have meaning (words) and there is intent behind them, so we condemn that behavior.
Come on, man. Are you 15? Hacking in the spirit of curiosity & experimentation shouldn't be illegal. Hacking to destroy computers, bankrupt companies, and dox journalists should certainly be.
> Are Hackers criminals? Yes, because the law says they are.
You seem to be implying that "hackers are bad because the law says they are".
No, crackers, specifically, are bad because they act in bad faith and they can cause serious problems for people and organizations. We, as a society, codified that into law. If there were no laws, they would still be "bad".
> Does that matter? Not much outside the realm of law.
Yes, it does matter. The law was made for a reason. If it didn't matter, we wouldn't have the law in the first place. You might disagree with the reason the law was made, but even if you do, it mattered for the majority of people, so you're in the minority (not in an "underdog" kind of way, but in the "the earth is flat" kind of way of being a minority).
> If I send you a series of electrical pulses and that causes your systems to do something stupid, that's your problem.
Yes, but that's not where the story ends. It's also my family's problem. It may be my friends' problem as well. It might become a problem for all my contacts. It might even be a problem for my company. It might be a problem for the suppliers of my company. It might be a problem for their clients too. It may even become YOUR problem.
That's just thinking of 1 person. What if, by some miracle, the US Treasury gets hacked? Or the DHS? 328 million people have a new problem now.
So we collectively got together and said that my problem could really quickly become everyone's problem, and we started addressing it by making laws that say "hacking other people is bad".
Then you come around and you're like "boo hoo I don't think causing problems for other people is a big deal. They should just deal with it". Well, you're wrong, sir, and I hope you see why you also come across as a bit of an asshole.
> You are not a "victim"
Yes, you are. By definition of the word "victim".
> Most people disagree with me
Well there's your first clue, Sherlock.
> Most people disagree with me, but their abstraction is leaking, because even they agree with me on a fundamental level, they just don't know it
What does a leaky abstraction have to do with people disagreeing with you, but then somehow unknowingly agreeing with you on some fundamental level?
> I don't think hacking/cracking should be illegal. If I send you a series of electrical pulses and that causes your systems to do something stupid, that's your problem. You are not a "victim".
'I don't think assault should be illegal. If I send a series of simple kinetic impacts to someone's unprotected body that causes their organs to do something stupid, that's their problem. They are not a "victim."'
Genuine question: I can understand your reasoning regarding physical violence, but why would you include stealing physical objects, if there was no physical violence or threat of it during the theft?
In my opinion there are good reasons for physical property to exist (individual freedom), but avoiding violence is not the best theoretical argument for it.
Is physical property against freedom or in favor of freedom? I think both, depending on how you look at it. I've thought about this a lot, and tend to fall on the side of proprietarianism, because I think property is absolutely necessary for the fulfillment of human needs and desires.
You would absolutely authorize violence by proxy if someone stole your stuff, and if you wanted to cash in your insurance policy it would be required in the form of a police report.
> But it's a dishonest analysis that ignores the threat actor (a criminal in this case - let's not mince words) who exploits the vulnerability.
If I leave my ground floor window open with cash sitting just inside and someone reaches in and steals it, that's my fault. It's dishonest to suggest that dishonest actors don't exist or are anything other than a fact of life/force of human nature.
As such they don't bear much discussion - I left the window open. That's on me.
Not all exploits are due to negligence, or at least not gross negligence. Sure, something may have been forgotten (although the present article says nothing about this); systems are complex, and as the "defender" you only have to make one mistake. But your "it's on me" is very much victim blaming, unless you can show a pattern of intentional ignorance or malfeasance within a network.
I can get in your house even if you close the window. It's a window and can be opened. Are you still to blame? I can look for money even if you hide it; I'll search around. Are you still to blame? You could have taken the precaution of not keeping money in the house, or at least not living on the ground floor if you do.
Many attacks are 0-days, many others are supply-chain attacks, and many exploit the weakest of links, the human. There's a limit to how much you can protect yourself from a sudden attack before the protections themselves start causing you harm, slowly but continuously. No matter how many locks you put on, you're not guaranteeing anything. It's a calculated risk, and sometimes the calculation doesn't pay off.
A crime being committed doesn't mean businesses aren't liable for the results of their own negligence. If a business does not provide reasonable means for dealing with fraud, it is responsible before the law for the results of that fraud.
The moral fault is always on the perpetrator. Always.
But in a world that contains muggers, and that you know contains muggers, walking around the bad part of town with cash visibly hanging out of your pocket is not very wise. It's not a moral failure, but it is a failure of common sense and reasonable prudence.
_always_ finding fault with the one who commits the crime is as pointless as finding fault with a hurricane for destroying your property, or the sea for drowning you without an oxygen tank.
In the case of murder, sure. But I'm with you. But bad actors (in the form of petty theft, exploitation, confidence tricks, etc) are a fact of life. Failing to account for them, just as failing to account for hurricanes if you live in hurricane alley, is completely on you. You can reduce that down to a blanket "but you're just victim shaming!" if you'd like, but it doesn't make it any less true.
Compromising a single endpoint via a malware e-mail / link is just the first step of such an attack; you then need to move laterally in the network to get access to the important bits of the infrastructure. For Windows networks this often involves breaking into the Active Directory / domain controller, which then gives you administrative access and makes it possible to e.g. encrypt data. This usually requires manual enumeration and specific exploits; it's not something that can be fully automated yet.
So you're describing just the first link in a long attack chain. Eliminating that weak link doesn't mean the attackers can't gain access to the network, it just means they have to find another entrypoint, and in a real-world network where people need to regularly click on links and open untrusted documents it's notoriously difficult to plug all holes.
Even the largest DAX corporations in Germany regularly become victims of cyber attacks, and trust me they spend a ton of money on cyber security and have thousands of people working on it.
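For anyone curious what catching that lateral movement can look like from the defender's side, here is a deliberately toy heuristic: flag an account that authenticates to unusually many distinct hosts in a short window. The events, window, and threshold are all invented; a real detection would parse actual auth logs (e.g. Windows logon events) and tune per environment:

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Toy authentication events: (timestamp, account, destination host).
    events = [
        (datetime(2020, 12, 22, 3, 0), "svc_backup", "DC01"),
        (datetime(2020, 12, 22, 3, 1), "svc_backup", "FILE01"),
        (datetime(2020, 12, 22, 3, 2), "svc_backup", "FILE02"),
        (datetime(2020, 12, 22, 3, 3), "svc_backup", "SQL01"),
        (datetime(2020, 12, 22, 9, 0), "alice", "FILE01"),
    ]

    WINDOW = timedelta(minutes=30)  # look-back window, arbitrary for the sketch
    MAX_FANOUT = 3                  # distinct hosts per account before alerting

    def fanout_alerts(events):
        """Flag accounts hitting more than MAX_FANOUT distinct hosts within WINDOW."""
        by_account = defaultdict(list)
        for ts, account, host in sorted(events):
            by_account[account].append((ts, host))
        alerts = []
        for account, hits in by_account.items():
            for start_ts, _ in hits:
                hosts = {h for t, h in hits if start_ts <= t <= start_ts + WINDOW}
                if len(hosts) > MAX_FANOUT:
                    alerts.append((account, sorted(hosts)))
                    break
        return alerts

    for account, hosts in fanout_alerts(events):
        print(f"possible lateral movement: {account} -> {', '.join(hosts)}")

The point isn't this specific rule; it's that defenders can only catch the enumeration phase if it is noisier than normal admin activity, which is exactly what attackers work to avoid.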
Failing to mitigate attacks is negligence, sure, but the ownership of the consequences of such attacks must be primarily shared by those who would create and initiate malicious attacks.
Ironically, the criminals are likely doing many of their victims an expensive favor by forcing them to improve security, preventing compromise for more sinister purposes such as industrial espionage.
Well, negligence is involved, but that isn't what these attacks _are_, unless burglary is "having unsecured entrances" or murder is "having vital organs under too little bone and gristle".
Would you call getting mugged on the street negligence? Even if you walk into a bad neighborhood? People might say use your head more next time, but no one blames the person who was mugged. Criminality is just that.
There's a big difference between a helpless pedestrian walking down the street and a wealthy corporation abdicating their security responsibilities.
Of course the criminal should ultimately be held accountable, but to be honest, when these kinds of hacks occur I tend to direct more rage toward the corporation. The analogy I picture is more like an overloaded train crossing the same creaky and neglected bridge every day.
> People might say use your head more next time, but no one blames the person who was mugged.
They do, quite often. Same is true for rape victims. It may even be that weak-seeming people are more likely to be blamed as victims, because they serve as an uncomfortable reminder of danger and have already shown that they can be defeated. Think of negative feedback loops where people start to avoid someone they perceive as being unlucky, thus leading to poorer outcomes in future.
Reading some comments that blame the victim of the cyber attack reminds me of something I thought a couple of days ago while I was cursing bureaucracy and taxes: it's amazing how in IT we take the bureaucrat's stance. We develop impossible-to-understand, overcomplicated systems and then we blame the users of those systems, who are usually forced to use them, for not taking into account every possible detail. You can substitute "you should know the law" with RTFM and a similar attitude.
I'm not sure what the conclusion of all this is. Maybe bureaucrats are as good as developers, maybe developers are as bad as bureaucrats, maybe people should do what I did and pay specialists.
First Funke buys & eats most of the regional news outlets, then it's paralyzed due to some random "cyber attack" after failing to implement a decently secure IT infrastructure for 6000 workers who depend on it now. Why am I not surprised.
Governments should enact legislation making the payment of a cybersecurity ransom a criminal offense with serious penalties. Organizations would then be faced with the choice of paying the ransom or facing time in jail - an uneasy decision. However, the effect would be to reduce the likelihood of ransom payment, thereby reducing the incentive to launch such an attack in the first place.
That would spur generation of creative paperwork like fake "support contracts" or "licensing" deals.
I think the best solution was, IIRC, in Italy, where the whole families of people kidnapped by the mafia had their assets frozen. It made ransom unattractive. So in this case a company could apply to have all its funds frozen in exchange for evidence. That would mostly absolve the management from the ethical dilemma, punish those responsible for negligent IT, and make ransoms less attractive.
How much of these attacks could be avoided by moving away from Microsoft Windows and EXE execution model? What is the cost of going back to thin clients?
> How much of these attacks could be avoided by moving away from Microsoft Windows and EXE execution model? What is the cost of going back to thin clients?
I bet the great majority of them.
> "It's actually not that sophisticated," said Christian Beyer of the German company Securepoint. "You open the Word document, the document contains a macro, and the macro downloads the malware from the internet." A macro is a kind of shorthand instruction for the computer.
Why the F does a text processor need to execute anything in the first place when reading a document? This is madness.
And yes, the solution is, on top of every possible IT security practice, and linux everywhere, to make enterprise computers dumb terminals with the help of browsers and web tech. Nothing else should be executed on them. The people who refuse to unlearn MS Office and co., well, too bad...
> Why the F does a text processor need to execute anything in the first place when reading a document? This is madness.
If you're stuck on Windows, make normal.dotm readonly.
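In case it's useful, here's a sketch of both mitigations scripted in Python. The registry path assumes Office 16.0, and VBAWarnings = 4 is my understanding of "disable all macros without notification"; verify both against your Office version before relying on this:

    import os
    import stat
    import winreg  # Windows-only standard-library module

    # 1. Disable all Word macros without notification (VBAWarnings = 4).
    #    Path assumes Office 16.0; other versions use a different segment.
    key_path = r"Software\Microsoft\Office\16.0\Word\Security"
    with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, key_path) as key:
        winreg.SetValueEx(key, "VBAWarnings", 0, winreg.REG_DWORD, 4)

    # 2. Make the global template read-only so malicious macros
    #    can't persist themselves in it.
    template = os.path.expandvars(r"%APPDATA%\Microsoft\Templates\Normal.dotm")
    if os.path.exists(template):
        os.chmod(template, stat.S_IREAD)  # sets the read-only attribute on Windows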
> And yes, the solution is, on top of every possible IT security practice, and linux everywhere ..
To keep people stuck on Windows, the Microsoft mantra used to be "our stuff works better with our stuff." Now that most OS functionality is moving to the msCloud, you'll be paying the yearly software rent to Microsoft regardless of the OS.
And of course China and Russia are mentioned in the comments. Propaganda works. Repeat something often enough and people will internalize it and assume it’s true even when no proof is provided.