Maersk IT systems are down
We can confirm that Maersk IT systems are down across multiple sites
and business units due to a cyber attack. We continue to assess the
situation. The safety of our employees, our operations and customers'
business is our top priority. We will update when we have more information.[1]
Maersk is the largest shipping company in the world: around 600 ships, with space for 3.8 million TEU of containers. (The usual 40-foot container counts as two TEUs.) If this outage lasts more than a few hours, port operations worldwide will be disrupted.
The web sites that are supposed to give APM port status are frozen. It appears that many (all?) APM terminals worldwide are not accepting incoming trucks. Unclear whether ships are being unloaded.
There's surprisingly little info about this from the actual ports.
Even Twitter output has become so PR-controlled that nobody involved is getting important information out. APM, Maersk, and the Port of Los Angeles all have Twitter feeds, and none of them have any useful info about this. Even the Port of Los Angeles Police have nothing.
The Port Authority of New York and New Jersey has a clue. Their alerts feed has useful info.[1]
6/27/2017 4:30:08 PM
APM closed 6/28 & plan to open 6/29 6:00 am,
gate hours to 7:00 pm (cut off) 6/29 thru 7/7.
Free-time will be extended 2 days due to service impact.
(The free time extension means customers have two extra days to
bring back their empties before being charged.)
6/27/2017 1:14:23 PM
Due to extent of system impact, APM Terminals will not be opening
for the remainder of the day. Updates on tomorrow's status to follow.
6/27/2017 9:12:22 AM
APM is still experiencing system issues. Please delay arrivals.
6/27/2017 8:58:03 AM
APM Terminals is still experiencing system issues. Please
delay arrival until further notice. Updates will follow.
6/27/2017 7:53:09 AM
APM Terminals is experiencing system issues and working to
restore. Please delay arrival.
Whoever is posting those seems to be the one person on the planet sending out useful info about this. The biggest container terminal on the East Coast is closed today and tomorrow.
As of 6:30 Tues. 6/27, APM Terminals employees are still without email or office telephone services. No emails or voicemails can be accessed or answered. Please standby for PA Alerts or for critical matters please contact Giovanni Antonuccio (908) 966 - 2779.
That's bad. Maersk hasn't been communicating with the shipping industry. Journal of Commerce says nobody is getting useful info about Maersk's status.[1] Now we have a hint as to why - they can't even communicate internally.
The Maersk site still has nothing but a statement that they are down. Maersk's Twitter feed has nothing useful. No press releases. The only useful comments are coming from non-Maersk port employees.
Maersk Line's login site for customers is down, with a message saying their systems are down.[1] APM Terminals, their business unit which runs ports, has their web site down with a 500 error.[2]
* Los Angeles APM container terminal shut down for today according to press report.[3] No mention of this on APM web site.[4]
* Port Elizabeth (NJ) APM container terminal is down for incoming trucks, according to Port Authority of NY and NJ site.[5] No mention of this on APM web site for the port, so apparently APM web site updates have stopped.
The article says the ransomware affects even patched Windows boxes. Perhaps what you mean to say is, "Great. Maybe we can finally put a price on using Windows."
What you are suggesting has grave implications for those who cannot, or do not want to, deal with formal verification techniques. This is the future of computing, and a lot of people will be left behind once it catches on.
Are you sure about that? You do know most organizations will implement that as a huge amount of bureaucracy for every commit, rather than proper man-hours of security-oriented development.
Only because most organizations don't know how to be effective at security.
It's not hard. You don't actually have to change much. You just have to schedule regular pentests, ideally every couple of weeks.
Pentests protect everyone because it's our job to worry about all of the security flaws that you can't possibly be aware of in your normal day-to-day development cycle. There's just too much for any organization to know about except security companies. This way you can focus on development and we can focus on pointing out how to fix what's broken.
Pentests aren't a magic bullet either. You can easily find a consultant who isn't going to rip you a new one.
Security is a mindset. Any "checklist" approach will eventually devolve into ass-covering by an organization that is not internally motivated to run a tight ship. Legitimate variances will be hassled to no end, while actual security vulnerabilities will be ignored.
In the real world, one of the only reasons people get pentests is because another company is forcing them to. That results in a document saying company B is secure.
This is a very effective approach to cutting through ass-covering. Company B has to fix the security problems uncovered in the pentest. There is no other option. And I've seen it take products from "SQL injection by typing an apostrophe" to "it'd be very difficult to exploit this app."
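For readers unfamiliar with the "typing an apostrophe" failure mode mentioned above, here is a minimal, hypothetical sketch (Python's built-in sqlite3; the table and input are invented for illustration) of how string-built SQL breaks on an apostrophe while a parameterized query does not:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('O''Brien')")  # '' escapes ' in SQL

user_input = "O'Brien"  # any input containing an apostrophe

# Vulnerable: interpolating input straight into the SQL string. The lone
# apostrophe breaks the statement (the "typing an apostrophe" symptom),
# and crafted input could inject arbitrary SQL.
try:
    conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")
except sqlite3.OperationalError:
    print("string-built query broke on the apostrophe")

# Fixed: a parameterized query, where the driver handles escaping.
rows = conn.execute("SELECT * FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(rows)
```

Moving a codebase from the first form to the second is roughly the kind of jump from "broken by an apostrophe" to "very difficult to exploit" that the parent describes.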
If that's not proof that pentests are effective, then I'm not sure what would be.
We like to say that security is a mindset, but developers have way too much on their mind to be aware of every possible security vector. It's easier and more effective to punt and let us worry about it instead.
There are different levels of penetration testing too. I worked at a SaaS startup, and when we got our first big customer they demanded we have a third party run a pen test on us. They basically ran their script and gave us a report. There might have been some minimal back and forth about false positives, but that was about it. That's better than nothing, but may not be what some of the more technically/security-minded folks here would consider a real pen test.
It's exactly the same as physical security. You build fences and buy locks. You pay people to keep an eye on things. You take insurance to cover the rest of the risk.
Nothing hard, no new inventions required. It just takes some attention and cash. It's part of the cost of being in business.
Wait, the hardness of information security comes from the fact that it has to be built in everywhere: everything is connected, so everything is a potential attack surface.
It's not impossible but it requires a somewhat universal attitude change.
I want to agree with you in principle, but in practice it's not possible to be secure with just an attitude change. The attack surfaces have grown too large. Keeping track of all possible vectors is a full-time job in itself. You either need a dedicated security person or regular pentests. And honestly, regular pentests are probably more effective.
It's a positive statement though: it is possible to be constantly secure if you just get a pentest every few weeks. Big companies can even afford to make it a requirement of their release cycle.
> Big companies can even afford to make it a requirement of their release cycle.
Oh man. I have a peer who works for a very large international company. They require pentests in their release cycle. What could go wrong?
Turns out that pentesting isn't in the final portion of their release. They tag a release candidate (e.g. v5.7.0-rc), send that build to the pentesters, then fix other integration and user-acceptance bugs while the pentesters are working. The pentesters may greenlight v5.7.0-rc when it's really v5.7.3-rc that's shipping, and the pentesters are none the wiser.
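One hedged way to close that gap (a sketch only, not any real pipeline's API; the function names, version strings, and report references below are all invented) is to gate the release on the digest of the exact artifact the pentesters signed off on:

```python
import hashlib

def digest(artifact: bytes) -> str:
    """Content hash of a build artifact."""
    return hashlib.sha256(artifact).hexdigest()

pentest_signoffs = {}  # artifact digest -> pentest report reference

def approve(artifact: bytes, report: str) -> None:
    """Record that this exact build was pentested."""
    pentest_signoffs[digest(artifact)] = report

def releasable(artifact: bytes) -> bool:
    # Ship only bit-identical builds that a pentester approved.
    return digest(artifact) in pentest_signoffs

tested_build = b"build v5.7.0-rc"
shipped_build = b"build v5.7.3-rc"  # fixes landed after the pentest

approve(tested_build, "pentest report 1234")
print(releasable(tested_build))   # True
print(releasable(shipped_build))  # False: not the build that was tested
```

The point is simply that the sign-off is tied to the bits, not the version label, so a v5.7.3-rc cannot silently ride on v5.7.0-rc's pentest.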
Attitude change in the sense of not being willing to allow inherently insecure architectures: management always moving the company toward secure-on-principle architectures. (I'm not qualified to say whether it's a good example, but Google's BeyondCorp is an attempt to make everything secure on principle, meaning not leaky on principle.) That, added to any pentesting or other necessary immediate security measures.
The impression I have is that today's event was the result of a lot of companies allowing insecure-on-principle architectures: a zillion apps, each with its own update structure (a random Ukrainian enterprise app supplier gets penetrated and the whole world goes down). A pentester might never find that vector until the app supplier leaves their door open, or until someone finds out about them, for example.
And people skilled at picking the skilled people and a willingness to actually do what the skilled people say... when those skilled people aren't necessarily the same as the managers shouting managementese...
And this also collides with the willingness to do anything to save a couple of dollars; once that dictate isn't flowing through every ounce of the company's blood, who knows what will happen.
Pen-tests show the presence of vulnerabilities, not their absence.
To make secure systems, we need to take the (very) difficult road of building our systems bottom-up, proving the absence of vulnerabilities and defining the boundaries of safe operation.
What I really want to see is security being integrated into the development process as a conscious tradeoff teams have to make.
When a new feature is proposed, it's rare to hear someone object on the grounds that it could potentially add new vulnerabilities, but in the long run an approach that recognizes and considers those risks would be beneficial.
At the same time, this is incredibly hard to do - managers celebrate employees who develop things that look cool and awesome, not employees who can mitigate risk and manage security effectively (hopefully this changes, but I can't imagine that many unaffected CEOs are calling up their sysadmins right now and congratulating them on their diligence in making sure all their machines are patched).
Definitely a problem. People (incorrectly) conflate vulnerability scanning with pen testing. Vuln scanning is often a component of a pen test, but we do a bad job of explaining the distinction. A pen test should attempt to use the app(s), and maybe test the people and process, not just profile the software versions and complain that they are out of date or misconfigured.
This afternoon I was sitting next to a Maersk employee when people walked in with bricked laptops. This person didn't believe it at first (with all the fake news these days), so he tried to get it verified through some former colleagues. One minute later his laptop wasn't working anymore. He was lucky: his laptop was synced with a corporate OneDrive subscription, so he can continue from home on his personal iMac.
Externals and people with a MacBook could continue working.
Some departments requested personnel to stay home tomorrow.
Mail also seems to be down, although I don't understand why, as it is hosted on outlook.com.
I gotta say, I really like that I managed to get my own, snowflake, self-managed linux notebook at my place of work.
I mean, all of IT can access the box with the password to the vault I gave them. That's just the right thing to do. But no one touches or updates my fortress of last hope but me, from a local shell.
They made themselves fragile to this attack. It was completely gratuitous.
They are large enough to chart their own destiny and critical enough to care deeply about it ... and they built on top of cutesy new versions of Windows that everyone knows are garbage.
How does that old saying go ?
"Fool me once, shame on you. Fool me a multitude of times, in varying circumstances, over and over and over again for two fucking decades, shame on me."
By the looks of it, it will be down for several hours, hopefully. And sorry if this sounds wrong, but that's actually a good thing. Only with real damage like this might security be taken seriously.
The parallels with the "Daemon" in Daniel Suarez's novel are scary.
small spoiler ahead
This Daemon is an AI that holds big companies' data hostage: it will destroy all of a company's data if the company does not pay protection money, or if the company involves law enforcement.
Because a lot of companies in the novel don't stick to the AI's rules, these companies go down with the exact same symptoms as Maersk is now having:
- unable to do business
- unclear what happened
- declining stock prices
It's always seemed like the best way to end ransomware is to launch hundreds of variants that demand money but don't actually decrypt anything. Unethical, to be sure, but eventually people would learn not to give them money.
All the competent ransomware authors are probably quite unhappy whenever a defective ransomware strain pops up.
If it gets big enough, then people just hear from each other whether paying unlocks the data or not.
The best way to end ransomware is to get serious about security. In many cases, being hit by ransomware means paying a low price compared to what a targeted attack would cost.
edit: Also, I imagine it gets easier after you've written one, i.e. many ransomware strains come from the same author. So he could gain a reputation by signing messages saying: yes, this is our ransomware, and we always unlock after receiving the payment.
>If it gets big enough, then people just hear from each other whether paying unlocks the data or not.
The idea would be to create "fake" ransomware that looks exactly like the real one
>The best way to end ransomware is to get serious about security
No matter how serious you get, there are always going to be bugs; there isn't a single piece of mass-distributed software in human history without them. That said, we should try to improve the security of software, but expecting that to be THE solution is wrong.
>Also, I imagine it gets easier after you've written one, i.e. many ransomware strains come from the same author. So he could gain a reputation by signing messages saying: yes, this is our ransomware, and we always unlock after receiving the payment.
If forging digital signatures is not that hard, then you can release a great scientific paper moving crypto decades ahead, or alternatively you can make billions.
I was talking about pixel-made signatures; you know, the ones the user actually sees when the computer is already infected, not public/private key cryptography. Otherwise it's a chicken-and-egg problem: how do you know which signature is the "real" one? Do you google it and hope nobody gamed the search results? Go to the official website of the ransomware developer?
The ransomware can present the key fingerprint for example.
But even without it, there are so many options, e.g. timestamp signed message on the blockchain before the release. After just one confirmed message you don't care about pretenders because people can check if the signature matches with the previous message.
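The timestamped-commitment idea above can be sketched in a few lines (the messages here are invented for illustration; in practice the commitment would be published in, say, a blockchain transaction whose timestamp anyone can check):

```python
import hashlib

# Published before release, e.g. embedded in a timestamped transaction.
message = b"this strain is ours; paying always decrypts"
commitment = hashlib.sha256(message).hexdigest()

def matches(revealed: bytes, published: str) -> bool:
    # Anyone can verify a later revelation against the earlier commitment,
    # with no need to trust search results or lookalike ransom notes.
    return hashlib.sha256(revealed).hexdigest() == published

print(matches(message, commitment))            # True
print(matches(b"imposter claim", commitment))  # False
```

Once a victim community has verified one such message, pretenders cannot retroactively produce data matching the earlier published hash.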
It's a marketing issue. People likely to get hit with ransomware are incredibly unlikely to understand what that means. Hell, even main devs have trouble writing contracts, so even if a user knew there was a smart contract, verifying it would be another thing. So it'd get reduced to "guys on Twitter said this one works".
Since you can't store the private key needed to decrypt the files in ethereum, I can't think of how to do this.
All blockchain state is public, since it needs to be calculated by and verified by all nodes, so there's nowhere to stash a private key without revealing it.
A non-strawman analogy would be selling fake heroin that looks exactly like heroin but does nothing at all. This analogy is more exact because the resource being wasted is the same: money (not lives, as in yours).
Interesting. If I were Posteo, I don't think I would have been so quick to ban the email address; this will potentially cause a lot of harm. What about all the people who need their data back? They have no way to get it now. Plus, many people are still going to send the money, only to get no response from the email.
It will cause a major headache for those who pay and will hopefully make people learn to distrust ransomware, in turn making it less lucrative.
On the other hand, that requires a fair number of "acceptable casualties" so to speak.
I personally think both sides of this are valid and don't know what the best option really is. It will be interesting to watch how things evolve at least.
>will hopefully make people learn to distrust ransomware, in turn making it less lucrative.
Ransomware will never ever not be lucrative. Preventing people from getting their data back doesn't discourage future campaigns and primarily hurts the victims of the ransomware.
Seriously? The whole idea is so fundamentally stupid.
1) Ransomware authors have obvious economic incentive to decrypt, and no reason not to. This makes it a herculean task to convince the general public that they wouldn't do so.
2) By the time your data is encrypted, you'll be researching your specific ransomware strain and will find out if it's legit or not. Googling the onion address is an obvious choice and something the ransomware author can just tell you to do.
3) Most people will need someone more technical to arrange the bitcoin payment anyway, these people will verify if the ransomware seems to be legit or not.
4) People don't magically get smarter, phishing still works if you pass the spam filters.
5) Winlockers were immensely lucrative even before they started using crypto.
6) Unless you're going to run your fake-ransomware campaign at an immense scale you'll never drown out the real, working ransomware.
And then in the end, what was your goal anyway? Good job, now you've deleted millions of people's data on a misguided mission to "stop ransomware". But hey, at least you stopped those evil Russians!!!
There are precisely zero good arguments for preventing people from decrypting their data.
> 1) Ransomware authors have obvious economic incentive to decrypt, and no reason not to. This makes it a herculean task to convince the general public that they wouldn't do so.
It's irrelevant; this has nothing to do with the fake ransomware.
>2) By the time your data is encrypted, you'll be researching your specific ransomware strain and will find out if it's legit or not. Googling the onion address is an obvious choice and something the ransomware author can just tell you to do.
The search results of any onion address are just as fake-able.
> 3) Most people will need someone more technical to arrange the bitcoin payment anyway, these people will verify if the ransomware seems to be legit or not.
Sure, with their ransomware-detecting powers
>4) People don't magically get smarter, phishing still works if you pass the spam filters.
What does that have to do with anything?
I got bored of answering; in general your points seem weak, which makes you sound a bit too much like a ransomware creator. Probably you're not, since you've been here three years, but otherwise you would.
>I got bored of answering; in general your points seem weak, which makes you sound a bit too much like a ransomware creator. Probably you're not, since you've been here three years, but otherwise you would.

Not a ransomware creator but I understand the economics at play. Ransomware is more profitable than sending spam, unless you're spamming to spread malware.
The value of individual installs has historically averaged at significantly less than a dollar each, ransomware is bringing that way up.
You aren't going to stop ransomware unless you figure out a solution to all other malware, or invent a more profitable scheme. People need to do something with their bots and ransomware is always going to make more money than spamming from bots that haven't been able to inbox anything for 5 years.
There's simply no way you'll stop enough people from paying to make viagra spam beat ransomware.
Not really; ransomware is way more dangerous than selling viagra. I may want to kill you if you encrypt my data, not so much if you sell me a couple of viagra pills that don't work. When you scam someone (e.g. a Nigerian scam), you take money from one person (or a few); here you are taking data from a lot of people and hoping a very few will pay, making a lot more enemies in the process, likely including state actors, which may make it a federal crime to pay such ransoms.
Diminishing returns, running spam botnets is already so risky that making more enemies by graduating to ransomware probably doesn't make a perceptible difference. Do you go to prison for 25 years or 30?
Sure, you could probably deter ransomware by sending DEVGRU to murder the authors, but I doubt it's worth the political shitstorm that'd follow.
It's surprising that the attackers ask victims to send an email. Why not ask victims to publicly post a picture of their screen to social networks with a certain hash tag (and a new account)? That would be less traceable and harder to shut down, I think.
Not that I want to give attackers any ideas... :-)
Yes, as long as the payments are individually relatively small and anonymous, it's easy for people to misjudge the amount the attackers may actually be getting. Once you paint a target in the millions, people will notice more, it will become a bigger news story, some congressman or another will make it a pet cause, and then you've got a lot of attention on you. Like any criminal enterprise, the less attention from the authorities, the better.
It's not easy (I presume) to create such software. So why do they rely on some random e-mail provider? They could have done it so that computers unlock automatically after the address receives the payment. It's not that hard, the software could use multiple ways to get the private key (DNS, IRC, twitter, DHT) and it would be really hard to shut down.
Petya is ransomware-as-a-service, the author gives you the binary payload and unlocking service and it's up to the buyer to distribute / infect people. It often leads to poorly setup things like this where the buyer probably didn't expect their variant to spread so wildly.
They should sue the pants off whoever shut down that email address. For many companies it would have been cheaper to pay than to suffer the damage that has already been done. And it would be easier to catch the culprits if people give them lots of cash because in spending the loot, they will make mistakes and make themselves visible to the police.
Now there is nothing to track until they rewrite their code and try the attack again with randomized email addresses.
This is even more proof how powerful a 0-day in the wrong hands can be.
All of the affected companies should be considered compromised by the NSA.
Actually, every single Windows PC with an internet connection that has been used before March 14 should be considered irrevocably compromised. Ransomware is much more visible than spyware. Think about all the spyware-infected PCs/networks that nobody knows about.
And that was probably the tip of the iceberg with regard to their outdated software -- Apache 1.3.36 and PHP 5.1.4 are both from around 2006, so I'd bet everything else in their stack was similarly old. Failing to update anything for 10+ years will get you in trouble, regardless of what OS it's on.
That would be akin to running Windows XP. People running Anfient Monftrosities should not get cocky in general, attacks on old systems are only getting worse with time.
No, the implication is that Windows prior to that was insecure. That does not mean it's secure afterwards, just that we know it was insecure previously. You are extrapolating without evidence.
I think it's more accurate to say that the comment is explicitly stating that Windows was insecure prior to that date, the implication from which is that it was not as insecure after (else why make the distinction of the date at all).
> the implication from which is that it was not as insecure after
I'm saying there is no specific implication without confirmation from the author as the statement can be taken either way, and any you think you see is more to do with your state of mind than the statement itself. It's a statement about what we know. We know something to be factually true prior to that date. Afterwards is open to debate, and is opinion. Making a statement about that the period we have facts for does not imply anything about the period we do not have facts for.
I feel like you and I are not operating on the same definition of implication.
In the above comment when using the word implication my intent was "a conclusion that can be drawn despite not being explicitly stated".
To be unambiguous, the explicit statement is that computers prior to a specific date should be considered to be compromised. The conclusion that can be drawn, based on the fact that the writer specified that date, is that later dates did not qualify for the same statement, because the conditions were not sufficient. That is to say, that they were not insecure enough for the writer to include in his comment. That is the implication, despite the writer not saying outright that computers after that date were "secure".
The conclusion assumes the credibility of the writer, and the intellectual honesty of their comment (i.e. they didn't put that date there just to be facetious) but I believe that's a fair assumption given the context of questioning the semantics.
I also note that the actual implication here is not that computers are secure after that date, or even that computers are insecure but not compromised. The implication is, in fact, that while computers might be compromised after that date, the writer doesn't believe it's worth advising people to ASSUME they are compromised.
> In the above comment when using the word implication my intent was "a conclusion that can be drawn despite not being explicitly stated".
Yes, that is the same definition. But it is an error to draw that conclusion in question because it requires unsupported assumptions. That's why it's not implied in the original statement.
> The conclusion that can be drawn, based on the fact that the writer specified that date, is that later dates did not qualify for the same statement, because the conditions were not sufficient.
No, the later dates did not qualify because the knowledge is insufficient; or, if you allow that the knowledge was an implicit part of the statement, it's no longer a binary proposition. If there are two propositions that must be true for the original statement (we were insecure, and we know we were insecure), there are multiple alternatives. The problem is you are assuming a single one of the possible alternatives is implied, when it's not.
For example, I can say "up to this point in life, I haven't committed a felony." That does not imply I plan to commit a felony by itself. With additional context, it may or may not. I could just as easily follow that statement with "I don't see that changing any time soon" as with "I'm not sure if it's likely I'll still be able to say that next year." That additional context combined with the original statement carries the implication. In this case, people are assuming it's along the lines of one of those followups, when there is really no disambiguating context. Assuming one or the other is a problem of the person interpreting the statement, and in my opinion the root cause of quite a few arguments as a result of misunderstanding, which is why I called it out in the first place.
> That is to say, that they were not insecure enough for the writer to include in his comment.
Or they decided for whatever reasons they did not want to mention it. For example, to simplify the message and call attention to what they thought was of greater importance. Don't assume intent without evidence.
> while computers might be compromised after that date, the writer doesn't believe it's worth advising people to ASSUME they are compromised.
Which is a valid stance to have. I don't believe it's useful for the average person that has stayed patched to assume they are compromised. To assume so would mean never logging into any online account in my case. I believe it's useful to assume you are always under some level of attack, whether active or passive, and take precautions, but to assume you are compromised is quite a bit farther than that.
> On Tuesday, March 14, 2017, Microsoft issued security bulletin MS17-010,[7] which detailed the flaw and announced that patches had been released for all Windows versions that were currently supported at that time
It looks like this was not caused by a 0-day, it is apparently using EternalBlue as execution vector plus another (already fixed) vulnerability for lateral movement.
It also appears to be using common Windows lateral movement techniques based on credential stealing (namely WMI and PsExec), in addition to EternalBlue.
Maybe I'm missing something, but is there any evidence that this is actually a 0day attack? I didn't study the last outbreak that closely, but it seemed like it was a vulnerability that had been patched, but affected computers that weren't patched. Maybe I'm wrong though. But 0days or no, there will always exist some number of computers that have not been properly kept up-to-date and thus will be vulnerable to security exploits even after they've been disclosed and patched.
No, it's probably not a 0-day this time. But this exploit used to be a NSA 0-day before it became public. Everything that's happening now is the "lite" version of what the NSA is capable of.
The previous one, WannaCry, was based on a vulnerability that was patched on later OSes. Microsoft went back and retroactively released patches for unmaintained operating systems (like XP).
It was based on an SMB exploit released in a Shadow Brokers dump; an unreleased exploit thought to have been used by the NSA.
> But 0days or no, there will always exist some number of computers that have not been properly kept up-to-date and thus will be vulnerable to security exploits even after they've been disclosed and patched.
You are correct about this. Patches were released in March, but many seem to have put off security-critical patching.
> Patches were released in March, but many seem to have put off security-critical patching.
In fairness to some of the unpatched - the last round of Windows 10 updates refused to install on some machines (well, mine and some others on Twitter), and trapped me in an endless loop of download-install-fail-download. When this happened my landline internet was down, so this was happening over 4G tethering, and burning up $20/day in cellphone data until I just turned off my internet/tethering.
I'm not saying don't patch (you should!), just that even people trying to stay patched and do the right thing can find they're unable to do so.
You are absolutely correct, people are even still wary after the aggressive Windows 10 update tricks, so it is extremely unfortunate yet does make some sense.
I hope Microsoft can find a way to earn trust back, this problem is going to get much worse if people do not install security patches ASAP when released.
Distrusting Windows was the wisest thing you did since you climbed off your horse. [1]
No, seriously. How is it paranoia to think the NSA was/is surveilling your Windows installation if we already have proof that they have the means [2] and motivation [3] to do it at scale?
There is no proof of means or motivation to use 0-days at scale. In fact, using EternalBlue "at-scale" would have caused it to not stay a 0-day for very long.
They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates. Also Microsoft began to heavily spy onto Windows users as part of normal operation making it difficult to impossible to fully opt out.
I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless. There would be very little positive gain yet a whole lot of negative blowback from doing such a thing.
MS engineers can login to your machine and run programs / download documents. There also is some keylogger that sends data back without warning you. I can't remember which bits you can turn off, which bits got backported to 8/7 without warning, etc.
To make a long story short: from what anyone can tell, there is no way for consumers to obtain a version of Windows that both gets security patches and can run with sane privacy settings. There is an acceptable version called Windows LTSB, but you have to pirate it.
This has been discussed ad nauseam on HN and elsewhere.
Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?
If you are suggesting that, are you suggesting the trust root for that particular stack is something other than the vendor? If so who?
Take the example of Windows. Let's say they agree to put in a backdoor like DoublePulsar. Microsoft releases the official OS and says 'we promise this is all good, and only stuff that should be in here is in here. Honest.' How do we as third parties detect they've put something in there that shouldn't be?
I see you're CEO of verify.ly and have some background in this, so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.
> so I'm actually quite curious to know how you'd detect a malicious closed source vendor like Microsoft who is working with a TLA to provide backdoor access.
"Closed-source" certainly does not mean you cannot see the changes, just that far less people know how to read assembly/machine code to understand what is going on.
People frequently reverse engineer patches and updates, as the addition of features means more vulnerabilities. Security companies generally get a whole lot of free marketing in the press if they find and disclose major vulnerabilities (along with building detection/prevention into their products), so there is a large incentive there. Of course it requires trusting security companies not to hold back findings like that, a valid concern, but it is at least a step up from completely trusting the vendor to deliver non-backdoored updates.
> Are you suggesting that there's a cast iron guaranteed way of saying 'this stuff should be in the OS and nothing else'?
The security researcher mindset would be along the lines of "How does this new added/changed functionality work, and how could it be abused?" (You are correct that there is no guaranteed way to find this; otherwise all software would be un-hackable, which is not the case.)
> They don't need to deploy 0days if the vendor (willingly or unwillingly) cooperates.
> I don't understand how that would be possible. Such a change would be detected and very loudly discussed, making it pretty useless.
It would seem to me that these things are happening: 0-days are being added (often made to look like simple bugs), security companies are detecting them, and we're talking about them... eventually. So you're both right, but there's a period of sometimes years between the addition of a backdoor and its discovery. And the NSA doesn't care too much if it's found, as you can be sure it's not the only one, as the Shadow Brokers showed.
Take the example in this thread: EternalBlue. That particular flaw was introduced in XP, wasn't it? And it survived all this time despite countless security researchers poring over the code for a decade and more. It took a hack to reveal these tools.
Maybe the EternalBlue exploit really did just exploit a bug. Maybe it was a backdoor. It doesn't matter though. If it was a bug, it lay undiscovered for years which means there's plenty of opportunity for an actual backdoor to remain undiscovered too. So we have to deal with the possibility that 'exploitable code' (however it originated) may be around for decades and can be in every system as a result.
Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lie undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.
What about Heartbleed? That was another piece of 'exploitable code' that was around for years undetected. The examples of this are no doubt many.
It would seem to me then that there are plenty of cases where a 'backdoor' has been placed and plenty where a genuine mistake was made, but we can't ever really know which is which.
I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.
> So you're both right, but there's a period of sometimes years between the addition of a backdoor and its discovery. And the NSA doesn't care too much if it's found, as you can be sure it's not the only one, as the Shadow Brokers showed.
EternalBlue was a vulnerability, not a backdoor, as a backdoor would imply it was intentionally inserted. Again, any proof of malicious code being intentionally inserted would be huge news and would permanently kill trust in the vendor.
> Following that logic, a new piece of 'exploitable code' could be added in the next Windows update and it could lie undetected for a decade. It's happened before and we didn't find it until the ShadowBrokers did their work, so it can happen again just as easily.
This would be huge news. A negative cannot be proven, but it would not really serve much benefit to theorize about intentional backdoor insertion without proof. Anger at something like that is best saved for a provable case (Think of it this way: To a non-tech person, it would be great for them to be able to express outrage/call their reps/etc when there is definitive proof of this, versus saying "oh I heard this was already happening so whatever").
> I guess that is the problem for us who talk about it as it encourages taking sides, where the reality is paranoid people are sometimes right in certain cases and cynics who think it's just a bug are right in others.
There is nothing wrong with being overcautious. Problems arise when worrisome conclusions are reached, causing some (for example) to be unsure about the safety of automatic updates. The effect of this would be users avoiding a perceived risk of a malicious update, yet allowing them to be more exposed to real known vulnerabilities by not installing important security patches.
If you are referring to the level of analytics gathered, I fully agree! My point is, there would be a similarly loud reaction (at a wider scale) if a backdoor were introduced.
That's not true. When an exploit shows up on a computer, "How did it get there?" is often the hardest question. There's no way to know short of capturing it in a lab environment.
If you're talking about "at scale" being "the entire world," then yes. But usually the NSA tends to target their operations regionally, e.g. Iran.
To clarify, I am not talking about attribution. When I say "not stay a 0-day for very long," I mean that 0-day use by any threat actor is generally going to be very targeted, because a PSP and/or network tap logging artifacts or alerting the user risks exposing the intrusion, which would likely burn the 0-day (discovery allows detection signatures and patches to be created quickly, and remediations to be applied to affected systems).
Any use of a zero-day risks burning it, and this was one of NSA's most potent zero-days. I imagine they used it rarely and wisely; probably trying other exploits first.
And so now it's in the hands of people who have no such foresight. Which means soon it will be mitigated. Which means that despite all the pain right now, in the long run the Shadow Brokers may actually end up having kind of helped humanity.
It was fixed in a security patch one month before the Shadow Brokers leak. All computers affected by this ransomware outbreak (and WannaCry) were those who decided not to patch.
I suppose with the word "mitigation" kind of already having a connotation in the security community, I probably shouldn't have used it without making clear that I wanted the term to include its more banal implications such as "install the patch" and/or "get your systems off that old-ass OS!"
You're not the only one who thinks that wearing a tinfoil hat only when you use Windows, as if the NSA only knows how to attack Windows, is demeaning to the intelligence of other tinfoil hat wearers.
A trade secret proprietary and obfuscated operating system from an organization known to collude with the government
Or
Code I have read in part, and know others read, and stand to believe that among all of us using those with the money or time would also audit
Granted, we are all on predominantly x86 computers with proprietary, obfuscated control processors that can seize control of the system and do whatever they are told by the manufacturer (or those the manufacturer gives access to), so security in general is shaky either way.
Or more generally, don't use Linux for a false sense of security, because the security holes go much, much deeper than just the kernel and what's running on top of it, and Linux itself is nothing outstanding from a security architecture standpoint.
From the phrasing of your question, I suspect we disagree on the answer to your theoretically rhetorical question. I don't care what people could or would like to audit with their free time; I care what people do audit with their actual time, generally because they are paid or have a financial motive to do so.
Windows is fuzzed, analyzed, traffic analyzed, attacked, and picked apart inside AND outside Microsoft with higher frequency and greater depth than Linux is, regardless of which happens to be open source and theoretically easier to examine. If Microsoft were to inject malicious stuff into Windows it would be found and reported and exploited. There is too much money, too much exploit opportunity, and too much security researcher brand cred available to anyone who discovers even a hint of malicious behavior on Microsoft's part for it to go unnoticed and unreported.
And again, the point of the comment wasn't "Windows is secure" as nothing in tech is secure. The point was that someone who advocates wearing tinfoil hats around Windows to protect against the NSA while thinking Linux somehow gets a pass from those same bogeymen is not making a rational case for how to behave or what to fear.
That would never happen. A network tap would be able to detect a malicious update even if the main PC was implanted very well, and a Microsoft-signed malicious update would be worldwide news.
Please correct me if I am wrong, but I don't think there has ever been a single instance of this actually occurring, only "this could possibly happen" theories. I am definitely interested to hear more if this is not the case.
> That would never happen. A network tap would be able to detect a malicious update even if the main PC was implanted very well, and a Microsoft-signed malicious update would be worldwide news.
While I don't know of that specific scenario, Stuxnet used a hardware vendor's key to install infected drivers[1]. There was also a Chinese registrar that allowed a customer to man-in-the-middle Google[2]. Depending on how Windows organizes their driver updates, I could see an adversary doing a man-in-the-middle between Microsoft and their target, and pushing a bad driver update.
I will concede that the phrasing may be poor; a better way to put it is that "forced updates + NSL" would result in detection and a media firestorm, giving absolutely no benefit and obliterating any trust in Microsoft.
It's extremely risky to put out a mass update, yes. But if it were a targeted attack against an individual, the risk is greatly reduced, especially if that individual won't think twice about it.
> It's extremely risky to put out a mass update, yes. But if it were a targeted attack against an individual, the risk is greatly reduced, especially if that individual won't think twice about it.
At that point, you'd have to hope the target would not check the hashes of update files. If detected, then there is the same issue: A signed malicious update being detected (and easily verified cryptographically if given to a reporter) would cause a catastrophic media firestorm, eroding trust in the vendor forever.
A signed malicious update would be a Big Deal(tm), but the entity would also be able to survive it by claiming negligence. I don't believe negligence has ever been significantly penalized in the marketplace, aside from perhaps CAs, where damage can be limited (prevent new certs from being seen as valid; plenty of other options for sites). There's no such option available for penalizing Microsoft, and their lock-in is significant enough to limit nuclear options for doing so.
"We've revoked the signing key that was hacked by blah blah we have the utmost regard for security and adhered to best practices" and everyone would probably gloss over it for one instance.
What are the alternatives once an event occurs and Google/Microsoft/Redhat/?? claim it was an accident outside of their control (possibly due to negligence)? Yes, outside experts will be investigating to the best of their ability and there will be a statement about what measures have been put in place to mitigate the issue in the future. But what else would happen?
@willlstrafach,
Nothing you have said convinces me the commenter you are replying to is wrong. Especially since an NSL would prevent ANYONE who detected anything from speaking about it. Updates that tweak code to introduce vulnerabilities are not science fiction.
> Especially since an NSL would prevent ANYONE who detected anything from speaking about it
Forced malicious updates would indeed be a reasonable concern if this was somehow actually the case. It is not, though, and I am not sure how that would even work. Are you saying that when it is detected, the government would somehow become aware of the detection and threaten the finder with an NSL before they could tell anyone?
Just because YOU can't figure out how it works does not mean it's not possible, my friend.
But I will say that when you have a backdoor and suddenly that backdoor stops providing intel/data/whatever, it's usually a good indicator.
1. That screenshot clearly shows the certificate is being treated as not valid. I assume it is being shared for IOC purposes.
2. I am referring to a software update, in the context of revmoo's "forced updates + NSL" comment.
> Pre-Snowden a lot of things had been considered "could possibly happen" tinfoil hat theories, turned out a lot of them had not been mere theories.
I could believe that is the case for those outside of the information security community, but nothing novel/tinfoil-hat-worthy was in the leaks, just confirmations of predictable sources/methods used for intelligence gathering and CNE work. Forcing a company to issue a blessed update containing malicious code is very different, and again, I am very interested to hear of any proof of such a thing occurring without detection (It doesn't seem possible for that to happen without it being detected and being discussed very loudly).
A minor nit: if you convert this over to markdown or ReStructuredText, it'll display more nicely on the page and be easier to move over to GitHub pages or the like.
It is, although it isn't really a 'kill switch' in the sense that it can't be deployed universally; it works per system. This could be considered temporary though, as is turning off your computer if infected and NOT turning it back on. The encryption only takes effect after a restart.
The Netherlands and various other countries have created laws where either their version of the NSA and/or police can hoard 0days to be used for hacking.
This massive outbreak is so widespread that at this stage it appears that it either was a very recent 0day or something which only recently was fixed by a patch.
Instead of having loads of countries hoarding security problems I highly encourage a focus on security instead. Seems much better for the economy overall.
Not OP, but he is right. I just walked out of work, where I had to reverse the sample. It indeed uses EternalBlue (attacks by enumerating local network IPs with Windows APIs and randomly scanning the internet). Apart from that, it overwrites the MBR with a custom bootloader and schedules a restart ("shutdown /t /r") as SYSTEM in a random amount of time. After rebooting, it fakes a chkdsk and meanwhile, encrypts your files.
It is also true that it uses PsExec to spread.
TL;DR good old Petya ransomware (old as shit) with a copy/pasted EternalBlue-based spreading method. Nothing new.
Literature: sorry no, I didn't read anything; everything I know is from practice.
As for the tools: just IDA Pro, really, if you don't count the standard stuff: a VM to avoid getting the host infected (VirtualBox), Burp (to analyze malware HTTP traffic), etc. Nothing too fancy.
In theory, yes. In practice, the reality may be more complicated. How many ongoing investigations and clandestine operations rely on 0days that could be patched tomorrow?
Even if this weren't the case somehow, I could imagine intelligence chiefs and the like defending their 0days as necessary on public safety or national security grounds.
Edit: just to clarify, I believe 0days should be reported and patched to make everybody safer.
Your strange theory that the economic damage is unavoidable to improve security will break down hard the first time those 0-days are used by terrorists.
"Your strange theory, that the economical damage is unavoidable to improve security will break down hard if those 0days are used by terrorists for the first time"
It's not a "strange theory", it's the literal reason: NatSec is not a strange theory, it's the stated reason by multiple administrators and officials for why this behavior occurs.
Plus, how much economic damage was mitigated by using zerodays against terrorists and foiling their plots?
What if they used a zero day and prevented a 9/11 size 3000 person, multi-billion-dollar terrorist attack?
To suggest that the needle is at 0 and any negative use makes the entire NatSec angle bad is very naive, because any NatSec use that has succeeded is classified and we're not privy to it.
So we don't know the score, and we certainly can't claim that the score favors one side after any particular event...
But, keep this in mind, Israeli hackers compromised an ISIS computer and were keeping tabs on plots including a plot to weaponize laptop batteries, up until DJT burned the source by outing the Israeli op to Russians.
So the idea that zero days aren't in active use seeing results against terrorists is very naive, I believe.
But the subject isn't risk evaluation, it's the idea of a "score" where using NatSec state zero days get positive points for saving lives and saving money, and negative points for when terrorists use leaked zerodays or take advantage of unfixed holes.
The claim was "any terrorist attack using these proves it's a net loss"
My response was "the classified nature of positive points doesn't invalidate positive points, and you cannot call it a net loss without a full accounting"
Now it's just devolved into a game of hypotheticals where people try to disprove the idea of a full accounting by creating even sillier terrorist scenarios?
They will try to defend it, but a counterargument can be made if people start losing lives (eg. from medical systems going awry). Then the collateral damage will become unacceptable.
Can someone provide a simple (but not overly so) explanation of how the current generation of ransomware operate i.e., A) spread and B) lock up the computer? Does it always require human intervention for A. ? Thank you.
There are indications that this new version uses a number of ways to spread.
Where attacker == the ransomware executable:
First is the EternalBlue exploit developed by and leaked from the NSA. EternalBlue exploits a flaw in Windows systems on port 445 TCP that can be used to take complete control of an unpatched system. So if an attacker can connect to a vulnerable Windows machine on port 445 tcp they can take control of that machine.
There are also indications that this ransomware sample spreads using legitimate administrative tools in Windows such as WMI (execute commands on a remote system if you have an administrator account on that PC) and PsExec (mount shares on the remote system, and likewise execute commands, if you have an administrator account). These are legitimate (but legacy) Windows components that normally facilitate the management of client PCs connected to a domain at a company or school. So if an attacker can connect to a Windows machine on TCP port 445 (PsExec) or 135 (WMI) AND has administrative credentials for that PC, they can take complete control of that machine.
These two are probably part of how the ransomware spreads once it gets inside your network. The wcry outbreak a few weeks ago gained access to networks by infecting one or several people via a phishing e-mail with malicious files/links-to-files inside. AFAIK it's currently still unknown/unconfirmed how this outbreak spreads precisely but I'd guess it's either actively being spread by phishing OR it's been present but dormant in these networks for a while after having been installed by phishing over a longer period of time.
If an attacker possesses a 0-day then all bets are probably off, and even step A would not necessarily require any human interaction.
This outbreak is particularly nasty because after it's done encrypting files it supposedly triggers a crash that forces the system to restart (handy for servers, where a user is not normally able to restart the system). Because the system restarts, any artefacts from the encryption process that might be used to decrypt files without paying or restoring backups are gone.
Actually, I believe phishing / malicious attachment was debunked as the infection vector.
Subsequent research found that WannaCry starts scanning hosts and IPs on port 445 to try to find other machines to infect.
"Once the malware starts as a service named mssecsvc2.0, the dropper attempts to create and scan a list of IP ranges on the local network
and attempts to connect using UDP ports 137, 138 and TCP ports 139, 445. If a connection to port 445 is successful, it creates an additional
thread to propagate by exploiting the SMBv1 vulnerability documented by Microsoft Security bulliten MS17-010."
This is minutiae at this point, but it scans the "local" /24. My assumption is that it scans the /24 for any interface available, so if a machine is infected with a public IP, it will start scanning machines on the public Internet. Not to mention other variations may decide to scan more aggressively.
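The scan-the-local-/24 behavior described above is easy to picture in code. This is a rough illustrative sketch of my own (useful defensively, e.g. to audit which of your own hosts answer on SMB), not the malware's actual implementation, which used Windows APIs:

```python
import ipaddress
import socket

SMB_PORT = 445  # TCP port targeted by the SMBv1 (MS17-010) exploit


def hosts_in_slash24(ip: str) -> list[str]:
    # Enumerate every usable host address in the /24 containing `ip`
    # (254 addresses; network and broadcast are excluded).
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    return [str(h) for h in net.hosts()]


def port_open(host: str, port: int = SMB_PORT, timeout: float = 0.5) -> bool:
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def reachable_smb_hosts(local_ip: str) -> list[str]:
    # Sweep the local /24 for hosts answering on 445, as the worm does
    # for every interface on the infected machine.
    return [h for h in hosts_in_slash24(local_ip) if port_open(h)]
```

Note that if `local_ip` is a public address, the same loop sweeps 254 public hosts, which is exactly the concern raised above about infected machines with public IPs.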
Thank you for your explanation (also, others below as well). If I had more time I would try and learn about each of these security exploits because I find it fascinating.
A) It's got to find a victim (IP range scans or whatever), then try to infect it. WannaCry used a vulnerability in SMB (CIFS/Windows file sharing) to get the virus payload onto a new machine and get it to run.
B) Once a piece of ransomware is running on your computer, it can generate an encryption key and send that back to its controller machine, then start encrypting files on the computer.
"A" shouldn't be able to happen on its own on a properly firewalled network, I think. So the start of the spread might be someone clicking an e-mail link that they shouldn't, and the infection works to spread on its own once inside a network.
a) No intervention required, although many start that way because people click on everything. The general idea is: get into a computer using any means possible, then spread using any means possible.
b) They encrypt your files and make you pay, usually with a time limit before they just delete the files.
> Usually if it says "0-Day" assume that it can be exploited without human intervention a-la stuxnet
That's not at all what a 0-day means, it just means a previously unknown vulnerability. We've never seen a ransomware attack anywhere close to as sophisticated at Stuxnet. This latest attack is nothing new and is only affecting people who haven't kept their systems up to date.
I understand that this is not what it means. But generally speaking when an article says "0-day malware" it usually ends up meaning that no human involvement is needed.
Please don't assume I don't know what 0-day actually means. I chose my words carefully so as not to imply that I was stating the definition of the term.
Typically when we see news using the term 0-day, it's because no human element was needed in the infection of machines. Thinking back in recent memory (~17 years), I can't remember a time when 0-day was used when it didn't mean autonomous infection.
Although. I fully understand that the term means that it's a previously unknown issue. Which is why I chose my words as carefully as I did.
It "usually" means "undisclosed". Everything else is entirely circumstantial and coincidental.
The reason human intervention is generally required now is because Windows has been hardened enough that some idiot user has to click a button to bypass the built-in basic protection. There's still a possibility of a "0-day" exploit remote-owning a machine, though these sorts of exploits are a lot harder to craft due to that attack surface being exposed to more security scrutiny.
Does anyone know if any tools exist on Linux which can be used for early detection of ransomware?
Something that monitors file access, disk activity, etc. for suspicious behavior and can trigger some action or alert?
I think I remember some discussion about using a 'canary file' - some innocent looking file with known contents which should never be modified. If a modification is detected, you know something fishy is going on.
I'd like to emphasize the canary file. This is a file that you should never access in normal operations. Thus, if the file was in fact accessed, that is a sign that something is scanning your file system.
Depending on the threat, such a scan might be a good reason to pull the cord from the mains socket. You don't want to let a normal shutdown occur, rather pull the cord and mount the disk on another system to recover / analyze.
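A minimal canary-file check along those lines might look like the following. The file layout and function names are my own, and a real deployment would run the check on a timer and alert (or cut power, per the suggestion above) on failure:

```python
import hashlib
import os
from pathlib import Path


def create_canary(path: Path) -> str:
    # Write random bytes to a file that nothing legitimate should ever
    # touch, and return the SHA-256 of its contents for later comparison.
    data = os.urandom(1024)
    path.write_bytes(data)
    return hashlib.sha256(data).hexdigest()


def canary_intact(path: Path, expected_sha256: str) -> bool:
    # False if the canary is missing or its bytes changed, e.g. because
    # ransomware encrypted or renamed it during a filesystem sweep.
    if not path.exists():
        return False
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected_sha256
```

Naming the canary so it sorts early (e.g. `000-accounts.docx`, my example) means an alphabetical encryption sweep hits it first, giving the earliest possible signal.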
    $ aide
    Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db for reading
    $ aide -i
    Couldn't open file /var/lib/aide/please-dont-call-aide-without-parameters/aide.db.new for writing
To do it properly you would likely be looking at mandatory access control, such as SELinux, so that the ransomware wouldn't be authorized to modify the files and further would make itself obvious in the logs.
Not very easy to use (in a way that still provides meaningful security) outside of the server space, though it can be done.
RHEL products, including Fedora, come with a fairly usable SELinux out of the box. By extension, so does Qubes OS.
I currently run a QEMU setup at home with different VMs, all Fedora, for different domains of use (internet, work, development/art, untrusted, a clean environment for installing OS's, etc) in the spirit of Qubes. Regular backups of everything are made frequently.
In the highly unlikely event of a ransomware infection, it would be limited to a single domain.
I believe this is the way forward for personal computing.
Because now we can watch those funds and know how much money they made, we can watch them to see if they make a mistake.
If every address was different we'd have no idea how much money they're making and only funds paid by people who also reported them would be tainted by the long eyeball of the law.
They should have pre-loaded more onto the wallet to give the impression that most people are paying.
Less than $10K gives the impression that nobody is paying.
It is the same psychology as a product only getting a couple of two star reviews - you don't buy it, you go for the product with hundreds of 4-5 star reviews instead.
Transaction fees only need to be high if you are in a hurry. If you can wait a week or two, you can go with very small TX fees. As you can see in this graph, even very low-fee (5 to 10 satoshis per byte) transactions are confirmed eventually. https://jochen-hoenicke.de/queue/#24h
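The fee arithmetic here is just fee density times serialized transaction size. A small sketch (the 226-byte figure is my assumption of a typical one-input, two-output transaction; actual sizes vary):

```python
def tx_fee_satoshis(tx_size_bytes: int, sat_per_byte: int) -> int:
    # Miners prioritize by fee density (satoshis per byte), so the
    # total fee is simply density times transaction size.
    return tx_size_bytes * sat_per_byte


TYPICAL_TX_BYTES = 226  # assumed 1-input, 2-output transaction

low_priority = tx_fee_satoshis(TYPICAL_TX_BYTES, 5)     # cheap, slow to confirm
high_priority = tx_fee_satoshis(TYPICAL_TX_BYTES, 300)  # aiming for next block
```

At 5 sat/byte that is 1,130 satoshis total, which is why patient spenders pay almost nothing while hurried ones pay orders of magnitude more.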
What a clickbait headline. A paltry $3k and yet the article calls this a "MASSIVE ransomware outbreak". I would be curious to see what a "minor" outbreak is.
In Ukraine, banking services nationwide as well as credit card payments on the metro in Kiev, and the airport IT systems, are all down. At what point do we call it massive? When US banks and airports start having trouble?
That's interesting. I've heard a lot about Russia testing out cyberwarfare in Ukraine as a possible proving ground for future targets. Was Ukraine the hardest hit by this latest one?
It'd be interesting if this were actually made to take down infrastructure under the guise of ransomware.
Ukraine seems to be explicitly targeted, with the initial distribution happening already for some time but having a trigger to start lateral infection (which would be detectable) only 27th June 10:30 ( https://twitter.com/CyberpoliceUA/status/879825132088426499 ) and the actual ransom attacks only some hours after that; so it was intended to spread locally before attracting global attention.
Apparently initially the thing spread through the update of a popular Ukrainian accounting software [1], infecting a lot of networks of Ukrainian companies.
There are reports of hundreds to thousands of machines infected across multiple firms in multiple countries. I'd bet >99% of people are never gonna send the $300 in bitcoin to decrypt their machine, instead they'll just clean and restore as much as they can. The $3k is 11 people desperate to restore all their data now, more may come in the future after people have exhausted other options, but the vast majority will never pay unless their backups were hit too.
Seems like a better approach would be to have the ransom increase after each person that paid, just so you'd have some competition to pay sooner.
It's not. My previous company took everything offline; they got infected via connections to their offices in Ukraine. Lots of companies in Ukraine are infected.
The company I work for disabled all work-from-home VPN accounts for the time being, until we do a security audit.
Interestingly, WPP is mandating that all its employees shut down their computers, irrespective of the OS.
> As a precaution, WPP is mandating that everyone immediately shut down all computers, both Macs and PCs. This applies to you whether you are in the office or elsewhere. Working on an office computer remotely is not an option. Please leave your computers turned off until you hear from us again.
Is it common to have a list of every employee's mobile phone? I would guess a lot of firms just have informal lists of phone numbers held by managers and colleagues.
Plus if there was a list, wouldn't it be on a computer that's currently off?
Most managers would have their reports' mobile numbers. Just start with the C-suite and work down. This is a case of hierarchy actually being an advantage...
My calculator's a computer, but I don't call it that. My game systems are all computers, but I don't call them that either (and never have, unless I was discussing semantics).
It may be, but it is not in hardware or software related to the major "desktop" platforms. And they are by design far more locked down than your average laptop or desktop (I just wish said locked-down state didn't leave the OEM so much in control).
It basically comes down to it looking like the Lazarus Group.
The WannaCry attacks used the same command-and-control server used in the North Korean hack of Sony Pictures Entertainment in 2014, which wiped out nearly half of the company’s personal computers and servers.
...
Other digital crumbs linking the North Korean group to WannaCry include a tool that deletes data that had been used in other Lazarus attacks. The hackers behind WannaCry also used a rare encryption method and an equally unusual technique to cover their tracks.
Security researchers matched parts of the WannaCry code to previous viruses that were thought to originate from NK. Of course, they also said anyone could have copied and pasted the code and just made it look like that, but the media ignored that part.
> The haystack needle Mehta presented Monday now connects Lazarus to WCry, although the tie connecting the two isn't precisely clear just yet. WCry's creators may have deliberately added code found in Cantopee in an attempt to trick researchers into mistakenly believing Lazarus Group is behind the ransomware. Researchers at antivirus provider Kaspersky Lab said such a "false flag" is plausible but improbable. The Cantopee code snippet, the researchers explained, was removed from later versions of WCry, making it hard to spot and hence ill-suited to act as a decoy.
> [...]
> Grooten went on to say, "BTW, 'North Korea' may well be a foreign hacker group paid by them."
I said this before and it was met with mostly hostility, but I'm still wondering... Bitcoin has enabled ransomware, so it's a boon to crooks. What has it done for non-crooks? I don't mean conceptually (no Fed! decentralized! etc.), I mean since it's come into being, what has it done for you personally? For me: I bought a VPN subscription, anonymously. I probably couldn't have done that as easily without BTC. But I would personally trade that for not having ransomware attacks. Thoughts?
Bitcoin, if it had arrived about ten years earlier, would have been one way for the people in Venezuela to store some value and it may one day allow people who want to run away from their country to bring more of their wealth with them.
However, crooks are always going to be among the early adopters, so my answer would be that it has limited value right now.
Bitcoin is a neutral technology, think of it like cash. Buying illegal things is always done with cash but it doesn't mean we should get rid of cash altogether.
Regardless, even if we came to the collective decision that we wanted to get rid of Bitcoin, it's not feasible due to its decentralized nature.
I used to agree with the "neutral technology" line of reasoning, however I think my view has changed. Everything has an orientation to it, enabling or strengthening certain dynamics, but not others. These characteristics are not static, as they depend on the broader context, and can change rapidly and unpredictably sometimes -- yet they can be quite important and should be considered.
I would argue the concept of "perfect neutrality" is a non-sequitur. When someone says something is very neutral, it seems to me they are actually noticing that something either has near universal acceptance in the current mind-share or is simply non-consequential such that no one really cares one way or another.
It reminds me of "inherent value" (the general philosophical concept, not the financial term with a very specific meaning), which a lot of thinkers find to be a misguided concept.
That's not to say we should ban bitcoin. And even if we wanted to, as you said, attempting to do so would be a rather absurd endeavor.
This seems true at face value. Consider that cutting-edge technology known as the knife, with its inherent bias for cutting things.
Dinner time. Killing time. ("Food is murder"?)
It is the context of utility of a technology that is the determining factor. A technology, imo, can be deemed directly culpable of ill effects IFF it permits no utility context other than that which results in morally or ethically unacceptable outcomes.
Even nuclear weapons can be used for good, you know. (Extinguishing fires, for example.)
> it doesn't mean we should get rid of cash altogether
I can't remember the last time I saw physical cash. The only ones I know who are still using cash are drug dealers. Not saying it should be banned but it's almost gone in my country already.
No idea. The convenience is more important to me personally than the risk of being screwed over but since everyone is using it I presume there are people putting pressure back.
Depends on where you live. I doubt you'd have faced any repercussions for buying that VPN subscription with your credit card, but the governments of other countries might not be so understanding.
Bitcoin should exist for the same reasons Tor exists. Just because Tor is used by child predators and of little practical use to the average Western citizen doesn't negate its positive value.
I heard that the guy who invented TV thought it was going to be this amazing thing that would transmit knowledge and learning like never before. After he saw all the junk they put on it, he regretted ever inventing it. The same could be said for the internet: a genius of an invention that allows so much good in the world, yet the amount of mind-destroying, family-killing pornography on it makes one wonder how much better the world is with it.
I think at the end of the day, it comes down to the fact that with every new tool comes the opportunity to use it for good or evil. So I wouldn't blame the tool, but rather the person who uses it for good purposes or for ill.
"Just because Tor is used by child predators and of little practical use to the average Western citizen doesn't negate its positive value."
You can't just say that and expect me to decide it's worth it on balance; you need to make a case for it. Maybe this isn't the place, but it's a funny thing to just kind of assert. Arguing that things should exist because there are potential upsides misses the point, which is that the downsides may outweigh the upsides. It's particularly bold to make that claim about Tor: there must be a hell of an upside to justify the child predators...
> It's particularly bold to make that claim about Tor: there must be a hell of an upside to justify the child predators
Even if Tor didn't exist, there would be child pornography. Would there be more, less, or just as much of it? That's difficult to say. It's also difficult to say how much easier it makes the lives of those fighting for freedom of speech. Thus, saying whether Tor is, overall, a "good" or "bad" technology would involve comparing two unknown quantities.
> but, i would personally trade that for not having ransomware attacks.
You can't go back in time and "undo" Bitcoin. If you criminalize Bitcoin, crooks might very well continue to use it, if it's their best option. It won't be as financially liquid, of course, but the crooks will just tweak their prices to account for this. Either a more efficient underground payment system will be used, if Bitcoin were criminalized, or Bitcoin would continue to be used.
The proverbial cat's out of the bag: crooks have access to the Bitcoin code, too, and can easily replicate a blockchain to create CrookCoin. You can't truly prevent criminals from doing stuff using laws, since we call them criminals in the first place because they seemingly have little reverence for laws.
Via an anonymous prepaid cash service, like what Reveton used. Supposedly the attackers made millions; to be fair, some were eventually tracked down by following the payments. It's unclear whether that's because of the way they laundered the money.
Fusob demanded payment in redemption codes for iTunes gift cards.
(You may or may not be joking; let's assume you're not for this response.)
This is a dangerous argument.
I'm a free software activist, and I firmly believe that security without free software is a facade, but that doesn't mean that free software is always more secure; it's an open-source argument that's been fairly easily refuted lately by high-profile bugs in software like OpenSSL.
It's easier to hide secrets in proprietary software, but most security vulnerabilities are bugs, not explicit backdoors. So even bit-for-bit reproducibility won't defend you against that.
I'm not saying you shouldn't use GNU/Linux---I think that every user deserves an operating system that is fully free, and hope that people will use it (or another free/libre OS). But my argument is on the basis of freedom, which still stands _regardless_ of security. It just so happens that I believe that strong confidence in the security of a system is not possible with proprietary software.
So far this year, Windows leads the scorecard regarding mass infections and business downtime due to them.
So while open source is indeed no guarantee of better security, the results are in its favor. It might also be because it's not such an attractive target for hackers due to its low share of the desktop market. But there are still millions of Linux servers online 24/7, and I assume they have a bigger potential for monetisation.
Windows also leads the scorecard in installation base, which I think is the real causal relationship. If Linux were installed on 90% of desktops, you'd better believe there'd be a similar number of exploits for it. Something similar happened to Mac OS X not too long ago: as it grew in popularity, more and more exploits were found for the operating system.
That's what I tried to express above. I was also wondering what is more profitable in the ransomware economy: infect many almost-worthless machines, or infect an order of magnitude or two fewer machines, but with a higher chance of getting paid?
I'd say with a higher chance of paying because people administering them are more likely to know how to buy bitcoins, how to send them and what to do with the decryption key.
Or maybe Linux/Android on the desktop? Maybe all the PCs out there used for cash registers, ATMs, etc. will run locked-down Android systems in the near future instead of Windows.
The average Android phone probably has more spyware than the average windows machine these days. Probably a lot more vulnerabilities too because they rarely if ever get patched. Even the best android phones are only patched for 2 years.
Linux may be more secure, but Android is a ticking time bomb.
Note that having a good multi-generational backup system in place for all machines, servers and laptops, would render this kind of ransomware harmless.
But the state of IT has deteriorated so badly these days because management doesn't care any more. After all why care when you can just take your severance pay and get an increase in salary and more responsibility at another company. Rinse and repeat.
It used to be that the primary job of system admins was to keep the data safe from loss. That was more important than keeping the systems running. How did we lose this?
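For what "multi-generational" means concretely, here is a minimal sketch of a grandfather-father-son retention policy in Python (the counts and scheme are illustrative assumptions, not any particular backup product's behavior):

```python
from datetime import date

def keep(snapshots, daily=7, weekly=4, monthly=12):
    """Given snapshot dates, return the set to retain: the last `daily`
    snapshots, plus the newest snapshot in each of the most recent
    `weekly` ISO weeks and `monthly` calendar months."""
    snapshots = sorted(snapshots, reverse=True)   # newest first
    keepers = set(snapshots[:daily])              # recent dailies
    seen_weeks, seen_months = set(), set()
    for s in snapshots:
        wk = s.isocalendar()[:2]                  # (ISO year, ISO week)
        if wk not in seen_weeks and len(seen_weeks) < weekly:
            seen_weeks.add(wk)
            keepers.add(s)                        # newest in that week
        mo = (s.year, s.month)
        if mo not in seen_months and len(seen_months) < monthly:
            seen_months.add(mo)
            keepers.add(s)                        # newest in that month
    return keepers
```

The point is that even if the most recent generations are encrypted before anyone notices, older weekly and monthly generations survive, provided they are stored where the malware can't reach them.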
WannaCry caused the price to drop, rather sharply. If anything, the purpose would be to buy cheap Bitcoins and hope the price later corrects back upwards after the news has blown over.
I suspect the price drop is due to some trading algorithms using sentiment analysis. They see all the negative press around this ransomware, see the included word Bitcoin, assume the negative article is about Bitcoin, and automatically sell.
But that's just my theory, since I have a hard time imagining human traders seeing news like this and selling because of it.
>But that's just my theory, since I have a hard time imagining human traders seeing news like this and selling because of it.
Sure they might; the more Bitcoin/cryptocurrency is associated with cybercrime, the more likely it is to be banned (sending the price to ~0 in the affected countries).
It would certainly have a cascading effect, however, because the news soon follows about Bitcoin/all crypto dropping and both bots and watchful traders take their profits out and either get scared out of the market, or hope to capitalize on the dip.
I doubt WannaCry was the reason for the price drop. Rather, it was the extensive media coverage about Bitcoin hitting $3k, which probably woke up some people who realised that it might be time to cash in.
This was long before $3k. The price tanked immediately on May 12th and remained depressed until the 17th which was shortly after the initial outbreak was halted by the domain registration.
As someone affected by the ransomware - did anyone else notice empty console windows popping up from time to time the days before the ransomware triggered the encryption?
I did see those console windows here on Windows 10 but I am not (yet?) affected. So might also be related to something else (Windows Defender updates?).
Those attacks are still "gentle": if you have a read-only backup (and you should), you can resolve it with near-zero data loss.
What I fear is a cancer-like virus: not wiping or encrypting data at time T, but introducing subtle errors over a longer period. You would be contacted by hackers saying your last 6 months of data contain errors. That's scary.
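One hedge against that kind of slow, silent corruption is periodic integrity checking of backups against an out-of-band manifest of file hashes. A minimal sketch in Python (the layout and function names are illustrative, not any particular backup tool's API):

```python
import hashlib
from pathlib import Path

def manifest(root):
    """Map each file under `root` (relative path) to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*"))
        if p.is_file()
    }

def silently_changed(old, new):
    """Files present in both manifests whose contents differ."""
    return sorted(f for f in old if f in new and old[f] != new[f])
```

The manifest itself would have to live somewhere the attacker can't reach (offline or append-only media); then a diff against each new backup flags files that changed when they shouldn't have.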
Moronic design of the Microsoft page. It requires you to acknowledge some bullshit T&C on first visit, then redirects you to the website home page. That means someone clicking on the CVE to check if there is anything important will be redirected to a home page with no information. Most sane people will go back to the original link, but if there were only sane people in this world, this second wave of malware would be toothless. That's not exactly helping awareness of the vulnerability.
Now there's a new attack target: the central bank. Send 100 BTC to this address and I will decrypt the balances stored by your central bank, so you, again, know how much money you own.
Is anyone aware of an entity that attempts to objectively quantify the economic impact of an event like this (ransoms paid, data lost, labor hours lost, new security costs, etc)?
That just means someone's got a wallet containing the private keys corresponding to a lot of addresses. So when they want to move some coins, they just sign all the various transactions and send the money to new addresses.
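As a toy illustration of the "one wallet, many addresses" point (this is not real ECDSA/secp256k1; the key derivation and address format below are stand-ins):

```python
import hashlib
import os

def make_keypair():
    # Stand-in for ECDSA keygen: a real wallet derives the public key
    # from the private key on the secp256k1 curve.
    priv = os.urandom(32)
    pub = hashlib.sha256(b"pub:" + priv).digest()
    return priv, pub

def toy_address(pub):
    # Real Bitcoin addresses are Base58Check(RIPEMD160(SHA256(pubkey)));
    # a truncated SHA-256 hex digest stands in here.
    return hashlib.sha256(pub).hexdigest()[:40]

# One wallet file holds the private keys for many addresses; "moving the
# coins" just means signing a transaction with the matching private key.
wallet = {toy_address(pub): priv
          for priv, pub in (make_keypair() for _ in range(1000))}
```

So "a lot of addresses" implies nothing about a lot of owners: a single wallet trivially controls thousands of them.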
So... ransomware authors want payments in Bitcoin. The obvious counter-attack from governments would be to target and shut down all services exchanging bitcoins (or other digital money) for real money. Heck, they can hack them and delete all their data, so they shut down on their own.
That's not enough anymore: good ransomware will look for backup systems and wipe those out before proceeding. You need read-only, airgapped backups before you can consider yourself safe.
Not sure about Spideroak but in the case of rsync.net they duplicate snapshots and store them outside of your main account so even if your account gets compromised and an attacker deletes all your backups you're still safe.
I wasn't referring to corrupting the backup directly -- but corrupting the data as it is written to the backup server. This can be done by compromising the backup client, through a rootkit, etc. If this is undetected for a year before the attacker pulls the final trigger, you have a year's worth of bad backups.
If you back your files up to the USB drive on Tuesday, remove the drive after backing up the files, and get infected on Wednesday, the files on the drive obviously are not going to be infected.
As if this attack purely relied on people clicking on emails. Maybe that's 1 person out of 10,000, but obviously this used various other methods to spread.
WannaCry spread to computers connected to a network, but individuals at home probably aren't connected to a local network unless they have multiple computers.
I remember reading something about a guy warning about intrusions on his company during Wannacry to steal company data and install malware. Now we have this. This is giving me goosebumps.
How can one quickly check whether their OS is vulnerable? I know MS pushed updates, but sometimes updates get stuck, fail to install, or are delayed by the user... so?
I'm afraid that this attack demonstrates that the old PC architecture (side-loading any app, userspace privilege escalation, low-level file-sharing functionality) just isn't fit for purpose.
If malware can exploit a 0-day, 100-day, or 1000-day security hole in a corporate network of 2000 machines, it's too easy for that malware to share itself across the network and send email attachments to AllUsers (every single company I've worked for still allows Everyone to send anything to Everyone).
Microsoft's next XP patch should be to remove SMB functionality or just outright disable it (and probably remove IE and other nonsense installed by default too).
And when Windows 7 expires, the final patch should be a severe lockdown too.