Ansible playbooks read about a million times more clearly to me than Salt config files. Probably that's just due to me putting in the effort to learn, but I think there's a thread of truth in there too. For one thing, Ansible makes it easy for non-devops folks to just walk through what they'd need to do to provision a box, and then turn that walk-through into an Ansible playbook.
I also appreciate that there are a number of ways to set things up in Ansible, which makes it easy to write simple playbooks, and easy to write complex ones with multiple roles and branches as well.
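To illustrate that walk-through quality, here's a minimal sketch of a playbook (the package, file names, and host group are made up; syntax per the 1.x-era docs):

```yaml
# Hypothetical playbook: each task name reads like a step you'd do by hand.
- hosts: webservers
  sudo: yes
  tasks:
    - name: install nginx
      apt: name=nginx state=present
    - name: copy the site config
      copy: src=files/mysite.conf dest=/etc/nginx/conf.d/mysite.conf
      notify: restart nginx
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```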
At the end of the day, conf management has so many facets that I've learned to stop arguing the merits of one over another and just accept that there are different strokes for different folks.
I agree. I prefer Ansible because I can get something prototyped and built faster with it and it's easy to refactor a playbook into something more complex and more reusable.
This isn't exactly an even comparison -- Ansible has been working on its SSH implementation for about two years, so it's pretty evolved, and you won't find that elsewhere. By comparison, Salt's implementation is currently a rough sketch, and one they discourage using.
Ansible has a pretty robust implementation that allows sudo and su operations, is finely tuned to use things like ControlPersist, reports clearly when passwords are incorrect, and also has a paramiko implementation for older EL platforms where ControlPersist is not available. Things like detecting when the SSH key hasn't been added yet are also well handled, so it locks and prompts only when needed.
Ansible also features a higher-speed 'accelerated mode' that uses SSH for secure key exchange rather than relying on in-house crypto. Though the new pipelining features in 1.5 make plain SSH about as fast as accelerated mode, so that's saying something!
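For reference, the tuning knobs mentioned above live in ansible.cfg; a sketch (section and key names as of the 1.5-era docs, so double-check against your version):

```ini
[ssh_connection]
# Reuse SSH connections instead of renegotiating per task
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
# New in 1.5: fewer round-trips per module execution
pipelining = True
```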
Anyway, we take security very very seriously, which is why we invest so much in having a great SSH implementation.
Please don't conflate acknowledging that the ssh implementation is a newly-implemented feature with "discouraging" the use of it. You're better than that.
I agree that there's no reason to assume that Salt discourages the use of a feature that clearly required time to implement. But in the docs and videos I've seen of the new interface the words "way slower" come up over and over again.
There's a vibe that salt-ssh is more an answer to folks who would otherwise use Ansible, and less a feature that Salt has long had on its list of things that need to be implemented.
Not saying there's any truth to that statement, but that's the vibe I got from the folks I know who use Salt and knew that I preferred Ansible at the time. So while the word "discouraging" is a bit heavy, there's an absence of guidance as to why salt-ssh was developed and when it's appropriate versus 0MQ.
We do have a GUI/REST-API company product that sits on top of Ansible that adds things like role based access control, centralized logging, and so on. This is not an enterprise version of Ansible, but an additional offering.
By making this commercial we can produce a really high quality product by hiring some top notch developers, and can move forward at a faster pace.
There are no proprietary modules for Ansible that we hold back, but enterprise companies do have more stringent requirements for tools and that's where we draw the line.
For many users, Ansible will continue to be all they need, and that's fine with us too!
For reference, here's information about our commercial product:
Thanks for the great tool! I think making the GUI and REST API a pay-for service was a smart decision that will hopefully allow you guys to keep building out the project while also making a living.
Not sure where the confusion about Ansible being not fully opensource comes from. And it's not as though Salt doesn't have enterprise support and tool integration that you can pay for too.
As I noted earlier, conf management has so many facets and reasons for being implemented that no one tool will ever win the battle. Honestly, I'm happy to have such a wide-open space right now with so many great open source projects in it. When I started developing web apps 10 years ago, there was nothing that began to approach the robustness of any of the four big players in this arena right now.
Thank you to all the hardworking contributors, paid and unpaid, to Puppet, Chef, Salt and Ansible. Thank you!
Their website doesn't make it obvious where to go if you just want to download salt and get started. It seems to want to steer you towards SaltStack Enterprise.
However, you can find installation instructions (which include where to download) for all OSes here:
I have yet to check Salt out, but as someone who's just come off the back of Puppet's steep learning curve and lack of robust modules, it's something I'd like to look into.
I started looking at Ansible but like SaltStack better now. Windows support is one thing I liked better about it. I wish I didn't have to deal with Windows, but unfortunately I do, and Salt has the support for it.
I have some Windows admin work to do as well. However, I prefer that Ansible doesn't require daemons on every host, so I was a bit torn on which to use.
Searched around a bit and found pave. It has the things I need (admittedly straightforward) and is about as simple as it gets. Would be great if it could get some love, as I appreciate the no-nonsense design. https://bitbucket.org/mixmastamyk/pave
At Ayatii we use Salt both for initial configuration and scaling. It's very easy to get setup, and the concepts of States, Pillars and Reactors are easy enough to grasp and start hacking useful utilities for your deployment.
For instance, with a couple lines of Python, I was able to 'react' to any new DigitalOcean instance that was created and update DNS records or the Nginx config (if it was a certain type of instance). As someone who's more of a developer than an infrastructure person, Salt saved me a lot of time.
njpatel, I would love to get my hands on your Python code for the reactor stuff. I still haven't found a real-world example of updating load balancers or DNS.
I was going to do a blog post but never got around to it! I copy-and-pasted the DNS one we use into a gist (removing specific bits of our setup). We use DO, but this should get you to a point where all the bits are working and then you can do as you wish.
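In the meantime, the general shape of the wiring is roughly this (the file paths, the event tag glob, and the update_dns.sh script are all stand-ins for your own setup; check the reactor docs for your version):

```yaml
# /etc/salt/master.d/reactor.conf: map a salt-cloud "created" event to an SLS file
reactor:
  - 'salt/cloud/*/created':
    - /srv/reactor/new_instance.sls

# /srv/reactor/new_instance.sls: run a (hypothetical) DNS update script,
# passing the new minion's id from the event data
update_dns:
  local.cmd.run:
    - tgt: 'dns-master'
    - arg:
      - /usr/local/bin/update_dns.sh {{ data['id'] }}
```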
I started looking at Salt and really liked what I saw, but the first task I had to do was to provision some Windows Servers on Rackspace or AWS and configure them. Like the other commenter, I wish I didn't have to deal with Windows but I'm stuck with it. I found Saltcloud a bit bleeding edge for a newcomer to configuration management (although improving all the time) with Windows being its biggest weakness.
When it comes to configuration, it also seemed to me (admittedly with limited experience) the Windows support in Salt is a poor cousin rather than an equal citizen. Don't get me wrong, it's good they have it at all, but you'll need to do a lot more work yourself.
I plan to look at Chef next as a possible alternative since the Windows support seems more mature. No comments on Chef yet so I'd be interested to hear anyone else's experience with it, especially for a newcomer to config management tools.
My main experience is with Puppet, though I have migrated legacy systems from CFEngine as well. My main gripes with Puppet are that, at times, the DSL is restrictive and you have to either rely on ugly Execs or drop down into Ruby extensions. I can see how this can be off-putting to beginners.
Once we got comfortable, though, the advantages have made our infrastructure smarter and more resilient:
- Idempotency (If you're careful)
- Abstracting configuration data vs methods. Passwords/Keys/Addresses are kept in a Hiera config file and the manifests merely access these variables instead of hardcoding when applying changes.
- PuppetDB as a canonical reference of the state of all our infrastructure. Which of my servers are running MySQL? Which version? Where are all my Ceph MDSs?
- Exported resources (also relying on PuppetDB), enabling propagation of information about new nodes to all others. A godsend when paired with Nagios.
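To make the Hiera point concrete, a sketch (key names invented): the data file holds the values, and manifests fetch them instead of hardcoding:

```yaml
# common.yaml (Hiera data): secrets and addresses live here, not in manifests
mysql::root_password: 'changeme'
ntp::servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org
```

A manifest then calls e.g. `hiera('mysql::root_password')` (or relies on automatic parameter lookup) rather than embedding the value.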
Does Salt have anything that matches these use cases?
We switched away from Puppet towards Salt. The broad-strokes opinion is that Salt covers all of Puppet's use cases, with advantages in terms of both performance and extensibility.
It is idempotent; there is a central configuration db (a couple of options, actually; we use pillar, a set of YAML files, for its simplicity). Exported resources are handled a bit differently (you pull data from servers at config time instead of pushing at export time) but cover the same functionality, and the same mechanism covers software configuration inventory.
Where it excels is performance: our deployment runs took some 15 minutes on Puppet; Salt handles them in 30 seconds.
I also like its codebase. It is clear and well documented, and easy to extend. I am biased towards Python over Ruby, so take my opinion with a grain of salt (heh :-)
Yes, the run reports on each salt state, which ends in one of three results: failed (with a reason), succeeded with changes (and a description of what changed), or succeeded with nothing changed. Each salt state provided by the project is also intelligent about its use of resources - for example, if you have multiple pkg.installed states, the list of available packages will be pulled only once, and all states will be able to determine quickly whether they need to run.
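A sketch of what that looks like in an SLS file (package names arbitrary) - both states here share a single refresh of the package list:

```yaml
nginx:
  pkg.installed: []

git:
  pkg.installed: []
```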
- Abstracting configuration data vs methods
Methods -> salt states; configuration data -> grains/pillar. Grains are attributes that belong to a host (like hostname, system version, installed packages, available IPs, etc.), while pillar is a plugin system that can provide external data (it can be used like PuppetDB too; for example, I've got a plugin that pulls JSON files from S3 and makes them available as a simple hash). If you know Chef, think attributes/databags (but better).
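A rough sketch of how both show up inside a state file (the pillar key and file path here are invented):

```yaml
{% if grains['os_family'] == 'Debian' %}
apache2:
  pkg.installed: []
{% endif %}

/etc/myapp.conf:
  file.managed:
    - contents: "db_host: {{ salt['pillar.get']('myapp:db_host', 'localhost') }}"
```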
- PuppetDB as a canonical reference of the state of all our infrastructure.
It doesn't actually provide this out of the box, but it provides the needed elements, so building it is trivial. Basically, you can query all your nodes from the salt server (or nodes can query each other); you just need to extract the bits you need and save them to whatever destination you want. For example, on the server run `salt -G 'roles:database' grains.item mem_total` and save the data. You can also define a "returner", which is a plugin that handles the data you get back - for example, something that writes the data into your information store/CMDB.
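A minimal sketch of such a returner (the module would live under `_returners/` on the master; the file path and record shape are made up - Salt calls `returner()` with the job return dict, and the extra `path` argument here is only for testability):

```python
import json

INVENTORY_PATH = '/var/tmp/inventory.jsonl'  # made-up destination

def returner(ret, path=INVENTORY_PATH):
    """Append one job result to a JSON-lines file acting as a toy CMDB.

    `ret` is the return dict Salt passes to returners, e.g.
    {'id': 'db1', 'fun': 'grains.item', 'return': {'mem_total': 16384}}.
    """
    record = {
        'minion': ret['id'],
        'function': ret['fun'],
        'data': ret['return'],
    }
    with open(path, 'a') as fh:
        fh.write(json.dumps(record) + '\n')
    return record
```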
- Exported resources (also relying on PuppetDB), enabling propagation of information about new nodes to all others.
Pillar again. Although depending on what you want to achieve, you may want to enable some querying between the nodes, so that one of them can just broadcast some message at runtime and work on results.
We use saltstack at Zenpayroll.com to manage our servers, and I think it works pretty well. Not 100% sold on the jinja/python stuff, but it does the job.
I really want to give Salt a try. I've worked with Puppet and find the syntax ugly and the logic weird (e.g. all variables are essentially final and can't be changed in cases after having a value).
But it also results in a mess of what-can-override-what. You can't have just "override": you've got "automatic", "default", "override", and the later-added "force_override" and "force_default", because "override"/"default" were not enough. And I'm still running into situations (especially when different teams deploy on the same host) where I want to override something that can't be changed anymore (or doesn't merge the way I need it to).
Big fan of the speed and simplicity of Salt vs. Puppet. Plus it is multiple useful tools in one for me: configuration management, system provisioning, and remote execution.
In my opinion it still lacks more automatic deploy mechanisms. Ansible has some great ideas here, but I guess you still need to put in the time to write your deploy scripts. I'm still hoping for a Salt package-like system (https://github.com/saltstack-formulas) where you only use your top.sls and perhaps some custom pillar data to define how packages interact or where packages are needed.
Define WordPress running on 2 appservers + 1 database node + 1 loadbalancer with SSL termination, export the db and pipe it through GPG via this pillar key, and so on.
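Purely as a sketch of the wished-for system, a top.sls might assign formulas to roles like this (formula and grain names invented):

```yaml
base:
  'roles:appserver':
    - match: grain
    - wordpress
  'roles:database':
    - match: grain
    - mysql
  'roles:loadbalancer':
    - match: grain
    - haproxy.ssl
```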