I think the price point here is about a couple of things:
- Chef and Puppet are too expensive for most companies to acquire, and have too much operational cost for too little revenue
- Ansible got a strong following in the SMB space, Red Hat probably thinks they can move that upmarket some
- Ansible's agentless configuration management has potentially strong applicability in a container world (why do I need a chunky agent to configure resources on my docker image? What if, for some reason, I need to effect change on running docker images? I realize this is a bit of an anti-pattern for docker, but it was something I heard a lot from big enterprises)
$100m still sounds very high, kudos to the ansible folks who have come a long way in the last few years.
EDIT: one more piece I didn't think of here - the openstack side of things is an area where Red Hat has made big long-term bets for the future of the company, and it probably helps to justify the price in terms of backstopping their openstack support.
I really don't understand why an organization would want to use Docker (besides buzzword compliance) if they were planning on mutating running containers. What's the advantage?
I think one thing to keep in mind about Ansible is that it's an orchestration tool that also does configuration management. We've integrated Ansible into our workflows in such a way that it kicks off everything we need to do, even if that involves just coordinating some info between APIs.
We don't mutate containers at all - merely get Ansible to make things happen around their deployment and communication.
How do you use ansible to deploy your containers, if I may ask? We're looking into the docker module right now, but I don't know if it's any good. Currently we're launching containers via systemd and managing the unit files with ansible.
We're running all the containers on Mesos hosts, so really all Ansible needs to do for us is talk to Marathon. We realized early on down this path that to accommodate scale we'd need to have some sort of scheduler. Mesos happened to be the most robust.
We originally tried the docker module in Ansible but found it had a few problems. There's been a lot of work on it since, and I expect it will be in a much better state when Ansible 2.0 is released.
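For what it's worth, the systemd route mentioned above maps onto a couple of ordinary tasks. A rough sketch (unit, template and service names are all made up), plus a handler (not shown) that does a daemon-reload and restarts the unit:

    - name: install the container's systemd unit file
      template: src=myapp-container.service.j2 dest=/etc/systemd/system/myapp-container.service
      notify: restart myapp-container

    - name: make sure the container unit is enabled and running
      service: name=myapp-container state=started enabled=yes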
- Long-running processes where they don't want to destroy and redeploy every single time a fix or change gets deployed
- One is where they try to reduce the configuration sprawl by making configuration changes at runtime using something like ansible
- One is straight-up bigco stupidity (we must have a way to change the running configuration of a system because the audit team says so)
- separation of responsibilities - we have one team that builds "approved" docker images, and then dev teams can make changes based on that - it might be easier to deploy changes at launch time
Again, I'm not saying all of these make sense, but back when I was working on docker strategy and interviewing really big companies, these are the types of concerns they had about implementing docker at scale.
But one of the central points of Continuous Delivery is that there is no difference between "configuration" and "code", and that changes for either of them will result in a new release candidate. Every release candidate goes through the same automated quality checks before going to production.
In certain shops that may be true, but definitely not all.
Python has had a large role in Red Hat's tools for a very long time so I bet you'll find more Python than Ruby in sysadmin-land overall.
If you aren't using Rails for your web-stack, I suspect you might not be using any Ruby at all on a server. It isn't even part of the CentOS minimal install. Python is. I'm not sure about how Ruby is used in Ubuntu, maybe it's more common there.
However, language choice alone makes Ansible more "compatible" with the rest of the RHEL stack.
> If you aren't using Rails for your web-stack, I suspect you might not be using any Ruby at all on a server. It isn't even part of the CentOS minimal install. Python is. I'm not sure about how Ruby is used in Ubuntu, maybe it's more common there.
There is no Ruby installed by default in Ubuntu - or Debian last time I looked. Also (like Redhat) a significant chunk of the userspace distro code is written in Python.
Currently Vagrant is the only Ruby tool I use in sysadmin/devops land, and it's a workstation only installation rather than a server one.
Not having to deploy a whole new language runtime for your devops tooling is why I preferred Ansible and Salt over Chef and Puppet.
If anyone is wondering what the 'RDO' acronym stands for, from the FAQ[1]:
"RDO has no expansion or longer name, officially. It is not an acronym or abbreviation for anything.
However, RDO does focus on building a distribution of OpenStack specific to Red Hat operating systems (and clones of Red Hat operating systems). So, in some sense you can think of RDO as being a project started by Red Hat to build a distribution of OpenStack.
The 3 letter meaningless acronym sort of comes from that line of thinking."
Red Hat and Mirantis are now direct competitors in the OpenStack world. Red Hat buying Ansible, among the great points you made, will further solidify their position in the OpenStack world against Mirantis going forward...
I think it's hard to compare Red Hat to Mirantis. Even though Mirantis is taking a bit of the market from them, they are not playing in the same league; it's like saying Docker is a great competitor to VMware.
This is clearly much more about Tower, consultancy, etc., than their main product, but their YAML-encoded language is an abomination: masquerading as 'declarative' and easy to read, yet piling on loops and conditional statements and an unintuitive inheritance tree of global and local variables.
You missed the quoting mess, adding their own compact list and map grammar, the convention of having a comment on every line, and there's more I can't think of right now.
I hate ansible, it's just better than any of the alternatives for different reasons. Luckily we're moving away from needing any of them. Scripting an image build is a lot easier than updating a machine using CI: you start from a blank slate every time, and an out of date script isn't the catastrophe it is with CI since you have the images saved.
I think when Ansible started it wasn't obvious that logic (loops, conditionals, etc.) would eventually be needed. By the time it became obvious it was going to be required, it was too late to change.
Using jinja2 for markup compounded the issue in my opinion, as its loops and logic are less than obvious (compared to Mako, for example).
Still, I find its agentless model, the idempotent model, being able to use it on machines where you don't have root access, etc., give it a place that nobody else had filled.
I'd agree that jinja2 has a perfectly fine way of handling loops, but the problem is that functionality only works in Ansible template files.
The Ansible playbook can only make use of jinja2 filters to act on variables:
https://docs.ansible.com/ansible/playbooks_filters.html
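A tiny illustration of the distinction (variable names are made up): filters can be applied to variables anywhere in a playbook, but {% for %} blocks only work inside template files.

    - name: show how filters work on variables inside a play
      debug: msg="{{ backend_hosts | join(',') | upper }}"

    # this, by contrast, will not work outside a template file:
    # {% for h in backend_hosts %} ... {% endfor %}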
Oh, no disagreement there. I was just disagreeing with the GP's disparagement of jinja2; that's a limitation of how Ansible was implemented and has absolutely nothing to do with jinja2.
Ansible is great stuff, but some of those decisions were quite weird.
Jinja is great as a templating language. I find YAML by itself to be clean and readable. Cram all of it into the same soup, mix in the global variables, and it quickly turns into a nightmare. It takes what's already a problem in a dynamically typed language and amplifies it.
The Ansible playbooks are YAML lists. There's no particular reason that:
    ---
    # This playbook deploys the whole application stack in this site
    - name: apply common configuration to all nodes
      hosts: all
      remote_user: root
      roles:
        - common
      …
is more readable than:
    (playbook
      "This playbook deploys the whole application stack in this site"
      (play
        "apply common configuration to all nodes"
        (hosts all)
        (remote-user root)
        (roles common)
        …))
or:
    (playbook '((all '(common) :user root :comment "apply common configuration to all nodes") …)
              :comment "This playbook deploys the whole application stack in this site")
In fact, I'd argue that both Lispy representations are much more readable.
Oh, I didn't mean to say that S-expressions can't be used for configuration. It's that in general you don't want a Turing-complete language to do configuration management, because you want to be able to reason about things like rollbacks, dependencies and diffs.
Yeah, except that inevitably one does end up wanting some element of Turing-completeness, hence the Jinja templates used in Saltstack & Ansible.
In a S-expression-based configuration language, one would either embed an S-expression-based programming language, or generate the S-expressions with a programming language which can manipulate S-expressions.
I don't think that it's that easy to get away from needing Turing-completeness in general. No reason you can't still support rollbacks, dependencies and diffs anyway.
I think this is something along the lines of what a friend once told me: "Compare the grammar of Java and C++. C++ has a very complex definition, whereas Java is brain-dead simple. And that fact enables all the powerful transformations an IDE can do."
Also there's this school of thought that attributes most security problems to people accidentally using Turing-complete languages where they meant to use something less powerful. Consider vulnerability to arbitrary code execution through user input injection, which could be interpreted by your program being a "parser" (so-called "shotgun parser" - it's implicitly distributed throughout your code base) for a Turing-complete superset of what was supposed to be a list of accepted inputs. There are pretty good talks about this line of thought and I personally find it pretty interesting.
But none of this affects the fact that in complicated enough programs, you need "configuration" to be more code than data, which leads people not knowing of Lisp to reinvent a subset of it in XML or JSON or something similar.
Right. That addresses the point about mangling YAML into something that looks like a language. It was this that made me curious: "It's that in general you don't want a Turing-complete language to do configuration management, because you want to be able to reason about things like rollbacks, dependencies and diffs."
Probably because nobody called it 'configuration management' then. It's a very good tool for the job - because eventually all configuration formats end up getting Turing-complete and totally unreadable. Why not just start with something that supports code = data out of the box?
It can work but it works best when the language is kept dumb.
A dumb declarative language is easier to understand and easier to maintain. It can be used to help maintain a strict separation of concerns.
The smarter you try to make the language, the worse it becomes. Ant & MSBuild AFAIK are basically full programming languages in their own right so there was really no point to them actually being their own language.
The real lost art here is scripting languages. Even if you parse things yourself, configuration files still have a natural tendency to evolve into a crappy programming language over time. So instead of writing your own config file format, you should just make a couple of quick bindings for a scripting language like Lua, Python or Ruby.
Syntax and semantics are separate, not having to learn a new syntax is handy.
Syntactically, the problem I run into is that it's got its own DSL in task definitions, so it can be hard to keep in mind what's YAML and what's the DSL.
Semantically, loops and conditions are essential features, so I don't have a problem with that. The inheritance could use some clarification; I was hit last year by a regression that remains unresolved.
I found this sentence funny "Representatives of Red Hat and Ansible did not immediately respond to requests for comment". I take it to mean: "we wanted to run the story as quickly as possible; still it would have been nice to get superquick comments by RH or Ansible; tough luck, though."
To me "did not immediately respond to requests for comment" smacks of neediness and self importance on the reporter's part (answer me now, you fools, don't you know who I am and what power I behold?!) and the people that would respond to such comments being in the middle of dealing with something more important at the time (perhaps answering a queue of queries that came in first, or queries from people who are more important to their world view). If I were RedHat or Ansible and read that sentence the reporter and/or outlet would be added to a "never respond to these people for at least 24 hours" list...
The appropriate line for online publication is "We reached out to both x and y for comment and will update this article with any responses we receive."
See, I actually have more respect for journalists who put things like that.
I don't think it's self-important, but I do think it's important to note that you can't send someone an email at 9 PM on a Friday and expect a response. Noting that they did not _immediately_ respond is an important distinction from not responding at all.
> answer me now, you fools, don't you know who I am and what power I behold?!
Alternatively, "Answer me now, before someone else beats me to publication, oh god please ... c'mmooon it's almost deadline, pickuppickuppickuppickup ah dammit, too late."
Emphasis yours. That line has existed in journalism for decades. I think it's interesting that the perception exists that this is somehow arrogant; to me, it seems to protect Red Hat/Ansible from the perception that they're uninterested in public relations. Source: I work for a newspaper.
More like "Both companies didn't immediately respond to requests for comment. Its 8:30PM and no one's taking my calls or responding to my emails, but we're an online publication and want our content to go viral and get maximum eyeballs, so we'll run this story anyway."
Those quotes are found in a huge number of stories.
Sure you can blame them for a rush to publish if you want. I think mainly they are trying to indicate they reached out to a relevant party and didn't hear back. And by saying didn't hear back immediately they are making clear that it may well be the party wasn't immediately available (not that they were not willing to respond at all).
On some (maybe all, I don't know) sites, they then update the story if they get responses.
Yes it is, but the future is not evenly distributed, to paraphrase William Gibson. For many enterprises, even Ansible's current model is already way out there in the distant future.
Also, I think Ansible's idempotent model actually works nicely with immutable infrastructure. Why? For development of your stack. While messing around with it, you probably don't want to rebuild the whole thing from scratch. Of course you can play funny games with caching of remote packages and so on, but that's getting into Ansible territory anyway.
So I think a good model for immutable infrastructure is to use a tool like Ansible to develop the stack, then in production you would use the same tool to spin up immutable instances.
I was using ansible with packer https://www.packer.io/ to build AMIs (Amazon Machine Images). I'm spending a lot more time with docker these days though.
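(For anyone curious what the bake-an-AMI step can look like with Ansible alone rather than Packer - purely an illustrative sketch, not what the parent used; the base AMI, key, group and image names are placeholders:)

    - hosts: localhost
      connection: local
      tasks:
        - name: launch a temporary builder instance
          ec2: image=ami-xxxxxxxx instance_type=t2.micro key_name=builder wait=yes
          register: builder
        - name: make the new instance addressable by the next play
          add_host: name={{ builder.instances[0].public_ip }} groups=builder

    - hosts: builder
      roles:
        - common      # whatever normally configures the box

    - hosts: localhost
      connection: local
      tasks:
        - name: snapshot the configured instance into a new AMI
          ec2_ami: instance_id={{ builder.instances[0].id }} name=myapp-base wait=yes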
I can see how that would work for stateless services. Just build a new image and discard the old one.
But what do you do when you want to change your MySQL config file? Create a new image and somehow transfer the data? Or are the datastores somehow externalized? Then how do you synchronize shutting down the old image and starting the new updated one, preventing them from accessing the store at the same time?
The linked article kind of waves these issues away ('externalize state in Cassandra or RDS'). Then am I supposed to use two mechanisms/tools to run my infrastructure? Docker for stateless servers and something like Ansible for stateful servers?
'I see it as conceptually dividing your infrastructure into "data" and "everything else". Data is the stuff that's created and modified by the services you're providing. The elements of your infrastructure that aren't managed by the services can be treated as immutable: you build them, use them, but don't make changes to them. If you need to change an infrastructure element, you build a new one and replace the old one'
More in the actual full transcript and in the video
This is all well and good, but the devil is in the details. Like rdeboo says, what happens when you do need to change the datastore config? Databases famously need plenty of care and attention to achieve optimal performance. They are decidedly not fire and forget systems. How do I tweak my postgresql performance parameters in the immutable world?
You could keep Postgres in an immutable image, with only /var/lib/postgres in a separate volume. Upgrading the PG config would just be a matter of unmounting it, replacing the image and re-mounting. (Docker automates this with its "data volumes", but you can do it manually too).
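In Ansible terms that's roughly one task using the docker module (just a sketch; the image, container name and host path are made up, and the module's exact argument and state names have shifted between releases):

    - name: run postgres with its data directory on a host volume
      docker:
        image: postgres:9.4
        name: pg
        volumes:
          - /srv/pgdata:/var/lib/postgresql/data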
In theory yes - but that strategy doesn't always work. Sometimes the implementation of the data store changes between releases - requiring an upgrade or data migration.
That may prevent the simple unmount/replace image/mount workflow, but it doesn't prevent the separation between the mutable and immutable parts of the DBMS.
We're using it for immutable infrastructure where we build images with ansible and deploy those images. It's basically the same as a dockerfile and ultimately instead of a container you use a right sized machine. I don't really get the need to containerise everything unless you are buying big metal and deploying on top of that.
My experience with Ansible has not been so pleasant. Performance especially is a showstopper. In my environment it takes 20 minutes for 12 servers to be set up with some Redis and Elasticsearch stuff. There are quite a few become_user directives, but 20 minutes for this kind of stuff is just not acceptable. After all, application settings need to be tuned and iterated over, too.
My idea was to develop the infrastructure with Ansible, e.g. no ssh to change some httpd settings at all - everything via Ansible. It worked very well as long as the playbooks and the number of servers were very small.
This has been my experience as well. Even using a small subset of a playbook via tags can take a long time, especially if you're doing a run in serial. One of our deployments that only affects six servers takes fifteen minutes.
This can be mitigated somewhat by putting Ansible on the target machine, downloading all the necessary files to that machine, and then running Ansible locally... but that seems awfully fragile to me.
I am much more interested in Salt's ZeroMQ path these days. It seems to scale better, at least on paper and in my few small tests.
I'd be interested in hearing how this stacks up against simply running the tasks via shell scripts, because the time to install packages/do other tasks is orders of magnitude higher than the connection overhead. Things will always be slow when doing `serial: 1`, so I'd definitely recommend a canary setup where you run a play with a small serial batch followed by a play with no serial limitation - that'll speed things up considerably.
Finally, when using ControlPersist with pipelining mode in Ansible, it's as fast if not faster than zeromq or our own accelerated mode (which we will be deprecating at some future point when older SSH installs are not as common).
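The canary pattern described above is just two plays over the same host group; something like this (group and role names are made up):

    - hosts: webservers
      serial: 1
      roles:
        - app

    - hosts: webservers
      roles:
        - app

The first play hits one host at a time so a bad change fails early; the second then rolls the rest in parallel and should be a no-op on the canary, since the tasks are idempotent. Pipelining and ControlPersist, meanwhile, are ssh connection settings in ansible.cfg rather than anything in the playbook.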
If you're using Ansible for orchestration, you could try using the cloud's orchestration service instead. e.g. Rackspace Cloud Orchestration, AWS Cloudformation etc. In this specific case, you can use the orchestration api to spin up and manage the servers, and use ansible to manage the software (although there is a way to manage software as well [0]; I'm just not familiar enough with it to suggest it)
Disclaimer: I work in the Cloud Orchestration team at Rackspace.
Cloudformation is a shit-show. I wrote the boto_* modules in SaltStack to avoid using Cloudformation. It does magical shit like "oh, you wanted to change this one value? I'm going to rebuild entire portions of your cloud."
You can just run portions of the playbooks, but then you lose the value of a descriptive infrastructure. What does X look like? Depends on when each tag was run.
It shouldn't unless you're very careless. A tag that just updates all of the settings files and restarts the services should have the same effect as the full playbook run.
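Concretely, that kind of tag is just a config-push task plus a handler, e.g. (names made up):

    - name: update app settings
      template: src=app.conf.j2 dest=/etc/app/app.conf
      notify: restart app
      tags: config

Running `ansible-playbook site.yml --tags config` should then leave the config and service in the same state a full run would - as long as nothing outside the tag has drifted.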
And if you add a new machine to the cluster, which hasn't had all tags run against it? Or if a machine was temporarily offline when a tag was run, or...
There are many potential situations where not running a full inventory against a running machine results in a machine not being properly configured.
We eventually settled on having Ansible build an AMI for us that can then be spun up as part of a CloudFormation template (also initiated by Ansible).
We've actually been moving further and further away from having Ansible handle the configuration management side of things, and deal with Orchestration primarily.
We've moved to using Hashicorp's Consul-Template (https://github.com/hashicorp/consul-template). Ansible populates Consul with any required configuration changes during the deployment of a new version, and Consul-Template knows about these changes and automatically writes them to disk. Applications running on the host are then reloaded to pick up the changes.
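The "Ansible populates Consul" step can be as simple as a PUT against Consul's KV HTTP API. A sketch, with a made-up key and the default local agent address (Ansible later also grew dedicated Consul modules):

    - name: publish the new config value to Consul's KV store
      uri:
        url: http://localhost:8500/v1/kv/myapp/config/db_host
        method: PUT
        body: "{{ db_host }}"
        status_code: 200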
So this could explain why his very active GitHub activity suddenly stopped in early 2015; I just presumed it was due to consciously allocating more time for management and leadership.
The rate and depth of his GitHub contributions to Ansible in 2014, while still the CEO, were a great inspiration to me.
Interesting! Ansible is great technology. Not as mature as Puppet or Chef, but it's getting there. However Red Hat is currently heavily pushing (what I understand to be) their own fork of Puppet inside Satellite 6. So quite a few RHEL customers in the process of rolling out the latest Satellite is probably going to want to hedge their investment in it. Perhaps there is some Red Hatter here who could comment?
It's not a fork of Puppet. Satellite ships with its own copy of Puppet (3.6 IIRC) which it integrates to provide the configuration management side of the product, but it's stock, unmodified Puppet.
In fact the Puppet side of Satellite is built around Foreman (http://theforeman.org/), which is an open source project that isn't Red Hat controlled, so even if Red Hat wanted to move 100% to Ansible it would be very hard work for little gain. It would also be a really bad commercial idea: Puppet is by far the market leader, and most of their customers buy Satellite precisely because it integrates with their existing Puppet manifests.
So I expect Puppet to stay as Red Hat's go-to configuration management tool, and Ansible to be used more for its ad-hoc remote execution capabilities, where Puppet is nowhere near as good. RH already uses Ansible in the installer for OpenShift, for example, because it can set up multiple boxes without needing an agent pre-installed.
Oh, I understand it's stock Puppet inside the thing. But much of the tooling around it (the Hiera syntax, the dashboard, the DB) acts as an alternative to the tooling around Puppet.
Satellite 6 and Puppet Enterprise are direct competitors, and there is not much further upstream development on Puppet 3.6, so I expect Red Hat to have to take on the necessary development work during the life time of the product.
So, in essence almost a fork already, and in the future much more so. You already have to choose, you have to port your old codebase and tooling to one or the other.
Foreman is working on Puppet 4 support [1], and I'd say we'll try to push it forward sooner than later as soon as we complete the migration to Rails 4. Foreman is the upstream for Satellite 6.
I think what makes a product like ansible catch on is its use of a simple scripting language like python. This makes project participation more accessible.
Ordinary sysadmins can write their own ansible modules with ease. It's possible that cfengine has that now but ask sendmail about repairing an old reputation.
Actually I think it's more the YAML config files than the fact it's written in Python. I learned 80% of Ansible in probably 10 days of writing playbooks and going through the infrastructure at my new job.
Also I used to work with Puppet in an 8000 server environment, and Ansible and Salt are both so much more fun and easy to use than Puppet. I hear the same thing about Chef too.
Last Ansible is the only one that doesn't require any agents installed and does everything via SSH. At first I didn't think I'd like that coming from Puppet but, I can do everything I need to without another daemon to worry about.
I also came from operating one puppet environment to using ansible, and just like you the major sales points were ease of configuration with YAML and agentless deployment.
But development of the project has been fueled by skyrocketing participation. Myself and a friend of mine have both contributed small bits of code to the project without being professional developers, and looking at the github contributors they are in the thousands for a 3 year old project. Compare that to cfengine's 73 contributors.
My thoughts exactly on all points except the last one.
It's definitely good to not be forced to use agents everywhere + a dedicated "mothership" instance, but sometimes I do wish I had Ansible agents on my instances, just so I could "git push" the whole thing and forget about it.
Looking forward to Red Hat following on their good old habits and open-sourcing Tower.
For example, I wrote a report to inventory which hosts are connected to Active Directory, and had a pretty pie chart for management in minutes (across 2K+ hosts).
Ansible is a fantastic tool. I put it up there with Rails, Backbone, and jQuery. The shadow of Puppet and Chef is large, but many are starting to see the light.
I hope that Redhat will accelerate the growth of this very well engineered platform.
Supposedly a > $100mm deal. Both companies are already headquartered in N.C., and Ansible has a ton of momentum in the RHEL and OpenStack arenas, so it would make sense to pull the project into the fold.
One thing I wonder is how much the project's priorities would shift away from (if at all) anything non-RHEL-centric.
As a Red Hat customer it'll be interesting to see how it affects the complete fucking shambles that has been the Satellite 6 rollout, which was supposed to be full Foreman/Puppet integration for provisioning and config management.
Apart from the fact that it's been a shambles, Red Hat have been solidly pushing customers down the puppet route. I expect there will be some grumpy meetings in the next few weeks.
I would imagine it is not so much about Ansible's general valuation in the industry but about its value for Red Hat (i.e. Red Hat is not buying Ansible for its revenues but for its technology).
The direction in which the technology evolves. That's also the reason why Red Hat hires a lot of upstream developers[1]. It is easier to tailor your services and offer better support (ie: the revenue stream for Red Hat) for OSS when the developers who write it are on your payroll.
I thought so too for a long time. Until that time when I upgraded the RAID10 on our database servers from a 4 drive to a 8 drive configuration (which requires rebuilding the whole array if you want the performance benefits). Getting the intricate configuration of the two machines (postgres streaming replication works, but has a lot of moving parts to keep in mind) back without having to remember any details was absolutely priceless.
Completely wiping and reinstalling the main database servers (one after another of course) during the day while the system was in active use and completing the process with zero user intervention, that felt amazing.
Since then, whenever I had to reinstall a machine for one reason or another, I always appreciated the immense speed-up I gained by not having to ever manually re-do the configuration.
Better yet: All the years of growing the configuration, all the small insights learned over time, all the small fixes to the configuration: All are preserved and readily available. Even better: By using git, I can even go back in time and learn why I did what and when.
"Why am I using TCP for NFS? Oh right - that was back in december of 2012 when we were using UDP and we ran into that kernel deadlock" - that's next to impossible to do when you're configuring servers manually.
I don't think devit challenges the "automate" part, only the "separate tool" part. In Ansible you specify a sequence of commands just like you do in a shell script.
Take the time to learn a tool like Ansible. It is not about replaying a simple sequence of commands (imperative). It is more about declaring what you want your system to look like, and letting the tool decide which pieces need to run based on the current state of the system.
It's like make vs a shell script. If you use scripts to build your programs, you either have to write your own checks to test whether every step is necessary or not (cumbersome, error prone, and quite complex) or you just script it to build from scratch every time (inefficient).
But for systems management, rebuilding from scratch can be worse than just inefficient. Imagine if your script reinstalled MySQL from scratch every time you ran it...
Most shell commands are actually "declarative" in a sense.
If you run "apt-get -y install foo" that means that you want "foo" to be installed. If it's already installed, it just does nothing.
In Ansible, you'd use "apt: name=foo state=present" which does exactly the same thing as the apt-get command, but requires a web search to figure out how to write (assuming you know normal Linux system usage but haven't memorized Ansible).
The only differences seem to be that Ansible tells you whether the command made a change or not, and that you can parse the Ansible configuration with an external tool (assuming there are no loops/variable/etc.), but both of these things don't really seem that useful in practice.
> "apt-get -y install foo" that means that you want "foo" to be installed. If it's already installed, it just does nothing.
Not really. If you do that it means that you want to update foo to the latest version the system knows about.
And other commands fail if the thing they are supposed to do is already done. Like `adduser`. So you could still run it and assume a failure to mean "the user already exists" - but it could of course also mean "the user didn't exist, but creating it failed".
Then you start to have a look at the exit code which may be different between the two cases.
But every command behaves differently, so you need to learn all of this.
With Ansible (or puppet), the syntax is always the same and the actually needed operations are abstracted away.
Again, my advice is to take some time and learn the tools before discussing their strengths and weaknesses.
You missed more Ansible strengths, like detecting changes and restarting only affected services, for example. Show me the idempotent shell script which does apt-get to install some dependency, updates some configuration file, and then starts or restarts a service depending on both the current state of the service (was it already running?) and whether the apt-get or config file change actually modified things.
Then scale that up. A lot.
There is even more. If you care to learn it before dismissing it.
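For anyone wondering what the playbook asked for two comments up looks like in practice, roughly this (package, file and service names are placeholders):

    - hosts: appservers
      tasks:
        - name: make sure the dependency is installed
          apt: name=somepkg state=present
          notify: restart app

        - name: push the config file (only reports a change if the content differs)
          template: src=app.conf.j2 dest=/etc/app.conf
          notify: restart app

        - name: make sure the service is running at all
          service: name=app state=started enabled=yes

      handlers:
        - name: restart app
          service: name=app state=restarted

The handler only fires when one of the notifying tasks actually changed something, which is the behaviour being described.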
It's not that hard to make a shell script idempotent - I've done it quite a bit. You check for the artifacts of an install and branch based on the results.
I still use Salt instead of shell scripts, but that's mostly to have the authenticated/encrypted channels and only have to push my code to one place to run it globally.
Well, one of the main advantages of using a configuration management tool is that the configurations you're writing are actually repeatable, and these tools tend to provide you with a lot of modules that take this in regard for you. If you were to use pure shell, you'd have to take a lot of things in account just to take care of this aspect alone. Also, these tools provide abstractions that make it easier to execute things as a unit (such as adding a user and a number of things having to do with it) without having to think about all the details. Often, they can be used on multiple platforms in the same way, too. So yeah, I do think configuration management tools solve real problems.
Just being able to have your tool know the list of servers, and their roles makes it worth it.
I did a fair bit of work based on the OpenStack tripleO project, which suffered from the OpenStack NIH syndrome. They could not agree on a CM tool, and wrote it in bash. Never, ever, ever again. Trying to cluster RabbitMQ / Percona across 3 different machines, via bash is an abomination, whereas in Ansible / Salt etc. it is pretty easy :)
That's ok if you have a known good baseline configuration. In that case it's no different to say a Dockerfile.
However the config management stuff seems to come to light when you've got a mess on your hands and need to rationalise it and make it consistent.
I'm slightly leaning towards the "rebuild with known good baseline" state of affairs these days however even as a long time Ansible user. Rather than upgrade stuff, I build something new alongside and then do a switcheroo nearly every time.
One day, hopefully containers will allow us to have consistent state everywhere.
I think that the big problem with shell is that it doesn't really offer the right abstractions for a lot of this: one doesn't (normally) want to run:
    if [ ! -d /opt/foothing ]
    then rm -f /opt/foothing && mkdir /opt/foothing
    fi
    cd /opt/foothing
    tar xf /tmp/instpkg.tar.gz
    sed -e s/QQQbarvalQQQ/$BAR_SETTING/ -i /opt/foothing/config
    …
Normally, one just wants to install & configure foothing. Abstracting that away in shell is possible but a pain: it doesn't really have a rich language for composing paths and other variable values; quoting is a right royal pain; by the time one's written a fully-working shell script (note that the snippet above has no error-handling, breaks if /opt doesn't exist, breaks if $BAR_SETTING contains whitespace and doesn't enable one to override the foothing installation location), it's nearly impossible to read & understand.
The Right Answer would involve a language which enabled one to create one's own syntactic abstractions in order to satisfy the general and specific needs of software installation. As an example, it'd be nice to have a WITH-INSTALLATION-DIRECTORY construct, which ensures that a directory exists, ensures that it's owned by the appropriate user, ensures that no other package already claims it (except that a previous version of the currently-being-installed package is okay), registers the directory and everything created in it during WITH-INSTALLATION-DIRECTORY as belonging to the currently-being-installed package, handles errors in a well-defined and useful manner for calling code, and so on and on and on.
And of course even that isn't high-level enough: If I'm installing bazit, which depends on foothing and quuxstuff, then I'll want to call something which ensures they exist. Or maybe there's an optional 'dependency,' and I want to do certain things if they exist and certain if not.
And maybe it's not low-level enough either. What if I want to override one particular sort of installation behaviour, but not the rest? What if I want to install a package in my own account, as myself? Wouldn't it be cool if I could set a few variables and the package manager Just Worked™?
As another user indicated, what all these tools really need is to be Lisp: versionable data which is code. As Shivers's work with scsh demonstrated, a Lisp-like language can be very pleasant to write POSIX applications in. Macros enable one to create useful syntactic constructs which make meaning, rather than details, clear. Dynamic variables (as in Common Lisp) easily enable customisation based on the call stack. CL's condition and restart systems are the gold standard for error signalling and recovery.
Red Hat has a history of buying closed source software and releasing it as open source (KVM, Gluster, CloudForms, etc.), so I would expect Tower to be open sourced - assuming Ansible has the rights to all the code and doesn't license it from someone else, of course.
Correct. There was a time, pre-acquisition, when some separate management-console bits were not open source, but nobody cared about those bits anyway and now they're long gone. The "GlusterFS" file system part, which is the part everyone except one misguided CEO (now at Docker) cared about, has always been completely open source.
This does make me wonder how it'll impact their eventual move to Python3. They've been hesitant to move due to a lot of their customer base being on RHEL5/CentOS5, I can't imagine that this move will help matters.
I always wonder why cf-engine is so unpopular on HN. It has some nice advantages like no dependency on ssh or a scripting language. It is not as simple to get started, though.
I'm working on a CFEngine Tutorial to help people get started. I was inspired by Michael Hartl's "Learn Enough Tutorial Writing To Be Dangerous" talk at LA Ruby Conf to finally turn my CFEngine course materials into a book. It'll be my first commercial product so I'm excited!
SSH's major problem here is performance. When you need to orchestrate dozens of servers with many separate tasks, then the slowdown is very noticeable.
Hi all. I am a GM at Red Hat, and I have been deeply involved in the acquisition of Ansible. It's great to see so much interest and so many good questions. I hope that my blog post can help answering some of them:
http://www.redhat-cloudstrategy.com/why-did-red-hat-acquire-...
I think that this will fit nicely with the Cockpit project which should "revolutionize" remote administration (it isn't bad). So now Red Hat wants to add something for wholesome orchestration, which was really needed in that space.
I like what I've seen of ansible but a lot of their modules are a complete mess. I've run into problems with both their AWS and Docker modules and ended up resorting to a series of tasks running shell commands because it was more reliable and didn't require me to install a specific version of some python library on every single machine.
Has Red Hat ever done this with anything? I think a lot of their products exist as open-source versions. Satellite -> Katello, OpenShift is open-source, CloudForms -> ManageIQ, Red Hat Identity Management -> FreeIPA, RHEL -> CentOS. I suspect the list goes on and I have a hunch they will open-source Tower in the near future.
If you're new to Ansible, I've created about two hours of free screencasts on it. It's a very simple to use and understand configuration management tool.
Either it's a strange coincidence or someone spent a lot of time and care creating astroturfing accounts. All are old, have comments and submissions, seem like real people.
Are you for real? Look at my account history. You really think I'd spam HN. Been on flights all day or I would have addressed these comments earlier. Suspect these are just happy people. More than willing to address this with HN admins if they have any questions.
Probably depends on who needs it. With almost no revenues, monetization is only possible through enriching some platform. Maybe other major distro vendors will look at Chef, Puppet and Salt now and find them more expensive.
Here is the financial disclosure from RedHat. NOTICE THE FIRST SENTENCE.
The acquisition is expected to have no material impact to Red Hat's revenue for the third and fourth quarters of its fiscal year ending Feb. 29, 2016 (“fiscal 2016”). Management expects that non-GAAP operating expenses for fiscal 2016 will increase by approximately $2.0 million, or ($0.01) per share, in the third quarter and approximately $4.0 million, or ($0.02) per share, in the fourth quarter as a result of the transaction. Red Hat calculates non-GAAP operating expense by subtracting from GAAP operating expense the estimated impact of non-cash share-based compensation expense, which for fiscal 2016 is expected to increase by approximately $1 million for each of the third and fourth quarters, and amortization of intangible assets, which for fiscal 2016 is expected to increase by approximately $1 million for each of the third and fourth quarters, in addition to transaction costs related to business combinations, which are expected to increase by approximately $1 million in the third quarter. Management expects GAAP operating expense to increase for fiscal 2016 by approximately $5 million, or ($0.02) per share, in the third quarter and approximately $6 million, or ($0.02) per share, in the fourth quarter as a result of the transaction. Excluding the operating expense impact as noted above to GAAP and non-GAAP operating margin and GAAP and non-GAAP earnings per share, Red Hat is otherwise re-affirming its fiscal 2016 third quarter and full year guidance provided in its Sept. 21, 2015, earnings press release.
saltstack is a bit harder to get started but it is, IMHO, a much larger product. I am implying saltstack should be worth more. That said, ansible may prove to become more popular and grow to be bigger than saltstack.