Yes it is, but the future is not evenly distributed, to paraphrase William Gibson. For many enterprises, even Ansible's current model is already way out there in the distant future.
Also, I think Ansible's idempotent model actually works nicely with immutable infrastructure. Why? Because it helps while you're developing the stack: while iterating on it, you probably don't want to rebuild the whole thing from scratch every time. Of course you can play funny games with caching of remote packages and so on, but that's getting into Ansible territory anyway.
So I think a good model for immutable infrastructure is to use a tool like Ansible to develop the stack, then in production you would use the same tool to spin up immutable instances.
I was using Ansible with Packer (https://www.packer.io/) to build AMIs (Amazon Machine Images). I'm spending a lot more time with Docker these days though.
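Roughly, the Packer template just points an Ansible provisioner at your existing playbook. A minimal sketch (the region, source AMI, image name and playbook file below are only placeholders):

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0abcdef1234567890",
        "instance_type": "t2.micro",
        "ssh_username": "ubuntu",
        "ami_name": "myapp-{{timestamp}}"
      }],
      "provisioners": [{
        "type": "ansible",
        "playbook_file": "site.yml"
      }]
    }

Packer boots a throwaway EC2 instance, runs the playbook against it, snapshots the result into an AMI and tears the instance down; you then launch fleets from that AMI instead of re-running Ansible against live hosts.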
I can see how that would work for stateless services. Just build a new image and discard the old one.
But what do you do when you want to change your MySQL config file? Create a new image and somehow transfer the data? Or are the datastores somehow externalized? Then how do you synchronize shutting down the old image and starting the new, updated one, so that they never access the store at the same time?
The linked article kind of waves these issues away ('externalize state in Cassandra or RDS'). Then am I supposed to use two mechanisms/tools to run my infrastructure? Docker for stateless servers and something like Ansible for stateful servers?
'I see it as conceptually dividing your infrastructure into "data" and "everything else". Data is the stuff that's created and modified by the services you're providing. The elements of your infrastructure that aren't managed by the services can be treated as immutable: you build them, use them, but don't make changes to them. If you need to change an infrastructure element, you build a new one and replace the old one'
More in the actual full transcript and in the video.
This is all well and good, but the devil is in the details. Like rdeboo says, what happens when you do need to change the datastore config? Databases famously need plenty of care and attention to achieve optimal performance; they are decidedly not fire-and-forget systems. How do I tweak my PostgreSQL performance parameters in the immutable world?
You could keep Postgres in an immutable image, with only /var/lib/postgres in a separate volume. Updating the PG config would then just be a matter of unmounting the volume, replacing the image and re-mounting it. (Docker automates this with its "data volumes", but you can do it manually too.)
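As a sketch (the image tags are made up, and the official postgres image actually keeps its data under /var/lib/postgresql/data rather than /var/lib/postgres):

    # keep the data in a named volume, outside the image
    docker volume create pgdata
    docker run -d --name pg \
        -e POSTGRES_PASSWORD=changeme \
        -v pgdata:/var/lib/postgresql/data \
        mycorp/postgres:conf-v1

    # config change: bake a new image with the new postgresql.conf,
    # then swap the container while keeping the same volume
    docker stop pg && docker rm pg
    docker run -d --name pg \
        -v pgdata:/var/lib/postgresql/data \
        mycorp/postgres:conf-v2

The image carries the config and binaries, the volume carries the data, and only the former ever gets replaced.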
In theory yes, but that strategy doesn't always work: sometimes the data store's on-disk format changes between releases, requiring an upgrade or a data migration.
That may rule out the simple unmount/replace-image/remount workflow, but it doesn't invalidate the separation between the mutable and immutable parts of the DBMS.
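Worst case, the migration just becomes an explicit step between the old and new instances. A hedged pg_dump/pg_restore sketch (container names, database name and image tag are illustrative):

    # dump out of the old instance
    docker exec pg-old pg_dump -U postgres -Fc mydb > mydb.dump

    # bring up the new release on a fresh data volume
    docker volume create pgdata-new
    docker run -d --name pg-new \
        -e POSTGRES_PASSWORD=changeme \
        -v pgdata-new:/var/lib/postgresql/data \
        postgres:16

    # once it has finished initialising, restore into it
    docker exec pg-new createdb -U postgres mydb
    docker exec -i pg-new pg_restore -U postgres -d mydb < mydb.dump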
We're using it for immutable infrastructure: we build images with Ansible and deploy those images. It's basically the same as a Dockerfile, except that instead of a container you end up with a right-sized machine. I don't really get the need to containerise everything unless you are buying big metal and deploying on top of that.
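The playbook you feed into the image build is just ordinary Ansible; a trivial sketch (package names and paths are only illustrative):

    ---
    - hosts: all
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present

        - name: Ship the app's nginx config into the image
          copy:
            src: files/app.conf
            dest: /etc/nginx/conf.d/app.conf

The difference from a Dockerfile is mostly where the output lands: an AMI (or similar machine image) instead of a container image.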
http://michaeldehaan.net/post/118717252307/immutable-infrast...