> For me containerization was always about deterministic environments and ease of deployment instead of performance and clustering. But even with these advantages I am currently not using any solution for that.
You can get 99% of the way there using a stable distribution and a configuration management system (ansible, chef and the like). It's much, much simpler than running an orchestration service. I feel most people don't need containers and orchestration, just config management on top of redundant system designs.
To be honest I only very recently got to know ansible and related tech, so I may be missing an opportunity to learn something. Even so, I think you are forgetting the DEV part. With ansible and chef you can make a deployment to the real infra. With containers you can have infra locally in your DEV environment and have clean slates. Keeping the DEV environment close to production is crucial for devops. There is nothing more annoying for developers than having something work locally and then needing some weird quirk for production/CI. It breeds a lot of political infighting and resentment toward devops. I saw this as the tech lead for the build system at a Fortune 500 company.
Ah, they have a Red Hat-based distro. Ultra stable! The problem is that nothing from outside the company works out of the box, leading to blessed machines. A disaster that led to so many unofficial workarounds that it is not funny. Lol, the kernel is so old it cannot even run Docker :)
Ubuntu is better, but ultra-stable machines tend to accumulate massive customizations that are very hard to carry forward when you finally want to upgrade. It was very common to hit the end of life of an LTS release and then have the server upgrade turn into a nightmare because of how much had changed in the meantime.
> With containers you can have infra locally in your DEV environment and have clean slates.
True, but you can do that with plain system containers such as lxd, rather than having it bundled with the huge paradigm shift that Docker comes with.
My experience with lxd is very limited. Actually I worked with liblxc, which is the underlying technology, and I kind of disagree with you. The paradigm of lxd is much more foreign to me than docker. I am pretty familiar with my application and the distro of the container from a user perspective. I am definitely very unsure of myself around cgroups and kernel namespaces. In the end my application is connected to my business/work orders. Kernel minutiae are not, and the technical skill requirement is much higher. That puts a higher price tag on my team's human resources.
> The paradigm of lxd is much more foreign to me than docker.
The paradigm of lxd is pretty much exactly the same as the paradigm of a regular distribution installed on bare metal or inside a VM. If you can operate a regularly installed distribution, then you can operate inside a lxd container. The commands to create and destroy lxd containers are trivial ("lxc launch ubuntu:bionic" for example).
> Kernel minutiae are not, and the technical skill requirement is much higher.
I'm not sure why you think you need to know kernel minutiae, cgroups or kernel namespaces. Operating lxd needs none of that.
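For concreteness, the whole day-to-day surface is a handful of commands; a minimal sketch (the container name "web1" is just illustrative):

    # Create and start a container from a published Ubuntu 18.04 image
    lxc launch ubuntu:bionic web1

    # Get a shell inside it and work exactly as you would on any Ubuntu box
    lxc exec web1 -- bash

    # Throw it away when you're done
    lxc stop web1
    lxc delete web1

Inside, it's just a normal distribution: apt, systemd, whatever config management you already use.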
> I am pretty familiar with my application and the distro of the container from a user perspective.
So, speaking as a dev, I feel like Docker's killer app is that it makes the config management a lot easier.
Dockerfiles give you a fairly easy and consistent way to express, "The runtime environment needs to have Python 3.5 and these packages," in a format that doesn't introduce too many concepts over and above the basic command line junk you'd use to manage your environment without Docker. If your stack requires multiple services, docker-compose gives you another fairly easy way to describe what all goes into that. And then it gives you a _super_ easy interface for starting and stopping all those services, keeping track of what you have running, all of that.
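As a rough illustration (the versions, file names and the "app"/"db" service names below are placeholders, not anything from a real project), the Dockerfile side might look like:

    FROM python:3.5-slim

    # OS-level packages the app is assumed to need
    RUN apt-get update && apt-get install -y --no-install-recommends libpq-dev \
        && rm -rf /var/lib/apt/lists/*

    WORKDIR /app

    # Python dependencies pinned in a (hypothetical) requirements file
    COPY requirements.txt .
    RUN pip install -r requirements.txt

    COPY . .
    CMD ["python", "app.py"]

and a docker-compose.yml along these lines ties it to its database:

    version: "3"
    services:
      app:
        build: .
        depends_on:
          - db
      db:
        image: postgres:10
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

After that, "docker-compose up -d" and "docker-compose ps" are most of what you touch day to day.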
(And it's all fairly disposable, which is nice, since, as devs, we tend to break things. TBH, if Docker has done nothing else for me, it's that it's turned nuking PostgreSQL to get back to a clean install into a 10-second process instead of a 30-minute one.)
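(Assuming a compose setup like the sketch above, the 10-second version is roughly:

    # Stop everything and drop the named volumes, i.e. the database's data
    docker-compose down -v

    # Bring it back up against a freshly initialized database
    docker-compose up -d

and you're back to a clean install.)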
It's not really that simple, and I've spent my fair share of time screaming at Docker for being flaky and having confusing under-documented configuration. (And I don't think I'd use it at all if I were working on a platform that weren't so annoyingly susceptible to systemic dependency hell. But worse is better, so the unix philosophy won, so here we are.) But eventually you get over that hump, and it starts feeling fairly easy to understand.
I don't know Chef, but I've seen Ansible used in production, and it just doesn't seem nearly so attractive. It could just be how it's being used, but it felt like there was this infinite regress of complexity where everything was tied to something else and you have to have been the person who built it to understand it, kind of like the bad old days when people were trying to put too much smarts into the database itself, so it would just become this rat's nest of triggers and whatnot. I'm sure it's not that bad... but my initial impression was that Docker is great for scratching a developer's itches but slightly sucks for ops, though maybe it's still worthwhile there if you're dealing with microservices or elastic scaling or something like that and you can use Kubernetes to smooth over some of the flakier bits. Ansible is much more for ops, and does a great job there, but I don't see it scratching many dev itches at all.
> So, speaking as a dev, I feel like Docker's killer app is that it makes the config management a lot easier.
It starts with a Dockerfile, which is a limited shell script, and it does not get any better beyond that. Shell scripts are simple, I'll give you that, but please don't sell them as some magic bullet. Dockerfiles are no configuration management system.
I've seen my fair share of hairy chef, puppet and ansible in the wild. Don't judge config management by those. I've also seen beautiful ansible setups that take you from dev environments all the way up to full infrastructure provisioning and blue-green deployment, roughly along the lines of the sketch below.
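To give a sense of what I mean (host group, package and service names below are made up purely for illustration), a minimal playbook is just a readable list of desired states:

    - hosts: appservers
      become: true
      tasks:
        - name: Install the application runtime
          apt:
            name: python3
            state: present

        - name: Render the systemd unit for the app
          template:
            src: myapp.service.j2
            dest: /etc/systemd/system/myapp.service
          notify: restart myapp

      handlers:
        - name: restart myapp
          systemd:
            name: myapp
            state: restarted
            daemon_reload: true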
> Dockerfiles are no configuration management system.
Y'know, we might violently agree. That is a much more concise statement than my rambling attempt to explain why I think Docker is so much more palatable for development workflows.
You're right, it is no magic bullet. And I misspoke when I said "configuration management"; I forgot that that's a term of art in operations. By "management" I really just meant "stick it all in one or two files so I can get my checklist down to one step, and manage shared packages in a way that's at least a little bit less kludgey than simply abusing environment variables." So I find that it saves me some yak shaving, and for that I can deal with it under certain circumstances.
I actually hate using it for deployment or production config management, because IMO it seems to do a crap job at it. And it does a crap job at it precisely because of the features (or lack of features) that make it convenient for development. Even using it to manage our integration tests' runtime dependencies is kind of a hot mess. But I'm willing to concede that, together with Kubernetes, it might be nice for cloud-native elastic scaling microservice-y stuff, insofar as it seems to be popular for that. I don't actually know firsthand; I'm allergic to complexity, so I try to avoid building things that way.