It's great that hftguy thinks Google Container Engine is stable (I work on it), but I'm sorry to say it's very easy to prove that it is, in fact, running Docker on the nodes.
You can just SSH into one and see for yourself.
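For the curious, a rough sketch of what that looks like (the cluster, instance name, and zone here are made up; adjust to your own project):

```bash
# List the GCE instances backing the GKE cluster's node pool
gcloud compute instances list

# SSH into one of the nodes (name and zone are placeholders)
gcloud compute ssh gke-mycluster-default-pool-12345 --zone us-central1-a

# Once on the node, the Docker daemon is right there
# (sudo may or may not be needed depending on the image)
sudo docker version
sudo docker ps
```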
Kubernetes was built from the ground up to orchestrate Docker. CoreOS did a lot of work to make it possible to swap rkt in for Docker's engine, and the CRI (Container Runtime Interface) is now generalising that so that there is a clear abstraction between the kubelet and the engines it orchestrates. Read about it here: http://blog.kubernetes.io/2016/12/container-runtime-interfac...
If you want to do things that are different from what we provide support for on GKE as a Managed Service (tm), you're able to run your own Kubernetes clusters on GCE. (We do let you run a Kubernetes alpha version, but only on non-supported clusters that self-destruct after 30 days.)
We very recently moved from some bare-metal pet machines into Google GKE and couldn't be happier.
Honestly the hardest thing is keeping up with how fast Kubernetes evolves and gets better and better. The same goes for all Google services (pubsub, bigquery, etc)
We started the migration on Kubernetes 1.1 and are now live on 1.5.1
Even using it for things we probably shouldn't (old stateful applications) without a single problem. At least no Docker related problems.
I don't know, this article seems to be very presumptuous. A lot of bold claims, little backing and, as you state, some pretty false claims.
The default distro is Container Optimised OS: https://cloud.google.com/container-optimized-os/docs/. It's derived from Chromium OS, which means we can take advantage of the team that builds images for the many devices which use it, and the security response infrastructure around it.
So, a customized Google OS, but using the official Docker package? Oh wait. How could there be an official Docker package for an OS no one knew existed?
I assume a customized kernel as well? And where do the overlay drivers come from? How many custom back-ports and how much custom development?
The article is not on point to say GKE replaced Docker entirely, then... but you are not on point either to deny it and pretend that you are running Docker on anything remotely common.
CoreOS is a very similar idea, and given that, we don't make builds of the container images available outside GCE.
Customers who need specifics of the OS (that they can't find by just looking at the kernel config on the node) are welcome to open a support ticket with us.
I am neither a Docker expert nor evangelist, and I have my own gripes and frustrations with it, but this article is full of misinformation and FUD. To wit:
> CoreOS is an operating [system] that can only run Docker and is exclusively intended to run Docker.
No, it isn't. (Maybe if you substitute "Docker" with "containers.")
> First, the main benefit of Docker is to unify dev and production. Having a separate OS in production only for containers totally ruins this point.
No, it doesn't. Completely the opposite, in fact: Docker makes it possible to not care about OS differences between dev, test, and prod.
> Docker on Debian is major no-go
> Docker is 100% guaranteed suicide on Debian 8 and it’s been since the inception of Docker a few years ago
> Debian froze the kernel to a version that doesn’t support anything Docker needs and the few components that are present are rigged with bugs.
rubs temples It's Linux. It's Debian. You can run any kernel you want. There are plenty of repositories out there with binary kernels, including Debian's own back-ported 4.9 kernel, or you can build one from source. Because, you know, open source. I've run many instances of Jessie on Linode with their 4.8 and 4.9 kernels and it's no problem. Hundreds of companies are running Docker in production on Debian and Debian-derived systems.
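On Jessie, for instance, pulling in the back-ported kernel is a one-liner once backports are enabled (a sketch; mirror URL and package names are the standard ones at the time of writing, but double-check against the Debian docs):

```bash
# Enable jessie-backports, then install the back-ported (4.9) kernel
echo "deb http://ftp.debian.org/debian jessie-backports main" | \
  sudo tee /etc/apt/sources.list.d/backports.list
sudo apt-get update
sudo apt-get install -t jessie-backports linux-image-amd64
sudo reboot
```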
> I am not aware of any serious companies than run on Ubuntu.
Just because you are not aware does not mean they do not exist. How about Netflix, Snapchat, Dropbox, Uber, and Tesla for starters?
> I cannot comment on the LTS 16 as I do not use it.
It's been out since last April. If you're serious about this survey, this is a no-brainer.
> I received quite a few comments and unfriendly emails of people saying to “just” use the latest Ubuntu beta
What? No. Just use the latest LTS.
> I am moderately confident that there is no-one on the planet using Docker seriously AND successfully AND without major hassle.
I have tried to use Docker on very busy processing systems with short to medium running tasks. My efforts were on Ubuntu 16.04... because any non-LTS is almost a non-starter.
There were two major obstacles: the filesystem driver and the Docker daemon. I ended up settling on 1.11.{i forget}, because virtually every other version was unusable. As sad as it may sound, I was tracking two metrics: the probability that a container would start, and the time until the Docker daemon reached a state of irrecoverable deadlock.
Personally I found that containers (on a highly contended system) launched at a ~98% success rate, with a time-to-deadlock somewhere around 6,500 container starts/stops (it was a pretty fat-tailed distribution). For a busy system... those numbers equate to an administrative headache.
And on Ubuntu/Debian the defaults (the thing the majority of non-specialists are probably using) were far worse. The only filesystem driver which seemed to work at all was devicemapper direct-lvm.
-
If you have developers on your team who can spend their time figuring out the one magic incantation that makes Docker work most of the time, you're fine. Everybody else should follow his advice for using service providers.
Were you using Device Mapper purposefully, or just because you couldn't get a more typical storage driver to work? The general setup instructions recommend either 1. installing linux-image-extra and using AUFS (for kernel versions < 4) or 2. modifying the dockerd invocation to specify the OverlayFS storage driver with `-s overlay` (for kernel versions >= 4).
Admittedly, the latter strategy is a bit frustrating and counterintuitive on 16.04. When you install Docker it will start automatically and hang because it can't find the AUFS driver and Device Mapper isn't set up. The solution is to either 1. modify policy-rc.d to prevent services from automatically starting or 2. set up Device Mapper (install dmsetup and run `dmsetup mknodes`) before installing Docker and changing the storage driver. Unfortunately, these workarounds are not particularly well-documented.
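Concretely, the two workarounds look something like this (a rough sketch, not official install docs; paths are the standard Debian/Ubuntu ones):

```bash
# Option 1: tell apt not to auto-start services during package install,
# so dockerd doesn't hang hunting for a usable storage driver
printf '#!/bin/sh\nexit 101\n' | sudo tee /usr/sbin/policy-rc.d
sudo chmod +x /usr/sbin/policy-rc.d

# Option 2: set up Device Mapper before installing Docker
sudo apt-get install -y dmsetup
sudo dmsetup mknodes

# ...then install Docker as usual and set the storage driver explicitly
```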
I would be interested to see how Docker 1.12 or 1.13 with a modern kernel and OverlayFS would be able to handle your described workload.
So you are saying that all default settings are unusable in production, to the point that Docker might not start at all, and the only cure is a series of five obscure, advanced system setup/configuration steps that are almost impossible to figure out by oneself and are not documented, yet they should be totally obvious to anyone using Docker, right?
FYI: It's because of this sort of bullshit that there are articles called "Docker in Production: A History of Failure".
No, that's not what I'm saying. I'm saying that Ubuntu Server 16.04 LTS comes out of the box without AUFS and Device Mapper, and that you either need to enable one or the other before installing Docker or prevent Docker from starting automatically at install time. It sucks, yes, but your description of the problem and its solution is beyond hyperbolic. Googling "docker ubuntu install hang" takes you right to the GitHub issue with the solution at the bottom. And I think it's fixed in Docker 1.13 (it will prefer OverlayFS if AUFS and Device Mapper are not available), although I have not tested it.
I meant to say, Device Mapper was the only solution that worked in a production setting for me.
I no longer work with that company and so I can't tell how those improvements would change the reliability. Presently I am using docker with btrfs on a low-throughput workload and it seems to work just fine.
Tangentially related - I have been using OverlayFS without Docker for about a year now. It's pretty great. My read-layers can be on an NFS drive and the penalty for writes is relatively small. It basically gives me 90% of what I wanted from Docker in the first place.
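For anyone curious what "OverlayFS without Docker" means in practice, the bare mechanics are roughly this (directories are made up; the lower layer can live on a read-only or NFS mount):

```bash
# One read-only lower layer, one writable upper layer, plus a work dir
mkdir -p /srv/lower /srv/upper /srv/work /srv/merged

# Mount the union: reads fall through to lower, writes land in upper
sudo mount -t overlay overlay \
  -o lowerdir=/srv/lower,upperdir=/srv/upper,workdir=/srv/work \
  /srv/merged
```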
It's Linux. It's Debian. You can run any kernel you want.
The guy works in finance, not an area where you can typically say "oh, no worries, we'll just push out this distro with a non-standard kernel." Enterprise IT, and specifically finance (and healthcare) have different requirements. Currently working with a client where not only the full technology stack has to be certified by accountable vendors, but all the internal documentation and customer facing content has to pass strict legal reviews.
"Here is Debian with a custom kernel" won't fly. Nether will "Here is Debian" - Ubuntu LTS will barely pass the line, and that will be after a whole lot of timewasting and ass-covering from many people involved.
I have (or had, I guess) an 11 year career in Fortune ~100 finance and healthcare. Agreed on all points. However, they're already experimenting with Debian, Ubuntu LTS (just not 16.04), kernels, custom AMIs, and different Docker versions. It's not a stretch to imagine they have the ability to deploy a back-ported kernel, or build and/or use an AMI that includes one. Also, HFT practices (or lack thereof) are far from the staid enterprise practices you may be thinking of.
Currently using docker with a Norwegian government organisation. We are very happy with it. Been running production since December. And yes, they run on Ubuntu.
Yes, it's new, and it changes regularly. This is why you do what you do with any other piece of software in an enterprise environment: Pick a stable release (e.g. 1.7), deploy it, then upgrade your development environment to the next stable release (e.g. 1.11) and work through the breaking changes. Regressions are frustrating, I'll agree, but it's free software: report the regressions and help the community fix them.
> Docker Issue: Can’t clean old images
A built-in feature to do so was added in 1.13 (a few months after this article was published).
> The only way to clean space is to run this hack, preferably in cron every day: docker images -q -a | xargs --no-run-if-empty docker rmi
That's not a hack, that's how you do things in Unix.
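For reference, here are the 1.13 built-ins and the pre-1.13 cron approach side by side (a sketch; flags may differ slightly between versions):

```bash
# Docker >= 1.13: built-in cleanup
docker image prune -f      # remove dangling images
docker system prune -f     # also stopped containers, unused networks, etc.

# Pre-1.13: the cron approach from the article
docker images -q -a | xargs --no-run-if-empty docker rmi
```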
> As a long-standing goal, the AUFS filesystem was finally dropped in kernel version 4.
> There is no unofficial patch to support it, there is no optional module, there is no backport whatsoever, nothing. AUFS is entirely gone.
While the first point is technically true (actually, based on some light googling, I'm not sure it was ever merged in the first place), many distributions provide it as an optional kernel module. For example, Ubuntu provides it in linux-image-extra. Yes, Virginia, you can build kernel modules from source.
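On Ubuntu, for example, that looks roughly like this (package names track the running kernel, so treat this as a sketch):

```bash
# Install the extra kernel modules for the running kernel (includes aufs)
sudo apt-get update
sudo apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual

# Check that the module loads
sudo modprobe aufs
lsmod | grep aufs
```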
> How does docker work without AUFS then? Well, it doesn’t.
It does: Btrfs, Device Mapper, ZFS...
> So, the docker guys wrote a new filesystem, called overlay. [..] Note that it’s not backported to existing distributions. Docker never cared about [backward] compatibility.
Docker supports multiple storage drivers, one benefit of which is to be able to support older systems: AUFS on older distributions, OverlayFS on newer. The container abstraction allows you to not care about the underlying storage subsystem.
> Right now. We don’t know of ANY combination that is stable
You don't need a custom kernel to run Docker on Linode. Just change the storage driver to overlay (as you do in this stack script) and you're good to go. Otherwise this is a very nice script. I typically install the vim-nox package (instead of vim) but I'm not sure it makes a difference.
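If it helps anyone, switching the driver can be as simple as this (assuming Docker 1.12+, which reads /etc/docker/daemon.json; on older versions you'd pass `-s overlay` to dockerd instead):

```bash
# Tell the daemon to use the overlay storage driver
echo '{ "storage-driver": "overlay" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# Confirm
docker info | grep -i 'storage driver'
```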
I get the sense that this person wrote a sarcastic, vaguely entertaining piece that drew lots of views. Hoping for lightning to strike twice, they did this again because who doesn't love traffic? That seems to be the entire motivation behind this post, from what I can tell.
There's just not a ton of substantiated content here. It's mostly really lousy anecdata like:
> Sadly, I am not aware of any serious companies than run on Ubuntu.
I am troubled by the direction of Docker and there are serious issues, some of which were raised in this post. However, whatever signal there is in this post is lost in a sea of noisy ranting.
My advice to anyone who doesn't already have strong opinions:
* Complement any reading that you do with your own research.
* Don't try to invent your own container orchestration system.
* If you aren't sure how to best do something, ask someone!
And for the love of everything holy:
* Don't run stateful systems on Docker if you can't handle failure or data loss!
Docker and orchestration systems like Kubernetes can be an excellent pairing. It's going to require research, a change in how you develop, build, test, and deploy systems, and a gradual building of operational experience. It will not be a quick process, and it's not for every org. But for some orgs and usage cases, it's an excellent way to go!
I'm not sure I've personally been able to figure out if it was sarcasm or willful ignorance, but yeah, this article annoyed the pants off me just as much as the first in the series. The fact that it's now worked twice to hit the front page is also a little unsettling, because apparently one of the best ways to reach people on HN now is to just post a bunch of misinformed FUD (with !!Attitude!!) and jump on board the rocket ship!
> CoreOS is an operating that can only run Docker and is exclusively intended to run Docker.
This is a patently false statement and needs to be revised, especially given:
> I will not comment on it.
P.S. I'm getting downvoted for speaking the truth, no matter how pedantic it may seem. CoreOS can run rkt containers and is not exclusively intended to run Docker. One possible reason is because Docker is the 800 lb. gorilla in the room and having options is always good.
You're not wrong, CoreOS can also run arbitrary go binaries (or anything else that can be statically linked) rigged up into systemd or fleet units, without any containers at all.
> CoreOS is an operating that can only run Docker and is exclusively intended to run Docker.
This shows an astonishing level of ignorance for someone who claims to have done their research.
> First, the main benefit of Docker is to unify dev and production. Having a separate OS in production only for containers totally ruins this point.
What? This makes no sense. Your images will be the same between dev and prod, even if the host running the containers is different - which is really the whole point: if you build and run an image in dev, it should run identically in prod.
> If you like playing with fire, it looks like that’s the OS of choice.
We spent the last year+ running containers on Centos7 with no problems from the OS. Whatever issues we did encounter were either transient bugs with Docker or our own configuration. Perhaps we got super lucky, but we were running 120+ containers on 12 hosts, so I would've expected at least some evidence of significant problems within that timeframe if it were really such a risky setup.
> It’s not possible to build a stable product on a broken core, yet both Pivotal and RedHat are trying.
We've been running OpenShift Origin since March of last year, it's been very stable during that time - the few issues we did encounter were due to our own mistakes, and were usually fixed just by changing some configuration and restarting the host.
While there are undoubtedly problems with Docker, and likely many of the issues you brought up are very real, there are many teams like mine that use it successfully, and painlessly. Docker isn't the tire fire you want to make it out to be.
> What? This makes no sense. Your images will be the same between dev and prod, even if the host running the containers is different - which is really the whole point - if you build and run an image in dev, it should run identically in prod.
I think the author is saying that while this is the point, it's not the reality. As it is right now, things can run absolutely fine in dev and then due to prod running a different operating system, things break.
it is virtually irrelevant what you use to run and operate this sort of cluster. it gets tricky when you pass 100 nodes and even trickier when you get to 1000+ nodes in a non-linear fashion. Docker certainly has issues that are pretty severe when you are in the financial space and not a big deal if you are running a popular blog like medium.com for example.
I have to agree when it comes to the business perspective. I can't imagine how Docker is going to survive financially.
I use Docker at work, and I 1) don't have any loyalty to their brand and 2) try as hard as I can to abstract away their specific APIs.
For example, we use Convox [0] to deploy containers to AWS. I could care less what Convox and/or AWS are doing under the hood. They could switch out Docker for rkt under my feet, and I probably wouldn't even notice.
It is kind of like POSIX to me. My apps are designed to run in a POSIX environment, not specifically CentOS or Debian. And just like it's easy for a new Linux distro to come along, give me POSIX, and give me some other shiny features I like, it will be easy for any competitor to come along and replace the Docker interfaces I use.
> I am moderately confident that there is no-one on the planet using Docker seriously AND successfully AND without major hassle.
Though the tone of this article is very negative, the conclusion is interesting to me as someone that's considered Docker but hasn't done much with it beyond experimenting locally.
I'm assuming if there's any forum that has users that can speak to their use of Docker in production, this would be the place.
We've been running docker in production since late 2013 (data science SaaS). We had some growing pains in the beginning, but since then it has been smooth sailing and a tremendous help in getting our services deployed reliably. However, we have always been very conservative when it comes to upgrading docker and have our own custom glue in place. Still, the statement that "no-one on the planet using Docker seriously AND successfully AND without major hassle" seems majorly hyperbolic.
We're technically using it in production for my site, www.bugdedupe.com, but we're still in beta, so we haven't experienced heavy usage.
We've been using Kubernetes to deploy Docker containers on Google Container Engine, and while there have been a few issues due to Docker/Kubernetes (namely, getting the containers to expose localhost to each other, and to expose themselves to the world), the issues that we've had so far have been issues that we would have been bitten by eventually. Namely, if we hadn't been forced to deal with the issues now, we would have been screwed later. There have been some weird bugs due to the internal environment that containers use (we had extremely slow DNS lookups that caused our request times to shoot up to 9s each). These issues have been transient though, so it's not clear whether it's Docker's fault or whether we're making mistakes in our code.
Docker has made our deployment much easier. You just build & push your container, and you instantly have a versioned deployable instance of your code. Kubernetes makes it extremely easy to rollout or rollback containers, so I have nothing but good things to say about containers.
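To give a flavour of that workflow (image and deployment names here are made up; the push command assumes a Google Container Registry setup like ours):

```bash
# Build and push a versioned image
docker build -t gcr.io/my-project/api:v42 .
gcloud docker -- push gcr.io/my-project/api:v42

# Roll it out, watch it, and roll back if something looks off
kubectl set image deployment/api api=gcr.io/my-project/api:v42
kubectl rollout status deployment/api
kubectl rollout undo deployment/api
```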
> Google offers containers as a service, but more importantly, as confirmed by internal sources, their offering is 100% NOT Dockerized.
> Google merely exposes a Docker interface, all the containers are run on internal google containerization technologies, that cannot possibly suffer from all the Docker implementation flaws.
I don't know the above poster but we run over 4500 docker containers on AWS distributed using kubernetes. Docker has indeed caused us a problem or two in the past, but actually it's largely smooth sailing and most of our problems are with the app layers and both docker and kubernetes have largely made our lives much easier.
A quick skim suggests that sits on about 8 TB of RAM and 1,000 vCPUs spread over 100 machines. It's not the biggest stack in the world, but it's doing a good job as our little internal PaaS.
We have some non-aws versions too, but I don't have metrics on those right now.
Our experience is that it has made running servers locally a hassle, because Docker for Mac sucks beyond belief, but it has been awesome in production. For us the real positive is Kubernetes, and tbh Docker would be useless without it. I really could care less about Docker other than that the Dockerfile format is easy enough to use.
We went through the gamut of Swarm etc. and were never able to get any of them to work in a meaningful way.
I think Docker was and still is a great way to popularize the concept of containers along with the smarter tooling around it, like resource schedulers.
By the time we adopted Docker (because it was popular and we realized it was a good vehicle to push some concepts across organizations), we knew we were going to use it as a package system in a distributed scheduling environment (based on Mesos, Zookeeper, HDFS) and that it might get replaced later on. However, it took longer for Docker Inc. and the community to depart from the original ideas (initially very developer-task centric, including trying to figure out how to store data inside containers) and figure out how to enable scheduling, service discovery, health checking, etc. I think Kubernetes did a good job popularizing better practices.
IMO two of the few systems that were sound were Mesos and Kubernetes (and their roots are somehow interleaved), and neither puts Docker as the central piece, nor the container, which is just a building block to achieve the actual goals of running distributed, highly available jobs and services efficiently in a shared environment.
the problem is that docker (the community), or rather the company behind it, focuses on too much, instead of just doing one thing and doing it great.
no, they need to reimplement the world and create their own kubernetes (or some sort of it).
their new website is less accessible than the old version was, etc...
and all the new projects and people.
and the various ways to configure docker. why does a simple container engine need so many parameters to configure it?
the network experience is still not as good as it could be, and docker is still way more complicated to set up than plain AWS and some AWS AMIs.
I'd like to see the rkt/CoreOS folks start showing how their stuff IS accessible, rather than keep spewing this flamewar crap on HN. I've had nothing but excellent comments from the Kube developers, but every time a rkt dev or fan comes on here, this is the crap we get.
I think you might be conflating pot stirrers with actual rkt devs. CoreOS (along with many other companies) are the Kubernetes developers.
FWIW, there was drama with the initial rkt announcement, but we've actually seen Docker adopt most of the original criticisms, and both CoreOS and Docker should be applauded for that.
> Google merely exposes a Docker interface, all the containers are run on internal google containerization technologies, that cannot possibly suffer from all the Docker implementation flaws.
Google running their own containerization tech with a Docker interface seems a bit far-fetched given the level of integration of Kubernetes with Docker. That's totally possible though, I'd like to read more about it.
It is far-fetched 'cause it's not AFAIK true. My evidence for this is having set up a GKE Kubernetes cluster last night; it was definitely running Docker.
TLDR of the first 60% of the blog post - "I cannot comment on the LTS 16 as I do not use it. It’s the only distribution to have Overlay2 and ZFS available, that gives some more options to be tried and maybe find something working?"
Second part TLDR - "If you are locked-in on Docker and running on AWS, your only salvation might be to let AWS handles it for you.". Use Docker-For-AWS that sets up Cloudformation + Swarm mode.
Google offers containers as a service, but more importantly, as confirmed by internal sources, their offering is 100% NOT Dockerized. That is a huge label of quality: Containers without docker.
Actually, this is deliberate. Containerd. For example, rkt support on GKE is "coming soon": https://www.mail-archive.com/google-containers@googlegroups....
Details matter at Amazon scale as they say. I have used Docker ever since it was beta, it never made it to our production systems though. LXD looks pretty promising, unprivileged containers, no container service and so on. When I have more time I will investigate how to roll it on CentOS.
Has Amazon spent a lot of effort on ECS? I am totally ignorant here, but the people who I consider more knowledgeable about AWS things have said in a nutshell that "in 2015 the things I heard about ECS were not good things, and I have basically not heard any new things since then."
Basically stating that ECS was Amazon's attempt to plant a flag in the container-space and that it was half-hearted, not done with the rigor of solutions like k8s that had additional advantages of also being cloud-agnostic, and finally that ECS was not a solution that anyone they knew was recommending or would recommend.
First of all I can assure you that Amazon is 100% committed to containers. Amazon's compute strategy is aimed at three levels of abstraction: instances, containers, lambda. All three are equally important to Amazon.
With regard to ECS feature set relative to K8, the thing to understand is that AWS follows a startup-like strategy of launching an MVP and then letting customer feedback drive roadmap from there. AWS is definitely not half hearted about ECS. Rather AWS is constantly working with customers to define a roadmap for further development on ECS.
To me the most exciting thing about ECS is the open source work being done around ECS, for Blox (a framework for building out custom container scheduling logic), and the ECS agent itself:
With these ECS components you can open issues or even PR's just like any other open source project. Additionally another cool thing we are doing is sharing feature proposals for public comment. You can check out an Amazon employee's public fork of the ECS agent to see a preview of coming roadmap, and we are actively soliciting feedback on proposals such as this one:
hey yebyen - i was one of the very first beta users of ECS, and I now work for AWS. first off, sounds like maybe we're not doing the best job we can be with educating users on ECS - that's on us, and we'll focus on improving.

to dig into your actual question: when ECS launched, the goal was to do a smaller number of things really well, and then listen to the community on what _they_ wanted to see from a container management platform, and grow accordingly. I've seen a number of significant improvements over the last year or so. Off the top of my head: ECR (thanks @coding123), task placement policies and strategies to give developers more control over how they place tasks and use resources, IAM roles for tasks, an event stream for CloudWatch events, service-level autoscaling, ALB support, and a number of smaller configuration changes, like multiple network modes and out-of-the-box support for the awslogs driver. also discovered while writing a workshop the other day that there is a pretty sweet first-run wizard for users just trying out ECS for the first time.

in any case, a couple of main takeaways here: i'm seeing a focus on adding features and services that reduce the operational work for developers - let AWS worry about scaling and managing your cluster infrastructure, and you can focus on building cool stuff. beyond that, though, developers have asked for more control, flexibility, and extensibility, and i think ECS is working on delivering that: the CloudWatch event stream can be consumed by other services, the Blox open source project (and the already open-sourced ecs-agent) let you build custom schedulers/functionality on top of what AWS offers, and placement policies let you customize how ECS consumes your cluster resources.

would love to know where you get your news and why you haven't heard much about ECS, so we can make sure we fix it. if you want to talk more, you can also DM me
From an engineering POV, this line makes the most sense out of all:
> when ECS launched, the goal was to do a smaller number of things really well, and then listen to the community on what _they_ wanted to see from a container management platform, and grow accordingly
I'm coming from a small CoreOS cluster on bare metal, onto Kubernetes and Helm on EC2 nodes. I was a Fleet user before and I loved it! But always with the understanding that when things got better in the cluster space, I'd move from Fleet toward some resource aware scheduler.
I love to read how you're all going down a similar road, however you get there! Thanks for the blox links and I think I will be able to make immediate use of ECR with the rest of my AWS stack.
awesome! keep me posted on how ECR works out for you. If you build something sweet, write about it and let me know! we love blog posts/write ups/all that jazz.
I do want to talk more, but I don't know how to DM on HackerNews... I am also yebyen @ gmail
I went looking for how to enable ECR and I didn't find it, is this a feature you can only use from within ECS?
On my legacy CoreOS cluster I always used Deis components to (theoretically) manage all of the cluster things. Kubernetes offloads many of these concerns to the Cloud provider, and handles others of them using Addons. Can I get ECR as my private registry on a Kubernetes cluster running on EC2 nodes?
Every three months or so I tell myself "right, get your finger out, and do something production quality with containers" and come to the same conclusion this guy did.
Same here. I have been trying to use Docker in production since 2014. Production readiness means very different things at different scales. Have you tried LXD? It looks promising.
Actually, a single JAR and Ansible deployments solve the same problem as well. The number of times I ran into issues with Docker >>> the number of times I ran into issues with single JARs and Ansible. As long as that stays true, it is not justified to move to Docker. The business use case is to have a reproducible & reliable infrastructure, not to use containers at all costs.
Production quality as in "I will be happy to run my clients' stacks on this, and won't worry about being woken up in the middle of the night by PagerDuty" - I have this right now, and although there are more things I want, the price tag is too high to justify. I have been doing VM (and container) work since one of the first versions of VMware blew my mind, but given the fact that things work well for me right now, there is no pressing problem to solve. It's a cool way to do things, and I am fully on board with the ideas behind it, but right now it simply doesn't solve any problems I have, and only introduces a ton of new ones.
Lots of memes, but a few more facts, statistics, links to bug reports, etc. would have been nice. It may all be true, but how can I decide whether to believe it or not? It's not as if I know you, dear author, and could therefore trust your personal judgement. Make your point a little more clearly. Please.
LXSS (Bash on Ubuntu on Windows) is an honest-to-goodness installation of Ubuntu running on top of Windows using the Linux subsystem. Cygwin/MSYS are more limited in comparison (or at least I have found them to be so).
Yeah, like running a CI job for customers. Absolutely great with Docker; nobody cares about long-term reliability. Container crashes? No worries, we spin up 10 more and re-start the CI job. Perfect match for Docker.
Running financial infra is quite the opposite: long-term reliable processes, not much space left for failures and mishaps, especially not in the lowest level of your infrastructure. What is the benefit of using Docker for this sort of service? Almost zero.
I still think that Solaris zones are / were a superior technology. Sadly it died with the demise of Solaris as an operating system. Still in use in some "must have enterprise" types of companies.
I would not run anything serious on Docker (definitely not yet) if I were not heavily invested in understanding and contributing to the technology (the Docker codebase + related).
Docker is nice to have, but it absolutely abstracts too much and takes control away from you. I am wondering what people like Brendan Gregg think about Docker.
Solaris within Oracle is as good as dead from the latest reports. However, Illumos is a Solaris descendant that is very much alive. It has all the good systems stuff that came out of Sun (zones, ZFS, DTrace) and is backed by Joyent (now part of Samsung).
There's also FreeBSD jails that can be used to contain applications. I'm not sure about the timeline, but I think jails came first. Sun engineers wanted to achieve the same so they ported the same technology over to Solaris.