How Docker broke in half (infoworld.com)
302 points by pauljonas on Sept 8, 2021 | 194 comments



> “The biggest mistake was to miss Kubernetes. We were in that collective thought bubble where internally we thought Kubernetes was way too complicated and Swarm would be much more successful,” Jérôme Petazzoni, one of Docker’s first and longest serving employees, said. “It was our collective failure to not realize that.”

They were not wrong in saying that Kubernetes was very complicated, at least in some sense. In the beginning no one wanted to use it because they could easily set up Docker Swarm with minimal effort. This argument still pops up frequently on HN when there's a new post about Kubernetes.

I guess the problem was they didn't realize why Kubernetes needs to be that complicated. And if a system is complicated for good reasons, that's actually a good business opportunity, and tons of people and companies will be willing to make the effort to fill that gap.


K8S is great if you have Google problems, but most people don't, and I think much of the hype around it is much like the hype that existed a decade ago around "web scale" and "big data" and "NoSQL for everything". Docker Swarm is (with some use-case specific exceptions) more than sufficient for any shop running a few hundred nodes with a few thousand containers – and it is two orders of magnitude less complex than k8s.

You can read Docker Swarm's docs end-to-end in < 30 minutes, and have a cluster running in roughly the amount of time it takes to install Docker on three boxes.
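
For anyone who hasn't tried it, the whole bring-up is roughly this (the address is a placeholder, and init prints the real join token):

  # on the first box, which becomes a manager
  docker swarm init --advertise-addr 10.0.0.1

  # on each of the other boxes, using the token printed by init
  docker swarm join --token SWMTKN-1-<token> 10.0.0.1:2377

  # back on the manager
  docker node ls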

Docker's real failing was that it never marketed Docker Swarm. Most people don't even know it exists, yet every single machine with Docker has it already! The last time I checked, they only had a single dev part-time allocated to supporting it.

So it's not so much that Docker failed to jump on the k8s bandwagon as that they built a wonderful thing that they never marketed and semi-abandoned, in order to focus on Docker Hub and other core Docker things they thought they'd be able to more directly monetize.

Fortunately, Docker Swarm remains a wonderful piece of software, even without particularly active investment, because it is a fairly complete offering for most use cases. And it seems to be growing in adoption.


Unfortunately for Docker, I don't trust or want to use Swarm because of their attempts to monetize and because of the frequency with which they push breaking bugs or breaking changes into production releases. To the extent that they made downgrading a paid option, not upgrading. And I'm in the process of switching everything I've got over to Podman now.

Swarm might be great, but the ecosystem around k8s is massive. So my two choices are: ignore Docker entirely and do it like I've always done (Heroku, chef, puppet, etc. just managing my servers myself), or buy into containers and go with K8S.

To that extent, I only ever deal with K8S at work, and for my personal stuff I don't go near Docker at all. Why would I when I can get an 8GB/4CPU VPS for $9/mo and serve everything I care about from that? I've got fail2ban, systemd, caddy, and plenty of resources.

I can host mysql on the same machine because there isn't a god-given law about hosting your DB on a separate box. Same with anything else.

No need for Docker or Swarm or K8s.


Who do you use for your VPS provider? Those specs at that price are much better than what I typically see on DigitalOcean, Linode, Vultr, AWS, etc.

I ignore Docker myself. It's been too unstable for me and historically just costs me time; I'd rather just use systemd on a VPS. Occasionally, when someone else has set up a complex system in Docker that I want to play with, it has been convenient. Still, I avoid installing it at all.


You can get somewhat close to that on Hetzner. Very stable in my experience, too.


> No need for Docker or Swarm or K8s.

This is exactly why I think new-cloud companies like fly.io, stackpath, deno.land, vercel.app are the future. No one wants to deal with muck if they can avoid it.

Cloud exposed data center primitives. Then, cloud-native laid bare the at-scale primitives. Imho, the industry is ripe for both NoCloud and NewCloud.


> because of their attempts to monetize

Did you read the article? It was about a company struggling to monetize an open source project. It hardly seems time for teenagerish M$-Windows asides on how you don't like people monetizing the software they are giving you for free.


There's an implicit "in a way that was hostile to their customers/the docker community" in there.


What was implicit was the view that any "attempt to monetize" is hostile. Which, as I said, is something one associates with Linux hobbyists from the early 2000s.


> To the extent that they made downgrading a paid option, not upgrading.

Except that they didn't. With even a little bit of research you can easily find out that they made skipping updates a paid feature, and then even removed that after negative feedback!

People were even talking about downgrading below the version that introduced the 'paid update skipping', so any way you look at it, your comment doesn't make any sense.


It’s not a good look and that matters. If I’m going to rely on something for production, and that thing ships breaking changes and considers weird, terrible monetization strategies, I go elsewhere.


Spewing FUD is not a good look either. Introducing paid features is not a 'breaking change'–your pipelines and production code continued to work throughout all this–and charging large commercial enterprises for premium features is not a 'weird, terrible monetization strategy'.


You don't use Docker Desktop in production, right? Right?


IMO the complexity of k8s tends to be overstated. Of course it's a vast and complex system, but it's also incredibly well documented and based on simple design principles. The reasons people have such a hard time with it are:

- deploying k8s on bare metal is not easy. It was never really designed for this, and to be fair, Google has a vested interest in making you use GCP. EKS is nothing more than cobbled-together scripts, and you still need to do a lot of manual work

- k8s introduces a ton of terminology and new concepts (pods/services/deployments/ingress), not to mention the whole host of add-ons built on top, like Istio.

It shouldn't take any decent dev more than a couple days to install minikube/k3s or use a cloud provider, play around and get comfortable with k8s.


Coming from a long career in ops and systems administration, k8s was very straightforward to learn. All the complexity is in automating the things that I’ve already had to know how to do - scaling, failover, resource planning, etc etc.


Yeah, this stuff is complicated any way you go: puppet, chef, ansible, Cloudformation/CDK, Terraform, K8s all have non-zero learning curves.


You're trading complexity for a structured environment, for sure, but I think there's something to be said about the degree of complexity. Ansible, for example, only requires Python + a handful of libraries + an SSH connection, whereas Chef/Puppet require an agent install + SSL for correct functionality.


See k3s.io. Cannot get much easier.


Why only Google problems? Why not homelab problems, where I'm managing upwards of 20 apps and I don't want to come up with custom automation to do things like cert renewal, TLS termination, dynamic storage, and disaster recovery? What if I don't want to come up with my own system for organizing and grouping resources? Lots of homelab people (I assume) chain together hacky scripts, and probably many things are not repeatable. With k8s, all my configs are in a single repo and I can spin back up my entire cluster on new hardware anywhere, provided I've backed up my storage. How does Swarm handle all of these problems in a more unified or complete way than k8s does?


I'd heavily encourage you to read the docs. If the scenarios you outlined are your top concern for your homelab, I suspect the complexity and resource overhead of k8s isn't warranted.

- Cert Renewal / Rotation: https://docs.docker.com/engine/reference/commandline/swarm_c...

- TLS Termination: Arguably this is an application layer concern and out of scope for an orchestration layer, but in either case use Nginx (or your favorite TLS terminating proxy) as your ingress point. This is equivalent to the Nginx ingress controller in k8s.

- Storage: Docker volumes (local or ceph, for distributed).

- Disaster Recovery: Node failure? Cluster failure? Data loss? What kind of DR? The strategies you'd use with Swarm will be very similar to those you'd use with k8s.

- Grouping resources: Labels and/or services.

In a Swarm world you'll have fewer moving pieces, far lower resource requirements, and will manage it all in a docker-compose file(s). You'll also get things like service discovery, routing, and secrets management for "free". No additional configuration needed.
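
As a sketch of what that looks like (stack name, file name and image are arbitrary), a service definition plus the deploy command is roughly:

  # stack.yml (Compose v3 syntax)
  version: "3.8"
  services:
    web:
      image: nginx:alpine
      ports:
        - "80:80"
      deploy:
        replicas: 3
        restart_policy:
          condition: on-failure

  # roll it out (and later update it) across the cluster with:
  #   docker stack deploy -c stack.yml demo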


I would argue the myriad of options you've suggested as alternatives produces significantly more overhead.

You linked Swarm CA renewal operations; this is maybe 1/50 of the functionality of cert-manager. Is there a product like cert-manager that I can run on Swarm that's tightly integrated?

Why not have ingress intrinsically linked to the underlying resources like k8s does? Your suggestion means that someone needs to come up with a methodology and automation to update ingress with Swarm. Yeah, it's obviously possible, but imo Traefik ingress resources are way easier to manage, with config spread across far fewer places.

Storage.. I have an nfs storage provider in k8s which allows me to dynamically create storage with lifecycle management included... for free... just because I'm using k8s.

DR as in something happens to my house or other technical issues which bring down my cluster. I would posit that it's much easier for me to restore a cluster because it's all yaml and imo more descriptive and capable than swarm yaml syntax.

I think you get my point. Yes, you can do it all with Swarm but with things like k3s and k0s, it's just as easy to start using k8s. And way more help, docs, and tooling to boot.


Honestly, this is such a common refrain in open-source communities.

"I'm going to use this product/project."

"You shouldn't use that, use this instead."

"But this product/project has all these features I want."

"Sure but you can just build and maintain those features yourself if you want. It's open source, after all!"

Or alternately:

"Well no one really needs those anyway, it's just people being told that they do."

I've been hearing this same refrain for over twenty years now. It seems as though some people don't understand that there's actual value in having a system that just works, rather than one that you have to build and maintain yourself just to emulate the system that works.


So, I don't use either of these things, but still tend to read the arguments between people who do, and that isn't what it feels like to me (as again, someone who doesn't use either of these things).

Specifically, it feels like you are missing that the people who argue against k8s are saying that it is extremely difficult to configure, and so instead of this clean "some people don't understand that it is nice to have a system that just works", it feels like the argument is actually about a tradeoff over what you think is easier to put together: a single system with a million options that all seem to need to be configured or it doesn't work, or a handful of systems that are each easy to understand in isolation, but where you will need to do all the glue manually.


'A system that just works' isn't a universal constant, so we have these discussions because 'just works' means different things to different people.

You're no more right or wrong here than anyone else is.


I do partially agree with the point that you're making, but perhaps from a slightly different angle - i'd say that the biggest value that Kubernetes provides (or at least tries to, with varying results), is having a common set of abstractions that allow both reusing skills and also treating different vendors' clusters as very similar resources.

> It seems as though some people don't understand that there's actual value in having a system that just works, rather than one that you have to build and maintain yourself just to emulate the system that works.

This, however, feels like a lie.

It might "just work" because of the efforts of hundreds of engineers that are in the employ of the larger cloud vendors, because they're handling all of the problems in the background, however that will also cost you. It might also "just work" in on prem deployments because you simply haven't run into any problems yet... and in my experience it's not a matter of "Whether?" but rather one of "When?"

And when these problems inevitably do surface, you have to ask yourself whether you have the capacity to debug and mitigate them. In the case of K3s clusters, or another lightweight Kubernetes distro, that might be doable. However, if you're dealing with a large distro like RKE or your own custom one, then you might have to debug this complex, fragmented and distributed system to get to the root cause and then figure out how to solve it.

There's definitely a reason why so many DevOps specialists with Kubernetes credentials are so well paid right now. There's also a reason for why many of the enterprise deployments of Kubernetes that i've seen are essentially black holes of manhours, much like implementing Apache Kafka when something like RabbitMQ might have sufficed - sometimes because of chasing after hype and doing CV driven development, other times because the people didn't know enough about the other options out there. There's also a reason for why Docker Swarm is still alive and kicking despite how its maintenance and development have been mismanaged, as well as why HashiCorp Nomad is liked by many.

In short:

The people who will try to build their own complex systems will most likely fail and will build something undocumented, unmaintainable, buggy and unstable.

The people who try to adapt the technologies that are better suited for large scale deployments for their own needs, might find that the complexity grinds their velocity to a halt and makes them waste huge amounts of effort in maintaining this solution.

The people who want to make someone else deal with these complexities and just want the benefits of the tech, will probably have to pay appropriately for this, which sometimes is a good deal, but other times is a bad one and also ensures vendor lock.

The people who reevaluate what they're trying to do and how much they actually need to do, might sometimes settle for a lightweight solution which does what's necessary, not much more and leads to vaguely positive outcomes.


> the biggest value that Kubernetes provides is having a common set of abstractions that allow both reusing skills and treating different vendors' clusters as very similar resources

This is the real value of Kubernetes. I'm at my fourth kubernetes shop and the deployments, service, ingress, pod specs all look largely the same. From an operations standpoint the actual production system looks the same, the only difference between companies is how the deployments are updated and what triggers the container build and how fast they're growing. Giant development/engineering systems went from very specialized, highly complex systems, to a mundane, even boring part of the company. Infrastructure should be boring, and it should be easy to hire for these roles. That's what Kubernetes does.

Oh, and it scales too, but I'd say 95% of companies running their workloads on k8s don't need the scaling it is capable of.


Yeah, I've heard the endless "Kubernetes is too complicated" arguments, but at a glance it's a container deployment/management service that the big three cloud providers all support, which makes it close enough to a standard.


The issues with large clusters you've surfaced are ones I've never once experienced in my homelab. I'm already an experienced devops (?) engineer, so the transition to k8s only took a few weeks and a couple of sleepless nights, which will not be everyone's experience, especially beginners. But it's a cohesive system, and once you learn it, you understand the various patterns and you make it work for you rather than sinking time into it. For myself, learning k8s was way easier than keeping track of various little scripts to do this or that in a swarm cluster.

Folks here are making sound arguments for Swarm and alternatives, and they are totally valid. But as an experienced engineer who needed to start their homelab back up after a long hiatus, I have never once looked back on choosing k8s. Honestly it's some of the most fun I've had with computers in a long time, and great experience for when I move into a k8s-related $job.


> the transition to k8s only took a few weeks and a couple of sleepless nights

Only?

I have had a home server for fifteen years and used to have a homemade router. That's significantly more time than I have ever spent on them. I don't even bother maintaining scripts for my home installation. I have reinstalled it once in the past decade, and it took me significantly less time than writing said scripts would have. There were enough things I wanted to do differently that any scripts would have been useless by that point anyway.

At that point, it sounds more like a hobby than something saving you time.


I guess I never considered self-hosting as not a hobby? Web apps with modern stacks are usually pretty complex, so hard for me to imagine someone just casually jumping in as a non-hobby and not running into big problems at some point. Or getting hacked because you didn't bother or forgot to secure something.


> Web apps with modern stacks are usually pretty complex, so hard for me to imagine someone just casually jumping in as a non-hobby and not running into big problems at some point. Or getting hacked because you didn't bother or forgot to secure something.

That's not my experience at all. Plenty of web apps for self hosting are pretty simple. You just install the package, read the config files, turn on and off what you need and you are good to go. Since systemd made that easy, most apps now come preconfigured to use a dedicated user and drop the capabilities they don't need. If they don't, you are two lines away from doing it in their service file.

Then I just throw Caddy in front of everything I want to access from outside my local network and call it a day.
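
For the record, the "two lines" are roughly the following drop-in (unit name is hypothetical): DynamicUser gives the service its own throwaway user, and an empty CapabilityBoundingSet drops every capability.

  # systemctl edit someapp.service
  [Service]
  DynamicUser=yes
  CapabilityBoundingSet=

  # then: systemctl restart someapp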


That's great! Did you use a Kubernetes distribution that's provided by the OS, like MicroK8s ( https://microk8s.io/ )?

Or perhaps one of the lightweight ones, like K3s ( https://k3s.io/ ) or k0s ( https://k0sproject.io/ )?

Maybe a turnkey one, like RKE ( https://rancher.com/products/rke/ )?

Or did you use something like kubespray ( https://kubespray.io/#/ ) to create a cluster in a semi automated fashion?

Alternatively, perhaps you built your cluster from scratch or maybe used kubeadm ( https://kubernetes.io/docs/setup/production-environment/tool... ) and simply got lucky?

Out of all of those, my experiences have only been positive with K3s (though i haven't tried k0s) and RKE, especially when using the latter with Rancher ( https://rancher.com/ ) or when using the former with Portainer ( https://www.portainer.io/solutions ).

In most of the other cases, i've run into a variety of problems:

  - different networking implementations that work inconsistently ( https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy )
  - general problems with networking, communication between nodes, and certain node components refusing to talk to others and thus nodes not allowing anything to be scheduled on them
  - large CPU/RAM resource usage for even small clusters, which is unfortunate, given that i'm too poor to afford lots of either
  - no overprovisioning support (when i last tried), leading to pods not being scheduled on nodes when there's plenty of resources available, due to unnecessary reservations
  - problems with PVCs and storing data in host directories, like how Docker has bind mounts with few to no issues
  - also, they recently stopped supporting the Docker shim and now you have to use containerd, which is unfortunate from a debugging perspective (since a lot of people out there actually like Docker CLI)
If i could afford proper server hardware instead of 200GEs with value RAM then i bet the situation would be different, but in my experience this is a representation of how things would run in typical small VPSes (should you self host the control plane), which doesn't inspire confidence.

That said, when i compared the RAM usage of Docker Swarm with that of K3s, the picture was far better! My current approach is to introduce clients and companies to containers in the following fashion, if they don't use the cloud vendors: Docker --> Docker Compose --> Docker Swarm --> (check if there's need for more functionality) --> K3s.

The beautiful thing is that because of the OCI standard you can choose where and how to run your app, as well as use tools like Kompose to make migration very easy: https://kompose.io/


Traefik, for example, handles certificates for your services. I set it up this morning on our Docker Swarm at work; 5 domains are now served by it with automated ACME certificates.

It's not like cert-manager does rocket science that is possible only on k8s


> You linked Swarm CA renewal operations; this is maybe 1/50 of the functionality of cert-manager. Is there a product like cert-manager that I can run on Swarm that's tightly integrated?

> Why not have ingress intrinsically linked to the underlying resources like k8s does? Your suggestion means that someone needs to come up with a methodology and automation to update ingress with Swarm. Yeah, it's obviously possible, but imo Traefik ingress resources are way easier to manage, with config spread across far fewer places.

Traefik can provide a lightweight ingress for Docker Swarm clusters alongside ensuring most of the certificate related functionality that you might want for web apps: https://traefik.io/

In particular, here's an example of using it with Docker Swarm: https://dockerswarm.rocks/traefik/

It integrates well enough that you can set up listening for particular ports, domain names, set up basicauth, use Let's Encrypt for certificates and do many other things with labels inside of your Docker Compose file!
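
As a sketch, a service behind it ends up looking roughly like this in the stack file (the domain is made up, and the websecure entrypoint and le resolver are whatever you named them on the Traefik service itself):

  services:
    whoami:
      image: traefik/whoami
      networks:
        - traefik-public          # the overlay network Traefik watches
      deploy:
        labels:
          - traefik.enable=true
          - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
          - traefik.http.routers.whoami.entrypoints=websecure
          - traefik.http.routers.whoami.tls.certresolver=le
          - traefik.http.services.whoami.loadbalancer.server.port=80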

> Storage.. I have an nfs storage provider in k8s which allows me to dynamically create storage with lifecycle management included... for free... just because I'm using k8s.

Docker also supports volume plugins, some of which were moved out of the base offering so as not to create needless bloat (much like what K3s does with many of the in-tree plugins of the heavyweight Kubernetes distros): https://docs.docker.com/engine/extend/plugins_volume/

Here's a pretty good documentation page on how to set them up and use them: https://docs.docker.com/storage/volumes/#share-data-among-ma...

You'll find examples of SSHFS, NFS, CIFS/Samba there, which should cover most base use cases. The latter two are supported by the default "local" driver (which feels like a naming failure on their part), so it's safe to say that Docker and Docker Swarm support both NFS and CIFS/Samba out of the box.
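
For example, a named volume backed by an NFS export with that built-in local driver is roughly this (server address and export path are placeholders):

  docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=10.0.0.20,rw \
    --opt device=:/exports/data \
    nfs-data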

> DR as in something happens to my house or other technical issues which bring down my cluster. I would posit that it's much easier for me to restore a cluster because it's all yaml and imo more descriptive and capable than swarm yaml syntax.

I'm afraid i don't follow here.

Here's how you set up a cluster: https://docs.docker.com/engine/swarm/swarm-tutorial/create-s...

There are also commands for managing nodes within the Swarm: https://docs.docker.com/engine/reference/commandline/node/

There are also commands for managing the stacks that are deployed: https://docs.docker.com/engine/reference/commandline/stack/

That's literally it. Verbose YAML doesn't imply any additional quality. If you want, feel free to look at the Compose specification to see all of the supported options: https://docs.docker.com/compose/compose-file/compose-file-v3...

With Swarm, i've found that it's actually very easy to manage the cluster, resolve any issues (because of very few moving parts), or even just wipe it all and create it anew with all of the deployments in a few commands. Furthermore, there is a very lovely Ansible Swarm module if you'd like: https://docs.ansible.com/ansible/latest/collections/communit...
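
The "wipe it all and start over" path really is just a handful of commands (stack and file names are made up):

  docker swarm leave --force              # on each old node
  docker swarm init                       # on the new first manager (plus joins for the rest)
  docker stack deploy -c stack.yml demo   # redeploy everything from the compose file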

In short, fewer moving parts equal easier administration and fewer Byzantine failures in this case.

> I think you get my point. Yes, you can do it all with Swarm but with things like k3s and k0s, it's just as easy to start using k8s. And way more help, docs, and tooling to boot.

K3s and other projects like it are a step in the right direction for Kubernetes in smaller deployments! That said, in my experience the documentation of Docker Swarm is better than that of many of the Kubernetes distro projects, simply because of how mature and stable Docker and Docker Swarm both are.

I don't doubt that the situation will change in a few years, but until then, Swarm is more than enough.


Yes, ingress like Traefik can auto renew certs.

My problem with swarm is that the overlay networking was super flaky. Dead containers still being registered on the network caused every Xth request to fail (since it's round-robin), because it was routing to an overlay IP that doesn't exist.

Further, I could never find storage adapters for cloud storage that continued to be maintained. Rexray was the longest standing but hasn’t had a commit since Jan 2019.


I personally use Docker Compose for homelab deployment. It handles internetworking dependencies, and with nginx-proxy + letsencrypt-nginx-proxy-companion you get automatic subdomain redirection and certificate renewal.

I will say that it's still a fair amount of complexity, but it's a workable amount of complexity on a day-to-day basis.
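
For reference, the wiring is roughly the following (condensed; the image names are the classic jwilder/jrcs ones, domains and email are placeholders, and the exact mounts may differ from what the projects' READMEs recommend today):

  version: "2.4"
  services:
    nginx-proxy:
      image: jwilder/nginx-proxy
      container_name: nginx-proxy
      ports: ["80:80", "443:443"]
      volumes:
        - /var/run/docker.sock:/tmp/docker.sock:ro
        - certs:/etc/nginx/certs
        - vhost:/etc/nginx/vhost.d
        - html:/usr/share/nginx/html

    letsencrypt:
      image: jrcs/letsencrypt-nginx-proxy-companion
      volumes_from:
        - nginx-proxy
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro

    someapp:
      image: nginx:alpine
      environment:
        VIRTUAL_HOST: app.example.com
        LETSENCRYPT_HOST: app.example.com
        LETSENCRYPT_EMAIL: you@example.com

  volumes:
    certs:
    vhost:
    html: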


+1

I run bitwarden, syncthing, transmission etc. on a single box at home and this setup works great.


I've been using Ansible's Docker module to manage my containers in my homelab, and I've been meaning to switch it to Docker-Compose for a while now, because it's kind of a pain to have to wait for the playbook to run for simple container changes (among other annoyances).

Overall, though: Traefik proxying to Docker Containers has been _very_ smooth in my Homelab: I'm loving it.


I got all my personal services dockerized and running in a bunch of Docker Compose (not even Swarm) and an nginx as an ingress and SSL termination point. I have all the configs in one repo too. I chose nginx because I am quite familiar with it, but I believe that Caddy may do the same work even easier.


The problem with solutions bigger than `docker-compose up -d` and smaller than k8s is that they have a growth ceiling, real or imaginary. Sooner or later you will hit some limitation and will either thrash against it or have to upgrade to k8s. As a decision maker in an IT org, you can either:

- pick up Swarm or Nomad now, and then potentially be blamed for the costly migration to k8s later
- pick up k8s now, waste the time of your (voiceless) devs and ops on an overengineered solution, but retain access to the larger pool of technologies and a smooth upgrade path should the need arise


You've posed a false dichotomy. I've worked at multiple billion-dollar startups that got by just fine without k8s. And I've worked at startups that started on completely different stacks and migrated to k8s. If you're already containerized, you've retained a good amount of optionality.

The kind of reasoning you pose is similar to "SQL doesn't scale to infinity, so start with an infinitely scalable eventually consistent document store". This line of thinking is dangerous to most companies, and often the death of startups. Assume YAGNI.

There's some truth to "Nobody ever got fired for choosing IBM", but IBM was never a good choice for anyone except consultants. k8s isn't too far from that.


But you are posing a similar false dichotomy. k8s isn't some massive intractable beast that needs an army of consultants to run and maintain. It can run small loads just fine as well.

Anyone can learn k8s' basic principles in a day or 2. And managing a cluster is arguably easier than managing the typical bunch of EC2 instances + ALB etc. that almost everyone uses.

k8s also has the advantage of ubiquity. If I had to manage a bunch of containers on a cluster with > 10 nodes, I'd pick k8s over swarm/nomad etc.

Your argument is 'k8s isn't a good choice for anyone except Google-scale' and I don't think that's true at all.


> k8s isn't some massive intractable beast that needs an army of consultants to run and maintain.

As someone who has actively avoided k8s (despite enjoying devops), my observation has been quite the opposite. At my current (small) org we have one devops guy who spends all day working on k8s related work (configuration, troubleshooting, automation, etc.) And we're about to start looking for another devops engineer because he's swamped. We only have a handful of clusters up and running so far.

k8s is the poster child for overcomplicating matters for folks that don't have Google-scale problems.


You might be doing it wrong. I use k8s for my personal projects and the maintenance overhead is lighter than what it cost me to log into my VPS every 3 months and renew the Let's Encrypt certificate. (Which was literally just pressing the up arrow and enter, because it was always the last thing in my shell history.) At work, a team of 5 people maintains our production Kubernetes clusters (that host our cloud product and app) and develops the app itself. Which, by the way, creates Kubernetes clusters for individual customers, of which we have created thousands. Overall, it's not that hard and we don't spend much time managing computers. (We do have all of the good stuff, like all K8s changes in Git, one-click deploys, etc. I did spend a little time making Shipit run in Kubernetes, but it certainly wasn't my full-time job for any extended period of time.)

There are problems that people run into. One is not needing more than one computer -- if everything you need to run fits on one computer, you might as well use systemd as your orchestrator. (Which people also say is overly complicated, but really it's just extremely misunderstood.) The other is not understanding Kubernetes well enough. A lot of people just explode random helm charts into their cluster and are mad that it's hard to maintain. Basically, adding another abstraction layer only helps if you can debug the actual thing underneath -- otherwise, you're just setting a time bomb for yourself. (And helm charts require you to beg the maintainers for every feature you could possibly need, and once enough people have done that you just have raw k8s manifests where everything is called some random word instead of its actual thing. Really bad. Use Kustomize or jsonnet!)
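
In case anyone reading hasn't seen it, a kustomize setup can be as small as plain manifests in a base directory plus a tiny overlay like this (paths, namespace and image name are made up), applied with kubectl apply -k:

  # overlays/prod/kustomization.yaml
  resources:
    - ../../base          # plain Deployment/Service manifests live there
  namespace: myapp-prod
  images:
    - name: myapp
      newTag: "1.2.3"

  # kubectl apply -k overlays/prod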


Compared to something proprietary like AWS ECS, K8s seems like it's almost impenetrable to learn. If you're already on one of the cloud providers, likely they will have something else to offer.

ECS isn't exactly simple either, but it is manageable for small shops where the developer is the devops guy.


It's not that it's only suitable for Google-scale problems; rather, it isn't for all problems. Like a lot of new shiny tech, it takes time investment to find out if something really is the solution to a problem. Don't just jump in with both feet.


As one of the mentioned consultants, I think the statement that one can learn Kubernetes in a day or two is grossly unfair. It is a complicated beast, for better and for worse. The (almost) unique selling point is that it is an orchestration layer that is pretty much provider-agnostic in a believable way.


I didn't mean to imply you can learn it in a few days, but you can certainly learn the basic design principles and overall approach, and it's pretty consistent across all the k8s products.


And that is a sound solution if you have limitless VC bux to burn while you implement your over-engineered solution, but most shops have to be a little bit scrappier.

Not to mention that if you end up scaling to a size where k8s is a sound solution, that "costly" migration is going to be a rounding error on your balance sheet


> that "costly" migration is going to be a rounding error on your balance sheet

This. Just because your services are containerized doesn't mean they'll scale well with each order of magnitude. Likely you'll need to re-architect other parts of the system to work at your new scale.


> smaller than k8s is that they have a growth ceiling

Kubernetes has a serious growth ceiling - way below what Nomad has. So if you actually have a problem of interesting scale, it's almost certainly "pick Nomad now, or get blamed for a costly migration later when you realise that Kubernetes is not for you".


I was dismayed when Docker effectively (and quietly) declared that they were no longer supporting Docker Swarm.

Docker Swarm remains far simpler to set up, run, and troubleshoot than Kubernetes.

I just can't choose it over K8s, though, knowing that it's unsupported.


I've hemmed and hawed over this as well, but I believe that sometimes software can be complete and stable and not need constant iteration. As long as they add security patches and don't remove it from Docker, I'll still default to Swarm.

My fallback, if Swarm is ever removed, is Hashicorp's Nomad. I've only used it in a hobby capacity, but it is (at face value, at least) another wonderful piece of software. I've used k8s a bunch in production (including right now at my current employer) and just can't recommend it – it's a technical distraction for most teams.


I've never looked into Nomad. Your comment's gotten me interested now.


It’s awesome. Works just as well on one node as in a cluster, and has Docker drivers and everything. Give it a whirl.


>K8S is great if you have Google problems

Borg is great if you have Google problems. K8s is not Borg.


The dream is to be able to take a piece of software and yeet it into the cloud with minimal effort. That's not kubernetes at all.

That said, if your tooling or products or software supports kubernetes, which a lot more are these days, then it becomes a lot easier. A lot of companies these days are saying "Here's how to install it, or just grab this helm chart"; Gitlab, for example. We even had one piece of software we were testing which only had two options: cloud-hosted, or self-hosted on k8s. Nothing else was supported.

k8s doesn't seem to be any easier to configure, with the possible exception of microk8s (which has its own issues), but the amount of things you can do with it once it's up and running is compelling.


Pretty much. Fundamentally Kubernetes “won.” Try to pitch your company on something else and you’re asking them to bet on something else “without particularly active investment” to use your words. As well as much less of a complementary software ecosystem.


K8S based on distros like k3s or microk8s is a great and simple solution for manufacturing and managing edge devices, IoT solutions and similar products that are set up once on a single box that runs half a dozen or more things and are shipped to business clients.

It is very simple to spin up a minimal-distro-based cluster. It is relatively simple to have it up and running your solution. Sure, getting Kubernetes from source and tailoring all the required components to run a cluster with full HA etc. is complex stuff, but building a private cloud based on vanilla Kubernetes isn't the only or the most common use case.


I've also seen more of it being used for deployments of on-prem versions of SaaS services. I think the idea is that if you're already hosting your SaaS services on k8s, distributing a machine image to your Enterprise customers that runs a scaled-down deployment of your services on a single-node kube cluster makes some of the packaging and updating challenges easier.


I suspect that there is a tendency to pick more powerful and complex tools and infrastructure than needed. For us as engineers it's easy to foresee some of the scaling-related, technical problems of the far future. It's so tempting to prepare for those, especially if you need to learn a new technology anyway and it looks great on your resume. In addition, everyone knows intellectually that their own startup is unlikely to reach Google scale, but we badly want to believe it will. So maybe the right bet is to always put your money on the more powerful and complex technology, as long as the complexity is inherent and not incidental. Of course this is also limited to infrastructure. Consumer products and other markets would be totally different.


One of the tough lessons of my years as a developer is that many developers love to adopt Google-scale solutions and Google practices because they're from Google and they come with prestige and a feeling of proximity to the most visible technology company, whether their problems are a good fit for Google-scale solutions or not. It's inevitable, and it takes a lot of soft skills to argue against even if it's dead obvious you're going to be running a few instances of an application maximum. The simple solution is far less sexy than the Google solution.


On this note, HN users and several HC employees (including some enterprise architects) I've spoken to love to mention Nomad and how it's much lighter and simpler to run, completely oblivious that many with bigger names have tried and failed to go up against K8s.

Kubernetes has become a speeding bullet; it won't be easy to catch, and it would have to run through a lot of walls and slow down before a competitor could do any damage to its dominance.


> completely oblivious that many with bigger names have tried and failed to go up against K8s

Ha, I can at least assure you we're not oblivious. Nomad has a product manager from Mesosphere and most of HashiCorp's products integrate deeply with Kubernetes. We're well aware Kubernetes is the juggernaut to which every other scheduler has succumbed.

I believe there's room for both Nomad and Kubernetes. Whether as competitors or complements, having more than one vibrant project in the orchestration space will hopefully make all the projects better for users. Any one project has to make tradeoffs that improve the user experience for some while degrading it for others. For example Kubernetes has an IP-per-Pod network abstraction which provides an excellent out of the box PaaS-like experience for microservices in the cloud. On the other hand Nomad's more flexible network model more easily supports networks outside of flat cloud topologies whether it's small home raspberrypi clusters or globally distributed edge compute.


Sorry, I think I came across dismissive of Nomad which I am not. I work with multiple products in the HC stack daily, but have yet to work with Nomad, however, the bits that I have seen and read about Nomad, it seems like a great tool, and I agree there is place for Nomad in the current ecosystem, just like there is place for ECS. Even if I didn't think so, at the very least, like you said, it's good to have competition.

What I was calling out is the abundance of people who claim it is the greatest thing since the tech equivalent of sliced bread, and love to point out how simple it is despite requiring the need for Consul to run it at enterprise scale and likely also Vault if you need secret management in any form, which then also needs Consul if you want to run it at scale. I have intimate experience with running Vault and Consul and have advised colleagues for the better part of 2 years on utilising Vault (either as basic secrets management or some of its more advanced features). IIRC the recommendation from HC is also running a third Consul cluster for service discovery. If running Nomad at scale is anything like Vault, then it isn't as simple as people make it out to be, never mind the fact that you'll probably be running 5 different clusters of 3 different products to provide functionality that doesn't fulfill half of what Kubernetes gives.


> If running Nomad at scale is anything like Vault, then it isn't as simple as people make it out to be, never mind the fact that you'll probably be running 5 different clusters of 3 different products to provide functionality that doesn't fulfill half of what Kubernetes gives.

The complexity of setting up a Nomad cluster due to Vault and Consul being soft requirements is very real and something we're hoping to (finally!) make progress on in the coming months.


You would need to make it as simple as running k3s on 1, 3, or 5 nodes, and probably also have stuff like system-upgrade-manager; another good idea is to work on Flatcar Linux, etc. k3s is so good for small-to-middle-sized metal clusters and also works on a single node. It's so simple to start that you will have a rough time actually trying to gain momentum in this space, which will also not generate a lot of revenue.

The only thing which is not so easy is load balancing to the outside world, i.e. kube-vip/MetalLB; both can be a pain (externalTrafficPolicy: Local is still a shit show). k3s basically creates a k8s cluster with everything except good load balancing over multiple nodes.


A single Nomad process can be the server (scheduler) and client (worker). Nomad can run in single server mode if highly available scheduling is not a concern (workloads will not be interrupted if your server does go down) by setting bootstrap_expect=1 (instead of 3, 5, 7, etc). You can always add more servers later to make a non-HA cluster HA. No need to use different projects to setup different clusters. Clients can be added or removed at any time with no configuration changes (people using Nomad in the cloud generally put servers in 1 ASG and clients in another ASG).
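
For anyone curious, a one-box agent config is roughly this sketch (data_dir is arbitrary); run it with nomad agent -config=agent.hcl and the same process both schedules and runs work:

  data_dir = "/opt/nomad/data"

  server {
    enabled          = true
    bootstrap_expect = 1
  }

  client {
    enabled = true
  }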

Nomad does not have a first class LoadBalancing concept in keeping with its unopinionated network model, although we may add ingress/loadbalancing someday. Right now most people use Traefik or Nginx with Consul for ingress, and Consul Connect is popular for internal service mesh. Obviously unfortunate extra complexity over having it builtin, but Nomad has been focused more on core scheduling than these ancillary services so far.


Great perspective. Thank you for you comment!


Thing is, Nomad isn't trying to go up against K8S. It's part of the Hashicorp family of products, so if you use a couple of those, it's easy to add Nomad into the mix, or just use K8S. While other competitors have tried to out-do K8S, Hashicorp seem to try to work with the broader eco-system, which means some will just use Nomad to keep it simple, but don't need to for Hashicorp to continue working for them in other areas.


> Thing is, Nomad isn't trying to go up against K8S.

Hard[0] disagree[1] on this[2]. Nomad is a direct competitor of Kubernetes.

[0] https://www.nomadproject.io/docs/nomad-vs-kubernetes

[1] https://www.hashicorp.com/blog/a-kubernetes-user-s-guide-to-...

[2] https://www.hashicorp.com/blog/nomad-kubernetes-a-pragmatic-...


Nomad can compete with a fraction of k8s, a bit like all the other hashicorp products


We will see if the next hot scaling/clustering/orchestration solution replaces k8s in a decade from now. Fingers crossed.

There have been lots of speeding, hard-to-catch bullets in the past.


It's an incredible litmus test for developer v sysadmin. Developers think the complexity is unnecessary and so don't want to learn k8s. For sysadmins it's mostly just nice automation around all the chores we used to do with home-brewed bash scripts.


As a developer, I hate k8s, but mostly because my management does not understand the difference between developers and devops. They changed our job titles to devops engineers and fired the sysadmins. I understand k8s is complex even for people who specialize in it; now imagine that, as a developer, you need to set up private VLAN ingress. At least on regular Linux I know how to set up iptables, or can look up one of thousands of tutorials.

K8s and devops has destroyed us developers' productivity, confidence, and morale.


Isn't it your management who have destroyed those things?


I don't disagree, but I don't know where they are getting the marketing material that makes them think developers can do devops. I suspect it might be some sales tactic from these Cloud/DevOps companies.


I wonder if some of the derision is at least due in part to k8s effectively forcing you to think about certain application characteristics upfront instead of addressing them later when you outgrow the single instance performance or suffer an outage.


I've never met a sysadmin that would prefer it over the bash scripts.


What's your sample size?


Having worked in FAANGs and a number of other large companies, I lost count.


I’ve seen a number of companies going for k8s and not other solutions, not because they needed the complexity, but because it was first-party supported.

To me what killed Swarm is Google needing to spread k8s.

If you are eyeing GCE but decide you want a bit more flexibility, going for a Google-backed solution gives you k8s, and Swarm isn’t even on the choice list.

Choosing a third party means you’re on your own to bootstrap/make it live, or contract a vendor that might not live longer than you. That’s fine for companies who strategically choose to invest for the long term, sadly that’s not most of the companies out there.


I think Docker itself had plenty of time to innovate. They could have solved the problem long before the mainstream got hold of Kubernetes. They just didn't solve the actual problems users were having at that time, which was container orchestration across multiple servers. They went very, very deep with a lot of drivers and API changes and whatnot. If I am not mistaken, docker-compose wasn't even a part of Docker at first before it got bundled; someone else was solving Docker's same-server orchestration problems.

For a lot of organisations Kubernetes is pretty difficult and way too much to start with. We used Rancher to solve our problems, which did a fine job; Rancher got worse when they made the move to Kubernetes.

In the beginning they (Docker) also removed a lot of options, like CPU throttling and configuration and such, that LXC had from the start. They also had their own version (a shim) of PID 1 at some point. And other things that made it a little painful to properly containerise. I was often very frustrated by Docker and fell back on LXC.

There was also something with the management of Docker images, like pruning and cleaning and other much-needed functionality, that just didn't get included.

Also something with the container registries which at this point I can't remember (maybe deleting images or authentication out of the box, or something that made it hard to host yourself).

Anyway, I think it failed because it failed to listen to its users and act on what they needed. They really had a lot of chances, I think. I think they just made some wrong business decisions. I always felt they had a strong technical CTO who was really deep on the product but not on the whole pipeline from dev->x->prod workflows.


> We used Rancher to solve our problems, which did a fine job; Rancher got worse when they made the move to Kubernetes.

(Early Rancher employee)

We liked our Cattle orchestration and ease of use of 1.x as much as the next person. Hell, I still like a lot of it better.

But just as this article talks about with Swarm, embracing K8s was absolutely the right move. We were the smallest of at least 4 major choices.

Picking the right horse early enough and making K8s easier to use led us to a successful exit and continued relevance (now in SUSE) instead of a slow painful spiral to irrelevance like Swarm, Mesos, and others and eventual fire-sale.


Ah nice to hear from a rancher employee <3. Ah yeah, I totally understand the move, no blame there. But cattle was just amazing, it was easy and elegant!


What is so hard about k8s? I think, for example, AWS with Fargate and all the networking there is far more complicated… I have never used Swarm, though.


If you're talking about a hosted k8s like EKS or a toy/single-node k8s, in 2021, then nothing: k8s is much better.

But if you're on-prem, and have a tonne of metal and just arrived in 2014 via a time machine, swarm was so much simpler that if you were a sysadmin who already had their own scripts -- their own Jenkins-powered CI and git hooks that built and deployed whatever it is you were building -- then swarm looked like a nice gradual extension of that, and k8s looked more like starting over and admitting defeat.


I mean k8s is basically an infra rewrite in any shop that was/is currently VM based. "Hey you know that fundamental unit of isolation you've built all your tooling around. Throw it all away."

Don't get me wrong, I would absolutely go with k8s on any greenfield project but there's a huge huge opportunity for someone to take everyone's VM orchestration tools and quietly and semi-transparently add k8s support for gradual migrations.


Isn’t that VMware’s entire pitch for continued relevance?


I see your point there. I only work on the pod side, not touching any metal.


There's still plenty of room for improvement, but managing a K8s cluster on-prem is miles better than it was in 2014.


Exactly.

Personally, k8s is one of the easiest systems I have learned to use in the past few years. It has many components, too many for one to practically understand all of them well. But every component follows the same design principles, so there's a clear path for a noob like me to get the full picture step by step.

The first thing I read on k8s was controller pattern[0]. Then everything becomes so easy to learn. Something wrong with a pod? Find its controller and start troubleshooting from there. Oh, the pod is controlled by a replicaset? Check replicaset controller. The replicaset is managed by a deployment? You know where to go.
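
To make that walk concrete, it's roughly this (names are made up; each describe prints a "Controlled By:" line pointing one level up):

  kubectl describe pod web-6d4cf56db6-x7k2p    # events, plus "Controlled By: ReplicaSet/web-6d4cf56db6"
  kubectl describe rs web-6d4cf56db6           # "Controlled By: Deployment/web"
  kubectl describe deployment web
  kubectl rollout status deployment/web        # is the rollout itself stuck?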

That's why I said it is only complicated in some sense. I saw a lot of people start learning this system by going through a list of popular components and their concepts. I would've gotten lost easily and probably given up if I had done the same in the beginning.

[0]: https://kubernetes.io/docs/concepts/architecture/controller/


k8s is just a bunch of processes working together. The downside is that if something goes wrong, it is unclear which process is faulty and how to fix it. A misconfigured DNS may take down the whole cluster, and the symptoms are that every process fails because the network is out. It is difficult to backtrack to the fault source; there are a lot of configs pertaining to the network that could be faulty.

On the plus side, if someone else configured the cluster correctly for you (e.g. a Cloud service like GKE), then it's a breeze to use.


As a guy who's been running stuff with docker-compose and is learning k8s, the learning curve for k8s is a cliff. Every single layer of the stack gets added complexity.


Look at the history: In the beginning Kubernetes didn't have deployments - only replicasets, it didn't have ingress, it lacked the cloud provider integration you have today...

It was barebones like Mesos, but they managed to focus on the right things early on. Once these features and a bit of an ecosystem started to build, the question of whether we should stay on Mesos with a bunch of homegrown management tooling or move to Kubernetes wasn't even a decision anymore. There was still a significant amount of code which we later just deleted because it became a first-class feature or the community implemented something much better.


> I guess the problem was they didn't realize why Kubernetes needs to be that complicated.

Does it?

It seems like there are three kinds of complexity involved here: the essential complexity of the total problem set, the essential complexity of a specific company's problems, and the amount of complexity experienced by a user as they try to solve those problems.

Let's stipulate that Kubernetes needs to be that complicated for Google's needs. I'm not totally sure that's true, but let's pretend for now. So we'll say you're right for the first kind.

But I'm still not seeing it for the other two.

For modest deployment efforts, Kubernetes is a giant pain in the ass. It's just way more complicated than needed for many non-Google use cases. I tried setting up Kubernetes for a bunch of personal stuff I run and it was egregious overkill. I persisted until my home lighting system went dark because some internal Kubernetes keys expired and it decided that just not working was a good failure mode. Last time I mentioned this on HN several people said, well duh, obviously you shouldn't run it just for yourself. And that's fine; every technology has an expected operational envelope. But there are an awful lot of projects out there that have between 0.0 and 0.5 sysadmins. For them it's too complex. Kubernetes is just too complex for many people's needs.

And the other side of that is complexity of experience. Modern processors are fantastically complicated. I'm on a mailing list for people in finance who micro-optimize everything to maximize response speed. Those people know incredible amounts about exactly what different processors do. And they have to. But most working developers know about 1% of that because a lot of people work very hard to hide the complexity from them. I don't think that really happened with Kubernetes.

At this point, it is what it is. It's much easier to scale a technology up than down, so I expect it'll always be like this. But for whatever comes after Kubernetes, I'd like to see much more attention paid to more common use cases.


It's really unfortunate. I am a kubernetes user today, but I LOVED Swarm. K8S' main differentiator is its specific, intentional "cross-cloud" features, such as ingress and volume mapping, that help the implementing cloud understand the deployment topology. It's basically the new PaaS, but instead of working on one cloud it can work on any.

I don't know that, even if Swarm had those features, it would really have caught on like Kubernetes did. They didn't have the same budget as Google on that side of things.


What I haven't seen mentioned here is that, at least by my recollection, Swarm was incredibly bug-ridden, especially in the networking layer, around '17/'18 when k8s really started pulling away.


I believe that wide k8s adoption is the same dead end as JS frameworks. It is not actually needed and is too complex, so it has introduced its own class of pain. VMs were a great abstraction with very little added complexity, and so was Docker. K8s doesn't abstract anything away compared with Docker Swarm; it just adds unnecessary complexity.


The ironic thing is that Docker became successful only because of the tremendous hype they generated about themselves, and then failed to see the K8s hype train barreling towards them.

Swarm is still what 99% of companies need when they adopt Kubernetes, and is probably 1/10th to 1/20th the cost. Somebody needs to re-invent swarm as an independent project so that we can stop investing in K8s. The cost of K8s isn't even in the complexity, it's in all the millions of side-effects of the complexity, like security, fragmentation, and specialization.


Swarm is (and always has been) an independent project: github.com/docker/swarmkit


> I guess the problem was they didn't realize why Kubernetes needs to be that complicated.

I guess this strongly depends on one's needs. I mean, if all you want to do is glue together a couple of nodes and have a few stacks running on them, do we really need all the bells and whistles?

I mean, with Docker Swarm anyone can get ingress right from the start on any bare-bones cloud provider. Deploy your Docker Swarm cluster, slap a service like Traefik on top, and you're done. It's a 10-minute job. What else does Kubernetes bring to the table?


Good read, and it touches on a lot of the pain points from that era. As someone who was lurking on HN at the time during the containerization boom, I think that the key failing of dotCloud/Docker was not capitalising on Docker Swarm almost immediately. Docker Swarm was touted almost from the start, but the repeated delays gave it a reputation of smoke and mirrors and left people scrambling for solutions.

I also clearly remember the multiple high-profile spats that 'shykes had on HN, which burned a lot of bridges. At the time he had a reputation for answering lots of questions on HN, which helped a lot with community building. After those bridges were burned there was no one to speak for Docker as developer mindshare slowly shifted towards Kubernetes. IIRC the GitHub PRs were also a source of contention as dotCloud corporatised.

To be fair, Kubernetes was a real slog to understand at the start and had a lot of competition; it was definitely not the same level of simple, direct technical solution that Docker was.

Interesting trip down memory lane and what a pivotal technology! Regardless of the rest Docker is a true cultural phenomenon and a testament to the insight of the creators working outside of the myopia of big tech.


Yes. Anyone who wants to relive an early ignition of one bridge should read the HN thread on the Rocket announcement.[0] No one is sympathetic with respect to the Rocket/Docker spat; both companies behaved poorly -- and it was increasingly clear that the energy at Docker, Inc. was going to be spent fighting others rather than building a business.

Speaking personally, the fate of Docker, Inc. was clear to me when they took their $40M Series C round in 2014. I had met with Solomon in April 2014 (after their $15M Series B) and tried to tell him what I had learned at Joyent: that raising a ton of money without having a concrete and repeatable business would almost inevitably lead to poor decision making. I could see that I was being too abstract, so I distilled it -- and I more or less begged him to not take any more money. (Sadly, Silicon Valley's superlative, must-watch "Sand Hill Shuffle" episode[1] would not air until 2015, or I would have pointed him to it.) When they took the $40M round -- which was an absolutely outrageous amount of capital to take into a company that didn't even have a conception of what they would possibly sell -- the future was clear to me. I wasn't at all surprised when the SVP washouts from the likes of VMware and IBM landed at Docker -- though still somehow disappointed when they behaved predictably, accelerating the demise of the company. May Docker, Inc. at least become a business school case study to warn future generations of an avoidable fate!

[0] https://news.ycombinator.com/item?id=8682525

[1] https://www.hbo.com/silicon-valley/season-02/1-sand-hill-shu...


I worked at A Thinking Ape (another YC alumni), and for the longest time we had people beating down our door to invest, but the founders always told us why they weren't going to take it: we didn't need money. We had money to run the business, but we didn't have a handle on what we would do with $10m or $100m. What do we need to do to get more users that stick around and generate profit? More and better advertising? Sponsoring YouTube content creators? Hiring 50 more developers? Acquiring other companies?

Without a clear idea of what specifically you need to spend money on, and how specifically you are going to spend that money to generate growth and value, taking money is a bad idea. And if the answer to that question is "so that we can keep paying our developers", then you're probably already doomed.


I remember having a ton of sympathy for CoreOS, Docker really was hard to integrate with as someone doing sysadmin automation stuff.

Docker went to war with the standard Linux init system and the kernel developers who maintained cgroups, their core system primitive. I don't really understand how that was supposed to work out.


I honestly feel that they were both pretty unsympathetic. In particular, CoreOS timed their Rocket announcement for the moment at which the Docker team was airborne en route to DockerCon Europe 2014. It was very petty -- to say nothing of the petty naming (clearly a play on "Docker") or the generally antagonistic approach. None of this is to forgive Docker, who was equally petty in their response -- and a sign of much more pettiness to come, sadly...


> CoreOS timed their Rocket announcement for the moment at which the Docker team was airborne en route to DockerCon Europe 2014

Would that be the conference where Jessie Frazelle wore this telling gem of a troll badge? [0] If so, the parent article [1] says it was 2015.

I was actually present at CoreOS during this period, and played a significant role in the early rkt releases. I have no memory of timing the announcement to coincide with Docker people being on a plane; I think we were just rushing to get it out by DockerCon, and the en route thing is a coincidence. Rkt didn't just appear out of nowhere overnight: we had to build it, and making something like that actually work reliably as a standalone executable without installing a daemon on arbitrary distros is a bit of a chore.

[0] https://static.lwn.net/images/2016/devconf-badge.jpg

[1] https://lwn.net/Articles/676831/


Early CoreOS employee and involved in all of the launch conversations around this...I can confirm there was absolutely no talk of travel plans or catching them at a bad time or anything of the sort. It simply had to get shipped before the conference started so we could talk about it.


No, it was DockerCon EU, December 4-5, 2014 -- and the Docker folks very much believed that it was deliberately timed to their flight several days prior. Unknowable now as that may be, there is no question that Rocket took deliberate aim at Docker -- and timing its announcement to upstage DockerCon is frankly only marginally less petty than timing it to a transoceanic flight. Lest I be seen as defending Docker, Inc. or DockerCon: this is also the only conference that removed a presentation of mine because they didn't like presentations that had more views than their keynotes. So Docker Inc. very much made their own bed, and I understand how Alex and CoreOS got to the point they got to -- but it doesn't mean that they were any more sympathetic in the resulting spat...


Rkt took deliberate aim at the problem of reliably running docker containers as systemd services, and as one of the early rkt developers I can honestly say I did so grudgingly after various meetings convincing me we had no choice.

If the Docker folks hadn't deliberately obstructed our ability to reliably run Docker containers as systemd services, none of this would have been necessary. We certainly had better things to spend our time on.


I never understood why Rocket was killed, when it seemed to be a better design.

Was it because Red Hat was already pushing podman and didn't need two tools doing similar things?


Not sure, I was no longer involved when those changes were happening.

What I can say is there was a lot of change in the container landscape at the time. Kubernetes flipped the script, Fleet from CoreOS was deprecated, and my guess is amidst the consolidation happening around Kubernetes and the CNCF, rkt became less relevant and just didn't find a permanent home.

Flannel, Fleet, Rocket... does anyone use any of this stuff still? "Move fast and break things" tends to leave a trail of waste in its wake.


I'm not the person you asked, but another rkt dev answered a similar question last year

https://news.ycombinator.com/item?id=22250270


> raising a ton of money without having a concrete and repeatable business would almost inevitably lead to poor decision making

This for me is one of those "I feel like I'm taking crazy pills" lessons. It seems so obvious to me, and I hesitate to even estimate the vast amount of cash wasted in the last decade like this. But it's a lesson that an awful lot of people seem to insist on learning for themselves. Or not!


> The truth is, Docker had the chance to work closely with the Kubernetes team at Google in 2014 and potentially own the entire container ecosystem in the process. “We could have had Kubernetes be a first-class Docker project under the Docker banner on GitHub. In hindsight that was a major blunder given Swarm was so late to market,” Stinemates said.

> Craig McLuckie, Kubernetes cofounder and now vice president at VMware, says he offered to donate Kubernetes to Docker, but the two sides couldn’t come to an agreement. “There was a mutual element of hubris there, from them that we didn’t understand developer experience, but the reciprocal feeling was these young upstarts really don’t understand distributed systems management,” he told InfoWorld.

The article criticizes Docker Swarm as myopic, but IMO, there were only two possibilities for Docker to move forward; either they acquired Kubernetes, which was a possibility in this telling of events, or they won with their own Docker Swarm.


a great piece of trivia related to this point is that the original k8s developers tried to convince Docker to adopt the pod concept:

https://github.com/moby/moby/issues/8781


“Didn’t understand developer experience”

I am wondering if anyone ever actually used Docker Swarm properly, in a serious way? Especially as a small-scale developer.


Yeah we used it in production, I’m not there any more but as far as I know it’s still running.

Swarm was good, easy to set up and run, but it had a lot of networking bugs. More than once we had to cycle the docker daemon on all machines which basically resulted in a rolling outage.

We're trialling Nomad at our new co; I was put off by the opaqueness and complexity of K8s, but apparently that's just me.


It is definitely not just you.


We used Tutum before it was acquired by Docker and turned into Docker Cloud. Whilst it probably would never have scaled to our complexity needs indefinitely as we grew, it was very clear that as soon as it was purchased the product stopped going in a positive direction.

The only change that they really rolled out was replacing the older, perfectly functional UI with something superficially shinier but much, much harder to use.

Not only was it just hard to see what you wanted to see once they'd halved the information density and got rid of any visual hierarchy, you had absolutely crazy things like a slider for the number of instances of a service that would instantly apply with no confirmation and, even crazier, responded to the scroll wheel. We once doubled our number of production pods and didn't notice for a few hours, because someone scrolled up the page and their cursor went over the slider. Lucky they weren't scrolling the other way.

Anyway, I lost a lot of trust in Docker as an organisation that knows what people value about their products as a part of that.


I don't much like the idea, pushed by the article's author, that the software being open source is partly to blame!

At least, the founders of Docker are honest and clear in my opinion:

<< Hykes disagrees with this assessment. “I think that is wrong and generally speaking the core open source product created massive growth which created the opportunity to monetize in the first place,” he said. “Lots of companies monetize Docker successfully, just not Docker. There was plenty to monetize, just Docker failed to execute on monetizing it.>>

There is still a thing missing there: the article lets you think that Docker was an innovation out of nowhere and that they had a unique idea that was kind of spoiled.

But, one has to remember the context of the period when Docker was created:

It was a time when cgroups and "namespaces" for network, process, etc. were the hot new things inside the Linux kernel, and everyone was thinking about the concept of a "container", one way or another.

There were already chroots, and a lot of people were already working on "app"/"module"/"package" systems using aufs/overlayfs layers.

So this was the moment, and a few competitors were emerging, like LXC and Docker. Docker was very good at capturing the media coverage and hype, and so it became the dominant way to have "containers". But at the beginning there was nothing particular about the technology, and lots of competitors could have done it if it hadn't been Docker.


Certainly my recollection is that containers didn't enter the zeitgeist until Docker came around. LXC and OpenVZ had existed for years before Docker, and Jails/Zones had existed in BSD/Solaris for years before that. Mentioning the long-standing existence of these tools was a favorite pastime of Docker detractors in the early days.

Really, Docker's innovation wasn't the ability to run processes in a container (as mentioned before, that technology had existed for years already); it was the Dockerfile and the Docker hub/container registry. Prior to Docker, the way you'd build a container was by running debootstrap or unpacking a tarball/zip file of a root image. Take a look at https://developer.ibm.com/tutorials/l-lxc-containers/ . As far as I recall, there was no LXC equivalent to `docker build` or the registry except maybe wiring up Packer or home-grown shell script and throwing a tarball in S3 or something.
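
To illustrate (a toy example, nothing from a real project): the whole "build a root filesystem" problem collapses into a couple of commands and a shareable artifact.

    # a throwaway Dockerfile, written inline just for illustration
    cat > Dockerfile <<'EOF'
    FROM debian:bullseye-slim
    RUN apt-get update && apt-get install -y --no-install-recommends curl
    COPY app.sh /usr/local/bin/app.sh
    CMD ["/usr/local/bin/app.sh"]
    EOF
    docker build -t myorg/myapp:1.0 .   # layered, cached per instruction
    docker push myorg/myapp:1.0         # shareable via Docker Hub or any registry
    # vs. the LXC-era flow: debootstrap a rootfs, tar it up, host and distribute it yourself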


The Docker registry is what made Docker popular. (One could even suspect that free hosting for developers wins popularity contests.)

What made Docker really explode was when it ran on Mac and Windows. The cloud runs on Linux, but desktop computers generally don't, mostly for cultural reasons.

This made for an explosive combination. There are probably similarities to Github here.


Yeah, Docker nailed the UX right but screwed up the implementation and community building.

> Red Hat specifically “weren’t part of the community, they never rooted for the success of the Docker,” he said. “The mistake on our end was desperately wanting them to be part of the community. In retrospect we would never have benefited from that partnership.”

I understand how it seemed like other open source companies were dictating terms, but Docker's implementation was crap: one fat daemon running as root, no sandboxing, no cryptographic signatures, and a total lack of integration with the rest of Linux. Red Hat gets to define how containers should work on the back end because it pays for ~7% of all kernel development (compare that to Canonical's ~1%) [0]. But instead of cooperating with others in the Linux ecosystem, Docker only begrudgingly adopted interoperable standards while Red Hat, CoreOS, Intel, and others implemented fundamentally better solutions.

> We didn’t work at Google, we didn’t go to Stanford, we didn’t have a PhD in computer science. Some people felt like it was theirs to do, so there was a battle of egos. ... Kubernetes was so early and one of dozens and we didn’t magically guess that it would dominate.

Google had experience running containers at scale and was using K8s as an opportunity to retool their internal proprietary solution. It sounds to me like Google engineers tried to explain the best path forward to a less experienced team ... and Docker took it as an insult.

Sure, not everyone needs all the features of K8s, but Docker could have built a simpler interface on top of K8s and benefited from Google's investment. Instead they built a half-baked competitor that didn't work well. Whoops!

Docker delivered a slick developer UX, but their architecture was antithetical to how Linux developers thought it should work. Then they spurned offers of help from both the hyperscalers and the dominant Linux enterprise vendor [1] and chose to build inferior solutions for those markets. Now they are stuck selling their pretty UI because that's the only thing Docker does fundamentally better than anyone else.

[0]: https://www.linuxfoundation.org/wp-content/uploads/linux-ker...

[1]: https://www.redhat.com/en/blog/red-hat-continues-lead-linux-....


The real problem is that Docker Swarm is nowhere close to being production ready, and it's full of gotchas and design decisions that make little sense to an end user. They simply couldn't make a product good enough for the enterprise, like k8s did. I have similar reservations towards Docker itself; sure, it was a mostly novel concept (I watched live when Solomon announced Docker, and relatively few people knew at the time about Linux native containers or about BSD jails), but it's so full of quirks, gotchas, and bad APIs that it's not smooth sailing.

Unfortunately they couldn't make Docker into a product (or they would have faced the wrath of the OSS community), so they probably felt strapped for time on both Docker and Swarm, which made for an unpolished experience.

I've been using swarm for a small project for funz and because I didn't want to run the full k8s (even k0s or k3s would have been heavy for my use case) and because I had bad experiences with nomad.

My list of complaints:

- Stacks don't work 1:1 with Compose
- docker-machine is a separate binary
- You can't just pull the latest of a Docker image, or a cache hit will silently fail to redeploy your changes (you're forced to do a sort of blue-green deployment or update a tag every time; see the sketch after this list)
- You have to configure, run, and garbage-collect a registry from scratch
- docker-flow-proxy should be included out of the box
- I need a way to integrate secrets
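
Concretely, the tag workaround looks something like this (registry and service names are made up):

    # tag every build uniquely instead of reusing :latest,
    # so `docker service update` actually sees a change to roll out
    TAG=$(git rev-parse --short HEAD)
    docker build -t registry.example.com/myapp:$TAG .
    docker push registry.example.com/myapp:$TAG
    docker service update --with-registry-auth --image registry.example.com/myapp:$TAG myapp_web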

Don't get me wrong, there's an amazing amount of progress in Docker Swarm (and I'm running it in production, for free, on a $5 machine). Years ago it was even worse, but it's not something I would ever recommend to my employer. I hope we'll get there eventually.


I am a huge fan of Docker the tool, and during the prime of Docker the company they were doing some really impressive stuff. Swarm was dramatically easier to use than Kubernetes. The people working on Docker both in and outside of the company were top notch and they should be proud of what they accomplished.

It's too bad they didn't make the business side of things work, though I agree that at the time there was a certain feeling of too much money chasing an uncertain future for an open source project. I hope this next iteration of the company works out for them.


Swarm may have been easier to get going, but from what I heard it had a lot of operational problems and lacked a complete feature set.


There is no mention of LXC/jails, which are of course the biggest inspiration for Docker; if I am not mistaken, it also used LXC under the hood. That was already a good-ish product, but hard to configure and not for the mainstream at the time. Docker added AUFS layers and the downloading of images. I always saw Docker as a properly marketed, nicely ribboned LXC with a mediocre implementation, since at that time they removed a lot of options from standard LXC that were super useful, like CPU limiting.


Very well written article! Personally, I'm still waiting for Docker Swarm. The thing is, enterprise doesn't care about developer experience. They don't need to; they pay developers so much, they have overqualified developers who can grok Kubernetes, and they can hire more at a moment's notice. They care much more about whether it meets their goals. Smaller companies/orgs are the ones that care about DX. They likely have very few, potentially not super highly qualified developers, and very limited resources. To them, something that their developers can set up, understand, and maintain, that's reliable and resilient, is a HUGE win.


> Personally, I'm still waiting for docker swarm.

What are you waiting for, is there something new in the works wrt swarm? I was a swarm fan early on(and secretly still am), but otherwise k8s is where I spend all my time at work.


It never quite felt "ready"; its development and docs always felt like an afterthought. I guess it's more to do with marketing than the actual service. And I'm always afraid they're going to axe the thing. It probably is ready enough to give it a whirl.


I think you vastly overestimate the skill level of enterprise developers. Enterprise may pay a lot, but they also hamstring developers in some interesting ways, which tends to attract people who are better at the political game than the tech. The high pay attracts skill, just the wrong skill in many cases.


As someone only casually knowledgeable in this area (e.g. I use Docker for some of my home automation thingies, and I still haven't fully wrapped my head around what Kubernetes is, beyond something like a way to manage a bunch of similar containers? VMs?)...

Docker, the kabillion-dollar company, never had a chance. We all recognize the potential and actual problems when some very crucial piece of infrastructure is only free/open source and not maintained -- but this is what happens when it is "overmaintained". That is: it makes sense to invest big in something that is good and could be crucial, but only if you can both keep up quality AND stave off competitors -- and in this case, competitors aren't just other companies, but free software in general. It's kind of funny, in a way: Docker made itself a target for "good competition" by being so visible, in a way that e.g. "curl" didn't.

Either way, I always knew "Docker the kabillion dollar company" was a stupid stupid bet. Small company, great bet. But this? Destined to fail.


My issue with Docker Inc has been with the support. I started to give them money once but pulled back after it became clear the support experience was essentially “customer support thyself”. I don’t have to pay anyone to have my support ticket ignored, I can do that myself for free.

Anyway they are past the point of no return now. Their creations will undoubtedly outlive them - probably Queen Elizabeth also. Anything they pull back will be replaced by the community. Anything new will get aped by everyone else trying to make a name for themselves. Containers are more or less a generic commodity now.


I am surprised everyone is blaming K8s for eating Docker's lunch, but from my experience in the banking industry... it was Cloud Foundry that stole the market away from Docker Swarm. I don't think I ever saw a Docker Swarm in production; IT departments standardized on Cloud Foundry to manage their "private/on-prem cloud".


There's a lot of "dark" matter and energy in this industry that doesn't get much attention, even when it may be the majority of mass in a particular area.


I have always been somewhat skeptical of Docker, for example due to how horribly it has always worked on Windows. My first real exposure to containers was when getting into Red Hat certifications and being introduced to Podman, and with the benefit of hindsight I don't really understand the benefit of Docker today when comparing the two. Podman seems just as easy, more restricted and more in line with what some might call "Unix philosophy".

Of course when it comes to distributed environments I'm sure Kubernetes still reigns supreme, but for relatively simple container setups I don't think there's a need to even run Docker anymore on Linux.
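
For the simple setups I mean, something like this has been enough (a rough sketch, assuming rootless Podman is already set up):

    # same CLI surface as docker, but no root daemon
    podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine
    # hand the container to systemd so it comes back after a reboot
    podman generate systemd --new --name web > ~/.config/systemd/user/web.service
    systemctl --user daemon-reload
    systemctl --user enable --now web.service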

Obviously, I'm still not well versed in containers so this might all be overlooking something.


Docker no doubt has a much better UX. It just works and it's very easy to use. With Podman it seems you need to know a bit more about what you are doing. You need to know iptables. With Docker your database container is reachable from the Internet automatically, and it's not possible to lock it down with your own iptables rules. (That could be a disadvantage too.)
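
To be concrete, a rough sketch (standard postgres image, ports are just examples): Docker writes its own iptables rules for published ports, so typical host firewall rules don't apply to them, and binding to loopback is the easy way to keep a database private.

    docker run -d --name db-open  -p 5432:5432 postgres:13            # reachable from outside the host
    docker run -d --name db-local -p 127.0.0.1:5432:5432 postgres:13  # loopback only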


Yes, it's common for better designs to come along after insight has been gained by trial and error.


Use the right tools for the job. I continue to run small to medium scale deployments with a couple dozen containers with Swarm. It's smooth, and just works. Kubernetes would work too, and I use it for newer deployments.

Heck, I use plain old docker-compose to manage production servers with half a dozen containers, trouble-free for a few years now. Compose v2 seems promising, I've been using it for a while and it's pretty stable too.

Swarm could have been an industry in itself, a de-facto standard for orchestration if it had been marketed and supported right. It was a lot more stable than Kubernetes in the early days and things just worked with docker while you'd have to kludge a few things to get Kubernetes doing its thing right.


Kubernetes makes simple deployments look complex, and complex deployments look simple. Docker would be my choice for a simple deployment (like server + db). Only use Kubernetes when you need to do complex stuff; it does make things look simple then.


But did Kubernetes really win the orchestration war, or is it still an ongoing thing?


It won. Right now it's a speeding train. To compete with it, you'd better come up with something that's mind-blowing, or a cultural shift in development.


I am not sure. Almost 80% of the customers we talk to have trouble with Kubernetes. They might be too small to operate it or something.


K8s isn't easy, but if you have more workloads than you can keep track of on a single spreadsheet, you probably could use K8s. I haven't seen a real alternative for managing large enough and variable enough workloads successfully.

One such scenario that I've seen is companies running multi-tenancy services where clients don't share the same databases, apps, etc. There, K8s might be too much.


> One such scenario that I've seen is companies running multi-tenancy services where clients don't share the same databases

That's our situation soon. We're about to start migrating away from a desktop win32 application.

Any good deployment options for such a scenario?

We're talking 400-500 customers, each with their own db, and less than 10k users. On-premise installation must be possible (though "give us a VM" is what we currently do, so that might not be too different).


The other poster made some very reasonable suggestions. The answer largely depends on your app architecture. For example, if it's as simple as a web app + db, then a docker compose setup should be sufficient. Do your best to extract some of the more generic features, such as email lists, storage, and log servers, out of your app and have a shared service for those if possible.

Honestly, managing an on-prem VM is actually one of the easiest things to do, in my experience, if the entire architecture is in a single bundle.
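
For the per-customer case, a rough sketch of what I mean (project names and env files are made up): one compose project per tenant, same images, separate volumes, networks, and credentials.

    docker compose -p tenant-acme   --env-file tenants/acme.env   up -d
    docker compose -p tenant-globex --env-file tenants/globex.env up -d
    docker compose -p tenant-acme ps   # each tenant can be inspected and updated in isolation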

For the rest of your customers, you can still run K8s. If you aren't concerned about scaling, then run the web app + db in a single pod. This is somewhat of an anti-pattern because it limits your ability to scale. If you do want to scale for an individual customer, then move the db into its own pod and connect to it from your web app.

Personally, I would rather run the ephemeral load such as the web app in K8s, and run the databases on VMs to make them a bit easier to manage.


> For example, if it's as simple as an web-app + db

Yes, this is most likely the architecture we'll go for, maybe with a few microservices.

I had suspected all we needed was docker compose, so good to know I'm on the right track.

Thank you all for the feedback, very much appreciated.


Depends a lot on what the new application architecture looks like, but one option is this: https://docs.microsoft.com/en-us/azure/azure-resource-manage...

You could use the same templates to deploy both the vendor-hosted and customer-hosted version of the app.

The physical architecture can be anything. A single container, the PaaS App Service, VMs, etc...


two types of software: those that people complain about and those that nobody uses


It has, for the most part; Nomad is its only real competitor now.


Wow this was a great read! I remember taking a kubernetes class in late 2015 and the instructor then was saying that Kubernetes had already won over Docker Swarm and Mesos.

If I were at the helm of Docker today, I'd focus on Enterprise customers. Provide an on-prem version of DockerHub and you'll convince thousands of companies tired of Artifactory to switch immediately. Don't get me wrong Artifactory is pretty nice, but it's a bear to run at scale on an enterprise level. DockerHub already is handling that traffic, so there's no reason to think they can't port that to an on-prem offering.


Why even bother? There is Quay, which is way better than Docker Hub, or its open source version. And even then there are also Harbor, Nexus3, and, as you mentioned, Artifactory. ALL of them can be bought with support from way bigger companies than Docker Inc. So why even bother?! Docker Inc. basically failed by trying to do everything on their own.


Nexus3 can be a docker registry. It’s built-in.


> The combination of huge amounts of venture funding, a quickly growing competitive landscape, and the looming shadow of cloud industry giants all wanting a piece of the pie created a pressure cooker environment for the young company to operate within.

I actually got an AWS ad on the same page while reading this comment. It's definitely hard to compete when your competitors (with lots of capital) can easily integrate your tech into theirs in no time.


I really enjoyed reading this history of the creation of Docker and the company that supports it (Docker, Inc., previously dotCloud, Inc.)


I almost enjoyed it. I need a better ad blocker. That site sucks on mobile.



I have a better ad blocker, so now it's just white rectangles chasing me down the page, with a big white banner at the top that also chases page scrolls. Were it not for reader view, I wouldn't even have bothered with reading it. I'm dead serious when I say that I think I'll start using InfoWorld as my pi-hole test bed rather than cafemom.com (which used to be my egregiously obnoxious go-to for ad-laden sites).


Do you have reader mode, which is available on many browsers?


Or use Firefox Focus.


How many employees does Docker have, or did it have? I feel like if I got a couple hundred million dollars in funding, I could make it last many, many years while producing what Docker has produced. But of course this feeling isn't based on any entrepreneurial experience, so I'm probably wrong. It still seems like an awful lot of money to shoot through.


VC investors don't give you that money to sit on it. They want you to scale, and quickly.


ugh docker only ever did one thing right, allowed me to run postgres + mysql on my laptop without the DBs shitting on the rest of the laptop

if they had just released a chroot database runner, it would have been as good as what they did

it has never been good at build or deployment. it has definitely never been good at cloud. even kube is still awful at most things

these systems have never been sure if they're configuration languages, buildsystems, plugin hosts, or operating systems, and so they've been bad at all 4


I wonder if there's more room for things like fly dot io but with things like minio/s3 or lambda included as well for bandwidth savings, kind of an "aws lite"


If software development were a real engineering discipline, Docker wouldn't exist. The idea that you need isolated environments to run/develop programs that need different versions of libraries is regressive. Way back in the 80s and 90s Microsoft had backwards compatibility down. Now, in the 2010s and 2020s, every couple of months we can expect a breaking change in the webdev, Linux, or Python ecosystems. Try running a 3-year-old Python app outside a container. This foot-gunning is dumb. But it sure does make work for software developers, who go on HN and crow about how much value they create while messing around in Docker or whatever.


It's important to distinguish between containerization itself - Docker - and container orchestration - K8s. Docker is intriguing as a nice lightweight way of separating your processes - and is (mostly?) interesting to developers. K8s changed the entire trajectory of application management at scale.

We have about 4300 containers in production where I work (at this moment - it can change from hour to hour), and I don't believe "isolated environments that need different versions of libraries" was ever a driver, or even strong incentive for using containers.

Rather - the ability to rapidly add resources, resize resources, re-allocate storage performance dynamically, and use whichever cloud provider made sense (we've worked with AWS, Google Cloud, and now Azure) - and be able to migrate your environments between them in a period of a week - was what made containerization attractive.

Being able to type:

    kubectl scale deploy worker-background --replicas=300
And have 300 worker pods come online in < 60 seconds really changes the power of an individual. Being able to have that done all dynamically (up and down) without any human intervention - that's a game changer in some of the ETL heavy workloads.

If you've ever worked at a large company - think of the effort required to get 300 new 6 CPU/32 GB VMs allocated to a project you are working on. Now consider a company where pretty much all of the engineering / ops employees can do that from the bash prompt.

I'm just looking forward to the day when we can scale the performance of our storage from 500 IOPS to 8K IOPS dynamically as required (it still requires a manual step with Azure - albeit a pretty straightforward one in their portal).


This is a little bit hilarious to me, because I've been through this exact same "reinventing the wheel" cycle about 3 times now.

The cycle goes like this:

    "A mainframe takes too long to buy, it's too expensive!"
    "Here, have an LPAR, you can spin one up in seconds!"
    "Wooaa there! You can't just do whatever you want, fill in this paperwork to get an LPAR!"
    "Getting an LPAR approved takes too much paperwork!"
    "Here, have an Intel server instead! It's not a 'proper' machine, but you can have it in a week!"
    "Wooaa there! You can't just do whatever you want, fill in this paperwork to get a new server provisioned properly!"
    "Getting a server provisioning approved takes too much paperwork!"
    "Here, have a VMware virtual server instead! It's not a 'proper' machine, but you can have it in a day!"
    "Wooaa there! You can't just do whatever you want, fill in this paperwork to get a new VM provisioned properly!"
    "Getting the VM provisioning approved takes too much paperwork!"
    "Here, have a Cloud server instead! It's not a 'proper' machine, but you can have it in minutes!"
    "Wooaa there! You can't just do whatever you want, fill in this paperwork to get a new cloud IaaS VM provisioned properly!"
    "Getting cloud IaaS VM provisioning approved takes too much paperwork!"
    "Here, have a Kubernetes Namespace instead! It's not a 'proper' machine, but you can have it in minutes!" <-- you are here.


I totally dig what you are saying - and I've been through all of these cycles (well, our IBM minicomputer was dedicated to a single application, so we didn't have engineers spinning up LPARS) + a gig at Loudcloud/Opsware (Pre-Virtualization, so spinning up new environments was nowhere near fast enough to be considered on demand). And I've never worked at a company that let me spin up cloud computers myself (though I've obviously done so myself with Digital Ocean, et al.)

I ran an IT organization that had VMware so I've been on that side - but, once again, employees could never spin up their own stuff, and requesting a new one was never a day, usually took a week or so. And I've worked on customer sites where it was a month+ to get VMs provisioned.

So, despite never having worked at companies as enlightened about empowering engineers to do stuff like you have - your entire analysis has a pretty solid grain of truth, and with $2mm+ in cloud costs at the current gig - we may be entering the "Wooaa there" stage of the cycle, but, I don't ever recall having a toolset that was globally available that let people scale up pods/deploy namespaces the way we do here. We have the ability for engineers to clone an entire customer environment, databases and all, do their work on it, issue a PR, and shut down the clone in a period of 24 hours - and this happens a hundred+ times/month. An engineer interested in working in the latest release is a single click (plus filling in a couple fields around names/labels/release) + 3 minutes 30 seconds away from having a complete build of our current app deployed on 12 containers. I don't think any of the earlier cycles had that flexibility.

Also - when I look at the costs - honestly, the engineering environments are essentially round-off error compared to the full scale multi-terabyte, 500+ container customer environments - so empowering them to do all these things has essentially zero real cost impact.

So - while I think there is some fundamental insight in what you are saying - and I genuinely appreciate it - I don't think the flexibility / velocity of k8s container management / deployment (particularly around the dynamic resizing of CPU/memory + replica count) was ever as readily available as it is now - though I'm guessing an SRE qualified on vSphere could jump in right now and tell me everything in K8s was always available in virtualization if you had the right tooling...

Does cause one to think what the next cycle will entail though...


> Does cause one to think what the next cycle will entail though...

I've just spent a couple of years doing various small projects in AWS and Azure. They're... okay. Good, but not great. Proprietary. Closed-source. Etc, etc...

All of that makes me suspect that eventually the cloud will be commoditized and end up being based on a common standard. Kubernetes may or may not become that standard. It certainly feels like the early days of POSIX and Linux starting to slowly take over the proprietary Solaris, HP-UX, and SCO Unix shops.

Something to note is that Kubernetes is already a cloud-within-a-cloud. It has reinvented half of the commonly used Azure/AWS components: resource groups, storage management, monitoring, naming, tagging, DNS, IP allocation, etc...

Why should I define my resources in Azure, and then again in Kubernetes?

You can see where this is leading: new cloud providers will appear, and they will be Kubernetes only. Instead of proprietary template languages, simply upload your helm chart instead. No complex virtual networking, just use IPv6 and let Kubernetes take care of the rest.


As long as you touch 0.0% of their native services, isn't that where we are already, to some degree, with GKE/AKS/EKS? Admittedly, networking/ingress/storage continue to be pretty platform specific. But in theory, as those platforms mature, being able to provide "k8s dialtone" would seem to be attractive.

I'm just wondering what comes after k8s itself - though I suspect we're looking at a minimum 15-20 year runway.


Broken dependencies in 90s-era Microsoft systems was so common that it has its own Wikipedia article: https://en.wikipedia.org/wiki/DLL_Hell


If there were no need for isolated environments in Windows, VMware wouldn’t exist.


> The idea that you need isolated environments to run/develop programs that need different versions of libraries is regressive.

Containerization's value proposition isn't really the isolation part. That's nice and all, but it's not their main selling point.

The main selling point of containers is the fact that they solve the problems of packaging, deploying applications, and configuration, and they do so in a perfectly auditable and observable way.


Again if backwards compatibility were respected then packaging and deployment wouldn't be so complicated.


C’mon, compatibility with the host OS is just one of many deployment issues that need to be solved.

Deployment covers a ton of problems including resource allocation, traffic routing, discoverability, health checks, secrets management and heaps more. Containerisation just provides a unit of management.

I work in Go which has great backwards compatibility, builds static binaries, and since //go:embed you can release a single binary containing literally all resources needed to run an application.

We don’t technically need to use docker to deploy our services but we still need an orchestration platform.

Ironically given the discussion, we’ve chosen nomad over K8s, but nevertheless we needed something, and “backwards compatibility” didn’t really even register.


If wishes were horses beggars would ride.


Spot on, and this is seriously harming Linux distributions and ultimately making FOSS unusable for non-developers.


I'm curious - does anyone have a story on how they used Docker Swarm for something at scale (>100 nodes), but later migrated to K8s successfully at the same scale?


Anyone know any good metrics or analysis about trends of Kubernetes vs other ways to run containers in the not-huge-scale dev world which is 99% of software?


Kubernetes is an absolute joke. Even after being set up right, it rarely works as it is supposed to.


The desire to change the world was stronger than the desire to make money while doing it


Happened to a pair of my Dockers once.


> Hykes does acknowledge that there were tensions between the Docker and Google teams at the time. “There was a moment when egos prevailed. A lot of smart and experienced people at Google were blindsided by the complete outsiders at Docker,” Hykes said. “We didn’t work at Google, we didn’t go to Stanford, we didn’t have a PhD in computer science.

I found this passage interesting. Are these tensions common?


Yes, I graduated from the same school as Solomon a few years later, and I got a lot of "you're just a random dude from nowhere, go back to the playground" vibes during my time in California.

It's quite an American attitude to judge someone's entire life and skills solely based on the college they went to. I didn't encounter such an attitude with my fellow European and Australian colleagues.

Nowadays, I don't care that much because I'm not the little, defenseless graduate student I was, and I do enjoy bringing a Google-Stanford-Californian down to earth when they are bragging about something they have no idea about.


I never graduated from college. I also worked at Google for 7 years and never once got that vibe from my coworkers. It probably varied by office and I was in the Chicago office. But even on my somewhat frequent visits to New York or California offices I still didn't get that vibe.


Might be because you didn't come and say something like "I was talking to my buddy in the cab, and we think your entire work for the last decade could be dramatically improved by this approach and we think this would yield a more practical solution with better performances. By the way, we've only started working in this field last month."

(In some case, it really happens to be true)


Depressing. I went to a basic state school and work at a normal company (not Google) and it feels like I’ll be destined to stay a failure forever.


You aren't! I'm a state-school bachelor's-only don't-even-have-a-CS-degree person; neither of my parents have a 4-year-degree. I had a FAANG job, left it for a better one, own a house in 2/5 of the most expensive cities of the world, etc., etc.

Elitist assholes definitely exist everywhere, but in my experience a lot of what I viewed as "showing off" when I was younger was really just "being normal" for a different "normal". When a 25-year-old software developer talks all the time about Stanford and Lambos, he's not trying to show off, he's just talking about what he knows, just like the guy talking about tail-gating at <insert SEC school here> and then going four-wheeling all the time.

I don't want to devalue experience and education in any way; those things have value! But time is linear and the past has happened -- you just have to move forward with what you have and go after what you want, and don't get distracted by the fact that others started out with a lot more than you. I double-pinky-promise you that if you are good at something, keep getting better at it, and aren't an asshole yourself, you can work at any company you want. :-)


For what it's worth, I have no degree at all, and that didn't preclude Microsoft from hiring me - nor have I ever been discriminated or disparaged for this reason in the decade since. So it might be something that varies from company to company. I wouldn't be surprised that Google, in particular, has these attitudes, as they're known to be picky about degrees when hiring.


I don't have the same recollection Solomon does and I was in the room. The Google guys had an opinionated view of the world based upon experience and wanted logical justification for why their method was not superior.

Solomon's "outsider" comment is ironic since multiple of us in the room from Docker did, in fact, formerly work at Google and go to Stanford. He walked into that room assuming Google would have those prejudices and that's all he could see.


> We didn’t work at Google, we didn’t go to Stanford, we didn’t have a PhD in computer science.

Amusingly, except for the "work at Google" part, Joe and Craig don't have those other qualifications either. Brendan did get a PhD and was even a professor, but I dunno, he gave up on that.


I think what happened with Docker is that it was just too good at what it was originally intended to do and too easy to use for that.

Programmers hate things that are easy to use (by the way I am a programmer, and no I don't feel that way) because if they admit they use them then they might be accused of being users. And of course no programmer will admit this. They don't realize it because it's actually a subconscious psychological issue.

And so what programmers started to do was immediately make it much more complicated and at the same time, take it completely for granted. It was so useful it was like a floor to walk on. And so people started giving it the same level of respect they give a floor.

So for those reasons, Docker became very uncool. But at the same time it was incredibly useful. Solution: make something just like Docker but not Docker, which hip people will be allowed to use without any shame. Make it a bit more complicated and only run on the cool expensive hardware.



