Why doesn’t anyone weep for Docker? (techrepublic.com)
222 points by jumpingdeeps on Sept 6, 2019 | 239 comments



There's this prevalent misconception that Kubernetes is successful because of Google.

Yeah, Kubernetes initially learned a ton from Borg and Google's deep investment into containers dating back a very long time. But, arguably, Kubernetes is successful because Google Let It Go. It's a true open source project, with governance by a wide number of industry advocates, underneath the Linux Foundation.

By comparison, Docker is a VC-backed profit-minded startup. Of course it was going to lose this race, for the same reason Windows isn't the dominant OS in the cloud.

Fundamentally: You can't build a hyperscale startup based on a technology. It doesn't appear to work anymore. The best case is the Docker/Kubernetes or Oracle/Postgres/MySQL case: someone else does it, maybe better, open sources it, a community forms around it, you're toast. The worst case is the MongoDB/AWS or Elastic/AWS case: a cloud provider copies you, probably does it worse, but it's cheaper and more integrated with the cloud, so they still win.

Docker was doomed; they could have been a very nice business, but the issue is taking on huge valuations and capital, scaling like mad, and then finding out you have no ground underneath your feet to support that valuation.


Parenthetically, Linux being an open source reincarnation of Solaris also seems like an example, no?


It would surprise me if Linus even knew about Solaris when he worked on the first release of Linux.

Solaris’ first release was in June 1992, with first use of the name in marketing materials in September 1991 (https://en.wikipedia.org/wiki/Solaris_(operating_system)#His...)

Linus’ famous message was from the same time (September 17, 1991)

Calling Linux an open source version of Minix is more appropriate, but it still wouldn’t be a good example of this.

Minix isn’t dead. It moved to a BSD license, and is deployed in hundreds of millions of Intel CPUs (https://en.wikipedia.org/wiki/Intel_Management_Engine#Hardwa...)


I think that, once you're going that far back in history, it's pretty critical to keep track of the GNU/Linux distinction. Linus just wrote a kernel. And then the GNU userland, which had already been in development since the mid 80s, but was still somewhat lacking a workable kernel, was adopted as the official userland to use with the Linux kernel.

And at roughly the same time, IIRC, Sun decided to migrate their Unix from a BSD flavor to a SysV flavor, which came to be called Solaris, and they also used some GNU bits. Which might explain some similarities between the two.


The first Linux distribution I remember using was Slackware on floppy disks, at a time when building a kernel was pretty much necessary to get a working system. A great learning experience, all in all.

However, the point I'd like to make is that it was clear back then just how much was contributed directly by the GNU project, and also how favored GNU's GPL was by developers who wanted to contribute their work.

GNU really did seem like it was everything that made the Linux kernel act and feel like a Unix system, though I had no proper appreciation of that at the time.


Is this Richard Stallman? ;)


No, he wouldn't have called it GNU/Linux: https://www.sudosatirical.com/articles/richard-stallman-inte...


Not really. Solaris is (was) from the ground up a corporate Unix environment designed to run mission-critical applications. All the management interfaces and features reflected that. Zones were way ahead of their time, also a pretty nice feature.


Solaris is more an example of Oracle's shittiness; it was doing fine under Sun, with plenty of places still relying on it being a commercially supported Unix, even after it was open sourced.


Linux is a more-open and more-pragmatic reincarnation of what MINIX was in 1991, when you couldn't redistribute MINIX source code but could only get it by buying Tanenbaum's book, and MINIX was high-church microkernel design with performance problems that Torvalds got sick of.

MINIX has been BSD licensed since 2000, BTW.


Versions of Solaris are open source as well (Illumos)


Illumos and friends are effectively dead, except as hobbies for enthusiasts. Sun Microsystems, now Oracle, didn't like the reception OpenSolaris got, so they packed up their source code and went home.

Which is really a shame.

All 'True' Unixes are just closed source versions of what originally were open source operating systems. By taking copyright seriously and having the misconception that there is intrinsic value in 'IP', they effectively sentenced their operating systems and investors to a long-term grave.

But I doubt most of them feel bad about it. They got their millions and their nice fat retirements. It doesn't matter now if customers view their once-dominant systems as a sort of technical debt cancer.

If it wasn't for the destructive power that copyright has on technology and the demands of board members to monetize Unix... we would all be using BSD right now. Unfortunately the tech people from 30 years ago didn't understand the power of 'letting go' and thus allowed those forces to destroy Unix.

As far as Docker goes, it has a lot of momentum as a daemon, and it'll probably stay that way for a long time despite some technically superior solutions for running containers that have cropped up in the past few years. The newer container solutions just don't have the community backing them, and that matters.


FreeBSD is a 'True' Unix and it is as open as they come.


From a trademarking stance, the only 'True' Unixes are those that pay to get certified against the Single UNIX Specification. Of those, two are commercial Linux distributions, and the rest (including macOS) are proprietary Unix operating systems.

https://en.wikipedia.org/wiki/Single_UNIX_Specification#Curr...


> someone else does it, maybe better, open sources it, community forms around it, you're toast

Oracle is still doing very much okay in spite of superior technology, open source, and community, if we are talking about PostgreSQL.


This article seems to be arguing that Docker’s primary downfall was being hostile to its open source community. Without having an opinion on whether that’s true, I suspect the core issue was not that but their business model and execution.

Before Kubernetes was the dominant container tech, they were pushing Swarm, but I remember being confused about where Docker “standalone” stopped and where Swarm began. Perhaps it would have been better as a separate tool with a clearer open core model?

Then there was Docker Hub, whose UI was never great and which always seemed light on features.

I don’t recall seeing any kind of container introspection tool from them for a while either, despite others coming out.

Meanwhile, they represented a threat to the cloud providers if you could truly run anything in a container on any cloud. But the cloud providers all neutralized that threat by the classic “commoditizing the complement” strategy where the Docker cluster and registry tech were all either open source or commoditized.

Once Kubernetes emerged as the winner and de-valued Swarm while the cloud providers all offered their own Kubernetes and Docker registry offerings, I’m not sure how much more profit there was for Docker to claim.

Honestly, startups are hard. Sometimes really hard. It’s hard to know if a different team would have gotten different results in this space.


Totally agree with your first paragraph. Regarding execution, I'm not sure what Docker could have done differently that would not have led to the outcome we have today. I don't think that playing nice with other open source devs would have made a difference (as the article claims).

Also, the claim that K8s was hardened at Google is BS. The ideas, maybe. But I am quite skeptical about the amount of internal Google prod code that went into early K8s (please don't point at the Borg paper; I'm talking about actual working code). I recall doing a deep comparison of Swarm vs K8s circa 2015, and Swarm was clearly superior in both design and implementation. Today, K8s is better and has an ecosystem. Maybe the issue isn't whether core Docker containers played nice with open source; rather, Swarm should have focused much more on playing well with others.

One point of contrast is HashiCorp: they are in the workload orchestration and management space and seem to be doing really well. Kudos to them!


Kubernetes (from open sourcing to about 1.3 or 1.4) is a second system mostly written by senior engineers (from several companies) with deep experience in the problem domain and strong architectural guidance, and a willingness to stop at “just good enough” and then let stuff mature. Kube was mostly “done” from a design perspective in early 2015.

Swarm was 2-3 people in the early days, without as much strong opinionation about what exactly they were building, which meant that while it was a tighter, simpler system, it couldn’t evolve as easily.

I’m obviously biased - I was the first non googler to have commit on the repo. But it’s much easier to build something when you know upfront exactly what it looks like and you have a set of committed and experienced engineers with good leadership.


> I’m obviously biased - I was the first non googler to have commit on the repo.

How well was your PR received?



I fail to see how an alleged threat of universal cloud platform compatibility was neutralized by commoditizing the services you mentioned.


At least Docker is not going to benefit anymore, right?


My experience agrees with this.

I'm a huge fan of Docker, I've actively taken part since the early days, attending meetups and using it actively day to day.

Unfortunately, when I brought several issues to GitHub, or +1'd other people's issues that were affecting usability within our company, the attitude was very much "f* you and your problems", because Docker wants things to be one way and that's how it'll be.

There were issues raised 4+ years ago that are still open, for solutions to problems that would have mooted any need for us to use something like K8s (which doesn't work for our requirements anyway).

I believe Docker locking the community out of valuable features has also done harm and (possibly) failed to be the monetiser they'd hoped for.

After so long, I no longer go to Docker to solve problems that could be solved in Docker (secrets anyone? without the "hacks"), and just look towards the other tools solving the problems.

I'll continue to use Docker, but I don't consider it a friend.


I wonder how much of that attitude was caused by an overstretched team with no effective scaling mechanism in place. No, "open source" does not automatically mean "scalable team". Part of Kubernetes' success is its ability to scale up the community, empowering multiple entities to meaningfully contribute.


Scaling is an issue with any company, but judging by a friend's description of working there, there's almost certainly a large company culture / attitude factor as well.


I don't have an answer to the why, but I can say that in the 5+ years I've been using it daily, the issues I've faced have been echoed by others, and GitHub reflects this.

Some of the issues are solved by the likes of K8s, but many of us don't want or need K8s for our use cases. Other issues are resolved by other tools, and yet still others are only resolved with effort (you still can't expose/bind a port range, e.g. 10000 UDP ports, in Docker without killing your server).


At the same time, I gather it's really, really hard to write software that satisfies a lot of use cases across a lot of businesses without having to be somewhat opinionated.


True! Another part of Kubernetes' success is the fact that its core architecture is sound, scalable for both workloads and features.


I maintain a couple of docker network plugins. One day Docker devs decided that they would bundle an internal DNS resolver inside the docker daemon and take over the resolv.conf in every container. But here's the most amazing part. It only did this for custom networks. If you just used docker out of the box you would never see this. But if you used custom networks, and custom network drivers, it did this. And the docker team refused to allow any configuration options to disable it.

I now maintain a daemon whose sole job is to undo Docker's meddling with resolv.conf, until I can get the bandwidth to explore migrating to Kubernetes.
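For anyone who hasn't hit this, a quick way to see the behaviour described above (a rough sketch; "demo-net" is just an illustrative name):

    docker run --rm alpine cat /etc/resolv.conf
    # default bridge network: the container gets the host's resolvers untouched
    docker network create demo-net
    docker run --rm --network demo-net alpine cat /etc/resolv.conf
    # user-defined network: resolv.conf now points at Docker's embedded DNS
    # server on 127.0.0.11, with the real resolvers hidden behind it
    docker network rm demo-net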


I ran into a piece of software that was basically impossible to containerize because of this weirdness.

It wanted to ‘mv’ over resolv.conf but couldn’t so we had to run it outside the container because it was proprietary and we couldn’t readily modify it.


> ..., I no longer go to Docker to solve problems that could be solved in Docker (secrets anyone? without the "hacks"), and just look towards the other tools solving the problems.

Are you talking about k8s or do you mean something like bitnamis sealed secrets?


Hashicorp's Vault and a bunch of your own code, and/or some of their other tools, e.g. Consul.


The problem I have with Kubernetes is the following: I, as a small developer and small server owner, don't have the resources to even get started. The first thing I see with Kubernetes is a cluster. Why a cluster? Do I need to cluster my Raspberry Pis to get something out of it? Do I need to buy 3 servers just to run 5 containers?

In Docker it's easy. Download Docker. Start a container. Install a container manager like platformio. Done.

But true to the article, Docker seems very hostile towards the community and towards getting revenue. If I think about Kubernetes and revenue, I hear IBM, Red Hat. And I am too cheap of a person and too small of a customer to ever need those guys. So I will still keep using Docker.

And because I know more about Docker, I will probably try to use it at work as well. Easy as that.

But I am open to suggestions.


> as a small developer

Do you need kubernetes?

I know the hype cycle is mad for copying big tech, but if Stack Overflow can operate on a couple of IIS instances, I’d argue that you almost never need Kubernetes.


People jump on bandwagons.

Ten years ago you started with PHP + memcache + MySQL running on just a physical box running Linux. No Docker or Kubernetes. No virtual machine, for what it's worth. Then you split it into multiple PHP frontends with a load balancer, or a MySQL master and slaves, as the traffic demanded it.

I think a lot of systems these days are over-engineered and overpaid-for from day one. It may cost you 10-20 times what you would pay if you kept the "old days" approach.

That said, Docker and Kubernetes are nice. They give you a lot of flexibility. But the most important part, I think, has been that they consolidated the shift from "pet" to "cattle" servers, which simplifies a lot of the sysop and sysadmin work.


Disclaimer: I get paid to set up k8s clusters for the bandwagon folks.

From my perspective, k8s is really good for a company that has a team of experienced SREs who can manage k8s well, plus a marketing team driving new development that needs to get to market fast.

It is not cheaper than simple autoscale groups, and it is not easier than rebuilding packages with Jenkins (RPM spec files or Debian src rebuilds).

It is, however, the currently popular framework, so if you want to ride the wave, learn it. Also learn how to migrate away from it, as that will be a future role; several early adopters are already pulling back out of it.

The larger issue is the simple fact that automating cloud infrastructure is not trivial, so abstraction layers let more people implement things without understanding the lower levels, giving folks the appearance of having a complete framework.

It works great until it doesn't. Then it is interesting watching people try to figure out how to fix it.

Learn the lower levels of Linux and k8s will come to you organically


In your experience, what is the strongest point of evidence leading to the successful utilization of k8s?

> the company has a marketing team driving new development that needs to get to market fast

At what point in the growth curve? Are your SRE shipping the new features, or running like a red queen to keep the product developers able to keep shipping?

What development process or cycle has overheated with friction that requires k8s? Where is that friction which k8s relieves?


:)

Successful utilization of k8s, interesting metric, what is your criteria ?

>At what point in the growth curve?

I have seen properly staffed startups (in silicon valley) leverage the platform to pivot direction fast, but overall I would say you need a large enterprise to support it. Ironically, the large enterprises which could benefit the most want to lay ITIL on top of k8s and call it agile, and the methodologies conflict, creating more issues than they had.

> Are your SRE shipping the new features, or running like a red queen to keep the product developers able to keep shipping?

Yes to both

>What development process or cycle has overheated with friction that requires k8s?

Upgrading k8s :)

>Where is that friction which k8s relieves?

What k8s relieves, once you get a pipeline, is getting product to production fast; the approval process from legacy ITIL groups is the problem. (I have yet to see an enterprise company use k8s properly, though I have been to presentations where some claim to have done this.)

If someone attempts to use ITIL and k8s you are going to have problems. Training people to NOT do this is the #1 issue in my experience.


All my mentions of issues or friction in my previous comment were about pre-k8s teams' experiences... I'll try to clarify a few key cases in the following:

> Successful utilization of k8s, interesting metric, what is your criteria ?

Sorry, what pain motivated using k8s, and using k8s relieved that pain.

> I have seen properly staffed startups (in silicon valley)

What is 'proper' for staffing?

>> Are your SRE shipping the new features, or running like a red queen to keep the product developers able to keep shipping?

> Yes to both

Is it the appropriate use of a Reliability engineer's skills to develop or change arbitrary features? Or are you saying they are at least Engineers so they should be able to pitch in everywhere... CSS accessibility features or k8s config.

>> What development process or cycle has overheated with friction that requires k8s?

> Upgrading k8s :)

This tautology keeps me away from k8s. Before k8s what pain in the process required using k8s to solve?

> a pipeline getting product to production fast

Does k8s make it fast? Does k8s make it possible? I think I'm deploying "fast", without k8s... But because you keep mentioning ITIL I suppose it's more about the infrastructure changes? Or are you talking about ITIL tension because k8s requires a more liberal policy than ITIL allows?

There is so much assumed context when talking about k8s, the only comprehensible part of these discussions is k8s is a rabbit hole to end all rabbit holes.


What I've learned from the comments above is that the k8s consulting business is the best thing since OOP. Just tell the marketing team you have a magic button that, once installed, will give them the ability to change the course of the business every 5 minutes by handwaving and a PowerPoint slide.


>Sorry, what pain motivated using k8s, and using k8s relieved that pain.

A CTO / CIO / $SOME_C reads a blog and decides they want k8s; that is how it is usually introduced.

> What is 'proper' for staffing?

Experienced C developers who can do operations (like Google-level SREs).

Summary of k8s: if you use the proper methodology to implement k8s and you are in a cloud framework, you likely don't need k8s :)

What shines about k8s is that it leverages the current container fad to ship code faster, because devs like it. The current container fad pushes the burden of supporting buggy code onto operations teams. If the operations teams hit issues, they had better be highly skilled in order to figure them out.

I have yet to see a company let requirements drive the choice to use k8s.


I don’t think kubernetes is nice because I’ve seen more than one small team throw a lot of resources at it and fail.

I’m not attacking Docker, however; I can see why you would want containers. We still haven’t found an efficient use for them at my place, but then we never need to spin up more than one instance of our software. I think Docker can sometimes be a way to sneak unsafe software past operations, but that’s more of an anti-pattern than an issue with Docker.

Then again, I’m probably old and grumpy, but that sometimes has the advantage of not adopting techs before they are easy to use.


Could you elaborate more on the failures with Kubernetes? I'm considering using Docker/Kubernetes in our production, so hearing warning stories would be extremely helpful.


For me containerization was always about deterministic environments and ease of deployment instead of performance and clustering. But even with these advantages I am currently not using any solution for that.

For cloud services this is probably a good idea, even for users to a degree if the provider doesn't already give you a fitting box.

But otherwise it is not a must have in my opinion. Maybe that is a mistake and the apps I develop today are not going to work in 10 years. Well, worst case: I have to be paid again.


> For me containerization was always about deterministic environments and ease of deployment instead of performance and clustering. But even with these advantages I am currently not using any solution for that.

You can get 99% of the way using a stable distribution and a configuration management system (ansible, chef and the like). It's much much simpler than running an orchestration service. I feel most people don't need containers and orchestration, just config management running redundant system designs.


To be honest, I only very recently got to know Ansible and related tech, so I may be missing an opportunity to learn something. Even so, I think you are forgetting the DEV part. With Ansible and Chef you can make a deployment to the real infra. With containers you can have the infra locally in your DEV environment and have clean slates. The similarity of the DEV environment and production is crucial for devops. There is nothing more annoying for developers than having something work locally and then needing some weird quirk for production/CI. A lot of political infighting and hate for devops.

I saw this as tech lead for the build system in a Fortune 500 company. Ah, they have a Red Hat based distro. Ultra stable! Problem is, nothing from outside the company works out of the box, leading to blessed machines. A disaster that led to so many unofficial workarounds that it is not funny. Lol, the kernel is so old it cannot run Docker :) Ubuntu is better, but ultra-stable machines will tend towards massive customizations that are very hard to keep when you finally want to upgrade. It was very common to reach end of life of LTS distros, and then have the server upgrade turn into a nightmare due to the long evolution that happened in the meantime.


> With containers you can have infra locally in your DEV environment and have clean slates.

True, but you can do that with plain system containers such as with lxd, rather than having that bundled with the huge paradigm shift that Docker comes with.


My experience with lxd is very limited. Actually I worked with liblxc, which is the underlying technology, and I kind of disagree with you. The paradigm of lxd is much more foreign to me than Docker. I am pretty familiar with my application and the distro of the container from a user perspective; I am definitely very insecure about cgroups and kernel namespaces. In the end my application is connected with my business/work orders. Kernel minutiae are not, and the technical skill requirement is much higher. That will put a higher price tag on my team's human resources.


> The paradigm of lxd is much more foreign to me than docker.

The paradigm of lxd is pretty much exactly the same as the paradigm of a regular distribution installed on bare metal or inside a VM. If you can operate a regularly installed distribution, then you can operate inside a lxd container. The commands to create and destroy lxd containers are trivial ("lxc launch ubuntu:bionic" for example).
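A rough sketch of that workflow, with "dev1" as an arbitrary container name:

    lxc launch ubuntu:bionic dev1     # create and start an Ubuntu container
    lxc exec dev1 -- bash             # get a shell and work as on any Ubuntu box
    lxc stop dev1 && lxc delete dev1  # throw it away when you're done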

> Kernel minutiae is not and the technical skill requirements is much higher.

I'm not sure why you think you need to know kernel minutiae, cgroups or kernel namespaces. Operating lxd needs none of that.

> I am pretty familiar with my application and the distro of the container in a user perspective.

That's all you need.


So, speaking as a dev, I feel like Docker's killer app is that it makes the config management a lot easier.

Dockerfiles give you a fairly easy and consistent way to express, "The runtime environment needs to have Python 3.5 and these packages," in a format that doesn't introduce too many concepts over and above the basic command line junk you'd use to manage your environment without Docker. If your stack requires multiple services, docker-compose gives you another fairly easy way to describe what all goes into that. And then it gives you a _super_ easy interface for starting and stopping all those services, keeping track of what you have running, all of that.

(And it's all fairly disposable, which is nice, since, as devs, we tend to break things. TBH, if Docker has done nothing else for me, it's that it's turned nuking PostgreSQL to get back to a clean install a 10 second process instead of a 30 minute one.)
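To make that concrete, here's roughly what the Postgres case looks like for me (a hedged sketch, not my exact setup; version, password and ports are placeholders):

    # docker-compose.yml:
    #   version: "3"
    #   services:
    #     db:
    #       image: postgres:11
    #       environment:
    #         POSTGRES_PASSWORD: devonly
    #       ports:
    #         - "5432:5432"
    docker-compose up -d    # throwaway database, up in seconds
    docker-compose down -v  # -v discards the data volume: back to a clean install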

It's not really that simple, and I've spent my fair share of time screaming at Docker for being flaky and having confusing under-documented configuration. (And I don't think I'd use it at all if I were working on a platform that weren't so annoyingly susceptible to systemic dependency hell. But worse is better, so the unix philosophy won, so here we are.) But eventually you get over that hump, and it starts feeling fairly easy to understand.

I don't know Chef, but I've seen Ansible used in production, and it just doesn't seem nearly so attractive. It could just be how it's being used, but it felt like there was this infinite regress of complexity where everything was tied to something else and you have to have been the person who built it to understand it, kind of like the bad old days when people were trying to put too much smarts into the database itself so they'd just become this rat's nest of triggers and whatnot. I'm sure it's not that bad. . . but my initial impression was that Docker is great for scratching a developer's itches, but slightly sucks for ops, but maybe is still worthwhile there if you're dealing with microservices or elastic scaling or something like that and you can use Kubernetes to smooth over some of the flakier bits. Ansible is much more for ops, and does a great job there, but I don't see it scratching many dev itches at all.


> So, speaking as a dev, I feel like Docker's killer app is that it makes the config management a lot easier.

It starts with a Dockerfile, which is a limited shell script, and it does not get any better beyond that. Shell scripts are simple, I'll give you that, but please don't sell them as some magic bullet. Dockerfiles are no configuration management system.

I've seen my fair share of hairy chef, puppet and ansible in the wild. Don't read into config management from those. I've also seen beautiful ansible installs, which deploy from dev setups all the way up to full infrastructure setup and deployment with blue-green deployment.


> Dockerfiles are no configuration management system.

Y'know, we might violently agree. That is a much more concise statement than my rambling attempt to explain why I think Docker is so much more palatable for development workflows.

You're right, it is no magic bullet. And I misspoke when I said "configuration management"; I forgot that that's a term of art in operations. By "management" I really just meant "stick it all in one or two files so I can get my checklist down to one step, and manage shared packages in a way that's at least a little bit less kludgey than simply abusing environment variables." So I find that it saves some yak shaving, and for that I can deal with it under certain circumstances.

I actually hate using it for deployment or production config management, because IMO it seems to do a crap job at it. And it does a crap job at it precisely because of the features (or lack of features) that make it convenient for development. Even using it to manage our integration tests' runtime dependencies is kind of a hot mess. But I'm willing to concede that, together with Kubernetes, it might be nice for cloud-native elastic scaling microservice-y stuff, insofar as it seems to be popular for that. I don't actually know firsthand; I'm allergic to complexity, so try to avoid building things that way.


> For me containerization was always about deterministic environments and ease of deployment instead of performance and clustering

This! I come from the embedded world with a bit of web UI, and having Yocto for embedded reproducibility and Docker for the infrastructure, I am so happy. No more uncertainty when moving to another machine or upgrading my DEV machine. Nope, everything running right everywhere. Some scriptology and I had a full boot-up from TFTP for the kernel and an ephemeral NFS root from an ext4 master, and I could reliably run full system component tests all the way to web browser experience validation. I even had an autopilot simulator spawning ephemerally in a Docker container. Restarting the containers gave me a clean env again. Pure peace of mind and productivity. The initial investment was big though.


Am also an embedded dev. What toolchains do you use, and how do you set them up using Docker? Can you provide some sort of guide? Because, as you already said, the initial investment is pretty high.


I use embedded Linux, not RTOS chips. The toolchain is generated by Yocto. Yocto is an embedded-Linux-from-scratch distro creator; it creates images and an environment for cross development. My connection with Docker was not so much toolchain related as system component test related. I modified the testing harnesses of Yocto to start Docker containers that serve a kernel zImage (over TFTP) and a copy of the pristine ext4 image over NFS. With some program scriptology I wrote in Python, I was able to have serial boot log expects as well as boot time monitoring. I made a write-up about it, but it seems not many people dabble in such topics as SCTs for embedded devices [1]. I must say that embedded development is a kind of lonely experience, compared with, for example, the Docker and web developer communities.

Regarding the investment, it is an embedded industry problem. There is very little re-use. My experience is that we are an industry of wheel reinvention, where all of us are deep experts so we roll everything on our own. Maybe I speak against myself, as I indeed developed these harnesses even though other solutions exist. My quibble with what exists is that Intel absolutely owns, or used to own, Yocto and the embedded tooling open source projects. This led to very crappy, Intel-specific code being half merged into upstream. Mind you, their features really don't work, and were accepted because Intel is a gold Yocto project sponsor. When you try to replace the broken code with something more sane, you cannot, because then the otherwise good rules about making small changes will bite you back and your changes will not be accepted. So you need to wait for Intel to cave in and remove their broken features. I digress, sorry :). Even so, the Yocto project is a great step towards embedded project productivity and knowledge re-use.

[1] https://www.reddit.com/r/embeddedlinux/comments/bk8a8k/yocto...


I'm currently using k8s in a staging/uat environment (running on a tiny 2 node cluster). So far, I've found it to be super useful in CI, as you can use container orchestration to emulate the larger production IaaS stack with much lower cost and time overhead than, say, a real terraform deployment. And if your team is already drinking the "cloud-native" kool-aid and wanting to use AWS lambda or a similar service, I'd argue it would be a much better investment to deploy those types of workloads as kubernetes jobs or with a framework like OpenFaaS[1] on kubernetes, giving the team much greater flexibility and avoiding vendor lock in.

[1] https://www.openfaas.com/


Yeah, for my side projects I just use gitlab CI + docker compose.

Builds use the dind images on gitlab's runners to build an image and push to their container registry.

For deployments I have a host with a personal CI runner instance on Linode's smallest instance type which can access a user on the "production" host when SSHing over a private network, and has the docker-compose command allowed in the sudoers file. Then it can run docker-compose up to deploy. The key for this is passed to the job via gitlab's secrets UI so someone getting read access to either of my hosts wouldn't be able to do anything.

While people will rightly point out that this does mean the CI builder effectively has root on the "prod" host, for a side project it's enough for me. I might investigate podman/buildah some weekend when I have time as apparently that allows for rootless container launches.
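The deploy job's script boils down to something like this (a sketch; the $DEPLOY_SSH_KEY variable, the "deploy" user and the paths are made-up placeholders):

    eval "$(ssh-agent -s)"
    echo "$DEPLOY_SSH_KEY" | tr -d '\r' | ssh-add -   # key comes from GitLab's secrets UI
    ssh deploy@prod.internal \
      "sudo docker-compose -f /srv/app/docker-compose.yml pull && \
       sudo docker-compose -f /srv/app/docker-compose.yml up -d"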


I've used Docker Compose and Swarm for small-scale stuff too - it's really easy to work with, doesn't use much CPU for management (something k8s is/was notorious for - not sure if that's still valid?), the docs are pretty good, and there are a gazillion YAML templates on GitHub to use as a reference.

Compose and Swarm can take you pretty far, but TBH, it felt like Docker gave up on them years ago, even before k8s "won" the container orchestration war. A real shame :(


I don't need Kubernetes, no. But what do I use? What's there to automatically deploy my software when I push in a simple and reliable way? All the other stuff I can do myself, but I want something to deploy stuff to servers.


"git checkout X && git pull" ?


If I wanted to bring production down I'd just rm -rf / the database machines instead of using half-measures like that.


I'm just saying you don't need fancy CI-du-jour tech for a home project. If you're on the Atlassian stack, Bamboo does pretty solid deployment/test/rollback. With a proper staging and QA environment, even a large corp doesn't need much more than git.


I've never used Bamboo, so maybe that's the solution, but I disagree with your comment otherwise. No one should ever need to SSH to the servers to deploy things, and I would prefer to keep the hand-rolling to a minimum. Is there an OSS solution that will take care of that? Maybe Nomad would work.


>> I as a small developer and small server owner...

You'll never ever need either Docker or Kubernetes or even the latest and greatest javascript frameworks.

I started running a server before I knew anything and that server is still purring along happily.

But if you're ever targeting an enterprise, either as a freelancer or as an employee, those words are invaluable in your resume.

It of course helps if you actually know about those technologies!

And when you do get to learn them you'll wonder why everyone is coming full circle!


> You'll never ever need either Docker or Kubernetes or even the latest and greatest javascript frameworks.

I agree with "no need for k8s and the latest JS frameworks", but strongly disagree on not needing Docker. It is extremely useful for setting up separate development instances for your projects - no matter if you're doing PHP development with N different versions of PHP (as some sites may still be stuck at 5.6 while others are already requiring 7.2 due to Composer dependencies) or, worse, nodejs and Java where each project will have its own requirements for node, Tomcat and whatnot.

I personally set up one mega-container for each project which runs all the services required - mysql/pgsql for the database, apache as frontend / mod-php, if needed Tomcat - and can simply shut them down when I'm done working on a project instead of having the databases and servers all consuming memory and resources all the time.


>It is extremely useful for setting up separate development instances for your projects

This. And not only the ability to have different software and versions for each project, but also the ability to more or less match the production environment in your dev container for each project.


The point is to deploy immutable images that were cleanly rebuilt from scratch and tested prior to deployment, rather than upgrading an environment, which becomes risky after some time. Another added value is automatic actions based on container watchers; Traefik, for example, will self-configure on the fly when you just spawn a container with the hostname in a label. If deployment becomes that easy, then why not leverage GitLab's dynamic environments feature and deploy to $branchname.ci.example.com, so that you can have, say, a product owner review the development prior to merging to master (which would deploy to staging if you have an aggressive CD strategy - which I do).
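As a rough illustration of the label-driven part (assuming Traefik v2-style labels; the container name, network, image and hostname are all placeholders):

    docker run -d --name review-mybranch --network web \
      --label 'traefik.enable=true' \
      --label 'traefik.http.routers.review-mybranch.rule=Host(`mybranch.ci.example.com`)' \
      myregistry/myapp:mybranch
    # Traefik, watching the Docker socket, picks up the new container and starts
    # routing mybranch.ci.example.com to it without any manual configuration.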


When you break fast and move things, Docker is invaluable. I have more than one product and something like Docker makes it tractable for a single person to support.

Still, I run it all on one server using Docker so there is a medium :-)


How often are you spinning up new servers? In one to two hours, I can configure a production-ready Debian server from a base install with firewall rules, correct network interfaces, cron jobs, all dependencies and tooling, monitoring, Postgres, and my application server (either C epoll-based or Spring Boot) sitting behind an nginx proxy. I would consider myself neither a sysadmin nor particularly fast at configuring Linux.

One to two hours per project multiplied by potentially one or two QA/UAT environments, and this is a tiny blip in the total time spent developing a solution. I think it would take me a few years before I paid back the debt of learning docker/kubernetes for it to actually start saving me time as a freelancer (or if multitenancy suddenly becomes out of the question).


At my job I regularly spin up ~1000 servers to test some workloads or do some data processing. Would be a pain to do that manually, and spinning up 1000 servers isn't significantly more expensive than 100 or 10 but it is significantly faster.


I actually run all my docker services on one 16-core server.

Great thing about Docker is a commit is all that's needed to reproduce everything since I also use terraform to deploy the infrastructure.


Docker and Kubernetes are on completely different layers. You can't do those comparisons.

Kubernetes is like an operating system, Docker is like a format for the executable files.

There is no problem with servers. We're running Kubernetes in production on a single Linux machine, essentially using Kubernetes as an alternative OS.


Kubernetes seems to be intended for large, complex environments but yours seems to not meet that category. What have you gained by using it?


(Not OP) I'm running a single-node Kubernetes cluster because it provides much-needed isolation between services as well as, and that's the biggest part, a single and simple way to have everything in one place.

I can duplicate a service I'm running 1:1 with a new version for testing in 2 minutes, I can tear it down in 2 seconds. I can roll back changes in one command, I can wipe the server and reinstall everything from scratch in 25 minutes.

The environment is completely reproducible, and I can with a single command see every config that applies to a service.

All the usual deployment trouble is gone, no more weird setups and config situations. All the weirdness is nicely encapsulated.
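The one-liners I'm referring to are nothing exotic; roughly (with "myapp" as a placeholder):

    kubectl apply -f myapp/                  # create or update everything for the service
    kubectl rollout undo deployment/myapp    # the one-command rollback
    kubectl get all,configmap -l app=myapp   # see everything that applies to it
    kubectl delete -f myapp/                 # tear it all down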


This. Debugging k8s issues can be tricky, but you do get the above-mentioned benefits, and in my opinion it can make small teams very productive. It's also a great skill to have.


What did you use to create the single node "cluster"? I was using k8s for a similar purpose but had to create 3 nodes at the time (>2 years ago).


Sounds too good to be true. What challenges have you faced by using Kubernetes?



Precisely. I'm reading the article and not grasping the technical point.

Mainly because the point is non-technical; while I'm not sure that Google is exactly a saintly organization, things like the Summer of Code and giving K8s away have procured some goodwill for them.

The whole article is over the head of Joe User (me) who is competent to use Docker but nowhere near deep enough to contribute code to it.


> Why a cluster?

For availability. If you have promised 99.9% or more availability to your customers you probably need some sort of redundancy or hot failover. Kubernetes is a good option to get that.

If you don't need 99.9%, simpler is better.

> Too cheap

You can get a single node Kubernetes cluster from Digital Ocean for $10 per month.

That's silly, because a single-node cluster doesn't have redundancy, which is the main reason for Kubernetes, but it lets you get started. A minimum redundant cluster would have two nodes and a $15 load balancer, so $35 per month at DO.


Just use Docker.

Let others run to Kubernetes, it doesn't sound like the right tool for you. That's ok!


Why bother with containers at all at that scale then?

We keep on using plain old VMs, while watching everyone rush into the container fashion.


I like the tooling better and suspect that many other developers feel the same. Docker is more like managing and configuring software libraries and dependencies. You just declare what type of environment you want and it's there. If you change the version number of a dependency the old image is discarded and a fresh one is created. The Dockerfile is managed with the source code.

VirtualBox feels like installing a regular computer. It takes a long time and is a lot of manual work. If you want to change something you login on the existing VM until you reach a point where you no longer remember all the changes you've made over the years. The machine is unclean.

I realize that there are solutions out there for automating VM deployments but Docker did a good job of catering, and perhaps marketing, to developers.


> VirtualBox feels like installing a regular computer. It takes a long time and is a lot of manual work. If you want to change something you login on the existing VM until you reach a point where you no longer remember all the changes you've made over the years. The machine is unclean.

Unfair comparison. People running on VMs usually rely on configuration management to do the install+config part. Think of stuff like Ansible as a Dockerfile for VMs/bare-metal.


Config management isn't a magic bullet. At the core, every config management platform is manual work in a for loop. Lots of layers in the case of Salt/Puppet to make it a bit more ergonomic, but you're ultimately still on the hook for all the server maintenance. No config management is all-encompassing, and so without extreme diligence you will lose the state of your servers over time.

I use Ansible all day every day -- it's not comparable to container tooling in the slightest.


Containers aren't a magic bullet either. Surely the core of container systems is also "manual work in a for loop". You're right that the default 'ephemerality' of containers is (or can be) a plus but config management tools don't require one to treat one's servers as 'pets'.

> No config management is all-encompassing and so without extreme diligence you will lose the state of your servers over time.

I'm not sure exactly what you mean by "all-encompassing". Surely all of the popular config management systems have 'escape hatches' to let you do anything you could do from a shell, which seems pretty "all-encompassing" to me. But if you mean the config management system don't completely enforce a specific 'state' for managed servers then of course you're correct. Containers don't do that either, beyond periodically replacing running containers with new ones created from a base image (and even that's almost certainly not perfect either).

Unless your container hosts are entirely managed by someone else (and probably even not then, in the fullness of time), you're always going to "lose the state of your servers over time". There's (almost always?) some state somewhere that has to be explicitly managed and thus requires, generally, "extreme diligence".

I do agree that containerization and config management are very different but, like everything, it's 'just' another set of tradeoffs to be made, hopefully depending on one's actual or expected needs and wants.


> I use Ansible all day every day -- it's not comparable container tooling in the slightest.

Only if you are not actually cleanly managing containers. With containers you just pile the shit into them, close the doors and say "Oh look, we have a clean surface!" which is certainly fine in the dev.

If it is fine in production, then it does not matter if the shit is in containers, VMs, dedicated servers, etc. Container may as well be curl http://mylservice/containername.img followed by dd if=containername.img of=/dev/sdb ; set-boot-flag-sdb ; reboot


Docker manages dependencies and libraries as well as whoever created the docker images manages those dependencies and libraries.

Most of the developers that are suddenly becoming more productive with Docker do the equivalent of a full install of the distribution that they run in production, call it a container, and get the claps from management because "we managed to do it faster". Never mind that the surface for brokenness is now even higher than it would have been on the VM the container runs on, had that VM gotten the full install of the distribution.

> VirtualBox feels like installing a regular computer. It takes a long time and is a lot of manual work. If you want to change something you login on the existing VM until you reach a point where you no longer remember all the changes you've made over the years. The machine is unclean.

That's because in this approach no one bothered to do the equivalent of what one does when creating a Docker image - write a chef/puppet/salt/ansible baseline configuration so the new VM is nothing other than "base VM + special config for this function".


Might be; in large organisations we just put in an order and get our VM.

I have more experience with VMware and Hyper-V as type 1 hypervisors; VirtualBox always felt a bit underpowered.


Containers are just groups of processes, essentially a chroot that isn't limited to the filesystem; I find them much simpler than full VMs, which are overkill for a single server.

It's not like containers are new technology, even on Linux; we were using OpenVZ a decade ago. Now they're just integrated into the mainline.


Sure; after all, I used HP-UX vaults back in the day. However, in the context of Java and .NET application servers they hardly bring anything new.

And we can use native containers if we actually need them, so I see such tooling more as yet another consulting wave.


Deployment and transfer of environments between people.

Using docker images means that we can take the thing that developer A built and that "runs on their machine" (without necessarily having that developer on hand) and have developer B easily launch it on their machine without having any dependency or isolation or reproducibility issues, and after making some trivial changes put it in a new, fresh production server without having to ask the original dev how it was/should be configured.


We do that with VM images, WAR/EAR, RPM, NuGET, MSI packages.


You don't need a whole cluster to get started; you can use Minikube locally, for example: https://minikube.sigs.k8s.io/docs/


If uptime is not a concern and only one node is intended, docker-compose is a far more efficient solution to any k8s solution.


It might be more efficient but it's definitely not a solution to get up to speed with k8s


I'm sorry, I somehow missed that the top-level commenter was interested in the one-node k8s to get started in it.


Have they solved the high CPU usage problem while it is running (even when no containers are inside)? I don't like my laptop hot, so I never could make the switch.


They haven't. As an example of where this lack of efficiency comes from: recently I fixed an issue and discovered that the etcd component of k8s uses periodic (10 sec) liveness checking, where for every check (a launch of the etcdctl client), the `runc` binary (from containerd) is executed 3 (three) times. You can imagine this probably just scratches the surface.


Ugh! Was the fix committed upstream?


Yes, this one was a systemd detection in runc, which was causing big log flooding and runtime overhead due to runc being executed so frequently. I initially cached the checks, but later the devs removed them altogether. Still, this is not solving the inefficiencies in k8s. https://github.com/kubernetes/kubernetes/issues/76531


Yes, it's a real issue. Kubernetes burns too much RAM and CPU even with no workload.


Docker on macOS also suffers from that issue though.


It will use more on MacOS and Windows than on Linux, as it needs to use a sidecar Linux "Moby" VM to run containers. In practice on Windows, this hasn't been an issue for me (64GB RAM in my dev laptop).

This will actually finally change soon'ish with Windows at least, I think as part of WSL2. Not sure if anything is going to change on MacOS.


Not for me, except for the RAM blocking - I wish that Hyperkit could dynamically allocate and release RAM for the guest machine instead of blocking 2GB (in my case) by default all the time.


I haven't noticed that.


--vm-driver=none helps a lot with the CPU and especially RAM usage; using k3s instead of minikube helps more.


We've had the same problems - we didn't need all the overhead of Kubernetes. Have you seen BeeKube? https://beekube.cloud - it is a managed container platform, similar to a PaaS but with containers.


Docker is more than enough for most setups TBH; Prometheus and whatnot will work just as well. I would never recommend K8s locally; if you can even avoid Docker, it's a win.

It's just a matter of preferences, where you want to invest your time to learn etc... Kubernetes being the fully fledged "state-of-the-art"

I would just recommend the work of Stefan Prodan, whose repositories and blog are full of open and well-thought-out devops work, even if you are just interested in K8s or only Docker.


You can run a single-node cluster. Not the best option, and overkill if you don't need redundancy, but it might save you some time deploying software. Honestly, at that point you might be better off running your software using docker-compose or something like that.

For development, microk8s is actually very nice: https://microk8s.io


I feel that until you have at least a clusterful of services, you shouldn't add that much complexity to your applications. It's much easier to manage a single tower for as long as possible, and slowly migrate to a new one when the time seems ripe.


I was looking at https://k3s.io/ as a better configuration management solution than just pure Docker, because I like Kubernetes: I've used it, I know it.


Dabble with LXC on Linux


Can anyone offer a good guide to DevOps for people who don't directly use these tools but work with engineers who do and would like to learn more? The whole ecosystem of servers, cloud infrastructure (and all of the different offerings there), Docker, Kubernetes, CICD tools etc is a bit overwhelming to get into.


Sure. None of the things you mentioned are DevOps.

DevOps is two things:

1. Applying the methods of modern software development (version control, automation, DSLs...) to operations (provisioning, config, deployment, monitoring, backups...).

2. Reducing silo barriers between devs and ops groups so that everyone is working together as a team, rather than blaming each other for poor communication and the resulting messes.

Then there are all the DevOps hijacking attempts, such as equating it to Agile or Scrum or XP, or insisting that it's a way to stop paying for expensive operations experts by making devs do it, or a way to stop paying for expensive devs by making ops do it, or a way to stop paying for expensive hardware by paying Amazon/Google/$CLOUD to do it.

No matter what your software-as-a-service company actually does, it will need to execute certain things:

- have computers to run software

- have computers to develop software

- have computers to run infrastructure support

You can outsource various aspects of these things to different degrees. Anywhere you need computers, you have a choice of buying computers (and figuring out where to put them and how to run them and maintain them), or leasing computers (just a financing distinction), or renting existing computers (dedicated machines at a datacenter) or renting time on someone else's infrastructure. If you rent time, you can do so via virtual machines (which pretend to be whole servers) or containers (which pretend to be application deployments) or "serverless", which is actually a small auto-scaled container.

Docker is a management scheme for containers. VMWare provides management schemes for virtual machines. Kubernetes is an extensive management scheme for virtual machines or containers.

A continuous integration tool is, essentially, a program that notes that you have committed changes to your version control system and tries to build the resulting program. A continuous deployment system takes the CI's program and tries to put it into production (or, if you're sensible, into a QA deployment first).
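To make that concrete, here is a deliberately naive sketch of what a CI tool boils down to (the repo path and build commands are placeholders):

    while true; do
      git -C /srv/ci/myrepo fetch origin
      if [ "$(git -C /srv/ci/myrepo rev-parse HEAD)" != "$(git -C /srv/ci/myrepo rev-parse origin/master)" ]; then
        git -C /srv/ci/myrepo merge --ff-only origin/master &&
          make -C /srv/ci/myrepo build test &&
          echo "build OK: hand the artifact to the CD step"
      fi
      sleep 60
    done

Real CI tools add build isolation, history, notifications and parallelism on top, but that's the core loop.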


At last, someone who gets it. Absolutely nailed it. Great answer. I never log into my HN account anymore, but for this response I just had to say: yes. Well said.

When you boil down the Cloud, DevOps, CloudOps, SecOps, *Ops, CI, CD, containers, VMs, and all the other technologies we've devised over the past ten years, you always end up at the basic building blocks.

You eventually come to the conclusion that all we're really doing with all these new tools is adding software layers on top of those building blocks in an attempt to make them easier and faster to consume.

And how have we done overall?

Not bad, if you ask me. Some solutions are overkill for most people (K8s is an example of overkill for a startup and even an SME). But Terraform, Ansible and GitLab (CI) are something I'm currently developing a highly opinionated video training course on, because I believe they strike the right balance of improving on prior experiences without taking the absolute piss.


I am a developer, who also dealt with ops in a small business context. I agree with Ansible striking a good balance between prior experience and the future of automating server configuration.

I did a write-up on how I used it on my blog: [link redacted]

The workflow worked really well, provisioning Vagrant servers in staging and Digital Ocean droplets in production.


Thanks, I appreciated this blog post - I've struggled to get started with Ansible before, and this was just what I needed!


Nice write up. Good job, mate.

I moved away from Vagrant in favour of Terraform, but I agree Vagrant still holds its own and is a great choice (HashiCorp really nailed it, eh?)


I agree with _dsr, he really nailed it. As for GitLab I agree they got the balance right except that it feels quite Kubernetes centric.


There is a lot of flexibility in the CI/CD configuration. You don't need to use the Kubernetes stuff.


Curious about where you see Ansible fitting in while using Terraform - could you expand on that? Everywhere I have thought I would need Ansible to scratch an itch, it has turned out that Terraform has that functionality in some way (through null_resource, runners).


Terraform is for managing infrastructure. Ansible is for managing configuration. You can argue they're the same thing, but I disagree.

I believe in one tool to do one job really well.

Terraform is excellent at provisioning and managing infrastructure due in part to its DAG and HCL. On the other hand Ansible has been tuned over the years for managing configuration and the state of anything and everything from the OS upwards.

I also believe in using building blocks to get to where you're going, and these two bad boys click together quite well.


I guess what I'm questioning is the place of configuration management tools in a world of increasingly managed services where the server is not patched by you. In those cases, it makes no sense to me to patch individual containers through automation versus updating the image and pushing out the artifact to the service so all containers everywhere are updated and there's no checking for variance in state since all are running the same (updated) image.


Personal experience only, but like Dockerfiles, Terraform is only good for provisioning until it's not.

Once your VM or container hits a complexity point above trivial, ansible is very much a useful tool for provisioning container states, and specifically for patching container images to, eg. include security updates.

...beyond that, as in the intended use case of dynamically updating multiple live machines in parallel... dunno, I don’t use Ansible for that... but it beats the hell out of having a single monolithic batch script to set up a container. I use it for that purpose all of the time.


I guess that's the disconnect for me. Why would I want to update individual containers when I can just push out a new image and have automation rotate my services? Individually applying security patches at the container level also means there's probably SSH access as well, something I am quick to remove in environments in which I encounter it.

For host based security patches (if I'm in an environment where the servers aren't managed), adding an item to the crontab in user data usually handles that, and again any fleet-wide changes would usually be propagated by updating the user data, pushing out the change and having automation rotate the fleet.


Just some minor clarifications:

- DevOps is a peer with Agile and Lean. Scrum and XP are Agile implementations. Scrum doesn't prescribe ways to code, XP does.

- 90% of what people develop or run today should be in containers, and not because containers are great, but because of the DevOps patterns of IaC, immutability, reproducibility, homogeneous environments. Whether you run them on your laptop, a VPC, AWS Fargate, a K8s cluster, etc is dependent on your business needs.

- Continuous Integration and Continuous Delivery aren't so much tools as a practice, and they're more complicated to implement at scale than just using a tool. There are some great books on the subject.


>> 90% of what people develop or run today should be in containers, and not because containers are great, but because of the DevOps patterns of IaC, immutability, reproducibility, homogeneous environments

Sorry, but no. Containers are __a__ way of achieving a small part of what you are talking about, but not the only way.

Break it down:

- IaC: how do you containerise a load balancer? Terraform gives you infrastructure as code without containers.

- immutability: VMs, AMIs are immutable just like containers are (discounting the entropy that happens in every OS)

- reproducibility: Same; VMs, AMIs, Terraform, Ansible all give you that

- homogeneous environments: Not sure what you mean by that; your Cisco or Juniper firewalls are not running in Docker, so I am pretty sure you already have a "heterogeneous" environment, if that is what you meant

I absolutely disagree with this idea that we need containers for the reasons you just mentioned.


> - IaC: how do you containerise a load balancer? Terraform gives you infrastructure as code without containers.

1) Terraform should be run in a container so that it will actually behave the way you expect, and 2) containers are application environments built from a Dockerfile, which makes them IaC.
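A quick sketch of (1), assuming the published hashicorp/terraform image and credentials passed in via the environment (adjust for whatever provider/backend you actually use):

    # Run Terraform from a pinned image so every machine gets the same
    # binary and runtime environment.
    docker run --rm -it \
      -v "$PWD":/workspace -w /workspace \
      -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
      hashicorp/terraform:1.5 plan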

> - immutability: VMs, AMIs are immutable just like containers are (discounting the entropy that happens in every OS)

True. But containers are easier and more portable, which is important to supporting the other aspects involved. Containers thus are a better general solution.

> - reproducibility: Same; VMs, AMIs, Terraform, Ansible all give you that

Containers and VMs just... work. They're just collections of files. Very reproducible. Not 100% - you may need different guest drivers/kernels, different arguments to run your container in your particular system. But they're conceptually and operationally simple.

Terraform and Ansible are garbage fires of reproducibility and immutability. I could write a book on all the different ways these tools fail (most of it stemming from people trying to use them as interpreted programming languages, but also their designs are crap). There are whole frameworks built around Terraform and Ansible just to make sure they work right. They are overcomplicated, fragile bash scripts, and I'm quite frankly sick of using them. I think their entire existence is evidence of a huge gap in understanding how we should be operating systems today. [/rant]

> - homogeneous environments: Not sure what you mean by that [..] I am pretty sure you already have a "heterogeneous" environment

Those are opposites; homogeneous means "of uniform structure or composition throughout", heterogeneous means "consisting of dissimilar or diverse ingredients or constituents".

A homogeneous environment in a DevOps sense is when all environments have the same components and are operated the same way, and thus provide the closest results possible. This is incredibly important to prevent the classic "Well, it worked on my machine!" dev->production breakdown.

Homogeneous environments apply to lots of different things, but in the context of containers, they ensure that the environment the dev used to build the app is the same as what is in production. They also ensure that any scripts, tools, etc will use the same environment, if they are run in containers. I've wasted so much time in my career "correcting" heterogeneous environments in a bunch of different ways, whereas with containers the equivalent fix is "Please run the correct container version. Thanks"

The more systems you have, the more important this gets. At a certain point, the best choice is just to use baked VMs or containers for everything, everywhere, and containers are just so much easier, almost exclusively because Docker shoved so much extra useful functionality in. (I'll add that I do not necessarily like containers, but I do find them to be the most useful solution, because they solve the most problems in the most convenient ways)


Please do me the favor of engaging with what I wrote.


I thought I did? Also, this feels a bit passive-aggressive, did I do something wrong?


It was meant to be polite. Under the guise of "clarifying" what I said, you completely contradicted it, without even doing me the courtesy of addressing my statements directly.

If it helps, my core point is:

DevOps is the name we give to two philosophical ideas. The first idea is that the tools and methods of software development can be used to improve our ability to do operations work. The second idea is that siloing people with operational skill away from people with development skill is a terrible practice.

Along the way, I specifically denounced the idea that DevOps is a single methodology, or that some tools are more DevOps than others, or that DevOps makes prescriptions about what you should do. Those are all things that you immediately advocated.


I wasn't sure how much your ideas deviated from mine, which is why I said "clarifications"; but you're slightly incorrect. DevOps isn't two philosophical ideas. It's lots of things, and all those things are the methodology. There are many books, podcasts, blogs, conferences, etc that go over all of the things DevOps is. It actually has little to do with tools or software, even though that's basically what it was created around. It is a general methodology, and you pick how you implement it. You can even apply DevOps to non-software processes.

Look at it this way: The Toyota Production System isn't about cars. It was developed specifically to produce cars as well as they could be, but it doesn't address "car problems"; it addresses business problems, production problems, workflow problems. It applies methods as practices in ways that are specific to the production of cars, but you can apply the principles of TPS to things other than building cars (as we do with Lean).

DevOps is comparable to TPS (well, Lean), but for software instead of cars, and it borrows from other systems, and it has a few of its own ideas specific to software.

> Along the way, I specifically denounced the idea that DevOps is a single methodology, or that some tools are more DevOps than others, or that DevOps makes prescriptions about what you should do. Those are all things that you immediately advocated.

I advocated using containers because they help reinforce DevOps principles better than alternatives. You don't have to use them, but that doesn't make them un-applicable to DevOps. There are different levels to DevOps, and one of them is "practices": particular ways of doing things that DevOps encourages, such as Infrastructure as Code, Immutable Infrastructure, Homogeneous Environments, Continuous Integration & Delivery, etc. Things that containers are more useful at accomplishing than, for example, VMs.

You don't have to use Kanban to run a car production line. But it's more TPS than the alternatives.


Right, it's not a clarification: one of us is objectively wrong.


I think I understand how my comments came off now; sorry for that, and thanks for the clarification.


I wish I could upvote this multiple times. Thanks for sharing your knowledge!


After you accept that all these tools were built for overworked and stressed people who don't really have the time to learn things deeply, it becomes much easier. In fact, most of the programming ecosystem and systems administration works like this.

Try focusing on "what do I want?", get a superficial understanding of how the tool works, then try to apply that knowledge to your search engine query.

For example: Say you know that docker has images and containers. That means it is somehow going to install an operating system into your operating system and make an image. Then you will copy your program into that image. Then you will start a container (an instance) based on that image. And this is basically all you need to know about docker to start searching for how to do things. Like "how to build a docker image?" or "how to start a docker container?".
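In practice those two searches land on something like this (the image name, tag and port are placeholders):

    # Build an image from the Dockerfile in the current directory, then
    # start a container (an instance of that image). "myapp" is made up.
    docker build -t myapp:latest .
    docker run --rm -p 8080:8080 myapp:latest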

Another example: you know that integration testing means running your servers and running tests against them as if they were in actual production, and that continuous integration is a service that runs your integration tests every time someone merges a branch into a monitored branch in a version control system. From here on, you are able to look up how to set up a monitored branch, how to create a build machine and how to scale it.


This is how you end up with low-staying-value understanding built on shaky, fragile footing.


"And what the heck do you call an act like that?"

"I call it 'modern software development practices!'"


You can, but not necessarily will. It can be valuable (to an extent) to abstract certain parts of the process away from new users or employees to aid in bootstrapping. That's the message I get from GP's comment. What these tools can't do is replace experts who can get in between the commands to diagnose the root cause of an issue, rather than taking stabs in the dark at a cryptic error message.

These tools for DevOps are no different than the tools and tutorials for developers. It's fine to copy-paste a tutorial that launches an entire webapp from scratch just to get your project off the ground, but if you don't eventually learn what that 10-minute autorun.sh script is doing behind the scenes, you'll fall behind.


Debatable. I think this sort of investigation is what leads to a more robust understanding, compared to some abstract, non-applied information that has dubious relevance at the moment.


This is how I get air humidity experts commenting on approaches to IT problems.


What most enterprises think is DevOps is different from what dsr_ wrote.

They have admins that maintain their pool of servers.

They have developers that are fluent in the stack of their application.

They decide they need to have some of that cloud, containers, CI/CD stuff.

Turns out they need people who can write code that builds their programs, tests their programs, packages their programs, provisions cloud infrastructure, sets up that infrastructure, deploys the packaged program on their infrastructure and finally monitors its health and performance.

Most of their admins say they are not programmers, so it's not their job.

Most of their programmers say they are there to write Java/C#/Python/JS, so it's not their job either.

They find some people who don't mind learning all these things and call them their DevOps team.

In a not perfect, but generally just world this team disseminates their knowledge across both programmers and admins, making both aware of each other. Programmers now think about the infrastructure they need, admins now think about the workloads their infrastructure runs.

In an unjust world, you end up with three silos. Programmers say their code compiles on their machine, admins say they have installed the new server, devops frantically try to build some pipeline that deploys that code on that server.


>> What most enterprises think

You mean the companies where we regularly fail at every aspect of engineering?

dsr_ summed up pretty well how Amazon and a bunch of other companies think about DevOps. Coincidentally, these companies produce the highest grade of software, tools, services, etc.

>> Most of their admins say they are not programmers, so it's not their job.

What you are describing is the 90s approach to IT. These companies disappear really fast. IT is changing, just like agriculture changed a long time ago. Toffler talks about this in The Third Wave.

Old approach: let's do everything by hand. New approach: automate most of the things you can.

>> In an unjust world, you end up with three silos. Programmers say their code compiles on their machine, admins say they have installed the new server, devops frantically try to build some pipeline that deploys that code on that server.

I migrated countless companies from the 90s approach to the CI/CD world; they never looked back. You just think that, because there are late adopters, this world is going to exist indefinitely. I do not think so.


I was not talking about IT companies that produce h/w or s/w goods and services. I was talking about companies that produce other kinds of goods and services and insource or outsource IT services that support their value chain. They don't necessarily feel the same pressure to improve. I agree they are late adopters, but late adopters aren't stragglers. There will still be enough of them in 10 years time.


I'm currently in the process of developing a video training course that teaches Terraform, Ansible, Packer and GitLab (CI) as a set of interwoven, dependent tools. Is this something you feel would scratch your itch?

Would you be willing to have a quick chat? I'd pay for your time, of course. I need to gather feedback from people looking to develop their skills in the CloudOps space and understand what it is they're looking for.


I have been looking for an entry point to start learning containerization and other fancy, related tools that I've been reading about on HN in the past couple of years. I mostly write backend code/scripts, so I'm very new to DevOps stuff.

The immediate need for me is to package up a C#/.NET web app and its components (DB, etc.) into a container so that I can deploy it on any big-name cloud provider (Azure, AWS, Google Cloud). Now after reading through the comments in this HN post, I am not sure if I should choose Docker or something else. If you have any suggestion, I would love to learn. I'm more than happy to provide you with feedback and such (even for free) if I can learn from your tutorials. Thank you.


I think I would start by looking at how to craft that container by hand. You'll understand the fundamentals better if you do it that way. Once you have a grasp of those, research the tools that then do it for you.

If you deploy a container to a Cloud provider, you'll first need to set up and understand the container engine as well. Not a bad thing to learn, for sure, but again, start small and from the bottom up.

My tutorials won't cover containerisation because, frankly, I believe containers are overkill for most situations. Sure, they're fast and so on, but a slower golden image and a simple EC2 Instance in an Auto Scaling Group is easier to understand and manage, and can be just as easily orchestrated.


I'd suggest starting with just scripts, e.g. Bash, PowerShell, anything – that's assuming you have automated builds for both your web app and its components first. (So, automate builds of your app and all of its components first if you don't already have that.)

Pick one cloud provider first and write some setup scripts, i.e. scripts to build an initial (minimal) environment, 'from scratch', for your app and its components, e.g. create a new EC2 instance for the web app, upload your build package to it, etc. Write the scripts so that any 'secrets', e.g. your AWS API key, are provided as either environment variables or regular command line arguments.

Then, still for the first cloud provider, write some update scripts, i.e. scripts to update your web app and its components.
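A minimal sketch of what the AWS flavour of those scripts might look like (every name here is hypothetical, and secrets come in via the environment rather than the script):

    #!/usr/bin/env bash
    # setup.sh: build a minimal environment from scratch for one provider.
    set -euo pipefail
    : "${AMI_ID:?set AMI_ID}" "${KEY_NAME:?set KEY_NAME}"

    # Provision a small instance for the web app.
    aws ec2 run-instances \
      --image-id "$AMI_ID" \
      --instance-type t3.micro \
      --key-name "$KEY_NAME" \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=webapp}]'

    # update.sh would then just ship a new build to the same host
    # (APP_HOST and the service name are made-up examples):
    #   scp ./publish/app.zip "ec2-user@$APP_HOST:/opt/app/"
    #   ssh "ec2-user@$APP_HOST" 'sudo systemctl restart app'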

Assuming your app, or its deployed 'instances', are relatively small and intended to serve a modest load, I'd suggest starting out treating the cloud servers or services as 'pets', i.e. entities you distinguish by name and for which you would be 'sad' if they 'died' (crashed or shutdown). At larger scales, it's often worth treating servers as 'cattle', i.e. a mass of nameless entities, but you probably won't need that at this point or anytime soon. (You'll know better tho.)

As for "containerization and other fancy, related tools that I've been reading on HN in the past couple of years", they're just like any other software – tools that can be used but aren't ever strictly necessary.

Containers are, basically, virtualized OSes, and they can be (very) useful. In my opinion, they're most useful as a way to bundle components of an app with an OS (and other OS components). That you can run those containers in different environments and be reasonably assured they're (mostly) identical can be a big benefit. But there are associated costs too (as with anything)! But if your app and all of its components can comfortably fit on a single (virtual) server, the cost of any changes you need to make to your app to run inside them is probably not worth the (currently) modest benefits.

And all of the other "fancy ... tools" are generally even more a matter of tradeoffs you'll need to make. Once your app, or (production) instances of it, are distributed over several, or many, servers, and you start adding things like load balancers, caching, separate search services, etc., then the benefits of the other 'fancy' tools will start to make more sense.

But, like with many things, it's good experience to directly run into some of the issues that containers and the other fancy tools aim to solve, and really try to solve them yourself with DIY solutions, before committing to use yet another program or tool to do it for you.

Of course, if you just want to learn those tools, and you'd like to use your own app as a 'motivating example', that's perfectly reasonable and valid too. I would recommend, tho, not leaning on those tools for your own immediate needs unless you really need them.


nice guide to devops linked at the bottom of the article https://www.techrepublic.com/article/devops-the-smart-person...


I think the only way to learn them is to use them. You can do that locally or purchase a VPS and work there.


Maybe this isn't quite the perspective the article's taking—but damn near no one visibly wept for LXC when Docker stomped all over it in terms of “what people think containers just Are”. And now the news asks why I don't weep for them? Live by the stomp, die by the stomp.


I weep for Solaris Zones and FreeBSD Jails. Granted, I don't really have much experience with them; I do have some experience with containers on Linux via Docker, and also with constructing a minimal container runtime in C (not OCI compatible or anything). But my point is that there was a lot of work in this area before Docker, and especially in the case of Zones, which are freely available today in illumos distributions, it is completely overlooked. I mean, I could be completely missing something here, but Joyent for example seem to have made some really good innovations with Manta, i.e. spinning up containers to run UNIX-pipeline-equivalent jobs directly in the cloud, on the data. But as with illumos vs Linux, Zones vs Docker and Joyent vs AWS/GCP/Azure, it seems to me a David vs Goliath kind of battle, even if the tech is better.


As do I. Solaris in general and Zones in particular are so much better. There just wasn't an ecosystem around it. Solaris was too late to make the shift to open source. It might not matter; had they done so "in time" it might have killed them anyway!


LXC feels more UNIXy. Docker command line tools and formats feel awkward in that regard (which helped to popularize the thing by pushing this one specific view).


I agree. I had some VMs that I wanted to turn into containers. With LXC it was a breeze, and the result is very much like a "lightweight VM". Docker seems more like putting a single application process in a container, which is a very different thing. And if I want to do that, I'll seriously consider running the application in a unikernel (e.g. OSv) instead.


If you are trying to use Docker to build lightweight VMs you are really swimming upstream. In the (Docker, etc.) container world they use the phrase "cattle, not pets". Containers are designed to be stateless (nothing is stored in the container) and to be spun up and down as demand changes.

https://devops.stackexchange.com/questions/653/what-is-the-d...

If you want to build a mini-VM using containers, LXC is a great choice. If you want to deploy software, easily, with CI/CD and [auto-]scaling, then Docker-style application containers are what you want.


By the way, the lightweight-VM project these days is LXD, by Ubuntu/Canonical.


Yeah, LXD is what I have been using. Maybe I should have said "LXC/LXD" instead of just "LXC".


LXD is fine. You can run K8s on it.


I don't get why k8s is the dominant scheduler. If you have a 3-6 person platform team that can set one up, or build a secure Terraform or CFN codebase to establish an AWS/EKS system, they can be nice. But I've also worked at DCOS/Marathon shops where it worked just as well.

The trouble with all these schedulers is that they can't go from just one node (where scheduling and processes run on the same node... and minikube is a hack, not a production system) to 100. You can't just set up a small k8s, then add a node, and another node, and scale up. You go from a single Docker system to a big managed k8s system.

There needs to be more competition. It's the same deal with the dominance of systemd as the only system layer. Only the small startups seem to be using more lightweight stuff like Nomad, k3s, RancherOS (Rancher is mostly going the managed k8s route anyway, even though they have their own k3s implementation).

A running k8s system can be okay, but there is a lot of room for improvement (in terms of making it simpler). Both DCOS and k8s seem to waste a lot of resources. Docker could have competed in this space, but everyone complained about all the bugs in Swarm and it never really went anywhere.

I did a writeup on container orchestration systems late last year:

https://penguindreams.org/blog/my-love-hate-relationship-wit...


For a very long time there was a gaping security hole in Docker: anyone who could run a container could mount anything on the underlying host as root. This says to me that Docker (the company) don’t really consider any use cases beyond “fooling around on a personal laptop”. Meanwhile other container projects took seriously from day 1 that they would need to run in production.

Docker (the company) certainly helped to raise the profile of containerisation but they invented very little of it and did a poor job of implementing what they did do. Good riddance to them.


A couple of things :-

You can still mount filesystems as root from a container, if you have Docker command rights. In Docker's security model access to run docker commands on a given host == root, that's a design choice AFAIK, not an oversight.

It's perfectly possible to mitigate that issue by restricting who can run containers, and also by ensuring that all containers specify and use a non-root user account (or enable user namespaces at the Docker daemon level).
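Concretely, two of those mitigations look something like this (the UID/GID and image are examples only):

    # Force a non-root user at run time...
    docker run --rm --user 1000:1000 alpine id

    # ...or bake one in with a USER directive in the Dockerfile.

    # User namespaces are a daemon-level setting, e.g. in
    # /etc/docker/daemon.json, so container root maps to an
    # unprivileged range on the host:
    #   { "userns-remap": "default" }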

Also, many early stage technologies don't prioritise security. For example, for several early releases of Kubernetes all you needed was remote access to a single port (10250/TCP) and you could get root access to the underlying host without any authentication...


If you run as a non-root user in your container, it makes working with volumes a pain. Who knows what the container user's UID will map to on the host, and whether this host user, if any, will have permission to access files in the volume.

Otherwise you can hard code a UID when creating the user in the Dockerfile but that means your containers aren't generally portable.

In the end, the path of least resistance is to run as root within the container and simply accept the security implications if using volumes.


In the Dockerfile, get UID and GID as ARGs, and make sure those variables are available in your host environment. Then when creating the user in Dockerfile, use that UID and GID. Volumes will work like a charm.
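A minimal sketch of the host side of that (the default IDs and image name are made up):

    # Pass the host user's UID/GID into the build so files written to bind
    # mounts stay owned by you. Assumes the Dockerfile declares
    #   ARG UID=1000
    #   ARG GID=1000
    # and creates its runtime user with those values.
    docker build \
      --build-arg UID="$(id -u)" \
      --build-arg GID="$(id -g)" \
      -t myapp:dev .

    docker run --rm -v "$PWD":/app myapp:dev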

That's what I am doing for local development setups with Docker.

See https://github.com/a2way-com/template-docker-laravel/blob/ma... and its README.


That means your Dockerfile is portable, but your images are not, which is what your parent is referring to. It's a friggin' mess. It's still the same as when I started using Docker.


> Docker (the company) don’t really consider any use cases beyond “fooling around on a personal laptop”

Much worse: Docker was never built with security in mind but the company kept pushing it as production ready.


That's not a gaping security hole. Only root users can run docker containers. Users added to the docker group count as root users, and the documentation explicitly tells you that.


That's only a problem if you allow untrusted users to operate the docker daemon.


That's only a problem if you allow untrusted users to operate the docker daemon.

Sure, if you trust every developer in your company with the root password anyway, why not? That might be true at Docker (the company), I don’t know. Certainly wasn’t true at one company I worked at with 30,000 devs...

By the way, this problem does not exist with competing container tools like Podman/Buildah.


Why would you be allowing devs to directly deploy to production in a company with 30,000 developers?

Surely you'd have proper release management where Ops teams would review deployment artifacts before deploying them?


You wouldn’t but you wouldn’t give every dev root access to every dev box either... would you?


Not necessarily every dev box, but I'd say that in most environments it's reasonable that devs would have full permissions on any of "their" dev boxes/VMs. If you split boxes/VMs across devs instead of sharing them, then the access would be limited to whoever is assigned to own that box, but they'd have full access. I mean, if something breaks, it should be trivial to reset that machine or get a new one; VMs can be cloned and spawned in seconds, and there's no reason not to spend an hour once so that you automatically get a fully working dev environment with all the tooling needed.

In any case the notion of "the root password" seems weird, root passwords should be unique (even for VMs), randomly generated, and mostly not used; in most situations you'd use publickey authentication instead of passwords.


I'd have developers working in Dev VMs on their laptops, and sure I'd let them have root access to those.


In my experience, having worked at two developer-tools companies where we wanted to partner and co-market products, Docker would never pick up the phone. There was definitely a "we don't need you" attitude whenever I approached them, and I had the same experience repeated to me by friends at other companies trying to do the same thing.


This is the core reason why they are where they are.

They acted like you were bothering them and in fairness you probably were. Even today they wouldn't pick up.

When you start believing the hype reality becomes distorted.


The kinds of activities we were approaching them about were integrations, conferences, blog posts and webinars - great ways to get leads and remain in the zeitgeist.


Google doesn't actually use Kubernetes much, so the "operation hardened internally" argument isn't valid.


This reminds me of Joel Spolsky's fire and motion piece (https://www.joelonsoftware.com/2002/01/06/fire-and-motion/). To paraphrase a little bit:

> Fire and Motion. You move towards the enemy while firing your weapon. The firing forces him to keep his head down so he can’t fire at you. ... The companies who stumble are the ones who spend too much time reading tea leaves to figure out the future direction of Google. People get worried about kubernetes and decide to rewrite their whole architecture for kubernetes because they think they have to. Google is shooting at you, and it’s just cover fire so that they can move forward and you can’t


Felt the same about Polymer. Starry-eyed devs open a pre-webpack build terminal, import the if (no else!) and for keywords, build DOM nodes manually and tolerate a terrible debugging experience. YouTube loads a few seconds slower in Firefox, since it gets served a polyfill of a runtime and a slower build; Chrome had a head start with a native runtime. Sites built by starry-eyed developers simply break on Firefox.


Perhaps @thockin can comment since this was on his Twitter some time ago, but I believe Google is starting to use it more and more internally. He would know, since he's been involved since the beginning.


source?


Anecdotal, but I know a guy working as a senior Engineer at Google with some central server-side components for the Android ecosystem. He had barely heard the word Kubernetes when we last spoke, let alone knew what it was.


For the record, the Google developers I know also don't use it and generally don't care about it. But it's a big company, I'm sure they have a few users


Kubernetes is based on Borg which Google uses extensively internally.


"inspired by" borg. It's not the same codebase.


I didn't say it was the same codebase. "Inspired" may be more accurate but the point still remains that the concepts in Kubernetes have a direct path to the concepts in Borg.


I think the reason they failed is that they tried to make a platform for microservices, but microservices are an anti-pattern; people really just wanted containerisation.


Companies don't have feelings. The only ones weeping are the VCs that invested into Docker :-D

Docker played its role and introduced the majority of developers to containerization. This is a major success for the industry.


Whenever the topic of building online open source communities comes up, I feel compelled to share the work of the great Pieter Hintjens, the guy who wrote ZeroMQ. He wrote a book about this topic which I thought was quite good: https://www.goodreads.com/book/show/30121783-social-architec...


Pieter was an outstanding writer. Everything I've read from him was top notch, from his ZeroMQ guide to the last blog posts explaining how he was dealing with the unthinkable process of getting his affairs in order because he knew he'd die soon. I'll definitely add this to my reading list.


I also feel sorry for Docker, in a way. Was it their arrogance, or just incompetence?

They came up with this amazing tool that a lot of companies started using, but they did not have a business strategy for how to make money in the long term. They tried to keep up (Docker Swarm, Docker Hub Premium, Tutum, Moby, Docker Community vs Docker Enterprise, etc.). But in the end they just seem like they don't really know how to approach it.


What did Moby try to do anyway?


A rebrand that made things more confusing.


Is it dead now? It's still in GitHub, so as Docker.


AFAIK it's a different name for the same Docker.


Hmm hmm...


No, nobody weeps for Docker. But, everybody cheers for `docker`.


I jumped on the container bandwagon late and immediately fell in love with Docker. It built on my existing skill set so I was quickly able to get something up and running.

Then I started tinkering with Docker Compose and for a while things were great. But after a while I started running into issues. Compose felt artificially crippled. No secrets? No health checks? Pushing me towards Docker Swarm?

Eventually I just sucked it up and switched to Kubernetes even though I think it's overkill for my applications.


My thoughts exactly. You cannot scale Docker. You can to a point, but there's always k8s looming ahead of you, saying, "sooner or later you will have to learn me instead". So most people learn it sooner than later to migrate to it while their workloads are still small. Of course, they often don't grow large enough for the benefits of k8s to kick in, but that's another story.


Weep? Docker's downfall? What happened?

As far as I can tell, everybody and his grandmother is using Docker. Why should we weep about it?


'Everybody' is using docker the software, 'nobody' is using (ie paying) Docker the company. The article makes it clear they're talking about the company.


My small company pays for private image storage there.

We actually used Google Cloud Platform's Docker Image hosting service, and that was expensive.

Yay!?

What else can we pay them for? All the stuff we need is available from them for free, except private storage. If they had a container hosting solution, we'd pay for it.


yeh we pay them like $15/month for Hub/Cloud/Private Image Storage/whatever it's called this week.

It actually seems quite cheap...we have something like 2TB of tags up there, and they don't charge for network I/O. I did feel slightly bad when a hidden crash looping pod set to always download lay undiscovered for a month...that's a LOT of I/O.


> 'Everybody' is using docker the software

Not at all - thank goodness.


Nah, plain old VMs over here.


What's interesting, to me, about Docker as a company perhaps not doing well is how that'll impact Microsoft.

Microsoft have done a load of work on getting containers running well on Windows servers and that work relies on Docker EE as the container runtime engine (you get a free Docker EE license to run on Windows servers AFAIK)

If Docker get bought up (by someone other than Microsoft), then that would seem to possibly place Microsoft's container efforts at risk...


Microsoft has already heavily invested in getting Kubernetes running on Windows as a first-class citizen: https://docs.microsoft.com/en-us/virtualization/windowsconta...


The only thing Docker is now useful for is Docker Desktop. Unlike other desktop container software, it actually works on locked down machines in enterprise environments.

K8s can run on any CRI-compatible runtime, and IBM/RedHat don't even want you to install Docker on RHEL8.


Whilst k8s can run on any CRI compliant runtime, I've never actually seen a prod. deployment use anything other than Docker.


K8s is a “datacenter operating system”, just like VMWare’s own VSphere, or Mesos, Mosix, etc. These solutions also compete for mindshare with mainframe solutions like IBM’s; and with “control planes” like OpenStack, Canonical’s Landscape, or (I think?) Microsoft’s System Center. This space is very, very profitable.

None of this applies to Docker itself. Docker is “just” a virtualization technology. Sure, Docker Swarm exists, but at this point it’s mostly used as a shimming UI for connecting the Docker client and daemon to the abstractions mentioned above, not a clustering solution in its own right. Swarm lost in the DCOS market. And the market for pure virtualization solutions isn’t anywhere near the market for DCOSes.


Doesn't K8s typically run on virtual machines? In which case it's K8s + VMs to get to the “datacenter operating system” model?


It can run on VMs but doesn't have to. There are many bare metal k8s users out there doing some really neat things with the stack.


A little OT, but is there anything remotely competitive with k8s these days? By "competitive", I mean: good feature set, thriving community, active development.

I still use Docker Swarm for small scale stuff, and am pretty happy with it - it's simple, easy to use and doesn't eat resources. But it very much feels like Docker have given up on it.

I'm particularly interested to know if there is anything simpler than k8s that's competitive?


For the same reason that no one weeps for a company that markets hammers even if it invented a new way to hold a hammer and raised lots of money because of it. We do not care about hammers, we just use them when we need to hit something. Our customers do not care about hammers either, they care about the result that we deliver.


Docker sold its soul for money at the cost of its core product. The second you take as much money as they did, so you can have your luxury box at AT&T or whatever else, I'm going to find it increasingly hard to sympathize with your future mistakes.


I regret the time and money I spent thinking about learning Docker. I'm sure containers solve somebody's problems, but it's not any problems that I have.


It's minor and maybe I'm being petty, but my sympathy for Docker ended the moment they forced you to register an account and log in to download Docker CE.


The first and only experience I had with Docker as a company was them requiring me to sign up to download their macOS client. They seem to have since changed that policy, but it really made me resent them, and made them feel pretty unfriendly.



>Kubernetes "was operation hardened internally at Google

Is this true? Isn't Kubernetes "based" on work done at Google, but also a complete rewrite?



Why cry at all? Isn't this the point of open source?

Can't the same be said for Git? Linux? Python? (That they didn't make the creator billions, and the creator is fine with that)


This article is real rich coming from a guy who works at AWS. The amount of absurd hubris and doublespeak entering this community unchecked is shocking to me.


Ugh, Google in the middle, please change the following URL:

https://www.google.com/url?sa=i&source=web&cd=&ved=0ahUKEwiS...

..to:

https://www.techrepublic.com/article/why-doesnt-anyone-weep-...

EDIT: The article itself is quite interesting, on the rise of Kubernetes, its adoption by VMWare, and the reason why Docker failed to capture market value as much as it could have.


Yes, it would be smart for HN to reject links that are google search engine tracking links.


[flagged]


To be fair, arrogance seems to be quite popular in systemd as well


Sure, but saying "I won't merge this because I don't want my software to be compatible" seems strange. Would you not accept, say, changes to your Makefile to support BSD or Windows or Linux?


It depends. Trying to avoid being compatible with Systemd would be strange. Simply not valuing it at all -- if it works, that's okay, but you won't add any Systemd-specific code -- is a different thing.


Simply not valuing it at all -- if it works, that's okay, but you won't add any Systemd-specific code -- is a different thing.

That's a fair stance to take if you're just a group of hackers hacking away on this cool open source project in your free time. It's a very bad look when you're trying to present yourself as a serious company that other serious companies can depend upon for some of their most critical infrastructure components.


I see parallels with discrimination.

If it is some obscure compatibility we are talking about, I can understand it. If it is a defacto or official standard, then no. Regardless of whether I use Systemd or not (my primary OS uses launchd...), its usage in Linux distributions is currently widespread.

Not wanting to develop compatibility yourself, I can understand. No discussion about it.

Not wanting to include compatibility PRs is a recipe for hostility, and ultimately, a fork. There's a nuance here that it could lead to a lead dev importing PRs for software they don't use or understand but other than that, it seems the way to go if you want an inclusive and gentle environment.


And that perpetually refused PR to disable connecting to the central repository every time.


[flagged]


I've seen this happen quite a lot in the Open Source and hacker world. The sort of glib bravado that goes down so well with your friends on mailing lists and at your local meetups just doesn't translate well when you're trying to convince Fortune 500 companies to give you millions of dollars.

I bet the person in question had gotten nothing but cheers and support for their anti-systemd rhetoric from their immediate peers and was shocked and surprised to find that it didn't go down just as well in the wider world.


> systemd sucks balls

No it doesn't. Shell scripts in your init system sucks balls. Systemd is great.


I am in favor of a new discussion about existential philosophical differences again. Almost missed the systemd drama.

I use systemd on pretty much every Linux machine, but am very sympathetic to the arguments of its detractors, which to a significant degree results from the behavior of its proponents and the inability to see that you actually do lose flexibility, in theory.

You could argue that GNU/Linux could be called GNU/systemd/Linux.

I couldn't write a better init system than systemd, but I like the idea that some people could. And I am pretty sure there are people like that.


> I couldn't write a better init system than systemd, but I like the idea that some people could. And I am pretty sure there are people like that.

But they did not. Functionally, systemd is great. I don't like the architecture, for its monolithic, non-Unix characteristics. I begrudgingly started to use it when Debian switched to systemd. After some time, I must admit it is leaps and bounds better than SysV init, and better than the alternatives that appeared before it.


Why did the shell scripts in my init system "suck balls"? Init scripts worked wonderfully for me; systemd solved no problems that I had. What it did was add a lot of complexity to my world for no perceptible benefit. Maybe my desktop, which I reboot twice a year, boots up 1 second faster? Maybe? Probably not though, because the long pole is probably NFS mounts or something.

I think the reality is both have their pros and cons, and it depends on your use case.


systemd wasn't necessary for init, if you ignore Poettering's binaries which broke with common interfaces (now there are more, sigh). It's worse from a maintenance standpoint, for init. It's quite sane as a daemon management system. It just happened to combine a bunch of systems into one, and now we're stuck with it.


Sigh. No, it really does, and it has nothing to do with init systems.

Systemd was never necessary. An init system just has to run a couple of steps once a system starts or stops. Bash is fine for that, or a tiny C wrapper. Look at every freaking Docker container in the world - they all use tini (https://github.com/krallin/tini), the tiniest, most dinky init tool ever made, because it's totally fine to use something small and stupid to initialize some system calls and execute a program. Once you've "started up", you can choose to use a dedicated service manager to manage various applications.
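(Docker even ships that behind a flag these days; alpine below is just an example image.)

    # --init runs Docker's bundled tini-based init as PID 1, which reaps
    # zombies and forwards signals before handing off to your process.
    docker run --rm --init alpine ps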

There's a reason we use K8s, Nomad, or Docker for container scheduling even on a single host: Systemd is not good at it, and we'd have to replace it to scale up to over 1 host anyway. We use distributed decentralized services on modern systems. Systemd isn't intended to work for modern services; it's intended to just be a "faster desktop system", iterating on what we had before, rather than being redesigned for modern workloads and applications.

Anyone who used Systemd purely because they didn't like Bash scripts had no idea how to manage a system. There were already replacements that managed services well, and you'd just install those and use them, with very minimal change to the rest of your system. I mean, if developers need to write a new microservice, they don't say, "I know, I'll use Systemd!" They say, "I know, I'll use minikube!" Because it actually gives them everything they'll need to run their application locally, and on a globally distributed decentralized fault-tolerant service coordinator. Their application can then opt-in to all of K8s' myriad customizations and complexities, but the rest of their operating system doesn't have to!!

But furthermore, as a system itself, Systemd actually just sucks. The user interfaces are totally clunky and not standard to anything else we have in Linux. The binary format makes it much more annoying to use with other tools without having to learn how the wrappers work, and managing the binary files from a systems perspective is annoying. The filesystem structure is fucking atrocious; who the fuck puts config files in /lib ??? Why do I have to re-run 3 commands just to reload a daemon after I've edited its service file?? Why do I need to now learn an entirely new DNS resolver setup that I never asked for?? Then there's the bugs and security issues and generally arrogant, dickish way Systemd works with other open source projects. It sucks balls. You can't tell me it doesn't, because I have to use the damn thing every day, and work around its dumb issues.

To people who manage systems for a living, and regular users just trying to get on the 'Net, Systemd was not some revelation from the gods because "oh no, bash scripts". Bash scripts may have been annoying, but Systemd is annoying in a whole new, more complicated way.



