
Honestly, this is such a common refrain in open-source communities.

"I'm going to use this product/project."

"You shouldn't use that, use this instead."

"But this product/project has all these features I want."

"Sure but you can just build and maintain those features yourself if you want. It's open source, after all!"

Or alternatively:

"Well no one really needs those anyway, it's just people being told that they do."

I've been hearing this same refrain for over twenty years now. It seems as though some people don't understand that there's actual value in having a system that just works, rather than one that you have to build and maintain yourself just to emulate the system that works.




So, I don't use either of these things, but I still tend to read the arguments between people who do, and that isn't what it feels like to me.

Specifically, it feels like you're missing that the people who argue against k8s are saying it is extremely difficult to configure. So instead of the clean "some people don't understand that it's nice to have a system that just works", the argument is really about a tradeoff over which is easier to put together: a single system with a million options that all seem to need configuring before it works, or a handful of systems that are each easy to understand in isolation, but where you have to do all the glue yourself.


'A system that just works' isn't a universal constant, so we have these discussions because 'just works' means different things to different people.

You're no more right or wrong here than anyone else is.


I do partially agree with the point you're making, but perhaps from a slightly different angle: I'd say that the biggest value that Kubernetes provides (or at least tries to, with varying results) is having a common set of abstractions that allow both reusing skills and treating different vendors' clusters as very similar resources.

> It seems as though some people don't understand that there's actual value in having a system that just works, rather than one that you have to build and maintain yourself just to emulate the system that works.

This, however, feels like a lie.

It might "just work" because of the efforts of hundreds of engineers in the employ of the larger cloud vendors, who handle all of the problems in the background, but that will also cost you. It might also "just work" in on-prem deployments simply because you haven't run into any problems yet... and in my experience it's not a question of "whether" but of "when".

And when these problems inevitably do surface, you have to ask yourself whether you have the capacity to debug and mitigate them. With K3s clusters, or another lightweight Kubernetes distro, that might be doable. But if you're dealing with a large distro like RKE, or your own custom one, you might have to debug a complex, fragmented, distributed system to get to the root cause and then figure out how to solve it.

There's definitely a reason why so many DevOps specialists with Kubernetes credentials are so well paid right now. There's also a reason why many of the enterprise Kubernetes deployments I've seen are essentially black holes of man-hours, much like implementing Apache Kafka when something like RabbitMQ might have sufficed: sometimes it's chasing hype and CV-driven development, other times people simply didn't know enough about the other options out there. And there's a reason why Docker Swarm is still alive and kicking despite how its maintenance and development have been mismanaged, and why HashiCorp Nomad is liked by many.

In short:

The people who will try to build their own complex systems will most likely fail and will build something undocumented, unmaintainable, buggy and unstable.

The people who try to adapt technologies better suited to large-scale deployments for their own needs might find that the complexity grinds their velocity to a halt and wastes huge amounts of effort on maintaining the solution.

The people who want someone else to deal with these complexities and just want the benefits of the tech will probably have to pay appropriately for it, which is sometimes a good deal and sometimes a bad one, and also means vendor lock-in.

The people who reevaluate what they're trying to do, and how much they actually need to do, might settle for a lightweight solution that does what's necessary and not much more, which tends to lead to vaguely positive outcomes.


> the biggest value that Kubernetes provides is having a common set of abstractions that allow both reusing skills and treating different vendors' clusters as very similar resources

This is the real value of Kubernetes. I'm at my fourth Kubernetes shop, and the deployments, services, ingresses, and pod specs all look largely the same. From an operations standpoint the actual production system looks the same; the only differences between companies are how deployments are updated, what triggers the container build, and how fast they're growing. Giant development/engineering systems went from very specialized, highly complex setups to a mundane, even boring part of the company. Infrastructure should be boring, and it should be easy to hire for these roles. That's what Kubernetes does.
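
As a rough illustration (the app name and image below are made up), the kind of manifest that looks the same from shop to shop is just:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web                      # hypothetical app
  spec:
    replicas: 2
    selector:
      matchLabels: {app: web}
    template:
      metadata:
        labels: {app: web}
      spec:
        containers:
        - name: web
          image: example/web:1.0   # hypothetical image
          ports:
          - containerPort: 8080
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: web
  spec:
    selector: {app: web}
    ports:
    - port: 80
      targetPort: 8080

Whether that lands on EKS, GKE, AKS, or a bare-metal cluster, the spec reads the same; only the surrounding build triggers and deployment tooling differ.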

Oh, and it scales too, but I'd say 95% of companies running their workloads on k8s don't need the scaling it is capable of.


Yeah, I've heard the endless "Kubernetes is too complicated" arguments, but at a glance it's a container deployment/management service that the big three cloud providers all support, which makes it close enough to a standard.


The issues with large clusters you've surfaced are ones I've never once experienced in my homelab. I'm already an experienced DevOps (?) engineer, so the transition to k8s only took a few weeks and a couple of sleepless nights, which will not be everyone's experience, especially a beginner's. But it's a cohesive system, and once you learn it you understand the various patterns and can make it work for you, rather than it sucking your time into it. For me, learning k8s was far easier than keeping track of the various little scripts to do this or that in a Swarm cluster.

Folks here are making sound arguments for Swarm and the alternatives, and they are totally valid. But as an experienced engineer who needed to start their homelab back up after a long hiatus, I will never look back on choosing k8s. Honestly, it's some of the most fun I've had with computers in a long time, and great experience for when I move into a k8s-related $job.


> the transition to k8s only took a few weeks and a couple of sleepless nights

Only?

I have had a home server for fifteen years and used to have a homemade router. That's significantly more time than I have ever spent on them. I don't even bother maintaining scripts for my home installation. I have reinstalled it once in the past decade, and it took me significantly less time than writing said scripts would have. There were enough things I wanted to do differently that any scripts would have been useless by that point anyway.

At that point, it sounds more like a hobby than something saving you time.


I guess I never considered self-hosting as anything other than a hobby? Web apps with modern stacks are usually pretty complex, so it's hard for me to imagine someone just casually jumping in as a non-hobby and not running into big problems at some point. Or getting hacked because you didn't bother, or forgot, to secure something.


> Web apps with modern stacks are usually pretty complex, so it's hard for me to imagine someone just casually jumping in as a non-hobby and not running into big problems at some point. Or getting hacked because you didn't bother, or forgot, to secure something.

That's not my experience at all. Plenty of web apps for self-hosting are pretty simple. You just install the package, read the config files, turn on and off what you need, and you're good to go. Since systemd made it easy, most apps now come preconfigured to use a dedicated user and drop the capabilities they don't need. If they don't, you're two lines away from doing it in their service file.

Then I just throw Caddy in front of everything I want to access from outside my local network and call it a day.


That's great! Did you use a Kubernetes distribution that's provided by the OS, like MicroK8s ( https://microk8s.io/ )?

Or perhaps one of the lightweight ones, like K3s ( https://k3s.io/ ) or k0s ( https://k0sproject.io/ )?

Maybe a turnkey one, like RKE ( https://rancher.com/products/rke/ )?

Or did you use something like kubespray ( https://kubespray.io/#/ ) to create a cluster in a semi automated fashion?

Alternatively, perhaps you built your cluster from scratch or maybe used kubeadm ( https://kubernetes.io/docs/setup/production-environment/tool... ) and simply got lucky?

Out of all of those, my experiences have only been positive with K3s (though I haven't tried k0s) and RKE, especially when using the latter with Rancher ( https://rancher.com/ ) or the former with Portainer ( https://www.portainer.io/solutions ).

In most of the other cases, I've run into a variety of problems:

  - different networking implementations that work inconsistently ( https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy )
  - general problems with networking and communication between nodes, with certain node components refusing to talk to others, leaving those nodes unable to schedule anything
  - large CPU/RAM usage even for small clusters, which is unfortunate, given that I'm too poor to afford lots of either
  - no overprovisioning support (when I last tried), leading to pods not being scheduled on nodes with plenty of free resources, due to unnecessary reservations
  - problems with PVCs and storing data in host directories, compared to how painless Docker's bind mounts are (see the hostPath sketch below)
  - they also recently dropped the Docker shim, so now you have to use containerd, which is unfortunate from a debugging perspective (a lot of people out there actually like the Docker CLI)

If I could afford proper server hardware instead of 200GEs with value RAM, then I bet the situation would be different, but in my experience this is representative of how things would run on typical small VPSes (should you self-host the control plane), which doesn't inspire confidence.
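
For what it's worth, the closest Kubernetes analogue to a Docker bind mount is a plain hostPath volume; a minimal sketch (the names and paths below are made up, and hostPath has its own caveats once you have more than one node):

  apiVersion: v1
  kind: Pod
  metadata:
    name: media                    # hypothetical
  spec:
    containers:
    - name: app
      image: example/media:1.0     # hypothetical image
      volumeMounts:
      - name: data
        mountPath: /data
    volumes:
    - name: data
      hostPath:
        path: /srv/media           # host directory, much like a bind mount
        type: Directory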

That said, when I compared the RAM usage of Docker Swarm with that of K3s, the picture was far better! My current approach, for clients and companies that don't use the cloud vendors, is to introduce them to containers in the following progression: Docker --> Docker Compose --> Docker Swarm --> (check whether there's a need for more functionality) --> K3s.
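
As a sketch of why that progression is cheap to walk (the service name and image below are hypothetical), the same Compose file runs with docker compose up on a single machine and with docker stack deploy on a Swarm cluster, where the deploy: section starts to matter:

  # docker-compose.yml
  version: "3.8"
  services:
    web:
      image: example/web:1.0    # hypothetical image
      ports:
        - "80:8080"
      deploy:                   # used by docker stack deploy on Swarm
        replicas: 2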

The beautiful thing is that, because of the OCI standard, you can choose where and how to run your app, and use tools like Kompose to make migration very easy: https://kompose.io/



