I'd be curious what a better alternative looks like.
I'm a huge fan of keeping things simple (vertically scaling one server with Docker Compose and only scaling horizontally when necessary), but having learned and used Kubernetes recently for a project, I think it's pretty good.
I haven't come across many other tools that are so well thought out while also guiding you in how to break down the components of "deploying".
The concepts of a pod, deployment, service, ingress, job, etc. are super well thought out and flexible enough to let you deploy many types of things, and the abstractions are good enough that you can hide a ton of complexity once you've learned the fundamentals.
For example, you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart. That's complete with running DB migrations in a sane way, updating public DNS records, SSL certs, CI/CD, live-preview pull requests that get deployed to a subdomain, zero downtime deployments and more.
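To make the claim concrete, here's roughly what such a per-app values file can look like. Every key below is hypothetical, sketching the kind of schema a library chart like that might expose rather than any real chart:

```yaml
# Hypothetical values.yaml for one app. Every key name here is made up to
# illustrate the ~15 line claim; a real library chart defines its own schema.
replicaCount: 3
image:
  repository: registry.example.com/myapp   # placeholder image
  tag: "1.4.2"
ingress:
  host: myapp.example.com                  # DNS record and cert follow from this
resources:
  limits:
    cpu: 500m
    memory: 256Mi
probes:
  path: /healthz
migrations:
  command: ["bin/rails", "db:migrate"]     # whatever your stack uses
```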
> once you set up a decently tricked out Helm chart
I don't disagree but this condition is doing a hell of a lot of work.
To be fair, you don't need to do much to run a service on a toy k8s project. It just gets complicated when you layer on all the production-grade stuff: load balancers, service meshes, access control, CI pipelines, o11y, etc.
> To be fair, you don't need to do much to run a service on a toy k8s project.
The previous reply is based on a multi-service, production-grade workload. Setting up a load balancer wasn't bad. Most cloud providers that offer managed Kubernetes make it pretty painless to get their load balancer set up and working with Kubernetes. On EKS with AWS, that meant using the AWS Load Balancer Controller and adding a few annotations. That includes HTTP to HTTPS redirects, www to apex domain redirects, etc. On AWS it took a few hours to get it all working, complete with ACM (SSL certificate manager) integration.
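For a sense of what "a few annotations" means, a trimmed Ingress for the AWS Load Balancer Controller looks something like this. The host name and certificate ARN are placeholders; the annotation keys are the controller's real ones:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                                                    # placeholder
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"                # HTTP -> HTTPS
    alb.ingress.kubernetes.io/certificate-arn: "arn:aws:acm:..." # your ACM cert ARN
spec:
  ingressClassName: alb
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
```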
The cool thing is when I spin up a local cluster on my dev box, I can use the nginx ingress instead and everything works the same with no code changes. Just a few Helm YAML config values.
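In chart terms that swap can be as small as one value per environment, assuming the chart exposes the ingress class as a value (key name is illustrative) and renders the matching annotations:

```yaml
# Hypothetical per-environment override for the library chart.
ingress:
  className: nginx   # local dev cluster (ingress-nginx)
  # className: alb   # EKS (AWS Load Balancer Controller)
```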
Maybe I dodged a bullet by starting with Kubernetes so late. I imagine 2-3 years ago would have been a completely different world. That's also why I haven't bothered to look into using Kubernetes until recently.
> I don't disagree but this condition is doing a hell of a lot of work.
It was kind of a lot of work to get here, but it wasn't anything too crazy. It took ~160 hours to go from never having used Kubernetes to getting most of the way there. This also includes writing a lot of ancillary documentation and wiki-style posts to get some of the research and ideas out of my head and onto paper so others can reference them.
Only if you've never seen it before. The word "accessibility" is incredibly inaccessible to non-native speakers and native speakers with learning disabilities or dyslexia. There are some double characters in there, but which ones? Also, it sounds like there's an "a" or "uh" sound in there, but somehow it's all "i"s except one is an "e"? "a11y" is four letters (well, two of them are digits, but who's counting?) and clearly refers to one particular concept.
Likewise "i18n" (internationalization/internationalisation) and "l10n" (localization/localisation) avoids confusion of whether it's "ize" or "ise", which is literally the problem those concepts try to solve.
I can somewhat excuse "k8s" with "nobody can remember how kubernetes is spelled, let alone pronounced" (Germans insist on pronouncing the "kuber" part the same way "kyber/cyber" is pronounced in other Greek loanwords, with a German "ü" umlaut), but I admit that one is a stretch, and "visual puns" like "k0s" ("minimal", you see?) and "k3s" (the digit 3 looks like half of an 8, so it's "lightweight", right?) are a bit beyond the pale for me.
You specifically called it out as being "inaccessible" (i.e., difficult to understand) to non-native speakers (of English).
Also, "a11y" looks too much like the English word "ally". That, IMO, is more likely to cause reading difficulties, particularly with non-native speakers and people with dyslexia.
Thanks, that was actually a wildly misleading typo haha. I meant to write "sane" way and have updated my previous comment.
For saFeness, it's still on us as developers to do the dance of making our migrations and code changes compatible with running both the old and new versions of our app.
But for saNeness, Kubernetes has some neat constructs to help ensure your migrations only get run once even if you have 20 copies of your app performing a rolling restart. You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete. This translates to only 1 pod ever running the migration while other pods hang tight until it finishes.
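As a sketch of that pattern (all names and the image are placeholders, and it assumes the pod's service account has RBAC access to Jobs):

```yaml
# Rough sketch of the Job + initContainer pattern described above.
# `kubectl apply` is idempotent, so 20 pods racing through this init
# container still produce exactly one Job.
initContainers:
  - name: run-migrations
    image: bitnami/kubectl:1.29            # any image with kubectl works
    command:
      - sh
      - -c
      - |
        kubectl apply -f /manifests/migrate-job.yaml
        # Every pod blocks here; only the Job's single pod runs the migration.
        kubectl wait --for=condition=complete job/myapp-migrate --timeout=300s
```

A nice side effect: `kubectl wait` exits non-zero on timeout, which fails the init container and stalls the rollout instead of starting app pods against a half-migrated schema.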
I'm not a grizzled Kubernetes veteran here but the above pattern seems to work in practice in a pretty robust way. If anyone has any better solutions please reply here with how you're doing this.
Hahaha, OK, I figured you didn't mean what I hoped you meant, or I'd have heard a lot more about that already. That still reads like it's pretty handy, but way less "holy crap my entire world just changed".
> You can define your migration in a Kubernetes job and then have an initContainer trigger the job while also using kubectl to watch the job's status to see if it's complete.
A much simpler way is to run the migration in the init container itself. Most SQL migration frameworks know about locks and transactions, so concurrent migrations won't run anyway.
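As a sketch, with the image, command and secret name all placeholders (this leans entirely on the framework's own locking; Flyway and Rails, for example, take a database-level lock before applying migrations):

```yaml
# Every replica runs this init container; the migration tool's own DB lock
# serializes them, so concurrent replicas just wait their turn.
initContainers:
  - name: migrate
    image: registry.example.com/myapp:1.4.2     # hypothetical app image
    command: ["bin/rails", "db:migrate"]        # or `flyway migrate`, etc.
    envFrom:
      - secretRef:
          name: myapp-db-credentials            # hypothetical DB credentials
```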
I think the value in the init+job+watcher approach is that you don't need to depend on a framework being smart enough to lock things, which makes it suitable and safe to run with any tech stack, worry free. It also avoids potential edge cases if a framework's locking mechanism fails, and an edge case in this scenario could be really bad.
It does come at the cost of a little more complexity (a 30-line YAML Job and then ClusterRole/ClusterRoleBinding resources for RBAC on the watcher), but fortunately that's a one-time setup.
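For reference, the RBAC half is roughly this shape (names are placeholders; a namespaced Role/RoleBinding works too if the Job lives in the app's own namespace):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: migration-runner
rules:
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create"]   # create the Job, then watch it
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: migration-runner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: migration-runner
subjects:
  - kind: ServiceAccount
    name: myapp                                 # the app pods' service account
    namespace: default
```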
It's simpler than that for simple scenarios. `kubectl run` can set you up with a standard deployment + service. Then you can describe the resulting objects, save the YAML, and adapt/reuse it as you need.
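(One caveat: since kubectl v1.18, `kubectl run` only creates a bare Pod, so the equivalent today is `kubectl create deployment` plus `kubectl expose deployment`.) The saved-and-trimmed YAML from that workflow ends up looking roughly like this, with placeholder names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web                 # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.27       # any stateless image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web
  ports:
    - port: 80
      targetPort: 80
```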
> For example, you can write about 15 lines of straightforward YAML configuration to deploy any type of stateless web app once you set up a decently tricked out Helm chart.
I understand you might outsource the Helm chart creation, but this sounds like it's oversimplifying a lot to me. But maybe I'm spoiled by running infra/software in a tricky production context and I'm too cynical.
It's not too oversimplified. I have a library chart that's optimized for running a web app. Then each web app uses that library chart. Each chart has reasonable default values that likely won't have to change, so you're left changing only the options that vary per app.
That's values like the number of replicas, which Docker image to pull, resource limits and a couple of timeout-related values (probes, database migration, etc.). Before you know it, you're at 15-ish lines of really straightforward configuration like `replicaCount: 3`.
It's just not finished yet. With < 0.01% of the funding Kubernetes has, it has many times more design and elegance. Help us out. Have a look and tell me what you think. =D
My two cents is that Docker Compose is an order of magnitude simpler to troubleshoot or understand than Kubernetes, but the problem that Kubernetes solves is not that much more difficult.