I find this statement to be technically correct, but practically untrue. Having worked in large terraform deployments using TFE, it's very easy for a resource to get deleted by mistake.

Terraform's provider model is fundamentally broken. You cannot spin up a k8s cluster and then use the Kubernetes provider in the same workspace to configure it; you need a separate workspace that imports the first one's outputs. The net result was we had something like 5 workspaces that really should have been one or two.
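To illustrate the failure mode, a minimal sketch (the EKS names are purely illustrative): the provider block is configured from a resource's attributes, which don't exist until after the first apply, so a single-workspace plan can't reliably resolve it.

    resource "aws_eks_cluster" "this" {
      name     = "example"
      role_arn = var.cluster_role_arn

      vpc_config {
        subnet_ids = var.subnet_ids
      }
    }

    # Configuring a provider from another resource's attributes is the
    # pattern that breaks down: at plan time these values are unknown.
    provider "kubernetes" {
      host                   = aws_eks_cluster.this.endpoint
      cluster_ca_certificate = base64decode(aws_eks_cluster.this.certificate_authority[0].data)
    }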

A seemingly inconsequential change in one of the upstream workspaces could absolutely wreck the resources in the downstream workspaces.

It's very easy in such a scenario to trigger a delete-and-replace, and for larger changes you have to inspect the plan very, very carefully. The other pain point was that most of my colleagues would go "IDK, this is what worked in non-prod" while plans were actively destroying and recreating things; as long as the plan looked like it would execute and create whatever little thing they were working on, the downstream consequences didn't matter (I realize this is not a shortcoming of the tool itself).




This sounds like an operational issue and/or a lack of expertise with Terraform. I use Terraform (self-hosted, I guess you'd call it?) to manage not only Kubernetes clusters but Helm deployments, without the issues you are describing. Honest feedback: I see complaints a lot like this in consulting, where people expect Terraform to magically fix their terrible infrastructure and automation decisions. It can't, but it absolutely gives you the tooling to avoid what I think you are describing.

It's fair to complain that Terraform requires weird areas of expertise that aren't intuitive and come with a bit of a learning curve, but it's not really fair to complain that it should prevent bad practices and inexperience from causing the issues they typically cause.
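As one concrete example of that tooling (my example, not necessarily what the parent had in mind): a lifecycle guard that makes any plan that would destroy a critical resource fail outright.

    # A minimal sketch: prevent_destroy makes "terraform apply" error out
    # on any plan that would delete this resource, forcing someone to
    # deliberately remove the guard first.
    resource "aws_eks_cluster" "this" {
      name     = "example"
      role_arn = var.cluster_role_arn

      vpc_config {
        subnet_ids = var.subnet_ids
      }

      lifecycle {
        prevent_destroy = true
      }
    }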


Terraform explicitly recommends in the Kubernetes provider documentation that the cluster creation itself and everything else related to Kubernetes should live in separate states.

https://registry.terraform.io/providers/hashicorp/kubernetes...

> The most reliable way to configure the Kubernetes provider is to ensure that the cluster itself and the Kubernetes provider resources can be managed with separate apply operations. Data-sources can be used to convey values between the two stages as needed.
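A hedged sketch of what that two-stage pattern can look like on EKS (the names are illustrative, not from the docs page): stage one creates the cluster; stage two, applied separately, looks it up with data sources and configures the provider from them.

    # Stage two: a separate root module, applied after the cluster exists.
    data "aws_eks_cluster" "this" {
      name = "example"
    }

    data "aws_eks_cluster_auth" "this" {
      name = "example"
    }

    # Provider config comes from data sources, so it is known at plan time.
    provider "kubernetes" {
      host                   = data.aws_eks_cluster.this.endpoint
      cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
      token                  = data.aws_eks_cluster_auth.this.token
    }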


I agree with you (this is something that OpenTofu is trying to fix), but the way I do k8s provisioning in Terraform is to have one module that brings up the cluster, another that outputs the cluster's kubeconfig, and finally another that uses the kubeconfig to provision Kubernetes resources. It's not perfect, but it gets the job done most of the time.
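A minimal sketch of that handoff, assuming an S3 backend and an output named kubeconfig_path (both hypothetical): the consumer reads the cluster stage's outputs via terraform_remote_state.

    # Consumer workspace: read the cluster workspace's outputs.
    data "terraform_remote_state" "cluster" {
      backend = "s3"
      config = {
        bucket = "example-tf-state"            # hypothetical bucket
        key    = "cluster/terraform.tfstate"   # hypothetical key
        region = "us-east-1"
      }
    }

    # Point the provider at the kubeconfig file the cluster stage wrote out.
    provider "kubernetes" {
      config_path = data.terraform_remote_state.cluster.outputs.kubeconfig_path
    }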


This is best practice. I couldn't imagine doing it any other way and would flatly refuse.

There are shortcomings in the Kubernetes provider as well that make maintaining all of that in one state file a nonstarter for me.



