That’s great if you have control over all the moving parts, but a lot of real-world (i.e. not exclusively software-based) orgs have interfaces to components and external entities that aren’t so amenable to change. Maybe you can upgrade your cluster without anybody noticing. Maybe you’re a wizard and upgrades never go wrong for you.
More likely, you will be constrained by a patchwork quilt of contracts with customers and suppliers, or by regulatory and statutory requirements, or technology risk policies, and to get approval you’ll need to schedule end-to-end testing with a bunch of stakeholders whose incentives aren’t necessarily aligned to yours.
That all adds up to $$$, and that’s why there’s a demand for stability and LTS editions.
@ji6 I have to tell you, we have been using Kubernetes since the very earliest versions.
If you automate everything you will never have an issue with K8s: you can deploy all the required dependencies in one go and run the tests. Done correctly, this literally takes 10 minutes.
To me, this argument for an LTS version is hot air. It's the nuclear-reactor kind of argument: we had the money to buy the uranium to burn, but now we can't afford to dispose of it properly.
Kubernetes is about flexibility, not about big companies that want to keep using their software development processes from the '90s...
I'm sorry, this is simply not true. K8s introduces breaking changes relatively often. They are announced well in advance, which is very nice, but the solution will not take 10 minutes. Take Pod Security Policies and the migration to Kyverno or OPA Gatekeeper: even if you have a relatively simple cluster and choose to stick with defaults rather than write your own rules, it still takes time to understand the new system, choose the policies you need, and update and test everything. In a complex cluster this takes weeks, and in some cases months, especially if you are legally obliged to follow change management.
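Even scoping that migration is real work. As a rough sketch (assuming the official `kubernetes` Python client and a kubeconfig with read access; field names follow the client's snake_case models), this is the kind of inventory script you end up writing just to see which workloads the new policies would touch:

```python
# Rough sketch: list containers the old PodSecurityPolicies were guarding,
# to estimate how much rule-writing a Kyverno / Gatekeeper migration needs.
from kubernetes import client, config


def flag_risky_pods():
    config.load_kube_config()  # or config.load_incluster_config() inside the cluster
    core = client.CoreV1Api()
    risky = []
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        for c in (pod.spec.containers or []):
            sc = c.security_context
            if sc is None:
                # No explicit securityContext: the new policy engine's defaults decide.
                risky.append((pod.metadata.namespace, pod.metadata.name, c.name, "no securityContext"))
            elif sc.privileged or sc.allow_privilege_escalation:
                risky.append((pod.metadata.namespace, pod.metadata.name, c.name, "privileged/escalation"))
    return risky


if __name__ == "__main__":
    for ns, pod, container, reason in flag_risky_pods():
        print(f"{ns}/{pod} [{container}]: {reason}")
```

And that is only the inventory step, before anyone has written, reviewed, or tested a single replacement policy.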
I'm curious, help me understand where the breakdown happens. Abstracting a few layers away, can we assume that your system takes input, does something, and produces output? If so, can we also say that the inputs are generated by systems you don't control that might not play nice with updates within your system, and likewise that changing your outputs might break something downstream?
If all that holds, my question becomes: why doesn't your system have a preprocessing and postprocessing layer for compatibility? Stuff like that is easier than ever to build, and it would let your components grow with the ecosystem.
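To make the kind of shim I mean concrete, here's a minimal sketch; the legacy/new field names are invented for illustration only:

```python
# Minimal sketch of a compatibility shim around a core system.
# The "legacy" and "new" schemas here are hypothetical.
from dataclasses import dataclass


@dataclass
class Order:
    """Canonical shape the core system actually works with."""
    order_id: str
    amount_cents: int


def preprocess(raw: dict) -> Order:
    # Accept both the old upstream schema ("id", "amount" in dollars)
    # and the new one ("order_id", "amount_cents"), normalising to one shape.
    if "order_id" in raw:
        return Order(raw["order_id"], int(raw["amount_cents"]))
    return Order(raw["id"], int(round(float(raw["amount"]) * 100)))


def postprocess(order: Order, legacy_consumer: bool) -> dict:
    # Emit whichever schema the downstream consumer still expects.
    if legacy_consumer:
        return {"id": order.order_id, "amount": order.amount_cents / 100}
    return {"order_id": order.order_id, "amount_cents": order.amount_cents}


if __name__ == "__main__":
    incoming = {"id": "A-17", "amount": "12.50"}      # old upstream format
    order = preprocess(incoming)
    print(postprocess(order, legacy_consumer=True))   # {'id': 'A-17', 'amount': 12.5}
```

The shims absorb schema churn at the edges, so the core can track the ecosystem instead of being frozen by its neighbours.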
It’s all about risk. If you have a simple enough system, you might be able to hide it behind an abstraction layer that adequately contains the possible effects of change.
But many interesting, useful real-world systems are difficult to contain within a perfect black box. Abstractions are leaky: an API gateway, for example, cannot hide increased latency in the backend.
People accountable for technology have learned, through years of pain, not to trust changes that claim to be purely technical with no possible impact on business functionality. Hence testing, approval, and cost.