If you can literally pick up and shift to another cloud provider just by moving Kubernetes somewhere else, you are spending mountains of engineering time reinventing a bunch of different wheels.
Are you saying you don't use any of your cloud vendor's supporting services, like CloudWatch, EFS, S3, DynamoDB, Lambda, SQS, SNS?
If you're running on plain EC2 and have any kind of sane build process, moving your compute stuff is the easy part. It's all of the surrounding crap that is a giant pain (the aforementioned services + whatever security policies you have around those).
I use MongoDB instead of DynamoDB, and Kafka instead of SQS. I use S3 (or rather the Google equivalent, Cloud Storage, since I am on their cloud) through Kubernetes abstractions. In some rare cases I do use a cloud vendor's supporting service, but I build a microservice on top of it: my application runs on Google Cloud, and yet I use Amazon SES (Simple Email Service) by running a small microservice on AWS.
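For illustration, that kind of wrapper can be tiny. Here is a minimal sketch in Python using boto3 and the stdlib HTTP server; the endpoint, payload shape, port, and region are my own assumptions, not the actual service described above:

```python
# Hypothetical SES-wrapper microservice (sketch only; auth, validation,
# and error handling omitted). It runs on AWS next to SES, so the app
# on another cloud just calls it over HTTP instead of using AWS SDKs.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

import boto3  # pip install boto3

ses = boto3.client("ses", region_name="us-east-1")  # region is an assumption

class EmailHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expects a JSON body like:
        # {"from": "...", "to": ["..."], "subject": "...", "text": "..."}
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        ses.send_email(
            Source=body["from"],
            Destination={"ToAddresses": body["to"]},
            Message={
                "Subject": {"Data": body["subject"]},
                "Body": {"Text": {"Data": body["text"]}},
            },
        )
        self.send_response(202)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), EmailHandler).serve_forever()
```

The point is that the cross-cloud dependency shrinks to one small, replaceable HTTP service.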
Sure, you can use those things. But now you also have to maintain them. That costs time, and time is money. If you don't have the expertise to administer those things effectively, it may not be a worthwhile investment.
Everyone's situation is different, of course, but there is a reason that cloud providers have these supporting services and there is a reason people use them.
In my experience it is less work than keeping up with a cloud provider's changes [1]. You can stay on one version of Kafka for 10 years if it meets your requirements. When you use a cloud provider's equivalent service, you have to keep up with their changes, price increases, and obsolescence. You are at their mercy. I am not saying it is always better to set up your own equivalent using OSS, but it makes sense for a lot of things. Kafka works well for me, for example, and I wouldn't use Amazon SQS instead, but I do use Amazon SES for email.
While in general I agree with your overall argument, when it comes to:
> cloud provider's equivalent service you have to keep up with their changes, price increases and obsolescence
AWS S3 and SQS have both gone down significantly in price over the last 10 years and code written 10 years ago still works today with zero changes. I know because I have some code running on a Raspberry Pi today that uses an S3 bucket I created in 2009 and haven't changed since*.
(of course I wasn't using an rPi back then, but I moved the code from one machine to the next over the years)
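For context, the surface area involved is tiny. A present-day sketch (boto3 rather than whatever SDK existed in 2009; the bucket and key are made up) of the kind of call that has stayed backward compatible:

```python
# Sketch: the sort of S3 read that has kept working across SDK
# generations. Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-2009-bucket", Key="state/latest.json")
print(obj["Body"].read().decode("utf-8"))
```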
But "keeping up with changes" applies just as much to Kubernetes, and I would argue it's even more dangerous because an upgrade potentially impacts every service in your cluster.
I build AMIs for most things on EC2. That interface never breaks. There is exactly one service on which provisioning depends: S3. All of the code (generally via Docker images), required packages, etc. are baked in, and configuration is passed in via user data.
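Concretely, the last step of that pattern can be as small as this. A sketch of a first-boot script baked into the AMI, assuming by convention that the user data is a JSON blob (the config keys are hypothetical; production setups should prefer IMDSv2 tokens over the bare endpoint shown here):

```python
# Sketch: first-boot script that pulls configuration from EC2 user data
# via the instance metadata service. IMDSv1 shown for brevity.
import json
import urllib.request

USER_DATA_URL = "http://169.254.169.254/latest/user-data"

with urllib.request.urlopen(USER_DATA_URL, timeout=2) as resp:
    config = json.loads(resp.read())  # assumes user data is JSON (our convention)

# Hypothetical keys; whatever the baked-in service expects.
image_tag = config["image_tag"]
environment = config["environment"]
print(f"starting {image_tag} in {environment}")
```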
EC2 is what I like to call a "foundational" service. If you're using EC2 and it breaks, you wouldn't have been saved by using EKS or Lambda instead, because those use EC2 somewhere underneath.
Re: services like SQS, we could choose to roll our own, but it hasn't really been an issue for us so far. The only thing we've been "forced" to move on is Lambda, which we use where appropriate. In those cases, the benefits outweigh the drawbacks.
Given that life is finite and you want to accomplish some objective with your company (and it's not training DevOps professionals), it's quite valuable to be able to outsource a big part of the problems that need to be solved to get there.
Given this perspective, it's much better to use managed services. It lets you focus on the code (and maintenance) specific to your problem.
And don't you have specific YAML for AWS LB configuration options and stuff? The concepts in the different cloud providers are different. I can't imagine it's possible to be portable without some jQuery-type layer expressing concepts you can use that are built out of the native ones. But I'd bet the different browsers were more similar in 2005 than the different cloud providers are in 2021.
Sure, there is configuration that goes into using your cloud provider's "infrastructure primitives". My point is that Kubernetes is often using those anyway, and if you don't understand them you're unprepared to respond when your cloud provider has an issue.
In terms of the effort to deploy something new, for my organization it's low. We have a Terraform module that creates the infrastructure, glues the pieces together, tags everything, and makes sure it's all configured uniformly. You specify some basic parameters for your deployment and you're off to the races.
We don't need to add yet more complexity with Kubernetes-specific cost-tracking software; AWS does it for us automatically. We don't have to care about how pods are sized or how those pods might or might not fit on nodes. Autoscaling gives us consistently sized EC2 instances that, in my experience, have never run into noisy-neighbor issues. Most importantly of all, I don't have upgrade anxiety, because I don't have a ton of services stacked on one Kubernetes cluster that may all suffer if an upgrade does not go well.