
Interesting that you say you worry about re-creating the cluster from scratch because I've experienced exactly the opposite. Our EKS cluster required so many operations outside CloudFormation to configure access control, add-ons, metrics server, ENABLE_PREFIX_DELEGATION, ENABLE_POD_ENI... It would be a huge risk to rebuild the EKS cluster. And applications hosted there are not independent because of these factors. It makes me very anxious working on the EKS cluster. Yes you can pay an extra $70/month to have a dev cluster, but it will never be equal to prod.
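To illustrate the kind of out-of-band steps I mean (these are the standard AWS-documented commands; the cluster and role names are made up):

```shell
# Enable prefix delegation and pod ENIs on the VPC CNI daemonset --
# done with kubectl against the live cluster, not in CloudFormation
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
kubectl set env daemonset aws-node -n kube-system ENABLE_POD_ENI=true

# Grant an IAM role access via the aws-auth ConfigMap (illustrative ARN)
eksctl create iamidentitymapping --cluster my-cluster \
  --arn arn:aws:iam::123456789012:role/dev-role \
  --username dev-role --group system:masters
```

None of this state is captured in the stack, which is why a rebuild is risky.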

On the other hand, I was able to spin up an entire ECS cluster in a few minutes' time with no manual operations, entirely within CloudFormation. ECS costs nothing extra, so creating multiple clusters is very reasonable, though separate clusters would reduce packing efficiency. The applications can be fully independent.
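For comparison, a bare ECS cluster really is a single CloudFormation resource, so the whole thing can live in the stack (minimal sketch; names are illustrative):

```shell
# Write a minimal template and deploy it -- no post-deploy steps needed
cat > ecs-cluster.yaml <<'EOF'
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
    Properties:
      ClusterName: demo-cluster
EOF
aws cloudformation deploy --stack-name demo-ecs --template-file ecs-cluster.yaml
```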

> ECS has weird limits on how many containers you can run on one instance

Interesting. For ECS, the documented awsvpc task limit on a c5.large is 2 without ENI trunking and 10 with it.

With EKS

    $ ./max-pods-calculator.sh --instance-type c5.large --cni-version 1.12.6
    29
    $ ./max-pods-calculator.sh --instance-type c5.large --cni-version 1.12.6 --cni-prefix-delegation-enabled
    110
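For what it's worth, the ECS side of that comparison is gated on an opt-in account setting; once enabled, newly launched c5.large instances get the trunked limit:

```shell
# Enable ENI trunking as the account-wide default for ECS
aws ecs put-account-setting-default --name awsvpcTrunking --value enabled
```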


In ECS I had to recreate the cluster from scratch because CDK/CloudFormation couldn't apply some of the changes I wanted.

My approach on Azure has been to rely as little as possible on their infra-as-code, and to do everything I can to set up the cluster using K8s-native tooling. So add-ons, RBAC, metrics, all of it I'd try to handle with Helm. That way, if I ever need to change K8s providers, it "should" be easy.
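As a concrete example of that approach, a component like metrics-server can come from its upstream Helm chart rather than a cloud-provider add-on, so the same commands work on any conformant cluster:

```shell
# Install metrics-server from its upstream chart instead of an AKS/EKS add-on
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system
```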



