
It won't - for any single given service, running on a dedicated system can probably squeeze out a tiny bit more performance than running on a shared system with containers, for whatever fraction of the time you're close to maxing out that server's capacity.

But in practice (at Google-scale, anyway), that's dwarfed by the efficiency gains you can get by squeezing lots of things on to the same machine and increasing the overall utilization of the machine. Prior to adding kernel containers to Borg to allow proper resource isolation between the different jobs on a machine, the per-machine utilization was really embarrassingly low.

Another point to consider is that not all jobs are shaped the same as the machines. Some jobs need proportionally more memory (so if you size a set of dedicated machines to the total memory needed, there will be lots of wasted CPU), and other jobs use a lot more CPU and less memory (so if you size dedicated machines to the total CPU needed, there will be lots of wasted memory).

By breaking each job up into a greater number of smaller instances and bin-packing them onto each machine, you could take advantage of the different resource shapes of different jobs to get better overall utilization.
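The resource-shape argument above can be sketched with a toy first-fit packer. All machine sizes, job shapes, and names here are invented for illustration; this is not Borg's actual algorithm:

```python
# Toy illustration: first-fit bin-packing of small job instances with
# different CPU/memory shapes onto identical machines.

MACHINE = (16, 64)  # (cores, GiB) per machine -- hypothetical shape

def first_fit(instances, capacity=MACHINE):
    """Place (cpu, mem) instances on machines, opening a new one on demand."""
    machines = []  # list of [free_cpu, free_mem]
    for cpu, mem in instances:
        for m in machines:
            if m[0] >= cpu and m[1] >= mem:  # first machine it fits on
                m[0] -= cpu
                m[1] -= mem
                break
        else:
            machines.append([capacity[0] - cpu, capacity[1] - mem])
    return len(machines)

cpu_heavy = [(4, 2)] * 8    # 32 cores, 16 GiB total
mem_heavy = [(1, 14)] * 8   # 8 cores, 112 GiB total

# Dedicated machines per job: each job's instances packed separately.
dedicated = first_fit(cpu_heavy) + first_fit(mem_heavy)

# Interleave the two shapes so CPU-hungry and memory-hungry instances
# can share machines and fill both dimensions at once.
mixed = first_fit([inst for pair in zip(cpu_heavy, mem_heavy) for inst in pair])

print(dedicated, mixed)  # -> 4 3
```

With these made-up numbers, dedicated machines waste one dimension (memory on the CPU-heavy machines, CPU on the memory-heavy ones) and need 4 machines, while mixing the shapes fits everything on 3.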




Hang on there, are you saying you can squeeze in more applications on any given server using containers, rather than just running them in the regular filesystem? That doesn't make sense.

No, you use containers despite the fact that your hardware utilization goes down (mainly because there are no shared pages between applications), because your huge sprawling environment is too hard to change with flag days.


There are multiple definitions of the word 'container'. In the case of Borg, 'container' referred to the kernel resource isolation component, which is completely orthogonal to how the apps were packaged. Borg aggressively shared packages between jobs on the machine where possible.

Being able to strictly apportion resources between the different jobs on a machine (and decide who gets starved in the event that the scheduler has overcommitted the machine) means you can squeeze more out of a given server (by safely getting its utilization closer to 100%).
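The "decide who gets starved" point can be sketched as a toy priority-based apportioner. The job names, priorities, and numbers are invented for the example; Borg's actual policy is more involved:

```python
# Toy sketch: on an overcommitted machine, fill CPU requests from the
# highest-priority job down, so low-priority (e.g. batch) work is the
# first to be starved while serving jobs keep their full allocation.

def apportion_cpu(jobs, capacity):
    """jobs: list of (name, priority, cpu_request); higher priority wins.
    Returns {name: cpu_granted}."""
    grants = {}
    free = capacity
    for name, prio, req in sorted(jobs, key=lambda j: -j[1]):
        grants[name] = min(req, free)  # grant what's left, at most the request
        free -= grants[name]
    return grants

# Requests total 24 cores on a 16-core machine: deliberately overcommitted.
jobs = [("web-frontend", 10, 8), ("batch-analytics", 1, 12), ("logsaver", 5, 4)]
print(apportion_cpu(jobs, capacity=16))
# -> {'web-frontend': 8, 'logsaver': 4, 'batch-analytics': 4}
```

Because only the lowest-priority job absorbs the shortfall, the scheduler can safely pack requests past the machine's capacity and run it near 100% utilization.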

There are other definitions of the word 'container' that are closer to 'virtual machine' and include things like a disk image which is much harder to share, but that's not what's being discussed in the context of Borg. (Not sure about Kubernetes, that's after my time)



