
>20k RPS.

If this metric is what you are chasing, there are ways to reliably break 1 million RPS using a single box if you don't play the shiny BS tech game. The moment you involve multiple computers and containers, you are typically removed from this level of performance. Going from 2,000 to 2,000,000 RPS (serialized throughput) requires many ideological sacrifices.

Mechanical sympathy (ring buffers, batching, minimizing latency) can save you unbelievable amounts of margin and time when properly utilized.
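To make "ring buffers, batching" concrete, here is a minimal sketch in Go of a single-producer/single-consumer ring buffer whose consumer drains everything available in one batched pass. The names (spscRing, Publish, DrainBatch) and the capacity are illustrative and come from no particular codebase:

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    // Power-of-two capacity lets us replace modulo with a bit mask.
    const ringSize = 1 << 16 // illustrative capacity
    const ringMask = ringSize - 1

    // spscRing is a minimal single-producer/single-consumer ring buffer.
    type spscRing struct {
        buf  [ringSize]uint64 // payloads; a real server would store request structs here
        head atomic.Uint64    // next slot the consumer will read
        tail atomic.Uint64    // next slot the producer will write
    }

    // Publish writes one item; returns false if the ring is full.
    func (r *spscRing) Publish(v uint64) bool {
        tail := r.tail.Load()
        if tail-r.head.Load() == ringSize {
            return false // full; caller decides whether to spin, drop, or back off
        }
        r.buf[tail&ringMask] = v
        r.tail.Store(tail + 1) // hand the slot to the consumer
        return true
    }

    // DrainBatch hands the consumer everything that is currently ready in one
    // pass, amortizing the synchronization cost across the whole batch.
    func (r *spscRing) DrainBatch(handle func(uint64)) int {
        head := r.head.Load()
        tail := r.tail.Load()
        for i := head; i < tail; i++ {
            handle(r.buf[i&ringMask])
        }
        r.head.Store(tail)
        return int(tail - head)
    }

    func main() {
        var r spscRing
        for i := uint64(0); i < 5; i++ {
            r.Publish(i)
        }
        n := r.DrainBatch(func(v uint64) { fmt.Println("handled", v) })
        fmt.Println("batch size:", n)
    }

The mechanical-sympathy part is that producer and consumer only ever coordinate through two counters, memory access stays sequential, and the consumer pays the synchronization cost once per batch rather than once per request.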



I frankly don't see where containers could lower performance.

Basically, a container is a glorified chroot. It has the same networking unless you ask for isolation, in which case packets have to follow a local (inside-the-host) route. There is no CPU or kernel-interface penalty at all.

Maybe you meant container orchestration like k8s, with its custom network fabric, etc.


> I frankly don't see where containers could lower performance.

Have you seen most k8s deployments? It's not the containers; it's the thoughtspace that comes with them. Even just using bare containers invites a level of abstraction and generally comes with a type of developer that just isn't desirable.


So it's not due to containers as such, but nowadays "container" in prod tends to mean k8s or similar clustering.


Even loopback is significantly slower than a direct method invocation.
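For a rough sense of the gap, here is a hedged Go benchmark sketch comparing a direct function call with a one-byte round trip over a 127.0.0.1 TCP socket (the function and benchmark names are made up for illustration):

    package loopback_test

    import (
        "bufio"
        "net"
        "testing"
    )

    // echo stands in for the "direct method invocation" baseline.
    func echo(b byte) byte { return b }

    var sink byte // keeps the compiler from discarding the call

    func BenchmarkDirectCall(b *testing.B) {
        for i := 0; i < b.N; i++ {
            sink = echo(42) // no syscalls, no copies, likely inlined
        }
    }

    func BenchmarkLoopbackRoundTrip(b *testing.B) {
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            b.Fatal(err)
        }
        defer ln.Close()

        // Trivial echo server on the far end of the loopback connection.
        go func() {
            conn, err := ln.Accept()
            if err != nil {
                return
            }
            defer conn.Close()
            buf := make([]byte, 1)
            for {
                if _, err := conn.Read(buf); err != nil {
                    return
                }
                if _, err := conn.Write(buf); err != nil {
                    return
                }
            }
        }()

        conn, err := net.Dial("tcp", ln.Addr().String())
        if err != nil {
            b.Fatal(err)
        }
        defer conn.Close()
        r := bufio.NewReader(conn)
        buf := []byte{42}

        b.ResetTimer()
        for i := 0; i < b.N; i++ {
            // Each iteration pays syscalls and kernel TCP processing,
            // even though no packet ever leaves the machine.
            if _, err := conn.Write(buf); err != nil {
                b.Fatal(err)
            }
            if _, err := r.ReadByte(); err != nil {
                b.Fatal(err)
            }
        }
    }

Running go test -bench=. on a typical box tends to put the two several orders of magnitude apart: loopback skips the NIC, but every round trip still goes through syscalls, copies, and the kernel TCP stack.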



