
Queuing theory is nice, but then again it's completely abstract and mathematical. Real-life issues get in the way.

In particular: for many parallelization problems, it's simply not worth the effort to ever load balance. The tasks are too small, and the time spent calculating where to send a task is more expensive than just doing the task to begin with. (Ex: matrix multiplication can be seen as a parallel set of multiplies, followed by a set of additions. The multiplies are too cheap to justify any form of load balancing; each one executes in roughly a single clock tick).

So in reality, we have "fine-grained parallelism" and "coarse-grained parallelism". Coarse-grained is where load balancing is feasible: the task is big enough that it's worth spending a bit of CPU time figuring out where it should run.
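A toy back-of-the-envelope sketch of this trade-off (all the cycle counts here are made-up numbers for illustration, not measurements):

```python
# Hypothetical cost model: if deciding where a task runs costs more
# than the task itself, fine-grained load balancing is a net loss.
SCHED_OVERHEAD = 50   # assumed cycles to pick a worker for one task
MUL_COST = 1          # one multiply: ~1 clock tick
N = 1_000_000         # number of multiplies

# Fine-grained: load-balance every multiply individually.
fine = N * (SCHED_OVERHEAD + MUL_COST)

# Coarse-grained: split the work into 8 big chunks, balance only those.
CHUNKS = 8
coarse = CHUNKS * SCHED_OVERHEAD + N * MUL_COST

print(fine, coarse)  # scheduling overhead dominates the fine-grained case
```

With these (invented) constants the fine-grained version spends 50x its useful work on scheduling, which is why that regime is left to the CPU's out-of-order machinery rather than software.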

"Fine-grained" is very difficult, and is largely in the realm of CPU design (CPUs will execute your instructions out of order to discover parallelism)... though software also plays a role.

----------

That being said, I probably should study more queuing theory myself. The math seems less about "how to design a good parallel system" and more focused on "how to measure a parallel system" (ex: throughput * latency == state, which is Little's law... meaning you can derive latency == state/throughput, and other such mathematical tricks).

The math is simple, albeit abstract. It's not there for "deeper understanding", it's there for "even basic understanding" (which is much, much faster to reach if you already know queuing theory).
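A minimal sketch of that "mathematical trick", with made-up numbers: Little's law says the average number of items in a system equals throughput times average latency, so you can recover latency from two quantities that are often easier to measure directly.

```python
# Little's law: state = throughput * latency  (L = lambda * W).
# The numbers below are assumed for illustration.
throughput = 2000.0   # requests per second, measured at the exit
in_flight = 50.0      # average requests in the system ("state")

# Derive latency without timing any individual request:
latency = in_flight / throughput  # seconds

print(latency)  # 0.025 s, i.e. 25 ms average time in system
```

The appeal is that throughput and queue depth are usually cheap counters, while per-request latency histograms are comparatively expensive to collect.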


