
People downvote this, so I will provide a bit more explanation: Parallelism is 'easy' as long as you have naturally separated problems. This is the reason embarrassingly parallel problems are so great for many-core systems. Every single sub-problem can be done without having to synchronize with other parts of the system. Unfortunately, most problems aren't that way, but instead are interlocked on multiple axes:

- Minimal granularity: You can never run more cores than you have problems at the same time, but you also cannot split problems as finely as you want, or the overhead of splitting will kill all the performance gains you'd get (computing x+y in an extra thread/process is technically possible, in reality pretty stupid)

- Synchronization points: Most problems are not completely separate from each other, so you can do part of the problem in parallel but at some point you have to converge and do something with the combined result.

- Comparable task size: In an ideal world all of your tasks would take exactly as long as every other, because if one task takes far longer than the rest you have to wait for it. So, the minimum run time of your parallel problem is bounded by the time of your longest sub-problem. If one of the problems takes far longer than the others (and they depend on each other) you've lost.
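A quick way to see that last bound: the wall-clock time of a parallel phase can never beat the longest single task, nor the total work divided evenly across all cores. A minimal sketch (the function name and the task durations are made up for illustration):

```python
def parallel_lower_bound(task_times, cores):
    # A parallel phase can finish no sooner than:
    #  - the longest single task (it can't be split), and
    #  - the total work spread perfectly evenly over all cores.
    return max(max(task_times), sum(task_times) / cores)

# Eight equal tasks on 4 cores: two "rounds" of work per core.
print(parallel_lower_bound([10] * 8, cores=4))        # 20.0

# One straggler dominates: extra cores stop helping.
print(parallel_lower_bound([10, 10, 10, 100], cores=4))   # 100
print(parallel_lower_bound([10, 10, 10, 100], cores=64))  # still 100
```

Note how adding cores only helps until the longest task becomes the bottleneck.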

The last one is more of a "meta" point: Complexity doesn't scale linearly for dependent problems. That's another reason embarrassingly parallel problems are so nice. Not only can you run them all in parallel, you can think about each one as a completely separate problem. The moment you have dependencies you have to think about how to bring them all together, and that gets hard very fast if you have many moving parts.

So, to sum it up: Many small things conspire against just scaling something which works great for two or four cores up to eight, 16, 32 or whatever. If you happen to have an embarrassingly parallel problem you're golden, but unfortunately only a small subset of interesting problems are that way and for other problems scaling is hard.



"In an ideal world all of your tasks would take exactly as long as the other, because if one task takes far longer than the others you have to wait for it. So, the minimal run time of your parallel problem is bound by the time of your longest sub-problem. If one of the problems takes far longer than the others (and they depend on each other) you've lost."

Could the cores that would otherwise be waiting for the longest sub-problem to complete start working on some new sub-problems to make the following set of sub-problems finish sooner?


If you have a set left that doesn't depend on the result of the problem still being computed, you can do this, yes. Though in that case you don't actually have two sets; you have one set that is bigger than your core count. I simplified a bit before, but if you have more problems than cores, your longest sub-problem shouldn't run longer than the time it takes to compute all the other sub-problems (e.g. five sub-problems on one core while another core computes one long problem), or you'll have to wait.
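This oversubscription idea can be sketched with a greedy list scheduler that always hands the next task to the least-loaded core (function name and numbers are invented for illustration; real runtimes like work-stealing pools do something similar dynamically):

```python
import heapq

def greedy_makespan(task_times, cores):
    # Classic list scheduling: sort tasks longest-first (LPT) and
    # assign each to the currently least-loaded core. Returns the
    # total wall-clock time (the load of the busiest core).
    loads = [0.0] * cores
    heapq.heapify(loads)
    for t in sorted(task_times, reverse=True):
        lightest = heapq.heappop(loads)
        heapq.heappush(loads, lightest + t)
    return max(loads)

# Six equal tasks on 2 cores: three per core, nothing idles.
print(greedy_makespan([5, 5, 5, 5, 5, 5], cores=2))  # 15.0

# One task longer than all the others combined: it alone
# dictates the total, and the second core idles at the end.
print(greedy_makespan([30, 5, 5, 5, 5], cores=2))    # 30.0
```

The second case is exactly the failure mode described above: once one sub-problem outweighs everything else, no amount of clever scheduling fills the gap.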

Unfortunately, I had this happen to a program of mine last year. All the other tasks were long done, but one kept running and running. The run time of the program escalated to hours, with the first few minutes marked by high utilization of the whole system and then only one core in use while everything else waited. It sucked.


Thanks for expanding on this. My answer of 'Yes.' deserved the downvotes.



