
If it was anything like the Niagara processors, the shared FP units are normally the bottleneck for floating-point work. But the larger problem was the register remapping/pipelines. They were fast if you were running certain workloads. God help you if you had to compress anything on those systems: without pbzip2 or pigz it took forever. Probably a bad example, but Bulldozer seemed way too Niagara-ish to me based on its goals.
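For anyone wondering what pbzip2/pigz buy you: here's a minimal sketch of the same idea in Python, compressing independent chunks in parallel so the work spreads across cores instead of serializing on one. The chunk size and worker count are illustrative assumptions, not pbzip2's actual defaults:

    # Compress independent chunks in parallel, pbzip2-style.
    import bz2
    from multiprocessing import Pool

    CHUNK_SIZE = 1 << 20  # 1 MiB per chunk (assumed; pbzip2 uses ~900 KB blocks)

    def compress_chunk(chunk: bytes) -> bytes:
        return bz2.compress(chunk)

    def parallel_compress(path: str, workers: int = 4) -> list:
        with open(path, "rb") as f:
            # Materializing every chunk is fine for a sketch; a real tool
            # would stream them to bound memory use.
            chunks = list(iter(lambda: f.read(CHUNK_SIZE), b""))
        with Pool(workers) as pool:
            # Each chunk compresses independently, so throughput scales
            # with cores in a way single-threaded bzip2 never can.
            return pool.map(compress_chunk, chunks)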


Running threaded floating-point workloads on Bulldozer-derived architectures is just folly. If you have parallel floating-point code, you should in general be running it on a GPU.


These weren't FP-intensive workloads at all, mostly your typical IT I/O workloads. I don't know the internals well enough to say exactly what goes wrong or why, but something seems to go really wrong on Bulldozer when you try to schedule two different VMs on the same coupled pair of cores.
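If anyone wants to poke at this, a minimal Linux-only sketch of the workaround is to pin each VM (or worker process) to a whole module so no two guests share one. The (0,1)/(2,3) sibling pairing below is an assumption; check /sys/devices/system/cpu/cpuN/topology on your box first:

    # Pin each pid to one full Bulldozer module (Linux only).
    import os

    MODULE_PAIRS = [(0, 1), (2, 3), (4, 5), (6, 7)]  # assumed sibling pairs

    def pin_to_separate_modules(pids):
        """Give each pid its own module so no two share FP/decode units."""
        for pid, pair in zip(pids, MODULE_PAIRS):
            os.sched_setaffinity(pid, set(pair))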


It's because they're not independent cores. You're pretty much never going to get the same single-thread performance with two threads running on a module as with one. The idea is that each thread ought to get better than 0.5X the single-thread performance, so that with two threads, 2*0.75X beats X, while a strictly single-threaded workload still gets X (or better, with turbo).
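Back-of-the-envelope, treating the 0.75 per-thread factor above as an assumed number rather than anything measured:

    single_thread = 1.0       # normalized throughput, one thread alone
    per_thread_shared = 0.75  # assumed scaling with two threads per module
    module_throughput = 2 * per_thread_shared  # 1.5, so the pair beats 1.0
    assert module_throughput > single_thread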

Where this can fall apart is when you're running eight homogeneous threads at once and the threads have large working sets, such that the second thread on each module spills out of the per-module caches. Then you have eight threads contending for L3 bandwidth, or, if you're really screwed, you fill up the L3 and start hitting main memory.
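A rough way to see the spill effect for yourself (the sizes are assumptions picked around Bulldozer's 2 MB per-module L2, and Python's interpreter overhead will mute the difference; a C version would show it far more clearly):

    # Time the same number of random reads over small vs. large working sets.
    import random
    import time

    def touch(working_set_bytes, accesses=1_000_000):
        buf = bytearray(working_set_bytes)
        idx = [random.randrange(working_set_bytes) for _ in range(accesses)]
        start = time.perf_counter()
        total = 0
        for i in idx:
            total += buf[i]  # may hit the per-module cache or spill to L3/DRAM
        return time.perf_counter() - start

    print("fits in L2:  ", touch(1 << 20))   # ~1 MB working set
    print("spills past: ", touch(32 << 20))  # ~32 MB working set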

Out of curiosity, have you tried any of the Abu Dhabi Opterons? They doubled the L3 from 8MB to 2x8MB, which I'd expect to help both by keeping you out of main memory and by reducing contention, since each L3 is now split between half as many cores (assuming you don't get the new twice-as-many-cores models).



