
Huh, where did GPUs enter the discussion?

Anyway, for a typical HPC cluster it's bog-standard x86 hardware; the only remotely exotic thing is the InfiniBand network. Common wisdom says that since InfiniBand is a niche technology it must be hugely expensive, but strangely(?) it seems to offer (MUCH!) better bang for the buck than Ethernet. A 36-port FDR IB (56 Gb/s) switch has a list price of around $10k, whereas a quick search suggests a 48-port 10GbE switch has a list price of around $15k. So the per-port price is roughly in the same ballpark, but IB gives you >5 times the bandwidth and roughly two orders of magnitude lower MPI latency. Another advantage is that IB supports multipathing, so you can build high-bisection-bandwidth networks (all the way up to fully non-blocking) without needing $$$ uber-switches on the spine.
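
Back-of-the-envelope, the per-port numbers work out like this (a quick sketch using only the list prices quoted above; real street prices will differ):

    # Per-port cost and bandwidth-per-dollar, using the list
    # prices quoted above (assumptions, not vendor quotes).
    switches = {
        # name: (list_price_usd, ports, gb_per_s_per_port)
        "36-port FDR InfiniBand": (10_000, 36, 56),
        "48-port 10GbE":          (15_000, 48, 10),
    }

    for name, (price, ports, gbps) in switches.items():
        per_port = price / ports
        gbps_per_dollar = gbps / per_port
        print(f"{name}: ${per_port:,.0f}/port, "
              f"{gbps_per_dollar:.3f} Gb/s per dollar")

    # FDR IB: ~$278/port, ~0.202 Gb/s per dollar
    # 10GbE:  ~$313/port, ~0.032 Gb/s per dollar (~6x worse)

So even at list prices, IB delivers about six times the bandwidth per dollar before you count the latency advantage.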



That's interesting, things may have changed with IB since I last looked.

The GPU thing seems to have fallen out of my original comment; I meant to write "I can build you a 100,000 core system for $300k" but somehow the decimal point jumped three places to the left! To do that I would definitely have to use GPUs...
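
As a rough sanity check on $3/core, counting GPU cores toward the total (a sketch; the per-card core count and all prices below are assumptions for illustration, not a real parts list):

    # Hypothetical build: reach 100,000 "cores" by counting GPU cores.
    # All core counts and prices below are assumed for illustration.
    budget        = 300_000   # USD, the target from the comment above
    cores_per_gpu = 2_500     # e.g. a Kepler-class card
    gpu_price     = 3_500     # assumed price per card
    host_price    = 2_500     # assumed dual-socket host, holds 2 GPUs
    gpus_per_host = 2

    gpus_needed  = -(-100_000 // cores_per_gpu)      # ceil -> 40 GPUs
    hosts_needed = -(-gpus_needed // gpus_per_host)  # ceil -> 20 hosts
    total = gpus_needed * gpu_price + hosts_needed * host_price

    print(f"{gpus_needed} GPUs + {hosts_needed} hosts ~= ${total:,}")
    # -> 40 GPUs + 20 hosts ~= $190,000, leaving ~$110k of the
    #    $300k budget for network, storage, racks, etc.

Under those assumptions the $300k figure looks plausible, as long as you're happy calling a GPU core a "core".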

I am seriously lusting after such a device; I feel that there is much to be done.



