
It really depends on what you consider a core.

http://www.anandtech.com/show/2918/2

That first picture shows 4 cores made of 4 sub-cores with 32 processing elements each. NVIDIA would claim each of those 32 processing elements is a core, but those elements cannot act independently. So it is more like a very wide, heavily hyper-threaded 16-core processor.
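The competing core counts follow directly from the numbers above; a quick sketch of the arithmetic (using only the figures stated in this comment):

```python
# Tally the two definitions of "core" for the chip in the linked diagram:
# 4 cores x 4 sub-cores = 16 units that can follow their own instruction
# stream; each unit has 32 lockstep processing elements, which is what
# NVIDIA's marketing counts as "cores".
independent_units = 4 * 4              # independently scheduled units
lanes_per_unit = 32                    # lockstep processing elements per unit
nvidia_core_count = independent_units * lanes_per_unit

print(independent_units)    # 16  (cores in the parent's sense)
print(nvidia_core_count)    # 512 (cores in NVIDIA's sense)
```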



I think NVIDIA's definition of a 'core' has some merit. First, the cores have some independence in that you can introduce branches over a subset of them, so they're not just SIMD vector units. Second, their threaded programming model is well suited to many computational tasks: executing the same operations over a whole 2D or 3D region of data is a common pattern in computing, and if you can't parallelize your task that way, chances are it isn't parallelizable on N x86 cores either. If you compare this to x86, though, to be fair you'd have to count n cores times the SSE vector length on each core. GPUs still come out ahead for most heavy computational tasks, which is why Intel is now fighting back with its Xeon Phi (which sounds very promising btw., looking forward to playing with our prerelease model that's coming soon ;) ).




