
> For instance, Skymont finally has no penalty for denormals, a long standing Intel weakness.

yeah, that's crazy to me. Intel has been so completely dysfunctional for the last 15 years. I feel like you couldn't have a clearer sign of "we have 2 completely separate teams that are competing with each other and aren't allowed to/don't want to talk to each other". it's just such a clear sign that the chicken is running around headless



Not really, to me it more seems like Pentium-4 vs Pentium-M/Core again.

The downfall of the Pentium 4 was that they kept stuffing things into longer and longer pipes to keep up in the frequency race (with horrible branch-misprediction latencies as a result). Intel walked it all back by "resetting" to the P3/P-M/Core architecture and scaling up from there again.

Pipes today are even _longer_, and if the E-cores have shorter pipes at a similar frequency, then "regular" JS, Java, etc. code will be far more performant even if you lose a bit of perf in the cases where people vectorize. (Did the HPC crowd steer Intel into a ditch AGAIN? Wouldn't be surprising!)


Thankfully, the P-cores are nowhere near as bad as the Pentium 4 was. The Pentium 4's architecture was so skewed that it was frustrating to optimize for. Not only was the branch misprediction penalty long, but all the common methods of doing branchless logic, like conditional moves, were also slow. It also had a slow shifter, such that small left shifts were actually faster as sequences of adds, a trick I hadn't needed since the 68000 and 8086. And an annoying L1 cache with 64K aliasing penalties (guess which popular OS allocates all virtual memory, particularly thread stacks, at 64K alignment...)

The P-cores have their warts, but they are still much better rounded than the P4 was.



