
Since 2000, AMD has often beaten Intel on multi-threaded workloads, but never single-threaded (at comparable clocks). This bench shows AMD beating Intel on both! That's what's incredibly notable here. The last time it happened was 20 years ago with the AMD K7.

Now Zen 2 seems to beat Intel overall on every metric: single-threaded perf, multi-threaded perf, perf/dollar, perf/watt. No matter how you look at it, Zen 2 comes out on top.¹ Very impressive.

Man, the folks at Intel must feel the heat.

¹ Except perf/socket when competing with the Xeon 9200, but that's just a PR stunt no one cares about: https://mobile.twitter.com/zorinaq/status/113576693566724096...




You want notable? AMD is currently beating Intel on price, single-threaded perf, multi-threaded perf, TDP/power usage at equivalent performance AND on many of the "little pluses on the side" (more PCIe lanes, ECC support, ...).

When Zen first came out it was a huge deal, but it merely established them as a real competitor, with a decent advantage in many cases, reinforced with Zen+. But Zen 2 puts them ahead in almost every category, and in all markets; Threadripper and EPYC are just as strong in their areas.

Either Intel has something strong about to appear, or they're going to face a truly difficult few years with customers going AMD now that it's not merely "one generation of chip" that was good. It feels like getting their 10nm working will not be enough by itself.


I hope Intel will at least offer ECC memory for their consumer CPUs now. For now at least AMD is offering the option, which should increase uptake, grow supply, and drive prices down.

I hope that in a few years' time ECC will become standard; for many, going without it is like driving a car without a seatbelt.

Add in security exploits like Rowhammer, and ECC looks like the solution; the price gap is, for many, one that can and should be closed.

That all said, it would only take one or two big mobile manufacturers to go ECC and market the security and integrity angle for the rest of the industry to follow suit. That would be a bigger driver in reducing the price premium over non-ECC memory. It's how I see things panning out: mobile phone makers are running out of selling features to add, this would be an easy one for the premium phone market out there today, and it's one that would go down very well on its own.


> I hope Intel will at least offer ECC memory for their consumer CPUs now.

They do (or did?). My home NAS is running a Sandy Bridge Intel Celeron with ECC memory. Support seems to be randomly distributed throughout the product line[1] though, and obviously depends on the motherboard manufacturer to implement it as well.

In general, Intel has a problem with branding. Their product lines are a confusing mess, requiring you to look up each specific part to get a list of the features it does or does not support. There's little rhyme or reason to it.

[1] https://arstechnica.com/civis/viewtopic.php?p=22587440


> Their product lines are a confusing mess, requiring you to look up each specific part to get a list of the features it does or does not support. There's little rhyme or reason to it.

I got bitten by that back in the Core 2 era. When I built my very last Intel system, around an Intel DG43NB board, I picked up a Q8200, thinking it would be a great bang-for-the-buck chip. Little did I realize from the store display that the Q8200 was the only Core 2 Quad that lacked virtualization support.

The DG43NB died prematurely, due to capacitor plague. I didn't shed any tears for it.


Since Haswell, ECC support has been on (Core-based desktop) Celerons, Pentiums, i3s, and Xeons. It's quite straightforward: when they pulled 2C Xeons off the market, they added ECC to the consumer processors instead, but they still want to retain Xeon sales for the higher-end products, so they lock it off on the i5s/i7s/i9s.

Of course you need to know what you're looking at; the suffix of the number matters a lot (e.g. 7100 vs 7100U), but that is nothing Intel-specific. A Ryzen 2700 and a Ryzen 2700U are very different processors as well.


The issue isn't that the 7100U vs the 7100 reflects some fundamental design decision that didn't include ECC support; it's that Intel had a chip where ECC was supported, then disabled it.

It's not because they are "very different processors".


The 7100U is a BGA laptop processor. You're not in danger of accidentally buying a 7100U to put into your NAS.

If you look at the products that are actually compatible with your system, it's not confusing at all. Pentium/Celeron/i3/Xeon = has ECC, i5/i7 = no ECC.
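
For what it's worth, that rule of thumb is simple enough to write down as a one-line check. This is a heuristic sketch only (the function name is made up, and real support still varies by specific SKU and motherboard, so verify on ark.intel.com):

  # Heuristic for Haswell-onward desktop parts, per the rule above.
  # Not authoritative: always verify the exact SKU on ark.intel.com.
  def likely_has_ecc(brand: str) -> bool:
      return brand.lower() in {"pentium", "celeron", "i3", "xeon"}

  for b in ("Celeron", "i3", "i5", "i7", "Xeon"):
      print(b, "->", "ECC" if likely_has_ecc(b) else "no ECC")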


So what's the advantage of ECC, and why would you choose it over more and faster memory at a lower price? I've seen ECC touted in a lot of marketing, but the use cases seem, well, boutique to me, such as video or computer graphics rendering.


I'm running 64 GB ECC with a Ryzen system. As it turns out, memory in general was just exorbitantly expensive in the recent past, so the ECC premium wasn't all that much. On the flip side, I don't do PC gaming so having faster memory didn't really matter much; current ECC was already faster than the 5 year old system I was upgrading from.

For me, the reason to go ECC was just to prevent silent file corruption. With 64 GB, the math is in favor of me seeing bit flips. Moreover, I tend to put my machine to sleep, rather than shut it off, which also increases the likelihood of memory errors. I wouldn't say I have a highly specialized workload outside of the occasional VM, consisting of large files, for development. A lot of it is I just didn't want to deal with silent corruption of family photos & videos, even if the underlying file formats are pretty resilient.
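
To put rough numbers on "the math is in favor of me seeing bit flips", here's a back-of-envelope sketch. The FIT rates are assumptions borrowed from Google's 2009 DRAM field study (Schroeder et al.), which skew high because a minority of bad DIMMs dominate the averages:

  # Back-of-envelope: expected correctable DRAM errors for 64 GB.
  # FIT = failures per 10^9 device-hours. The 25,000-75,000 FIT/Mbit
  # bounds are illustrative assumptions, not measurements of any
  # particular system.
  capacity_mbit = 64 * 8 * 1024  # 64 GB expressed in megabits

  for fit_per_mbit in (25_000, 75_000):
      errors_per_hour = fit_per_mbit * capacity_mbit / 1e9
      print(f"{fit_per_mbit} FIT/Mbit -> ~{errors_per_hour:.0f} errors/hour")

Even if the true rate for healthy DIMMs is vastly lower than those bounds, at 64 GB and months of uptime the expected number of flips is comfortably above zero, which is the point.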

Personally, I think the situation should be flipped. Everyone should run with ECC these days and only run without for specialized environments like gaming where you want to squeeze out every FPS you can. Faster memory isn't going to matter for most situations.


Protection against memory errors. I don't have exact odds for such errors, but they're certainly not negligible, and with faster memory and larger capacities those odds only increase. Many such errors go unnoticed, and the impact for most is unknown, but it can equally be severe. Hence today it is mostly critical systems that use ECC, since they can justify the extra cost.

For everything else, it's a scale of needs: the low end would be a gaming console or graphics memory, but if the price gap gets smaller, uptake and usage will grow as that advantage becomes cost-effective at various consumer levels.

Having the option is, and always has been, a huge plus for any consumer, and AMD makes that choice far more accessible than Intel. Though I still see a mobile phone maker going ECC as the turning point for uptake, making that price gap more palatable in the end.


If you value stability and no file corruption, you'll go for ECC.

If you use it just for media consumption and playing games, anything goes.

A good power supply and ECC RAM make a very stable PC.


If you have ever had a corrupt file on your hard disk, ECC memory can help prevent that. Usually data corruption happens in memory (and then is saved to disk).


In short, stability. If the performance hit is low double digits, that may not even be noticeable for many use cases.


Reliability.

Whether one cares about that is one's own tradeoff.


Intel actually offers ECC memory by chipset, not CPU. If you get a board with the server-oriented chipset, it will often support a Core i5.


No, ECC isn't supported in that configuration. If you're lucky, it works as normal RAM.

A few i3 CPUs support ECC, though.


I think it might have worked in some earlier generations of Core i, perhaps Sandy Bridge and Ivy Bridge.


There are some, but none in the desktop market segment; they're for embedded use.

All desktop-segment chips with ECC have the i3 designation.

See:

https://ark.intel.com/content/www/us/en/ark/search/featurefi...


And motherboards from two years ago (AM4) will support the latest Zen 2 / Ryzen 3000 series CPUs[0]. Meanwhile, Intel changes the socket every generation, which adds another $100-$300 to the cost of a CPU upgrade.

[0] Double-check with your board manufacturer for BIOS updates to support the new CPUs; my cheap B350 board has one.


Helpful link for knowing which motherboards can have their BIOS flashed to support Zen 2 without an installed CPU/memory:

https://redd.it/bvfo57


This is unfortunately not helpful to those of us who haven't made the jump to AMD yet :(


Well, the good news is you can pick up a very inexpensive board now, even a used one, and it will be compatible with the latest CPUs. I bought my motherboard in July 2017 and will be upgrading to a Ryzen 3000 after the 7/7 launch.


It's been 24 years since you could use an Intel or AMD processor on the same motherboard and some people just can't get over it.


It's currently Intel's turn to have Jim Keller. They'll figure something out.

https://en.wikipedia.org/wiki/Jim_Keller_(engineer)


Everyone likes to look at individuals to solve problems when it's all about the team. I imagine the internal politics at Intel are much worse than at AMD but that's just a guess.


Generally I would agree, but in this case it's Jim Keller we are talking about.


I mean, you might be right, but looking at the lead times for a new CPU architecture, I'd guess he will start making a difference about 3-4 years from now.


Which would still be good enough for Intel, considering their current market share. They'll have a couple of bad years, but Intel will still be on top even if it takes them four years to beat AMD again. Many businesses and OEMs didn't even have AMD on their radar anymore. I'm still waiting for a suitable dev laptop based on Ryzen.


Have a look at the ThinkPad. Lenovo has fully embraced Ryzen in desktop and laptop.


Interesting, he even popped by Tesla on his journey: https://web.archive.org/web/20180426124248/https://www.kitgu...


Pretty amazing to see him go straight from a B.S. into designing processors and fairly quickly become the head engineer on cutting-edge processors.


It looks like Intel completely messed up 10nm. Nothing new since 2015...

My understanding is that they pushed multi-patterning a bit too far and have too many defects. 7nm, which uses a different tech and is being developed in parallel, seems to be going better.

So if Intel has a big thing in the making, I expect it to be 7nm, not 10nm.


Don’t forget... security ;)


> Either Intel has something strong about to appear

People are hypothesizing this to be true given that Apple is putting Intel rather than AMD chips in the Mac Pro, and Apple doesn’t usually make dumb purchasing decisions, but does sometimes have private access to product roadmaps.


There are a lot of reasons for Apple to choose Intel.

Apple has a lot of optimizations for Intel at the moment from the instruction set down to the motherboards and chipsets. A great example is all the work they do in undervolting mobile chips so they perform better (when the latest MBPs shipped with this disabled, everyone really complained). Re-writing all that software definitely has non-trivial R&D costs.

When making a new motherboard design, a ton of stuff simply gets reused and moved around. Switch to a different chipset and you start all over for a lot of stuff. Even if AMD were 10-20% faster overall, their current "fast enough" Intel chips would still win out.

AMD's Zen+ 3000 mobile chips don't compete with Intel in per-clock performance, clock speeds, or total power draw. With the exception of the Mac Pro, Apple's entire lineup uses mobile processors. In addition, Intel probably gives amazing discounts to Apple. Zen serves them best as a way to squeeze out an even better deal.

A final consideration is ARM. Given the performance characteristics of A12, Apple most certainly has their sights set on using some variant in their laptops in the not-too-distant future. They already run their phone chips in their macbooks as the T2 chip. They are probably working on the timing to allow those chips to run more than the touchbar and IO.


Pretty good analysis overall, although this part is not true:

> With the exception of the Mac Pro, Apple's entire lineup uses mobile processors.

In fact their entire desktop lineup now uses desktop-grade CPUs. (except possibly some entry-level iMacs that weren't subject to the recent refresh)

  * iMac Pro: Workstation processor (Xeon)
  * iMac 27": Socketed desktop processor (e.g. Core i9-9900KF in top config)
  * iMac 21": Socketed desktop processor (e.g. Core i7-8700)
  * Mac Mini: Soldered embedded desktop processor (e.g. Core i7-8700B)


Yeah, it looks like they switched over to desktop chips around 2017 (they still use laptop memory though -- except for the iMac Pro).

An i9-9900K has a 95 W TDP, but AnandTech puts the real load number at around 170 W. I've seen people undervolt these down to around 110-120 W in the 4.7 GHz range. I imagine Apple's dynamic undervolting and custom motherboard can shave another 10% or so off that total, and the dynamic undervolting can go much lower with fewer cores and lower frequencies. While even that isn't going to keep their tiny cooler from throttling under sustained loads, it could get much closer.
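
The reason a modest undervolt pays off so much: dynamic CPU power scales roughly with C·V²·f, so at a fixed frequency, power falls with the square of voltage. A rough sketch with assumed (not measured) voltages:

  # Dynamic power ~ C * V^2 * f; at fixed clocks, power scales with V^2.
  # The voltages below are illustrative assumptions, not measured
  # i9-9900K figures.
  baseline_watts = 170.0            # assumed all-core load draw at stock
  stock_v, undervolt_v = 1.25, 1.05

  scaled = baseline_watts * (undervolt_v / stock_v) ** 2
  print(f"~{scaled:.0f} W")         # ~120 W, in line with reported results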


By "laptop memory" you mean SO-DIMMs; the only significant difference to full-size DIMMs is, well, size. Voltage and frequency tends to be the same, leaving aside extreme overclocker's RAM.

In c't magazine's review of the current iMac, they found that the whole machine appears to have a power limit which is shared by CPU and GPU, so yeah, Apple are definitely doing something fancy in that regard.


You’re talking about mobile here, and I get why—it’s a majority of their computer sales—but most of these arguments don’t apply in the case of the Mac Pro. A Xeon and an Intel mobile processor are different-enough chipsets that there isn’t much motherboard silicon that can be reused between them. (They could maybe reuse the chipset design from the iMac Pro, but Apple kept saying that was a “transitional” design—which I read as “an evolutionary dead-end stop-gap product that we aren’t basing our future design thinking off of.”)

Likewise, I do agree that Apple is likely switching to ARM for mobile—but are there any ARM cores on anyone’s product roadmaps that could power a Mac Pro, or even an iMac? Nah.

I do agree with the greater point: Intel probably do have an exclusivity agreement with Apple right now, and so Apple sticking with them right now isn’t evidence of anything in particular.

But to me, it looks like a natural shift for Apple, in the near future, to adopt a hybrid strategy: if they can switch to ARM entirely for their highest-volume segment (laptops et al), then they'll no longer need the benefits of having Intel as a locked-in high-volume chip supplier, and will thus be free to choose anyone they like for the much-lower-volume segment (desktops/workstations) on a per-machine basis. That might be Intel, or AMD, at any given time. They won't get such great deals from Intel, but in exchange they can play Intel and AMD off one another, now that they're in healthy competition again.

Intel is probably just as aware of what Apple has on its roadmap as Apple is aware of what’s on Intel’s roadmap, so I would expect, if anything, that they’re scrounging desperately around for a mobile architecture that’ll be competitive-enough with the nascent A13 to stave off that collapse of a partnership.


My prediction is that within 5–10 years, we'll be seeing ARM/Apple CPUs at the core of their laptops, low-end desktops, and even their top of the line Mac Pro. Powerful x86 CPUs will be still be available in the Mac Pro and implemented as an "accelerator" card.


Another large consideration is chip availability. Can AMD spin up production quickly enough to handle Apple on top of their own sales, plus the soon-to-come ramp-up for the new Xbox and PlayStation consoles?


Or it's just inertia. The Mac Pro (and every Mac computer) has always had Intel processors.

It's true that quite a few have had AMD GPUs, and they made the more difficult PowerPC to Intel switch back in 2006 with OS X 10.4. But it would be a significant effort to change a processor partnership more than a decade old. No Apple developers have anything but Intel in their machines; it's not just an item on a BOM.


> No Apple developers have anything but Intel in their machines

Given that Apple almost certainly has a research lab maintaining machines running macOS on top of Apple’s own ARM chips, to watch for the inflection point where it becomes tenable to ship laptops running those chips; and thus, given that Apple already has a staff for that lab whose job is to quickly redo macOS’s uarch optimization for each new A[N] core as Apple pumps them out; it doesn’t seem like much of a stretch that those same people would do a uarch optimization pass every once in a while to answer the “can we use AMD chips on desktop yet?” question, does it?


>People are hypothesizing this to be true given that Apple is putting Intel rather than AMD chips in the Mac Pro, and Apple doesn’t usually make dumb purchasing decisions

Apple wouldn't care that AMD narrowly beat Intel for 1-2 chip generations either. They'd care about AMD's ability to produce chips at large enough volumes and keep the pace going forward.

They've been burnt by Motorola before, and by Intel now; they're not going to just jump on a short-term bandwagon.

If AMD manages to keep this up (and ramp up their production) for 5+ years, then they might have a chance with Apple. But again, Apple is more likely to go for their own ARM based chips in 5+ years...


>Apple wouldn't care that AMD narrowly beat Intel for 1-2 chip generations either. They'd care about AMD's ability to produce chips at large enough volumes and keep the pace going forward.

I think there's no reason to bother switching to AMD if/when they plan on moving to their own ARM based CPU within the next few years.


The Adobe Suite, which accounts for the work of a LOT of Apple users, has some significant issues on AMD. Not that they couldn't/shouldn't fix them, but it's Adobe. This is probably a very large part of the issue. Beyond that, Apple uses custom board designs that have significant lead times, and changing platforms isn't easy for an OEM.

Not talking down about prior motherboards, but the next run will have some very high end designs and features compared to prior gen Ryzen as well. I'm really looking forward to upgrading in September/October. Looking at a 3950X unless a more compelling TR option gets announced before then.


Or there's a long term exclusivity agreement. Who knows.


I wish AMD would release something close to the NUC now :-( The only thing keeping me from buying them is my disinterest in building large desktops anymore. I'm over the windowed cases, water cooling, and jet-engine desktops.


Have you seen the Asrock Deskmini A300? It's not quite NUC sized but it's very close. I think it would fit the bill for you.


> The last time it happened was 20 years ago with the AMD K7.

K8, too. AMD's IPC was so far ahead of Intel's at that time it was crazy. 2.2GHz Athlon 64s were keeping up with or beating the 3-3.2GHz P4s.

It was so strong and competitive that Intel resorted to straight-up bribery to compete, resulting in multiple antitrust judgements against it. But they succeeded in preventing K8 from hurting their market share, and kept AMD down despite a vastly superior product. Here's hoping that doesn't happen again this time around, but maybe Intel will decide the wrist slap is worth it.


I think that's unlikely. AMD has to be ready to deal with it somehow.


There's one thing left that I have both doubts and excitement for:

How good are AMD's laptop chips? The improvement in IPC and efficiency in Zen 2 can go a long way in improving this, and then, of course, they must improve perception.

Anecdotally: I've owned almost exclusively AMD chips in my desktop builds for the past 15 years. I've never once owned an AMD laptop. When Intel built the Core line of chips, they seemed to nail laptop first, and then apply the efficiency to their desktop line with higher clocks. It worked wonders.

In my opinion, AMD really needs to nail laptop CPUs/APUs now more than ever. I hope they do!


AMD's current laptop lineup consists of 12nm APUs. They will switch to 7nm next year, in the 4000 series.


Given AMD's recent success in engineering and product decisions, I think they'll overcome their previous laptop processor shortcomings. I'm optimistic. I don't think the process node was the primary driver, though. I think they just had some lingering issues with sleep states, etc. that need to be spot on to ensure excellent battery life.


Chances are I'll be upgrading to a Ryzen 3000 this year. Now that I think about it the last time I ran an AMD CPU was indeed the K7.


I've gone back and forth a couple of times... currently a 4790K, before that an FX-8150, before that a first-gen i7-860, before that an AMD XP/X2... Now looking at the R7 3950X.

Of course, this is the longest I've held out on upgrading, and getting itchy about it...


IIRC, AMD was beating Intel on single-threaded performance in the Pentium 4 era with way lower clocks.


The FX-60 from 2006, I think, was the last AMD CPU that held this crown: 2.6GHz, and it beat Intel's top chip at 3.5GHz. The only thing on the market with higher single-threaded performance was their own FX-57, which was single-core with a slightly higher clock speed.

https://www.anandtech.com/show/1920


I still have my old FX-60 in an antistatic clamshell. I just could never bring myself to get rid of it; I loved that processor so much. That and XP x64 held me over for a very long time.


That's when Intel's "let's go faster" approach ran head-on into the 4GHz barrier for the first time. More than a decade later, we're still pretty much there.

One has to wonder how things would have gone if they hadn't had the Pentium M architecture on the side back then.


Intel knew that the NetBurst architecture was power-hungry, so it had established an Israeli team to develop a mobile architecture in tandem. I believe they started with Tualatin or another late PIII design and optimized the hell out of the microarchitecture. I believe that became Banias, and they continued to bring in aspects of NetBurst to produce the precursors to the Core line.


Yeah, that's why they invented nomenclature such as "AMD Athlon 3200+". It was equivalent to a 3.2GHz Pentium 4 in performance.


NetBurst was rapidly killed; nobody hid that it was a failure.


Its death wasn't rapid.

RDRAM, though, that was rapid. But that was only a fraction of Netburst's problems.


NetBurst lasted for six-ish years before it was no longer Intel's premier product, and then another 2-3 years afterwards in various forms.


Really? My memory is bad then. I thought it was 5 years, including a 2-year lingering period.


> Man, the folks at Intel must feel the heat.

Well, their processors are very effective space heaters, so yeah.


I'm guessing you missed the comments regarding performance per watt?


You mean AMD beating Intel, therefore being less effective space heaters than Intel's own? Yeah, I saw those.


This is great news for consumers. I've always wanted to build another pc based on AMD chips and it looks like Zen 2 is going to be it.


I'd argue AMD and Intel were pretty close in single-threaded performance until the Core 2 Duo (released in 2006).


You'd be wrong. The Athlon 64 destroyed the Pentium 4, and the Athlon XP model names were originally meant to indicate the clock speed of a Pentium 4 that the lower-clocked AMD CPU would perform equivalently to.

Here's a top-of-the-line Intel Pentium 4 3.46 Extreme Edition being unable to compete with AMD processors running at 66% of the Intel's clock speed.

https://www.anandtech.com/show/1529
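
To make the implied IPC gap concrete, here's the simple arithmetic, under the assumption of rough performance parity at those clocks:

  # If a 2.2 GHz Athlon 64 matches a 3.46 GHz Pentium 4 (about 66% of
  # the Intel's clock), equal performance implies this IPC ratio:
  amd_clock, intel_clock = 2.2, 3.46  # GHz, approximate
  print(f"AMD needed ~{intel_clock / amd_clock:.2f}x the IPC of the P4")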

I really don't count the Core Duo, even though I had a MacBook equipped with one. The Core 2 Duo was Intel's first real competition for the Athlon 64.



