The rumor on the street is that Zen 3 is a major upgrade with huge IPC improvements.
If that's true, Intel has basically nothing. All they've got left at this point is "I want the best gaming performance and literally nothing else matters". AMD may just win that crown too.
It's never been more exciting to be an enthusiast, and Intel was asleep for a long time. It's about time someone woke the sleeping giant.
Weren't we all sick of Intel churning out 10% incremental improvements every year on an entirely new socket every time? They were phoning it in for a long time.
The biggest issue I see with this latest breed of Intel chips is their thermals... it looks like you'll need a motherboard that can push quite a few watts to the CPU, and you'll need a cooling solution (liquid cooling or a high-end air cooler) to remove the heat.
If you need to drop $$$ on a high-end motherboard and cooler to actually hit top performance, it obscures the true cost of the CPU, because you can easily spend an extra $100 coping with its power and thermal requirements.
Allegedly they thinned the die and thickened the IHS, and they push roughly the same thermals under load as the 9th-gen chips. And decent coolers aren't that expensive in the grand scheme of things.
But yea, the motherboard situation is way worse. 250W sustained consumption in a relatively small surface area is not going to be great for the substrate layers of the board.
It’s an Intel problem in this case because they’ve made most of their performance improvements with 10th gen by adding cores and increasing frequencies. They haven’t made efficiency improvements in architecture or process node. Their TDP has gone up as a result. Intel managed to keep 10th gen from turning a computer case into a kiln by playing with the heat spreader, but it’s a hot part that needs good/expensive cooling.
So while AMD has made big efficiency improvements in the last couple generations, Intel has been losing efficiency.
It doesn't look that way. The i9-10900k (10 cores) at full load uses 254W, but the 3950x (16 cores) only uses 144W. There are AMD processors that top the i9-10900k in terms of power consumption, but they're 24/32 cores, consuming 280W/286W respectively.
> Weren't we all sick of Intel churning out 10% incremental improvements every year on an entirely new socket every time? They were phoning it in for a long time.
Sort of yes (although they often did two generations per socket in recent times), but then they started just rehashing Skylake every year, and we long for the time of predictable incremental improvement. On the server side, the predictable incremental improvement also came with more cores each time, so that was nice.
I have the fastest HEDT Intel processor (10980xe) and have benchmarked it extensively against AMD's equivalents (3950x and 3960x).
If you are willing to OC then Intel has a significantly better price/performance ratio than AMD for general use. If you're doing anything that benefits from AVX then Intel will pull away significantly. And if you are doing deep learning then DL Boost is seriously under-appreciated. However, if you're not in one of those three camps then AMD is the better option.
Also, the X299 socket (LGA 2066) has been around for 3 years, so Intel doesn't always just change sockets for no reason.
> If you are willing to OC then Intel has a significantly better price/performance ratio than AMD for general use
You're engaged in some serious attempts at rationalization.
The 3950X trounces the 10980XE in virtually every metric (including measures where the 10980XE gets to use AVX-512). It costs 1/3rd less. Unless you're significantly overclocking, with expensive cooling hardware, you aren't making that up. And in the real world the headroom for overclocking is pretty tiny.
Intel is behind, and even after price cuts they're still seriously overpriced. Trying to rationalize that they're "significantly better price/performance" if you just do A, B, C (which are of course completely unverifiable, moving-target, meaningless steps) is nonsense.
And you're seriously positing this as just a typical thing to do?
Your processor costs 33%+ more than the 3950X. Your cooler, even a "hardly expensive" cooler, costs another $200-$300+. No, you aren't anywhere close to a win on price/performance -- even if we assume you aren't blasting 300W through the processor alone, and suffering a cascade of faults if you tried to pretend that was normal operation, you still haven't come close to breaking even on that ratio.
And all of that is going by Geekbench, which already has an oddly synthetic slant towards Intel. In actual benchmarks of real-world activities -- compiling, running Blender, most games, etc. -- the 3950X shoots into the lead (and despite having just two RAM channels, it barely lags the four-channel 10980XE in memory access).
I don't have any dog in this race. This laptop has an i7. My MBP has an i7. My "gaming" PC has an older Zen. But you have to seriously stretch and twist reality to try to make a case for Intel, much less a value proposition.
My overclocking is hardly extreme, it's 4.8GHz. And my cooler costs $119.
Also, I run this configuration as my daily driver with zero faults, and in my real-world use cases, e.g. machine learning, it is 2-5x faster than the 3950X. AVX-512 is still a killer feature for Intel.
It turns out AVX-512 doesn't give you the kind of speedups you think. On those benchmarks, AVX-512 is only around 8% faster than AVX2.
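To make that concrete, here's a minimal sketch (my own illustration, not code from those benchmarks) of the same loop in AVX2 and AVX-512 intrinsics. The wider registers double the arithmetic per iteration, but if the loop is memory-bound the extra width mostly waits on RAM, which is how "only ~8% faster" happens:

    #include <immintrin.h>
    #include <stddef.h>

    /* Elementwise a[i] += b[i]: 8 floats per iteration with 256-bit AVX2,
       16 with 512-bit AVX-512F. Compile with -mavx512f; the scalar tail
       (when n isn't a multiple of the vector width) is omitted for brevity. */

    void add_avx2(float *a, const float *b, size_t n) {
        for (size_t i = 0; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(a + i, _mm256_add_ps(va, vb));
        }
    }

    void add_avx512(float *a, const float *b, size_t n) {
        for (size_t i = 0; i + 16 <= n; i += 16) {
            __m512 va = _mm512_loadu_ps(a + i);
            __m512 vb = _mm512_loadu_ps(b + i);
            _mm512_storeu_ps(a + i, _mm512_add_ps(va, vb));
        }
    }

(Heavy AVX-512 use can also force the core to downclock, which eats further into the on-paper 2x.)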
The reality is that AMD is killing Intel at every price point. My 3950x scores 12444 (no OC) and only costs around $400. Looking it up on Newegg, your 10980XE costs... wait for it... $3000.
Even with your crazy overclocking, that intel cpu is only 35% faster and 650% more expensive. How can you seriously claim that Intel is at all competitive with AMD?
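Just to work the perf-per-dollar out explicitly (the 10980XE score below is an assumption back-derived from the "35% faster" figure, not a measured number):

    #include <stdio.h>

    int main(void) {
        double ryzen_score = 12444.0, ryzen_price = 400.0;  /* 3950x, no OC */
        double intel_score = 12444.0 * 1.35;                /* ~16,800, assumed from "35% faster" */
        double intel_price = 3000.0;                        /* Newegg price quoted above */
        printf("3950X:   %.1f points/$\n", ryzen_score / ryzen_price); /* ~31.1 */
        printf("10980XE: %.1f points/$\n", intel_score / intel_price); /* ~5.6  */
        return 0;
    }

That's roughly a 5.5x perf/$ gap in AMD's favor at those prices.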
And speaking of ML, it's rare to find an expensive ML workload that doesn't involve neural networks. And for neural networks you should be running on a GPU anyway, so I'm not sure how useful this DL Boost tech is.
Maybe not in the official product stack, but it's a 16-core processor that costs $700. It performs the same as or slightly worse than the 3900x in any workload that isn't massively parallel. IMO, when you get to the point in the product stack where $300 more yields the same or worse single-thread performance, you have entered HEDT land.
> it performs the same as or slightly worse than the 3900x
This is getting nowhere. The 3900x is the 12-core version of the 3950x; both are commodity-grade hardware with 'cheaper' motherboards and two memory channels. The 3950x tends to get the higher-quality silicon, and it boosts higher in its own right.
HEDT has more memory channels and more PCIe lanes - this is "the land".
> All they've got left at this point is "I want the best gaming performance and literally nothing else matters".
For the average person maybe that's all they've got left.
But technically they are still ahead in some interesting ways when we look at x86 extensions. In decreasing order of impact:
AVX-512
TSX
SGX
Their next architecture could bring bfloat16 too but I wonder how useful that would be on a CPU instead of on a GPU.
AVX-512 and TSX can give Intel a BIG performance advantage in some niche (but foundational) applications.
Is it a 30% perf/$ advantage though? In most comparisons I've seen Intel lost horrendously even with AVX-512 code compared to EPYC 2 when price was factored in. Doubly so if the cost of electricity and cooling is included, because the watts/core for AMD is much lower than anything Intel has.
Compare against the 3950x and 3960x and you will see it's closer to the 3960x while being quite a bit cheaper and having 6 fewer cores. And that's not entirely AVX-512 code.
Also, power consumption is a little overblown, since all CPUs scale their frequencies; even though mine runs at 5GHz, it will spend most of the day below 2GHz or so.
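Rough intuition for why idle draw collapses (this is the standard CMOS dynamic-power approximation, not a measurement of this particular chip):

    P_dynamic ≈ C · V² · f

Since voltage is lowered along with frequency at idle, power falls much faster than linearly when the chip drops from 5GHz to under 2GHz. It's the sustained all-core loads where the 250W-class figures show up.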
A _lot_ of people (myself included) buy Intel and only Intel. Maybe there isn't a good reason on a strict price/performance basis, but I like the wider choice of motherboards and I just trust their compatibility more.
Also, all our desktop systems here are Xeon because we've had bad experience with ECC support on AMD.
Edit: Honest truthful non-inflammatory comments get you downvoted and mocked. That's some toxic community you've built here, @dang.
Intel's hardware quality seems to have been a bit iffy these last few years, like they've got complacent. Their latest screw-up is that their 2.5G Ethernet chipset that's meant to be included on higher-end Comet Lake motherboards isn't actually spec-compliant: https://wccftech.com/intel-400-series-chipsets-z490-i225-v-n... Apparently 2.5G Ethernet allows the next packet to be sent 5 byte times after the end of the previous one, but they incorrectly followed the gigabit spec and required 8 byte times, causing massive packet loss when receiving data from other hardware that follows the spec strictly. A new, non-broken revision is meant to be shipping sometime, but the workaround for customers stuck with the old one is dropping to gigabit speeds.
I'm with you. I've simply had so many problems with AMD products (CPU and GPU). They seem attractive because of their price points, but they are always a purchase I end up regretting, for a wide variety of reasons. Never again.
Just so you know, I downvoted you because it's against the rules to complain about being downvoted and it's rarely constructive, not because you are using Intel and only Intel.
Exactly what do you expect dang to do about it? You want him to somehow punish your downvoters?
I read a lot of good comments from you, especially on software-related topics. But it would serve you well to grow a thicker skin. Who knows why some posts get inexplicable downvotes. But who cares?
Can someone break down what the most exciting CPU developments have been recently? I stopped paying attention a long time ago, but find myself interested having recently gotten back into 3D animation. I know GPU renderers are gaining steam, but there are still CPU renderers out there, and dipping my toes in the water I'm finding it difficult to parse the advantages of one new chip vs another.
* AMD introduced their Ryzen architecture ~3 years ago. This was the first time in years that their chips were competitive with Intel chips.
* AMD's newfound competitiveness forced Intel to increase core counts in their consumer chips. 6 and 8 core chips are now extremely common in the consumer space.
* AMD has released consumer chips with as many as 16 physical cores (32 threads) in their most recent generation. This is a huge improvement for tasks that scale well to multiple cores like (I presume) 3D animation. Intel has 10 core consumer chips out.
* If you're willing to move up to the HEDT chips you can now buy an AMD Threadripper chip with up to 64 cores (128 threads).
* Intel has struggled to make the next step in manufacturing. Their desktop chips have been stuck at 14nm for half a decade now. AMD contracts their manufacturing out and has been able to take advantage of other foundries' improved processes. Their current chips use a 7nm process node and draw significantly less power than their Intel counterparts. Because of this they require less extreme cooling.
* There are some new instruction sets such as AVX and AVX512 that can improve performance in some software. I'm not an expert there and you'd probably want to do some research on sites related to 3D rendering to see which chip would perform best for your needs.
Huh, how did AMD manage to catch up to Intel? I would've thought that Intel, as the market leader, should've been able to out-innovate their competitors on price. Has Intel just been complacent for the last 3 years?
They could, but why should they? Intel is still at full capacity at the moment, with backorders to fill. So despite technically losing to AMD, it isn't doing much damage (if any) financially.
The CPU developments may or may not be of use to you.
If you're really interested in buying hardware, you can follow a rough process. Web search for the software that sits highest on the two axes of "how much time I spend using it" and "how long things take because of CPU processing", and add in keywords like "benchmark" and "CPU." Read those articles carefully - maybe three different sites/authors, to get different biases ironed out.
I've been an AMD fan since the Athlon XP (though I skipped over the FX / Bulldozer mess and upgraded a Phenom to a Ryzen 2700X) so I can't pretend I'm unbiased. My opinion, though, is that Intel lost their way. They were tick-tocking really well, where they'd improve architecture and then process, and they were dominating AMD. When the Zen cores were released, first as Ryzen desktop chips (up to the 1800X), the clouds began to part, because they were starting to look pretty competitive. The clocks were still not as high, and in specific gaming scenarios (specifically 1080p, where high frame rates are critical), the Intel chips still showed a clear lead. The follow-up Zen+ (up to the 2700X) was a clear evolutionary improvement. (Along the way, Threadripper and Epyc chips were also released on the Zen/Zen+ architecture, and for workstation/server loads, they were competitive.)
Zen 2 was released last year with a move to chiplets, 7nm processes and another 15% increase in IPC. This really moved the needle. In many cases, the Ryzen 3xxx, Threadripper and Epyc chips are clearly better than their Intel counterparts. We also saw Intel cut the price of their HEDT chips in half overnight, so that gives you some insight. Most recently, the first Zen 2 laptop chips were released as Ryzen 4xxx, with U, H and HS suffixes to indicate low-power, high-power, and special chips that perform about on par with H while using less power. They are a huge upgrade to Ryzen mobile, much more power efficient, and battery life is finally reasonable (and in some cases impressive).
On the Intel front, they've really struggled to break out of the cycle of tweaking their (5?) year old 14nm process. They tweak the architecture, but they haven't had a major overhaul in quite some time. Meanwhile, the performance of Intel chips has (arguably) been hurt more than AMD's by the fixes for various security flaws. And this old process/architecture requires Intel to keep raising power so they can boost clocks over 5GHz, which of course results in lots of heat, as well as a reputation for laptops throttling in response to that heat.
There are still cases where it makes sense to buy Intel, but each person has to know their workload well enough to do the research to make that decision. If it's anything but mission critical (every second is worth real money), I don't think you can go wrong by just buying the best AMD chip you can afford. I don't think there are any real bad apples in the bunch, and in most cases you'll get a solidly adequate cooling solution for free and pretty good motherboard options. (Though I find the X570 market pretty unpleasant at the moment.)
$50 either way doesn’t change much especially since it’s often lost in motherboards, or if you need to buy a cooler (When B550 motherboards launch the balance will tilt further towards AMD).
What really does make a difference is thermals/noise. The higher end 9X00 and 10X00 series feel like they have been squeezed very tightly into their thermal envelopes.
A CPU should come with a cooler and with that cooler the CPU should be quiet with some headroom. Reviews of 10900 seem to indicate you should opt for one of the most expensive air coolers or even water cooling, which of course throws the price comparison off completely.
You'd think so, but it's a well-known marketing trick that pricing at $399 and $449 will absolutely guarantee more sales of the lower-priced item than pricing at $402 and $449 would.
True, but for items where you have to buy more things to complete your purchase (such as a fan, or a motherboard) at least customers will always see and compare the total.
$50 less but without cooler isn’t really $50 less.
There is no way I would trust an Intel stock cooler with a $400+ CPU. I might trust an AMD Wraith cooler, but I'd still probably at least drop $40 on a Hyper 212 EVO or NH-L9i/a depending on the case.
Resellers have very low margins on core parts like CPUs, GPUs, memory, and mainboards. They make money elsewhere. That's why you don't really see bullshit pricing like $199.99 when you buy a CPU. For instance, the last CPU I bought was $193.42.
I've been very happy with my 3900X so far. The only suggestion I would make is to upgrade from the cooler that comes in the AMD retail box to something like a Noctua. Since I made that change, it's run much cooler, which has likely extended its life significantly. Just make sure you have room in your case, as the Noctua is pretty darn big.
My workload with it is general .NET development, transcoding movies, and light gaming.
I can attest to Noctua building large coolers. I recently built a PC with an NH-D15, which came with two NF-P12 redux-900 fans for the cooler. It was easily the largest cooler I've ever installed, and also the quietest. I only had it paired with an i7-9700K, but it was amazingly quiet even under max load. I wasn't able to get it to thermally throttle with the 5GHz boost enabled, while running prime95 and MSI Kombustor burn-in mode on the RTX 2060 Super simultaneously to heat up the case. Really happy with how that build turned out.
I wasn't able to install the second fan. It hit the top of my RAM sticks and stuck up so far that the case wouldn't close. Even with just the one, temperatures are much improved.
I had that same issue. I hate using fans in the “pull” orientation rather than “push,” but with this cooler, I had to. I left the space above the RAM free, and put one fan in the middle, between the two heatsink stacks, pulling air from the front toward rear side of case. The second fan was on the rear side of the case, also pulling from the front toward rear side. My case had a rear case fan, which I left in place in the push orientation.
Next time I would use low-profile memory, or just memory with shorter heatsinks, so I can have the fans push. All in all, no thermal problems, so good enough works. It was not a personal build, just work for hire, and not meant to break records - just a stable, high-performance gaming PC.
What would you all recommend for someone who just wants to maximize the performance of his IDE (.NET in Visual Studio)?
I get that going 3900X will give me the fastest compile times, which is cool. But I hit the build button relatively infrequently, and anyway I'm usually switching to reddit if the build is going to take longer than a few seconds, so I'm actually more interested in minimizing the latency of all the moment-to-moment operations like Intellisense, refactorings, highlighting, code folding, etc.
I suspect that some of those operations are mostly single-threaded, but even if there is some degree of parallelism going on, surely this type of "bursty" workload is where Intel can still register an advantage?
So, in other words, do these operations more closely resemble Benchmark A: "Blender render time" or Benchmark B: "Max FPS in Tomb Raider" (and why don't we talk more about the difficulty of making these sorts of determinations)?
Visual Studio is slow because it's an unoptimized piece of software (it misses CPU cache often, does frequent disk seeks, etc.).
The best way to get performance from such software is:
1) make sure it (and the project) are on a fast SSD. NVMe is better but SATA is good.
2) get a good CPU for the workload. For pure IDE user experience it won't make much of a difference as long as it's a recent, high-end-enough CPU (low RAM latency, high enough clock frequency, enough cores, ...)
I have come to accept that I will be replacing SSDs periodically. Either because they're worn out, or because a newer/faster/better technology is available.
A RAM disk is an interesting alternative for sure. The last time I used one was when I did OS/2 development using IBM VisualAge IDE. Which sorely needed it...
Windows doesn't have a driver for one out of the box, but there is a freeware one named ImDisk that looks widely used.
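For example (a sketch using ImDisk's documented command line; the size and drive letter are placeholders, so check the docs for your version):

    imdisk -a -s 8G -m R: -p "/fs:ntfs /q /y"

Here -a attaches a new virtual disk, -s sets its size, -m assigns the mount point, and -p passes parameters through to format. Point the project or the IDE's temp/cache directories at R:, and remember the contents are volatile - gone on reboot or power loss.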
For compute workloads, I highly recommend the AMD CPUs. In this case, doing multiple tasks at once - like switching between reddit and a build - qualifies :-)
For these types of workloads, I personally find that the synthetic tests (like Geekbench) are pretty good. In addition, Linus Tech Tips does a good job of covering "productivity" workloads, in which AMD usually massacres Intel. Finally, if you use VMs or Docker, the added core count is critical.
I’ve been looking at Ryzen CPUs for a while, and although I don’t have the budget for a top-tier box right now, this is excellent news. Drop one of these and an RTX into a good case and you have a workstation that will blast most Macs out of the water (if not all).
People that buy a Mac are looking for the fastest computer that will run OSX.
And if you want to do a Hackintosh then it really doesn't work that well with Ryzen, e.g. issues with Creative Suite, Docker, etc., as well as poorer performance due to the lack of AVX-512.
I went with a 3700X and a 5700 XT and couldn't be happier. Compilation times have dropped significantly from my old box, which suffered heavily from the Meltdown/Spectre mitigations.
It also handles my occasional gaming sessions nicely :)
I don't know about that. The i5 6 core chips seem like pretty good value. I would probably go with an r5 3600, but it isn't really as clear cut anymore.
Checking local prices, the 3600X is 90 AUD cheaper than the 10600K. Factor in a ~60 AUD cooler for the 10600K, and the 6-core is competing with the 3700X on price, not the 3600X. In terms of value it still seems pretty clear-cut to me.
The article is comparing the upcoming Intel Core i9-10900K and Core i7-10700K chips, and how their price and performance compares to the AMD Ryzen 9 3900X.
That being said, it seems like, with the exception of some advantages in low resolution gaming, the less expensive Ryzen 5 3600 is still a better option than the (so far) unavailable Core i5 10600K.
Not really. The Zen 2 APUs for laptops, and the consoles coming out later this year, have been showing up in benchmarks online and have been blasting the Intel desktop offerings. I'm actually kind of excited about what we'll see from the actual desktop SKUs.
Well my point was that the 10600K made a choice less clear when it comes to the current offerings from AMD. I want to upgrade my gaming rig and was basically looking at either the 3600X or 3700X, but it looks like either CPU is easily beaten in games by the 10600K for a very similar price. If I waited a little bit for the 10600KF then the price difference shrinks even further. That's why I'm saying I'm waiting for Zen 3 - I'm sure it will absolutely dominate on the desktop, so it seems like it's better to wait a little bit rather than buy now.
There might be almost no reason to buy an Intel CPU, but interestingly, they do have the advantage in very small desktops because they have powerful CPUs with integrated graphics. Cooling without throttling is probably tricky if you go super small, but AMD's APUs are in need of an update.
AMD APUs are not that bad anymore; I am certainly considering a 3000G for my next cheap build. Obviously it would be nice to have a 15W Zen 3 APU, but one can only dream.
I have one and it works well, but AMD doesn't put their good CPU cores in their APUs yet.
They aren't Zen 2, don't have more than 4 cores, and aren't on 7nm, all of which would make a huge difference. Putting the GPU on the same die as the CPU makes them more difficult to cool, especially since they don't have quite as high-quality packaging as the better CPUs. Getting smooth 4K realistically means messing with BIOS settings to get them to throttle less and run hotter.
That is true, but considering how terrible the AMD APUs were before Ryzen, I'm still impressed by the performance leap (the current dual-core APUs with HT perform better in multicore workloads than the previous ones with four cores). I'm still waiting for them to release something with a low enough thermal envelope to go into a NUC-like box while keeping decent performance.
There are "G" suffixed CPUs with a GPU, but limited to only a few parts and are currently one generation behind. The reality is said parts are in low demand as most consumers buying CPUs will also be buying a GPU separately.
Depends on the market. Enthusiasts/gamers/etc. are going to want a discrete GPU. For a random desktop / family (without much gaming) / corporate usage, an APU is not only fine, it doesn't even make much sense to use anything else. Laptops are fashionable in that area too, and I don't know any hard figures, but I would not be surprised if most desktop client x86 CPUs shipped on earth are actually of the APU variety and used without discrete graphics. In which case AMD APUs are actually really good: not many people want an unbalanced system, and they are properly balanced. Beefy CPUs used with their integrated graphics are for specialized tasks (e.g. non-graphics-related dev).
But AMD CPUs are so much cheaper and perform so much better that you could buy an old-gen graphics card and it would still have better price/performance than Intel.
Yeah. There have been increasingly plausible rumours that the 4xxxG range is coming soon, but right now AMD seem to be focused on the much larger laptop market (which also benefits a lot more from the really good power efficiency of their current chips).
I bought a 2400G last year, which I've been pretty happy with - but I'm very much looking forward to dropping a 4xxxG into the same socket later this year!
I was compiling the new and experimental Emacs native-compilation package and was warned that the compilation could take over 7 hours on a "fast computer." I was pleasantly surprised when it finished in under 20 minutes on my 3900x. That thing is insane!
In the first graphic, if you click the right arrow, you see single-core benchmarks. Even a 5.0Ghz desktop chip from Intel is behind the 4.6/4.7Ghz Ryzen chips.
Yeah, really. If they can pull something amazing out of an old CPU again, that'd really be something. Perhaps a 128-core Penryn with modern accelerators bolted on? :p