AMD Will Build 64-bit ARM based Opteron CPUs for Servers, Production in 2014 (anandtech.com)
124 points by skept on Oct 29, 2012 | 35 comments



AMD is no stranger to using busses and sockets that are compatible with "other" hardware.

The original Athlon was bus-compatible with DEC Alpha chips - some logic boards could take either with a firmware upgrade.

Also, there have been FPGAs that slot into Opteron logic boards (Celoxica made one around 2006), and various other chips that connect directly to the HyperTransport bus as accelerators.

It remains to be seen what they'll do with this. Will it be a Xeon Phi competitor (lots of cores, high thermal footprint) or something aimed at lower-end uses?


Finally, AMD is embracing ARM. It just might be the only thing that saves them, but only if their execution is flawless; Nvidia and others already have years of a head start working with ARM chips.


They have SeaMicro: http://www.seamicro.com/ And given that Nvidia has never tried to do anything on the server side, it may be that AMD is already ahead of many others.


Nvidia ships quite a bit of Tesla hardware for GPGPU data center use; Amazon just bought a massive shipment of these racks for use through AWS.[1]

What's notable about Nvidia's Tesla offerings is that they sit in a separate 1-2U chassis on top of the compute box. The space and power costs of operating Nvidia GPGPUs in a datacenter are nontrivial.

If AMD ships a solid ARM product with some good on-die GPGPU components, that might compete with Nvidia, but otherwise the two are in different spaces even within the server world.

[1] http://vr-zone.com/articles/amazon-orders-more-than-10-000-n...
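Whichever way the GPU is attached - on-die or on a discrete board in its own chassis - the host code typically discovers and targets it through a runtime like OpenCL, which is one reason a decent on-die GPGPU part could slot into existing workflows. A minimal C sketch of that discovery step (vendor-neutral; the integrated-vs-discrete guess via CL_DEVICE_HOST_UNIFIED_MEMORY is only a rough heuristic, not anyone's official detection method):

    /* probe.c - list GPU devices and guess whether each is on-die or discrete.
       Build with something like: cc probe.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);

        for (cl_uint p = 0; p < nplat; p++) {
            cl_device_id devs[16];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                               16, devs, &ndev) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < ndev; d++) {
                char name[256];
                cl_bool unified = CL_FALSE;
                clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                                sizeof name, name, NULL);
                /* Shared host memory is a rough hint of an on-die/APU-style GPU. */
                clGetDeviceInfo(devs[d], CL_DEVICE_HOST_UNIFIED_MEMORY,
                                sizeof unified, &unified, NULL);
                printf("%s: %s\n", name,
                       unified ? "integrated (shared memory)" : "discrete board");
            }
        }
        return 0;
    }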


Tesla boards haven't shipped in a separate 1U form factor for a few years; they're all passively cooled PCIe boards inside an x86 server chassis now.


Actually, both setups are possible. Sometimes vendors put the Tesla PCIe cards in a separate chassis and link that chassis to the host via a PCIe cable, e.g.:

http://www.dell.com/us/business/p/poweredge-c410x/pd


I was thinking a great deal about SeaMicro the moment I read this announcement.


So you think that NVidia's experience with consumer ARM chipsets (for tablets) is more important than AMD's experience in the server space?

Not so sure about that.


Shut up and take my money! Give me 64+ cores at an affordable price and my next build will keep you in business, AMD.


This is exactly what I like about AMD's strategy--they say more cores is pretty much all that matters and, you know what? I think that's true.


In today's marketplace, there's very little about the ARM instruction set that makes it better suited for low power applications. Yes, it is a saner instruction set than x86, requiring less silicon to convert into uOPs, but the difference is trivial in 2012.

The difference between x86 and ARM on the power/performance curve is almost purely due to design choices and trade-offs. So why not create a new low-power x86 core instead of a new ARM core?

The only way this makes sense to me is for this to be a stepping stone into the mobile market. The mobile market is definitely stepping up the power/performance curve, and AMD's experience with GPUs may be a distinct advantage for them in the mobile market in the future.


> In today's marketplace, there's very little about the ARM instruction set that makes it better suited for low power applications.

So it's just a coincidence that ARM powers 95%+ of smartphones? I think not.

Given Intel's advantage in fabs and process technology, I think it's all the more striking that to date they have failed to develop chips that can effectively compete with ARM in the mobile market.

x86 is an ugly and inefficient ISA compared to ARM but it didn't matter as long as users plugged their computers into the wall.


"So it's just a coincidence that ARM powers 95%+ of smartphones? I think not."

ARM designs have been optimized for low power. x86 designs have been optimized for high speed. It has little to do with the architecture and lots to do with the design.

Nobody has ever tried to design a sub-1-watt x86 chip. Nobody has ever tried to design a 100-watt ARM chip.

Only very recently have we had anything that's close to comparable. Medfield has a similar power rating to high performance ARM designs, and similar performance.


So Intel just hasn't gotten around to designing a low-power x86 chip that can compete with ARM in mobile devices? The more likely explanation is that it can't be done, because ARM is a better-designed and therefore intrinsically more power-efficient ISA than x86.

In any case, the more pressing problem for Intel is that in the mobile space consumers don't know or care what kinds of chips are in their phones, so even if Intel could design an x86 chip competitive with ARM, consumers are unwilling to pay the brand premium Intel is used to commanding in the desktop market.


Intel recently released its "Medfield" SoC. It beats ARM cores at similar power envelopes, and is currently available in a few phones.

Of course, it's a single-core processor as opposed to the dual- or quad-core ARM processors it's up against, but the point is that they're getting quite close.


> Nobody has ever tried to design a sub 1 watt x86 design.

Transmeta tried and succeeded.


Intel tried low-power x86 with the Atom; it didn't really go anywhere. It's true that scaling up an ARM design will be equally problematic. But the point is that they use ARM because they don't want to push the power envelope.


Atom didn't go anywhere because it was a 10W processor benchmarked against 100W processors when running performance tests, but compared to 1W processors when doing battery tests.


It's not really a fair comparison. Atom was an attempt to hug the low end of the computing market with a low-power CPU for netbooks. ARM chips, on the other hand, are smartphone/tablet systems-on-a-chip, which is a completely separate performance/power-usage realm. Different design goals, different tradeoffs.

However, Intel found out that the power/performance tradeoffs of the original Atom were not what the market wanted, and they've continued to evolve the design. Today there is the "Medfield" Atom system-on-a-chip, which is already making its way into smartphones and already giving ARM a run for its money. Given Intel's history and their initial level of success with this first-generation SoC design, it's definitely far too soon to write off Atom and bask in ARM triumphalism.


Note that this announcement is based on a processor license, not an architecture license - AMD are using an ARM design off the shelf, not designing their own new ARM core.


Yup, I had read the article the other way around. Designing their own ARM core didn't make much sense to me. Using someone else's does.


Considering AMD is now a fabless chip design shop, not doing any design work doesn't make sense to me. What is their value add? They may as well have ordered a shipment of completed processors from Nvidia.


In this case, the value add is the processor interconnect fabric. In other cases, it will be their GPU.


Nvidia's Project Denver [1] is very similar: a 64-bit ARM-based CPU for servers that they started working on a few years ago.

[1] http://en.wikipedia.org/wiki/Project_Denver

Edit: It seems the announcement from AMD is in response to this announcement from Nvidia, the 2014 date also matches: http://www.xbitlabs.com/news/cpu/display/20120921010327_Nvid...


I love this announcement, if for no other reason than that I've been predicting a large influx of the ARM architecture into the server market. It makes a lot of sense. More importantly, I believe it'll be large multi-core SoC clusters. This is a very logical transition. While a lot of our software doesn't fully utilize multiple processors, our OSes are becoming much better at scheduling and are almost eliminating the impact of a context switch.
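To make that concrete, here's a rough Linux sketch of the "lots of small cores" model described above: one worker pinned per core, so the scheduler rarely has to migrate anything and context switches stay cheap. The core count and pinning policy are purely illustrative, not anything AMD or SeaMicro has announced.

    /* workers.c - one pinned worker thread per core (Linux-specific).
       Build with: cc workers.c -pthread */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_CORES 256

    static void *worker(void *arg) {
        long id = (long)arg;
        /* ... per-core work, e.g. serving one shard of connections ... */
        printf("worker %ld on CPU %d\n", id, sched_getcpu());
        return NULL;
    }

    int main(void) {
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);   /* cores the OS can see */
        if (ncpu > MAX_CORES) ncpu = MAX_CORES;
        pthread_t threads[MAX_CORES];

        for (long i = 0; i < ncpu; i++) {
            /* Pin thread i to core i before it starts: avoids migrations
               and keeps per-core caches warm on a flat many-core part. */
            cpu_set_t set;
            CPU_ZERO(&set);
            CPU_SET((int)i, &set);

            pthread_attr_t attr;
            pthread_attr_init(&attr);
            pthread_attr_setaffinity_np(&attr, sizeof set, &set);
            pthread_create(&threads[i], &attr, worker, (void *)i);
            pthread_attr_destroy(&attr);
        }
        for (long i = 0; i < ncpu; i++)
            pthread_join(threads[i], NULL);
        return 0;
    }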


I don't see how AMD can do ARM better. AMD's strengths compared to Intel are its APU and the number of cores it can cram onto an x86 die.

I think they've confused the market, servers, with the technology they actually have - x86.

Their biggest asset is the existing infrastructure and people to build x86 - there are 2 companies that can do this: Intel and AMD.


The problem is that there will be a market for ARM servers, for the simple reason that power and core density are becoming a bigger and bigger part of total hosting cost, and ARM does low power well. AMD would be ignoring that at their peril. They're much more vulnerable to this than Intel, since they're currently not the preferred high-end choice for most people.


Is ARM still lower power when dialed up to perform?

I think there are some interesting possibilities, especially with the bursty nature of web traffic but there is also still a noticeable performance gap between ARM and x86.


AMD has some strong folks in low-power design, something Intel hasn't done well in the datacenter. Many companies bought lower-power Opteron parts for disk-bound or generally lower-performance applications and saved a fair amount of money over time.

Also, AMD is still working on leveraging its strengths from ATI, which might be useful depending on their market targeting.


Those are their strengths compared to Intel, not compared to the other ARM manufacturers. Intel keeps beating AMD, but part of the reason is that Intel's process is really good. The fact that they reach a given process node much faster than everyone else is important, but even for a given node Intel's, e.g., drive currents tend to be much better than anybody else's. Against the rest of the ARM world AMD won't have that humongous handicap, and I expect that they'll be able to compete much better at things like straight up performance.


Now this is some interesting stuff. I wonder if they have any plans to make a dual instruction-set processor that can run both x86 and ARM-based operating systems... That's the kind of crazy design that just might work ;)

Aside: I wonder if it's possible to have one processor core with an ARM instruction set and another with x86 - obviously reading from different [segmented] memory locations, albeit simultaneously. I just wonder, since the article mentions that the new Opteron cores are designed by ARM, while the rest of the processor will follow AMD's design.


It's interesting that they are actually a processor licensee, as the article notes, and not an architecture licensee - in other words, they aren't designing their own core around the architecture, but instead using an ARM design. With Bulldozer, AMD really started utilizing the many fab facilities that they have around the world, and this should continue that.


Interesting how they position it as one third of an ARM/x64/GPU strategy. The GPU is still the dark horse if general-purpose GPU programming gets serious. GPU plus ARM works once sequential performance is not the selling point. An ARM instruction set on a GPU could work too.


A potent sign of times to come...


A comment empty of content...




