Intel Core with Radeon RX Vega M Graphics Launched: HP, Dell, and Intel NUC (anandtech.com)
71 points by smcleod on Jan 8, 2018 | 81 comments


OP added "ATI" themselves rather than keeping the original title. What a shame.

Anyway:

> It provides an additional six displays up to 4K with the Intel HD graphics that has three, giving a total of nine outputs. The Radeon Graphics supports DisplayPort 1.4 with HDR and HDMI 2.0b with HDR10 support, along with FreeSync/FreeSync2. As a result, when the graphics output changes from Intel HD Graphics to Radeon graphics, users will have access to FreeSync, as well as enough displays to shake a stick at (if the device has all the outputs).

Yes, if those NUCs/HTPCs provide all of those outputs; otherwise it's just marketing words. In reality, I'd guess only the top-end models have more than one DisplayPort.


The already-announced Hades Canyon NUC has 2x DisplayPort, 2x HDMI 2.0a and 2x Thunderbolt 3. That amounts to six ports capable of driving a 4K display. Thunderbolt 3 ports can drive two 4K displays at 60Hz if the controller supports it.

I suspect that other products using this processor will be less generous in terms of connectivity, simply because it's total overkill for the overwhelming majority of users.

https://www.anandtech.com/show/12226/intels-hades-canyon-nuc...
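Rough numbers, for reference (my assumptions: 8-bit RGB, a ~533 MHz CVT-R2 pixel clock for 3840x2160@60, and DP 1.2/HBR2 links, with TB3 tunneling two DP 1.2 streams):

    /* back_of_envelope.c: why one Thunderbolt 3 port has room for two 4K60 streams */
    #include <stdio.h>

    int main(void) {
        double pixel_clock_hz = 533e6;   /* assumed CVT-R2 timing for 3840x2160@60 */
        double bits_per_pixel = 24.0;    /* 8-bit RGB, no HDR */
        double stream_gbps = pixel_clock_hz * bits_per_pixel / 1e9;

        /* DP 1.2 HBR2: 4 lanes x 5.4 Gbit/s, 8b/10b coding leaves 80% for payload */
        double hbr2_payload_gbps = 4 * 5.4 * 0.8;

        printf("one 4K60 stream : %.1f Gbit/s\n", stream_gbps);               /* ~12.8 */
        printf("one HBR2 link   : %.1f Gbit/s payload\n", hbr2_payload_gbps); /* ~17.3 */
        printf("TB3 tunnels two such DP streams, hence two 4K60 displays per port\n");
        return 0;
    }

So each tunneled DP stream has headroom for one 4K60 signal, and the port as a whole for two.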


I don't know, but I've heard that DP can daisy-chain, meaning multiple displays on one output and fewer cables to route.


Yes, that's called Multi-Stream Transport (MST). In general, I'd rather have multiple ports on my source device than daisy-chain, which requires MST support not only from the source but also from the displays; most displays do not have a DP output port for daisy-chaining.

https://en.wikipedia.org/wiki/DisplayPort#Multiple_displays_...


This way does emphasize the incongruity of Intel and AMD in one product. I like it.

However, I read the article hoping to figure out why Intel is doing this, and no luck there. Are they giving up on the Intel graphics?


No, they aren't. They'll fuse AMD's Vega core with their CPU (which keeps its own iGPU) to make this. The AMD GPU will be used for heavy tasks like gaming and rendering, while the Intel iGPU will handle lower-power tasks like display output and H.264/H.265 encode/decode.


"fuse" is a little strong. There's a VEGA GPU in a multi-chip module, connected with a 8x PCIe lane. It's not like Intel licensed the GPU for integration into their own silicon.

We've already seen laptops with Intel/AMD hybrid graphics; this just moves it from the motherboard to the other side of the socket without actually giving you the high-speed interconnect that a GPU-on-CPU design gets.


From the article:

Each of the new parts is a quad-core design using HyperThreading, with Intel’s HD 630 GT2 graphics as the traditional ‘integrated’ low power graphics (iGPU) for video playback and QuickSync. This is connected via eight PCIe 3.0 lanes to the ‘package’ graphics (pGPU) chip, the Radeon RX Vega M, leaving 8 PCIe 3.0 lanes from the CPU to use for other functionality (GPU, FPGA, RAID controller, Thunderbolt 3, 10 Gigabit Ethernet).

So Intel isn't throwing in the towel on the iGPU, they're just augmenting it with a discrete GPU in an all-in-one package for system builders.


Reading the title, I'm curious why Intel went with a graphics core that's been dead for a decade.


The Intel CPU still has an iGPU.


Sorry, force of habit I guess - mods feel free to correct if you notice this as I can’t edit the title after posting.


Editorializing in the title like that is pretty tacky, especially as there are mobile Xeons with ECC already.


'but still no ECC' Are these CPUs going to be useless without ECC memory?


These are designed for mobile computers, notebooks in particular. You don't need ECC in notebooks (consumer hardware). So no, they are not useless.


So here is the odd thing: servers are (often) managed by Trained Professionals, and have backups and failovers.

Personal computers are, well, not like that. And yet they often hold people's important creative work, correspondence, etc. ECC is probably more useful there; it's just harder to make the benefit visible to the customer.

(That doesn't mean customers don't care about reliability. It's just that they have no sane way of distinguishing a product that really is reliable from one whose advertising lies about being reliable.)


"Only servers need ECC" is a big time meme that's been reinforced on computer forums for many years. It's near impossible to snap people out of it.


Servers accumulate bitflips over long periods of time because they run 24/7. If you reboot your computer every few days to clear the memory then it's not going to be a significant problem.


Running 24/7 is not that relevant. It's the number of writes/refreshes on a given bit of information that matters. A server running 24/7 and a laptop, both reading/converting/writing image files, have the same chance of corrupting each individual image. The server has a higher chance of corrupting its cached executables, though, since those live in memory longer.


Hehe. Sounds like a great approach to computing. Let's keep doing that!


The need for ECC has nothing to do with maintenance or proper administration.

You really don't need ECC as a normal user; most of the time bit flips won't really hurt you. However, if, for example, you run long-running tasks like 3D rendering or physics simulations, you may want ECC just to be sure your OS won't be killed by a bit flip in the wrong section. Your photo gallery or music collection however will most likely never be hurt by something like that, so still, consumers don't need to waste their money on overpriced ECC memory.


> Your photo gallery or music collection

On the contrary, these are generally highly compressed. A single bit flip has major consequences.

My personal photo and music collections are what I care most about bit flips for, as I intend to keep them for a lifetime.
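To make that concrete, here's a minimal sketch (assuming zlib is available; build with cc flip.c -lz). One flipped bit in a compressed stream and the whole buffer typically fails to decompress at all, rather than losing a single pixel or sample:

    #include <stdio.h>
    #include <string.h>
    #include <zlib.h>

    int main(void) {
        unsigned char raw[4096];
        for (size_t i = 0; i < sizeof raw; i++)       /* some compressible data */
            raw[i] = (unsigned char)(i / 16);

        unsigned char packed[8192];
        uLongf packed_len = sizeof packed;
        if (compress(packed, &packed_len, raw, sizeof raw) != Z_OK)
            return 1;

        packed[packed_len / 2] ^= 0x04;               /* a single flipped bit, mid-stream */

        unsigned char out[4096];
        uLongf out_len = sizeof out;
        int rc = uncompress(out, &out_len, packed, packed_len);
        printf("uncompress rc=%d (Z_OK=%d, Z_DATA_ERROR=%d)\n", rc, Z_OK, Z_DATA_ERROR);
        if (rc == Z_OK)
            printf("output differs from original: %s\n",
                   memcmp(out, raw, sizeof raw) ? "yes" : "no");
        return 0;
    }

Formats without an integrity check just hand you garbage from the flipped bit onward instead of an error.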

I suspect that the target market for this product would be mobile gaming and therefore ECC would be a negative for the system as a whole.


> Your photo gallery or music collection however will most likely never be hurt by something like that

Citation needed that people don't care about their photos and music. Human error and storage failures are more likely sources of loss but we're a big industry and can make progress on more than one thing at a time.

> consumers don't need to waste their money on overpriced ECC memory

If ECC went mainstream, prices would drop as volume increased.


Considering Rowhammer, you do need ECC everywhere.


Except that ECC is insufficient because some Rowhammer attacks can flip more than two bits per memory word. The proper mitigation seems to be TRR (target row refresh).
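A toy illustration of that limit (real ECC DIMMs use a (72,64) SECDED code over 64-bit words, not this Hamming(7,4) sketch, but the principle is the same): one flipped bit gets corrected, while three flips can land on another valid codeword and sail through with a clean syndrome.

    #include <stdio.h>

    /* Encode 4 data bits into a 7-bit Hamming codeword (bit 0 = position 1). */
    static unsigned enc(unsigned d) {
        unsigned d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
        unsigned p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
        unsigned p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
        unsigned p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
        return p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) | (d2 << 4) | (d3 << 5) | (d4 << 6);
    }

    /* Syndrome: 0 means "looks clean", otherwise the 1-based bit position to flip back. */
    static unsigned syn(unsigned c) {
        unsigned s1 = ((c >> 0) ^ (c >> 2) ^ (c >> 4) ^ (c >> 6)) & 1;
        unsigned s2 = ((c >> 1) ^ (c >> 2) ^ (c >> 5) ^ (c >> 6)) & 1;
        unsigned s3 = ((c >> 3) ^ (c >> 4) ^ (c >> 5) ^ (c >> 6)) & 1;
        return s1 | (s2 << 1) | (s3 << 2);
    }

    static unsigned data_of(unsigned c) {   /* pull the 4 data bits back out */
        return ((c >> 2) & 1) | (((c >> 4) & 1) << 1) |
               (((c >> 5) & 1) << 2) | (((c >> 6) & 1) << 3);
    }

    int main(void) {
        unsigned c = enc(0xA);

        unsigned one = c ^ (1u << 4);   /* a single flip: the syndrome points right at it */
        printf("1 flip : syndrome=%u, corrected data=0x%X\n",
               syn(one), data_of(one ^ (1u << (syn(one) - 1))));

        unsigned three = c ^ 0x07;      /* three flips forming another valid codeword */
        printf("3 flips: syndrome=%u, decoder happily returns data=0x%X (was 0xA)\n",
               syn(three), data_of(three));
        return 0;
    }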


I could imagine an OEM like Apple or Dell looking at these to free up space in the board layout of their notebooks or small-form-factor computers.

In such a scenario ECC memory is not really a high priority, no?

Looking at the PCI Express lanes available, those other 8 CPU lanes look ripe for a Thunderbolt 3 controller, with the rest of the peripherals hanging off the PCH (or does that make no sense at all?)


Not sure why Intel doesn't just release a full system-in-package for a basic laptop, with 16GB or 32GB of main memory and 256GB or 512GB of NVMe flash.

I think Intel still makes flash chips, and they originally started out as a DRAM manufacturer as well.



Because that would mean discarding an entire package if just one component failed or is damaged during production.

It would also mean that manufacturers would have to order different parts for two systems that are identical except for memory or storage. In your example, with 16 or 32GB of RAM and 256 or 512GB of storage, you'd end up having to order four different SKUs from Intel, and Intel would have to manufacture four different SKUs, plus one without memory and storage.

Logistically I don't think it makes sense to add storage and memory to the package, it just adds inflexibility and more SKUs.


Yeah, it would be harder to make and sell "handicapped" configurations to users for high prices, forcing them to upgrade prematurely... hence everybody would milk less money from end-users. Also, some still sell 5400 RPM HDDs (!!!) in some versions of current-gen machines, and some people actually buy them!

All systems you see today with 6-8GB of RAM and <500GB of SSD storage will practically force their users to upgrade within 3 years, whereas if you sell the average home user a system with >=16GB of RAM and >500GB of SSD, they could keep using it for up to 6 years without it ever feeling underpowered... so very bad for business!


.. but it costs twice as much?


They make something similar. An Intel NUC is basically a full laptop system, minus the laptop-specific bits (no screen, keyboard, battery, etc.).

You can buy it barebones, but you can also buy some of them with everything pre-installed (including RAM, flash storage, and a licensed Windows OS).


While Intel's main business around the 1980s was certainly DRAM, their first product was SRAM.


That's a pretty massive "package", more like a small motherboard. You'd want to spread them out a bit for thermal reasons.


I think all the x86 complexity prevents them from doing that


Why are AMD helping to sell a competitor's product? The article says it's strictly business, but I would have thought the quick buck made today by selling graphics chips to Intel would be outweighed by the long term benefit (e.g. growth in the combined CPU/GPU market) this provides to Intel.


Maybe the likes of Apple said to Intel: "Look, if you don't find a way to supply GREAT integrated graphics, we're going to do our own laptop chips".

So Intel goes cap-in-hand to AMD for their tech.


If I remember the latest numbers right, Apple's laptop sales share is around 10% at best, so if anything like that story happened, it would not be them but HP (25%), Lenovo (20%), or Dell (15%).


Worth noting that Apple’s share of high-end Intel chips is much higher. Thus, Apple’s share of Intel’s consumer revenue is probably substantially north of 10%.


...do they have in-house CPU and GPU designs?


Are you suggesting that companies like HP or Lenovo couldn't get into producing ARM chips if they wanted to?

Intel isn't threatened by ARM yet in a performance laptop.

Of course one could reply that laptops don't need that much power anymore to do what most users use them for today (hence people even here using a tablet as their main tool), but that's another issue entirely, "much cheaper and good enough", and one where Apple isn't Intel's main opponent either (I would even wager that there, Intel is its own enemy, given how badly they've handled the Atom brand and its performance to protect margins).


Those companies could conceivably do a lot of things, but that's a lot different to having in-house CPU and GPU design teams already staffed and firing on all cylinders for years.

Not sure I get you re Apple being an opponent to Intel for "cheap" parts. It goes without saying that Apple would not sell their own stuff to anyone else. And for their own use, I'm sure the hardware would be very cheap to manufacture.


Doesn't change what I said, though. Unless you expect Apple to take over the laptop market (if they didn't get above 10% during the last pro-Apple decade, they're not going to do so now, especially given their current issues in the field), or to start selling their custom chips to third parties (they've never ever done so, unless I'm mistaken) AND those chips then taking over the market, they're not the threat Intel has to worry about.

Intel has to worry about Nvidia taking over graphics and machine learning, AMD punching above its weight in workstations with EPYC and Threadripper, and ARM being cheap, good enough in all the low-power segments, and growing.

A middle-of-the-line, ARM-based, performance-oriented laptop chip is no threat to the i5/i7, except indeed for the risk of losing their place in Macs.


I just don't see what other laptop manufacturers are doing as particularly relevant to Apple's decision making.

If they ditched Intel, that would very publicly tear a strip off Intel, regardless of absolute market share. That's not something Intel wants, and it's what I contend Intel has gone to unusual lengths, integrating AMD graphics, to avoid.


This is a joint venture against Nvidia basically. ML is the next big thing, and Nvidia has smartly positioned GPUs as the best way to do ML. AMD needs to claw back marketshare from Nvidia, and partnering with Intel is a quick way to do it. Intel also needs to keep Nvidia in check.

Now they should co-develop and promote a well designed open source ML framework, something that can compete with CUDA. AMD isn't up to the task, but Intel is.


Perhaps both sides don't want Nvidia to dominate the graphics market?


Then why wouldn't AMD just launch a combined chip with their own technology for both CPU and GPU?



They do: AMD APUs. They're relatively popular and power the current-generation PlayStation and Xbox.


AMD isn't able to compete well in the SFF desktop/heavy laptop market. Ryzen is great but they don't have any mobile offerings.


Well, a big customer of both, like Apple, could make them play ball.


Hopefully the new Mac Minis use these. If the Mac Mini line ever does get updated, that is. :)


The new AMD APUs don't have ECC support either; I'm hopeful future versions will.


Why would they disable ECC support for the APU version?


Imagine if Intel ended up buying AMD... That'd be weird (and bad).


In the UK that sort of merger would almost certainly be blocked by the Competition Commission: I'd be interested to hear what the situation is in the US, where Intel and AMD are based.


FTC/SEC would likely block such a merger... even in the current administration it would be nearly impossible to get done.


Thanks - reassuring to know.


> In the UK that sort of merger would almost certainly be blocked by the Competition Commission

It would be interesting to see what would happen if Intel ignored such a judgement. What would the UK do, ban the sale of all laptops/desktops?


Seems like a combination tailor-made for Apple, yet nothing I have read points to Apple actually using it; I have only seen Dell and HP mentioned.


One question: free drivers?


ATI was bought by AMD in 2006.


AMD, not ATI.


Whoops - s/ATI/AMD


I wonder what this will do for $INTC?


I also wonder if this CPU is free from Meltdown.


How ironic that AMD itself doesn't have an integrated full performance GPU/CPU package (cut-down APUs notwithstanding).

Also who negotiated the deal to let Intel slap only its logo on the packaging, even though the AMD die is clearly larger?


It's a “semi-custom” deal, like with the AMD silicon in PS4 and Xbox One, so the company getting it made for them (Intel) gets their name on the chip.


> Also who negotiated the deal to let Intel slap only its logo on the packaging, even though the AMD die is clearly larger?

Well, take a wild guess at which way the money is going to flow.


Are Meltdown and Spectre pre-installed or do I have to download them via the Intel ME? \s

I don't know how they have the audacity to release new CPUs while all their products just went to shit...


Yeah, because all they have to do is just tweak some code and re-release... /s It takes 2-4 years to design a chip. If Intel doesn't release the new chip they started designing 2-4 years ago, they waste that development cost, fall behind AMD, and you complain that they are behind the times. If they do release the chip, you complain they haven't fixed the bug you just learned about a week ago. It took a year for everyone to fix their software, and you want Intel to magically come out with new hardware? Sheesh.

I think all the "Intel sucks" comments should have a disclaimer "Full disclosure: I've never designed hardware in my life, I have no idea how to run a business, I really just hate Intel and this is an excuse to vent my hatred."


A valid perspective, although I'm not going to lie: I don't particularly appreciate the extra (non-value-adding) work that's been dumped into my team's backlog in terms of figuring out and implementing mitigations, ahead of applying patches, when we already have a dozen projects on the go, with about half of them scheduled to deliver in the next couple of months. _Thankfully_ we're running non-virtualised on dedicated hardware, due to performance and cost considerations, so the risks are somewhat reduced for us.

I also admit to taking a fairly dim view of Intel's PR around the issues, and the suspicion that Meltdown, in particular, exists because they - specifically - have played it a bit fast and loose with their processor designs in order to gain a performance edge, and perhaps to a greater extent than AMD and ARM.

Granted, this will take years for a final resolution in hardware though.


Nope. This bug is far from new; Intel has known about this since 2012 [0]. I don't expect them to release a fix within a year or anything, and I know that CPU design is far from easy or doable within weeks. But how they handled the whole matter shows that they don't care. Intel sucks, that's true. They should admit that they produced shit and start talking to their customers and show that they care.

Oh and also, who in their right mind would buy a CPU with those bugs?

[0]: https://twitter.com/TheSimha/status/949361495468642304


> They should admit that they produced shit and start talking to their customers and show that they care.

Someone correct me if I'm wrong, but isn't almost every modern CPU vulnerable in some way to Spectre? If so, isn't every modern CPU manufacturer "producing shit" in your eyes?


Most modern AMD CPUs are not vulnerable, or at least not to the full extent, so no, not all CPU manufacturers are producing shit.


The partnership with Radeon was a big deal some time ago. They couldn't not release it; AMD would have been pissed.


What else are you going to buy instead?


Currently, Ryzen.


Not immune from Spectre.


Sigh, yes it is; Ryzen is only vulnerable to Spectre if the eBPF JIT is turned on, which is not the default.
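If you want to check the setting being referred to on your own box, the knob is net.core.bpf_jit_enable (0 = off, which was the usual default; 1 or 2 = on). A minimal Linux-only check, though as the replies below point out this is not a real mitigation by itself:

    #include <stdio.h>

    int main(void) {
        FILE *f = fopen("/proc/sys/net/core/bpf_jit_enable", "r");
        if (!f) { perror("bpf_jit_enable"); return 1; }
        int v = -1;
        if (fscanf(f, "%d", &v) == 1)
            printf("net.core.bpf_jit_enable = %d\n", v);
        fclose(f);
        return 0;
    }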


You are thinking of a specific attack that uses the second Spectre vulnerability (poisoning of the indirect branch predictor) to attack the kernel. I'm sure in the following weeks other attacks on the kernel will appear that do not use eBPF. The same vulnerability can in principle be used to attack other processes. Still, there are both software and firmware mitigations for this attack.

The real elephant in the room is the first Spectre vulnerability (exploiting the ordinary branch predictor), which allows untrusted code (think JS) to read anything in the same process. Apparently most [1] CPUs are affected by this. There is currently no mitigation for it, other than rewriting applications to use separate address spaces, and even then, I think carefully crafted inputs could exploit even that.

[1] Apparently simple, in-order CPUs are not, but in principle there is no reason the attack couldn't work there as well.

Edit: another possible mitigation for Spectre v1 is converting all bounds checks controlled by untrusted input to force non-speculation (via if-conversion). It is going to take a very long time to convert all applications (it might be easier for VMs and JITs).
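For reference, a minimal C sketch of the v1 pattern described above, plus the masking flavour of if-conversion. The names here are made up for illustration, and the Linux kernel's real helper (array_index_nospec) uses inline asm so the compiler can't turn the mask back into a branch:

    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 16
    static uint8_t table[TABLE_SIZE];
    static uint8_t probe[256 * 4096];   /* cache side channel an attacker would measure */

    /* Vulnerable: the branch predictor can speculate past the bounds check, so an
     * out-of-range x briefly reads arbitrary memory and leaves a footprint in probe[]. */
    uint8_t victim_unsafe(size_t x) {
        if (x < TABLE_SIZE)
            return probe[table[x] * 4096];
        return 0;
    }

    /* Mitigated: clamp the index with a branchless mask so that even a mispredicted
     * path can only read in-bounds data. */
    static inline size_t index_nospec(size_t idx, size_t size) {
        size_t mask = (size_t)0 - (idx < size);   /* all-ones if in bounds, else zero */
        return idx & mask;
    }

    uint8_t victim_masked(size_t x) {
        if (x < TABLE_SIZE) {
            x = index_nospec(x, TABLE_SIZE);
            return probe[table[x] * 4096];
        }
        return 0;
    }

    int main(void) {   /* nothing exciting happens; the point is the pattern above */
        return victim_unsafe(3) == victim_masked(3) ? 0 : 1;
    }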


1) Ryzen is absolutely vulnerable to Spectre, however desperately you don't want to think so. (I don't want it either; I just bought a Threadripper 1950X, after all)

2) If you actually think the eBPF JIT being turned off is going to save you, and that Ryzen is magically immune because it's off, you're deluding yourself completely. Thinking eBPF is the key or whatever is a complete misread of the actual vulnerability... Attacks never get worse, they only get better. You are guaranteed to see more exploits that do not leverage eBPF but use other components of the kernel to compose gadgets. Modern systems are millions of LOC; there's ample surface area for this to happen.



