OP added "ATI" by themselves rather than keeping the original title. What a shame.
Anyway:
> It provides an additional six displays up to 4K with the Intel HD graphics that has three, giving a total of nine outputs. The Radeon Graphics supports DisplayPort 1.4 with HDR and HDMI 2.0b with HDR10 support, along with FreeSync/FreeSync2. As a result, when the graphics output changes from Intel HD Graphics to Radeon graphics, users will have access to FreeSync, as well as enough displays to shake a stick at (if the device has all the outputs).
Yes, if those NUCs/HTPCs provide all of those capabilities; otherwise it's just marketing words. In reality, I'd guess only the top-notch models have more than one DisplayPort.
The already-announced Hades Canyon NUC has 2x DisplayPort, 2x HDMI 2.0a and 2x Thunderbolt 3. That amounts to six ports capable of driving a 4K display. Thunderbolt 3 ports can drive two 4K displays at 60Hz if the controller supports it.
I suspect that other products using this processor will be less generous in terms of connectivity, simply because it's total overkill for the overwhelming majority of users.
Yes, that's called Multi-Stream Transport (MST). In general, I'd rather have multiple ports on my output source rather than daisy-chaining which requires not only the output to support MST but also the displays; most displays do not have an output port for MST.
No they don't; they'll fuse AMD's Vega core with their CPU (including its IGP) to make this. The AMD GPU will be used for heavy tasks like gaming and rendering, while the Intel IGP will be used for lower-power tasks like display output or H.264/H.265 encode/decode.
"Fuse" is a little strong. There's a Vega GPU in a multi-chip module, connected over an x8 PCIe link. It's not like Intel licensed the GPU for integration into their own silicon.
We've already seen laptops with Intel/AMD hybrid graphics; this just moves it from the motherboard to the other side of the socket without actually giving you the high-speed interconnect that a GPU-on-CPU design gets.
Each of the new parts is a quad-core design using HyperThreading, with Intel’s HD 630 GT2 graphics as the traditional ‘integrated’ low power graphics (iGPU) for video playback and QuickSync. This is connected via eight PCIe 3.0 lanes to the ‘package’ graphics (pGPU) chip, the Radeon RX Vega M, leaving 8 PCIe 3.0 lanes from the CPU to use for other functionality (GPU, FPGA, RAID controller, Thunderbolt 3, 10 Gigabit Ethernet).
So Intel isn't throwing in the towel on the iGPU, they're just augmenting it with a discrete GPU in an all-in-one package for system builders.
So here is the odd thing: servers are (often) managed by trained professionals, and have backups and failovers.
Personal computers are, well, not like that. And yet they often have people's important creative works, correspondence, etc. on them. ECC is probably more useful there; it's just that it is harder to make any benefit visible to the customer.
(That doesn't mean customers don't care about reliability. It is just that they have no sane way of distinguishing a product that really is reliable from one with advertising that lies about being reliable.)
Servers accumulate bitflips over long periods of time because they run 24/7. If you reboot your computer every few days to clear the memory then it's not going to be a significant problem.
Running 24/7 is not that relevant. It's the number of writes/refreshes on a given bit of information that matters. A server running 24/7 and a laptop, both reading/converting/writing image files, have the same chance of corrupting each single image. The server has a better chance of corrupting its cached executables, though, since they have longer lifetimes.
The need for ECC has nothing to do with maintenance or proper administration.
You really don't need ECC as a normal user; most of the time bit flips won't really hurt you. However, if, for example, you run long-running tasks like 3D rendering or physics simulations, you may want ECC just to be sure your OS won't be killed by a bit flip in the wrong section. Your photo gallery or music collection, however, will most likely never be hurt by something like that, so consumers still don't need to waste their money on overpriced ECC memory.
> Your photo gallery or music collection however will most likely never be hurt by something like that
Citation needed that people don't care about their photos and music. Human error and storage failures are more likely sources of loss but we're a big industry and can make progress on more than one thing at a time.
> consumers don't need to waste their money on overpriced ECC memory
If ECC went mainstream, prices would drop as volume increased.
Except that ECC is insufficient because some Rowhammer attacks can flip more than two bits per memory word. The proper mitigation seems to be TRR (target row refresh).
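To make the "more than two bits per word" point concrete: standard ECC is a SEC-DED code (single-error-correct, double-error-detect). Below is a toy C sketch of the idea, my own illustration rather than how a real DIMM is wired (real modules typically protect 64-bit words with 8 check bits; this scales it down to a 4-bit value). One flip is corrected, two flips are at least detected, but three flips can alias to a "single error" and be silently mis-corrected, which is exactly the Rowhammer worry:

    /* Toy SEC-DED demo: Hamming(7,4) plus an overall parity bit.
       Uses the gcc/clang __builtin_parity builtin. */
    #include <stdio.h>
    #include <stdint.h>

    /* Encode 4 data bits into a 7-bit Hamming codeword (bits 0..6)
       plus an overall even-parity bit in bit 7. */
    static uint8_t encode(uint8_t d)          /* d: 0..15 */
    {
        uint8_t d1 = (d >> 0) & 1, d2 = (d >> 1) & 1,
                d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;
        uint8_t p2 = d1 ^ d3 ^ d4;
        uint8_t p3 = d2 ^ d3 ^ d4;
        /* bits 0..6 = codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4 */
        uint8_t cw = p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
                     (d2 << 4) | (d3 << 5) | (d4 << 6);
        return cw | ((uint8_t)(__builtin_parity(cw) & 1) << 7);
    }

    /* Returns 0 = clean, 1 = single error corrected, 2 = double error
       detected (uncorrectable). Three or more flips can look like a
       single error and get silently mis-corrected. */
    static int decode(uint8_t cw, uint8_t *out)
    {
        uint8_t c[8];
        for (int i = 1; i <= 7; i++) c[i] = (cw >> (i - 1)) & 1;
        uint8_t s = (uint8_t)((c[1]^c[3]^c[5]^c[7])        |
                             ((c[2]^c[3]^c[6]^c[7]) << 1)  |
                             ((c[4]^c[5]^c[6]^c[7]) << 2));  /* syndrome = error position */
        int parity_ok = (__builtin_parity(cw & 0x7F) & 1) == ((cw >> 7) & 1);

        int status = 0;
        if (s != 0 && !parity_ok)      { cw ^= 1 << (s - 1); status = 1; } /* correct */
        else if (s != 0 && parity_ok)  { status = 2; }                     /* detect  */
        else if (s == 0 && !parity_ok) { cw ^= 1 << 7; status = 1; }       /* parity bit itself flipped */

        for (int i = 1; i <= 7; i++) c[i] = (cw >> (i - 1)) & 1;
        *out = c[3] | (c[5] << 1) | (c[6] << 2) | (c[7] << 3);
        return status;
    }

    int main(void)
    {
        uint8_t data = 0xA, cw = encode(data), out;
        int st;

        st = decode(cw ^ 0x04, &out);   /* 1 flip  */
        printf("1 flip : status=%d data=%#x\n", st, out);  /* corrected back to 0xa */

        st = decode(cw ^ 0x05, &out);   /* 2 flips */
        printf("2 flips: status=%d data=%#x\n", st, out);  /* flagged as uncorrectable */

        st = decode(cw ^ 0x15, &out);   /* 3 flips */
        printf("3 flips: status=%d data=%#x\n", st, out);  /* reports "corrected", but the data is wrong */
        return 0;
    }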
I could imagine an OEM like Apple or Dell looking at these to free up space in the board layout of their notebooks or small-form-factor computers.
In such a scenario ECC memory is not really a high priority, no?
Looking at the PCI-Express lanes available, those other 8 CPU lanes look ripe for a Thunderbolt 3 controller, with the rest of the peripherals being driven from the PCH (or does that make no sense at all?).
Not sure why Intel doesn't just release a full system-in-package for a basic laptop system, with 16GB or 32GB of main memory and 256GB or 512GB of NVMe flash?
I think Intel still makes flash chips, and they originally started out as a DRAM manufacturer as well.
Because that would mean discarding an entire package if just one component failed or is damaged during production.
It would also mean that manufacturers would have to order different parts for two systems that are identical except for memory or storage. In your example, with 16 or 32GB of RAM and 256 or 512GB of storage, you'd end up having to order four different SKUs from Intel, and Intel would have to manufacture four different SKUs, plus one without memory and storage.
Logistically I don't think it makes sense to add storage and memory to the package, it just adds inflexibility and more SKUs.
Yeah, it would be harder to make and sell "handicapped" configurations to users for high prices, forcing them to prematurely upgrade... hence everybody would milk less money from the end users. Also, some still sell 5400 RPM HDDs (!!!) in some versions of current-gen machines, and some people actually buy them!
All the systems you see today with like 6-8GB of RAM and <500GB of SSD storage will practically force their users to upgrade in <3 years, whereas if you sell the average home user a system with >=16GB of RAM and >500GB of SSD, they could keep using it for up to 6 years with no chance of it feeling underpowered... so, very bad for business!
They make something similar. An Intel NUC is basically a full laptop-class system with the laptop-specific bits missing (no screen, keyboard, battery, etc.).
You can buy it barebones, but you can also buy some of them with everything pre-installed (including RAM, flash storage, and a licensed Windows OS).
Why are AMD helping to sell a competitor's product? The article says it's strictly business, but I would have thought the quick buck made today by selling graphics chips to Intel would be outweighed by the long term benefit (e.g. growth in the combined CPU/GPU market) this provides to Intel.
If I remember the last numbers right, Apple's laptop sales share is around 10% at best, so if anything like that story happened, it would not be them but HP (25%)/Lenovo (20%)/Dell (15%).
Worth noting that Apple’s share of high-end Intel chips is much higher. Thus, Apple’s share of Intel’s consumer revenue is probably substantially north of 10%.
Are you suggesting that companies like HP or Lenovo couldn't get into producing ARM chips if they wanted to?
Intel isn't threatened by ARM yet in a performance laptop.
Of course, one could reply that laptops don't need that much power anymore to do what most users use them for today (hence people, even here, using a tablet as their main tool), but that's another issue entirely, "much cheaper and good enough", and one where Apple isn't Intel's main opponent either (I would even wager that there, Intel is its own enemy, given how terribly they've handled the Atom brand and performance to protect margins).
Those companies could conceivably do a lot of things, but that's a lot different to having in-house CPU and GPU design teams already staffed and firing on all cylinders for years.
Not sure I get you re Apple being an opponent to Intel for "cheap" parts. It goes without saying that Apple would not sell their own stuff to anyone else. And for their own use, I'm sure the hardware would be very cheap to manufacture.
Doesn't change what I said though. Unless you expect Apple to take over the laptop market (if they didn't reach above 10% during the last pro-apple decade, they're not going to do so now especially given their current issues in the field), or start selling their custom chips to third parties (they've never ever done so, unless I'm mistaken) AND those chips taking over the market, they're not the threat Intel has to worry about.
Intel has to worry about Nvidia taking over graphics and machine learning, AMD punching above its weight in workstations with EPYC and Threadripper, and ARM being cheap and good enough in all the low-power fields and growing.
A middle-of-the-line, ARM-based, performance-oriented chip for laptops is no threat to the i5/i7, except indeed the risk of Intel losing its place in Macs.
I just don't see how other laptop manufacturers are doing as particularly relevant to Apple's decision making.
If they ditched Intel, that would very publicly tear a strip off Intel, regardless of absolute market share. That's not something Intel wants, and it's what I contended Intel has gone to unusual lengths, integrating AMD graphics, to avoid.
This is a joint venture against Nvidia basically. ML is the next big thing, and Nvidia has smartly positioned GPUs as the best way to do ML. AMD needs to claw back marketshare from Nvidia, and partnering with Intel is a quick way to do it. Intel also needs to keep Nvidia in check.
Now they should co-develop and promote a well designed open source ML framework, something that can compete with CUDA. AMD isn't up to the task, but Intel is.
In the UK that sort of merger would almost certainly be blocked by the Competition Commission: I'd be interested to hear what the situation is in the US, where Intel and AMD are based.
Yeah, because all they have to do is just tweak some code and re-release... /s It takes 2 - 4 years to design a chip. If Intel doesn't release the new chip they started designing 2 - 4 years ago, they waste that development cost, fall behind AMD, and you complain that they are behind the times. If they do release the chip you complain they haven't fixed the bug you just learned about a week ago. It took a year for everyone to fix their software, and you want Intel to magically come out with new hardware? Sheesh.
I think all the "Intel sucks" comments should have a disclaimer "Full disclosure: I've never designed hardware in my life, I have no idea how to run a business, I really just hate Intel and this is an excuse to vent my hatred."
A valid perspective, although I'm not going to lie: I don't particularly appreciate the extra (non-value-adding) work that's been dumped into my team's backlog in terms of figuring out and implementing mitigations, ahead of applying patches, when we already have a dozen projects on the go, with about half of them scheduled to deliver in the next couple of months. _Thankfully_ we're running non-virtualised on dedicated hardware, due to performance and cost considerations, so the risks are somewhat reduced for us.
I also admit to taking a fairly dim view of Intel's PR around the issues, and the suspicion that Meltdown, in particular, exists because they - specifically - have played it a bit fast and loose with their processor designs in order to gain a performance edge, and perhaps to a greater extent than AMD and ARM.
Granted, this will take years for a final resolution in hardware though.
Nope. This bug is far from new. Intel has known about this since 2012 [0].
I don't expect them to release a fix within a year or something. I know that CPU design is far from easy or doable within weeks. But how they handled the whole matter shows that they don't care. Intel sucks, that's true. They should admit that they produced shit and start talking to their customers and show that they care.
Oh and also, who in their right mind would buy a CPU with those bugs?
>They should admit that they produced shit and start talking to their customers and show that they care.
Someone correct me if I'm wrong, but isn't almost every modern CPU vulnerable in some way to Spectre? If so, isn't every modern CPU manufacturer "producing shit" in your eyes?
You are thinking of a specific attack that uses the second Spectre vulnerability (poisoning of the indirect branch predictor) to attack the kernel. I'm sure in the following weeks other attacks on the kernel will appear that do not use eBPF. The same vulnerability can in principle be used to attack other processes. Still, there are both software and firmware mitigations for this attack.
The real elephant in the room is the first Spectre vulnerability (exploiting the normal conditional branch predictor), which allows untrusted code (think JS) to read anything in the same process. Apparently most [1] CPUs are affected by this. There is currently no mitigation for it, other than rewriting applications to use separate address spaces, and even then, I think with carefully crafted inputs even that can be exploited.
[1] Apparently simple, in-order CPUs are not, but in principle there is no reason the attack couldn't work there as well.
edit: another possible mitigation for Spectre v1 is converting all bounds checks controlled by untrusted input to force non-speculation (via if-conversion). It is going to take a very long time to convert all applications (it might be easier for VMs and JITs).
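For anyone curious what that if-conversion looks like in practice, here is a rough C sketch (my own illustration, not from the article; the array names and sizes are made up). The idea is to turn the bounds check the CPU can speculate past into a branchless mask on the index, so even a mispredicted branch can't make the dependent load touch out-of-bounds data; the Linux kernel's array_index_nospec() helper works along the same lines:

    /* Illustrative only: the classic Spectre v1 gadget shape and a
       branchless ("if-converted") bounds clamp. */
    #include <stddef.h>
    #include <stdint.h>

    #define TABLE_SIZE 256
    static uint8_t table[TABLE_SIZE];
    static uint8_t probe[256 * 4096];   /* cache side-channel probe array in the usual PoC */

    /* Vulnerable shape: the bounds check is a conditional branch, so the
       CPU may speculatively execute both loads with an attacker-chosen
       idx, leaving a secret-dependent footprint in the cache. */
    uint8_t victim_vulnerable(size_t idx)
    {
        if (idx < TABLE_SIZE)
            return probe[table[idx] * 4096];
        return 0;
    }

    /* All-ones when idx < size, all-zeros otherwise, computed without a
       branch. Relies on arithmetic right shift of negative values (gcc
       and clang do this) and on size being far below the sign bit. */
    static inline size_t index_mask(size_t idx, size_t size)
    {
        return (size_t)(~(intptr_t)(idx | (size - 1 - idx)) >>
                        (sizeof(size_t) * 8 - 1));
    }

    /* Mitigated shape: even if the branch is speculated the wrong way,
       the masked index stays inside 'table', so no out-of-bounds byte
       can be encoded into the cache. */
    uint8_t victim_mitigated(size_t idx)
    {
        if (idx < TABLE_SIZE) {
            idx &= index_mask(idx, TABLE_SIZE);
            return probe[table[idx] * 4096];
        }
        return 0;
    }

    int main(void) { return (int)victim_mitigated(3); }  /* just to make it a complete program */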
1) Ryzen is absolutely vulnerable to Spectre, however desperately you don't want to think so. (I don't want it either; I just bought a Threadripper 1950X, after all)
2) If you actually think the eBPF JIT being turned off is going to save you and that Ryzen is magically immune because it's off: you're deluding yourself completely. Thinking eBPF is the key or whatever is a full misread of the actual vulnerability... Attacks never get worse, they only get better. You are guaranteed to see more exploits that do not leverage eBPF, but other components of the kernel to compose gadgets. Modern systems are millions of LOC; there's ample surface area for this to happen.