> The ME does have a legitimate function, but it does so much more now, as it takes care of the hardware initialization, the main CPU boot up, control of the clock registers, DRM management for Audio/Video, software based TPM and more.
In other words, the ME is to Intel boards as Apple’s T2 chip is to their recent notebooks: an SoC that takes on the real role of being the “system processor”, turning the [rest of the] socketed CPU into effectively an “application processor.”
In fact, given that it’s so self-sufficient, it’s interesting that Intel chose to ship the ME as an “IP core” of the CPU itself, rather than making it part of the off-die Intel PCH chipset that they supply to mobo vendors. Is it just to provide the ME with low-latency access to CPU components like ALUs (for TPM encryption circuits) and d-cache (for packet sniffing)? Because it seems like it isn’t really built this way, and an “external ME” would be just fine running without a CPU socketed in at all. (Which would be neat, honestly; if you could exploit an external ME, you could run software on your Intel-chipset motherboard without a “real” CPU!)
Given that industry players seem to be all trending toward this design in one way or another (even game consoles did the “system SoC” / “application CPU” split with the last two generations), I wonder if this design pattern will ever be standardized, in the way that interrupt controllers or MMUs were standardized. Will we ever see an open-hardware board with its own open-hardware system-management SoC running FOSS firmware?
The CSME is on the PCH[1] (when one exists). (However AMD elected to put their PSP on-die as an IP-block). It is the root of trust of the system and takes the CPU out of reset even though it's not on the CPU itself. x86-based machines have pretty much always had multiple auxiliary processors/SoCs that control various aspects of the system such as the embedded controller (a direct descendant of the 8042 keyboard controller). Another example of this is the AMD SMU which lives on the chipset and coexists with the PSP (the talk in the link gets code running on this as you described)[2]. Interestingly the Chromebook EC firmware is open source[3]. Joanna Rutkowska has done a nice overview of some of the security aspects of these topics as well[4].
Ideally there would be a standard way to verify and measure all the various firmware images for these system processors during boot and during a DRTM event (with Intel's STM or AMD's SMM supervisor enabled as well), and potentially the option ROMs too, which would theoretically be measured with SRTM via UEFI, but there is a long way to go for that.
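To make the "measure" part concrete, here is a minimal Python sketch of the hash-extend pattern a TPM PCR uses; the blob contents are placeholders and this is illustration only, not an actual SRTM/DRTM implementation:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new_pcr = SHA-256(old_pcr || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Stand-ins for firmware blobs a measured boot would hash: EC firmware,
# an option ROM, the BIOS region, etc. (placeholder contents only).
blobs = [b"ec-firmware", b"option-rom", b"bios-region"]

pcr = b"\x00" * 32  # PCRs start zeroed at reset
for blob in blobs:
    measurement = hashlib.sha256(blob).digest()  # measure the image
    pcr = extend(pcr, measurement)               # fold it into the PCR

# A verifier compares the final PCR value against a known-good "golden"
# value computed from the expected firmware images.
print(pcr.hex())
```

Because each extend chains over the previous value, any change to any measured image (or to their order) produces a different final PCR, which is what makes the scheme useful for attestation.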
> an “external ME” would be just fine running without a CPU socketed in at all
and made Ryzen nearly an SoC, and EPYC an actual SoC (no chipset required at all, AFAIK). (To be fair, Intel did integrate many things onto the die as well.)
> Interestingly the Chromebook EC firmware is open source
And even the Google Security Chip is included under that. You can't run a customized GSC firmware on a production device unless you have Google's keys, but you can look, and hopefully reproduce the build?
Yes, the Raptor Computing POWER systems are almost entirely open. The CPU is entirely open, save the die mask.
It's worrying that these vulnerabilities are not disclosed in a way that lets people take control of their devices.
That the ME is so tightly integrated seems mainly to be for cost savings. There is also an argument for increased security, as now your attacker must be able to work with a decapped highly integrated CPU.
Intel did at one point do what you were suggesting -- if you look at the sandsifter project, the author found a family of devices that had a parallel execution unit which was dispatched instructions with a "secret" prefix and had full access to memory. This could still exist in newer processors but be better hidden; it would certainly make a lot of what one might want to do with the ME a lot easier.
This has been the case in Server systems for a while - IPMI[1] has been used for remote KVM, power management, remote mounting of ISOs, etc. on many servers for years. HP's iLO, Dell's iDRAC, and even SuperMicro boards have an implementation. This pattern is pretty standard, but none of it's FOSS.
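For a sense of what that out-of-band management looks like in practice, here is a rough Python sketch driving the stock ipmitool CLI against a BMC; the host address and credentials are placeholders:

```python
import subprocess

# Placeholder BMC address and credentials; lanplus is the usual RMCP+ transport.
BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.50", "-U", "admin", "-P", "secret"]

def ipmi(*args: str) -> str:
    """Run one ipmitool subcommand against the BMC and return its output."""
    return subprocess.run(BMC + list(args), capture_output=True, text=True, check=True).stdout

print(ipmi("chassis", "power", "status"))   # query current power state
ipmi("chassis", "power", "cycle")           # hard power-cycle the host
print(ipmi("sel", "list"))                  # dump the system event log
# "ipmitool ... sol activate" gives a serial-over-LAN console; it's interactive,
# so it's better run directly from a terminal than through subprocess.
```

All of this works regardless of what OS (if any) is running on the host, which is the whole point of having the BMC as a separate processor.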
Pretty sure the IPMI software is crap no matter the brand. I know the SuperMicro one requires a fat client running a very old Java version to interact with it, and is very flaky.
Also, if you need to interact with a Supermicro BMC that doesn't support the HTML5 console (for example, because it's running older firmware), I reverse-engineered the proprietary "iKVM" protocol (along with a lot of other parts of the BMC) and implemented support for it on a branch of noVNC, which you can find here: https://github.com/kelleyk/noVNC
Yes, but that stuff isn't actually IPMI. You normally only need it if you need a graphical console (or haven't redirected to the IPMI serial link), or need to mount a boot image, which is typically painfully slow. (FreeIPMI and associated tools like conman are good for IPMI management, with a set of workarounds for defective implementations.)
OpenBMC is free software, used by POWER9 systems, in particular. I haven't used it, but I think it's in the Summit and Sierra supercomputers as well as TALOS systems. IPMI/BMC implementations are a nightmare, and something you can potentially fix has considerable appeal.
It works if the computer is fully vPro compliant. There are castrated versions of vPro for small businesses that lack the KVM feature. It’s all free and you just need to do the initial provisioning in the MEBx (post UEFI).
Again, having the ME in the CPU is not enough. The chipset, NIC, and BIOS/UEFI support matter.
Some do have a subset of features enabled or that can be enabled (maybe, somehow). But my post was not meant to be generic or for a hacker, rather for a person who would like to use a Lenovo T450 with (AMT) KVM. While all Intel CPUs have the ME, this is not enough. The chances of "hacking" the rest of the features on such a system without explicit vPro support are slim to none for all intents and purposes.
Key words: AMT, vPro. You need a compatible CPU and board. Remote KVM is provided via the VNC protocol, so any VNC viewer should work. You need to open the web interface and proceed from there.
Basically it's a business feature and usually not available on consumer laptops.
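If you're not sure whether a particular box actually has AMT provisioned, a quick unauthenticated probe of the well-known AMT ports (16992 for HTTP, 16993 for TLS) is a reasonable first check. A small Python sketch, with the host address as a placeholder:

```python
import socket

HOST = "192.168.1.20"  # placeholder address of the vPro machine

for port in (16992, 16993):
    try:
        # Only checks that something is listening; it does not authenticate.
        with socket.create_connection((HOST, port), timeout=2):
            print(f"port {port}: open (AMT web interface likely provisioned)")
    except OSError:
        print(f"port {port}: closed/filtered")
```

If those ports answer, the AMT web UI is where you configure and launch the remote KVM session.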
And, very annoyingly, not available on the Intel NUC, even though remote-managing those would be very useful. I guess something about not wanting to cannibalise their server market.
If IME could be reprogrammed then maybe there would be a way to add these features.
Well, AMT looks very tempting because of the functionality.
On the other side because of security, I'm not sure whether I should be glad to use NUC models that don't have it at all.
So assuming I take the risk, what networking link does AMT require? Am I right in guessing that it works only over Ethernet? I'm thinking of mobile devices that have only a cellular modem link.
Intel NUC are weird little devices; for my home file server, which at most ever has one user accessing at a time, I replaced an old PC tower with a NUC and a 5 disk USB3 SATA dock. It works better, has excellent Linux support, and uses less power.
The device is weird because it strikes me as the perfect HTPC, and yet it isn't really marketed as such.
I've bought something like 10 of them (the oldest two are Gigabyte Brix, but I'll count those with them). There are a few home servers, but the main use is as a pool of machines for oVirt, OpenStack, Kubernetes, and RHEL work. The machines get wiped and repurposed quite a bit.
They're a lot more convenient for this than actual 19" servers, since they take up a tiny fraction of the space and power of even a single server. However they would be so much more useful if I could remote IPMI into them rather than having to find a monitor, keyboard, HDMI cable and mouse every time I want to fix them.
They would also be reasonable for home theatre since they are silent and low power, while at the same time having decent CPUs, but they use Intel graphics so I guess they probably can't drive 4K + 60Hz displays, although I've never actually tried.
I only drive a 1080p display with it, so I couldn't comment on 4k; however, it runs most everything I throw at it from my Steam library, reasonably well. The newer Intel GPUs aren't that bad, and most games don't need much to be fun.
Lenovo had at least one model in each generation's lineup with vPro support since at least the X200/T400 (so just over 10 years).
Rule of thumb (as far as I know, not exhaustive): No i3 model has vPro support, i5 models may or may not have vPro depending on the particular CPU used, and more or less all i7 models have vPro.
Yes. You can do almost anything you can do with IPMI, which includes loading a CD over the network to reinstall the OS or redirecting the serial port.
On newer machines you can set the IME to connect to an IPsec tunnel when certain conditions are met, and keep the IME enabled when the machine is on battery power and a wireless network. This allows you to administer the device when it is "outside of the office."
It is possible to use an Intel machine without the ME. Since there are constant vulnerabilities and exploits around the ME, many enthusiasts do not like the idea of a vulnerable and secret super-admin computer on their computer. There is the option to disable the ME on supported devices (usually old Thinkpads) using me_cleaner[1].
I personally run Coreboot on my Thinkpad with the ME "disabled" (essentially just broken and stuck in a constant bring-up state), and System76[2], Purism[3], and Dell sell machines with the option of disabling the ME entirely, if one is super-paranoid.
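For anyone curious what that looks like in practice, here is a rough sketch of the usual me_cleaner workflow (dump the SPI flash with an external programmer, clean the image, flash it back), driven from Python for readability. The programmer name and file names are placeholders, the -S/-O flags are me_cleaner's documented options, and doing this without an external programmer to recover from a bad flash can brick the machine:

```python
import subprocess

def run(*cmd: str) -> None:
    """Echo and run one external command, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Dump the SPI flash (ch341a_spi is a common cheap external programmer).
run("flashrom", "-p", "ch341a_spi", "-r", "dump.bin")

# 2. Soft-disable the ME: -S sets the HAP/AltMeDisable bit and strips most
#    firmware modules; -O writes to a new file instead of modifying in place.
run("python3", "me_cleaner.py", "-S", "-O", "cleaned.bin", "dump.bin")

# 3. Write the modified image back.
run("flashrom", "-p", "ch341a_spi", "-w", "cleaned.bin")
```

On supported boards the system still boots because the early bring-up code in the ME's FTPR partition is left intact; only the later application modules are removed.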
No, you cannot use modern Intel systems without the ME. me_cleaner will only remove parts of it. What happens exactly (whether it actually stops working or goes into some undocumented free-for-all debug mode) is unknown, as there is no introspection into the ME. The HAP bit asks the ME nicely to disable itself, sometime after it has booted the system.
Purism routinely overstates their capabilities in this regard, claiming to "neutralize" the ME.
Also note that the ME is a hardware feature. Most efforts to remove/disable it focus on the ME firmware, which is loaded only some time after boot. Some ME function remains even if you completely zero out the firmware.
See Peter Stuge's 30C3 talk "Hardening hardware and choosing a #goodBIOS", noting the IPv6 packet sent over the network interface even then (around the 17:18 mark).
You can also flip the HAP bit in the flash descriptor region on the newest Intel chips (which me_cleaner does not support) using the Intel flashing tools. It's just called a "reserved bit" and defaults to false.
I’m confused... other comments reference ME as the root-of-trust for the system, the chip that brings the CPU out of reset. How can a system be operational without that functionality?
Current Intel chipsets and CPUs cannot initialize the system without the ME. You can disable all the applications running on the ME, but it is required to bring up the system.
Just wanted to say that this was written really well. As someone who doesn’t know anything about the subject, I was easily able to follow along and understand. That’s kind of rare around here for me.