The smugness of this comment is unwarranted. Linux became a practical OS that runs everything from toasters to spaceships. The 'obvious superiority' of microkernels has failed to be borne out in the decades since the original debate. And the fact that Linux has adopted small pieces of the idea does not prove that the full design is as practical or performant as imagined.
Implementation details are everything and not always solvable.
The obvious superiority of microkernels is seen in the invisible ubiquity of QNX and L4 throughout the industry. There are probably more L4 deployments than Linux deployments, in hardware like baseband processors.
By that measure, the most common OS is probably not Linux but TRON. It's real-time, but I'm not sure it qualifies as a u-kernel. Nonetheless, the RTOS space in general is well served by u-kernels.
Supporting your point, OK Labs claimed years ago that their OKL4 microkernel had hit a billion units in the phone market, mainly for baseband isolation and legacy code (e.g. BREW). So it has exceeded Linux on servers and is slowly catching up to Android, especially since it can run Android virtualized.
And Samsung Galaxies use INTEGRITY Multivisor for Knox. BlackBerry and automotive vendors often use QNX. Apple does a hybrid with Mach and wants a real one. A lot of designs are moving in one direction in particular. ;)
"Apple does a hybrid with Mach and wants a real one."
I know that XNU is a chimera of OSF Mach and FreeBSD, but I'm pretty sure most of the practical u-kernel gains are lost in the process. It does keep the basic Mach resources (tasks, threads, VM, and IPC with its system of port-rights checking), but they're mostly leveraged as a convenient abstraction at best. I don't think OS X even supports per-application default memory managers, does it?
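To be fair, the Mach layer is still exposed straight to userspace on OS X. Here's a toy sketch (mine, not Apple's) that allocates a receive right in the caller's own task, which is the primitive the port-rights checking revolves around:

    /* Toy sketch, not Apple code: allocate a Mach port with a receive
       right in our own task, then drop the right again.
       Compile on OS X with: cc mach_demo.c -o mach_demo */
    #include <mach/mach.h>
    #include <mach/mach_error.h>
    #include <stdio.h>

    int main(void) {
        mach_port_t port = MACH_PORT_NULL;
        /* mach_task_self() is a send right to this task's own kernel port. */
        kern_return_t kr = mach_port_allocate(mach_task_self(),
                                              MACH_PORT_RIGHT_RECEIVE, &port);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "mach_port_allocate: %s\n", mach_error_string(kr));
            return 1;
        }
        printf("got receive right, port name 0x%x\n", (unsigned)port);
        /* Drop the receive right; the kernel tracks rights per task. */
        mach_port_mod_refs(mach_task_self(), port, MACH_PORT_RIGHT_RECEIVE, -1);
        return 0;
    }

So the machinery is there; it's just not exploited the way a pure u-kernel would exploit it.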
However, what you said about "wants a real one" really piqued my interest. Is it true Apple is researching pure microkernel designs for their products? That sounds great, do share some links.
Yeah, that and some basic functions are all they use it for. A write-up way back indicated they thought Mach was a mistake. So the alternatives were a better microkernel or going fully monolithic. One unofficial project tried to port it to L4. Not sure what Apple's stance is under Tim Cook, though.
DOS did something similar back in the day, and a desktop was even built on it. Windows later became the most widely deployed OS, and billions of lines of COBOL still power much backend processing. Would you likewise say their architectural superiority is proven by the number of users or deployments?
Those of us who push microkernels do it because they have been proven in practice, especially in embedded, for over a decade. Linus's complaints didn't pan out. MINIX 3 achieved better reliability in a few years than Windows or Linux managed in nearly their first decade, and driver isolation had a lot to do with that (rough sketch of the idea below). Anyone worried about performance should look at the PlayBook vs iPad demo they did a while back showcasing the QNX microkernel.
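To make the driver-isolation point concrete, here's a rough sketch of the idea (my own toy code, not MINIX 3's actual reincarnation server): the driver runs as an ordinary user process, and a supervisor restarts it when it dies instead of the whole system going down. The ./net_driver path is made up for illustration.

    /* Toy sketch of driver isolation, not MINIX 3 code. A supervisor runs
       a (hypothetical) driver binary as a normal process and restarts it
       whenever it crashes, so a driver bug can't take the kernel with it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
                return 1;
            }
            if (pid == 0) {
                /* Child: become the driver process (path is illustrative). */
                execl("./net_driver", "net_driver", (char *)NULL);
                _exit(127); /* exec failed */
            }
            int status = 0;
            waitpid(pid, &status, 0); /* block until the driver exits or crashes */
            fprintf(stderr, "driver died (status %d), restarting\n", status);
            sleep(1); /* crude back-off before reincarnating it */
        }
    }

In a real microkernel the driver is also confined to its own address space and talks to the rest of the system over IPC, which is where the reliability numbers come from.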
Linus was repeatedly warned [1] and was smug as heck in his replies; his OS is now adopting microkernel-like techniques, so I call it as he would. More humbly than he would, actually. ;)