In the time since that comment was written (just under 5 years), Linux replaced most graphics drivers with ones based on in-kernel modesetting and the DRI2 interface. This was done without breaking applications. So, the assertion that changing the video driver model would be disruptive is kind of disproven by reality. I guess Linux has a liberal bias.
That's not to say that NT doesn't have benefits. Linux is still catching up with implementing some features that Windows has had for some time (and multiple GPU support is actually a great example of that), but so far there's no real evidence that these disparities are because of architectural differences.
Really, a worthwhile comparative analysis requires someone who has a deep understanding of the kernels they're comparing. I'm pretty familiar with Linux but know almost nothing about NT, so I'm a bad choice. But "Take the recent Linux arguments about the HardLocks code that is giving Linux trouble with multi-processor granularity"? That's not someone who knows Linux, otherwise they'd be using words that I recognise. "You call BSD a kernel, it technically is a set of APIs"? That's not someone who knows BSD either. This isn't an in-depth analysis of benefits that one kernel has over another. It's a handwavy justification of some NT design decisions without any reasoned comparison to Linux design decisions in the same area.
I'd love to read an in-depth comparison of the benefits of NT over Linux. This isn't it. Is there one?
When I first started at Microsoft, I once told a coworker something like, "Hey, I've heard NT is revolutionarily elegant, but so far, I've just seen features that, while fairly nicely implemented, are universal across operating systems." He replied by reminding me that other operating systems have caught up, while NT was pretty close to its present design even in the early 90s --- it was like a modern operating system had travelled through time back to 1992. It was revolutionary.
(I spoke to someone else who was utterly amazed that NT allowed driver development without requiring that the whole kernel be relinked from object files.)
> In the time since that comment was written (just under 5 years), Linux replaced most graphics drivers with ones based on in-kernel modesetting and the DRI2 interface. This was done without breaking applications. So, the assertion that changing the video driver model would be disruptive is kind of disproven by reality.
mjg59, I could not disagree more. Without working drivers what good are working apps? I can run my Radeon x1900 under Windows 7/8 with Windows Vista drivers. Runs StarCraft2 and newer FPS games just fine.
The Linux kernel devs are so determined to break binary compatibility, I haven't been able to run with ATI's proprietary binary drivers for years. While AMD was a good open source citizen and released the specs, the open source drivers for my card are useless for anything other than 2D. That's about as disruptive as it gets.
"The Linux kernel devs are so determined to break binary compatibility...."
Well, of course. As long as no one else can use their drivers, they won't get a serious competitor in open source land. Pity about the rot in official drivers, though (e.g. for me, the latest is USB 1.x sound dongles in debian lenny -> squeeze).
Which suggests to me that if I wanted to write an OS that could become really popular, I should use Windows device drivers in a microkernel approach where each lives in its own address space and process. Hmmm.
>other operating systems have caught up, while NT was pretty close to its present design even in the early 90s
The one thing you will find almost as a theme with Linux design is that there is a minimum of "future" in it. That applies to most of the Linux user space as well. This is a good thing and a bad thing.
For example, multi-GPU systems with multiple outputs were not taken into consideration when both the kernel and Xorg parts were designed. They took the most common use case of a single GPU, probably made some assumptions around it, and coded something that works for that use case.
Microsoft, on the other hand (almost out of necessity), had to actually design the NT kernel while making no assumptions - multi-GPU systems may not have existed when XP was released, but nonetheless MS had to design for the eventual reality where some OEM would stuff two GPUs into a machine and have to write drivers for them, without Microsoft being able to change the XP kernel design or even implementation in any way. I am sure the same can be said about scalability - when massively multicore CPUs weren't around, the NT kernel was mostly up to the task of supporting them, whereas Linux adapted with a lot of scalability-related work that followed after the fact. (Remember the early NT vs Linux benchmarks?)
Designing for the future is bad because it comes with certain baggage and, to an extent, added upfront costs. It is good for future compatibility and code maturity, and it obviously saves rewrite effort - you don't have to rip out all the code and start over while taking care not to break existing software, etc. I am sure there is less code churn in the Windows world than in the Linux world, where people are constantly adapting to new realities. That creates an environment ripe for new bugs and instability, and only the sheer amount of volunteer effort keeps the cost from being prohibitive.
> The one thing you will find almost as a theme with Linux design is that there is a minimum of "future" in it. That applies to most of the Linux user space as well. This is a good thing and a bad thing.
Maybe because it is mostly about the "past", bringing enterprise UNIX and mainframe designs into Linux?
That applies to the core kernel. But drivers/hardware support, on the other hand, is the pure Linux way of doing things - I don't think any mainframes or enterprise UNIXes had to bother with multi-GPU systems, so what you see in that area of Linux design is pure, quick-to-market hackery aided by delay-design-until-it-no-longer-works-without-it.
It's depressing in a way, but that's the best we've got when it comes to a hackable open source OS that you can somehow run on your hardware and even patch things you don't like.
SGI, Sun, HP, even NeXT supported multiple displays, frame buffers, and GPUs.
You could put 2 or 3 NeXTdimension cards in a NeXT Turbo Cube, for example - each with its own i860 running Display PostScript and something like 32MB of RAM - and use a display attached to each.
And the other workstation vendors had even fancier graphics subsystems and options. You could get real time stereo (LCD shutter) 3D with full Z-buffer and overlay planes for UI use (requiring a few dozen MB of VRAM) for the cost of a sports car. And with some systems use several at once.
Find some old copies of UNIX World or Personal Workstation or BYTE from the era...
Maybe GPUs are a bad example - for USB, FireWire and most of the other driver code and supporting infrastructure that constitutes the Linux kernel, the point (that its design is mostly uninfluenced by older UNIX) still stands. Besides, the old UNIX vendors did not seem to have contributed much in terms of the Linux GPU stack, maybe because that's not where the money was/is. SGI contributions, for example, are predominantly in the area of filesystems and scalability. (I am aware that some places use Linux workstations for graphics - but Nvidia essentially ships their own Xorg replacement stack along with a highly kludgy proprietary driver - nothing remotely related to great design there.)
Filesystems, the TCP/IP stack, multi-core scalability, etc. are the areas that benefited most from old UNIX design. (Questionable in the case of TCP/IP - Alan Cox rewrote it and I am not sure whether he was influenced by any older UNIX implementation or not.)
Yes, but still, how much of IRIX's GPU stack did SGI contribute to Linux? How much of the current DRI/DRM/GPU driver infrastructure is inspired by old UNIX designs? The point I am trying to make is that the core kernel code is influenced by old, tried and trusted UNIX designs, even if it wasn't up to snuff in the scalability area for a long time. But the rest (incl. GPUs, the USB stack, V4L2, driver support code (kobject), etc.) is a different story altogether - there, old designs were either inaccessible or largely inapplicable due to different requirements, and that's not an area of the Linux kernel anybody is proud of.
I don't think you were implying that all other operating systems were catching up, just most of them.
For example, Solaris has had a stable driver DDI (Device Driver Interface) for years. This is what allowed ISVs to write drivers for ancient versions of Solaris that still work today (assuming the base hardware platform is still supported).
That's also what allows the latest versions of the nVidia driver to generally just work on different releases of Solaris.
IDK, from what I've just read NT looks like someone took Minix (released in 1992) and worked on it to make it useful. Plus adding a subsystem layer.
Frankly, I'm still more impressed by a monolithic kernel that did it better, before: Plan 9 (released in 1991). What use is it for NT to be object-oriented if it can't share its objects over a network like Plan 9?
> IDK, from what I've just read NT looks like someone took Minix (released in 1992) and worked on it to make it useful. Plus adding a subsystem layer.
VMS.
> Frankly, I'm still more impressed by a monolithic kernel that did it better, before: Plan 9 (released in 1991). What use is it for NT to be object-oriented if it can't share its objects over a network like Plan 9?
Sadly the UNIX crowd decided it was not worth adopting.
The article describes the elegance of NT compared to Linux. It claims that this elegance has some benefits, but it is probably unfeasible to verify that.
Object-oriented? Linux is partially object-oriented (e.g. file systems). Modularity? Linux does have modules and you can put substantial amounts of functionality in them. If you need or want to is another question. Client-Server? The discussion of micro- vs monolithic kernel is not completely settled, but so far Linux does pretty well despite theoretical and academic arguments against its architecture.
Elegance is nice, but at the end of the day it is the features and the performance which matter to the user. NT claims better flexibility; however, it is Linux that can be used everywhere from cloud to cluster to desktop to mobile to small embedded devices.
> Object-oriented? Linux is partially object-oriented
NT's OO is actually very pragmatic. Things such as files, pipes, processes and locks are objects that, because they are global, can be shared across processes and worked on by many different APIs.
Every object has ACLs that are managed by the kernel itself, so you can create a named pipe that is accessible by users with a given role, for example.
You can use synchronization APIs with heterogeneous collections of object handles; for example, you can tell the kernel to wake up on the next mutex, file activity, or process exit simply by giving those objects to the appropriate wait API. Unix's select() pales in comparison.
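To make that concrete, here's a minimal sketch of my own (not from the article) using the documented Win32 calls - one wait covering a mutex, an event that overlapped file I/O could signal, and a process handle:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Three very different kernel objects, all usable with the same wait API. */
        HANDLE mutex = CreateMutexW(NULL, FALSE, NULL);        /* unowned, so already signalled */
        HANDLE event = CreateEventW(NULL, FALSE, FALSE, NULL); /* e.g. attached to overlapped file I/O */
        HANDLE proc  = GetCurrentProcess();                    /* a process handle (signalled on exit) */

        HANDLE handles[] = { mutex, event, proc };

        /* Wake up when ANY one of them is signalled, or after 5 seconds. */
        DWORD r = WaitForMultipleObjects(3, handles, FALSE, 5000);
        if (r == WAIT_TIMEOUT)
            printf("timed out\n");
        else if (r == WAIT_FAILED)
            printf("wait failed: %lu\n", (unsigned long)GetLastError());
        else
            printf("handle %lu was signalled\n", (unsigned long)(r - WAIT_OBJECT_0));

        CloseHandle(event);
        CloseHandle(mutex);
        return 0;
    }

(Here the unowned mutex is already signalled, so the wait returns immediately with index 0; the point is that mutexes, events, files, processes and threads all go through the same handle-based wait.)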
Even though it got settled in a couple of years, the agreement was kept secret at the time and BSD was still under a legal cloud. And legitimately so: the settlement was eventually made public and contained this gem:
"7. Further Participation in Litigation. The University agrees that it will not actively assist or support BSDI's defenses or counterclaims in the Federal Action or the efforts of any other party who asserts in any action the right to copy, use, or disclose to non-licensees of USL any of the material contained in the Restricted Files or the invalidity of USL's proprietary rights in the UNIX System. However, nothing in this provision shall prohibit the University from responding to any discovery permitted a third party under federal or state law or from defending any claim that may be asserted against the University or the Individual Regents." (http://www.groklaw.net/article.php?story=20041126130302760).
(In retrospect it turns out SCO was never a real threat, for they didn't have the copyright to UNIX(TM), only the duty to collect fees for a 5% vig, and they knew before the lawsuits that they didn't have it (!!!).)
Feature-wise, there isn't much difference anymore. That's because Linux is not a community-driven effort with volunteers sending patches. IIRC there was an article about how 75% of the code being contributed was by paid developers. That has caused most feature gaps to be minimized.
The primary difference that remains is that Linux has "evolved" whereas NT was "designed". The reason for this is that modern Linux has had to work around several shortcomings of the UNIX model (e.g. non-async I/O, the suid mess, non-preemptive kernel syscalls, etc.).
So, what that means is that NT (in design) has specific interface boundaries in the kernel. For example, the device driver design has specific interfaces for the I/O Manager, the Power Manager, the PnP Manager, etc., and communication is achieved via I/O request packets in a highly structured way. Linux is much more monolithic in this respect (even though both get the job done in the end). I don't have access to the NT source so I can't say how much of this modularity exists in practice, but at least in theory it means that major changes to the kernel become much easier.
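To illustrate the "specific interfaces" point, here's roughly what the skeleton of an NT driver looks like against the published DDK/WDK headers (an illustrative stub of mine, not code from any real driver): the I/O Manager delivers every request as an IRP to per-major-function entry points the driver registers at load time.

    #include <ntddk.h>

    /* Each request arrives as an IRP through the dispatch routine registered
       for that major function code (create, close, read, PnP, power, ...). */
    NTSTATUS DispatchCreateClose(PDEVICE_OBJECT DeviceObject, PIRP Irp)
    {
        UNREFERENCED_PARAMETER(DeviceObject);
        Irp->IoStatus.Status = STATUS_SUCCESS;   /* result of the request */
        Irp->IoStatus.Information = 0;           /* e.g. bytes transferred */
        IoCompleteRequest(Irp, IO_NO_INCREMENT); /* hand the IRP back to the I/O Manager */
        return STATUS_SUCCESS;
    }

    VOID DriverUnload(PDRIVER_OBJECT DriverObject)
    {
        UNREFERENCED_PARAMETER(DriverObject);
    }

    NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
    {
        UNREFERENCED_PARAMETER(RegistryPath);
        /* The I/O Manager calls these entry points; a real driver would also
           register IRP_MJ_PNP and IRP_MJ_POWER handlers for the PnP and Power Managers. */
        DriverObject->MajorFunction[IRP_MJ_CREATE] = DispatchCreateClose;
        DriverObject->MajorFunction[IRP_MJ_CLOSE]  = DispatchCreateClose;
        DriverObject->DriverUnload = DriverUnload;
        return STATUS_SUCCESS;
    }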
>That's because Linux is not a community-driven effort with volunteers sending patches.
Of course it is. Find something broken and submit your fix. Try that for NT Kernel.
Also, it is true that a lot of GNU/Linux devs (it's not limited to the kernel) are paid by big companies. But there are a lot of volunteers who do this in their spare time, and if they do a good job they may be hired by someone who pays them to do it full time. That's how I got my job.
My point was that commercial investment has resulted in minimizing any gap that existed between the two. If the majority of the work on Linux were done by unpaid volunteers working in their free time, then Linux would have remained an obscure hobbyist OS. So yes, Linux is not community-driven. It's totally commercial-interest driven. There is nothing wrong with that.
The company I work at employs a few open source devs and we usually hire people who are already doing things we want, but we just want more of, or to have our use case a bit better supported.
We don't really get to dictate what they code; we just pick people who are already coding what we need, to scratch their own itch.
So yes, there is a lot of paid work, but it's got a very individual character to it compared to my job where theoretically I could be coding anything a customer had convinced sales we needed.
This might be wrong, but from what I've heard, the multiple-GPU (nVidia Optimus) issue has more to do with the GPL than with the kernel itself. I had saved this quote but I forgot the source.
> ...won't let closed source drivers use shared memory with open source code or somesuch out of their belief that it would violate the GPL.
While I'm not aware of an "in-depth comparison" of the two systems, if you read the Microsoft Press "Windows Internals" book and some of the important books on Unix architecture, you should be able to draw your own conclusions fairly easily.
"You call BSD a kernel, it technically is a set of APIs"
If you read that in the context of the Mac OS X operating system, then it would be true, as there is a BSD "layer" or "part" in the XNU kernel. So if it had been written in that context, it would have been correct.
On your first paragraph: Linux (the kernel, who cares about applications if the driver doesn't work?) has the luxury of not caring about binary compatibility. Windows can't do that.
I don't think there can be one. Even if one existed, its value would be questionable, since the rest of us are unable to prove or disprove the claims therein, seeing as we don't have access to the NT kernel code. We're stuck with uninformed rants by Microsoft's evangelists, unfortunately.
The XP Professional x64/2003 Server SP1 kernel is available to those involved in teaching/research at higher education institutions. The license (https://www.facultyresourcecenter.com/curriculum/Eula4.aspx?...) would appear to permit such a comparison.
The article (not the comment linked to, which was quite informative) also has this:
> Anyone that’s ever manually compiled a Linux kernel knows this. You can’t strip ext3 support from the kernel after it’s already built any more than you can add Reiser4 support to the kernel without re-building it.
Even 5 years ago (and at least 10 years ago) you could remove ext3 and add another filesystem without rebuilding the kernel. I know Red Hat at least included a helpful Makefile for precisely this purpose.
Rebuilding the entire kernel in order to compile a single kernel module is a well-known habit of early Linux users, following advice from pre-2.x kernel days when loadable modules didn't exist, that seems to have stuck around in the collective mind of the Internet.
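For anyone who hasn't seen the mechanism: a loadable module is just an object with an init and an exit hook, built out of tree against the running kernel's headers and loaded/unloaded with insmod/rmmod - no kernel rebuild involved. A toy sketch of mine (not a filesystem, obviously, but ext3 and friends go through the same machinery):

    /* hello.c - built out of tree with the usual "obj-m += hello.o" kbuild Makefile */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/printk.h>

    static int __init hello_init(void)
    {
        pr_info("hello: loaded without rebuilding the kernel\n");
        return 0;
    }

    static void __exit hello_exit(void)
    {
        pr_info("hello: unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");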
Before distributions switched to initramfs, you had to have at least one filesystem compiled in, for bootstrap reasons, so that the kernel could mount the initrd.
I don't remember exactly when initramfs got introduced, but I remember it was a relatively long time before major distros switched to it. TFA is old enough that this might have been the case back then.
10 years ago we still had mkinitrd in RH (source: I'm a programmer now, but in 2003 I worked for Red Hat as an instructor for these topics). Initial RAM disks still had kernel modules in their gzipped filesystem and were rebuildable without rebuilding the kernel.
It's weird to me that the original blog post put Soma's name in "scare quotes." Soma is the Corporate VP of Developer Division at Microsoft, which means he's in charge of—among other things—Visual Studio, .NET Framework, ASP.NET, the now-dead Expression Studio, and I'm sure a few other things.
He goes by Soma because his full name is Sivaramakichenane Somasegar (really, I looked it up in Headtrax once and remember it, for whatever reason, seven years later). And, let's be honest, that is really hard to spell.
From a developer's perspective, the main problem facing Windows is not the kernel itself -- despite common misconceptions to the contrary. For example, OS X is built on a BSD which has its roots in '60s and '70s OS design, just like the VMS roots of WinNT.
OS X didn't change the world by bringing some great new underlying architecture to the table. In fact, their kernel and filesystem are arguably getting long in the tooth. The value that OS X brought to the table was the fantastic Carbon and Cocoa development platforms. And they have continued to execute and iterate on these platforms, providing the "Core" series of APIs (CoreGraphics, CoreAnimation, CoreAudio, etc.) to make certain HW services more accessible.
There's very little cool stuff to be gained in the windows world by developing a new kernel from scratch. A quantum leap would not solve MS's problem. The problem is the platform. What's really dead and bloated is the Win32 subsystem. The kernel doesn't need major tweaking. In fact, the NT kernel was designed from the beginning such that it could easily run the old busted Win32 subsystem alongside a new subsystem without needing to resort to expensive virtualization (as the original article mentions).
Unfortunately, the way Microsoft is built today, it has a fatal organizational flaw that prevents creating the next great Windows platform. The platform/dev tools team and the OS team are in completely different business groups within the company. The platform team develops the wonderful .NET platform for small/medium applications and server apps while the OS team keeps trudging along with Win32. Managed languages have their place, but they have yet to gain traction with any top-shelf, large-scale Windows client application vendors (Adobe, even Microsoft Office itself, etc.). Major client application development still relies on unmanaged APIs, and IMHO the Windows unmanaged APIs are arguably the worst (viable) development platform available today.
What Windows needs is a new subsystem/development platform to break with Win32, providing simplified, extensible unmanaged application development, with modern easy-to-use abstractions for hardware services such as graphics, data, audio and networking.
This is starting to come to fruition with WinRT, but the inertia in large scale apps is unbelievable.
And he dismissed "Shipping Seven" with hysterical handwaving and pedantic BS arguments that were even wrong. He could not understand what SKUs were, he thought Seven referred to Windows as merely a kernel, like what you build in Linux, and he made some BS comments that only apply to monolithic kernels and ONLY if you compile extensions in instead of loading them as modules...
I did a lot of development on OS/2 applications in the mid 90s, mostly on the Warp 3.0 version. Eventually we switched to Windows NT4 once it became clear that OS/2 had had its fifteen minutes.
I quite liked it at the time, but to be honest I think the wistful "could have changed the PC world completely" stuff I sometimes hear is just rose tinted nostalgia. It was a very good OS for its time; its main competition then was Windows 95, and there was certainly no contest there. But the NT kernel was a different matter, especially after it got a saner UI in NT4. There were no huge advantages to one or the other there (with one exception, see below). Compared to very different modern OSes such as OSX or Linux, OS/2 and NT4 were close siblings.
OS/2's one real Achilles heel, which gave us endless trouble, was the synchronous input queue, shared by all programs that had a GUI (including the OS desktop). The upshot of this was that, if a user-facing program crashed, it was very likely to freeze up the OS and require a hard reboot. When we switched to NT4, the vast improvement in reliable uptime was a breath of fresh air (if nothing to write home about by modern standards). I gather they partially fixed this in OS/2 4.0, but by then the writing was on the wall. OS/2 faded away before the rise of modern malware had really hit its stride, but I suspect the SIQ problem would also have led to all sorts of security issues. For example, look up "shatter attack"; that was bad enough on Windows, but I'm pretty sure OS/2 would have been even more vulnerable to that sort of technique.
While I certainly wouldn't claim that son-of-NT's victory over OS/2 had anything to do with its technical merits, I do think that, at least between those two lines of development, the (slightly) better OS won.
Former OS/2 developer here (I still have an OS/2 t-shirt around somewhere). While the single event queue was a problem, at the time the major competitor was Windows 3.x, which also had cooperative eventing for the UI. NT changed that for the better.
What OS/2 had was multi-threading. If you had to do some operation that might take longer than 1/10th of a second, the guidance from IBM was to put it on its own thread. So by necessity, OS/2 developers became expert multi-threaders.
I think it was Stardock that had an excellent newsgroup reader for OS/2 -- it was multithreaded, so you could queue-up several requests for your newsgroups (alt.binaries.*, {ahem}) and the UI remained responsive and you could go do other things, like fire up GoldenCompass for your Compuserve fix. ;)
As a CS student, I must say that I'm very impressed. A dev kernel from the era where while(true); was absolutely forbidden because of the lack of preemption in Windows. Seems like a piece of history to me.
Would you be willing to tell us more about what your team did, and what the goals and concerns for your system were?
I was a contractor, so I worked on several systems for various customers (banks, mostly).
There were a few approaches - one was multi-threading, another was to divide your task up into small enough pieces to not violate the 1/10th sec. rule, and another was to install a separate product that gave you Message Queueing.
When splitting up your unit of work, you would define your own custom events such that when the event loop came around to processing your events, the granularity was small enough to keep the system responsive. BTW, events were prioritized by the OS, so not all events were treated equally. Same with NT. So you'd think of your application as a state machine, with the transitions being driven by the OS events appearing.
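The Win32 analogue of that pattern (my own sketch, not anyone's production code) is posting yourself a custom message per slice of work, so the queue keeps draining and input/paint events get interleaved with your job:

    #include <windows.h>

    #define WM_APP_WORK (WM_APP + 1)   /* our custom "do one slice of work" event */

    static int g_slicesLeft = 100;     /* stand-in for a long-running job */

    static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
    {
        switch (msg) {
        case WM_APP_WORK:
            Sleep(10);                                  /* pretend this is a small chunk of real work */
            if (--g_slicesLeft > 0)
                PostMessageW(hwnd, WM_APP_WORK, 0, 0);  /* re-queue the next slice */
            else
                PostQuitMessage(0);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProcW(hwnd, msg, wp, lp);
    }

    int WINAPI wWinMain(HINSTANCE hInst, HINSTANCE hPrev, PWSTR cmd, int show)
    {
        (void)hPrev; (void)cmd;

        WNDCLASSW wc = {0};
        wc.lpfnWndProc   = WndProc;
        wc.hInstance     = hInst;
        wc.lpszClassName = L"SlicedWork";
        RegisterClassW(&wc);

        HWND hwnd = CreateWindowW(L"SlicedWork", L"demo", WS_OVERLAPPEDWINDOW,
                                  CW_USEDEFAULT, CW_USEDEFAULT, 300, 200,
                                  NULL, NULL, hInst, NULL);
        ShowWindow(hwnd, show);
        PostMessageW(hwnd, WM_APP_WORK, 0, 0);          /* kick off the job */

        MSG m;  /* the ordinary message loop interleaves work slices with paint/input */
        while (GetMessageW(&m, NULL, 0, 0) > 0) {
            TranslateMessage(&m);
            DispatchMessageW(&m);
        }
        return 0;
    }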
Message Queueing (the current darling of the scalability crowd) has been around a lot longer than people realize. :)
Also understand that for line-of-business applications, once the screen was done painting and you were waiting for user action, the event loop went into "idle" state. So in most cases you only had to worry about timeouts when you were communicating with a server (database or mainframe).
I haven't seen this edition, but if the comments are correct, try and find an original from 1994 (has a shiny cover with big red letters) to avoid the printing errors.
It's written for a general audience, but still has some technical details in it. It's more interesting from a business and personality standpoint - at that level of software development doing things like kicking holes in your office walls becomes a little more acceptable (that would have gotten me fired at any job I've held)
History best viewed as "thank goodness we crawled out of that mud" ... it was not good, not a fun way to program when you had to do any significant task but not freeze the rest of the system; or if you screwed up and accidentally went into a hard loop....
In Windows 3.x the exception was the DOS boxes allowed by 386s and beyond; they and the Win16 subsystem were preemptively scheduled against each other, and a lot of good software targeting Windows deliberately used those boxes, running ancient, decrepit DOS.
OS/2's one real Achilles heel, which gave us endless trouble, was the synchronous input queue
So there was basically one global event loop for the entire OS? Interesting. I can see how that would make the window system a lot simpler to write -- not unlike the reasons why Node.js and other evented web frameworks have become popular.
At the same time it is, as you mention, an approach that would have been doomed to fail spectacularly by the late '90s when it became commonplace for desktop OSs to run unverified code downloaded from the Internet...
Or had a parent/child relationship or owner/owned relationship across every thread, or installed a journal hook, or... Don't get me started on Windows' own non-niceties if you want cross-thread/process window relationships and want to be able to keep your primary UI thread from locking up if the child thread does.
Actually NT was originally supposed to be marketed as OS/2 3.0.
The OS/2 project was a collaboration by Microsoft and IBM. Sometime around 1989-90, the latter was in charge of OS/2 2.0 while Microsoft was working on the next generation. The unexpected success of Windows 3.0 made Microsoft see they could do it on their own without IBM, so NT became the next generation of Windows instead.
IBM's OS/2 2.0-4.0 was a competent 32-bit OS for its era, but NT's design was more modern. The only thing OS/2 had going for it was a smaller footprint (it could run on 8-16MB of RAM while NT practically required 32MB) and a more innovative user interface, but this advantage was basically eliminated when Microsoft ported the superior Win95 UI to NT.
For what it's worth I remember a distinct feeling of being really frustrated with Warp's GUI. I don't remember the specifics, but one thing stuck out - it didn't have something as simple as "arrange desktop icons on the grid", so you'd end up with this mess on a desktop. The GUI might've been innovative, but it was inferior to Windows 95 and IBM didn't seem to care about that.
As I remember it - you'd put your icons about where you wanted them, but they wouldn't be aligned pixel perfect. So you'd right click and see "arrange desktop icons on the grid" and think that must be it. But that would rearrange everything starting at the top left, like modern Windows does. i.e. Rather than doing what you needed, it would destroy ten minutes of painstaking layout.
There was no way to do the assisted alignment you actually wanted. They'd pitched a fabulous, malleable, document-centric desktop. Technically it was all there. They didn't quite get it to the user. People who were fluent in C could probably have done some quick fixes and made it hum.
I think this was in version 2.0, and I watched it go unfixed in 2.1, warp. Can't remember if it was broken in 4, think so. Good example of the way OS/2 dev focused on ticking feature checkboxes (3d icons! a different dock! obscure mainframe compatibility features that mainstream users will never care about!) not users.
Shadows were pretty cool. A shadow was a pointer to an original object (since the GUI was object-oriented). If you deleted the original item, all the shadows disappeared too. Microsoft implemented "Shortcuts", but those were files that pointed to another file, so deleting the original file just left broken shortcuts lying around.
You could also use the Font panel to drop fonts into any application and it would change the UI font. Some apps actually saved that as a preference and remembered it on next startup but others didn't.
I got eComStation running in a VM a while back and I realized that my nostalgia for OS/2 didn't quite match the reality.
I never used it, just read about it in Dr. Dobb's back in the day.
If memory serves me right, it imported the C++ code into a database and the editor was able to manipulate the code AST directly, but it consumed lots of memory, both RAM and disk space.
Maybe someone with more knowledge can fill the gaps.
I seem to remember having OS/2 on a few computers at school (K-8) for some reason, but those were the days that most were running Win95 and there were just a couple Macs that nobody knew what to do with (I eventually got permission from the IT team to "set them up", which was basically just feeding floppies and pressing okay).
I think the MTA (in NYC) still uses it to process subway fares...
He got it up and running at home and at work. He reported (and I recall agreeing after playing with it) that it was a better environment for running Windows/DOS applications than Windows itself was.
Our employer (the US Marines) was not officially interested in the idea ... and so the idea never went past that.
I think "a better Windows than Windows" was an official IBM marketing line for OS/2 for a while.
I used it for a while in the early 1990s as a contractor in an "IBM shop". Literally everything they bought was IBM. From the mainframe to Terminals, printers, PCs, OSes. I remember it being painfully slow.... but the PS/2 PC it was running on was probably underpowered (as is typical, contractors got the oldest hand-me-down workstations).
"[M]ulti GPUs may not have existed when XP was released"
The NT kernel had been shipping for about 10 years by the time XP was released. Windows NT 3.5 and earlier didn't have any kernel-level graphics (though they did have kernel-level drivers); kernel-level GDI came in NT 4 (1995-1996) for performance.
Windows NT was also used by some graphics workstation vendors like Intergraph and SGI, who I presume supported multiple GPUs like most workstation vendors did.
Macs also had actual GPUs (not just frame buffers, but real QuickDraw accelerators) and supported multiple GPUs in the late 1980s - on System 6!
All it really takes is a reasonable display abstraction model/API, and an ability for that to interface reasonably with hardware drivers. Seems Linux could've handled that too - and it wouldn't surprise me if it did, at least theoretically.
I didn't follow Linux's initial development as 386BSD had already come out. I do remember the first accelerated X11 for 386BSD supporting some card with a number like 911. I don't recall that it required explicit kernel support, other than perhaps a bunch of ioctl calls or something like that.
Given that the source code of the NT kernel is apparently available for academic purposes, I wonder if there are university courses where the NT kernel is used as the subject, and students are exposed to this code. Does anyone have that experience?
"Anyone that’s ever manually compiled a Linux kernel knows this. You can’t strip ext3 support from the kernel after it’s already built any more than you can add Reiser4 support to the kernel without re-building it."
Ummm.. Kernel modules anyone?
Or do I have something wrong here? Have I taken something out of context?
This article is just wasted bytes since no one can work with the code or even see it. The author may as well be talking about a fictional kernel. Closed source isn't worthless because of its terrible code quality, closed source is worthless because it is closed.
The guy mentions that it's still possible for academics to get their hands on NT sources. I suppose, however, that it's forbidden to modify or compile them.
At some point in time, NT (3.1) was apparently shipped in source form: there was no binary distribution for the first 4- and 8-processor systems back then and you had to compile the kernel yourself on the target machine (after installing the vanilla NT first).
"During the development of Xen 1.x, Microsoft Research, along with the University of Cambridge Operating System group, developed a port of Windows XP to Xen — made possible by Microsoft's Academic Licensing Program. The terms of this license do not allow the publication of this port, although documentation of the experience appears in the original Xen SOSP paper."
As an outsider, my take is that MinWin is an edition of Windows. Aka something that is marketed and not something that describes the kernel alone in a technical sense.
I've already said too much. I just want to emphasize that the whole thing is quite subtle, and it's hard to make sense of it given only public information. What you should focus on is the documented ABI available on each OS and platform since that's the real place where what we do affects what you do.
For starters, it's ntoskrnl.exe. Dig out the original Windows NT stuff from the early 90s and you can infer Dave Cutler and team's original vision. The fact that Microsoft made hundreds of billions of dollars off it, layering all sorts of legacy chaos atop it, obscures the jewel at the core.
So? NT predates UTF-8. UTF-8 and UTF-16 are both awesome - they provide unicode support.
Windows first shipped with UCS-2 at a time when the rest of the world was stuck with ASCII and various random codepages, then switched over to UTF-16 at around the same time UTF-8 was picking up speed; UTF-16 had the unique advantage of being (mostly) backwards-compatible with UCS-2, whereas UTF-8 would have broken everything. I prefer UTF-8 myself, but that doesn't make UTF-16 a bad choice.
In fact, while for the default ASCII range of characters UTF-16 consumes one extra byte over UTF-8's single-byte encoding, once you get into the "normally-used range" of international characters, UTF-8 quickly jumps to 3 bytes while UTF-16 remains at two.
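A quick way to check the byte counts yourself (C11 string literals; the \u escapes are just 'é' and a CJK character picked for illustration):

    #include <stdio.h>
    #include <uchar.h>   /* char16_t, the element type of u"" literals */

    int main(void)
    {
        /* sizeof includes the terminator, so subtract it to get the encoded length */
        printf("'A'    utf-8: %zu bytes  utf-16: %zu bytes\n",
               sizeof(u8"A") - 1,      sizeof(u"A") - sizeof(char16_t));      /* 1 vs 2 */
        printf("U+00E9 utf-8: %zu bytes  utf-16: %zu bytes\n",
               sizeof(u8"\u00E9") - 1, sizeof(u"\u00E9") - sizeof(char16_t)); /* 2 vs 2 */
        printf("U+4E2D utf-8: %zu bytes  utf-16: %zu bytes\n",
               sizeof(u8"\u4E2D") - 1, sizeof(u"\u4E2D") - sizeof(char16_t)); /* 3 vs 2 */
        return 0;
    }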
Yes there is. It's variable-width without being backwards compatible with ASCII. Even worse, it's variable-width but lets people assume it's fixed-width. UCS-2 was okay. UTF-16 is a hack.