Right, and there was even Multics in 1969. I was giving some benefit of the doubt, but I feel like Ingalls's quote was pretty dodgy even in 1981. Maybe it was tongue-in-cheek, or meant to be intriguing and controversial just to get readers? Maybe there is some point of view that I'm completely missing?
I don't think you're missing anything. That was a very Smalltalk way of thinking at the time. However, it flies in the face of separation of concerns, security, experience, .... It's a colorful quote, but the marketplace of ideas hasn't been kind to it.
The hardware should be clean and the language should be able to do everything; there should be no OS/user boundary. There is no reason we can't get back to and surpass this previous state.
"The previous state," circa the era under discussion, was punching in programs via the front panel and writing my own I/O drivers for the disk, keyboard, and a 16x64 monochrome/composite display. Being that close to the hardware helped me learn assembler, but I would not want to go back there. I would definitely rather let the Linux crew write my IP stack and WiFi driver. And as far as my recent embedded-system efforts go, I'm pretty stoked running C#/WinForms on the Pi. But then again, I don't use the OS for much aside from the UI; I use a custom kernel for running the attached machinery.
Why do you say that? What does it mean to surpass the previous state? What would it do that is better than what we have today? What does it mean to not have an OS/user boundary? I don’t know what you mean if you don’t elaborate.
Note: Ingalls's quote was about the programming language in general, and in the context of home (desktop) computers. Things are different, and my reply here does not apply, if we're talking about microcontrollers like in the Toit article.
> There is no reason we can’t get back to and surpass this previous state.
Yes there is. There are a whole bunch of extremely critical reasons to have an OS and to separate it from a language, which is why we have them and why we’ve always had them (on desktop machines). I already gave some reasons above, but the reasons in my Nintendo story are some of the least important reasons there are today, and they’re still pretty important.
Here are a few other reasons:
Security. Some processes should be allowed to have special access and do things that most processes cannot. Think about what you’re suggesting if you remove the OS/user boundary: it means that daemon processes written by other people have root access to your system. You do not want that no matter what you claim to want in a programming language.
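To make the security point concrete, here's a toy sketch (my own illustration, not any real kernel's API) of the kind of check a kernel performs on the privileged side of the boundary. The whole point is that user code cannot bypass or reimplement it; erase the boundary and every daemon effectively passes this check:

```python
# Toy model of a kernel-side access check. All names here are
# illustrative, not a real OS interface.

ROOT_UID = 0

class Process:
    def __init__(self, uid):
        self.uid = uid

class Resource:
    def __init__(self, owner_uid, world_readable=False):
        self.owner_uid = owner_uid
        self.world_readable = world_readable

def kernel_check_read(proc, res):
    """The check lives on the kernel side; user code can't skip it."""
    if proc.uid == ROOT_UID:
        return True               # privileged processes get special access
    if proc.uid == res.owner_uid:
        return True               # owners may read their own resources
    return res.world_readable     # everyone else: only if world-readable

daemon = Process(uid=1001)        # somebody else's daemon, not root
my_secrets = Resource(owner_uid=500)

print(kernel_check_read(daemon, my_secrets))      # False: boundary holds
print(kernel_check_read(Process(ROOT_UID), my_secrets))  # True
```

With no OS/user boundary, there is no privileged side for `kernel_check_read` to live on, which is exactly the "every daemon has root" problem.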
Management of shared resource contention is something an OS should handle. Do you really want to have to write your program to play nice with the network, hard drive, and GPU? I don't; it would automatically add months or years to any development project, even with libraries and language features to support it, because it would force you into an asynchronous programming model with a responsibility to handle a large number of error conditions (most of which are out of your control).
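As a rough sketch of what that burden looks like (my own toy example, not a real driver model): without an OS arbiter, every program has to carry queueing and busy/retry logic for each shared device itself, and handle the "device busy" case asynchronously:

```python
from collections import deque

# Toy arbiter for a single shared "disk". Without an OS, logic like
# this (plus error handling) would live in every program.

class DiskArbiter:
    def __init__(self):
        self.queue = deque()
        self.busy = False

    def request(self, name):
        self.queue.append(name)      # program asks for the device

    def grant_next(self):
        """Grant the device to the next waiter, or None if busy/idle."""
        if self.busy or not self.queue:
            return None
        self.busy = True
        return self.queue.popleft()

    def release(self):
        self.busy = False

arb = DiskArbiter()
arb.request("editor")
arb.request("backup-daemon")
print(arb.grant_next())   # 'editor' gets the disk first
print(arb.grant_next())   # None: busy, the backup must wait and retry
arb.release()
print(arb.grant_next())   # 'backup-daemon'
```

And this is the easy part; real contention also means timeouts, partial failures, and fairness, which is exactly the asynchronous error-handling swamp described above.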
The OS handles virtual memory paging, so you would be on your own for providing a memory system that can have a resident size greater than available RAM. Not only that, every process would be on its own; there would be no shared paging file. (And when you think about the paging file, don't forget security.)
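Here's a minimal sketch of what each process would have to reimplement on its own: a demand-paged "memory" whose resident set (frames) is smaller than the address space, with FIFO eviction to a private backing store. (This is an illustration of the mechanism, not how any particular OS implements it.)

```python
from collections import OrderedDict

# Toy demand-paging model: resident frames in "RAM", evicted pages
# spill to a per-process "paging file".

class PagedMemory:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.resident = OrderedDict()   # page -> data, frames in RAM
        self.backing = {}               # page -> data, the paging file
        self.faults = 0

    def touch(self, page):
        if page in self.resident:
            return                      # hit: page already in RAM
        self.faults += 1                # page fault: bring page in
        if len(self.resident) >= self.num_frames:
            victim, data = self.resident.popitem(last=False)
            self.backing[victim] = data  # evict oldest frame to "disk"
        self.resident[page] = self.backing.pop(page, b"")

mem = PagedMemory(num_frames=2)
for p in [1, 2, 3, 1]:
    mem.touch(p)
print(mem.faults)              # 4: page 1 was evicted, then refetched
print(sorted(mem.resident))    # [1, 3]
```

Note the `backing` store is per-instance here, which is the point: with no OS there's no shared, access-controlled paging file, just N private ones.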
Other simpler reasons include program bootstrapping (loading and execution), shell & file navigator access, shared system settings (display, audio, network, etc.), temp file creation, etc., etc., etc.
The difference between embedded devices and desktop machines is another good reason not to bake the job of the desktop OS into a programming language. So is the fact that there is more than one programming language - even at its simplest, the OS boundary makes a great language-agnostic interface. (Why should every language implement its own storage, networking, and display? Wouldn't that be a complete waste?)
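To illustrate the "language-agnostic interface" point: syscalls like read(2)/write(2) operate on raw file descriptors, and every language just marshals arguments across that same boundary. Here's the idea from Python, using only the standard `os` module; a C, Rust, or Go program would hit the same kernel contract:

```python
import os

# A pipe is a kernel object addressed by file descriptors - the same
# interface regardless of which language created it or reads from it.
r, w = os.pipe()
os.write(w, b"hello from userland")   # thin wrapper over write(2)
os.close(w)
data = os.read(r, 64)                 # thin wrapper over read(2)
os.close(r)
print(data)                           # b'hello from userland'
```

No language had to reimplement storage or networking to do this; the kernel's descriptor-based interface did the heavy lifting.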
I can’t think of any good reasons why there should be no OS/user boundary, so that is my question: why do you want that? What good would it do? How would you handle virtual memory, shared resource contention, and security, if there was no OS/user boundary?
Sure, that’s effectively an embedded system, which puts it in the same category as a microcontroller. Like I said, my reply was about normal desktop machines, and not microcontrollers or embedded systems.
I have to admit it’s interesting in the context of virtualization, where deploying a program to a unikernel virtual machine might be perfectly fine for a lot of programs. In that case, some host OS is still handling security and resource contention, so this seems a little like ducking the question.
The unikernel design in practice does not put the kernel into the programming language either, it just allows compiling the kernel and the language together. It still has an OS/user boundary. Security is either non-existent or very difficult with unikernels. Running multiple programs at once is tricky.
“unikernels are unsuitable for the kind of general purpose, multi-user computing that traditional operating systems are used for. Adding additional functionality or altering a compiled unikernel is generally not possible and instead the approach is to compile and deploy a new unikernel with the desired changes.”