Plan 9 from Bell Labs in Cyberspace (bell-labs.com)
865 points by __d on March 23, 2021 | 261 comments


A lot of people miss the fact that Plan 9 was a real distributed operating system. It's not just UNIX with a couple features ("ooh everything is a file" "ooh UTF8"). You can effortlessly execute any program across multiple hosts on a network. You can use any resource of any host on the network, including files, processes, graphics, networks, disks. It all just works like magic, using a single message-oriented protocol.
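
For a rough feel of what that looks like in practice, here is a minimal C sketch (the host name "cpu1" is made up for illustration, and real setups would normally add authentication): dial another machine's file service, graft it into your namespace, and from then on its resources are just files.

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char buf[128];
        int fd, n;

        /* dial the remote machine's 9P file service and graft it into our namespace */
        fd = dial(netmkaddr("cpu1", "tcp", "9fs"), nil, nil, nil);
        if(fd < 0)
            sysfatal("dial: %r");
        if(mount(fd, -1, "/n/cpu1", MREPL, "") < 0)
            sysfatal("mount: %r");

        /* from here on, the remote machine's resources are ordinary files */
        fd = open("/n/cpu1/dev/sysname", OREAD);
        if(fd < 0)
            sysfatal("open: %r");
        n = read(fd, buf, sizeof buf - 1);
        if(n > 0){
            buf[n] = 0;
            print("remote host calls itself: %s\n", buf);
        }
        exits(nil);
    }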

If Linux worked like this today, you would not need Kubernetes. You could run systemd on a single host and it would just schedule services across all the hosts on the network. Namespaces would already be independent and distributed, so containers would only be used for their esoteric features (network address translation, chroot environment, image layers). Configurations, files, secrets, etc would just be regular files that any process in any container on any host could access using regular permissions models. About 50 layers of abstraction BS would disappear.

I think because nobody's actually seen how it can work, they can't imagine it being this simple.


Yes, and it's helpful to remember why it's a distributed system. Plan 9 was created to support people working together in groups at the project or department level. The Plan 9 creators - the original Unix guys - liked the idea of a time-shared computer, where there is just one system to administer and everyone can easily access all the files and other resources. Then it became feasible to use many computers instead of just one, but they wanted to use them with not much more administrative effort than a single time-shared computer, and no additional barriers to sharing files etc. So in the original Plan 9 installations the computers used as terminals were stateless - you could walk up to any terminal and log in to your own customized environment that mounted just the file systems and other resources you wanted.

Also, they made use of specialized computers - the ones with nice displays were terminals, there were compute servers with powerful CPUs and file servers with big disks. Some computers were quite specialized, like the ones with WORM drives that backed the file server's daily dump, providing seamless automatic backups and even a sort of version control (a role later taken over by Venti).

Now Plan 9 lives on, used (as far as I know) mostly by lone individuals. So now the Plan 9 terminal, file server, and compute server usually all run on the same computer. It works, but it's not the original vision.

I think one of the reasons Linux but not Plan 9 took off, besides licensing, is that this vision of a project-scale distributed system fell out of style. Many of the people who adopted Linux in the 1990s wanted a largely self-contained computer they could run themselves; they didn't want a terminal to connect to a distributed system. The original Plan 9 stateless terminals don't really fit in a world where everyone is carrying around their own laptop.

So now we have a world with a lot of mostly self-contained individual computers that use far-away cloud services run by huge corporations. The intermediate scale organized around projects and small groups isn't explicitly supported by the computer systems themselves. Plan 9 can live on in this world, but it's not the world it was originally designed for.


Part of that magic is the trust in the network computer. This could work very well for a corporate setting with thin clients working with a distributed cluster of network services where everything is owned by the corporation.

I'm not so sure that this model of trust works with the way computers have evolved since then.

Comparing the issues that e.g. X11 has with the modern workarounds for direct user IO in games, I also wonder how the security model and composition of file layers could negatively impact the experience.

Taking the ideas of Plan 9 as inspiration, the more realtime elements could be filtered in the kernel and message-passed to other processes under a single centralized security model. That might also include exposing shared memory via a memory mapped file, or possibly via a higher level message passing abstraction.


> Part of that magic is the trust in the network computer

All network connections are authenticated. Thanks to the way everything in Plan 9 is implemented, using it on a network is closer to how a VPN works: your "view" of the network will be of only trusted computers, and going to the outside world will go through a machine that can act as a firewall.


> That might also include exposing shared memory via a memory mapped file, or possibly via a higher level message passing abstraction.

Distributed shared memory needs quite a bit more than the simple "read" and "write" primitives that something like 9P provides. You basically need to replicate a low-level coherency protocol in software. Of course, expect it to be quite slow.


> I think because nobody's actually seen how it can work, they can't imagine it being this simple.

What's a good way to try it? Cluster of raspberry pi's, or just any given home-lab setup?



Single point of failure comes to mind though.


Why? There isn't a centralized message queue. It's just that everything uses the same abstraction and is presented as a filesystem or a file.


> systemd on a single host

How would a HA Plan 9 deployment handle disappearance of the "leader"?


The way I understand it, Plan9 provides the basics, so you can delegate to external nodes over 9P. But it doesn't provide an orchestration solution, so the answer would be that an "HA Plan 9 deployment" is not a thing - you'd need an extra layer.

Unless I missed something while researching P9 years ago.


So the argument then goes something like:

An HA k8s-esque (or Heroku-esque) platform is more easily built, understood, and operated with Plan 9 because it comes out of the box with many of Docker's (and Swarm's?) features.

Is that right?


The main argument is it comes with primitives that make most of the orchestration systems' features redundant, because all the components of regular applications can just interoperate over a single unified messaging protocol and API. To consume some custom service, you just read/write a file. To expose a custom service, you just create a fileserver. To do wacky networking, you create a union of files. No more masses of abstractions and glue to tie pieces together because they all speak the same language.

In terms of HA, the implementation will be up to your preferred architecture (master/slave, master/master, p2p). Plan9 doesn't schedule services itself, so systemd would need to be modified to adopt one of those architectures. But it wouldn't be a whole lot of work, and it could reuse Plan9's abstractions for most of it. Etcd's key/value store becomes just a filesystem, and Paxos/Raft could be implemented either as a network driver or a userland app, which, combined with a union fs, means you just manipulate a single directory of files. You don't even need to futz around with TLS, and again: filesystem permissions.

It's not a 1:1 replacement for K8s, but I bet about 80% of the codebase would go away, and most (if not all) of the abstractions.
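
To make the "expose a custom service, you just create a fileserver" part concrete, here's a rough lib9p sketch (the service name, mountpoint, and file contents are made up; a real service would generate the contents dynamically and handle writes too):

    #include <u.h>
    #include <libc.h>
    #include <fcall.h>
    #include <thread.h>
    #include <9p.h>

    /* answer reads of /mnt/status/status with a status string */
    static void
    fsread(Req *r)
    {
        readstr(r, "service is up\n");
        respond(r, nil);
    }

    static Srv fs = {
        .read = fsread,
    };

    void
    threadmain(int, char**)
    {
        fs.tree = alloctree(nil, nil, DMDIR|0555, nil);
        createfile(fs.tree->root, "status", nil, 0444, nil);
        threadpostmountsrv(&fs, "statusfs", "/mnt/status", MREPL);
        threadexits(nil);
    }

Any client - local or across the network - consumes it by reading /mnt/status/status like any other file, which is the whole point.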


Surely they designed in failovers?


No, they did not.

Plan 9 was designed long before any high availability/CAP theorem/distributed databases lessons were learned: it embodies the Unix mindset of reliably available nodes talking over tcp.

Failover/distributed consensus/orchestration/load balancing/network split tolerance/replication could be built on top of 9P servers, but all of these concepts are pretty alien to Plan 9 itself.


Twitter announcement: https://twitter.com/plan9foundation/status/13743504723168174...

Plan 9 Foundation: https://p9f.org/

Wikipedia: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

There's still a pretty active community around Plan 9, too.

9Front, a popular fork of Plan 9: http://9front.org/

Interviews with some Plan 9 community members: https://0intro.dev/

Some videos: [A Tour of Acme](https://www.youtube.com/watch?v=dP1xVpMPn8M), [Peertube channel of sigrid's](https://diode.zone/video-channels/necrocheesecake/videos)


Plan 9 is also participating in GSoC this year. http://p9f.org/wiki/gsoc-2021-ideas/index.html


Rob Pike filled all the application spots himself.


with assistance from Russ Cox


I've seen Plan 9 in GSoC multiple times, now.

I personally learned about it from GSoC many years ago.


https link timing out for me http://p9f.org/


Huh. Ken Thompson and Rob Pike invented UTF-8. TIL.

https://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt

>So we went to dinner, Ken figured out the bit-packing, and when we came back to the lab after dinner we called the X/Open guys and explained our scheme. We mailed them an outline of our spec, and they replied saying that it was better than theirs (I don't believe I ever actually saw their proposal; I know I don't remember it) and how fast could we implement it? I think this was a Wednesday night and we promised a complete running system by Monday, which I think was when their big vote was.

>So that night Ken wrote packing and unpacking code and I started tearing into the C and graphics libraries. The next day all the code was done and we started converting the text files on the system itself. By Friday some time Plan 9 was running, and only running, what would be called UTF-8. We called X/Open and the rest, as they say, is slightly rewritten history.
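
The bit-packing itself is small enough to sketch from memory: the leading byte's high bits encode the sequence length, and each continuation byte carries six payload bits. Shown here only for the 1-3 byte range that covered Plan 9's 16-bit runes at the time (Plan 9's libc wraps this up as runetochar):

    /* encode one code point r into p; returns the number of bytes written */
    int
    utf8encode(unsigned long r, unsigned char *p)
    {
        if(r < 0x80){           /* 0xxxxxxx */
            p[0] = r;
            return 1;
        }
        if(r < 0x800){          /* 110xxxxx 10xxxxxx */
            p[0] = 0xC0 | (r>>6);
            p[1] = 0x80 | (r & 0x3F);
            return 2;
        }
        /* 1110xxxx 10xxxxxx 10xxxxxx */
        p[0] = 0xE0 | (r>>12);
        p[1] = 0x80 | ((r>>6) & 0x3F);
        p[2] = 0x80 | (r & 0x3F);
        return 3;
    }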


Plan 9 elicits in my head the same kind of thoughts as Lisp -- I find them both extremely appealing on an intellectual level, and can't help but wonder why there isn't more of them around in the "real world".

As a software developer, my main use of my computer is writing / building / running code and surfing the web, mostly in read-only mode but also to interact with others via stuff like slack. I have always wondered how difficult it would be to make Plan 9 a viable platform for these requirements, and why this is not so today (or maybe it is and I just don't know where to look): does the difficulty lie in porting programs that already run on other operating systems? Are there other, deeper reasons? Is it possible today to run stuff like vim, gcc / g++, python, java, node, zig, rust, firefox on Plan 9? If not, is it possible to port these, or are there fundamental architectural reasons against it?

Note: I am willing and happy to try other paradigms, such as acme for editing, but also find it quite baffling that, if it is technically possible, you could not install vim / emacs / vscode alongside acme.


> Can't help but wonder why there isn't more of them around in the "real world".

I think the biggest strength (and weakness) of the ideas both present in LISP and Plan9 is the consistency and internal integration. And, I see two big challenges with that.

One is technical: It is not that easy to reinvent everything and do it in such a way that makes it more consistent than existing systems. If we believe Conway's law, such an effort would require a team as small as possible, optimally just a single person. Note that Plan9, for example, does not fully integrate: the programming language and standard library are not composed of the same building blocks as the underlying system; there is a divide there.

The second is economical / political: While such consistency and internal integration is desired by users and developers, it is not very beneficial to business. Imagine if all components were actually integrated with one another. How would management divide that into projects? How would you make marketable products from it? How would you implement your branding and vendor lock-in? Where would SaaS and subscription models fit in?


There is a port of vim for Plan 9[1]; I think it should be included in 9front as well.

[1]: https://vmsplice.net/9vim.html


Acme >>>>>>>>>> vim.

>Gcc/g++

Just get plan9's C.

>Python

They have mercurial, so yes.

>Node, Firefox.

Avoid that, seriously.

You have a Go compiler, BTW.


I cannot tell if you are responding tongue in cheek (in what seems to be a Plan 9 tradition) or being serious. My points are not that any of these tools are better or worse than the others; I am trying to say that I would like to play with Plan 9 as a working environment, but cannot because it lacks most of the tools I want / need to use. I don't use Go. I use C, some C++, and Typescript (so, Node). Why would I have to avoid those? Most importantly, if I have to (seriously) avoid Firefox, what is the alternative that will allow me to surf the modern web, which (again) I NEED to do daily?

If the answer is "nope, Plan 9 is not intended for this type of user" then I guess that's fine, although sad to me, because I cannot play around with something that is appealing to me. And, again, if this is the case, it would be interesting (to me) to understand WHY: why is there no Firefox (or any modern browser) for Plan 9. From another comment, I learned there IS vim ported to it, so I guess that means it is fundamentally possible to port medium-complexity software. Maybe nobody else cares about having these things in Plan 9, which again, is fine. Cheers.


Linux has almost 30 years of active development behind it; thousands of people wrote projects for it, which were then used by others. "Firefox" is not a simple application, it depends on a multitude of other software and libraries to run. Let's see just the runtime dependencies from the Debian Stable `firefox-esr` package:

    libatk1.0-0
    libc6
    libcairo-gobject2
    libcairo2
    libdbus-1-3
    libdbus-glib-1-2
    libevent-2.1-6
    libffi6
    libfontconfig1
    libfreetype6
    libgcc1
    libgdk-pixbuf2.0-0
    libglib2.0-0
    libgtk-3-0
    libpango-1.0-0
    libstdc++6
    libx11-6
    libx11-xcb1
    libxcb-shm0
    libxcb1
    libxcomposite1
    libxdamage1
    libxext6
    libxfixes3
    libxrender1
    zlib1g
    fontconfig
    procps
    debianutils
And that's just the runtime dependencies; I'm not sure what you need to compile it in the first place. Don't forget that these may have their own dependencies and so on... And for comparison, the runtime dependencies of the `nvi` package, which is probably closer to that vim port than the normal `vim` package in Debian:

    libc6
    libdb5.3
    libncursesw6
    libtinfo6
Much less software overhead.

If you want the Firefox experience from your typical Linux distribution on Plan 9, somebody needs to either port all these libraries, or provide functional equivalents. The problem is that for the longest time the Plan 9 license was not favorable for anyone wanting to develop it further - that's why we have all these forks and re-implementations around, which increases fragmentation. Hopefully now that Plan 9 has been fully opened and there's a Foundation behind the project, people will pick it up again and start porting stuff to it. Forks could be folded back into the main project if their developers want to. But don't expect Firefox right away just yet.

Also, don't forget that Linux had 20 years of development in its UI and UX department; Plan 9 didn't. You can see on the Plan 9 Foundation GSoC page[1] that there are plans to work on this part of the OS. When the work is done, hopefully more people will be attracted to the platform itself, which will mean more hands to work on porting or developing other software, and there's a chance that we can snowball an alternative ecosystem from there.

[1]: http://p9f.org/wiki/gsoc-2021-ideas/index.html


I vaguely recall there was some project a while back to get firefox running in the frame buffer. So the xorg dependencies can possibly be removed/replaced. Dbus is probably also not technically required to run firefox.

For removing xorg dependency from a web browser that isn't Firefox, I believe there is in fact a project for running webkit in a framebuffer, called WPE Webkit [0]

(I'm well aware that this doesn't mean you can run these on p9, just pointing out that xorg isn't a hard dependency)

[0]: https://wpewebkit.org


Thank you so much for this answer, which fits my question perfectly: it provides a plausible reason for the meager selection of packages for Plan 9 (the reason being the fragmentation of the ecosystem). I second your hopes that this situation will change in the (near) future.

And just to clarify, I used Firefox only as an example of a modern open source web browser. I don't really much care which browser it is, as long as it is relatively modern.


Plan9 and Inferno are shining examples of not being opened early enough. They could have conquered the world, but their licences were not open enough at the critical time, and the technically inferior but open Linux ate their lunch, along with the dinner.


I don't understand how "everything is a file" could apply to everything in practice. For example, Linux has ioctl(), which in practice is like a side channel. Granted, Linux doesn't apply this API philosophy too thoroughly.

I guess the "everything is a file" might have multiple meanings. For example:

(1) Everything is represented by a (file) descriptor

(2) Same as (1) and the descriptor has a file-like API (think read(), seek(), write(), etc.)

(3) Everything is a "byte-addressable blob of bytes"

Meaning (1) is OK. But it doesn't say anything about the API the (file) descriptor itself would use. It could be a fixed set (like meaning (2)), or be variable depending on something else (like the (file) descriptor type).

Meaning (2) looks too restrictive and inefficient to me and is the one I really have trouble accepting as a general OS primitive.

Meaning (3) surely can't be used for everything in practice, right? It's just too generic, like "every computer architecture can be emulated by a universal Turing machine." And it also seems too inefficient. But it could be very useful if the blob of bytes had an API like (2) or any other, including having an API depending on the "file type".

Is option (3) that folks are meaning when talking about "everything is/should be a file"?

EDIT: formatted the meaning list correctly.


It's more like #2, but without ioctl() and similar brain-damaged abortions. If you open /dev/mouse, for example, you get a stream of events (encoded as blobs of bytes), which you can get by read(), not a byte-addressable blob of bytes.

But lots of things in Plan9 present an interface that isn't just a single file. The 8½ window system, for example, presents not only /dev/mouse but also /dev/cons (character-oriented I/O), /dev/bitblt (to which you write requests for 2-D accelerated drawing operations), /dev/rcons (keystroke-oriented character I/O), and /dev/screen (the contents of the window—which is just a byte-addressable blob of bytes). http://doc.cat-v.org/plan_9/4th_edition/papers/812/ explains in more detail.

And, of course, file storage servers similarly provide an arbitrary tree of files, and when you remotely log into a CPU server, you mount your local filesystem on the CPU server so that processes running on the CPU server can access your local files transparently—including /dev/screen, /dev/mouse, and /dev/bitblt, if you so choose.
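
As a concrete taste of that interface, here's a C sketch of reading /dev/mouse directly (field layout quoted from memory: each event is an 'm' followed by four 12-character decimal fields for x, y, buttons, and a millisecond timestamp):

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char buf[1+4*12];
        int fd, x, y, buttons;

        fd = open("/dev/mouse", OREAD);
        if(fd < 0)
            sysfatal("open /dev/mouse: %r");
        while(read(fd, buf, sizeof buf) == sizeof buf){  /* one event per read */
            x = atoi(buf+1+0*12);
            y = atoi(buf+1+1*12);
            buttons = atoi(buf+1+2*12);
            print("x=%d y=%d buttons=%d\n", x, y, buttons);
        }
        exits(nil);
    }

And because /dev/mouse is just a file, the same program works unchanged if that file has been imported from another machine.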


That's actually really cool. This democratization of useful information probably opens the door for lots of interesting interactions between distributed systems.

Even on my local Linux system I wouldn't know how to get hold of the mouse data without using an X Windows API (or SDL on a console only app before X is run).


I agree! I think that kind of transparency and openness to hacking is very important.

You may be interested in https://gitlab.com/kragen/bubbleos/blob/master/spikes/intell..., which reads from /dev/input/mice, although I haven't yet implemented support in https://gitlab.com/kragen/bubbleos/blob/master/yeso/yeso-fb.....


There are devices under /dev/input which are quite easy to decode for raw mouse data.
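
For example, /dev/input/mice carries PS/2-style 3-byte packets (button bits, then signed dx/dy); a rough C sketch, assuming you have permission to read the device:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    int
    main(void)
    {
        unsigned char pkt[3];
        int fd = open("/dev/input/mice", O_RDONLY);

        if(fd < 0){
            perror("open /dev/input/mice");
            return 1;
        }
        while(read(fd, pkt, sizeof pkt) == sizeof pkt){
            int left = pkt[0] & 1;
            int right = (pkt[0] >> 1) & 1;
            int dx = (signed char)pkt[1];
            int dy = (signed char)pkt[2];
            printf("dx=%d dy=%d left=%d right=%d\n", dx, dy, left, right);
        }
        close(fd);
        return 0;
    }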


And what's the advantage vs an API via function calls? It's the same thing no? Calling 0x42 with a given ABI vs interpreting bytes as an ABI seems oddly similar.


I'm not overly familiar with Plan 9, but a significant advantage of the filesystem as an API is that it composes well. You can put any service inbetween that consumes the existing file and provides a new file with the same API contract, but different behavior (e.g. a /dev/screen that's cropped to a certain area, or that inverts colors on everything drawn into it). Then you can make a new mount namespace where the new file is put in the place of the old file, so that the actual process consuming it has no idea that someone is sitting in the middle. With a traditional shared object, you'd have to engage in LD_PRELOAD shenanigans to achieve the same, or if the library is statically linked and does not have any hooks, you're straight out of luck.


In other words, it gives you a system wide way to override any APIs. As long as you are compatible with the ABI of course.


At the cost of probably making all APIs a bit less efficient. Now I wonder if any Plan 9 derivative offers a 3D rendering API?


No, and even for 2D it lacks acceleration.


Not true:

http://www.cs.cmu.edu/~412/lectures/2009-10-23_radeon.pdf

>>Provide 2D acceleration via the 3D engine.


I stand corrected, however 2009 is already past the time I was paying attention to Plan 9.


As explained in the abstract of the paper I linked above, 8½, the Plan 9 window system, has supported 2-D acceleration since at least 01992. I also mentioned this in my comment, which additionally outlines how this acceleration support works across networks.


[From memory, so take with a grain of salt]

a) tooling and familiarity

b) it's leveraged for modularity, composition and access control. Normal unix basically has no modularity and little composition beyond piping to stdin (which is of limited applicability). And of course no useful access control mechanism to speak of. The last 5 decades mostly just added layers of shitty hacks that don't work and that no one understands anymore (by contrast e.g. the original unix ownership and permissions model was misdesigned but at least possible to grasp).

Plan9 does substantially better in both regards via clever use of a proper union file system (which incidentally makes another horrible hack superfluous: symlinks). If you want your shell, or some other executable, to find some executable, you mount it into its /bin -- there is no $PATH. If you want some process or user to have access to a resource (file, device, server in the network, ...) you mount it into their filesystem. This gets rid of a lot of problems and special purpose solutions.
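
A sketch of what that looks like from a Plan 9 C program (paths are illustrative; the same thing is normally done with the bind command in the shell or a profile script):

    #include <u.h>
    #include <libc.h>

    void
    main(void)
    {
        char *argv[] = {"ls", "/bin", nil};

        /* union extra directories onto /bin for this process and its children;
         * there is no $PATH to maintain */
        if(bind("/usr/glenda/bin/rc", "/bin", MAFTER) < 0)
            sysfatal("bind: %r");
        if(bind("/n/cpu1/amd64/bin", "/bin", MAFTER) < 0)
            sysfatal("bind: %r");

        /* program lookup is now just opening /bin/ls in the union */
        exec("/bin/ls", argv);
        sysfatal("exec: %r");
    }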


Isn't the same possible now with mount namespaces in Linux?


You mean union mounts? Typically doing a union mount is a privileged operation, while setting $PATH is not, so you cannot really replace the latter with the former. A bigger issue is that, even if mounting various things on /bin were unprivileged, removing $PATH from Linux would break an enormous amount of existing software. Similarly with file permissions. So union mounts don't permit you to simplify Linux in that way, even though they did permit you to simplify Plan9. (Incidentally, Plan9 did have the traditional user/group mode bits.)


An advantage is, you don't need any API or function calls. You can inspect or manipulate anything from the shell ad lib using cat, echo, and the usual commands. Plan 9 doesn't have ioctl() calls because it's not meaningful to send pointers across the network. Instead each device and service provides a file called ctl (or something similar) to which you can read or write text - using cat, echo etc. So it's easy to write small test scripts or even whole applications, in the shell or any language you like.


Functionally it's the same. The functions called could live in the kernel (a system call), in the process itself or in a shared lib loaded into the process address space.

Interpreting bytes is message passing and this communication happens through the handle/descriptor to the file/object/whatever. As above, this handle could have come from the kernel, another process or the executing process itself (the process opened some file or whatever).

I think the message passing version happening through a handle can be more versatile, as you could send this handle over to another process, or copy it and distribute it to other processes in an easier way than injecting/changing a (presumably C) ABI in a process's address space.


Well, as I said, one advantage is that processes running on the CPU server can access your local files transparently—including /dev/screen, /dev/mouse, and /dev/bitblt, if you so choose.

That means you can run graphical applications on the CPU server.


I think "everything is a file" also means: everything is addressable by a file path. Per-process namespaces are part of what makes this possible.

In a way it's similar to HTTP REST, which is also organized by file paths, except instead of the HTTP verbs GET, POST etc. you get open, read, write as your verbs.

A side channel is then just a directory entry.


It may be useful to see Russ Cox's "Tour of ACME" video [1]. ACME is a Plan 9 text editor, which applies the "everything is a file" philosophy pretty deeply. It doesn't answer your ioctl question (that I remember), but maybe it'll give you a better example of how other things can be accessed as files.

[1] https://www.youtube.com/watch?v=dP1xVpMPn8M


It's more or less 3. Not everything is a blob but everything is a stream of bytes. I think the confusion for most of us (me) initially is that a file means a blob that you read in, change and then write out more or less atomically. But the Unix originators understood files as streams. So, for example, the input stream from your mouse is a file. Everything is a file really means everything is a stream.


It's basically your second option. In Plan 9, everything is a file server that presents a file system interface that speaks the 9P protocol.


I agree with you. A type amounts to the sum of operations that are valid on an object conforming to that type.

A file object is a very basic, general type, that allows open, read bytes, write bytes, close, maybe seek, maybe some ops are restricted (read-only, write-only) etc.

I don’t think it is generally appreciated how far it gets you to have a unifying simple interface. You can always add a complex one, you know?


It's interesting both of you had different answers. I realize that the streams interface was a Sys V thing, but would Unix's "everything is a file" generally be option (2) in the OP's comment then? I feel like I've heard the phrase so much and just always assumed it was (2).


If Unix is "everything is a file", then Plan 9 is "everything is a network filesystem".

If you have a laptop and a desktop, and your desktop has a printer attached, can your laptop just print to it? In Linux, you have to set up CUPS, open network ports, download drivers, and generally set up both machines to be able to "talk printers". In Plan 9, your laptop just opens the desktop's printer file over the network, and prints.


Plan 9 provides a generic framework for writing server-client programs based on the 9p message-oriented protocol; nothing more, nothing less! In Linux, there is no such widely used generic equivalent: every server-client application does that in its own way. Plan 9 effectively forces everybody providing a service to understand 9p and to translate 9p message requests into whatever they mean for that service (what does write(bytes) mean for a printer? What does write(bytes) mean for a screen? What does write(bytes) mean for a GPU? Every one of those has to understand a 9p write() message and translate it into whatever it means for that service). Plan 9 then uses that power to forward requests over the network to remote servers, making it easy to use resources in a distributed environment.


Google's Fuchsia is remarkably similar to this.

There are per-process namespaces/"filesystems". Their 9P equivalent is FIDL, and the main system API is defined by FIDL protocols/APIs.


Linux has had CORBA, DCOP and now D-Bus, as have other UNIX variants.


The latter does not remove the laptop's need for a "printer driver" to actually render something. But the driver can be unified (TTY or PostScript), and the printer object (not exactly like a file) has, well, methods to report options and set options, like the paper tray to use.

Or so I understand.

If we could imagine a network filesystem (spanning many hosts) full of Erlang objects which can receive and send data, it would be somehow similar.


I think, in Plan9 philosophy, if the desktop has a driver and it is started in the current namespace, it is represented as a file too (like /dev/my-printer/print-pdf). But it quickly becomes confusing for me:

1. Nowadays we can represent the essence of this approach with FUSE and SSH/SSHFS. Sadly, nobody does: local servers (i3, dbus, ..., vscode) use domain sockets and client executables, probably due to the lack of private namespace support by default.

2. The difference between hypothetical "/dev/my-printer/print-pdf" and almost-like-real-world "my-printer-print-pdf /dev/my-printer" looks similar to binding the first argument in OOP-style vs the explicit C-like call syntax.


>1. Nowadays we can represent the essence of this approach with FUSE and SSH/SSHFS.

These suck hard against 9p and factotum. Not even close - Linux and the BSDs are a joke compared to what you can achieve with plan9/9front when networking whole components. You can run remote processes seamlessly.


Who are both of us?

The important point is that "everything is a file" is just a short way to say "everything is a file system". Your interface is not a file descriptor to which you read and write, but a whole tree where there are different files on which you can perform the operations defined in the 9P protocol (create/read/write/...). For example, in the windowing system, you open a file to create a new window, and the new window will have an associated directory with files that represent the screen, mouse and keyboard (and the process running in that window will work with those files exactly as it does with native devices).

You can have a look at the man pages (sections 3 and 4) to see how these filesystems work.


"Everything is a file but with different semantics"


Hmmm. Might be coincidence, but today I noticed Inferno was back on GitHub as well (it had been living on Bitbucket): https://github.com/inferno-os/inferno-os

This is great news altogether - I've been dabbling with Plan9 for a fair bit (mostly on Raspberry Pis of late as they are nicer "disposable" machines and I have plenty of them), so am hopeful that this will lead to more modern versions (especially something whose UX does not rely on mouse chording, which is a chore on modern machines).


Inferno was fun back in the day - Limbo is a neat language with some interesting features.


If Inferno had taken off, it could have fulfilled the original promise of Java. Many approaches are quite similar between them — but Inferno also had working relocation of processes between hosts, and well-working IPC ("RMI") out of the box.


Java had it via Jini; pity it didn't take off.

And I'd rather use Swing than Tk.

Now, one thing I agree with is that it was Android done properly.


Specifically, it seems to be under the MIT license.


Good choice. A truly open source license.


Open enough for you to be sold into slavery.


Ah yes, the little-known software copyright slave trade.



WOW, thank you very much Nokia!!

Having said that... I installed a 9front "cluster" 3 days ago:

2x RPI3 as cpu server

1x RPI2 with 2 external HD's (2x7TB) as Storage Server

1x RPI2 (down-clocked) as Auth Server

Love working and playing with it.


> That community is organizing itself bottom-up into the new Plan 9 Foundation, which is making the OS code publicly available under a suitable open-source software license.

Didn't Bell Labs already make the source available under the GPL years ago? Why is this necessary?


Bell Labs permitted UC Berkeley to fork and distribute Plan 9 under the GPL, but the main repository was still under the Lucent Public License or something weird like that. Relicensing under the MIT license will maximize the number of people who can use the code, even if the foundation they created does not catch on.


The source was available but the project was essentially dead. You could fork but contributing upstream was hard.

This removes the corporate roadblocks and lets plan9 exist by itself.


Right, this is why 9front and 9legacy came up, and both are pretty well maintained. I hope they (9front) migrate the code "back" to Plan 9; it would be amazing to have a single point of development.


Kudos to Nokia on a great move.


If you used Go in the very earliest days, you'll recognise pieces of its toolchain in Plan 9.


Years back, I tweaked the Plan 9 kernel source so it could compile with Go's C toolchain. After a day or two it was booting, but we decided not to use it... I think the end conclusion was that the Go team was (reasonably) only interested in maintaining the compilers for building Go, and that they could at any time make changes that would break our builds.


Even the logo has some resemblance.


Both made by Renée French, wife of Rob Pike!


Who were, I believe, brought together by Penn Jillette.


In the photo with the article, Rob a) has hair and b) dresses just like Penn. Wild times in New Jersey…


I think Tom Duff of Duff's device fame is also there, 3rd from left at the back.


Looks like they have a Raspberry Pi version. Might try to see what it can do this weekend.

I've always wanted to see more OSes; there has to be a different way of doing things than the Unix/Linux/BSD, OS X, or Windows operating systems.


Miller's pi image supports wifi; 9front does not[1]. Though 9front is probably more newbie-friendly, as it has a little more polish and the FQA/docs have been improving greatly as of late.

1. Miller's wifi implementation is different from 9front's, so a lot of work needs to be done to get Rich's driver working on 9front. Patches welcome :-)


>FQA/docs

Frequently Questioned Answers???


indeed. http://fqa.9front.org/

9front is peppered with extremely dry sarcasm, to the point of controversy (http://fqa.9front.org/fqa1.html#1.3.0.1)


It took me a while to figure out what is meant by Miller's pi. I assume the most recent image is the one you're referring to, and it does support wifi on the pi, is that right?

https://9p.io/sources/contrib/miller/


Since it's a distributed operating system, can I build an RPi cluster with just Plan 9 installed for distributed computing tasks?


Yes. You can boot the entire cluster from a single file server running on any other Plan 9 machine, be it a pi, PC, VM, whatever. Each node boots from the same root file system, so each node has the same fs view. This is where ndb[1] comes in, though you can change this any way you see fit. Network booting can be accomplished via PXE, or with a Plan 9 kernel on SD card and a plan9.ini configured to automatically pull root via TLS or TCP.

Have a look at thread(2)[2] and enjoy Go-like channels and concurrency cleanly built on top of the nice Plan 9 C library (a tiny sketch of the style follows the links below). You could just use Go, but we don't have Go on the pi yet; that's a Google Summer of Code 2021 project if anyone is interested.

1. http://man.postnix.pw/plan_9/6/ndb

2. http://man.postnix.pw/plan_9/2/thread
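
For anyone curious what that thread(2) style looks like, a tiny hedged sketch (names and sizes are arbitrary):

    #include <u.h>
    #include <libc.h>
    #include <thread.h>

    enum { N = 5 };

    static Channel *c;

    static void
    worker(void*)
    {
        ulong i;

        for(i = 0; i < N; i++)
            sendul(c, i*i);    /* blocks until threadmain receives */
    }

    void
    threadmain(int, char**)
    {
        int i;

        c = chancreate(sizeof(ulong), 0);    /* unbuffered channel of ulongs */
        threadcreate(worker, nil, 8192);
        for(i = 0; i < N; i++)
            print("%lud\n", recvul(c));
        threadexits(nil);
    }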


Thank you, looks very promising. I will definitely try it out when I get the distributed computing itch again.


Great. Now all it needs is a functioning "APE" ("ANSI/POSIX Environment"), which, iirc, was built by Tom Duff and has sadly been neglected ever since.

Once P9 gets over its insane NIH syndrome (or IHBBTWP, "Invented Here But By The Wrong Person", i.e. Stroustrup) it could maybe do some weird stuff like, I dunno, run a web browser? I vividly remember people in the Unix room moving to a PC (running Windows?) to browse the web, ffs. Back in '94.


It (9front) has two browsers, but no C++ for obvious reasons.


Which browsers? As far as I'm aware, there are multiple things that can kinda-sorta browse the web, for some value of "browse" and "web", but I'm not aware of anything a normal person would recognize as a web browser.


Mothra and Abaco.

>but I'm not aware of anything a normal person would recognize as a web browser.

Same could be said about Hacker News (no Facebook or Twitter look... not even hashtags), so be happy you are not "normal".


It 503's, or when it loads has a blank page.

Oh how the mighty have fallen.


HN hug of death


AFAIK, the mighty still have to cope with the systems that won the popularity contest.



As someone who wanted to use 9front as a toy OS for a bit, this makes me excited.


Plan9 is the definitive proof that Worse is Better[0].

[0] https://en.wikipedia.org/wiki/Worse_is_better


Sure, Unix and Linux are worse than Plan 9, but I don't think that's the reason why they are more popular.


That's not the point of Worse is Better. The point is that, in this context, Unix and Linux are better than Plan 9, but for non-intuitive reasons.


Worse is better is just annoying. Better at what? Domino's pizza is better at market share. It's not better at making good pizza. Most people can agree with these two statements. But just saying "Domino's is proof that worse is better" causes unnecessary arguments


Pizza has weak network effects.

The capitalization, brand recognition, and streamlined corporate franchise structure cooperate to make it easier to launch and run a Domino's franchise than to start a pizza joint from scratch.

But not that much easier. There is plenty of room to market better pizza for more money.

Computer systems tend to have strong network effects: there's a lot to learn, and a skilled developer is, ceteris paribus, more productive than a greenhorn. Most of the value in operating systems and programming languages is in the ecosystem rather than the core.

Worse is better isn't a universal solvent, there are plenty of areas where it isn't applicable. The original essay† is about why C and Unix were eating Lisp's lunch, and is worth reading.

https://www.dreamsongs.com/Files/LispGoodNewsBadNews.pdf


Sure. So to qualify what you are saying: C is better at achieving broad usage. But it is not better than Zig for writing secure software. It is not better than perl for text processing.


Better for what?


Better for the industry as a whole. It generated more value so far.


>It generated more value so far.

That's a really bad and wrong comparison. By your measurements Microsoft, Apple and Oracle are the best the industry ever had.


But they are! Or were! Maybe they will not be by the (moral?) standards of the next decade, though.


So the product that makes more money is the better one?


Come on :)

We are getting into philosophy territory here. You know what I mean and in which context.


That eccentric idea of calling the window system "8½" ... Back in the day when I was trying out Plan9 I spent a whole lot of time figuring out how to enter that on a keyboard.


/bin/8*


okay, I'm an idiot. This would invoke 8c et al.


I wonder what Stanley Lieber would think.


The mailing lists are public. I'm not subscribed to any 9front ones, but the people on the cat -v mailing list are happy about being able to complete the manual collection now.


I wonder what Uriel would think.


Alright, but isn't it like 20 years late? And does this contain any interesting parts that aren't already in the 9fans/9front builds?


The cynical view is that it is now official that Plan 9 cannot be used to make money anymore, so waiving the remaining monopoly rights one may have over it is free positive publicity.


And yet Coraid has a Plan 9 based product for ATA over Ethernet.


I thought they were dead, but it seems they're back.

https://www.theregister.com/2017/06/26/coraids_athenian_resu...


Are they using Plan 9 as a commodity (in which case it could be replaced by another OS at the cost of redeveloping their added value) or are they selling Plan 9 as a product?


9front continues to be developed, and though all its changes are permissively licensed, the whole is still gpl'd for obvious reasons. This allows 9front's code to be opened up.


Most big ideas have cyclical openings where they can be applied. Kind of like a Mars transfer window.

So, if something had a window 20 years ago, but didn’t get applied, there’s a good chance that another window has opened up.


probably, but if this publicity works, there's a slim chance that the open source community might be able to rally behind it and cobble together a larger interest/community and application base to make it a viable personal product.... all big IFs.


Maybe the patents have run out? or any possibility of being sued for infringement.


Does this include sam?


Yes, of course. Acme also.


This sounds like Kubernetes.


Yes, now just port Docker to Plan 9.


9reat!


I was surprised nobody mentioned the OS they built next, after Plan 9 - Inferno:

https://en.wikipedia.org/wiki/Inferno_(operating_system)

That's the one I've used and it was pretty cool. Limbo was a fun little scripting language.


I think Nokia doesn't own Inferno, it was sold to Vita Nuova around 20 years ago.


Indeed - there was a post recently about a port to Lua which I want to investigate further.


I had no idea that Nokia had bought Bell Labs.


More specifically, Nokia bought the parent company of Bell labs, Alcatel-Lucent.


To provide more context, Alcatel-Lucent was a merger of the French Alcatel and Lucent. And Lucent was formed when AT&T divested technology business units including Western Electric and Bell Labs.


To go further, in the 1920s, AT&T[0] was required to divest its international operations, which were sold to newly-formed ITT. Much of those assets eventually went to Alcatel.

[0] the original pre-1984 AT&T, not Southwestern Bell d/b/a AT&T


I have a bunch of old telecom / networking / engineering equipment from mostly the 80s-90s and it’s always fun to figure out who owns or owned some IP or product.


In 2015 apparently, says wikipedia.


That’s nice and all but I’m still salty about how Nokia shuttered Bell Labs Ireland last year. So many close friends lost their jobs.


I hadn't heard, it doesn't seem to have generated much press coverage now that I'm giving it a Google.


It was kept remarkably quiet. That site in particular received a lot of support from the IDA over the years so I can't imagine they were too happy about it. BLI was told year on year that it was a top performing site. Then a few months into lockdown they pulled the rug out from under everyone.


Were they working on products? Any examples?


I’d already left a couple of years earlier. Most Bell Labs stuff is closer to research than development anyway.


That's what I thought also, which is why I was curious that it was considered top performing. I suppose there are metrics other than profitability to define performance.


KPIs related to patents, academic papers, contributions to various projects and yes technologies for products. The Irish site was always aware of its isolation from the mothership and usually went above and beyond.


I heard one Bell Labs scientist observe that labs like that pay for themselves many times over if someone only rarely invents something great, like the transistor.

It seems to me that if one were desperately looking for an objective measurement of what the lab was currently accomplishing, it would involve looking at the quality and retention of its hires. It sounds like they dropped the ball.


They were told the reason was that Bell Labs is moving to a start up model (whatever that is) and that means closing some sites other than NJ. Then the head of Bell Labs was fired along with the CEO of Nokia.

Basically Alcatel-Lucent was broke and was bought out by Nokia with Microsoft money. Nokia is now also broke and they're pawning the good silver.


What's Plan9?


Imagine if everything they told you about Unix was true. Plan 9 is the operating system Bell Labs made after Unix, to fix many of the problems the designers saw.

That said, it's based around what the designers of Plan 9 thought were problems with Unix. It's a very opinionated operating system. But it has so many ideas that were ahead of their time, and in many ways are still lightyears beyond what we have now.

It's a really cool piece of computing history, and if you haven't tried it, I suggest you look into it, but keep in mind that even though it looks and sometimes feels like Unix, it very much is not. It's not terribly useful as a daily driver OS due to a lack of software, but it's very, very cool.


> and in many ways are still lightyears beyond what we have now.

I think this wildly overstates it. Much of the good innovations have been adopted in Linux. 9p exists. /proc was adopted (though that was in UNIX first).

One unifying principle of plan9 is that everything is a file. But the (POSIX) file api has a lot of limitations. Fuchsia, in contrast, had some nice ideas about different types of file (blob/object, log, etc).


> I think this wildly overstates it. Much of the good innovations have been adopted in Linux.

The main innovation can't be done by addition. With plan 9, many special cases are removed. You no longer have to wonder what happens if you try to create a Unix socket on an NFS file system, and then mmap it: There's just 9p. Everywhere.

9p is nice, but it isn't special. Making the whole universe 9p is where the improvement lies.


9P is how Microsoft is bridging the filesystem between Windows/Linux in WSL2

https://devblogs.microsoft.com/commandline/a-deep-dive-into-...


Qemu uses 9p too.


>You no longer have to wonder what happens if you try to create a Unix socket on an NFS file system, and then mmap it

How does that work? I don't know the details of any implementation, but 9p the protocol appears not to have any concept of mmap: https://9fans.github.io/plan9port/man/man9/intro.html

I think I see what you mean about 9p not being that special, it doesn't seem much different than if Windows decided to export every system-level API as a DCOM object, that would also get you the same kind of "the whole universe is networked" kind of deal.


> if Windows decided to export every system-level API as a DCOM object, that would also get you the same kind of "the whole universe is networked" kind of deal.

The difference is that in Plan 9, there is no 'if', and there's no other option for accessing resources. All programs interface with the OS and other programs via 9p, more or less: Notable exceptions are process creation calls like rfork() and exec().

> but 9p the protocol appears not to have any concept of mmap:

Correct. Mmap is a kernel feature -- and mmap style stuff is only really done for demand paging of binaries at the moment. You get a cache miss and a page fault? Backfill with a read. Backfilling IO on page fault is really all mmap does, conceptually.


>there's no other option for accessing resources

That seems like it would create difficulties in porting software there. Please correct me if I'm wrong but the original plan9 appears to also have no support for shared memory or for poll/select.

>Backfilling IO on page fault is really all mmap does, conceptually.

For read-only resources yes, for handling writes to the mmapped region, that seems quite broken.


Plan 9 is not a posix system. That means it doesn't have to deal with legacy posix behavior. If you want unix, it's easy to get it.

> For read-only resources yes, for handling writes to the mmapped region, that seems quite broken.

No more broken than mmap of nfs. Consistency is hard.


>No more broken than mmap of nfs.

Right, I get that's what you meant, it doesn't seem to really change much versus NFS, or DCOM, or whatever. So it's unclear what benefit is being provided by 9p here.

Also upon further research I am not sure what you mean by this is the only option, plan9 seems to suggest use of channels for other types of IPC interfaces, which seem to not be the same as 9p and are not necessarily network serializable. (Or are they?)


Channels are not IPC -- they're a libthread API that stays within a shared-memory thread group.

There are a few magic kernel devices that don't act like 9p, like '#s' which implements fd passing on a single node. And the VGA drivers expose a special memory segment on PCs to enable configuring VGA devices.

But the exceptions are very few and far in between, and affect very few programs.

> So it's unclear what benefit is being provided by 9p here.

A uniform and simple API for interacting with out-of-process resources that can be implemented in a few hundred lines of code.


How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel. At least that part seems similar to an Unix X11 setup where this would be done over a socket.

I guess I just don't see what is conceptually the difference here versus something like doing basic HTTP over a TCP socket, it seems like the same kind of multiplexing. Either way, you still have to deal with the same issues: can't pass pointers directly, need to implement byte swapping, need another serialization library if you want the format to be JSON/XML or if you want a schema, etc... So in cases where that stuff isn't important, channels would come in handy, but of course that is now getting closer to a local Unix IPC. Am I getting this right?


> How is that conceptually different from IPC? The graphics system appears to somehow pass mouse and keyboard events to the client programs over a channel.

A thread reads them from a file descriptor and writes them to a channel. You can look at the code which gets linked into the binary:

    /sys/src/libdraw/mouse.c:61
Essentially, the loop in _ioproc is:

    while(read(fd, event)){
       parse(event);
       send(mousechan, event);
    }
And yes, once you have an open FD, read() and write() act similar to how they would elsewhere. The difference is that there are no OTHER cases. All the code works that way, not just draw events.

And getting the FD is also done via 9p, which means that it naturally respects namespaces and can be interposed. For example, sshnet just mounts itself over /net, and replaces all network calls transparently for all programs in its namespace. Because there's no special case API for opening sockets: it's all 9p.


Ok I see, that helps, thank you. That seems to be mostly similar to evdev on Linux after all, except it requires you to use coroutines instead of having an option for a poll/select type interface.

To me the problem is that saying "no special cases" seems to make it quite limited on the kernel side and to prevent optimization opportunities. For example if you look at the file node vtables on Linux [0] and FreeBSD [1] there are quite a lot of other functions there that don't fit in 9p. So you lose out on all that stuff if you try to fit everything into a 9p server or a FUSE filesystem or something else of that nature.

[0]: https://elixir.bootlin.com/linux/v5.11.8/source/include/linu...

[1]: https://github.com/freebsd/freebsd-src/blob/master/sys/kern/...


Yes, that's the meaning of no special cases*: it means you don't add special cases. But this is why plan 9 has 40-odd syscalls instead of 500, and tools can be interposed, redirected, and distributed between machines. I don't have to use the mouse device from the server I logged into remotely, I can grab it from the machine I'm sitting in front of and inject it into the program. VNC gets replaced with mount.

I don't have to use the network stack from my machine, I can grab it from the network gateway. NAT gets replaced with mount.

I don't have to use the debug APIs from my machine, I can grab them from the machine where the process is crashing. GDB remote stubs get replaced with mount.

You see the theme here. Resources don't have to be in front of you, and special case protocols get replaced with mount; 9p lets you interpose and redirect. Without needing your programs to know about the replacement, because there's a uniform interface.

You could theoretically do syscall forwarding for many parts of unix, but the interface is so fat that it's actually simpler to do it on a case by case basis. This sucks.

* In kernel devices can add some hacks and special magic, so long as they still mostly look as if they're speaking 9p. This is frowned upon, since it makes the system more complex -- but it's useful in some cases, like the '#s' device for fd passing. This is one of the abstraction breaks that I mentioned earlier.


That's what I mean though, I see the theme, but it seems to me to be about the same as trying to fit everything into an HTTP REST API, it all falls apart when something comes along that breaks the abstraction. For example if you have something that wants to pass a structure of pointers into the kernel, you can't reasonably do that with 9p, so now you've got a special case. The debug APIs can still only return a direct memory mapped pointer to the process memory as a special case, the normal case is doing copies of memory regions over the socket, no matter how large they are. If you want to add compression to your VNC thing, or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket, which is not really different from what you would be doing on a more traditional Unix. Or is there another way plan9 handles these?


These things have already been done with 9p.

> The debug APIs can still only return a direct memory mapped pointer to the process memory as a special case

Can you point to the special case here?

http://man.cat-v.org/plan_9/3/proc

Because it replaces ptrace, and seems to work perfectly fine when I mount it over 9p. It's used by acid, which needs no additional utilities: http://man.cat-v.org/plan_9/1/acid

> If you want to add compression to your VNC thing

Images may be sent compressed. More -- or at least better -- formats would be good, but this is done.

http://man.cat-v.org/plan_9/3/draw

For a full implementation of remote login using these interfaces, here's the code:

http://shithub.us/ori/plan9front/fd1db35c4d429096b9aff1763f2...

It's a bit complex because it needs to do more than just forward mouse, keyboard and drawing -- signals need to be interposed and forwarded, and there are a few other subtle things that need to happen in the namespace. And because it contains both the client and server code. Even so, it's still small compared to VNC.

And yes, shithub is hosted on plan 9.

> or add some more complex routing to your network setup, you have to start adding special daemons and proxies and translation layers into another socket

Here are the network APIs.

http://man.cat-v.org/plan_9/3/ip

What kind of complex routing are you talking about, and why would it be impossible to implement using those interfaces?


> I think this wildly understates it. Much of the good innovations have been poorly hammered into Linux.

FTFY.

> 9p exists.

A hacked-up version called 9P2000.u, and later on 9P2000.L, which come laden with POSIX and Unix baggage, and of course Linux baggage in the case of .L. This is to handle things like symlinks and special device file hacks inherited from Unix.

> /proc was adopted (though that was in UNIX first).

Linux proc is a mess. Plan 9 proc is just that, the interface to running processes. There's no stupid stuff like /proc/cpuinfo. wtf is that doing in there? http://man.postnix.pw/plan_9/3/proc

> One unifying principle of plan9 is that everything is a file. But the (POSIX) file api has a lot of limitations.

Plan 9 is not posix.

> Fuchsia, in contrast, had some nice ideas about different types of file (blob/object, log, etc).

A file is an array of bytes. Why complicate that simple approach?


> Linux proc is a mess. Plan 9 proc is just that, the interface to running processes. There's no stupid stuff like /proc/cpuinfo. wtf is that doing in there? http://man.postnix.pw/plan_9/3/proc

Do you think Plan 9 /proc would have remained as "clean" over time if it were as popular as Linux?

One thing that seems to be something of an axiom is that popular interfaces become messy over time. The location of /proc/cpuinfo seems to be an individual act of vandalism rather than being due to fundamental differences in underlying philosophy/approach.


> Do you think Plan 9 /proc would have remained as "clean" over time if it were as popular as Linux?

If people are allowed to submit "functionality" patches ad-hoc with little to no scrutiny or thought, then yes, any project will become a mess.

The general approach taken by plan 9 maintainers is to question functionality/feature patches and ask "Who does this benefit?" If the answer is only the submitter or rare edge cases then the patch is rejected. If the patch benefits a large audience, then it is accepted.

But to be fair, Linux is hammered on by large corps whose only goal is to make money by vomiting webshit from Linux servers. They don't care about simplicity, technical details, correctness, or anything like that, so long as it increases their bottom line. From my point of view the Linux I came to love is long dead.


Seems like you never loved Linux in the first place, since Linux now is what Linux has always been.


>Do you think Plan 9 /proc would have remained as "clean" over time if it were as popular as Linux?

A lot of the appeal of Plan 9 is that it's not widely used, and so has remained opinionated. It's not a general use operating system. It's a research operating system.


These "nice ideas about different types of file" are not new. On the contrary, before Unix, this was common and it was one of the revolutionary approaches of Unix, that files - from an OS perspective - should be streams of bytes and nothing more.


I think before UNIX there weren't nice APIs for different types of file access. It was more like getting a raw block device and being told "fill your boots".

stream-of-bytes was the right idea then. It doesn't mean it is still right.


You think wrong, it was exactly the same approach of defining a "nice API" for each type of file.

It failed.


Before Multics...not UNIX ;)


Linux is not yet a fully distributed OS, and even the basic foundational work for that feature set is only just being undertaken now (and then mostly as a natural development of containerization/namespacing features, which you might or might not see as drawing from Plan 9 itself).


By 9p you mean 9pfs? If so, it "exists" for Linux, but that's about all.


One can natively mount 9p on Linux, and there's also diod, which exports 9p. This works well for my setup: a local file share to both my Plan 9 machines and my Linux boxes. I use diod because my storage system is running a chunky ZFS pool with lots of storage I wanted to share.
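For anyone curious what that looks like from the Linux side, roughly (the host name, export path, and mount point are placeholders; the options are the stock v9fs ones documented in the kernel):

    # mount a diod (9p2000.L) export over TCP on the standard 9p port
    mount -t 9p -o trans=tcp,port=564,version=9p2000.L,aname=/srv/tank nas /mnt/tank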


Many years ago (30?), the Plan 9 shell, rc, was made available for other platforms. I was working at the Big Nerd Ranch and ran it everywhere I could (I was doing 2nd/3rd level support work that gave me unfettered privileged access almost everywhere), until the nascent in-house software release process (SRP) caught up and sudo started becoming more widespread and privileged access started getting locked down.

At that point, I needed muscle memory across all machines more than anything else, and switched back to sh (bash was still very new and not widely available, csh was born borked, and ksh was only available under certain OSes). That was sad.

rc had a beautiful, clean C-like syntax without any csh weirdness and was much more powerful than sh. Scripts were a joy to write and maintain.


I was going to say, 30 years ago, the 1st edition of the actual Plan 9 wasn't yet released (not until 1992), much less a clone/port of its shell. But it seems that Byron Rakitzis wrote his Unix clone of `rc` in 1991 before Plan 9 was even out! He based it on Tom Duff's paper which described Plan 9's shell.

(Plan 9's `rc` was originally written for 10th edition Unix, and would later get ported back to Unix as part of Russ Cox's plan9port in 2003.)


Yes, I did! I used Duff's paper as a reference, and relied on the good taste of all my beta testers to guide me towards a working shell. Back when source code was distributed via shar files in an email.


Thanks for writing rc! I’d forgotten about Duff’s paper and the exact circumstances of rc’s release until I read LukeShu’s and your replies.

I had a lot of fun with rc....


It's hosted on github now and still serves as the login shell for me and presumably many others!


I use bash for the muscle memory, but my bash profile increasingly includes

    PS1="$(hostname)=; "
because it's just nice.
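For comparison, the rc equivalent is roughly the following (rc's prompt variable is a two-element list, the second element being the continuation prompt, and $sysname holds the machine name on Plan 9):

    # hostname=; as the prompt, a tab for continuation lines
    prompt=($sysname^'=; ' '	')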


The interesting conclusion of all this is that if everything looks like a file, then it doesn't matter what OS it runs on. A /dev/screen can be on your local Plan 9, on a remote Windows box or on your Linux VPS; as long as it respects the protocol it doesn't matter. Plan 9 is the host of all these experiments, but its findings can be (and have been) imported elsewhere.
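A small sketch of what that buys you in practice (the paths and host name are illustrative): since the screen is a file, a screenshot is a copy, and reading another machine's screen is just importing its /dev.

    # dump the current display contents, which /dev/screen serves as an image
    cat /dev/screen > /tmp/screenshot.bit
    # graft a remote machine's /dev into the namespace and read its screen the same way
    import otherbox /dev /n/otherbox/dev
    cat /n/otherbox/dev/screen > /tmp/their-screenshot.bit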


Is that actually useful in practice?

When you're talking about things like displays, performance is extremely important. We're talking about 178MB/s to update a full HD screen at 30 fps, which requires networking pretty much no normal user has.


In my work, there's not much which requires updating a full HD screen at 30 fps... video calls, I suppose. Everything else updates small portions of the screen at lower rates.

There's a program called drawterm which implements Plan 9 graphics devices on Linux. You run drawterm locally, connecting to a Plan 9 system, and your applications on the Plan 9 system draw to your drawterm window over the network. I regularly run it at 4k and it performs quite well.


I'm guessing these applications do not have any kind of animations or smooth scrolling? That would be a simple test, make your web browser or your image viewer fullscreen in 4K and see if there is lag in the scrolling/panning/zooming.


/dev/screen was an example; in practice, as said in the sibling comments, you'd use drawterm, which fulfills roughly the same use case as ssh or RDP, so yes, the use is there. And you may not need a full HD screen at 30 fps to work.

But it doesn't stop there. Wanna play local music remotely? /dev/audio is there for that. Want to use a machine as a jump server? Just mount its /net directory into yours and any network operation will go through it.
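A minimal sketch of that jump-server trick, assuming a host named gateway:

    # replace the local network stack with gateway's; later dials originate from gateway
    import gateway /net
    # it's still just files, e.g. the routing table now in effect:
    cat /net/iproute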

The ideas can be used today. I have a music folder with only lossless songs for personal reasons, but it's obviously not perfect for playing from my phone because of how large they are. So I had a server that transcoded them to Vorbis on the fly and served them with FUSE, and an sshfs on top of that to serve the transcoded files to my phone. This composition of a common interface might use no line of code from Plan 9, but it definitely reuses its philosophy.
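Roughly, the shape of that composition (the paths are placeholders; the commenter wrote their own transcoder, but mp3fs is an off-the-shelf example of the same on-the-fly idea):

    # on the server: a FUSE filesystem presents the lossless library transcoded on the fly
    mp3fs /srv/flac /srv/music-transcoded -o allow_other
    # on the client: sshfs layers the network on top, so the phone just sees ordinary files
    sshfs server:/srv/music-transcoded ~/music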


I think this looks at the benefit backwards. 9p allows resources to be where they make sense and abstracts the location from the usage. Running a display over the network might not make sense, but with 9p it also isn't necessary. 9p itself allows me to run my GUI locally while the data and processing live elsewhere.


You are seriously overestimating the needed throughput in practice. 60 fps 1080p can be streamed with good quality over a 16 Mbps channel (2 MB/s). The real problem is the lack of good open source software that eliminates the annoying latency of desktop protocols (Xorg...). There are things such as SPICE or X2Go or RDP which are "OK", but I suspect a much better experience is possible. The computers are extremely fast already, but our software is so bad we can't see it.


178 MB/s is a calculation, not an estimate:

1,920 x 1,080 pixels @ 24 bits/pixel = 6,220,800 bytes/frame

30 frames/s = 186,624,000 bytes/s = 177.98 MiB/s

You are seriously underestimating the complexity of plugging in a video encoder.


Images can be compressed when using devdraw. The compression formats are relatively primitive, but they're good enough in practice. Slotting in better ones seems like it should be straightforward, though video codecs don't fit cleanly.


So what is your estimate of how simple it would be to use video compression there? Is it possible?


But once you introduce a piece of software into the middle to make this usable, what's the actual difference between this and just using VNC?

At that point it doesn't really matter if the screen is a file or not -- you need a compressor that can easily provide the output on a network socket, and a client that can perform the decoding.


You're right that it doesn't matter if it's a file or not per se.

What matters - and what the file interface gets you, but you can do the same thing in many other ways - is introducing the concept of a generic pluggable, chainable API.
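One concrete way that chaining shows up on Plan 9, sketched with made-up names: a process can post whatever it sees at some path as a 9p service in /srv, and anything else can mount it and build further on top.

    # post the namespace visible at /n/scratch as the service /srv/scratch
    srvfs scratch /n/scratch
    # any other process can later mount it wherever it likes
    mount /srv/scratch /n/elsewhere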


178 MB/s is under 1.5 Gb/s. It's only because we've been stuck with slow gigabit Ethernet for 20 years that we think this is a hard problem.

10G Ethernet can do it no problem, and fractional speeds like the 2.5 and 5 gigabit standards should have little issue as well.


I concur that it sucks that Ethernet is in a rut for some reason.

But even on 10G that's no picnic. Sure, that works for a single user, but add a few more people and it's not hard to run into trouble. Such a system can't, for instance, just drop frames when the network is overloaded, which to me makes this more of a curiosity than something anybody would actually want to use in practice.


10G switches are old hat and can do full x-bar switching at 10G; unless you're using very old tech bought off eBay, you shouldn't have issues.

Trunk lines of 100G and higher are pretty common in core networks now, if you're big enough to outgrow a single switch. The main limit was that we had trouble doing 10G over Cat 5e copper at long distances. 2.5/5 solve that problem, and 10G is possible with Cat 6. Fibre has no issue with super high rates for network backhauls to aggregate all that traffic. Most datacenters are moving to 25 gig for server connections.

With the exception of the copper standards, all of this has been rolled out in the datacenter for years and is pretty mature.


I've speculated it's the patents on 10G over copper holding us back. IIRC we're just about at the point where the early over fiber modes are off patent in the US.

However, the 10G (over copper) encoding format uses a complex forward error correction scheme that is a bit energy intensive; it also adds some latency. A smaller silicon production node, plus adoption outside of SERIOUSLY EXPENSIVE gear by prosumers and medium businesses, would drive prices down through commodity volume.


There have been some posts about upscaling algorithms here lately; perhaps they could be used to reduce the required bandwidth?


I can see it being very useful for events, call centers, or any kind of operations center where you want a lot of screens.


> and in many ways are still lightyears beyond what we have now.

How close were they to Lisp Machines? :)


Just build a tiny Scheme compiler for Plan 9, anything with R5RS support, and call it a day.


https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

Plan 9 from Bell Labs is a distributed operating system, originating in the Computing Science Research Center (CSRC) at Bell Labs in the mid-1980s, and building on UNIX concepts first developed there in the late 1960s. The final official release was in early 2015.

Under Plan 9, UNIX's everything is a file metaphor is extended via a pervasive network-centric filesystem, and the cursor-addressed, terminal-based I/O at the heart of UNIX-like operating systems is replaced by a windowing system and graphical user interface without cursor addressing, although rc, the Plan 9 shell, is text-based.


FTA:

“Starting in the late 1980s, a group led by Rob Pike and UNIX co-creators Ken Thompson and Dennis Ritchie developed Plan 9. Their motivation was two-fold: to build an operating system that would fit an increasingly distributed world, and to do so in a clean and elegant manner. The plan was not to build directly on the Unix foundation but to implement a new design from scratch. The result was named Plan 9 from Bell Labs – the name an inside joke inspired by the cult B-movie "Plan 9 from Outer Space."

Plan 9 is built around a radically different model from that of conventional operating systems. The OS is structured as a collection of loosely coupled services, which may be hosted on different machines. Another key concept in its design is that of a per-process name space: services can be mapped on to local names fixed by convention, so that programs using those services need not change if the current services are replaced by others providing the same functionality.”


> inspired by the cult B-movie "Plan 9 from Outer Space."

It's of course not relevant in the current context but still fun so I dare add that it is famous because it's so bad. It's known as "worst movie ever". It is worth watching the start and a few scenes - even if one does not have the patience for the whole thing - just for some laughs. It already starts (4 minutes in) with a scene in a hilarious airplane cockpit, which does not look like one at all (those controls..!). The zombies are really funny too.

Full movie: https://youtu.be/Ln7WF78PolA


Comparing https://m.imdb.com/title/tt0052077/reviews (“Plan 9 from Outer Space” reviews) with https://www.imdb.com/title/tt0060666/reviews (“Manos: The Hands of Fate” reviews), I don’t think it qualifies as “worst movie ever”.

One of the reviewers of ‘Manos’ says: “What can I say about a movie so bad that it makes Plan 9 From Outer Space look like Casablanca?”, another “I have endured a thing or two in my life: Plan 9 from outer space. Hercules against the moon men. Godzilla versus Megalon. German musical comedies from the early sixties. However, THIS was too much for me. About one hour or so into the movie, I quit.”


I have watched Manos.

I am very glad I had alcohol to go with it.


Maybe the "worst" but at least it's not boring like some more recent big-budget ones are. (Incidentally, there's some enjoyment to be had watching B-rated movies from the late 50s to the 80s.)


"Plan 9" is probably close to the worst film you'd watch for fun.

"Manos" is worse to the extent slogging through it is a chore, at least without commentary to add some entertainment value.



It’s a file.


[flagged]


A person is a phile, not 'file.'


I don’t know why this is flagged; I’m observing the point that a user, like everything else, is a file.


[flagged]


Nokia is a communications conglomerate. They sold the mobile phone manufacturing division to Microsoft in 2014. They still have large operations in other networking technologies. Qt has also since been sold off and is again being developed by Trolltech, now known just as the Qt Company.


I try to remind myself how small this little corner of the technology world I live in is, and still lose perspective sometimes.

If your career is, say, playing client-server ping-pong with JSON blobs, it can be really useful to look around at other industries and see what they're up to. The commonalities quickly give way to some interesting problems and perspectives.


I like this approach. I make it a point to try to understand other industries that I come across in daily life.

So much information is out there - you just need to learn basic research techniques and be patient/curious. Learning to understand financial statements is also a big one.

Why am I doing this?

a) Curiosity. I want to understand how the world works.

b) I'm looking for some niche industry market that can be improved in some plausible way. By the way: there's so much weird market segmentation going on in most industries other than software.


> developed by Trolltech

I don't think I could ever take software seriously from a company with a name like that. Are these issues really bugs, or are we being trolled?


Trolls as in the Norwegian national mascot, unless you are in reality a S#%tposter from Bergen playing 12-D chess and trolling us.


Isn't that a Norwegian or Icelandic company? Where trolls live out in the wild?


I'm reading this on a Nokia!


That's part of their current business: the name is licensed to HMD Global, which then outsources production to Foxconn. It's Nokia in name only, literally. They seem to be very good phones though.


HMD was established by Nokia to produce phones, since Nokia wasn't allowed to under the agreement with Microsoft, and is run by ex-Nokia executives. HMD bought "back" the Nokia Mobile division from Microsoft. Nokia holds shares in HMD. nokia.com sells the HMD phones. It is a bit more Nokia than in name only.


Yeah, the phones run Android One (so stock Android, no bloat), and are pretty solid in terms of construction (not 3310 level, but I've dropped mine a number of times and it still seems to be working well).


They're pretty much the only ones still releasing Android One models. Out of 6 new Android One phones in 2020, five were Nokia.

I'm scared Android One is dying, Pixels are prohibitively expensive, and I'll have nowhere to jump ship once my current Nokia dies.


Pixel A phones are very good all-rounders in the mid-price segment. I know lots of people won't pay 350 €/$ for a phone, but I find them to be pretty good value at that price point.


Lenovo make Android One phones.

I got a new Moto G Pro from them this year. Very good phone for the price.


Yup, that's the sixth one. Other than that one and a few Nokias, there's nothing new since the beginning of 2020.


Unfortunately it seems that HMD's Nokia phones do not support OEM bootloader unlocking, unlike many other Android One models. So no running Plan 9 on your Nokia phone.


> They still have large operations in other networking technologies

Indeed, Motorola sold their mobile network infrastructure business to Nokia.


A tire conglomerate that started a communications company. Nokia tires are pretty good.


Well, yeah, Nokia WAS a conglomerate that made all kinds of things like paper, electricity generation, cables, rubber boots, tyres, PC clone computers, and whatnot. But in the early 1990s all the rest were spun off, and since then Nokia has been focused on the telecom business.

https://en.wikipedia.org/wiki/Nokia#History


This is not correct. Nokian Tyres was spun off from Nokia as early as 1988; it was listed in 1995, and Nokia sold its remaining shares in 2003. Nokian Tyres is still listed on the Helsinki stock exchange.

Currently Nokia is focusing on mobile networks and patent licensing.


Different companies, though; Nokia hasn't had any ownership in Nokian Tyres since 2003.


Using a smartphone? Nokia is collecting patent royalties on 4G/5G as we speak.

On a mobile network? You can bet with very high probability that some part of the network is running on Nokia. Broadly speaking, carriers like to use every vendor they have in part of their network just to keep vendors competing on price.

Using DSL or FTTx fibre at home? You can bet that somewhere along that fibre there is Nokia gear, or that your DSL modem is Nokia.

They might not be a household name, but they are still everywhere.


Also a decent chance your fixed-network connection is going through a Nokia edge/core router along the way.


Nokia sold Qt


You mean that they jumped off a proverbial burning platform.

https://www.google.com/amp/s/amp.theguardian.com/technology/...



I wish the so called open source projects were more inclusive. Many people would love to work on these projects, but are unable to because they need to have a paid employment to pay bills and feed family. Open Source projects give a platform for people from privileged background to show off their skills plus they are reducing the amount of work available. For example company won't hire someone to do X if they can find an open source project that does something like X.

Big corporations are also exploiting those projects and make billions off of them while paying developers nothing. In some countries work for free is illegal and a person doing the work needs to be given at least a minimum wage.

Companies who use such software are essentially getting labour without paying for it. We need to think about royalty system that will be reimbursing people working on those projects and creating a level playing field so people from poor backgrounds could also participate.


This is the most confusing comment I've seen on HN in years.

> I wish the so called open source projects were more inclusive.

- What barrier is there currently to open source projects? Take the code, do whatever you want with it.

- What is the difference between an open source project and a "so-called open source" project? Is the source freely available? Then it's open source. It is a gift!

> Many people would love to work on these projects, but are unable to because they need to have a paid employment to pay bills and feed family.

Why is that anyone else's problem? Billions of people don't get to do exactly what they want to do. Many (most?) open source contributors work on open source in their spare time. "Many people would love to do X, but are unable to because Y". You can fill in those blanks for anything.

> Companies who use such software are essentially getting labour without paying for it. We need to think about royalty system that will be reimbursing people working on those projects...

They aren't paying for it because the person who wrote it didn't want them to pay for it. You are trying to control the explicit desires of the person who created and shared the software.


> What barrier is there currently to open source projects? Take the code, do whatever you want with it.

There is a barrier to participating in the development of those projects. Developers are expected to work for free, and not every developer can afford to do that, so these projects are dominated by people from privileged backgrounds.

> What is the difference between an open source project and a "so-called open source" project? Is the source freely available? Then it's open source. It is a gift!

I am not sure what you mean by this question.

> Billions of people don't get to do exactly what they want to do. Many (most?) open source contributors work on open source in their spare time.

The problem is that "Open Source" is a source of free R&D for companies who don't have to pay salaries and taxes, while developers are expected to give up their time for free. Big companies are promoting this because it saves them money in the long run, at the expense of developers. This is the same situation as with unpaid internships. If a company offers an unpaid internship, only people from privileged backgrounds can afford to take it (e.g. their parents pay their bills), and people from poor backgrounds miss out because they need to find paid work, often in a completely different sector. That's why in many countries (for example in the UK) unpaid internships are illegal: to level the playing field and to reduce the social divide.

> They aren't paying for it because the person who wrote it didn't want them to pay for it.

As I wrote above, in many places it is illegal. Everyone should be paid for their work even if they are privileged and don't want money (then they can send it to charity).


You severely lack any historical context about open source.

You are speaking like it was started by big companies to get free labor.

It was literally the opposite. It was started by people who wanted to share and have the source code so they could make changes and not be beholden to large companies.

You are literally complaining about GIFTS! It is a gift that you can get the source code. You can use it, change it, learn from it. You are welcome!

You also keep associating open source with unpaid forced labor. What about hobbies?

Please tell me anywhere in the world where it is illegal to give away something I created.

It’s hard for me to believe you are not trolling. If you are not, I don’t know how to help you.


I know how this started, and my idea does not change the roots of the movement. Software would still be free for individuals. The problem is that big corporations essentially stole the movement and spun it to get free labour. You can also give free labour to another individual - for example you can mow your elderly neighbour's lawn - but in my country giving free labour to a corporation is illegal. Please don't mix the two things. One is noble, the other is exploitative!


> but in my country giving free labour to a corporation is illegal.

It is not illegal for me to work on a project then release it for free and a company to use it.

It’s not exploitative. It was my decision.


That's what used to be called a troll comment. These days, the term "SJW" is more commonly used.


How do you propose open source projects get the revenue to offer a royalty program?

Also, in what countries is volunteering for free illegal?


Well there’s https://gitcoin.co/

Not royalties, but dues at least.

A DAO structure would work well, but not many people seem to see it as viable.


Royalties should be based on a percentage of the revenue generated by a product in which particular software is used, and distributed among contributors. You are conflating volunteering with an internship. If a company recruits volunteers as software developers, this is a disguised internship and they'll make themselves liable. This is illegal in the UK.


and if the open source project generates no revenue?

Like, if Y is controlled by the Y foundation, which is in part funded by Z company, the Y foundation isn't really getting a direct revenue stream from the software - that's the thing about open source software, you don't have a say in how it's used, and you cannot force payment.

I think your read of the law on what is or isn't an internship is a wee bit of a hot take, but IANAL, and certainly not a UK labour lawyer.


> and if the open source project generates no revenue?

For example 0.1% of 0 is 0 (https://en.wikipedia.org/wiki/Percentage)

Whoever gets the revenue from the software should be paying.

Regarding any constructs with foundations and other tax avoidance schemes, that's probably another topic.


So, basically, you're saying that if you make money from the software, you should be required to pay money to the copyright holders (aka, the developers).

At that point, what sets aside open source software from any other kind of software?


There is a time and place for this BS; this isn't one of them.


Middle class privilege is real.



