
I was lucky to get to use an IBM 3090 (close to the peak of the bipolar mainframe) with the Computer Explorers troop that met at New Hampshire Insurance.

We used VM/CMS, where VM was a virtual machine monitor and CMS was a single-user OS that felt a lot like CP/M or MS-DOS. (I understand CP/M was inspired by CMS.) If you had a lot of developers, they each got their own OS, and they would normally store their files on "minidisk" images.

Even though the experience wouldn't seem too foreign to somebody who works on the command line today, the I/O was not character-based but instead buffered the way the article describes. Applications were built around a model where you program fields on the terminal, which get submitted when somebody hits the send key, such as the XEDIT text editor

https://www.ibm.com/docs/en/zvm/7.3?topic=zvm-cms-file-edito...

which was functionally similar to the TECO editor you'd see on DEC minicomputers but quite different in implementation. (In that sense, 1970s mainframe apps were similar to 1990s web form applications.)

Since we had Digital right across the border in Massachusetts, schools and children's museums in my area were saturated with PDP-8, PDP-11 and VAX machines. The computer club at my high school (which met in the physics classroom) inherited an old PDP-8 when the school got a VAX. It was an unusual system that had been intended for shipment to a newspaper that didn't buy it in the end, and it had terminals that used an ordinary serial connection but could be programmed to behave like the 3270. We didn't have any software that used that feature until I got out the manuals and wrote a BASIC program that would send the control sequences for that mode.



I've used a 3090 and some of the predecessors and VM/CMS. The "monitor" was called CP (control program) IIRC.

XEDIT was a great editor. There was also Rexx (and previously EXEC and EXEC/2) as the system's programming language, which you could use to customize virtually every aspect of the editor and automate tasks. Rexx was integrated with the editor and also with the OS; there were lots of these small integration points that let you do really powerful stuff. Applications like email were implemented on top of the basic OS and editor. A unique and powerful architecture (mirrored to some degree in OS/2 later).

The ecosystem was incredible. The virtualization support in the CPU let you run a complete multi-user system, with each user having a virtualized CPU within one real CPU. I.e., it was "fully" virtualized. What's more incredible is that a lot of these pieces, like the OS, were all written in assembly. Super robust. Super clean. Amazing documentation for everything. As top-notch as engineering gets.

The full-screen terminals (e.g. 327X) were part of the architecture, delegating a lot of the interaction to the terminal. Interestingly enough, you could poll the terminals for input, which we used for writing some games. A friend of mine wrote a library for doing that. There were also colour/graphics terminals like the 3279, which could be programmed e.g. with a library called GDDM.

EDIT: - https://en.wikipedia.org/wiki/VM_(operating_system)

Another interesting bit is that IBM shipped the full source code for everything (I think this was by default). They also had a shared bug reporting system (anyone remember what that was called?).


Right. It was really common for sites in the 1970s to compile their own OS so they could set the configuration. This was how you told it which devices were attached on which ports.


"Compile their own OS"? I thought a SYSGEN was necessary, at least in the 1990s when I supported these machines doing OS (and later DB2) support.

I suspect it is in general still necessary even these days, but whether they start with MVS 3.8 and gradually build up to whatever machine they are on now, I am not sure.


I did not know you could do that, as I thought 3270 block mode always worked that way. Maybe CICS can do that, but I wonder... I only supported block mode.

It would be interesting whether you could release that to the six-pack VM for us to try. We use 370 C to do gaming currently... does it allow this interaction, I wonder?


If you want unfamiliar, try MVS on IBM s/370 mainframes. Version 3.8j (from the early 80s) is readily available, and runs great on the Hercules emulator: https://www.jaymoseley.com/hercules/installMVS/iMVSintroV8.h...

It made me realize just how many fundamental things that I completely took for granted in the "modern" computing world were ultimately just concepts derived from UNIX (even for OSes that you'd think have little relation to it at all), and how there were (and in some capacity still are) very, very different worlds out there.


Yes, looking at IBM stuff is like being in a parallel universe where everything you take for granted is slightly off. You have token-ring instead of Ethernet, you have SNA (or something) instead of TCP/IP. Characters are EBCDIC, not ASCII. Terminals are connected with coax, not RS-232. For hardware, chips are packaged in metal instead of plastic. Circuit boards are a weird grid. Even the terminology and schematic symbols are different: if it looks like an AND gate, it's an OR gate.
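The EBCDIC point is easy to see for yourself: Python ships a codec for IBM code page 037 (one common EBCDIC variant), so you can compare the two encodings directly. A quick sketch:

```python
# Compare ASCII with EBCDIC (IBM code page 037, one common variant).
text = "HELLO 3270"
ebcdic = text.encode("cp037")
ascii_ = text.encode("ascii")

print(ebcdic.hex())  # c8c5d3d3d640f3f2f7f0
print(ascii_.hex())  # 48454c4c4f2033323730

# Even the digits differ: ASCII '0' is 0x30, EBCDIC '0' is 0xF0.
# And the EBCDIC letters are not contiguous: the alphabet has gaps
# between I/J and R/S, so 'Z' - 'A' is 40, not 25.
e = lambda ch: ch.encode("cp037")[0]
assert ord("Z") - ord("A") == 25   # ASCII: contiguous
assert e("Z") - e("A") == 40       # EBCDIC: gaps
```

The non-contiguous alphabet is a leftover of the punched-card zone/digit encoding, and it is why naive `'A' <= c <= 'Z'` range checks from the ASCII world break on EBCDIC systems.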


Ethernet (OSA cards) and Fibre Channel (FICON cards) are standard on z Mainframes these days. TCP/IP is standard on z/OS, CP (z/VM), AIX and Linux. Terminal emulators connect over TCP/IP, not RS-232 or coax. etc.

But still today:

- The character set for most OSes (not Linux) is EBCDIC.

- The terminal is form-based (like a web page, but with invisible input fields) rather than character based.

- You ALWAYS have to have key punch and card reader devices defined (even on Linux).

- z/OS needs proprietary FICON (not plain fibre channel) connections to emulated ECKD disks (not block based) on one of just a few SANs that support it.

- VSE still needs a TCP/IP stack (one of two 3rd party vendors) purchased separately.

- You need several x86 computers (HMC and two support elements) to manage or even boot-up a mainframe.

- You have to painstakingly configure virtual to physical device address mappings (IOCDS) before you can boot-up anything.

And more.
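The form-based terminal model in the list above can be sketched as a toy in a few lines. This is a deliberately simplified illustration, not the real 3270 data stream; the field addresses and names are made up. The key idea is real, though: the terminal edits fields locally and, on Enter, sends back only the fields whose Modified Data Tag is set (a "Read Modified" operation).

```python
# Toy model of 3270-style field interaction (not the real data stream).

class Field:
    def __init__(self, addr, text="", protected=False):
        self.addr = addr            # buffer address of the field
        self.text = text
        self.protected = protected  # protected fields reject input
        self.modified = False       # the MDT (Modified Data Tag) bit

    def type_into(self, text):
        if self.protected:
            raise ValueError("input inhibited: protected field")
        self.text = text
        self.modified = True        # the terminal sets MDT locally

def read_modified(screen):
    """What the host gets back when the user presses Enter."""
    return {f.addr: f.text for f in screen if f.modified}

screen = [
    Field(0x048, "Userid:", protected=True),
    Field(0x050),                   # input field
    Field(0x088, "Password:", protected=True),
    Field(0x090),                   # input field
]
screen[1].type_into("OPERATOR")
print(read_modified(screen))        # only the one modified field
```

Nothing travels to the host while the user types; the whole screen is edited locally and submitted in one round trip, which is why the comparison to 1990s web forms keeps coming up.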


> - You ALWAYS have to have key punch and card reader devices defined (even on Linux).

Linux doesn’t actually support physical card readers/punches, only paravirtualized readers/punches under z/VM (implemented using DIAG hypervisor call interface). [0] And that’s because paravirtualized card devices are heavily used under z/VM for IPC (inter-VM communication), although there are alternatives (IUCV, TCP/IP). So if you aren’t running under z/VM, Linux can’t use readers/punches, because the hypervisor interface isn’t there. And even under z/VM, Linux will work fine without them, because they are mainly used for sending data between Linux and other mainframe operating systems such as CMS and RSCS, and maybe you aren’t interested in that. And if somehow you managed to connect a real card reader or punch to your mainframe (you’d need to chain together a bus/tag to ESCON bridge with an ESCON to FICON bridge), bare metal Linux would have no idea how to talk to it, because it doesn’t support real card devices, only paravirtualized ones. Linux under z/VM might be able to do so, by relying on the hypervisor’s card device driver.

[0] Have a look at https://github.com/torvalds/linux/blob/v6.10/drivers/s390/ch... – if it encounters a real card reader/punch, it is hardcoded to return -EOPNOTSUPP. Actually it looks like it does use CCWs to write to punches, but it relies on DIAG for reading from card readers and device discovery. And due to that code, even if it is generating the correct CCWs to write to a physical punch (I don't know), it would refuse to do so.


> Linux under z/VM might be able to do so, by relying on the hypervisor’s card device driver.

Actually thinking more about the code I linked, I don’t think this would work - even if the z/VM hypervisor (CP) still knows how to talk to physical card devices (maybe the code has bitrotted, or maybe IBM has removed it as legacy cruft) - the DIAG interface would report it as a physical/real device, and hence that Linux kernel driver would refuse to talk to it


From the "If I could" files, I would have liked to have spent 5 years on an AS/400, trying to make it work for whatever company I was working for.

The best way to learn this stuff is to simply apply it, trying to solve problems.

Going from a high school PET to a college CDC NOS/VM Cyber 730 to an RSTS/E PDP 11/70 was a very educational cross-section of computing that really opened my eyes. If I had gone to school only a few years later, it would have been all PCs, all the time, and I would have missed that fascinating little window.

But I never got to go hands on with an IBM or an AS/400, and I think that would have been interesting before diving into the Unix world.


The OS for the AS/400 is really remarkable as a "path not taken" by the industry and remarkably advanced. Many of the OO architecture ideas that became popular with Java were baked into the OS

https://en.wikipedia.org/wiki/IBM_AS/400

and of course it started out with a virtual machine in the late 1970s.


> Many of the OO architecture ideas that became popular with Java were baked into the OS

I disagree. OS/400 has this weird version of “OO” in which (1) there is no inheritance (although the concept has been partially tacked on in a non-generic way by having a “subtype” attribute on certain object types), (2) the set of classes is closed and only IBM can define new ones.

That’s a long way from what “OO” normally means. Not bad for a system designed in the 1970s (1988’s AS/400 was just a “version 2” of 1978’s System/38, and a lot of this stuff was largely unchanged from its forebear.) But AS/400 fans have this marked tendency to make the system sound more advanced and cutting-edge than it actually was. Don’t get me wrong, the use of capability-based addressing is still something that is at the research-level on mainstream architectures (see CHERI) - but the OO stuff is a lot less impressive than it sounds at first. Like someone in the 70s had a quick look at Smalltalk and came away with a rather incomplete understanding of it.

> and of course it started out with a virtual machine in the late 1970s.

If you consider UCSD Pascal, BCPL Ocode - far from a unique idea in the 1970s. It is just that many of those other ideas ended up being technological dead-ends, hence many people aren’t aware of them. I suppose ultimately AS/400 is slowly turning into a dead-end too, it has just taken a lot longer. I wouldn’t be surprised if in a few more years IBM sells off IBM i, just like they’ve done with VSE


I'll say this. There is more than one side to "object orientation".

A better comparison would be between the AS/400 architecture and Microsoft's COM. That is, you can write COM components just fine in C as long as you speak Hungarian. This kind of system extends "objects" across space (distributed, across address spaces, between libraries and application) and time (persistence) and the important thing is, I think, the infrastructure to do that and not particular ideas such as inheritance.

When I started coding Java in 1995 (before 1.0) it was pretty obvious that you could build frameworks that could do that kind of extension over space and time, and I did a lot of thinking about how you'd build a database designed to support an OO language. Remember, serialization didn't come along until Java 1.1, and then RMI was still really bad, and really cool ideas built on top of them often went nowhere, see

https://en.wikipedia.org/wiki/Tuple_space#JavaSpaces

there was the CORBA fiasco too. What's funny is that it just took years to build systems that expressed that potential, and most of them are pretty lightweight, like what Hazelcast used to be (distributed data structures like IBM's 1990s coupling facility, but so easy... not knocking the current Hazelcast; you can probably do what I used to with it, but I know they've added a lot of new stuff that I've never used). Or the whole Jackson thing, where you can turn objects into JSON without a lot of ceremony.

The more I think about it, objects have different amounts of "reification". A Java object has an 8-16 byte header to support garbage collection, locks and all sorts of stuff. That's an awful lot of overhead for a small object like a complex number type, so they are doing all the work on value types to make a smaller kind of object. If objects are going to live a bigger life across space and time, those objects could get further reification, adding what it takes to support that lifetime.

I worked on something in Python that brought together the worlds of MOF, OWL and Python that was similar to the meta-object facility

https://ai.eecs.umich.edu//people/dreeves/misc/lispdoc/mop/i...

where there is a concept of classes and instances that build on top of the base language so you can more or less work with meta-objects as if they were Python objects but with all sorts of additional affordances.


Yes, AS/400 / IBM i is the other IBM OS I like to play with (I have an actual AS/400e at home), and in a lot of ways I consider it to be the polar opposite of MVS on the mainframe:

Where MVS seems to be missing very simple abstractions that I took for granted, AS/400 abstracts way more than I'm used to, differently, and most importantly far away from the very, very "file-centric" view of today's systems that was derived from UNIX. It indeed shows you what computing could have been, had AS/400 been more open and had those ideas spread farther.

Before I got to know AS/400, I thought UNIX was great, and that it rightfully took over computing. Now, not so much, and I've started to see how detrimental the "everything is a file" concept that UNIX brought into the world actually was to computing in general.


> From the "If I could" files, I would have liked to spent 5 years on an AS/400,

pub400.com still exists and probably will for 5 more years. not sure to what extent you can make it work for a company but you can at least do learning projects on it


I'd forgotten about the weird grid. Good times.


Probably the most expensive proto-board looking thing around.


In the 1980s a clear case of this was that MS-DOS 2.0 had system calls for file operations that basically worked like Unix whereas MS-DOS 1's filesystem API looked like CP/M.

It's an interesting story that IBM really struggled to develop an OS which was "universal" the way the "360" was supposed to be universal. The answer they came to was VM which would let you run different OSes for your batch jobs, interactive sessions, transaction managers, databases, etc. Contrast that to Unix, VAX/VMS or Windows NT where you can run all those things under one OS. In 1970 though, IBM had no idea how to do that.

Note that today a Z installation is very likely to have Linux in the mix

https://www.ibm.com/z/linux

so it is no problem running your POSIX app together with traditional mainframe apps. Also there is a Java runtime

https://www.ibm.com/docs/en/zos-basic-skills?topic=zos-java

so you can host your Java apps.


> In the 1980s a clear case of this was that MS-DOS 2.0 had system calls for file operations that basically worked like Unix whereas MS-DOS 1's filesystem API looked like CP/M.

I honestly don't think that's a good example. On the contrary, I think it actually obscures what I mean, and would lead the casual reader to assume that things were actually much less different than they actually were.

Both MS-DOS and CP/M still had the very clear and almost identical concept of a "file" in the first place. I don't know if CP/M (and in turn CMS) was inspired by UNIX in that way, or whether the "file" concept came from a different common ancestor, but it's worth repeating that MVS has more or less nothing like that "file" concept.

MVS had "datasets" and "partitioned datasets", which I often see people relating to "files" and "directories" through a lens colored by today's computing world. But if you start using it, you quickly realize that the resemblance is actually pretty minimal. (If you use the 1980s MVS 3.8j, that is.)

Both datasets and partitioned datasets require you to do things that even the simplest of filesystems (e.g. FAT12 for MS-DOS) do on their own, completely transparently to the user (or even developer). Moreover, datasets/members are usually (not always) organized as "records", sometimes indexed, sometimes even indexed in a key-value manner. This is so fundamental to the system that it goes down into the hardware, i.e. the disk itself understands the concept of indices, record lengths and even keyed records with associated values. MS-DOS, CP/M, and practically all modern systems instead see "files" as a stream of bytes/words/octets or whatever.
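To make the contrast concrete, here is a hypothetical sketch of the simplest MVS record format, treating a dataset image as fixed-length records (RECFM=FB, LRECL=80 in MVS terms, the card-image default) rather than as a byte stream. The sample contents and helper names are invented for illustration:

```python
# Sketch: a RECFM=FB LRECL=80 dataset is a sequence of 80-byte records,
# with no record separators in the data itself. Record boundaries come
# from the dataset's attributes, not from the bytes.
LRECL = 80

def records(raw, lrecl=LRECL):
    """Slice a fixed-block dataset image into its records."""
    if len(raw) % lrecl:
        raise ValueError("not a whole number of records")
    return [raw[i:i + lrecl] for i in range(0, len(raw), lrecl)]

# Two 80-byte "card images" (space-padded, like punched cards).
raw = (b"//HELLO    JOB".ljust(LRECL) +
       b"//STEP1    EXEC PGM=IEFBR14".ljust(LRECL))

recs = records(raw)
print(len(recs))            # 2
print(recs[1].rstrip())     # b'//STEP1    EXEC PGM=IEFBR14'
```

In a byte-stream world you'd scan for `\n` to find line boundaries; here the "line" length is a property of the dataset (and, on real CKD hardware, of the records on disk), which is one reason copying data between datasets with mismatched attributes is such a chore.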

A lot of this has been abstracted away and "pulled" into the modern and familiar "file" concept the closer you get to z/OS, but that's what MVS back then was like.

A C64 with its 1541 is closer to old-school MVS than MS-DOS and CP/M both are, because a 1541 supports both "sequential" files (byte streams) and "relative" files (indexed record sets), and because it provides a relatively high-level interface to circumvent that altogether and work with the disk ("volume" in MVS parlance) more directly. There's even a "user defined" file type. Altogether, however, the 1541 is again closer to MS-DOS and CP/M, because usually (not always!) you leave the block allocation to the system itself. Like you always do in a modern system and in MS-DOS or CP/M, there is basically no sane way around it (at best you can slightly "tweak" it).

That's not even touching on what "batch processing", and associated job control languages and reader/printer/puncher queues, mean in practice.

It's so alien to the world of nowadays.


> I don't know if CP/M (and in turn CMS) was inspired by UNIX in that way, or whether the "file" concept came from a different common ancestor, […]

CP/M drew heavily on DEC operating system designs, notably RSX-11M – it even had PIP as a file «manipulation» command, as well as the device naming and management commands (e.g. ASSIGN). Perhaps something else too. MS-DOS 1 descended from CP/M, whereas MS-DOS 2 diverged from it and borrowed from UNIX.

> […] but it's worth repeating that MVS has more or less nothing like that "file" concept.

Ironically, the thing blamed today as the root cause of all evil, that [nearly] everything in UNIX is a file and a stream of bytes, was the major liberating innovation and productivity boost that UNIX offered to the world. Whenever somebody mentions that stderr should be a stream of typed objects or the like, I cringe and shudder, as people do not realise how typed things couple the typed-object consumer to the typed-object producer, a major bane of computing in the old days.

The world was a different place back then, and the idea of having a personal computer of any sort was either preposterous or a product of distant future set science fiction.

So, the I/O on mainframes and minicomputers was heavily skewed towards business-oriented tasks, business automation and business productivity enhancements. Databases had not entered the world yet either; they were still incubating in the research departments of IBM et al., and record-oriented I/O was pretty much the mainstream. Conceptually, it was an overengineered Berkeley DB, so to speak, baked into the kernel and the hardware, so it was not possible to just open a file, as it was not, well, a file. In fact, I have an open PDF on my laptop titled «IAS/RSX-11M I/O Operations Reference Manual» that is 262 pages in total and dedicates 45 pages in Chapter 2 alone to describing how to prepare the file control block required just to open a file. I will take an open(2) UNIX one-liner over that any time, thanks.


> CP/M drew heavily on DEC operating system designs, notably RSX-11M

It mostly feels like a single-user version of TOPS-10, which Kildall initially used to write CP/M. OS-8 and RT-11 also borrow heavily from it, making it basically the common ancestor of anything that feels "DOS".


I think those things are largely orthogonal, though. Opening a file can be simple without everything having to be a file.

So, having the concept of files at all (and simple to open ones, to boot) is of course much better than MVS datasets, which barely abstracted the storage hardware at all for you. But on the other end of this, that does not mean everything has to be a file, as UNIX popularized.

To be clear, I am not defending MVS. We've come a long way from that, and that's good. I may however want to defend AS/400, which is far away from UNIX in the other direction, and so in a lot of ways the polar opposite of MVS. However, I haven't actually worked with it enough to know whether its awesome-seeming concepts actually hold up in real life. (Though I've at least frequently heard how incredibly rock solid and dependable AS/400s are.)


> Both MS-DOS and CP/M still had the very clear and almost identical concept of a "file" in the first place.

ms-dos files (after 2.0) were sequences of bytes; cp/m's were sequences of 128-byte 'records', which is why old text files pad out to a multiple of 128 bytes with ^z characters. ms-dos (after 2.0) supported using the same 'file' system calls to read and write bytestream devices like paper tape readers and the console; cp/m had special system calls to read and write those (though pip did have special-case device names). see https://www.seasip.info/Cpm/bdos.html

that is, this

> CP/M (...) instead see[s] "files" as a stream of bytes/words/octets or whatever.

is not correct; cp/m has no system calls for reading or writing bytes or words to or from a file. nor octets, which is french for what ms-dos and cp/m call 'bytes'

admittedly filesystems with more than two types of files (ms-dos 2+ has two: directories and regular files) are more different from cp/m
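the 128-byte-record and ^z conventions described above can be sketched in a few lines (a toy illustration in python, not real cp/m code):

```python
# cp/m files are sequences of 128-byte records, so a text file is padded
# out to a record boundary with ^Z (0x1A), and readers stop at the first ^Z.
RECORD = 128

def cpm_records(text: bytes) -> bytes:
    """Pad a byte string out to whole 128-byte records with ^Z."""
    pad = (-len(text)) % RECORD
    return text + b"\x1a" * pad

def read_cpm_text(raw: bytes) -> bytes:
    """Reading it back: everything up to the first ^Z is the text."""
    end = raw.find(b"\x1a")
    return raw if end == -1 else raw[:end]

raw = cpm_records(b'10 PRINT "HELLO"\r\n')
print(len(raw))            # 128: one whole record
print(read_cpm_text(raw))  # the original 18 bytes back
```

this is also why the convention of ^z meaning end-of-file lingered in ms-dos text tools long after files gained exact byte lengths.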


Granted, I wasn't aware. But unless you also have to tell CP/M how many cylinders and/or tracks you want to allocate for your "file" upfront, and how large the (single!) extent should be should that allocation be exceeded, as well as having to separately enter your file into a catalogue (instead of every file on disk implying at least one directory entry, or multiple for filesystems that support hard links), then CP/M and MS-DOS files are still very similar to each other.

Also, it sounds to me like those 128-byte records were still very much sequential. That is, 128 bytes may have been the smallest unit that you can extend a file by, but after that it's still a consecutive stream of bytes. (Happy to be told that I'm wrong.) With MVS, the "files" can be fundamentally indexed by record number, or even by key, and they will even be organized like that by the disk hardware itself.


yes, exactly right, except that it's not a consecutive stream of bytes, it's a consecutive stream of 128-byte records; all access to files in cp/m is by record number, not byte number


While octet is used in French, it is also used in many other languages and it is preferred in the English versions of many international standards, especially in those about communication protocols, instead of the ambiguous "byte".


it was ambiguous in 01967


> This goes so fundamentally with the system that it goes down into the hardware, i.e. the disk itself understands the concept of indices, record lengths and even keyed records with associated values.

Interesting, so the disk controller firmware understood records / data sets?

I believe filesystems with files that could be record-oriented in addition to byte streams were also common elsewhere; e.g. VMS had RMS on Files-11, the MPE file system was record-oriented until it got POSIX with MPE/iX, and Tandem NonStop's Enscribe filesystem also has different types of record-oriented files in addition to unstructured files.

I assume it was a logical transition for businesses transferring from punch-cards or just plain paper "records" to digital ones.


> businesses transferring from punch-cards... to digital ones.

A small quibble, but punched cards are 100% digital.

A card held a line of code, and that's why terminals default to 80 columns wide: an 80-character line of code.


> Interesting, so the disk controller firmware understood records / data sets?

Yep. The disk was addressed by record in a fundamental manner: https://en.wikipedia.org/wiki/Count_key_data

An offshoot of this is that the Hercules mainframe emulator reflects that in its disk image format, which unlike other common disk image formats is not just an opaque stream of bytes/words.

> I assume it was a logical transition for businesses transferring from punch-cards or just plain paper "records" to digital ones.

Yeah, that is a sensible assumption. In general, MVS's "not-filesystem" world looks in a lot of ways like an intermediary between paper records/tapes and actual filesystems.


> Yep. The disk was addressed by record in a fundamental manner: https://en.wikipedia.org/wiki/Count_key_data

Well, mainstream hard disks (what IBM calls "FBA") are also addressed by record in a fundamental manner. It is just that the records (sectors) are fixed length–often hard disks support a small selection of sector sizes (e.g. 512, 520, 524 or 528 byte sectors for older 512 byte sector HDDs; 4096, 4112, 4160, or 4224 byte sectors for the newer 4096 byte sector HDDs; the extended sector sizes are designed for use by RAID, or by certain obscure operating systems that require them, e.g. IBM AS/400 systems)

Floppies were closer to IBM mainframe hard disks than standard FBA hard disks are. Floppies can have tracks with sectors of different sizes, and even a mix of different sector sizes on a single track; IBM standard floppies (used by PCs) have two different types of sectors, normal and deleted (albeit almost nobody ever used deleted sectors); standard PC floppy controllers have commands to do searches of sectors (the SCAN commands–but little software ever used them, and by the 1990s some FDCs were even omitting support for them to reduce complexity).

And although z/OS still requires CKD (actually ECKD) hard disks, newer software (e.g. VSAM, PDSE, HFS, zFS) largely doesn't use the key field (hardware keys), instead implementing keys in software (which turns out to be faster). However, the hardware keys are still required because they are an essential part of the on-disk structure of the IBM VTOC dataset filesystem.

Actually, the Linux kernel contains support for the IBM VTOC dataset filesystem. [0] Except as far as Linux is concerned, it is not a filesystem, it is a partition table format. [1]

I think part of the point of this, is if you have a mixed z/OS and z/Linux environment, you can store your z/Linux filesystems inside a VTOC filesystem. Then, if you end up accessing one of your z/Linux filesystem volumes from z/OS, people will see it contains a Linux filesystem dataset and leave it alone – as opposed to thinking "oh, this volume is corrupt, I better format it!" because z/OS can't read it

> In general, MVS's "not-filesystem" world looks in a lot of ways like an intermediary between paper records/tapes and actual filesystems.

I think the traditional MVS filesystem really is a filesystem. Sure, it is weird by contemporary mainstream standards. But by the standards of historical mainframe/minicomputer filesystems, less so.

[0] https://github.com/torvalds/linux/blob/v6.10/arch/s390/inclu...

[1] https://github.com/torvalds/linux/blob/v6.10/block/partition...


Yet even Microsoft tried to shove something like that into the PC world, from the OFS effort during Cairo development up to WinFS, which actually appeared in a developer's release of Longhorn.

And even more recently, there've been efforts to expose a native key:value interface on SSDs, to let the drive controller handle the abstraction of the underlying flash cells.

I'm not well-enough versed in this stuff to understand how similar these things are to what you're talking about, however. Very much appreciate any clue you feel like offering.


Easy way to do it:

    docker run -it --rm -p 3270:3270 rbanffy/vm370ce
This will set up a VM/370 Community Edition running on a beefy virtual 4381. Direct your 3270 terminal to port 3270 and check the README at https://github.com/rbanffy/vm370 for some bearings.


> It made me realize just how many fundamental things that I completely took for granted in the "modern" computing world, were ultimately just concepts derived from UNIX

Oh so very much yes.

And the flipside of this is that it's very, very hard to get new ideas into the heads of people who only know, and have only ever seen, computers running Unix-style or Unix-inspired operating systems.

Some examples:

The product that the inventors of Unix built next was Plan 9. It's still alive and in development. Some things that Plan 9 takes as givens blow Unix folks' minds:

* There is no one true view of the filesystem. It's all namespaces and every process can (and normally would) see a different namespace. They are all correct and there is no one valid underlying view: it's all relative, and all interpreted.

* Everything really is a file. For example the contents of a window on screen is a file. Arguably if your GUI doesn't do this, is it really Unix-like?

* There's no keyboard-driven command-line editing because of course your computer has a graphical screen and you can just use the mouse. Why would you pretend your input window is a 1970s 80x25 text terminal? What's the point of that?

Then Plan 9 evolved into Inferno. It breaks more assumptions.

* C is not low level any more, because computers have evolved very far away from the machines C was designed for. C has no native, standard, built-in way to represent matrices of 256-bit values and apply SIMD transformations to them as one operation, and no way to farm out 10000 threads to 1000 compute units.

So, throw C in the bin, and replace it with something a bit more abstract.

* Computers have different types of CPUs. So, compile source to intermediate code and embed the runtime in the kernel. No need for a JVM or any other runtime; no need for WASM or eBPF; it's stock.

* Any binary can execute on any CPU. Computers can have more than one type of CPU and compiled code can move between them without translation.

Or to move further afield...

* The Unix tradition puts the filesystem at the centre of the design. Not all do. So some OS designs do not have filesystems and do not have the concept of files. It's objects all the way down.

* Some OSes and some languages don't have compilers or interpreters or binaries. That's a 1970s implementation detail and it was eliminated 50+ years ago.

This kind of stuff can be much more powerful than the Unix model, but folks reared on Unix dismiss these ideas as toys, not understanding that clinging to a 1970s idea built on computers which are less powerful than toys today is limiting them.



