Hacker News

Wow. I have to say that it is funny how you hear the legendary tales of computing history over and over again and then some little crucial minor details like "we sold the CEO's office on it" eventually slip out.

Ok, that was important at AT&T, but not really why Unix currently rules the universe. Due to antitrust settlements, cheap licensing, and copyright foibles, Unix was the cheap midrange choice for students and non-enterprise users. After a lot of pain, Unix eventually evolved into an "open standard". And that eventually evolved into an "open source" standard with Linux (and BSD, MacOS, etc.). To quote DEC's Ken Olsen[1]:

> [UNIX is] great for students, great for somewhat casual users, and it’s great for interchanging programs between different machines. ... It is our belief, however, that serious professional users will run out of things they can do with UNIX. They’ll want a real system and will end up doing VMS when they get to be serious about programming.

And that is why your cell phone runs Unix and not VMS.

[1] http://sinix.org/blog/?p=16




> And that is why your cell phone runs Unix and not VMS.

While this is unquestioningly accepted by the software community and the world at large, for some reason "low-end manufacturing" in the US and the rest of the developed world is exempt from this logic. Somehow, we'll dispense with this pesky manufacturing economic sector, and when someone wants a "real" high-end manufactured system/product, they will end up purchasing it from these same developed-world economies. We're VMS'ing ourselves to our detriment, and the developing nations are laughing all the way to the bank.

There is so much embedded knowledge, which I've seen played out over and over again in different economic sectors while consulting across different industries, that I no longer buy this line of argument. Anyone who believes it must first prove to the disinterested observer/reader that the "low-end" they say is no longer needed is hermetically packaged into automation, with no loss of institutional knowledge on any front: quality, step-wise improvements leading to innovations, environmental impact, and so on. In which case, we should simply automate it ourselves and retain the embedded wealth represented in that knowledge.


Stakeholders almost never do this. They'd rather go down with the ship than preserve the gnosis in a rigorous manner. People believe their only path is to charge the company rent on pitiful subsets of the overall gnosis as the firm's orbit slowly decays.

And what if the gnosis is essentially wrong? It usually is; it's just not wrong enough to kill the firm outright. Humans consider themselves to be parasites when they can.


"...and it’s great for interchanging programs between different machines..."

This is like saying "DNA is great for interchanging phenotypes between different locations".

I think that the missing piece as to why it's universal is that Linux was better than DOS, not VMS. You kind-of, sort-of had to have a VAX to run VMS. Your desktop 386 could run SCO.

It's ... usable (but of low usability), so the Windows API doomed Linux to success.


What sort of things could/can VMS do that Unix couldn't? I know almost nothing about computer history.


VMS had early support for clustering and associated technologies like shared filesystems with a distributed lock manager.

VMS had amazing support for terminal servers, so that each terminal user could choose among several connected computers. Terminal I/O was almost completely offloaded: on other systems it was common for users to get delays and random short freezes, but VMS terminal users had smooth, responsive I/O all the time.

DEC mandated the Common Language Environment, something like the C library calling convention but much better thought out and specified. It was totally normal to write an application in two or three languages because you felt that each language had advantages for that part. Every compiler and interpreter knew how to call functions from the CLE and export their own.


> VMS terminal users had smooth responsive I/O all the time.

I recall that you could press Ctrl-T and it would instantly echo back some useful information about the running process, like CPU time and page faults. That was a nifty feature.

I loved VMS. Everything was so coherent, whereas Unix looked like it was hacked together overnight by a bunch of people who never talked to each other. These days, I like Unix, but back in the 80s, I only tolerated it.


I was 'exiled' to VMS for a couple of years; though it was a very nice system, there was a lot more typing on VMS's command line. Anyway, I think it was scaling and software that allowed UNIX to take over. I had a useful UNIX on an 8086, and software could be found. On VMS, just about any useful program cost $, never mind the support costs for using VMS.


I really liked the VMS help system; it was hierarchical and you could start at the top and learn pretty much everything about the system by just navigating the help system. This was far better than man pages (which are mainly references for when you already know what to do).

VMS had early clustering technology, which later made its way into Tru64 UNIX as TruCluster.


In VMS, revision control was built into the file system. It was easy to retrieve older versions of files. Unix didn't have anything like that until rcs was invented, and even then it was a tool, not something built in like what VMS had.


Brings back memories. If I recall correctly, when you had this turned on, files would get a version suffix of ;1 ;2 ;3 and so on, and you could specify how many revisions to keep around. I also recall that on our system this was turned off most of the time because disk space was so precious. But I do recall this feature saving my bacon a time or two...


Yep! By default it keeps up to 32,767 versions, but you can limit it:

"The /VERSION_LIMIT qualifier for the CREATE/DIRECTORY, SET DIRECTORY, and SET FILE commands lets you control the number of versions of a file. If you exceed the version limit, the system automatically purges the lowest version file in excess of the limit. For example, if the version limit is 5 and you create the sixth version of a file (ACCOUNTS.DAT;6), the system deletes the first version of the file (ACCOUNTS.DAT;1). To view the version limit on a file, enter the DIRECTORY/FULL command. The version limit is listed in the File attributes: field."

OpenVMS Manual http://h41379.www4.hpe.com/doc/731final/6489/6489pro_006.htm...
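The purge rule quoted from the manual can be emulated in a few lines of Python. This is a toy sketch on a Unix filesystem (using filenames like ACCOUNTS.DAT;3), not how the Files-11 filesystem actually implements it:

```python
import re
from pathlib import Path

def save_version(path, data, limit=5):
    """Write a new numbered version of `path` (VMS-style `name;N`) and
    purge the lowest-numbered versions in excess of `limit`, mimicking
    the /VERSION_LIMIT behavior described in the manual excerpt above."""
    path = Path(path)
    pattern = re.compile(re.escape(path.name) + r";(\d+)$")
    # Collect existing version numbers for this file, lowest first.
    versions = sorted(
        int(m.group(1))
        for p in path.parent.glob(path.name + ";*")
        if (m := pattern.match(p.name))
    )
    new = (versions[-1] + 1) if versions else 1
    Path(f"{path};{new}").write_text(data)
    versions.append(new)
    # Automatically purge the lowest versions beyond the limit,
    # e.g. creating ACCOUNTS.DAT;6 with a limit of 5 deletes ACCOUNTS.DAT;1.
    for v in (versions[:-limit] if limit else []):
        Path(f"{path};{v}").unlink()
```

Writing six versions with a limit of 5 leaves ;2 through ;6 on disk, matching the manual's example.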

I thoroughly enjoyed using DCL (DIGITAL Command Language -- the VMS shell) and VMS (with its wall of documentation, like http://www.pkl.net/~matt/photos/machines/tn/pict0157.jpg.htm...). It always felt structured and orderly, unlike UNIX systems, which are more "organic". :) (I love UNIX too!)


This wasn't unique to VMS. Symbolics Genera had this capability. ITS, VME-B, and I think TOPS-10, also had numbered file versions.


Fair enough. My computing career started in the nineties on VMS at the Help Desk of Brandeis University, my work-study job.


I wonder what the VCS landscape would have looked like if git was being developed on top of VMS rather than Linux...


filename.c;1 filename.c;2 filename.c;3

git checkout filename.c;2


The original MacOS was also supposed to copy this revision capability -- imagine, built-in document versioning. Versioning was surfaced in the API but apparently never implemented: the version field always had to be set to 0 (or 1? it was a long time ago).


This always caused me to run out of disk space (my quota was 512k).


Any idea if it was a full copy of the files, a rolling diff, or a Copy On Write at the block level?


Filesystem level versioning. TRUE async I/O. Reliability for real workloads. Plus a properly configured VMS cluster has never, ever, been hacked.


this means that back in my teenage years I never encountered a properly configured VMS cluster


*cough* SYSTEST/SYSTEST *cough*


It's been a long time since I've used VMS, but one neat feature I recall was sort of a system/group/user concept of environment variables. For example if you set an environment variable at the system level, the new value would immediately be reflected for all users. Powerful concept.


You are thinking of VMS logicals. Of course they can be used for more than just environment variables! They can also be used to provide an extra level of indirection to filenames. Neat stuff!
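The scoped lookup described above, plus the filename indirection, can be sketched roughly in Python. The table names and search order here are a simplification of real VMS logical name tables (which also have job tables, access modes, and access control):

```python
# Toy model of VMS logical names: a lookup searches the process table,
# then the group table, then the system table. Defining a name at the
# system level is therefore immediately visible to every user whose
# process/group tables don't shadow it.

TABLES = {"process": {}, "group": {}, "system": {}}
SEARCH_ORDER = ["process", "group", "system"]

def define(table, name, value):
    """Like DCL's DEFINE: bind a logical name in one table."""
    TABLES[table][name.upper()] = value

def translate(name):
    """Return the first translation found in search order, else None."""
    for table in SEARCH_ORDER:
        if name.upper() in TABLES[table]:
            return TABLES[table][name.upper()]
    return None

def expand(filespec):
    # "LOGICAL:file.ext" -> substitute the logical's translation,
    # giving the extra level of filename indirection mentioned above.
    if ":" in filespec:
        logical, rest = filespec.split(":", 1)
        target = translate(logical)
        if target is not None:
            return target + rest
    return filespec

define("system", "SYS$SCRATCH", "/tmp/")
define("process", "SRC", "/home/me/project/")
print(expand("SRC:main.c"))           # /home/me/project/main.c
print(expand("SYS$SCRATCH:tmp.dat"))  # /tmp/tmp.dat
```

A process-level definition shadows a system-level one for that process only, which is what made per-user redirection of shared names so handy.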


I forgot about that. It was an interesting concept. Question: how can one tell if a UN*X newbie came from VMS? You get a question from them: "How do I change an environment variable for the parent process?" :) That was allowed in VMS, but I forget which type of VMS variable it was.


It had fixed- and variable-length record-based files in addition to regular stream-type files. All had built-in versioning as well.

Also, fast loosely coupled clusters, with distributed file systems and locking... back in the early '80s.


Although I too was initially expecting a tale about how UNIX came to rule the world, I think "top" in the article's title instead means the top of Bell Labs/AT&T.


I risk sounding ignorant, but what does 'VMS' stand for?

I tried googling; it led to several different results for VMS.


VMS (Virtual Memory System) was the OS that ran on Digital's VAX machines. Your sibling comments have more info as well.

- https://en.wikipedia.org/wiki/VAX

- https://en.wikipedia.org/wiki/OpenVMS

- http://www3.sympatico.ca/n.rieck/docs/vms_vs_unix.html


Legendary indeed! I assume the author of this message is not using a pseudonym?


No idea who you are thinking of, but yes this is a pseudonym based on a 1970s krautrock album.


Doug McIlroy, the author of the message referred to in this thread.



