My favourite weird Ruby thing (it's on the list, anyway) is that CONSTANTS can be reassigned, but it raises a warning...
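For example (an irb session; the exact warning wording varies a little between Ruby versions, and `PI_ISH` is just a made-up name):

```ruby
PI_ISH = 3.14
PI_ISH = 3.15
# warning: already initialized constant PI_ISH
# warning: previous definition of PI_ISH was here
puts PI_ISH  # => 3.15 -- the reassignment goes through anyway
```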
Of course, you can also assign a mutable object (a hash, for example) to a CONSTANT and mutate it without any warning at all.
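Something like this (`CONFIG` is an invented name; `FrozenError` arrived in Ruby 2.5, older versions raise `RuntimeError`):

```ruby
CONFIG = { retries: 3 }
CONFIG[:retries] = 5  # silent: the constant still points at the same hash
CONFIG.freeze         # freezing the object closes that hole...
CONFIG[:retries] = 7  # ...by raising FrozenError at runtime
```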
It's the same reason private methods are called private even though you can still call them with `send`. Ruby does not allow one programmer to handcuff another; it only allows you to put up warning signs: "I don't expect you to redefine this reference, and you shouldn't depend on the stability of this internal method."
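A quick sketch, with an invented class, of how soft that boundary is:

```ruby
class Safe
  private

  def combination
    "12-34-56"
  end
end

Safe.new.send(:combination)  # => "12-34-56" -- the sign is posted, not enforced
Safe.new.combination         # NoMethodError: private method `combination' called
```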
"Ruby does not allow one programmer to handcuff another..."
That's not a valid argument.
"Q: Why does your OS allow any program to write anywhere in memory?"
"A: Because we believe in freedom. Our profound belief is that the OS should not allow to handcuff programmers. They must be free to do anything. They should be able to write anywhere in memory and anywhere to disk"
How do you want people to be able to reproduce state and to reason about a program if f*cking CONSTANTS can be modified!?
It's not an argument, it's a description. That is, in fact, the way Ruby generally works.
>> Q: Why does your OS allow any program to write anywhere in memory?
That's not an accurate comparison. A user installing two programs cannot be expected to ensure that they play nicely together. A programmer can be expected to ensure that an application's code and libraries play nicely together - especially in a non-compiled language, where you always have the source code.
All the more so where you have a community that generally promotes the best libraries and the developers who make them.
>> How do you want people to be able to reproduce state and to reason about a program if f*cking CONSTANTS can be modified!?
I'm not proposing a crazy new system; I'm describing a language that has been quite successful. In actual practice, I have never, ever had a bug that was caused by a constant being redefined. I've never accidentally redefined one, and if, hypothetically, I did, I would see a warning.
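And if you want that warning to be load-bearing, you can escalate it yourself. A sketch, assuming Ruby >= 2.4 (where `Warning.warn` is the hook for interpreter warnings; the module name here is made up):

```ruby
# Make constant-redefinition warnings fatal. Prepending onto the
# singleton class keeps the default behaviour reachable via super.
module FatalConstantRedefinition
  def warn(message, **kwargs)
    raise message if message.include?("already initialized constant")
    super
  end
end
Warning.singleton_class.prepend(FatalConstantRedefinition)

A = 1
A = 2  # now raises RuntimeError instead of just printing a warning
```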
Reductio ad absurdum! We're not talking about full access to anyone's memory here. We're talking about boundaries that, if you have control over your software, you can handle however you want anyway.
The type of flexibility that Ruby allows makes it a really interesting language. In 99.999% of cases people never use all these features, and it's considered bad form to do so.
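The classic instance of that flexibility is open classes. A toy example, not a recommendation:

```ruby
# Any Ruby file can reopen a core class and add (or replace!)
# methods on it, program-wide.
class String
  def shout
    upcase + "!"
  end
end

"hello".shout  # => "HELLO!"
```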
Ruby is one of those languages that depends a lot on cultural cues to work so well, but unlike other languages it doesn't enforce everything on you. It's a tradeoff, and it's by no means better than other languages; it's just different, and it pleases certain developers more than other languages do.
Of course it is a valid argument. You just don't like it. That's your right. But claiming the argument is invalid is just dumb.
Your OS analogy is also poor. There's about two decades' worth of discussion about memory protection in AmigaOS and its spiritual successors (AROS, MorphOS, etc.), for example, and while some people involved with those are strongly in favour of adding things like memory protection (though it is hard to do without severely breaking compatibility), there are also a lot of people for whom part of the appeal of these OS's is still that a user can hook into anything and everything:
For AmigaOS there are user-level applications that can replace the task (process) scheduler, for example. There are user-level applications that add virtual memory. There's a library that gives any user-level application free rein to manipulate the MMU more easily. The OS itself provides API calls that allow user applications to replace the code used for every library call, system-wide - the equivalent of being able to override syscalls in Linux. There are assemblers with explicit support for mucking around with hardware registers and OS structures. There are applications designed to trace everything that happens at the OS level. And a lot of these capabilities are in fairly common use by the (admittedly few) users of these systems.
You might argue (and I'd agree with you) that this isn't a great basis for a modern OS. But that does not make it an invalid stance - most of these people don't want what we consider a modern OS. Many of them instead want something where they can do exactly those kinds of things, with nothing like a userland-kernel barrier to deal with.
So, my point is, why are they called CONSTANTS?