.NET ResX localization generates a source file, so localized messages are just `Resources.TheKey` - a named property like anything else in the language. It also catches key-renaming bugs, because the code will fail to compile if you remove or rename a key without updating the code that uses it.
I've only ever seen three reasons for Midori to shut down:
1) they were hitting C# limitations (and started working on custom compilers, etc.); people involved in Midori say Rust has already shipped things they failed to do
2) there was a bit too much academic overeagerness, e.g. software transactional memory, which will kill any project that attempts it
Midori is certainly an interesting project, but no; I meant the old "code access security" model that .NET Framework had.[0][1] Administrators (and other code) could restrict you from doing certain operations, and the runtime would enforce it. It was removed in .NET Core.[2]
Okay, that looks really funky. Like, libraries explicitly state what ambient authority they want to use, and then callers can be constrained by an access control list, or something like that. Really weird design.
I'd love to see someone put genuine thought into what it would take to say that e.g. a Rust crate has no ambient authority. No unsafe, applied transitively. For example, no calling std::fs::File::open; you must pass in a "filesystem abstraction" for that to work.
I think the end of that road could be a way to have libraries that can only affect the outside world via values you pass in (= capabilities), or by busy-looping, deadlocking, or running out of memory (and no_std might be a mechanism to force explicit use of an allocator, too).
(Whether that work is worth doing in a world with WASM+WASI is a different question.)
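A rough sketch of what that capability-passing style could look like in Rust; the `ReadFs` trait, `load_config`, and `RealFs` names are hypothetical, not an established API:

```rust
use std::io;

/// The capability: anything that can read a file to a string by path.
pub trait ReadFs {
    fn read_to_string(&self, path: &str) -> io::Result<String>;
}

/// A library function with no ambient authority: it can only reach the
/// filesystem through the capability the caller hands it.
pub fn load_config(fs: &dyn ReadFs, path: &str) -> io::Result<String> {
    fs.read_to_string(path)
}

/// The application root is the only place that wires the capability
/// to the real OS filesystem.
pub struct RealFs;

impl ReadFs for RealFs {
    fn read_to_string(&self, path: &str) -> io::Result<String> {
        std::fs::read_to_string(path)
    }
}

fn main() -> io::Result<()> {
    let config = load_config(&RealFs, "app.toml")?;
    println!("{config}");
    Ok(())
}
```

The point of the pattern is that only the adapter at the application root touches std::fs, so auditing ambient authority reduces to auditing who constructs that adapter.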
Given enough motivation, access control is irrelevant too. See the early Windows "private" APIs that were used for decades and that Microsoft kept supporting despite being "private", because they knew they were being used and they (used to) care about their users.
IBM*. They approached Intel for the 8088 for the 5150, but said "We want a second source". So Intel reeled in AMD. Second sourcing at the time was pretty common.
Crucially, "the 7-bit coded character set" is described on page 6 using only seven total bits (1-indexed, so don't get confused when you see b7 in the chart!).
There is an encoding mechanism to use 8 bits, but it's for storage on a type of magnetic tape, and even that is still silent on the 8th bit being repurposed. It's likely, given the lack of discussion about it, that the choice of 8 bits was for ergonomic or technical purposes related to the medium (8 is a power of 2) rather than for future extensibility.
Notably, it is mentioned that the 7-bit code is developed "in anticipation of" ISO requesting such a code, and we see in the addenda attached at the end of the document that ISO began to develop 8-bit codes extending the base 7-bit code shortly after it was published.
So, it seems that ASCII was kept to 7 bits primarily so "extended ASCII" sets could exist, with additional characters for various purposes (such as other languages, but also for things like mathematical symbols).
Mackenzie claims that parity was an explicit concern in selecting a 7-bit code for ASCII. He cites the X3.2 subcommittee, although he doesn't say which document exactly, but considering that he was a member of those committees (as far as I can tell) I would put some weight on his word.
When ASCII was invented, 36-bit computers were popular, and a 36-bit word would fit five ASCII characters with just one unused bit. Before that, 6-bit character codes were used, and a 36-bit word could fit six of them.
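To make the arithmetic concrete, here's an illustrative sketch (the function name is made up) of packing five 7-bit codes into a 36-bit word:

```rust
// Five 7-bit ASCII codes use 35 bits, so they fit in a 36-bit word
// with one bit left over; six 6-bit codes fill such a word exactly.
fn pack_five_7bit(chars: [u8; 5]) -> u64 {
    chars
        .iter()
        .fold(0u64, |word, &c| (word << 7) | u64::from(c & 0x7F))
}

fn main() {
    let word = pack_five_7bit(*b"ASCII");
    assert!(word < 1 << 35); // occupies at most 35 of the 36 bits
    println!("{word:036b}"); // show it as a 36-bit word
}
```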