Too bad it won't be available at scale. We had a customer who asked if they could get 200 of those boxes for their AI scientists, but they're apparently only sold in single quantities on the NVidia store or something, not through the official distribution channels.
There is a variable declared right before the waste-space function: the 'wasted' space is the statically allocated memory for that variable, 'ospace'.
There's nothing in that repo that says, but at a guess: old machines often had non-uniform ways to access memory, so it may have been to test that the compiler would still work if the binary grew over some threshold.
Even today's machines often have a limit as to the offset that can be included in an instruction, so a compiler will have to use different machine instructions if a branch or load/store needs a larger offset. That would be another thing that this function might be useful to test. Actually that seems more likely.
It might be instructive to compare the binary size of this function to the offset lengths allowed in various PDP-11 machine instructions.
Yes, it seems like this has something to do with hardware testing. Maybe memory or registers that needed exactly X bytes for overflow tests or the like. It's really arbitrary, and the only person who would know is the one who wrote it :)
Wild guess: it was a way to offset the location of the "main" function by an arbitrary number of bytes. In the a.out binary format, this translates to an entry point that is not zero.
" A second, less noticeable, but astonishing peculiarity is the space allocation: temporary storage is allocated that deliberately overwrites the beginning of the program, smashing its initialization code to save space. The two compilers differ in the details in how they cope with this. In the earlier one, the start is found by naming a function; in the later, the start is simply taken to be 0. This indicates that the first compiler was written before we had a machine with memory mapping, so the origin of the program was not at location 0, whereas by the time of the second, we had a PDP-11 that did provide mapping. (See the Unix History paper). In one of the files (prestruct-c/c10.c) the kludgery is especially evident. "
Yeah, I was wondering about that too. But if he stopped some weeks ago, why did he continue till now under his alias? Just to make people think they're not the same person?
The high-bit one is pretty ancient by now; I don't think we have transmission methods that aren't 8-bit clean anymore. And if your file detector detects "generic text" before any more specialized detections (like "GIF87a"), and thus treats everything that starts with ASCII bytes as "generic text", then sorry, but your detector is badly broken.
There's no reason for the high-bit "rule" in 2025.
I would argue the same goes for the 0-byte rule. If you use strcmp() in your magic byte detector, then you're doing it wrong
The zero byte rule has nothing to do with strcmp(). Text files never contain 0-bytes, so having one is a strong sign the file is binary. Many detectors check for this.
That might be true for ASCII, but there are other text encodings out there.
And again, if a detector doesn't check for the more specific matches first, before falling back to "ah, that seems to be text", then the detector is broken
As I understand it (which might be incorrect), they don't want to tell people "use Apple encryption" anymore and silently removed that advice from their websites. Probably due to the fact that they didn't get their backdoor access to user data, so now they want people to just not encrypt stuff.