It certainly runs 16-bit Windows games better than Windows 11, which can't run them at all. Not that there are a ton of those, but it's still pretty neat that they work.
One bit of magic you may be interested in is pivot_root, which allows another filesystem to take the place of the root filesystem (e.g. the filesystem mounted at /mnt becomes /, and the old / shows up at /old). It's usually used during startup, to allow the "real" root filesystem to take the place of the initrd, but could have other uses.
Last time I tried to use it, though, I just could not get it to let go of the old root filesystem, even after repeatedly killing the processes I could and restarting the rest.
Taking control at the initrd stage, as in the second page of the article, is significantly more reliable.
But have busybox in your initrd so you don't have to suffer. It takes up 0.5% of the size of my initrd file.
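For the curious, here's a hedged sketch of the pivot_root sequence, assuming x86-64 Linux. It needs root (and usually its own mount namespace), so it's illustrative rather than something to run as-is; the syscall number and the commented sequence at the bottom are the standard ones from pivot_root(2), not anything specific to my setup.

```python
import ctypes
import os

libc = ctypes.CDLL(None, use_errno=True)
SYS_pivot_root = 155  # x86-64 syscall number

def pivot_root(new_root: bytes, put_old: bytes) -> None:
    # pivot_root has no portable libc wrapper, so go through syscall().
    if libc.syscall(SYS_pivot_root, new_root, put_old) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))

# Typical use at the end of an initrd's /init, once the real root is
# mounted at /mnt and the directory /mnt/old exists:
#   os.chdir("/mnt")
#   pivot_root(b".", b"old")  # / and /mnt swap; old root ends up at /old
#   os.chroot(".")
#   os.chdir("/")
```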
If I had a nickel for every AI-poisoned "researcher" I'd seen with a preprint full of nonsense buzzwords like "quantum fractal holographic resonance matrix"... well, I wouldn't be rich, but I'd probably at least have enough to buy a coffee.
The Art of Electronics, by Horowitz and Hill, is aimed at a university or professional audience, but could also be an incredible learning resource for a younger student (or older hobbyist!) interested in learning more about the field.
Speaking for myself, I would have loved to read something like this when I was first experimenting with electronics as a child. A lot of the details would have gone over my head, but even just knowing the general outlines of the topics it covered would have been a huge step up.
A lot of modern SUVs have "360° backup camera" features which work similarly - the car uses footage from cameras mounted around the vehicle to synthesize a top-down view. It's great for backing out of tight parking spaces, and I can only imagine it's even more useful on a bus.
> zstd does everything in frames and everything in those frames can be decompressed separately (so you can seek and decompress parts). Bzip2 doesn’t do that.
This isn't accurate.
1) Most zstd streams consist of a single frame. The compressor only creates multiple frames if specifically directed to do so.
2) bzip2 blocks, by contrast, are fully independent - by default, the compressor works on 900 kB blocks of input, and each one is stored with no interdependencies between blocks. (However, software support for seeking within the archive is practically nonexistent.)
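Independent frames or blocks are what make seeking possible once you also have an index. A minimal sketch of the idea, using the stdlib bz2 module and an invented in-memory index (real seekable-zstd archives use their own on-disk format):

```python
import bz2

CHUNK = 4096  # compress the input in independent 4 kB chunks

def compress_seekable(data: bytes):
    index = []  # (offset, compressed_length) for each chunk
    out = bytearray()
    for i in range(0, len(data), CHUNK):
        c = bz2.compress(data[i:i + CHUNK])
        index.append((len(out), len(c)))
        out += c
    return bytes(out), index

def read_chunk(blob: bytes, index, n: int) -> bytes:
    # Decompress chunk n without touching any other chunk.
    off, length = index[n]
    return bz2.decompress(blob[off:off + length])

data = bytes(range(256)) * 64  # 16 kB of sample data
blob, idx = compress_seekable(data)
assert read_chunk(blob, idx, 2) == data[2 * CHUNK:3 * CHUNK]
```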
The biggest savings for a service like GMail are going to come from deduplication - e.g. if you can recognize that a newsletter went out to a thousand subscribers and store them all as deltas from a "canonical" copy - congratulations, that's >1000:1 compression, better than you could achieve with any general-purpose compressor. Similarly, if you can recognize that an email is an Amazon shipping confirmation or a Facebook message notification or some other commonly repeated "form letter", you can achieve huge savings by factoring out all the common elements, like images or stylesheets.
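As a toy illustration of the delta idea (not anything GMail actually does), you could store one canonical body plus a per-recipient opcode delta; assuming near-identical copies, each delta only holds the runs that differ:

```python
import difflib

def make_delta(canonical: str, variant: str):
    # Record SequenceMatcher opcodes, storing text only for differing runs.
    sm = difflib.SequenceMatcher(a=canonical, b=variant, autojunk=False)
    return [(tag, i1, i2, None if tag == "equal" else variant[j1:j2])
            for tag, i1, i2, j1, j2 in sm.get_opcodes()]

def apply_delta(canonical: str, delta) -> str:
    # Rebuild the variant from the canonical copy plus the stored runs.
    return "".join(canonical[i1:i2] if tag == "equal" else text
                   for tag, i1, i2, text in delta)

base = "Hi NAME, our March newsletter is out. Read it at example.com/march."
copy = "Hi Alice, our March newsletter is out. Read it at example.com/march."
delta = make_delta(base, copy)
assert apply_delta(base, delta) == copy
```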
I kind of doubt they would do this to be honest. Every near-copy of a message is going to have small differences in at least the envelope (not sure if encoding differences are also possible depending on the server), and possibly going to be under different guarantees or jurisdictions. And it would just take one mistake to screw things up and leak data from one person to another. All for saving a few gigabytes over an account's lifetime. Doesn't really seem worth it, does it?
That's why I suggested a base and a delta. Whereas PP was talking about a general compression algorithm, my question was different.
In line with the original comment, I was asking about specialized "codecs" for GMail.
Humans do not read the same email many times, which makes it a good target for aggressive compression - the decompression cost is rarely paid. I believe machines do read the same email many times, but that could be architected around.
These and other email-specific redundancies ought to be covered by any specialized compression scheme. Also note that a lot of standard compression is deduplication; fundamentally they are not that different.
Given that one needs to support deletes, this will end up looking like a garbage collected deduplication file system.
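A toy sketch of that shape, with invented names: content-addressed chunks, a refcount maintained on insert and delete, and a sweep that frees unreferenced chunks:

```python
import hashlib

class ChunkStore:
    """Minimal refcounted, content-addressed chunk store."""

    def __init__(self):
        self.chunks = {}  # sha256 hex digest -> chunk bytes
        self.refs = {}    # sha256 hex digest -> reference count

    def put(self, data: bytes) -> str:
        # Identical chunks hash to the same key, so they're stored once.
        h = hashlib.sha256(data).hexdigest()
        if h not in self.chunks:
            self.chunks[h] = data
        self.refs[h] = self.refs.get(h, 0) + 1
        return h

    def delete(self, h: str) -> None:
        # Deleting a message just drops one reference to its chunks.
        self.refs[h] -= 1

    def gc(self) -> int:
        # Sweep: reclaim chunks nobody references anymore.
        dead = [h for h, n in self.refs.items() if n == 0]
        for h in dead:
            del self.chunks[h]
            del self.refs[h]
        return len(dead)
```

Usage-wise, two puts of the same bytes store one chunk with refcount 2, and the chunk only disappears after both deletes plus a gc() pass.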
Also potentially relevant: in the 00s, the performance gap between gzip and bzip2 wasn't quite as wide - gzip has benefited far more from modern CPU optimizations - and slow networks / small disks made a higher compression ratio more valuable.