
It's largely a variation of delta encoding (https://en.wikipedia.org/wiki/Delta_encoding), using XOR to determine the difference between consecutive values. Both exploit the fact that in certain kinds of series/samples the range (relative entropy) between adjacent values is far smaller than the range of the underlying value representation.
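For illustration, a minimal Python sketch of the XOR flavour (the function name and the float64 framing are my assumptions, not taken from the article): consecutive similar values XOR to words that are mostly zero bits, which is exactly the redundancy a later bit-packing stage exploits.

    import struct

    def xor_deltas(samples):
        # Reinterpret each float64 sample as a 64-bit integer, then XOR
        # each value with its predecessor; similar neighbours produce
        # words that are mostly zero bits.
        bits = [struct.unpack("<Q", struct.pack("<d", s))[0] for s in samples]
        return bits[:1] + [a ^ b for a, b in zip(bits, bits[1:])]

    for d in xor_deltas([12.0, 12.0, 12.5, 13.0]):
        print(f"{d:064b}")

Running this prints the full bit pattern for the first value, an all-zero word for the repeated value, and only a handful of set bits for the small changes after that.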

Whether the resulting deltas are encoded in a space-saving manner directly (as this algorithm does) or used as input to a Burrows-Wheeler transform, Huffman coder, etc. depends on the resulting value sequences.

The broader point is that knowing the data being compressed frequently enables data-specific preprocessing that can significantly increase compression ratios over generic encoders. Just taking the output of this routine and running it through DEFLATE might yield another couple of percent if tuned. Or put another way, all the bit-prefix/shifting decisions might be redundant in certain cases, because the zeros would end up as single-bit tokens in a Huffman encoder anyway.
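A hedged sketch of that preprocessing point (assuming nothing about the original routine, just a synthetic smooth series): XOR-preprocess the samples and hand the raw words to zlib's DEFLATE, letting its Huffman stage soak up the zero-heavy bytes.

    import struct, zlib

    # Synthetic slowly-varying series; results depend entirely on the data.
    samples = [20.0 + 0.01 * i for i in range(1000)]
    bits = [struct.unpack("<Q", struct.pack("<d", s))[0] for s in samples]
    xored = bits[:1] + [a ^ b for a, b in zip(bits, bits[1:])]

    raw = struct.pack(f"<{len(bits)}Q", *bits)    # untouched samples
    pre = struct.pack(f"<{len(xored)}Q", *xored)  # XOR-preprocessed samples
    print(len(zlib.compress(raw, 9)), "bytes without XOR preprocessing")
    print(len(zlib.compress(pre, 9)), "bytes with XOR preprocessing")

The absolute numbers don't matter; the point is that a one-line, data-aware transform in front of a generic encoder can move the ratio noticeably.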




> It's largely a variation of delta encoding

The Amiga famously used a form of hardware-backed delta encoding [1] to display more colours on screen than would otherwise fit in its memory limits; a rough decode sketch follows the link below.

[1] https://en.wikipedia.org/wiki/Hold-And-Modify
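A minimal Python sketch of the HAM6 idea (assuming 6-bit pixel values already extracted from the bitplanes; real hardware seeds the "held" colour from the border colour at the start of each scanline):

    def decode_ham6(pixels, palette):
        # Each 6-bit pixel either loads a full colour from the 16-entry
        # palette or holds the previous colour and modifies one 4-bit
        # channel, i.e. a per-pixel delta against the neighbouring pixel.
        r = g = b = 0  # stand-in for the border colour
        out = []
        for p in pixels:
            ctrl, data = p >> 4, p & 0x0F
            if ctrl == 0b00:    # set: all three channels from the palette
                r, g, b = palette[data]
            elif ctrl == 0b01:  # hold red/green, modify blue
                b = data
            elif ctrl == 0b10:  # hold green/blue, modify red
                r = data
            else:               # hold red/blue, modify green
                g = data
            out.append((r, g, b))
        return out

With 6 bits per pixel this yields up to 4096 apparent colours, at the cost that an arbitrary colour change can take up to three pixels to complete, which is the source of HAM's characteristic fringing.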


HAM would have worked even better in the hue-saturation-value (HSV) colour space Jay Miner originally intended for it. Sadly, HSV support was dropped, and due to schedule constraints there wasn't enough time to rework the design and remove HAM as well. I found it fascinating to learn these things long after I had stopped using my Amiga for day-to-day work.



