They call him the "grandfather of MP3" but I do not remember seeing a single LZ-family algorithm in its spec; Huffman might be more suitable to that title. If they said "grandfather of ZIP", that would definitely be true.
It's always seemed surprising to me that variable-length codes (Huffman, Shannon, etc.) are conceptually more complex than replacing repeated sequences (LZ family), yet the former were "discovered" first. Even RLE, which is like a very limited subset of LZ, came after Huffman. If you compare the amount of code needed to decode even simple static Huffman vs. a common LZ variant like LZ12/4, as well as the typical compression ratios of those schemes, it's clear that the latter is a huge win in efficiency and simplicity. Maybe the lesson here is that if you take something that seems almost like common sense or trivial and turn it into an invention, you just might get lucky and become famous?
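For a sense of how little code the LZ side needs, here is a minimal sketch of a decoder for an LZSS-style scheme (taking "LZ12/4" to mean 12-bit match offsets and 4-bit match lengths; the flag-byte token format below is made up for illustration, not any real container):

    # Toy LZSS-style decoder: each token begins with a flag byte.
    # flag == 0: one literal byte follows.
    # flag == 1: two bytes follow, packing a 12-bit offset and a 4-bit length.
    def lz_decode(data: bytes) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            flag = data[i]
            i += 1
            if flag == 0:                      # literal: copy one byte through
                out.append(data[i])
                i += 1
            else:                              # back-reference into already-decoded output
                word = (data[i] << 8) | data[i + 1]
                i += 2
                offset = word >> 4             # high 12 bits
                length = (word & 0xF) + 3      # low 4 bits, minimum match of 3
                for _ in range(length):        # byte-by-byte copy handles overlapping matches
                    out.append(out[-offset])
        return bytes(out)

    # "abc" as three literals, then "abcabc" as one match (offset 3, length 6)
    tokens = bytes([0, ord("a"), 0, ord("b"), 0, ord("c"), 1, 0x00, 0x33])
    assert lz_decode(tokens) == b"abcabcabc"

A static Huffman decoder isn't huge either, but it needs the code table shipped or agreed on, plus bit-level I/O, before it can emit a single symbol.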
It's quite common that the algorithm itself is the trivial part in an algorithms paper, and the real contribution is the analysis.
If you have a sequence of symbols emitted by a known source, it's often easy to compress the sequence optimally. But if the properties of the source are unknown, optimal compression becomes much harder. The key part of the LZ77 paper was showing that a simple algorithm is asymptotically optimal for a wide range of sources: that the algorithm is a universal compressor rather than a mere heuristic that seems to work pretty well on some test inputs.
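The algorithm itself really is simple; a brute-force sketch of the greedy LZ77 parse fits in a few lines (the window size, minimum match length, and token shapes below are illustrative choices, not the paper's exact formulation):

    # Greedy LZ77 parse: at each position, emit the longest match found in the
    # preceding window, or a bare literal if no match is worth encoding.
    def lz77_parse(data: bytes, window: int = 4096):
        i = 0
        while i < len(data):
            best_len, best_off = 0, 0
            for j in range(max(0, i - window), i):   # brute-force match search
                length = 0
                while i + length < len(data) and data[j + length] == data[i + length]:
                    length += 1
                if length > best_len:
                    best_len, best_off = length, i - j
            if best_len >= 3:                         # back-reference pays for itself
                yield ("match", best_off, best_len)
                i += best_len
            else:                                     # too short, emit a literal
                yield ("literal", data[i])
                i += 1

    # e.g. list(lz77_parse(b"abracadabra abracadabra"))

The hard part is proving that, as the input grows, this kind of parse approaches the entropy of the source without ever being told what the source is.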
Well, variable-length codes go way back; Morse code is from the early 1800s. If you're already using such codes, it's a fairly natural question to ask how good you can make them.
Edit: Arguably language itself uses variable-length codes of letters/phonemes, but that might be a more difficult parallel to spot.
There are many places information theory crops up in linguistics (perhaps unsurprisingly).
For example, there's a correlation between the numbers of vowels/consonants in a language and the length of each syllable, time-wise. Languages like English form slow complex syllables, and languages like Japanese have fast simple syllables. Each English syllable conveys more information but takes longer to do so. In the end, English and Japanese have about the same effective number of distinguishable states in the same amount of time -- the same effective bitrate.
There is an uncanny parallel to the trade-off in modems between symbol constellation size and baud rate.
Interestingly enough, geographies with drier climates tend to develop languages with more consonants and more complex syllables whereas geographies with more humid climates tend to develop languages with more vowels and simpler syllables. Similar bitrate, but different coding strategies for different conditions.
Having made a few compression algorithms over the years for myself (someone nerd-sniped me years ago), the one property I always found interesting was the trade-off between the dictionary/table size and the encoded data: the bigger you make one, the smaller the other can be.
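A toy LZW-style encoder makes that trade-off concrete: cap the dictionary at max_entries, and every emitted code costs ceil(log2(max_entries)) bits, so a bigger table means fewer codes but more bits per code (a toy illustration with made-up parameters, not any particular production format):

    from math import ceil, log2

    def lzw_encode(data: bytes, max_entries: int = 4096):
        table = {bytes([b]): b for b in range(256)}   # seed with all single bytes
        codes, cur = [], b""
        for b in data:
            nxt = cur + bytes([b])
            if nxt in table:
                cur = nxt                             # keep extending the current phrase
            else:
                codes.append(table[cur])              # emit the code for the known prefix
                if len(table) < max_entries:          # grow the dictionary until the cap
                    table[nxt] = len(table)
                cur = bytes([b])
        if cur:
            codes.append(table[cur])
        bits_per_code = ceil(log2(max_entries))
        return codes, len(codes) * bits_per_code      # (codes, total output bits)

Running it on the same input with different caps shows the lever: a small cap emits more codes that are each cheaper, a big cap emits fewer codes that are each wider.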
As always, the true legends of computer science go somewhat unrecognized. It's a shame. Lempel's work ushered in the development of formats such as PDF and of common compression algorithms that have benefited all of us. Rest in peace.
I third the motion. (But also… I am not expecting it to happen. People with low public profiles but pivotal or otherwise tremendous work have often gone without the black bar treatment, mainly because there does have to be a threshold, and it's far easier to say someone deserves recognition than to make the hard call and decide not to.)