> the next generation will see it as the current one sees TV - unimportant, old, boring, low-entropy data which is not entertaining anymore.
Don't you mean high-entropy data? High entropy data would be less orderly, more compressible, and have a lower signal to noise ratio ... like TV shows compared to a textbook.
Low entropy data is what's more compressible. High entropy means unpredictable, which could mean high noise (like TV static which is incompressible) or high signal as GP intended.
Thank you. I just learned about Shannon entropy and that it grows with information content. The negative sign in the formula comes from taking the negative log of each outcome's probability.
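For concreteness, a minimal statement of the formula under discussion (Shannon entropy of a discrete distribution with probabilities p_i, in bits):

```latex
H(X) = -\sum_i p_i \log_2 p_i = \sum_i p_i \,(-\log_2 p_i)
```

Since every p_i lies in [0, 1], each \log_2 p_i is non-positive, so the leading minus sign is what keeps H(X) >= 0.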
Yes. You can also take the log of the reciprocal probability (or "luck"), and then you don't need the negative sign. The reciprocal 1/p itself is the expected number of trials you would need for the outcome to occur once. I find this presentation a bit more intuitive. See also Boltzmann entropy S = k log W, which takes this form.
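A quick sketch showing that both forms give the same number, plus the 1/p "expected trials" reading (the distribution here is made up purely for illustration):

```python
import math

# Hypothetical discrete distribution over four outcomes (illustrative values only).
p = [0.5, 0.25, 0.125, 0.125]

# Standard Shannon form: H = -sum(p_i * log2(p_i))
h_negative_log = -sum(pi * math.log2(pi) for pi in p)

# Reciprocal ("luck") form: H = sum(p_i * log2(1 / p_i)) -- no minus sign needed.
h_reciprocal = sum(pi * math.log2(1 / pi) for pi in p)

print(h_negative_log, h_reciprocal)  # both 1.75 bits

# The reciprocal 1/p_i itself (not its log) is the expected number of independent
# trials until outcome i occurs once (the mean of a geometric distribution).
print([1 / pi for pi in p])  # [2.0, 4.0, 8.0, 8.0]
```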