Hacker News

Entropy is a measure of the complexity or disorder of a signal. The interesting part is that the disorder is with respect to the proper basis or dictionary: something can look complex in one encoding but be low entropy in the right encoding. You need to know the right basis, or figure it out from context, to accurately determine the entropy of a signal. A much stronger way of building a tool like the OP's is to have a few pre-computed dictionaries for a range of typical source texts (source code, natural language), encode the string against each dictionary, and compare the compressibility. A high-entropy string like a secret will compress poorly against all available dictionaries.
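A minimal sketch of that idea using zlib's preset-dictionary support; the toy dictionary and the sample strings are made up for illustration, not taken from any real tool:

```python
import zlib

def compressed_size(data, dictionary=None):
    """Size of data after DEFLATE, optionally primed with a preset dictionary."""
    comp = zlib.compressobj(level=9, zdict=dictionary) if dictionary else zlib.compressobj(level=9)
    return len(comp.compress(data) + comp.flush())

# Toy "source code" dictionary: common fragments of the target text type.
code_dict = b"import os\nfor i in range(len(items)):\n    print(items[i])\nreturn None\n"

snippet = b"for i in range(len(items)):\n    print(items[i])\n"  # code-like
secret = b"kQ9zX2vB7mN4pL1wR8tY5uJ3aF6hD0sG"                     # random-looking

# The code-like string compresses well against the code dictionary,
# while the high-entropy secret compresses poorly against it.
print(compressed_size(snippet, code_dict), len(snippet))
print(compressed_size(secret, code_dict), len(secret))
```

In a real tool you'd build each dictionary from a corpus of the relevant text type and flag strings whose compression ratio stays high against every dictionary.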


I only briefly browsed the code, but this seems to be roughly what yelp/detect-secrets does.

Anyway, that doesn't really answer my question. To summarize the answers in this thread, I think PhilipRoman has captured the essence of it: strictly speaking, the idea of the entropy of a known string is nonsense. So, as I suspected, the information-theoretic definition isn't meaningfully applicable to the problem. And as other commenters like you mentioned, what we are really trying to measure is essentially Kolmogorov complexity, which, strictly speaking, is incomputable; but measuring the compression ratio under some well-known popular compression algorithm empirically seems to be a good enough estimate.
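For what it's worth, that empirical estimate is easy to sketch with zlib (the function name and sample inputs here are mine):

```python
import random
import zlib

def compression_ratio(data):
    """Crude Kolmogorov-style complexity estimate: compressed size over original size."""
    return len(zlib.compress(data, 9)) / len(data)

english = b"the quick brown fox jumps over the lazy dog " * 20
random.seed(0)
noise = bytes(random.getrandbits(8) for _ in range(len(english)))

print(compression_ratio(english))  # structured text: well below 1
print(compression_ratio(noise))    # random bytes: close to (or above) 1
```

The absolute numbers depend on the compressor and input length, which is exactly why published "known constants" per text type would be interesting.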

But I think it's still an interesting linguistic question. Meaningful or not, it's well defined: does it appear to work in practice? Are there known constants for different kinds of text under any of these (or other) metrics? I would suspect this has been explored already, but neither I nor anybody else in this thread seems to have ever stumbled upon such an article.


bookmarking to think about later... does this hold for representing numbers in one base compared to another?

Regarding a prime as having higher entropy / less structure than, say, a perfect square or a highly divisible number:

a prime is a prime in any base, but the number of divisors will differ in non-primes; if the number is divisible by the base, then it may appear to have more structure (a smaller function is necessary to derive it, Kolmogorov-style),

does prime factorization have anything to do with this? I can almost imagine choosing a large non-prime whose divisibility is only obvious in a particular base, such that the base becomes the secret key; the base of a number is basically specifying your dictionary, no?


I don't think there's any interesting difference between bases. Usually the base you represent things in is a relatively small number (using very large bases is already wildly inefficient). I think it usually only makes sense to consider constant or logarithmic bases: if your base scales linearly with your number, things are going to be weird.

The problem of finding factors is only hard when you're asking about relatively big factors. If you're looking for constant- or log-sized factors, you can just do trial division and find them.
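A sketch of that point: trial division up to a fixed bound recovers every small factor quickly, and only the large cofactor stays hard (the helper name is mine):

```python
def small_factors(n, bound):
    """Trial division: strip out all prime factors of n up to `bound`."""
    factors = []
    d = 2
    while d <= bound and n > 1:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    return factors, n  # the remaining cofactor holds only factors > bound

# Small factors fall out immediately; the large prime cofactor (10007) survives.
print(small_factors(2**4 * 3 * 10007, bound=100))  # ([2, 2, 2, 2, 3], 10007)
```

So any "hidden structure" a clever base choice exposes would have to live in the large factors, where trial division no longer helps.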


Changing the base of a number's representation to a random base feels like XORing a string with a random string, which is to say you're adding entropy equal to that of the random string. My thinking is that for any number representation M, you can get any other number representation N given a well-chosen base. So when presented with the encoded N, the original number could be any other number with the same number of digits. But once you put reasonable bounds on the base, you lose that flexibility and end up adding negligible entropy.


> So when presented with the encoded N, the original number could be any other number with the same number of digits

Not necessarily the same number of digits: when changing the base, the number of digits may change as well. E.g., decimal 8 becomes 1000 in binary.
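A quick sketch making the digit-count change concrete (the helper is mine):

```python
def digits(n, base):
    """Digits of n in the given base, most significant first."""
    out = []
    while n:
        out.append(n % base)
        n //= base
    return out[::-1] or [0]

print(digits(8, 10))  # [8]: one digit in base 10
print(digits(8, 2))   # [1, 0, 0, 0]: four digits in base 2
```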


Also bookmarking to think about it.

My mind drifted towards the Fourier transform: using the transform as a way of describing a system with less entropy?

Or am I butchering all of mathematics by making this comparison?


There's some precedent for that. I'm pretty sure wavelets are SOTA for compression.


> the number of divisors will differ in non-primes

Could you please present an example of this?



