
I think some formats already support embedded thumbnails. (EXIF metadata can contain thumbnails, I believe)


But that is not the same. That is a small but fixed-size second version of the image, embedded in the same file.

With FLIF you simply read the first N bytes of the full image and get a reasonable preview. You choose how big or small N has to be, depending on the size of the thumbnail you want to show. Maybe first read N bytes for each image to get a quick but rough preview, then repeatedly read a few bytes more to enhance the thumbnail.
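
A rough sketch of the idea (decode_partial is a hypothetical helper, not an actual libflif call; any decoder that tolerates truncated input would do):

    def progressive_preview(path, chunk_size=4096, passes=3):
        """Read the file in chunks and yield increasingly detailed previews."""
        data = b""
        with open(path, "rb") as f:
            for _ in range(passes):
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                data += chunk
                # decode_partial is assumed to decode whatever is
                # recoverable from a truncated FLIF file so far.
                yield decode_partial(data)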


To be fair, that's not the same thing either. A thumbnail is resampled in a way to resemble the original at a smaller size. This will be resampled with (more or less) nearest neighbor, which means lots of aliasing and possibly looking nothing like the original, depending on the subject.


That's true. I tried a variant where the resampling would be better, but while it is possible, it hurts the compression rate significantly. At the level of a single step: if you have two pixels A and B that are both 8-bit numbers, then if you want some lossless way to store both (A+B)/2 (an 8-bit number) and enough information to restore both A and B, it will take at least one extra bit (9 bits, e.g. to store B and the least significant bit of A+B). So a 24-bit RGB image would become a 27-bit image when interlaced with better resampling (except for the very first pixel, which would need only 24 bits).
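
To make that concrete, here is a small worked example (just illustrating the counting argument, not FLIF's actual encoder):

    def encode_pair(a, b):
        # Store the 8-bit average plus 9 extra bits: B and the LSB of A+B.
        avg = (a + b) // 2      # usable directly as the downscaled pixel
        lsb = (a + b) & 1       # the one extra bit
        return avg, b, lsb      # 8 + 8 + 1 = 17 bits for two 8-bit pixels

    def decode_pair(avg, b, lsb):
        # A + B = 2*avg + lsb, so A can be recovered exactly.
        return 2 * avg + lsb - b, b

    assert all(decode_pair(*encode_pair(a, b)) == (a, b)
               for a in range(256) for b in range(256))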

In practice, the simplistic resampling is not likely to be an issue -- of course you can create a malicious image which is white on the even rows and black on the odd rows, and then all previews would be black while they should be grey. But most real images are not like that -- e.g. photographs. You can also just decode at a somewhat higher resolution and scale down from that. (You have to start from a power-of-two scaled image anyway.) Also note that luma (Y) is emitted in more detail earlier than chroma, so most of the error will be in the less important chroma channels. Other than that it's just Adam7 interlacing, but with no upper bound on the number of passes (so you could call it Adam-infinity interlacing).
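
For what it's worth, that striped worst case is easy to reproduce with a toy numpy sketch (this is only an illustration of the aliasing, not FLIF code; which stripe the preview lands on depends on which rows the subsampler keeps):

    import numpy as np

    img = np.zeros((8, 8), dtype=np.uint8)
    img[0::2, :] = 255                      # white even rows, black odd rows

    nearest = img[1::2, 1::2]               # subsampler that keeps the odd rows
    boxed = img.reshape(4, 2, 4, 2).mean(axis=(1, 3))  # 2x2 box filter

    print(nearest.mean())   # 0.0   -> preview is all black
    print(boxed.mean())     # 127.5 -> preview is mid-grey, like the original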


I haven't checked, but I suspect that a better down scaling method can be used while still being compatible with the same decoding algorithm.


Hmm, how could it be? The pixel values have to fit into the eventual reconstructed image. If the values were different from any pixels found in the final image, it wouldn't be progressively loading; it would have several different-sized images embedded in it.


(edited) After reading the author's comment above mine, you're very probably right.


Part of the beauty of FLIF is that there are no dedicated thumbnails, though.


Even if you don't directly work with FLIF because tools don't support it, having the file browser's preview feature work using FLIF would be awesome. That way it wouldn't need to store n previews for the supported preview resolutions, but could store just one FLIF of the max preview size per document (psd, jpg, png, svg, pdf); then, when you scroll 3 pages down, the system reads the first kB of each such preview to render it, reads the next kB to improve the images, etc.

Sure, for actual flif files you would not have a separate preview file.



