spider-mario's comments

Lossless JPEG XL encoding is already fast in software and scales very well with the number of cores. With a few cores, it can easily compress 100 megapixels per second or more. (The times you see in the comment with the DPReview samples are single-threaded and for a total of about 400 MP, since each image is 101.8 MP.)
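
For instance, a rough timing sketch driven from Python with libjxl's cjxl tool (the input file and thread count are placeholders; cjxl is assumed to be on PATH):

    import subprocess, time

    start = time.time()
    subprocess.run(
        ["cjxl", "sample.png", "out.jxl",
         "-d", "0",            # distance 0 = mathematically lossless
         "--num_threads=8"],   # let the encoder use several cores
        check=True,
    )
    print(f"encoded in {time.time() - start:.2f} s")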


HALIC does almost the same degree of compression tens of times faster. And interestingly, it consumes almost no memory at all. Unfortunately for JPEG XL, that is simply how things stand.


HALIC being faster still doesn’t mean that lossless JXL is so slow as to warrant hardware acceleration.


Yes, I also think that HALIC should be destroyed ;)


> However, perhaps you are talking about an image in JPEG XL, using features only in JPEG XL (24-bit, HDR, etc.) that obviously couldn't be converted in a lossless way to a JPEG.

A lot of those features (non-8×8 DCTs, Gaborish and EPF filters, XYB) are enabled by default when you compress a non-JPEG image to a lossy JXL. At the moment, you really do need to compress to JPEG first and then transcode that to JXL if you want the JXL→JPEG direction to be lossless.
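
To make that concrete, here is a sketch of the lossless JPEG → JXL → JPEG roundtrip using the cjxl/djxl command-line tools (file names are placeholders):

    import hashlib, subprocess

    # cjxl transcodes JPEG input losslessly by default; djxl can then
    # reconstruct the original JPEG bit for bit.
    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
    subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    assert sha256("photo.jpg") == sha256("roundtrip.jpg")  # byte-identical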


Good news! As mentioned in the article, it also does lossless.


52 911 bytes instead of 53 kB is really not that far off.

And your AVIF is certainly not without visual changes. The colours are off and there is visible ringing.


You are right, I changed it to "acceptable visual changes".


Careful, next they’re going to argue that once you copy the raw files off the SD card, they’re not the same images anymore.


If you copy something, by definition, it is not the same file. It is a copy of the file, not the original file.

If you copy a Van Gogh is it worth the same as the original?


No, but it’s also a painting instead of a digital file, so different considerations apply (maybe the copy wouldn’t be strictly identical, maybe the value is affected by “knowing that Van Gogh is the one who applied the paint to the canvas” or by the fact that only one such copy exists), and this is therefore a false analogy.

If you copy the number written on a piece of paper to another piece of paper, is it the same number? Yes, it is, and a digital photograph is defined by the numbers that make it up. Once you have two identical copies of a file, what difference does it make which one you read the numbers from?

Or are you arguing that when the camera writes those numbers to the raw file, it’s already a different image than was read from the sensor? After all, they were in volatile memory before a copy was written to the SD card.
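
Put differently, the two copies are indistinguishable by construction; a quick check (paths are hypothetical):

    import hashlib

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # The copy on disk hashes the same as the file on the card:
    print(sha256("card/DSC0001.ARW") == sha256("backup/DSC0001.ARW"))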


There is always noise. In fact, in absolute terms, there is more of it as the amount of light increases. It just increases more slowly than the signal, so the signal-to-noise ratio increases.
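
A quick illustration with Poisson shot noise, where the noise scales as the square root of the signal (the photon counts are made up):

    import math

    for photons in (100, 10_000, 1_000_000):
        noise = math.sqrt(photons)               # absolute noise keeps growing...
        print(photons, noise, photons / noise)   # ...but the SNR grows too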


> RAW photos are by definition not compressed.

There is no such definition. Most cameras’ raw files nowadays are losslessly compressed. The compression being lossless means that the bits that were taken out can be reconstructed identically.

It seems you might be confused as to what lossless compression means?
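
As a minimal illustration of a lossless roundtrip, with zlib standing in for a camera's raw codec:

    import zlib

    data = bytes(range(256)) * 1000           # stand-in for raw sensor data
    packed = zlib.compress(data, 9)
    assert zlib.decompress(packed) == data    # reconstructed bit for bit
    print(len(data), "->", len(packed), "bytes")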


The Photos app supports JPEG XL as far as I’ve been able to tell.


This is from the same team as JPEG XL, and there is no lock-in or overlay. It’s just exploiting the existing mechanics better by not doing unnecessary truncation. The new APIs are only required because the existing APIs receive and return 8-bit buffers.


To display it at all, no. To display it smoothly, yes.


From a purely theoretical viewpoint, 10+ bit encoding will lead to slightly better results even if rendered using a traditional 8-bit decoder. One source of error has been removed from the pipeline.


Ideally, the decoder should be dithering, I suppose. (I know of zero JPEG decoders that do this in practice.)
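
For the record, a toy sketch of what such dithering could look like when reducing 10-bit samples to 8 bits (not taken from any real decoder):

    import random

    def dither_10_to_8(sample10):
        # One 8-bit step spans four 10-bit steps; adding noise of that
        # amplitude before rounding trades banding for fine grain.
        noisy = sample10 + random.uniform(-2.0, 2.0)
        return max(0, min(255, round(noisy / 4)))

    # 513 / 4 = 128.25, so we expect a mix of mostly 128s and some 129s:
    print([dither_10_to_8(513) for _ in range(8)])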


Jpegli, of course, does this when you ask for 8-bit output.


Has there been any outreach to get a new HDR decoder for the extra bits into any software?

I might be wrong, but it seems like Apple is the main player when it comes to supporting HDR. How do you intend to persuade Apple to upgrade their JPG decoder to support Jpegli?

p.s. keep up the great work!


I tried to reach out to their devrel person Jen Simmons here: https://twitter.com/jyzg/status/1763141558042243470

I didn't follow up and I don't know if she read it or understood the proposal.


How does the data get encoded at 10.5 bits yet display correctly with an 8-bit decoder, while also potentially displaying even more accurately with a 10-bit decoder?


Through non-standard API extensions, you can provide a 16-bit data buffer to jpegli.

The data is carefully encoded in the DCT coefficients. They are 12 bits, so in some situations you can get even 12-bit precision. Quantization errors, however, add up, and the worst case is about 7 bits. Luckily, that occurs only in the noisiest environments; in smooth slopes we can get 10.5 bits or so.
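
Back-of-the-envelope, treating the accumulated error as a fraction of the 12-bit coefficient range (the error figures are illustrative):

    import math

    full_range = 2 ** 12
    for error in (1, 3, 32):   # accumulated error, in 12-bit steps (assumed)
        print(f"{math.log2(full_range / error):.1f} effective bits")
    # prints 12.0, 10.4 and 7.0 -- the best, typical and worst cases above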


8-bit JPEG actually uses 12-bit DCT coefficients, and traditional JPEG coders introduce plenty of errors by rounding to 8 bits at intermediate stages, while Jpegli always uses floating point internally.

