It is just a spec for something that is already widely implemented.
Assuming a next-gen PNG would still require a new decoder, they could just call it PNG2.
JPEG-XL already provides everything most people asked for in a lossless codec. If there is any problem, it is its encoding and decoding speed and resource usage.
And not just decoding speed but also encoding speed, with a difference of an order of magnitude. There are some new results further down in the comments in this thread. Had they not been verified, I would have thought it was a scam.
I'm using PNG in a computer vision image annotation tool[0]. The idea is to store the class labels directly in the image [dispensing with the sidecar text files], taking advantage of the beautiful PNG metadata capabilities. The next step is to build a specialized extension of the format for this kind of task.
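For what it's worth, PNG text chunks already make this pretty easy, e.g. with Pillow. A minimal sketch, assuming a JSON payload under a made-up "labels" key (not [0]'s actual schema), with hypothetical file names:

    # Sketch: stash class labels in a PNG tEXt chunk instead of a sidecar file.
    # The "labels" key and the JSON payload are illustrative, not the tool's format.
    import json
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    labels = [{"class": "cat", "bbox": [12, 34, 120, 96]}]

    meta = PngInfo()
    meta.add_text("labels", json.dumps(labels))  # written as a text chunk

    im = Image.open("frame_0001.png")
    im.save("frame_0001_annotated.png", pnginfo=meta)

    # Reading the labels back
    with Image.open("frame_0001_annotated.png") as im2:
        print(json.loads(im2.text["labels"]))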
I hope I am very wrong, but this isn't a given. In the past, reference encoders and decoders didn't concern themselves with speed and resources, but the last 10 years have shown that most reference encoders and decoders now put considerable effort into speed optimisation. And it seems people are already looking at hardware JPEG XL implementations. (I hope, and guess, this is for lossless only.)
I would agree we will see fewer improvements than when comparing a modern JPEG implementation to the reference one.
When it comes to hardware encoding/decoding, I am not following your point, I think. The fact that some are already looking at hardware implementations of JPEG XL means that...?
I just know JPEG hardware acceleration is quite common, hence I am trying to understand how that makes JPEG XL different/better/worse?
In terms of PC usage, JPEG, and most image codec decoding, is done in software, not hardware. AFAIK even AVIF decoding is done in software in browsers.
Hardware acceleration for lossless makes more sense for JPEG XL because it is currently very slow. Per the results the author of HALIC posted below, JPEG XL is about 20-50x slower while requiring lots of memory even after memory optimisation, and about 10-20x slower than other lossless codecs. JPEG XL is already used by cameras and stored as DNG, but encoding resources are limiting its reach. Hence a hardware encoder would be great.
For lossy JPEG XL, not so much. Just like with video codecs, hardware encoders tend to focus on speed, and it takes multiple iterations, or 5-10 years, before they catch up on quality. JPEG XL is relatively new, with so many tools and usage optimisations that even the current software encoder is far from reaching the codec's potential. And I don't want a crappy-quality JPEG XL hardware encoder, hence I much prefer an upgradeable software encoder for lossy JPEG XL and a hardware encoder for lossless JPEG XL.
Lossless JPEG XL encoding is already fast in software and scales very well with the number of cores. With a few cores, it can easily compress 100 megapixels per second or more. (The times you see in the comment with the DPReview samples are single-threaded and for a total of about 400 MP since each image is 101.8MP.)
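If anyone wants to sanity-check the MP/s figure, here is a rough sketch of how one might time multi-threaded lossless encoding with the cjxl CLI from Python. The file name, effort level, and thread count are placeholders, and the flags should be double-checked against your cjxl --help:

    # Rough throughput check for lossless JPEG XL encoding (a sketch, not a benchmark).
    # Assumes a recent cjxl on PATH; effort/thread values are arbitrary examples.
    import subprocess, time
    from PIL import Image

    src = "sample_101mp.png"                      # placeholder input
    w, h = Image.open(src).size
    megapixels = w * h / 1e6

    t0 = time.perf_counter()
    subprocess.run(
        ["cjxl", src, "out.jxl", "-d", "0", "-e", "7", "--num_threads=8"],
        check=True,
    )
    dt = time.perf_counter() - t0
    print(f"{megapixels:.1f} MP in {dt:.2f} s -> {megapixels / dt:.1f} MP/s")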
HALIC achieves almost the same degree of compression tens of times faster. And, interestingly, it consumes almost no memory at all. Unfortunately, that is just how it is.
WebP lossless is close to state of the art and widely available. It's also not widely used. The takeaway seems to be that absolute best performance for lossless compression isn't that important, or at least it won't get you widely adopted.
I don't know that I have ever used lossless JPG or PNG in practice (e.g. I don't think 99.9% of mobile app or web use cases call for lossless). WebP's lossy performance is just not worth it in practice, which is why WebP never took off, IMO.
Are there use cases for lossless other than archival?
I definitely noticed when the Play Store switched to lossy icons. I can still notice it to this day, though they did at least make it harder to notice (it was especially apparent on low-DPI displays). Fortunately, the apps once installed still seem to use lossless icons.
A lot of images should be lossless: icons/pictograms/emoji, diagrams and line drawings (when rasterized), screenshots, etc. You can sometimes get away with a higher-resolution lossy version for some of these if you scale it down, but that doesn't necessarily translate into a smaller file size than a lossless image at the intended resolution.
There's another problem with lossy images, which is re-encoding. Any app/site that lets you upload/share an image but also insists on re-encoding it can quickly turn it into pixelated mush.
The only downside is that WebP lossless requires an RGB colorspace, so you can't, for example, save raw YUV frames from a video losslessly. AVIF lossless does support this, though.
When it comes to metadata, a feature not being widely implemented (yet) is not that big a problem. Support in a few select tools is enough for metadata, so this is an advancement for PNG.
I don't really understand what the new PNG does better. Elements such as speed or compression ratio are not mentioned. Thanks also for your kind thoughts, ksec.
Apart from widespread codec support, there are 3 important elements: processing speed, compression ratio and memory usage. These are weighed together when making a decision (the Pareto limit). In other words, being the fastest, or compressing the best, in isolation does not matter. Treating either one alone as decisive suggests insufficient knowledge and experience of the subject.
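To make the Pareto point concrete, here is a toy sketch that keeps only the codecs not beaten on all three axes at once. The codec names and numbers are placeholders, not measurements:

    # Toy Pareto-front filter over (speed MP/s, compression ratio, memory MB).
    # Higher speed/ratio is better, lower memory is better. Values are made up.
    results = {
        "codec_A": (120.0, 2.10, 20),
        "codec_B": (6.0, 2.15, 1000),
        "codec_C": (40.0, 1.90, 150),
    }

    def dominates(a, b):
        """True if a is at least as good as b everywhere and better somewhere."""
        (sa, ra, ma), (sb, rb, mb) = a, b
        ge = sa >= sb and ra >= rb and ma <= mb
        gt = sa > sb or ra > rb or ma < mb
        return ge and gt

    pareto = [
        name for name, v in results.items()
        if not any(dominates(w, v) for other, w in results.items() if other != name)
    ]
    print(pareto)  # only codec_C is strictly beaten here, so A and B remain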
HALIC is very good at lossless image compression in terms of speed/compression ratio. It also uses a comically small amount of memory. No one has mentioned whether that is necessary or not; however, low memory usage negatively affects both processing speed and compression ratio. You can only see the real performance of HALIC on large (20+ megapixel) images, single- and multi-threaded. An example current test is below. During these operations, HALIC uses only about 20 MB of memory, while JXL uses more than 1 GB.
I had a very busy time with HALAC, and now I've given it a break too. Maybe I can go back to HALIC, which I left unfinished, and do better: stronger compression and/or faster. Or I could make it work much better on synthetic images. I could also add a near-lossless mode. But I don't know if it's worth the time I would have to spend on it.
> In other words, the fastest or the best compression maker alone does not matter.
Strictly true, but e.g. for archival, or for content delivered to many users, compression speed and the memory needed for compression are an afterthought compared to compressed size.
Storage is cheaper than it used to be. Bandwidth is also cheaper than it used to be (though not as cheap as storage). So high quality lossy techniques and lossless techniques can be adopted more than low quality lossy compression techniques.
Today, processor cores are not getting much faster. And energy is still not cheap. So in all my work, processing speed (energy consumption) is a much higher priority for me.
You're right, but aren't you forgetting that for each image, the encode cost needs to be paid just once, but the decode time must be paid many many times? Therefore, I think it's important to optimize size and decode time.
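As a back-of-the-envelope illustration of that asymmetry, here is a sketch with entirely hypothetical numbers:

    # Hypothetical: one 10 MB image, encoded once, downloaded 1,000,000 times.
    encode_seconds = 30.0        # one-time producer CPU cost
    decode_seconds = 0.2         # paid on every download
    size_mb        = 10.0        # paid on every download (bandwidth)
    downloads      = 1_000_000

    total_decode_cpu = decode_seconds * downloads   # 200,000 s of client CPU
    total_transfer   = size_mb * downloads / 1024   # ~9,766 GB of bandwidth
    print(encode_seconds, total_decode_cpu, total_transfer)
    # Shaving a few percent off the file size or decode time outweighs
    # even a 10x slower one-time encode.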
HALIC's decode speed is already much faster than that of other codecs. When you look at the compression ratios, they are almost the same, so there doesn't seem to be a problem there. There are also cases where encode speed is especially important. But I don't think there is a need to spend a lot more energy to gain a few percent more compression and then decode it.
The current champion of lossless image codecs is HALIC: https://news.ycombinator.com/item?id=38990568