Is WebP really better than JPEG? (siipo.la)
424 points by kasabali on June 23, 2020 | 310 comments


Every time I run an image comparison, the webp version looks worse and yet Google insists it's the same quality. It's baffling.

Even if the above were just an individual... bafflement? and not an actual issue, the size savings really don't seem worth the compatibility hassle, the extra manpower/workflow complexity to support 2 formats, the additional storage (and caching) caused by this duplication.

And the above is if it's done RIGHT. 80% of people outside this forum won't understand that you're not supposed to convert from one lossy format to another, and will just convert their jpegs to webp.

Webp seems so pointless. A progressive jpeg with an optimized Huffman table can be understood by a 27-year-old decoder without issues, achieves 90% of the quality/size claims of webp, and can be produced losslessly from your source jpegs. This is without even touching Arithmetic Coding (also lossless, part of the official standard, but poorly supported due to some software patents, even though they have all expired by now), or playing with customized DCT quantization matrices to get more compressible outputs (incurs a generation loss, but produces standard files).
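For reference, the lossless repack described above is a one-liner with jpegtran (the tool shipped with libjpeg/libjpeg-turbo; `in.jpg` is a placeholder filename):

```shell
# Rewrite only the entropy-coding stage: progressive scan script plus
# optimized Huffman tables. The DCT coefficients are copied untouched,
# so the output decodes to pixels identical to the input.
jpegtran -optimize -progressive -copy none in.jpg > out.jpg
```

(`-copy none` additionally drops metadata; use `-copy all` if you want to keep it.)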


You can argue the toss over quality, but JPEG can't do transparency while WebP can. So if you need transparency, you need to compare the size savings to PNG, not JPEG. WebP is the clear winner there.


WebP also supports animation, which is important because GIFs are notoriously large.


WebM (despite being closely related to WebP) can be 5-10x more efficient than WebP.

That's because WebP focused so strongly on being a GIF equivalent that it also dropped all things that made WebM efficient and instead adopted GIF's awfully inefficient architecture (just dumb frames overlaid on top of each other, without motion vectors or predicted frames).

Safari shows how it can be done: it supports silent MP4/H.264 straight in <img>. You get all the ease of use of GIF, an order of magnitude smaller file, and hardware acceleration.

It's unintuitive, but well-compressed videos are cheaper to decode than dumb "animation" formats, because file size differences are so massive that it's cheaper to decompress a small amount of complex data than to chew through vast amounts of poorly compressed data.


There's also the practical matter of network speed. In many common scenarios a 300kb webm can be fully downloaded and playing while a roughly equivalent 5mb gif is paused a few frames in, buffering.


Also, video decoding is cheaper because most GPUs/SoCs support hardware decoding.


This isn't a good assumption to make, many people own cheap phones that lack hardware codecs that can decode videos with modern encodings.


I can't imagine any cheap smartphone SoC without an H.264 HW decoder, because playing video is essential and the phone can record video. They possibly exist in low-feature phones without a camera, but those aren't used for web browsing.


The licenses for codecs sometimes cut into the thin margins on budget phones. A large amount of people outside of the US and EU are stuck on low-end phones, and I'm pretty sure many of the government subsidized phones in the US are in the same boat.


Do people really use Safari?


Yes, lots of them... on iOS.

Also worth noting that all the alternative browsers on iOS are just reskinned versions of Safari as well.


Somewhere between 25 and 50% globally based on your audience.


Lots of folks using Mobile Safari.


As a point of data, Animated PNG is finally (as of ~2019) supported natively among all major browsers.

https://caniuse.com/#search=animated%20png

Edit- I was unaware the recently-announced Safari 14 Technical Preview adds support for WebP too! Making both formats viable for all browsers, finally.


Recently we've been going through the browser stats on our websites, and while globally apparently IE is dying, it's certainly not dead yet. In the specific industry I work in (Heavy Automotive Retailing, eg Truck Sales and Servicing), we see upwards of 30% of our users on browsers <= IE11. All of our sites are getting poor lighthouse scores on performance with the main suggestion to be changing to webp, but that would exclude 30% of our customers.


There is a way to offer both jpg and webp, letting the browser choose.
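A minimal sketch of that (filenames hypothetical): the `<picture>` element lets the browser pick the first source type it can decode, falling back to the plain `<img>` otherwise:

```html
<picture>
  <!-- Browsers that understand WebP fetch only this source. -->
  <source srcset="photo.webp" type="image/webp">
  <!-- Everyone else (e.g. IE11) falls back to the JPEG. -->
  <img src="photo.jpg" alt="A photo">
</picture>
```

Alternatively, the server can inspect the `Accept` request header and serve webp only to clients that advertise `image/webp`.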


Ooh, does that mean that slack supports WebP now?

Update: nope. Maybe another year :(


Animations should just be videos. No reason for an animated image format, an animated image is just a video with no audio track.


My understanding is that webp is video. That is, it's a still frame using the vp8 codec.


It only has VP8's intraframe compression, not VP8's interframe compression, so animated WebP is not a proper video codec and is a different beast to WebM video.


As far as I can tell though, it has to be VP8 and can't be VP9. Maybe I'm mistaken about that, but all the webp literature I've read or skimmed talks about VP8 specifically. So what's the advantage of an animated webp using VP8 over a webm using VP9?


No need for image formats either as an image is just a very short single frame video.


I think this is a very poor comparison to make because it denies the time dimension of video completely, which fundamentally affects how we perceive and use video, and also ignores the fact that still images linger and have different requirements.


The only advantage I can see is where videos are not allowed to autoplay and you want a gif to play something small.


For me it's all about sharing. The website itself usually displays the "video" as if it were a gif, but as soon as you go to share it anywhere, you're sharing a video on platforms that handle it like a video: no looping, seek controls, complicated UI, etc. I just want to post a photo that auto-plays and loops anywhere. I don't care about the internals, but I want all applications to recognize that file as an animated photo, not a video.


Videos can autoplay on browsers if they are muted


An animated WebP is basically a WebM video.

Or let me rephrase, why is WebP as a animated picture / video format insufficient?


Video formats usually take up too much memory: what you gain in efficiency costs you in resources. Conversely, animated WebP is dirt cheap: one buffer only, written over and over.

Then, there are optimizations in WebP to allow fast jumps to a keyframe, even when there's transparency. Video codecs don't allow that, and you can have an arbitrarily long torture sequence of transparent frames that needs to be decoded back when the video comes into view again.

Last, animations are usually low-fps (~10fps): there, video codecs don't perform very well and are basically all keyframes. So the difference isn't as great as one would think.

Oh, and hardware needs a 'reset' between decoding tasks, to reconfigure memory, and decoding can't be parallelized.


Eh. An animated image (an image that contains some sort of video media, whether gif or h264 or whatever) is a hint to the browser to play it automatically and without sound. If everything were true video (say, mp4) you couldn’t make that distinction. As long as the video in the animated video is encoded properly — as real video and not a sequence of frames as gif does — then animated images definitely have their places.


> An animated image (an image that contains some sort of video media, whether gif or h264 or whatever) is a hint to the browser to play it automatically and without sound. If everything were true video (say, mp4) you couldn’t make that distinction.

<video autoplay muted> says the same thing.
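Spelled out as a GIF replacement (filename hypothetical; `playsinline` keeps iOS from forcing fullscreen):

```html
<!-- Autoplays silently, loops, shows no controls: behaves like a GIF. -->
<video autoplay muted loop playsinline src="clip.mp4"></video>
```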


Also, gif only supports one-bit transparency, whereas webp and png allow partial/gradated transparency.


If transparency was the main issue, adding transparency to JPEG sounds easier than coming up with a completely new encoding, especially a lossy one.


How? Do you know how many binaries link to some old version of libjpeg? You're breaking 99.9% of the world and 80% of it is stuff that can't even be recompiled, probably.

In fact, JPEG 2000, which was standardized in, well, 2000, already has transparency support, and 20 years later it still has no meaningful support.


All that is true for WebP too, isn't it? If you cannot recompile a piece of software, how are you going to add WebP support to it?

And you aren't breaking it. In a theoretical world where transparency got added to JPEG, software that doesn't support it will show a fully opaque JPEG (adding transparency in a backwards compatible way isn't rocket science). Compare that to using WebP instead, where software that can't handle it won't even show the image.


It's not too unreasonable to assume that a given ancient program has logic to differentiate between "this is a jpg and I know what to do with it" and "this is something else and I don't know what to do with it", so it will technically handle webp images safely, while it may not safely handle something that claims to be a jpg but apparently is not.


> In a theoretical world where transparency got added to JPEG

It's not theoretical. It's the JPEG XT spec. It adds alpha channel and HDR to standard JPEG in a backwards-compatible manner.


You offer two versions:

* A webp version with transparency, which won't be supported by most clients but the ones which do will support everything

* A JPEG version without transparency, or with "faked" transparency (i.e. baked-in background color), which will be supported by basically everyone but with less quality.

That way, if the client is capable of loading the "good" version it will, and if it can't then it will load the "good enough" version.


You don't have 15 year old binaries linking against an 18 year old version of libwebp.


No; you just have no compatibility whatsoever.


On the web you can do it with SVG.

https://github.com/leni536/trans_jpg
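A sketch of the general idea (not necessarily the linked repo's exact markup, and the filenames are hypothetical): SVG masks use luminance by default, so a second grayscale JPEG can act as the alpha channel, with white opaque and black transparent:

```html
<svg xmlns="http://www.w3.org/2000/svg" width="400" height="300">
  <defs>
    <mask id="alpha">
      <!-- Grayscale JPEG used as a luminance mask. -->
      <image width="400" height="300" href="alpha.jpg"/>
    </mask>
  </defs>
  <!-- Color JPEG, masked to get transparency. -->
  <image width="400" height="300" href="photo.jpg" mask="url(#alpha)"/>
</svg>
```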


That is really cool. I had never thought about (ab)using SVG in that way.

SVG is such an underappreciated technology that is in every browser. Why do icon fonts exist when you could just use SVGs just like you do PNGs and JPEGs? You can even inline them in your HTML so there isn't an additional HTTP request if you want.


Wouldn't it probably be one request per icon instead of just one request for a whole font?


Not necessarily. You can have a single SVG file with multiple 'defs' that you can reference around the page, or just embed the SVG code in the HTML itself (via a templating language) if you're not using too many.
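A minimal sketch of the single-file approach (the ids and shapes are made up): define each icon once as a `<symbol>` and reference it with `<use>`:

```html
<!-- Define once, e.g. inlined at the top of the page. -->
<svg xmlns="http://www.w3.org/2000/svg" style="display:none">
  <symbol id="icon-search" viewBox="0 0 24 24">
    <circle cx="10" cy="10" r="7" fill="none"
            stroke="currentColor" stroke-width="2"/>
    <line x1="15" y1="15" x2="21" y2="21"
          stroke="currentColor" stroke-width="2"/>
  </symbol>
</svg>

<!-- Reference anywhere; inherits the surrounding text color. -->
<svg width="24" height="24"><use href="#icon-search"/></svg>
```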


Great idea, I love 'thinking outside the box' solutions, but doesn't seem to work on Firefox mobile. (Perhaps it's just me.)


Because JPEG 2000 was essentially a patent-covered mess that nobody wanted to touch


Shouldn't those patents be expiring soon?

Edit: Did some more research and the patent risk appears to have passed as of 2016. Still nobody seems to have interest in JPEG2000.


webp has been around since 2010 and has Google's backing. Microsoft tried their own thing (JPEG XR, 2009), the actual JPEG committee announced their work on JPEG XL (with Google backing) a while ago, and Apple switched to HEIC/HEIF as of 2017 (HEIC is like webp, just based on HEVC aka H.265 instead of VP8 i-frames; and it's a patent mess, of course).

There is simply no reason why people in 2016 or now would be interested in a format from 2000 that was a patent minefield until at least 2016.


Except for JPEG the alternatives are also patent minefields...

Are people just more cavalier about the patent risk these days? The problem with JPEG2000 wasn't the patents we knew about, it was the possibility of submarine patents. People were still wary after the GIF debacle. Nobody wanted to be charged $0.05/image after the fact when they've delivered literally billions of images. Plus the courts were seen as very favorable towards patent holders, even when they were acting in bad faith.


It’s too compute intensive


That's kind of scary for something developed in the 90s. It was originally run on Pentiums, K6s, PPC 604s, and the like and it's still too expensive for a Ryzen 7?


Yep. There's a ton of data dependencies, where e.g. you can't begin decoding the next bit until you've finished decoding the current bit. It's all about progressive decoding so there's multiple passes over each group of wavelet coefficients, accessed in a data-dependent sequence. Each wavelet transform involves accessing and interleaving the same 2D arrays in both row- and column-oriented fashions.

These design decisions all made sense when clock rates were exponentiating, but they're all nightmares now that we rely on branch prediction and memory prefetching and superscalar execution units. The codec is simply not a good fit for the computing architectures we have today.


> These design decisions all made sense when clock rates were exponentiating, but they're all nightmares now that we rely on branch prediction and memory prefetching and superscalar execution units. The codec is simply not a good fit for the computing architectures we have today.

Arguably not a good choice for the year 2000, either, considering that all high performance CPUs at that time were out-of-order, superscalar and deeply pipelined.


Images got bigger... it's not that it's too expensive, it's just way more expensive than good-enough alternatives. There are other reasons too, but that's a surprising one.


No.


AFAIK it's pretty standard in DCP files for Digital Cinema.


That was a little of it but the patents expired years ago. The big blockers I saw were basically that it was a very complex format with a specification behind a paywall and no good open-source implementation or test suite. Interoperability was bad until a couple of years ago and it took significant amounts of complex code to make it fast enough to use.

They've subsequently improved that — OpenJPEG is quite good now https://github.com/uclouvain/openjpeg — but probably missed the window for adoption barring a major upset, which is a shame because it's a very powerful codec and has some neat tricks like progressive decoding (imagine if you could have one file in storage and your responsive design simply specified the HTTP range requests to fetch for successively larger resolution images). You could ship it in a browser using WASM, but I think the browsers are — not without cause — reluctant to add new formats and the ensuing security risks without a good reason, and without browser support no format will be more than a niche.


In fact, JPEG 2000, which was standardized in, well, 2000, already has transparency support, and 20 years later it still has no meaningful support.

Every iOS and macOS device supports JPEG 2000… that should be pretty meaningful.


That accounts for significantly less than half the whole market, and less than half the mobile and desktop markets if you split them out. If it were also supported by other OSes and headsets and browsers, bringing it significantly closer to 100%, then maybe we'd have something meaningful.


Well, 2 billion devices is a meaningful number of devices, no matter how you slice it.


That's entirely dependent on context. In the context of "can I use this and expect it to work most the time" it's nowhere close enough. All that matters in that respect is percentage of the whole, and at less than half, it's not really much.

As an example, we could talk about the number of connected IoT devices that are supported and up to date, and it's probably in the billions. But compared to the number of connected out of date and unsupported devices, it's likely inconsequential in comparison by metrics of total numbers, percentage of a whole, and importance (alternatively, total number of unsupported and possible exploitable devices does matter, because of what it implies about how they can be used destructively).


Is there still some licensing issues with jpeg 2000?


Chrome and Firefox don't.


I did some proof-of-concepts ages ago (October 2010) exploring different ways of embedding a PNG alpha channel in JPEG application specific fields to have 100% JPEG compliant files which web browsers could display with transparency. It worked just fine.

If you want transparent JPEGs on your web page that works.

Back then the number of GETs was really important, so stuffing the mask into the JPEG made sense. Now, in our HTTP/3 and QUIC world, that isn't such a big deal. You might be better off just using a CSS mask image.
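The CSS-mask variant would look roughly like this (filenames hypothetical; `mask-image` still wants the `-webkit-` prefix in Chromium-based browsers):

```html
<img class="cutout" src="photo.jpg" alt="">
<style>
  /* A grayscale/alpha PNG supplies the transparency for the JPEG. */
  .cutout {
    -webkit-mask-image: url("mask.png");
    mask-image: url("mask.png");
  }
</style>
```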


clearly we need to dig into past work to find who tried that


Flash could do it


I think the idea is "if you're going to break compatibility, might as well make all the improvements at once".


Do you though? The transparent png use case is largely being replaced by svg.

There are some situations where transparency is legitimately needed, but I'm not sold that it can justify webp and all that comes with it.


vector graphics and raster graphics aren't really comparable.


Why not?

They display an image. Bit of a weak argument to say they don't matter when they're being used to create the same final outcome.


How would you show a photographed image as a svg? Either you would have to "vectorize" the image which would look totally different or you'd create a grid of vectors with different hue values, ultimately representing pixels in an extremely inefficient format.

Raster graphics and vector graphics can't be compared because sure, you could create 2 million vector squares with individual positions, sizes and colors and align it into a 16:9 grid or you could just use a jpg. The latter being a fraction of the size and processing power needed to display it.


Why would you show a photograph as an svg? You wouldn't. That's what jpegs are for.

We're discussing transparency and the use case for it. I don't think I've ever heard of someone wanting transparency in a photo. The only times I see the need is when they're doing something that is better off in a vector format, i.e. png, which is being supplanted by svg, which alleviates the size problems raised about pngs.

I.e. webp is a format solution in search of a problem.

Downvoted, wow, petty.


>I don't think I've ever heard of someone wanting transparency in a photo.

OK, but they do. It's pretty common for someone to cut out an object in a photo and have a transparent background.


>JPEG can't do transparency

JPEG "can't" do depth maps either, and yet it manages to do them just fine.

"Portrait mode" on Android is simply JPEG + greyscale JPEG embedded in metadata.

If anyone really wanted transparency, it is very easy to add it to JPEG.

That said, in practice, transparency is not needed in non-vector / non-generated graphics formats. Nature doesn't really have an alpha channel; photographs certainly do not.

Transparency is extra information, and can be passed along as such.


The problem is always standards. I want transparency to work correctly everywhere, not just in one application and broken everywhere else.

Even if transparency were added to JPEG right now, it would take some time to become available everywhere, and you would have tons of legacy devices showing broken transparency.

The advantage of a new format that supports transparency from its inception is that every device that is compatible with it will display it correctly. So there is no issue with some devices supporting the format partially.


The larger picture is that you can use PNG or SVG anywhere you need transparency, which are also better suited for the content that has transparency.

JPEG is optimized for photos, which don't have transparent parts.

SVG and PNG are optimized for graphics.

So, if you need transparency, WEBP is probably not the format you'd choose anyway.


> The larger picture is that you can use PNG or SVG anywhere you need transparency, which are also better suited for the content that has transparency.

It is pretty common to have anime characters where you remove the background to use as reactions, and while you can use PNG in those cases it makes the images huge. WebP is ideal there; I've even seen it used on some image boards (especially since it also supports animation, another thing that is popular).

Just because you can't think of a use case doesn't mean there isn't one.


And that is the reason why I don't like Open Media Alliance in general.

The best JPEG encoder performs as well as, if not better than, the best WebP encoder. On top of that, with a JPEG repacker [1], JPEG file sizes could easily be 20% smaller. If you have to support a new format for relatively little benefit, why not just support the repacked files instead?

[1] https://github.com/google/brunsli


Absolutely. Simply unscrewing the Huffman stage from the bottom of JPEG and replacing it with a more effective, modern lossless stage kills webp's gains and does not incur generation loss - the image is pixel-accurate to the source jpeg.

You'll probably know this already, but Brunsli made it into the JPEG XL standard. I'm quite happy about it!


JPEG XL uses the same new ANS coding as e.g. zstd: https://en.wikipedia.org/wiki/Asymmetric_numeral_systems


What has AOMedia to do with this? That alliance is what has made AV1 (and thus AVIF), which is clearly superior to JPEG. AOMedia had nothing to do with WebP (though Google is a member, and WebP is based on VP8, and AV1 is based in part on VP9 which was the successor to VP8).


Is it better? The AVIF examples in the article look even worse than the WebP ones


At 30% reduced size. The true comparison would be at the same file size, which format is able to squeeze more perceivable information into the same bucket?


Indeed. The article's comparisons seem a bit useless - neither the quality nor the filesize is kept constant.


I thought the JPEG people were supposed to have JPEG XL [1] out by now, and it was supposed to be the open standard for JPEG's next generation. Anyone know what's happening there?

[1] https://en.wikipedia.org/wiki/Joint_Photographic_Experts_Gro...


Yes, here's an update: https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...

TL;DR: We're now at "Draft International Standard" stage; soon we'll enter "Final Draft International Standard" stage and at that point the bitstream is effectively frozen and adoption can start. The officially ISO-published International Standard will take until first half of 2021, but the codec should be ready to use before that.


Do you mean the Alliance for Open Media? WebP was created solely by Google, years before AOM was founded.


It's the same situation with VP9. It's supposed to be a competitor to HEVC, but in reality lack of serious psychovisual optimizations in any of the publicly available encoders make it at best on par with x264 compression-wise, while being significantly more computationally expensive.

Its only appeal is that it's royalty-free, but since all devices that support VP9 decoding also support h.264, is it really worth it?


Using VP9 recently has really made me appreciate how great x264 is both in terms of quality and speed.


The quality of encoders has a much bigger effect on whether or not a codec is 'good'. LAME, mozjpeg, and x264 are great examples.

MP3 is competitive with AAC and far more compatible, if you use LAME. Same with JPEG and WebP, or x264 and VP9. All 3 of those encoders deliver higher quality AND better encoding speed than their competitors.

Switch encoders before you switch formats.


You are comparing an encoder (x264) with a format (VP9). While there is a reference encoder for VP9 (libvpx). x264 has been one of the most heavily optimized encoders whose development was sponsored by many different organizations, whereas libvpx did not see the same kind of love and most of the work came from Google's employees (with some from outside, I'm sure).

There exist other VP9 encoders, but all for specialized purposes.


> x264 has been one of the most heavily optimized encoders whose development was sponsored by many different organizations

No, it did not.

> whereas libvpx did not see the same kind of love and most of the work came from Google's employees.

You're getting your facts wrong: x264 was developed by a very small open source community, with at most 5 developers on it, on their free time, while libvpx got quite a few people from Chrome Media Team paid to work for years on it.


What’s interesting is that Google’s Lighthouse tool for measuring site speed pushes for “Next Gen” image formats like WebP. But I often find that WebP files are larger than jpg, too. Yet Lighthouse recommends it. And, of course it’s a great idea to follow Google’s advice since they include the Lighthouse score in their ranking algorithm.


> And, of course it’s a great idea to follow Google’s advice since they include the Lighthouse score in their ranking algorithm.

AFAIK they don't? Some of the metrics reported in Lighthouse (Largest Contentful Paint, First Input Delay, Cumulative Layout Shift) play a role in ranking, but the score itself is meaningless.


You're both wrong, but only slightly. It's been announced that those metrics will contribute to ranking in the near future. Search for Core Web Vitals.


You are, of course, technically correct, which is the best type of correct.

There's been some level of performance weighting for a while (but only very slight and mobile-only), but I don't actually know what metric they were using for that (this was from ~2018, before any of those metrics landed in Blink).


The irony is that the Lighthouse site itself scores really badly! Can you accept its advice?


Idk. I optimized the end-user facing part of my website heavily; the backend part (which is used only by a few people) I don't really care about. I didn't even minify the JS.


And then there's times I want to save an image, and can't convince the web site to give me a jpeg, so I have to convert it again.


> Every time I run an image comparison, the webp version looks worse and yet Google insists it's the same quality. It's baffling.

Could you share your comparisons? From the ones I've seen, it looks like WebP significantly outperforms JPEG encoders[0].

[0]: https://wyohknott.github.io/image-formats-comparison/#endeav...


I also don't get why people insist on the same quality with lesser size. How about more quality and the same filesize?


Because the files already looked fine for the most part. Assuming the JPEG is sensibly encoded the human eye probably won't be able to detect the quality improvements in a different algorithm.

So the only place they can target is where JPEG starts to break down: When you've cranked the quality knob down to 20 and you're seeing macroblocks and other artifacts in the image. But this is a niche use case as network speeds continue to improve every year.

The worst part is that they have to compete with free, and that's never easy, especially if you're talking about "well, it's still distorted, but the distortion is less displeasing to the eye", also you need to configure some pain in the ass licensing system and work out how to do the payments.


> people insist on the same quality with lesser size.

Because a great proportion of people browsing the web have slow connections and/or data caps.

> How about more quality and the same filesize?

We're already able to get acceptable quality.


>Because a great proportion of people browsing the web have slow connections and/or data caps.

That doesn't matter one bit. See "webpages are doom".

>We're already able to get acceptable quality.

Apparently not when some examples are worse.


Maybe it's just me but I couldn't tell the difference between the comparison images in the article without zooming in a bunch.


If you can get one then you can get the other, for any compression format that allows varying compression ratios.


Part of it is about quality. These new formats now support 10+ bit color depth (which is critical for modern displays and for HDR) and full 4:4:4 chroma (i.e. no subsampling, to preserve pixel-level color detail where needed). Many apps/services are currently evaluating or already migrating to AVIF or HEIC because these formats unlock those improvements in image quality, even without a major reduction in file size.


PNG already exists.


The answer is simple: money


I wonder why nobody mentioned Patrice Bellard's BPG[1]. It is based on HEVC and can be supported in any browser via a Javascript. From his website:

  BPG (Better Portable Graphics) is a new image format. Its purpose is to replace the JPEG image format when 
  quality or file size is an issue. Its main advantages are:

  * High compression ratio. Files are much smaller than JPEG for similar quality.
  * Supported by most Web browsers with a small Javascript decoder (gzipped size: 56 KB).
  * Based on a subset of the HEVC open video compression standard.
  * Supports the same chroma formats as JPEG (grayscale, YCbCr 4:2:0, 4:2:2, 4:4:4) to reduce the losses during the 
  conversion. An alpha channel is supported. The RGB, YCgCo and CMYK color spaces are also supported.
  * Native support of 8 to 14 bits per channel for a higher dynamic range.
  * Lossless compression is supported.
  * Various metadata (such as EXIF, ICC profile, XMP) can be included.
  * Animation support.
[1] https://bellard.org/bpg/


It's great in terms of compression (close to half size of WebP for comparable quality), but:

• In countries with software patents it's illegal to use BPG without a patent license for the H.265 codec, and that patent pool is a mess. Someone needs to do BPG with AV1 payload, or just wait for browsers to finish implementing AVIF.

• JavaScript adds significant latency. Browsers request native images before running any JS, so even an infinitely fast JS polyfill already starts from a losing position. On top of that, low-end devices are likely to spend more time and energy on running the JS decoder than on downloading a larger JPEG.


can be supported in any browser via a Javascript

I think you answered your own question right there.

I'm not going to use a dumptruck to bring a single sack of sand to my garden.


For an image gallery, the balance could be the opposite: a small (56k) download of a decoder allowing it to show many large (a few megs each) pictures.

I wonder how good the performance of that decoder is, though; if the decoding delay is much longer than the network transfer delay, the approach becomes much less appealing.


It's also not common to need a ton of huge images all at once — in almost all applications you're either displaying thumbnails and JPEG is fine and will load faster thanks to the browser's preloader getting those requests in faster than the JavaScript can run or you're looking at something like pages where the latency is easily hidden by preloading.

That means that in most cases the main driver is lower network transfer and you need to be serving a LOT of images to outweigh the cost of having to support something new and complicated versus something as widely tested and supported as JPEG.


can is not the same as must ... there is absolutely the possibility of adding support as a binary implementation, especially with the relatively short turnaround of evergreen browsers. IE11 and Safari are the last of the old guard on this, and mobile devices have a 2-3 year burnout in terms of support.

Not to mention it could be feature-detected via browser, JS and other means as an interim solution. Also, I'm pretty sure the implementation is wasm with a very thin JS shim; at least that would be my presumption, as I'm not familiar with this format/project.


It's extremely ironic then that IE11 and Safari are the only browsers to support HEVC, the rest didn't bother because of the licensing issues (which BPG inherits) severely limiting any chance it has of ever being natively supported.

AVIF however may have a chance.


That's a great and accurate metaphor. Maybe it's not a good metaphor, since those compare dissimilar things with common attributes. So, this is just accurate.


Those are not the key features :(

Key questions: (1) Is it encumbered by any patents? (2) Does a formal description of the algorithm exist, to allow for independent implementations? (3) Does an MIT/BSD or at least LGPL implementation exist? (4) Has it been submitted for standardization?

Technical merits are of limited value if you can't deploy them.


Hard to imagine HEVC and unencumbered in the same sentence...


Apparently LGPL, which, while not ideal for redistribution, should be enough for general usage.

https://github.com/mirrorer/libbpg


Relying on JavaScript just for images is a terrible idea.

That said, I do think it's a neat project and can be used if there are proper fallbacks to formats that are natively supported.


> I wonder, why nobody mentioned Patrice Bellard's BPG[1].

Isn't it Fabrice Bellard?


Oops, sorry Fabrice, wherever you are.

@als0: Thanks for pointing that out.


It is


I just have to re-affirm that I appreciate the lower-quality render path of HEVC (blurry) more than the default for JPEG and others (a blocky mess). I really don't visually notice the quality reduction at much lower quality levels/file sizes than with comparable blocky formats.

That's not to say it isn't lower quality; when looking in detail you see it. But for a lot of use cases (and in video) it's a much better experience in general.


To be fair, decent JPEG deblocking should have been a solved problem already and included in decoders, see for example:

https://github.com/victorvde/jpeg2png


Very cool... though GPL is pretty much a non-starter for getting it into widely used applications (namely browsers and Windows).


Isn't JavaScript single-threaded? It seems a terrible way to do image decoding.


Of course any format can be decoded in js or wasm, that’s not an advantage and doesn’t tell us anything. The most important thing for a format is probably vendor buy-in. Why would anyone choose this over HEIF, which also uses HEVC (optional), is backed by MPEG (like it or not, MPEG represents the industry), and has vendor buy-in from Apple and Google? (I know it’s not supported in the browsers, at least not yet, but you can also use a library.)

Also, the latest news entry:

> (Apr 21 2018) Release 0.9.8 is available

Edit: Spoke too soon. “Official” HEIF JavaScript port measures ~500-600k gzipped, so 56k gzipped is an advantage.

https://github.com/nokiatech/heif/tree/gh-pages/js


> MPEG (like it or not, MPEG represents the industry)

Not so much anymore. The licensing fiasco with HEVC has reduced MPEG's relevance, particularly for web video. Here's what Leonardo Chiariglione, founder and chairman of MPEG, says about it:

https://blog.chiariglione.org/a-crisis-the-causes-and-a-solu...

https://blog.chiariglione.org/stop-here-if-you-to-know-about...

And they're shaping up to make VVC (https://en.wikipedia.org/wiki/Versatile_Video_Coding) licensing just as bad.

Leonardo says there is no longer a united MPEG:

https://blog.chiariglione.org/a-future-without-mpeg/

Why would anyone waste their time on a format with complex and uncertain licensing when they can just implement AV1 with its simple, royalty-free licensing?


Also BPG predates HEIF.


I think BPG should be standardized, since it provides very good image compression, better than WebP's.


There's already a standardized version of BPG: HEIC is based on the same principle of using the HEVC video codec to store still images.

https://en.m.wikipedia.org/wiki/High_Efficiency_Image_File_F...

Windows, OSX, iOS and Android have support baked in already


This is kind of neat, but when I think of "HEVC-based image codec" I think of Apple's .heic format. Unfortunately I didn't see a high-level comparison to HEIC on Bellard's site, so I'm not sure what the advantages of BPG are. Wouldn't I want to use the more common format?


BPG came before HEIF. It was a great proof of concept, but there’s little reason for it to exist now. BPG was an impressive one-man spec; HEIF is based on input from OS developers, camera makers, IC designers, display manufacturers, etc.

HEIF is just a generic image container format. It’s almost identical in structure to a .mp4/.mov/.av1 file (follows ISO BMFF) and can be parsed in an identical way using a simple tree structure. That’s a perfect fit for wrapping a single video I-frame as an image, the basis of all of these new image codecs.

Note that video I-frames can often only be properly rendered using metadata from outside the bitstream, such as HDR characteristics, color profile, or orientation. Sharing that metadata structure with the codec’s canonical video format is the only way to be forward-compatible.

A HEIF that wraps AV1 frame(s) is an .avif

A HEIF that wraps HEVC frame(s) is a .heic

Codecs will change, but the HEIF container is probably the last bitmap file format that we’re going to need for decades.


It's not Apple's format, it's MPEG's.


I just relaunched a website that makes extensive use of photographic collages that necessitate alpha backgrounds. They were previously shipping "retina-grade", multi-megabyte images all over, a typical page load could easily reach 25mb.

I managed to refit it with `<picture>` tags using WEBP as well as JP2. The latter was a great deal of trouble, it appears that "the community" (notably Gatsby and Contentful) are very happy to talk about the benefits of WebP but conveniently ignore that Safari is the most important mobile browser, and Safari desktop is not insignificant.

I bring this both to point out that there is a very valid reason to use WebP over JPEG — alpha blending — and that this entire field is still a gigantic mess.


On iOS, any screenshot taken while iOS background blur is active will balloon from 0.15MB to 15.0MB, because iOS uses PNG for screenshots and blurred backgrounds are apparently irreducible by PNG.

Does the WebP format permit bounded areas of an image to be represented at lower fidelity with a smooth blur, so that the blurred-background effect can be stored and retrieved using fewer pixels and a blur algorithm rather than more pixels and no blur algorithm? Do PNG or HEIF (h265) support this? Does any image format support bounded areas of lower fidelity?

The tricky part is that this implies that some areas of the image should intentionally be reproduced as ‘lossy’ and ‘lower fidelity’ and so forth, which is precisely correct - but goes against the grain of image encoding in the past.
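As a rough illustration of why such content balloons in PNG: PNG's final stage is DEFLATE (zlib), so raw zlib ratios are a crude proxy. This is only a sketch, and the random bytes stand in for the many near-but-unequal pixel values that a blurred, dithered background produces, which DEFLATE can't exploit:

```python
# Crude proxy for PNG compressibility: PNG's last stage is DEFLATE (zlib).
# Repetitive content shrinks dramatically; noise-like content barely shrinks.
import random
import zlib

def ratio(data: bytes) -> float:
    """Compressed size as a fraction of the original size."""
    return len(zlib.compress(data, 9)) / len(data)

flat = bytes([128]) * 100_000                       # solid color: very repetitive
gradient = bytes(i % 256 for i in range(100_000))   # smooth ramp: still repetitive

random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(100_000))  # noise-like detail

print(f"flat:     {ratio(flat):.3f}")     # tiny fraction of the original
print(f"gradient: {ratio(gradient):.3f}") # also compresses extremely well
print(f"noisy:    {ratio(noisy):.3f}")    # near (or above) 1.0: no savings
```

Real screenshots with iOS background blur sit somewhere between the gradient and noise cases, which is consistent with the 100x size blow-up described above.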


I did some reading of the PNG specification and it may be possible to take advantage of Adam7 interlacing (8 passes total) and scanline filtering (several methods) to write out an image where the scanlines of 'known to be blurry' areas contain only 1 pass of image data and 7 passes of highly-compressible scanlines with filters that generate blur at decode-time.

Doing this formally at scale would require the compositor and the encoder to cooperate, as the encoder would benefit greatly from having access to both the 'blurred' and 'unblurred' areas without the blur filter having been applied to the former, as it could then construct low-fidelity, low-bandwidth, visually pleasing blur for the 'blurred' segments.

This exceeds my ability to write PNGs by hand and it certainly exceeds the bounds of what most people think 'an encoder' should be capable of doing, but at least it presents a path forward. I'll post to HN someday if I ever somehow manage to do this.


HEIC (HEVC-based image) and AVIF (AV1-based image) both support a lossless mode that does efficient compression of blurs and gradients, without losing any visual fidelity. They're good candidates for screenshots in the future. Lossless HEIC is often 50% of the size of the equivalent PNG.


HEIF, AVIF and JPEG XL all support multi-layer images, so you could e.g. encode the background in a lossy layer (which works well on blurry stuff) and foreground in a lossless layer (which works well on screenshots).


At that point you might as well use something like PDF, where you can choose exactly what image format you need for each region. Put a JPG as the blurred background, then put a transparent PNG as the foreground for text and buttons in your screenshot.


macOS screenshots appeared to be PDF natively for the first few years, which made sense since their window compositor was operating what appeared to be a PDF canvas in-memory. They're still clearly using a PDF-like canvas, since there are certain windows that aren't included in screenshots — but what's underneath them is — which is not possible without a layered canvas somewhere. I think after the first decade they stopped offering .PDF screenshots and switched to .PNG now, and based on other replies alongside yours, it's likely they'll be replaced by HEIF soon, but it bodes well for improvements here that their screenshot engine has access to the original layers.

(This isn't just relevant to Apple users — imagine if Photoshop's PNG/HEIF encoder could export blurred segments with lower byte density than unblurred segments, for example. Folks generally are not used to thinking about fidelity-sensitive image encoding, and that's why this is so interesting to me.)


The only image format I can think of where you can encode a blur applied to some part of the image is SVG


WEBP support has been added to the next version of Safari (14).


> WEBP support has been added to the next version of Safari (14).

Good to hear. But that still means it'll be 1½ years before enough users in the wild are on iOS 14 to make the change without breaking sites for a large number of people. At least, from what I read, iOS upgrade adoption is very quick compared with other platforms.

WebP is something to look forward to on my company's internal projects, but externally we still have to support IE11, as it's still very widespread in the medical field, and I haven't seen the user numbers budge on that.


Safari users seem to update very quickly (based on stats about iOS and MacOS update timelines). I've already got some desktop-oriented websites that are WebP only, and once Safari supports WebP on iOS, I'll finish transitioning them all.

The space savings are quite nice for my bandwidth bills (my websites don't have video, so I estimate saving about 20% in bandwidth costs, or approximately $120/month across all of them).


I wouldn’t be able to view any of your sites as I’ve disabled webp in my browser. It always looks blurry to me and if I save an image I can only view it again in a browser.


Most of the images are converted losslessly, so they are pixel-for-pixel identical. Some are lossy, but only with conversion settings that make them essentially identical. So far no one has ever been able to tell.


IE11 handles the picture element gracefully (it just renders the fallback img inside), so it is possible to use WebP and have a fallback to the PNG or JPEG, and since I already use picture for multiple image sizes I will support WebP with fallbacks for a while...
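A minimal sketch of that fallback pattern, as a tiny server-side template helper (names are illustrative): browsers that understand `<picture>` pick the first `<source>` whose type they support, while older browsers like IE11 ignore the wrapper and simply render the inner `<img>`.

```python
# Sketch: emit a <picture> element offering WebP with a JPEG fallback.
# Supporting browsers use the WebP <source>; everything else renders the <img>.
def picture_tag(basename: str, alt: str) -> str:
    return (
        "<picture>"
        f'<source srcset="{basename}.webp" type="image/webp">'
        f'<img src="{basename}.jpg" alt="{alt}">'
        "</picture>"
    )

print(picture_tag("hero", "Product photo"))
```

The same pattern extends to multiple sizes by adding `srcset` widths to each `<source>`.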


Yes, and I submitted it on HN.

https://news.ycombinator.com/item?id=23614193


!!!!


> I bring this both to point out that there is a very valid reason to use WebP over JPEG — alpha blending — and that this entire field is still a gigantic mess.

Yeah, you can do manual alpha masking in webkit, but it's freaky and stupid.


Full blending, or just masking? i.e. can alpha values take on the full range from 0-1, or are they limited to full on or full off?


Looks like mask is standard now. In Webkit it is behind the -webkit-* prefix, and it only implements mask-image (which is what you want anyway).

-webkit-mask-image does exactly what you're looking for (poorly) :+)

The other way to do it is SVG, but if you want that to be data efficient, you generally end up with three HTTP requests at two levels for a single image, or you have base64 and it has to be gzipped or it's larger.


What was the result of the switch, how heavy is a full page load now?


2.44mb, down from 22.2mb (just checked). And the new site has more, even larger images. https://www.prior.club

Still something like 5x what I would recommend to most folks, but there’s just no getting around a site like this being very photographic.

(Edit) I can’t attribute all of this to next-gen images, also doing lazy load and other tricks.


Contentful person here - what could we do to make your experience better?

Feel free to shoot me an email if you don’t want to respond here. rouven _at_ contentful _dot_ com


Safari Mobile is a broken mess not worth supporting.


I think by the nature of being the second most used browser, it's worth supporting. You don't have to like supporting it however.

https://www.w3counter.com/globalstats.php


It's impossible to test for without buying expensive hardware, so I'll pass.


A used iPod Touch is $35, the same price as a Raspberry Pi.

Is a Raspberry Pi "expensive hardware"?


Right, I'll put "Designed for used iPod Touch" in the footer then.


If the browser is the same what difference would it make?


Use a service like browserstack


Shades of IE6 Stockholm syndrome.


Very different problems.

Many ISVs were stuck supporting IE6 for so long—long after it went into single-digit percentage usage—not out of some vague fear that someone, somewhere still used it; but because the particular moribund enterprise clients that they wanted to sell into still used it. (Otherwise, IE6 would have been just another irrelevant minority browser, like Opera.)

Mobile Safari, meanwhile, is "still" used by 25% of people; but more importantly, iOS is used by 26% of people (52% in North America!), and those people can't actually get any other renderer than WKWebView, whether they use Safari or not.


The IE6 problem is the same reason I'm currently stuck supporting IE11. The vague fear that we don't want to cut off like 3% of our users.


But it is the only browser on iOS (even if you install another browser, under the hood it MUST use the safari rendering engine, so all browser use on iOS is really Safari). Not supporting means you are not supporting any iPhone or iPad user. You might be anti-apple in your personal technology choices, but can you be anti-apple for whoever you work for?


If you're outside of the US then the answer can easily be "yes". I don't recommend it though.


Since Safari (and WebKit) are the only available Web-rendering engine for iOS and iPadOS (and the default one for MacOS), it's kind of important to be able to at least render usefully on it.


Said that about IE5 in 1999. Didn't work out for me.


From a business point of view, it makes sense to waste a few days once in a while to support it.

But sometimes you can't support it without huge hacks like implementing everything on the CPU with WebAssembly. For example, MediaSource is disabled in Safari on iPhones (but has been enabled for a few months now on iPads). I think the only reason is to force developers to publish apps in the App Store. Which is a pain.


If Safari was your main browser during your web development you wouldn't have any issues would you?

Realistically people should be using either Safari or Firefox during their development. Then check for chrome compatibility after.


Pretty sure you can just use Firefox and then check in Safari and Chrome. It's pretty hard to make a website for Firefox that will be broken in any modern browser...


What problem exactly do you have?


Unfortunately, targeting the same SSIM makes this test basically bullshit. It would be far better to make images that are the same size, and then we can use both our eyes and various objective metrics to compare the images.

The reason for this is basically that most codecs do not target SSIM internally. They're using their own algorithms to determine how to allocate bits, so two images with the same SSIM may look better or worse depending on what codec is used. Many modern codecs (e.g. from x264 onwards) deliberately take approaches that lower the score of the result on objective metrics but usually look better to the human eye.

This exact issue was originally highlighted on the x264dev blog: "How to cheat on video encoder comparisons" #3: "Making invalid comparisons using objective metrics". [1]

If you really want to compare image codecs, I'd look at one of the many comparisons from this family on Github. [2]

In my judgment WebP is clearly better than Mozjpeg for most images.

[1] https://web.archive.org/web/20141103202912/https://x264dev.m...

[2] https://wyohknott.github.io/image-formats-comparison/#abando...
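For concreteness, a minimal sketch of what "targeting the same SSIM" actually computes. Real tools use windowed, Gaussian-weighted SSIM; this global, single-scale version on a grayscale patch is just to show the formula, and is not what any particular encoder optimizes internally:

```python
# Single-scale SSIM on two equal-size grayscale patches (lists of 0-255 ints).
# Standard stabilizing constants C1=(0.01*L)^2, C2=(0.03*L)^2 with L=255.
def ssim(x, y, L=255):
    n = len(x)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = sum(x) / n, sum(y) / n                      # means
    vx = sum((p - mx) ** 2 for p in x) / n               # variances
    vy = sum((q - my) ** 2 for q in y) / n
    cov = sum((p - mx) * (q - my) for p, q in zip(x, y)) / n  # covariance
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

a = [10, 20, 30, 40] * 4
print(ssim(a, a))                    # identical patches score exactly 1.0
print(ssim(a, [p + 5 for p in a]))   # a small brightness shift scores just below 1.0
```

Because different encoders allocate bits by their own internal heuristics, two images that tie on this number can still look very different, which is the objection above.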


This. JPEG has way too many artefacts on naturally smooth areas (like sky) to be ever considered a pinnacle of lossy image compression. WebP or not, but some adaptive technique must be the future, if there is any space for lossy compression in the future at all.


This should be higher up. siipo.la's benchmark is anecdotal evidence, the image formats comparison shows crisper edges and smoother gradients for WebP even at large filesizes. For small files, mozjpeg can't compare.


In my opinion, the true contender to JPEG is JPEG XL [1], which has better quality than even AVIF at >0.5 bpp (bits per pixel) [2].

But the truly amazing next-generation VVC manages to compress images with an even better ratio than JPEG XL.

Both JPEG XL and VVC are expected to be finalised in July/August. JPEG XL will be royalty-free. VVC, as usual, comes with lots of unknowns.

[1]https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...

[2] https://medium.com/@scopeburst/mozjpeg-comparison-44035c42ab...


VVC is definitely not going to be royalty free, and it's unlikely its intra coding will compete with JPEG XL for the same target (relatively high bitrate high quality images).


Some data from when we deployed WebP generation: https://blog.cloudflare.com/a-very-webp-new-year-from-cloudf...


JPEG XL is worth mentioning here. It includes a JPEG1 repacker (Brunsli) that saves about 20% and allows recovering the original file exactly. All that existing JPEG1 content isn't going anywhere, so that seems good. It has some other neat features like the ability to mix a DCT-based codec with lossless (modular) mode in an image. Its lossless/modular mode (successor to FLIF/FUIF) is interesting on its own.

It can be illuminating to stretch codecs a bit past the visually-indistinguishable threshold, both to better nail down where that threshold is and to see how annoying the artifacts you end up with are. Lots of comparisons you can do; this is a fun one: https://encode.su/threads/3108-Google-s-compression-proje%D1...

At very few bits-per-pixel like the first (bridge) image, everything has some artifacts. On that image AVIF tends to just blur lower contrast areas, whereas HEIC produces some visible ringing (faint lines that weren't there before) around the bridge. Unlike JXL, AVIF and HEIC both have spatial prediction (extend pixels above/to the left of this block in whatever direction), which is particularly helpful with straight sharp lines like the bridge image happens to have. I wouldn't put too much stock in results on that one image, but I do like being able to judge results subjectively.

(FWIW, a presentation on JPEG XL suggested they were targeting the equivalent of ~2bpp JPEG quality[0], so perhaps performance in this super-low range wasn't as much of a priority.)

Anyway, I hope both these formats get wide support, because they each have clear benefits in some application: JXL's include the upgrade path for JPEG1 content and the multiple modes, and AVIF may be able to salvage a bit more quality at super low bitrates. AV1 (on which AVIF is based) has wide industry backing, and the JPEG XL effort is also going for a royalty-free codec and has support from Google, so hopefully they'll actually be usable relatively soon.

(Although HEIC's images are OK and it has Apple's support already, it's patent-encumbered, limiting what you or I can do with it.)

[0]: https://www.spiedigitallibrary.org/conference-proceedings-of...


> Note that Mozilla somewhat walked back from this and implemented WebP support for Firefox in 2019

I wouldn't call the decision to support a format 6 years after first evaluating it a walk back, it's a concession that the format has been adopted by the industry. If the most popular browser supports WebP and sites are using it, it makes sense that you should support it.


WebP also supports animation, and is clearly more efficient than GIF.


What's the benefit of WebP over something like h264? Most sites today just convert uploaded GIFs to MP4 or GIFV.


Or just use .webm without audio.


Webm is a container, not a codec.


My results with libvips have been way more encouraging than this post suggests with MozJPEG and the WebP reference encoder!

Here's a MEGA folder with my example files so you can compare for yourself. Sorry I know this host might be blocked in many places, but every image host I tried (imgur, postimg) recompressed my uploaded files negating the fairness of the test. If anyone has a suggestion for a better host I'm happy to re-upload.

https://mega.nz/folder/JsUXWIqI#xYIohBdiVKrGYTi-BeENEA

e: another host: http://www.filedropper.com/libvipswebpvsjpeg

My random sample is a black-and-white JPEG photograph vs its WebP equivalent. The results are in line with my results for many other images of different types, like other black-and-white photos, color photos, clean computer graphics, screenshots of UIs and media/games, etc etc. The original image for this test can be found here: https://en.wikipedia.org/wiki/Sand_Hill_Road

Both test images were output by libvips after loading the original JPEG. The libvips-generated JPEG is 1.22MB, smaller than the 1.26MB original, but the libvips-generated WebP is 944KB! I included two screenshots of each libvips-generated version opened in a same-size window too, both much smaller than the full res. Tabbing between the two on my PC does not visibly change the image on screen at all.

I don't entirely enjoy WebP just because the software support is still spotty, but I haven't been able to argue with the compression benefits!
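Incidentally, one quick way to check what a host or pipeline actually handed back (e.g. whether an upload was silently transcoded, as imgur and postimg did above) is to sniff the file's magic bytes. A stdlib-only sketch:

```python
# Sniff an image's real container format from its leading bytes -- useful for
# verifying that a host or pipeline hasn't silently re-encoded a file.
def sniff(data: bytes) -> str:
    if data[:3] == b"\xff\xd8\xff":                       # JPEG SOI marker
        return "jpeg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":                  # PNG signature
        return "png"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":     # RIFF container, WEBP fourcc
        return "webp"
    return "unknown"

print(sniff(b"\xff\xd8\xff\xe0" + b"\x00" * 16))          # jpeg
print(sniff(b"RIFF\x24\x00\x00\x00WEBPVP8 "))             # webp
```

The extension on the downloaded file proves nothing; the header does.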


Extremely subjective, personal data point: I've been running with an extension (https://bandwidth-hero.com/) that compresses all images (including jpeg) and renders them in webp.

I'm a typical social networks browser, so among the images I see a high majority of them don't need high quality. I'm running the extension with max compression but still keep colors: compression artifacts are clearly visible, but in my opinion they're not a problem. Looking at /r/all right now, there are 2 images that "deserve" to remain at the original resolution, the rest is typical pictures of text/memes that don't need all the bytes they currently have.

The result: overall I saved 80% of data, just with images.

In my case it's not so much a debate of JPEG vs WebP, but a debate of whether it's ok to keep images _on the web_ in their unoptimized form the way they are today, to which of course my answer is no. We don't need high quality images of a Facebook screenshot when compressing it to 20% of its original size can convey the same information with no subjective loss.

I do agree on the unwieldy nature of WebP though, it's still mostly a read-only format right now and turns the web into a consumption platform


How does an extension resize an image without downloading it? Can it selectively read bytes from the stream?


It uses a proxy server as said on the landing page. That said, the service is shut down apparently so you have to self-host your own proxy server. That's not too useful for me so I uninstalled the extension.


I self-host my own proxy server on my VPS, so I'm hoping said server is close enough to the origin to be useful in the overall reduction of transferred bytes


I suppose you could click on the provided link and find out?


I'm surprised this doesn't include Google Guetzli (https://github.com/google/guetzli) in the comparison. In my experience trying to optimize product images for an eCommerce site, this provided the best compression. Yes, it's ridiculously slow, but for encode-once-transmit-often scenarios, it's perfectly usable.


Did anybody really find any reason to use that?

https://www.pixelz.com/blog/guetzli-mozjpeg-comparison/

"MozJPEG files had fewer bytes 6 out of 8 times

MozJPEG and Guetzli were visually indistinguishable

MozJPEG encoded literally hundreds to a thousand times faster than Guetzli

MozJPEG supports progressive loading"


I am compressing 200x200px and 800x800px images. For those sizes, while the encoding speed is slow for real-time use cases, for product images it's completely irrelevant - anything under 1 minute/image gets a pass, and going lower generally isn't better (or worse). I'm looking at absolute image quality/byte, and it's even more niche than that - it's the bytes needed to pass a threshold of acceptable quality (85 on Guetzli IIRC). I don't care about very low or very high qualities.

In terms of absolute bytes to pass this threshold, Guetzli felt the best when I was doing the research (~ mid-2017). I don't have any hard data to back this though - I did the experiment with 5-6 different product images, drew the conclusion and started using Guetzli.


> I think Google’s result of 25-34% smaller files is mostly caused by the fact that they compared their WebP encoder to the JPEG reference implementation, Independent JPEG Group’s cjpeg, not Mozilla’s improved MozJPEG

Well, MozJPEG didn’t exist until four years after WebP, so I suspect that might explain why.


One of the biggest barriers against adoption of these newer formats is the lack of pure Java implementations for use server-side.

The Java wrappers around a native binary are helpful, but for many projects not very useful and this hinders adoption.

https://github.com/haraldk/TwelveMonkeys/issues?q=is%3Aissue...


Using Java code for things that are this performance critical doesn't feel like a good idea. It will work of course, but it will probably GC a lot, and while modern JVMs do JIT, the code they generate can't be as optimal as something that has manual SIMD optimizations.


Using C code is how you get the next cloudbleed. I'm happy to pay a performance cost for the sake of memory safety.


Fix this easily by abstracting the manipulation of user uploaded files to chroot/serverless.


That doesn't fix the problem. The process can still leak any data that it has access to, which includes other users' confidential uploaded files.


Honestly, you'd benefit more from idiomatic bindings rather than a pure Java implementation. There are quite a few high quality Java libraries that wrap C/C++ code but maintain a nearly 1:1 API, which is what ends up making them difficult to work with. OpenCV is a good example of this.


Is JAVA still a thing?


Is there something wrong with the AVIF decoder's chroma upscaling? E.g. look at "Kodim 3", where the colored hats overlap each other. There's severe blocking, where I'd expect blurring instead. All the other formats blur, which is much less distracting.


I've noticed that too. I've been testing AVIF, and the reference encoder[0] defaults to 4:4:4 (no chroma downsampling) if you don't tell it otherwise. I tried 4:2:0 and it looked horrible, but the size hit for 4:4:4 wasn't bad and looked much better.

[0] https://github.com/AOMediaCodec/libavif
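The 4:2:0 artifact described above comes from each 2x2 block of chroma samples being replaced by a single value; a common way to derive it is averaging, sketched below (encoders may use other filters). On a hard color edge that falls inside a block, the averaged value belongs to neither side, which reads as smearing or blocking; 4:4:4 keeps every chroma sample.

```python
# Sketch of 4:2:0 chroma subsampling by 2x2 averaging: the chroma plane is
# reduced to quarter resolution. Edges that cross a 2x2 block get smeared.
def subsample_420(chroma):
    """chroma: 2D list of ints with even dimensions -> half-resolution plane."""
    h, w = len(chroma), len(chroma[0])
    return [
        [
            (chroma[y][x] + chroma[y][x + 1] +
             chroma[y + 1][x] + chroma[y + 1][x + 1]) // 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A hard chroma edge (0 | 255) landing on a block boundary survives:
print(subsample_420([[0, 0, 255, 255]] * 2))  # [[0, 255]]

# The same edge landing inside a 2x2 block smears to a mid value:
print(subsample_420([[0, 0, 0, 255]] * 2))    # [[0, 127]]
```

That mid value is what shows up as the blocky color fringes on the overlapping hats in Kodim 3.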


Subjectively, I'd say at those settings the WebP versions look slightly worse, though they don't suffer from the blocking artifacts native to JPEG.

For the images I serve on my sites, a few KB here and there aren't going to make a huge difference, and I'd rather avoid the hassle of serving content that isn't universally supported.


In most cases I prefer the WebP versions, but the JPEG formats handle chroma noticeably better. E.g. look at the purple band on the rightmost helmet in the 500 pixel version of Kodim 5. WebP reduces the saturation by excessive blurring. CJPEG handles this image best IMO.


Take a look at the Kodim 3 example image, either the '500px' version or the '1500px' one.

Zoom in (400%) on the cap of the yellow hat. The lines of the cap are muddied under all lossy compressed formats, but surprisingly, plain old cjpeg does so less than the others.

Generally though I find the new formats look better than JPEG.


So what's this AVIF format they mention that's actually better than both?

WebP did not take off.

But will AVIF be any better?


It's based on the next-gen AV1 video codec. I've tried it out, and it's amazing.

https://netflixtechblog.com/avif-for-next-generation-image-c...

https://github.com/AOMediaCodec/libavif


> So what's this AVIF format they mention that's actually better than both?

AVIF is an image format based on the intra-frame coding of AV1, as WebP is to VP8 and HEIF is to HEVC.

AV1 and HEIF are super recent though, the AVIF 1.0 spec was only released in early 2019. By comparison, WebP was initially released in 2019. So AVIF currently has very little support (it's behind flags in Firefox and that's it).

However all the big names are part of AOM, which oversees AV1 (and AVIF): http://aomedia.org/membership/members/

So chances are pretty high it'll get widespread support, eventually.

There’s one problem though: like WebP, AVIF just is not solving a big issue.


- WebP was initially released in 2019

+ WebP was initially released in 2010


I think it's just about to take off. Browser support across the board is coming by the end of the year. PNG also took a while to become fully supported, so we just need some patience.


I really want a better alpha-mask standard to emerge. PNG24 is attractive, but huge. PNG8/GIF works well, but is pretty limited.

I don't think WebP is it, but we'll see what comes out of the scrum. I'll work with whatever that is, but I won't waste my time chasing will o' the wisp "standards."


It's still relatively huge, and not widely supported without a polyfill, but FLIF [0] is a pretty neat format that tends to be smaller than the equivalent PNG, and can be converted to a lossy image "on the fly", and also supports alpha transparency.

The polyfill is still kind of slow, at least compared to native PNG rendering on the browser, but it does work, and hopefully some day it can be integrated into the standard.

[0] http://flif.info


>hopefully some day it can be integrated into the standard.

see jpeg-xl


Some time ago I experimented with a reduced color palette and dithering while keeping the full alpha info. It reduced the file size a lot while keeping the alpha.


Yeah, there's a number of ways to reduce GIF/PNG8 sizes.

I started writing Web sites in the mid-'90s, where a page was supposed to be about 30K (quaint, huh?).

The prevailing wisdom, then, was no dither, adaptive palette, and reduce the color palette until it hurts, then back up one.


Does anyone know how these compare with the HEIC format that it seems my iPhone now uses by default? Is that non-mainstream, or otherwise not fit for comparison?


I like HEIC under the hood, but it's not suitable for web as it has exactly zero browser support.

https://caniuse.com/#search=heif

I'm surprised that Safari doesn't support it.

I believe Apple released a bit of Javascript that lets you embed the animated HEIF images (I think they're called "live pictures") in your web site, but that's kind of a niche use.


It's a proprietary patent-encumbered format which isn't really widely supported in the web browsers.


Patent-encumbered? Yes.

> proprietary

No. It is an open standard (ISO/IEC 23008)


Being patent-encumbered kind of makes it moot on the web though, just like H.265 video.


Oh - so actually AVIF (as discussed in the original article) is an HEIF container with AV1 as the codec, and HEIC is the same but with H.265.


I happened to look it up on caniuse.com yesterday and it's just a field of solid red: nobody supports it, not even Safari.


I haven't followed the trend much nor seen a recent comparison, so take this with a grain of salt.

WebP is based on VP8, which was considered to be competitor to H.264 (AVC). HEIC is based on H.265 (HEVC), which competes with VP9 and AV1. So I'd assume WebP is one generation behind HEIC. WebP was introduced 5 years before HEIC though.


Correct. WebP is 10 years old, based on the VP8 codec that is about 14 years old, which lost in the market to H.264 that is 17 years old. In video streaming VP8 has been completely replaced by VP9 years ago, and VP9 was going to be replaced by VP10 which became part of AV1.

So WebP is hardly new, and a couple of generations behind. Now the next hotness is AVIF (or maybe JPEG XL), so ironically, WebP is becoming usable and obsolete at the same time.


I’d be curious about that as well. I made my own minor comparisons, and heif did well, but there are a ton of possible pitfalls my comparison may have been subject to so I’d love to see a comparison by someone with expertise


I thought HEIC is apple only and not supported by any browser. Is it even supported by Safari?


HEIC/HEIF is not an Apple format. It's an MPEG format that's been around for half a decade.

Apple started supporting it in 2017. Microsoft in 2018.

You are correct, though — it's not supported by any browser.


As someone who hosts images on their site, what is the best software (preferably open source) one can use to compress JPEG images imported from a digital camera? I would be glad to serve only JPEGs and dispense with WebP versions and simplify my site html.

A straightforward encoding of a JPEG into a WebP (using GIMP) does give me an almost 1/3rd reduction in file size, which is not insignificant.


Re-encoding will produce smaller images because it's discarding information - if you compare that WebP image to the original you'll inevitably see fine detail is missing. That may or may not be acceptable, but to be fair you'd want to compare it with different quality levels using MozJPEG.

The easiest way to do a simple comparison might be to install ImageOptim and process a few images with various tools. Based on that you could put the same tools into your site’s workflow.


Others have mentioned you're using 2 lossy formats. You would probably get a similar end file size with jpegoptim. Compile it with mozjpeg for even smaller sizes.
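For example (filenames hypothetical; jpegoptim is lossless by default and only goes lossy if you set a quality ceiling):

```shell
# Lossless: re-optimizes Huffman tables and strips metadata; pixels unchanged.
jpegoptim --strip-all photo.jpg

# Lossy: re-quantize down to a quality ceiling; much smaller, but discards detail.
jpegoptim --max=85 --strip-all photo.jpg
```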


Not sure about his images, but WebP supports lossless compression. If they are coming from a camera, it seems reasonable that they might be lossless, depending on the camera settings.

[Update] Just noticed he's going JPEG->WebP so my comment makes no sense.


imager.io was on here recently and seems to do a good job compressing into normal JPEG images.

https://github.com/imager-io/imager


Just going by elsewhere in the thread, if you want to not lose any quality then brunsli / Jpeg XL sounds like the way to go.


ImageMagick


When you convert JPEG to WebP, you're converting data in a lossy format to another lossy format. Of course you're going to get good JPEG to WebP compression, because you're starting with a low-quality source. If you start with a lossless image (RAW/PNG/TIFF/etc.) instead of a JPEG, you might get different results.


Improving image compression should really free up some space on my website so I can fit in a few more javascript frameworks.


Obligatory mention of FLIF [0], FUIF [1], pik [2], and JPEG XL [3] as well as Jon Sneyers

FLIF & FUIF were created by Jon Sneyers - http://sneyers.info/ and are lossless but progressive, generating better resulting images at every step of the image download and brilliant mobile support (your browser can just stop downloading the image once it reaches the quality sufficient for the viewport).

A new update on the JPEG XL is worth reading:

--> https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...

[0] http://flif.info/ -- superseded by FUIF

[1] https://cloudinary.com/blog/fuif_new_legacy_friendly_image_f... -- FLIF is lossless and amazing, in part absorbed by JPEG XL

[2] https://github.com/google/pik - from Google, got absorbed by JPEG XL

[3] https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_im...


Question for people more knowledgeable on the topic: is it reasonable to expect a future where audio, image and video are easy to work with and compression / decompression / encoding / decoding is done transparently by the hardware? Like, an OS read call giving you a standard plain format directly? I know PS5 is doing something like that for 3d assets, but I wonder how difficult it would be to do this and just spare the rest of humanity from having to suffer with formats for the rest of our lives. Or would there be too many cons?


It has been said the PS5 uses Kraken. If it's this http://www.radgametools.com/oodlekraken.htm then nothing really changes; PS4 games use RAD Kraken today. Texture assets themselves were standardized a long time ago. Currently on the desktop we are at http://www.reedbeta.com/blog/understanding-bcn-texture-compr... with history going all the way back to 1990 patents for Apple Video's 'road pizza' codec, later ripped off by S3 and re-patented as S3TC texture compression in 1997.


AV1 images are way better, and you can already do them in Firefox with a 1-frame video. It's a shame it hasn't been adopted as a singular format yet
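For example, with an ffmpeg built against libaom-av1 (flag names are my assumption and vary between versions), a single-frame AV1 still in the AVIF container looks like:

```shell
# Encode exactly one frame as AV1 and mux it as a still picture.
ffmpeg -i input.png -frames:v 1 -c:v libaom-av1 -crf 30 -still-picture 1 output.avif
```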


>It's a shame it hasn't been adopted as a singular format yet

it has been. look up avif, you can already sorta try it out on firefox nighty.


I make a lot of memes for Reddit with MS Paint. I tend to avoid WebP because of compatibility issues. Neither Reddit's native image host nor Imgur allows WebP files, so when I make memes from images in multiple formats (e.g. a jpg, a png, and a WebP), I save them as jpg or png to avoid that. When a WebP file is saved as jpg or png, Paint gives an error message about how this will erase all transparency, but for my purposes that is not a problem.


I mean, Photoshop doesn't even support it either, so every encounter I have with this file format has been an exercise in frustration. Honestly, I would have preferred it if it just went away and was replaced with something companies are free to implement safely.


IIRC WebP is open-source and unencumbered by patents so there's no reason it couldn't be implemented. That said, it's not a good source format.


In my experience, when building websites, webp has always been smaller thus far.

For anyone interested and building sites with Next.js, I've released a plugin for optimizing images and converting to webp with fallback to jpg/png: https://github.com/humaans/next-img/
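For reference, the no-JavaScript way to do that fallback is the `<picture>` element (filenames hypothetical): browsers that support WebP pick the first matching source, everyone else gets the plain `<img>`:

```html
<picture>
  <source srcset="photo.webp" type="image/webp">
  <img src="photo.jpg" alt="description" width="800" height="600">
</picture>
```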


ASK HN: Since this discussion has come up, I would like to ask HN if you are interested in an in-depth comparison on a production workload.

I run https://www.gumlet.com which serves more than 50M images per day. We can put together an entire study on different image formats if HN finds it helpful. Please let us know.


Yes, I'm personally quite interested. Although more for my own curiosity than any business use case.


Definitely interested-- a site with an image heavy workload such as yours is the perfect testbed for this sort of exploration.


I've recently converted some images to WebP on my side project Portabella [1]. I did it to appease Google's PageSpeed test for SEO. But I agree with other comments in this thread that it's annoying, so I might switch them all to JPEG and be done with it.

[1] https://portabella.io


What really sucks about all this is that they're actually ruining the credibility of Lighthouse audits this way; what has for years been the best all-in-one automated performance+accessibility analysis tool for any web app/site is now being used to promote some random new image format no one wants to care about.

You spend all this time improving your stack, reducing your app bundle size, convincing your company on the benefits of server side rendering, improving your devops, getting your manager to see these measurable benefits by running lighthouse audits themselves etc...

Then all of a sudden it turns out that the main tool you used for this has been turned into a.. a fking ad platform.

I apologise for this emotionally charged rant, but seriously though, is this how you plan on getting developers to adopt your dubious new standard, by p*ssing them off?


FWIW one of the reasons this has become relevant again is because the next version of Safari supports WebP. The article's discussion of AVIF is interesting but you're not going to be able to use it on the web any time soon.

I'd be curious to see a WebP vs PNG comparison, since (IIRC) it has a lossless mode too.


Lossless is pretty easy, there's no subjective qualities to compare, just file size.

WebP Lossless almost always results in a smaller file than PNG. I have seen exceptions to this, but they're sufficiently rare that it's not worth worrying about.
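Easy to verify yourself with the reference tools (filenames hypothetical; `-z 9` trades encode time for a smaller file):

```shell
cwebp -lossless -z 9 input.png -o output.webp
ls -l input.png output.webp          # compare the sizes
dwebp output.webp -o roundtrip.png   # decodes back to the same pixels
```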


If you're writing an Electron app, using WebP/WebM is a Free space optimization, especially in lossless mode for GIFs or sprite maps. You can also make really beautiful, 60FPS animations with WebM that are as easy to use as GIF


This is a weird comparison. While I get the comparison for non-transparent images, where JPEG or WebP are probably both fine.

However the main benefit of webp is alpha backgrounds or transparency. It can destroy PNG when it comes to compression and size.


I would say it really depends on the PNG compression library you are using. Tools like RIOT or Compressor.io can give you optimal results on PNG compression.


> Only concern I have is the excessive blurring of low detail areas?

Couldn't the person who picks the compression ratio tell the encoder which parts of the image are important? It could even be done automatically to some extent. If you are going to send a 5MB picture of a person at their birthday party, it feels worth it to spend a few more bits on that person than on their surroundings.

Extrapolating to smartphone pictures where you generally select a focus area (and sometimes selectively blur the rest), they could use that information to keep more bits of information in this area.


JPEG quality can probably be better (but slower) when using sophisticated decoder libraries. That way, a JPEG image at WebP file size might look like the same quality. But I didn't test.

https://news.ycombinator.com/item?id=22245788

Also, Google has a "brunsli" library, that can recompress JPEG to JPEG XL format without loss.

https://google.github.io/brunsli/
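The repo ships `cbrunsli`/`dbrunsli` command-line tools; a sketch of the round trip (filenames hypothetical):

```shell
cbrunsli input.jpg input.jxl       # recompress, typically ~20% smaller
dbrunsli input.jxl roundtrip.jpg   # decompress back to JPEG
cmp input.jpg roundtrip.jpg && echo byte-identical
```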


The software versions are never mentioned and since the article says "cwebp only supports TGA input" I suspect that's a very old build of cwebp.


Two major advantages: transparency and optional lossless. JPEG blocky artifacts are very hard to remove for high resolution images unless you hand tune them.


An implicit assumption is that "smaller file size is better" and if this discussion were about audio it would probably be mp3 vs ogg vorbis... but flac is the BEST to me because I want all of the audio data in the best lossless compression scheme. As a photographer, I would like to have ALL of my image data in a losslessly compressed format as well. What is the FLAC equivalent in the photo/image space?


Some formats that are lossless: PNG, FLIF.

Some formats that have a lossless mode: TIFF, WebP, BPG, JPEG XR.

FLIF claims to have the best ratios, and they have some benchmark info on their web page (https://flif.info/).

There are other things to consider like encoding/decoding speed, compatibility, and patents/licensing.


[replying to myself since I can't edit now]

Oops, I failed to mention JPEG XL!

JPEG XL is newer technology, more likely to be supported by browsers, and it pretty much obsoletes FLIF. It's still pretty new but may be the best bet.

More info: https://jpeg.org/jpegxl/index.html



TIFF is just an image container. I'd make sure when you save as TIFF, you specify a bit depth equal to your source image, and a format that isn't lossy (like standard JPEG is).

If you've got a standard sRGB 8-bit-per-channel image, though, the loss from JPEG at q95 or q100 is negligible, and means your image still enjoys wide app compatibility.

If it's something else (like a raw format from a DSLR), the best you can do is retain the original file.

Also: don't forget file metadata! I inadvertently deleted all EXIF headers from a number of my original images many years ago because I used jpegtran (without the -copy option!) to losslessly rotate images. I ended up adding a bunch of heuristics in PhotoStructure to automatically infer missing metadata to repair this mistake.


There are lossless versions of PNG and BPG, and there is also a lossless JPEG enhancement.


PNG is always a lossless format. You can only control the level of compression.

There are tools that let you re-encode a PNG after applying a lossy filter, but the PNG still encodes it losslessly.


Regardless of whether it is better, I sure wish Macs would support them natively, such as in Preview and Finder. Really annoying that if you save an image from the web they aren't as usable as pngs and jpegs. Now, I typically save it locally, see it's a webP, then go back and screenshot it so it is a PNG. (and then find it on my desktop, rename it, move it where I want it to be, etc)


You can make a lossless webp from a png. I use this on my site, and it consistently results in smaller images than what I get from png.


Another reason to prefer MozJPEG to webp is that Safari and Edge don't support webp yet. [1] Although it looks like that's changing very soon for Safari.

1: https://caniuse.com/#feat=webp


As an observer of video codec development, I heard Daala (PVQ) used activity masking to make quantization steps finer (don't discard all detail) in low-contrast areas. I don't know if that's practical in AV1 (a DCT block-based codec).


These numbers need decompression computational complexity for completeness. I am not better off if I save 1 ms downloading but it takes 5-10 ms more to decode. Bandwidths in the 100s of Mbps are common now.


Not for me. I'm in the United States in an urban area and mobile data is 1 cent per megabyte (my provider is Google Fi). Decoding is pretty fast for me, but the time for page elements to download is in the seconds (not even milliseconds). And with any low signal (wifi or mobile data), over 10 seconds is not uncommon.


Does anyone have insight as to whether this same evaluation re:AVIF applies to WebM?

Edit: saw somewhere that it is used in production for real-time video, but had trouble using rav1e as opposed to libvpx


Yes: Google heavily promoted their format but unless you’re serving a high volume of traffic it’s not worth the cost of doubling your storage and maintaining a separate toolchain to lower your transfer by perhaps 10%. By now more devices have hardware support so its performance is more competitive but at this point I’d go straight to AV1.


Perhaps I should clarify. Much of the reaction appears to be to Safari's support for WebP. Those reacting believe that WebP will stick around even though it uses VP8 (it could have jumped straight to AV1). So I'm curious whether those same people think WebM will also stick around longer. It's a bit different since WebM uses VP9, but there are perhaps still potential issues, and I personally don't know the browser/hardware support landscape.


It's all about Google wanting control. Their own file formats, their own protocols, AMP, not showing URLs in Chrome. The goal is to turn the web into a closed Google ecosystem.


How does a royalty-free codec with an open source implementation turn the web into a closed ecosystem?

Not that I trust Google, but in this case I don't see the harm.


How does a royalty-free codec with an open source implementation turn the web into a closed ecosystem?

Google did it with Android. It's "open source", but try to do something without Google Play Services.


I'm more interested in hearing about support for AV1.

* https://en.wikipedia.org/wiki/AV1


AVIF is in the comparison, and the article notes it's in Firefox (behind a flag) and ought to be coming to Chrome soon.


Of course. JPEG doesn't support transparency and animation.


Summary: - No. It's roughly the same.

Mandatory Google bash comment: - It's mainly another wheel reinvention from Google, because reasons.


> Mandatory Google bash comment: - It's mainly another wheel reinvention from Google, because reasons.

WebP seems to have been the first "let's just use intra-frame coding" image format out there though, and it's hardly the first format to have tried to unseat JPEG.

WebP is 4 years older than BPG (Bellard's HEVC-based format), 5 years older than HEIF (MPEG's same) and AVIF was only specified 9 years later.


It's a sad story of broken promises. Back when video == Flash, Google bought and opened VP8 codec and got a bunch of companies to promise support for VP8 video.

Back then it made a lot of sense to also have an image format based on exactly VP8 (despite it being a poor fit for still image format), since everyone promised to support it, including hardware acceleration.

But Adobe never added VP8 to Flash, hardware support was too little too late. VP8 died, and WebP is burdened with compatibility with a world that never materialized.


Mandatory Google bash comment: - It's mainly another wheel reinvention from Google, because reasons.

Pithy comment, and reaction, aside, that's enough to give me pause about implementing it.


*pity


We have too many different formats! Let's make a new standard!

2 years later

We have too many different formats! Let's make a new standard!

shrug


I can't wait for AV1 and AVIF to take over the world. We've needed better formats for awhile now.


Does anyone know of a comparison that looks at the encoding and decoding times / cpu requirements?


We use WebP for a number of reasons:

- supports lossless

- supports alpha

- excellent decode speed

- open business friendly license

In essence, it eliminates PNG.


I'm using Firefox and just tried to save a webp image and it tried to save as a web page. [initially, I said no.. see edit below]

Edit: Looks like it was that specific website and they are doing something odd, so I retract my initial judgement.


Is that server sending the image with the correct mimetype?


Good point! I just went back and checked that specific site, and they are definitely doing something odd. I retract my previous judgement


tl;dr: it's not.

And they're also very annoying to deal with, I might add, especially when you need to save them or work with them, as they are not natively supported by macOS or Windows.


Bad enough that Google is pushing AMP, let alone trying to take over the web's image formats as well. No more Google!


chrome keeps changing downloaded images to webP which is annoyin af. can't even find a way to disable it


It's not, and you can't. The website is serving up the images as WebP. Chrome is just giving you what the website gives you; there's nothing to "disable".


There's an extension you can use to save it as a different format, but it's transcoding.

https://chrome.google.com/webstore/detail/save-image-as-type...

In theory, you could probably do some combination of altering the user agent, and/or blocking .onload for webp images (which is often used to detect webp support).

If you changed both, most sites would probably serve you a jpg/png version.


You could remove webp from the Accept header so the web server thinks your browser can't interpret webp. Not guaranteed to work, obviously.
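For example with curl (URL hypothetical; whether this works depends entirely on the server/CDN):

```shell
# Request without image/webp in Accept; many CDNs then fall back to JPEG/PNG.
curl -sS -H "Accept: image/jpeg,image/png,*/*;q=0.8" \
  -o photo.jpg "https://example.com/photo"
file photo.jpg   # check what actually came back
```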


Are you sure that’s Chrome and not a website which selects the format based on the client’s advertised capabilities? I know Facebook did that and it confused a lot of people when a .jpg wasn’t a JPEG.
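An easy way to check what a downloaded file actually is, regardless of extension, is to look at the magic bytes (a quick shell sketch): JPEG starts with ff d8 ff, PNG with 89 "PNG", and WebP is a RIFF container whose bytes 9-12 spell "WEBP".

```shell
# Identify an image by its leading bytes rather than its file extension.
sniff() {
  magic=$(head -c 12 "$1" | od -An -tx1 | tr -d ' \n')
  case "$magic" in
    ffd8ff*)                    echo jpeg ;;
    89504e47*)                  echo png ;;   # \x89 P N G
    52494646????????57454250*)  echo webp ;;  # R I F F <size> W E B P
    47494638*)                  echo gif ;;   # G I F 8
    *)                          echo unknown ;;
  esac
}
# usage: sniff downloaded.jpg
```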


For me it was the same problem on Chrome. I switched to Firefox, and the same sites are happy to let me save their images as .jpeg instead of .webp


I thought this might have been caused by a difference in whether image/webp is included in the Accept header on requests for web pages (not requests for images): some people who run web sites prefer to serve different HTML to UAs that do and don't support WebP, rather than serving different image formats at the same URL. But when I looked at the Accept header Firefox sends for pages, it included image/webp, so I'm not sure what these sites are doing.


Chrome sends "Accept: image/webp" by default when requesting images. Some CDNs send webps to clients who send this header and original-format images to clients who do not.
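A common server-side sketch of exactly that (nginx config fragment; assumes you pre-generate a `.webp` file next to each original, details vary by setup):

```nginx
# In the http block: map the Accept header to an optional ".webp" suffix.
map $http_accept $webp_suffix {
    default        "";
    "~*image/webp" ".webp";
}

server {
    location ~* \.(jpe?g|png)$ {
        # Caches must vary on Accept, or webp-capable and -incapable
        # clients will see each other's cached responses.
        add_header Vary Accept;
        try_files $uri$webp_suffix $uri =404;
    }
}
```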


Use the developer tools, right click on the image's request and from there copy as curl... paste in an editor and remove all the headers, etc and you should get a more compatible version.


Perhaps a site is giving you a different version of a file based on your user-agent? Try changing your user-agent and see if that changes. Or contact the website and ask them.


WebP would never have been adopted and nobody would care about it if it wasn't developed by the owner of the world's largest browser and the world's largest website (which ranks everyone else based in part on what formats and conventions they use), who has the ability to implement it on both sides, and force everyone to care.

WebP is a blatant example of how monopoly power is used to push bad things upon everyone else, and as seen here, for literally no good technical reason.



