After downloading the ros-k sample and playing with it in GIMP, it looks like exactly the same kind of chicanery, accomplished in a few simple steps:
1. The "original" image is saved at a very high JPEG quality setting, somewhere around 99% by GIMP's figuring
2. The "JPEGmini" version is saved with a slightly lower, but still high quality setting of about 85%.
3. The "comparison" on the website shows the images scaled down to 25% of their encoded resolution.
In other words, the JPEGmini version is nothing special. If you save a JPEG at 85% quality and look at 1/4 scale, it will look exactly the same as a JPEG saved at 99% quality at 1/4 scale. And it will look just as good as if you pass it through Beamr's software.
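If you want to check this yourself, here's a rough sketch using Pillow (the file names are made up, and Pillow's quality scale isn't exactly GIMP's, so treat 95/85 as stand-ins for the numbers above):

```python
# Save the same source at "very high" and "high" JPEG quality, then downscale
# both to 25% -- at that scale the two are essentially indistinguishable.
from PIL import Image

src = Image.open("original.png").convert("RGB")   # hypothetical source image
src.save("q_high.jpg", "JPEG", quality=95)        # stand-in for the ~99% "original"
src.save("q_85.jpg", "JPEG", quality=85)          # stand-in for the JPEGmini-like output

for name in ("q_high.jpg", "q_85.jpg"):
    im = Image.open(name)
    small = im.resize((im.width // 4, im.height // 4), Image.LANCZOS)
    small.save(name.replace(".jpg", "_25pct.png"))  # compare these two by eye
```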
I see now that this is what they purport to do. However, I still maintain that the presentation is dishonest. Showing a comparison at 25% scale gives the impression that the tool is better at choosing a "nearly lossless" setting than it really is. Look at the dog image, which in their demo appears to be identical to the original.
Now look at it at 100% scale: http://imgur.com/z12mHnd . Block artifacts galore. Now sure, it may still do a better job than just choosing a constant quality setting and applying it across the board. But the demo doesn't show us that. It doesn't even make the right kind of comparison.
What we need is a comparison between choosing a single quality level and using JPEGmini. Here's the kind of demo we'd need to see for it to be useful:
On the right side: five JPEGs saved with JPEGmini, having a total file size of X, shown at 100% resolution.
On the left side: five JPEGs saved with a constant quality setting, chosen so that the total file size is X, also shown at 100% resolution.
Then we'd honestly know whether the program is worth using.
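That comparison is easy enough to script. A rough sketch (Pillow again, made-up file names) that just searches for the one constant quality whose total output size matches the JPEGmini total:

```python
# Find the one constant quality whose total output size roughly matches the
# total size of the JPEGmini files, then compare both sets by eye at 100%.
import io, os
from PIL import Image

originals = ["dog.jpg", "station.jpg", "c.jpg", "d.jpg", "e.jpg"]          # hypothetical
mini_files = ["dog_mini.jpg", "station_mini.jpg", "c_mini.jpg",
              "d_mini.jpg", "e_mini.jpg"]                                  # hypothetical
target_total = sum(os.path.getsize(f) for f in mini_files)                 # this is X

def total_size_at(quality):
    total = 0
    for f in originals:
        buf = io.BytesIO()
        Image.open(f).convert("RGB").save(buf, "JPEG", quality=quality)
        total += len(buf.getvalue())
    return total

lo, hi = 1, 95
while lo < hi:                       # binary search: highest quality that fits in X
    mid = (lo + hi + 1) // 2
    if total_size_at(mid) <= target_total:
        lo = mid
    else:
        hi = mid - 1
print("constant quality that matches the JPEGmini total:", lo)
```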
My guess? Probably not. The train station image, for example, is so grainy that you can compress it to damn near 600KB (50% in GIMP) before the artifacts are really noticeable (and well beyond that if you scale it down to 25% afterward). So did JPEGmini's visual model detect this and cut the bitrate down accordingly? No, it decided that the image should be saved at the equivalent of 83%, making the file more than twice as large as necessary. And this is on an image that was presumably handpicked as a shining example of how well the product works.
My guess is that if you just chose around a 75% quality setting and compressed all of your JPEGs that way, you'd do just as well as JPEGmini.
That's actually a very good analogy to why Beamr's "minimal bitrate for no quality loss" isn't groundbreaking at all, especially since there is necessarily quality loss in lossy H264 -> H264 encoding.
85% is already a quality number: it says, in relative terms, how much quality you are willing to give up to save kilobytes in your output JPEG. Similarly, x264's CRF option is a quality number, saying how much quality you are willing to give up for bitrate.
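For anyone who hasn't used it, this is all CRF is. A hedged sketch, assuming ffmpeg with libx264 is installed and using made-up file names (lower CRF = more bits, higher quality):

```python
# Encode with x264's CRF ("constant rate factor"): a single quality knob,
# 0-51, where ~18 is close to transparent and ~28 is visibly degraded.
import subprocess

def encode_crf(src, dst, crf=23):
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-crf", str(crf), dst],
        check=True,
    )

encode_crf("input.mp4", "output_crf18.mp4", crf=18)  # hypothetical file names
```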
Inevitably Beamr will produce some files that are inefficient, as well as some files that have noticeable blocking and banding. The difference is that CRF allows adjustments.
The files in the before/after preview slider are exactly the same file size when they're supposed to be showing how their compressed image looks as good as the original. They're also not the same files you get when you click the download button.
Those are just preview files used for fast page load, but they are based on resized versions of the original and the JPEGmini version. You are welcome to download the original and JPEGmini full resolution files, and compare them at "Actual Size" (100% zoom).
I downloaded the pic of the dog, opened the 'Original' in Paint.net, re-saved it at the same filesize as the JPEGmini version and it's indistinguishable from the other two versions. JPEGmini doesn't seem to do anything that I can't already do in any image editor.
Note my previous reply on this issue: JPEGmini adaptively encodes each JPEG file to the minimum file size possible without affecting its original quality (the output size is of course different for each input file). You can take a specific file and tune a regular JPEG encoder to reach the same size as JPEGmini did on that specific file. And you can also manually tune the quality for each image using a regular JPEG encoder by viewing the photo before and after compression. But there is no other technology that can do this automatically (and adaptively) for millions of photos.
>JPEGmini adaptively encodes each JPEG file to the minimum file size possible without affecting its original quality
That is indeed what the FAQ says, but that being the case, the tool does not actually work very well, and the presentation is incredibly dishonest. The fact is that the JPEGmini versions of these images do lose noticeable quality, but Beamr is hiding this by giving a demo where the images are shown at 25% scale.
Take the dog image, for example. Using the slider, you'd think that JPEGmini nailed it; no visible artifacts whatsoever. But let's look at a section of the image at 100% scale, and see if this tool is really that impressive: http://imgur.com/z12mHnd
Maybe they do that for lossy images, but I just compared what's on imgur to the original on my computer, and it is pixel-for-pixel identical. They did shave about 7KB off of it though, so maybe they push PNGs through pngout or similar.
I mean, as far as I know, there aren't even any practically useful algorithms out there for doing lossy compression on PNGs (although there should be).
Fireworks, pngquant, and png-nq have quantization algorithms that will dither a 32-bit PNG with alpha down to an 8-bit palettized PNG with alpha. The palette selection algorithms the free tools use (I haven't used Fireworks) sometimes drop important colors used in only a small section of an image, resulting in, say, a blue power LED losing its blue color.
Yeah, technically that's lossy compression, but what I meant was lossy 32-bit PNG; that is, a preprocessing step before the prediction step which makes the result more compressible by the final DEFLATE step while having a minimal impact on quality.
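To make that concrete, here's a toy sketch of what I mean (posterizing the low bits is a crude stand-in; a real tool would do something smarter, probably error-diffused; file names are hypothetical):

```python
# Toy "lossy 32-bit PNG": quantize away each channel's low-order bits before
# saving, so PNG's filtering + DEFLATE stage sees far more repeated values.
from PIL import Image

def quantize_low_bits(path_in, path_out, bits=2):
    im = Image.open(path_in).convert("RGBA")
    mask = 0xFF & ~((1 << bits) - 1)      # bits=2 -> 0b11111100
    im = im.point(lambda v: v & mask)     # applied per channel (alpha included)
    im.save(path_out, optimize=True)

quantize_low_bits("screenshot.png", "screenshot_lossy.png", bits=2)
```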
That sounds very interesting. I wonder what kinds of transforms would improve compressibility by DEFLATE. I know a bit about PNG's predictors, but not enough about DEFLATE to confidently guess. If you ever work on this, please let me know. I'd like to collaborate.
>But there is no other technology that can do this automatically (and adaptively) for millions of photos.
Excluding, of course, the for loop or the while loop, particularly when used in a shell/python/perl/etc. script. Matlab is also particularly well suited for this. Then there are visual macro thingamajigs: iMacros for browser-based repetitive tasks, which could then be used with an online image editor. IrfanView, I believe, has batch image processing, as do many other popular photo/image editors. And last, but not least, ImageJ.
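For the record, the whole "automatically, for millions of photos" part is about this much script (fixed quality here, but any per-image rule could go where the constant is; folder names are hypothetical):

```python
# Recompress every JPEG in a folder at one fixed quality setting.
import pathlib
from PIL import Image

SRC = pathlib.Path("photos")                # hypothetical input folder
DST = pathlib.Path("photos_recompressed")
DST.mkdir(exist_ok=True)

for path in SRC.glob("*.jpg"):
    Image.open(path).convert("RGB").save(DST / path.name, "JPEG", quality=75)
```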
You may have added feedback to the loop where many would have had none, but feedback is not a new or novel concept. The only thing I can see that is possibly non-trivial is your method for assessing the quality of the output. Given the many high-quality image-processing libraries and well-documented techniques available, and the subjective nature of assessing "quality" with respect to how an image "looks", I doubt there is anything original there. You've enhanced the workflow for casual users, the uninformed, and those who prefer to spend their time on something else. That arguably has value. It seems a bit of a stretch to call it "technology" to this audience, but that is what the word means.
IMO, you'd receive a "warmer" welcome from the more technically-minded folks here if you'd dispense with the marketing hype (definitely stop making impossible claims) and show some real evidence of just how much "better" your output is than some reasonable defaults, including cases where your system fails to meet your stated goals (even a random quality assessment will get it right sometimes). Nobody is ever going to believe that any system as you've described works for every case, every time (simply impossible). In other words, you aren't going to sell any ice to these Eskimos.
edit: accidentally posted comment before I was finished blathering.
I would appreciate an explanation of what you mean by "without affecting its original quality." We're talking about lossy compression, so whether that goal is achieved is purely subjective, isn't it?
JPEGmini operates on a similar principle to Beamr. The JPEGs coming out of your camera are compressed at a very high (wastefully high) quality setting, so JPEGmini analyzes the image and recompresses it at a much lower quality setting that still looks visually identical.
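To be clear, that's my reading of the principle, not their actual algorithm. A rough sketch of how you could approximate it yourself, using PSNR as a crude stand-in for a real perceptual metric (threshold and file names are made up):

```python
# Walk the quality down until a distortion metric crosses a threshold, then
# keep the lowest quality that still passed. PSNR is only a rough proxy for
# "visually identical"; a real tool would use a perceptual model.
import io
from pathlib import Path
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def recompress_adaptive(path_in, path_out, min_psnr=42.0):
    ref = np.asarray(Image.open(path_in).convert("RGB"))
    best = Path(path_in).read_bytes()               # fall back to the original
    for q in range(95, 40, -5):
        buf = io.BytesIO()
        Image.fromarray(ref).save(buf, "JPEG", quality=q)
        if psnr(ref, np.asarray(Image.open(io.BytesIO(buf.getvalue())))) < min_psnr:
            break
        best = buf.getvalue()                       # still "good enough", keep going
    Path(path_out).write_bytes(best)

recompress_adaptive("dog_original.jpg", "dog_smaller.jpg")  # hypothetical names
```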
I'm curious what an analysis of one of their JPEGs would show.