
This is really an image codec, isn't it? It doesn't have any temporal compression capabilities.

It's interesting to see how well such a simple technique performs. I wonder how it would do if you added trivial temporal compression by simply subtracting the previous frame's color values from the next frame's and encoding the residual.
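A minimal NumPy sketch of that frame-differencing idea (the function and array names are my own assumptions, nothing from the article):

    import numpy as np

    def encode_residual(prev: np.ndarray, cur: np.ndarray) -> np.ndarray:
        # Per-pixel difference, wrapped modulo 256 so it stays lossless
        # in uint8. Static regions become runs of zeros, which any
        # downstream entropy coder squeezes far better than raw pixels.
        return (cur.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)

    def decode_residual(prev: np.ndarray, residual: np.ndarray) -> np.ndarray:
        # Invert the prediction: add the residual back, modulo 256.
        return (prev.astype(np.int16) + residual.astype(np.int16)).astype(np.uint8)

The modulo-256 wraparound keeps it lossless without needing a wider residual type; the open question is how well the entropy stage handles the near-zero noise from sensor grain.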



Why would temporal compression be a necessary requirement to be called a video codec?

There are quite a few codecs in the "intra-frame only" section of this Wikipedia list, and that section sits within "Video compression formats":

https://en.wikipedia.org/wiki/List_of_codecs#Intra-frame-onl...


Because it is trivial to turn any image codec into a video codec by simply encoding each frame individually. And despite the article talking about temporal redundancy, it doesn't actually show any code that deals with it.
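To illustrate how trivial that lift is, here's a hedged sketch: JPEG-encode every frame independently, which is essentially what MJPEG does (Pillow is assumed only as a stand-in image codec):

    import io
    from PIL import Image

    def encode_all_intra(frames, quality=90):
        # "Video codec" built from an image codec: each frame is
        # JPEG-encoded on its own, so zero temporal redundancy is used.
        packets = []
        for frame in frames:  # each frame is a PIL.Image
            buf = io.BytesIO()
            frame.save(buf, format="JPEG", quality=quality)
            packets.append(buf.getvalue())
        return packets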


mjpeg is a popular video codec where each frame is jpeg compressed


Nitpick: mjpeg is a video compression format, not a codec; the codec is plain old JPEG (which is not a temporal codec).


It's a bit debatable, but he definitely only did the image-coding part of a video codec. All of the listed formats also support the metadata required for video.

I was certainly expecting some motion coding.


some early implementations of mpeg-1 compressors only supported I frames. amusingly, this is still a valid mpeg-1 bitstream.


Not particularly strange; a lot of compression formats work like that. E.g., you can make a zip file at STORE level and there will be no actual compression.
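For instance, with Python's stdlib (a real API, not anything from the article):

    import zipfile

    # A perfectly valid .zip archive with no compression at all:
    # every member is STOREd verbatim.
    with zipfile.ZipFile("archive.zip", "w", compression=zipfile.ZIP_STORED) as zf:
        zf.writestr("frame0001.raw", b"\x00" * 4096)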


You can do the same with a modern encoder too by setting the keyframe interval to 1 and "amusingly" the bitstream is still valid.
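E.g. with ffmpeg and libx264 (assuming they're installed), a GOP size of 1 does exactly this:

    import subprocess

    # All-intra H.264: -g 1 sets the GOP size to one frame, so every
    # frame is an I-frame and no inter (P/B) prediction is ever used.
    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-c:v", "libx264", "-g", "1",
        "all_intra.mp4",
    ], check=True)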


I-P-B.

Some video formats only go I. Then there's not a lot of difference between images and video, as far as editing goes. Decoding for end-user delivery has a lot more going on, but one has to start somewhere.

Anyway, I think this kind of work is a great starting point and gets more people interested in the field.


A simple delta between frames wouldn't perform well if there was any camera movement: you'd pay for every edge twice.

Instead of always working with a delta, conditionally using the previous frame as the prediction source could work: e.g., for the current pixel X, if its already-decoded neighbor A is closer to the previous frame's A than to the other neighbor B, predict X from the previous frame's X. Since that test only uses decoded data, the decoder can repeat it. Or you could signal the prediction source explicitly, per block or with RLE. Ideally you'd do motion compensation, but doing that precisely enough for a lossless compressor is more than 100 lines.
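A rough sketch of the implicit-selection variant (this is my reading of it, not anything from the article; grayscale uint8 frames assumed):

    import numpy as np

    def predict_residuals(prev: np.ndarray, cur: np.ndarray) -> np.ndarray:
        # For each pixel X, compare its left neighbor A against the
        # previous frame's A (temporal similarity) and against the top
        # neighbor B (spatial similarity). If the temporal match is
        # tighter, predict X from the previous frame's X; otherwise
        # from A. The test uses only already-decoded data, so the
        # decoder can mirror it and no side information is needed.
        h, w = cur.shape
        residual = np.zeros((h, w), dtype=np.int16)
        residual[0, :] = cur[0, :]  # borders stored raw for simplicity
        residual[:, 0] = cur[:, 0]
        for y in range(1, h):
            for x in range(1, w):
                a = int(cur[y, x - 1])   # left neighbor
                b = int(cur[y - 1, x])   # top neighbor
                temporal_err = abs(a - int(prev[y, x - 1]))
                spatial_err = abs(a - b)
                pred = int(prev[y, x]) if temporal_err < spatial_err else a
                residual[y, x] = int(cur[y, x]) - pred
        return residual

In static regions the temporal predictor wins and the residuals collapse toward zero; on camera movement the spatial predictor takes over, which avoids paying for every edge twice.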


What about a "don't bother" bit for when that happens?


While delivery formats often use P- and B-frames, editing and recording formats often go all-intra: e.g., the Sony FS7 only supports the all-intra XAVC-I for recording at full resolution and framerate.

Personally, I use ProRes 422 for recording, and DNxHD/DNxHR for proxies (and that's only because DaVinci Resolve's free edition can't create ProRes Proxies).

Both of these are all-intra codecs, typically carried in QuickTime (MOV) or MXF containers.


the only requirement is to support video, and support compression

see: mjpeg


Video codecs need not support compression.




