
A simple delta between frames wouldn't perform well if there was any camera movement: you'd pay for every edge twice.

Instead of working with a raw delta, you could conditionally use the previous frame as the prediction source (e.g. if a neighbor pixel A's value is closer to the previous frame's A than to the current frame's other neighbor B, predict pixel X from the previous frame's X). Or you could signal the prediction source explicitly, per block or with RLE. Ideally you'd do motion compensation, but doing that precisely enough for a lossless compressor takes more than 100 lines.
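A minimal sketch of that first idea (my own illustration, not the commenter's exact scheme; `predict_frame` and the neighbor-comparison rule are assumptions): for each pixel, check which source predicted the already-decoded left neighbor better, and take the prediction for the current pixel from that source. Since the decoder can repeat the same test, no side information needs to be signaled.

```python
import numpy as np

def predict_frame(cur, prev):
    """Per-pixel prediction-source selection (illustrative sketch).

    For each pixel, look at its left neighbor. If the previous frame
    predicted that neighbor better than the current frame's spatial
    predictor did, predict the current pixel from the previous frame;
    otherwise predict from the left neighbor in the current frame.
    The residual cur - pred is what would get entropy-coded.
    """
    h, w = cur.shape
    pred = np.empty_like(cur)
    pred[:, 0] = prev[:, 0]  # no left neighbor: fall back to previous frame
    for y in range(h):
        for x in range(1, w):
            left = int(cur[y, x - 1])
            # error each source made on the already-decoded neighbor
            temporal_err = abs(left - int(prev[y, x - 1]))
            spatial_err = abs(left - int(cur[y, x - 2])) if x > 1 else 255
            pred[y, x] = prev[y, x] if temporal_err <= spatial_err else cur[y, x - 1]
    return pred
```

On a static camera the temporal test wins almost everywhere and the residual collapses to near zero; on a pan, the spatial predictor takes over instead of paying for every edge twice.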



What about a "don't bother" bit for when that happens?



