If you’re having an existential crisis over interpolated/extrapolated/hallucinated images, and have been assuming that every stage of a camera pipeline only throws away bits rather than interpolating, here is a list of stages in most camera pipelines that already try to interpolate information:
* demosaicing: interpolates color from nearby pixels. Each pixel gets just one of the three color components; the other two are interpolated from its neighbors.
* decompressing JPEG: tries to guess the information the compressor lost.
* black field correction: adjusts the brightness of every pixel to compensate for each pixel’s slightly different sensitivity.
* de-vignetting: compensates for the borders of the image being darker than the center.
* auto white balance: compensates for the fact that your eye’s color constancy doesn’t work on a photograph the way it would in the natural setting. It is a complicated way to get you to see the colors you would have seen had you been looking at the full scene.
All of these try to recover some aspect of the signal that was irretrievably lost by a previous step, and they do it by making plausible guesses. Toy sketches of each of these steps follow below.
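To make the guessing concrete, here is a minimal bilinear demosaic sketch in Python/NumPy. It assumes an RGGB Bayer tiling and fills each missing component with the average of whichever neighbors measured it; `demosaic_bilinear` and the masks are illustrative names, not any camera’s actual algorithm.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """Toy bilinear demosaic of an RGGB Bayer mosaic (H x W float array)."""
    h, w = raw.shape
    # Which component each photosite actually measured (RGGB tiling).
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    rgb = np.zeros((h, w, 3))
    kernel = np.ones((3, 3))  # average over the 3x3 neighborhood
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        measured = np.where(mask, raw, 0.0)
        total = convolve2d(measured, kernel, mode="same")
        count = convolve2d(mask.astype(float), kernel, mode="same")
        # Keep the measured value where one exists; interpolate everywhere else.
        rgb[..., c] = np.where(mask, raw, total / np.maximum(count, 1.0))
    return rgb
```

Two of every three values in the output are invented by this averaging; real demosaicers make smarter, edge-aware guesses, but they are guesses all the same.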
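For JPEG, the loss happens when DCT coefficients are divided by a quantization step and rounded; the decoder can only map each stored integer back to a representative value in its bin. A toy illustration (the coefficients are made up):

```python
import numpy as np

q = 16.0                                # one step from a quantization table
coeffs = np.array([37.4, -9.1, 120.9])  # hypothetical original DCT coefficients
stored = np.round(coeffs / q)           # what the file keeps: 2, -1, 8
decoded = stored * q                    # the decoder's plausible guess: 32, -16, 128
print(decoded - coeffs)                 # detail that is gone for good
```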
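Black field correction can be sketched as a per-pixel offset-and-gain fix. This is an assumed formulation: `dark_frame` (the sensor’s output with no light) and `gain_map` (per-pixel sensitivity measured from a uniformly lit target) are hypothetical calibration inputs, since real calibration data is vendor-specific.

```python
import numpy as np

def black_field_correct(raw, dark_frame, gain_map):
    """Subtract each pixel's dark offset, then rescale by its sensitivity."""
    corrected = (raw.astype(np.float64) - dark_frame) * gain_map
    return np.clip(corrected, 0.0, None)  # correction shouldn't go below zero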
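De-vignetting is the same idea with a radial gain: multiply each pixel by the inverse of the estimated light falloff at its distance from the center. The quadratic falloff and the `strength` knob below are assumptions for illustration; real pipelines use lens-specific tables.

```python
import numpy as np

def devignette(img, strength=0.3):
    """Brighten the image toward its borders with a guessed radial gain."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.hypot(yy - cy, xx - cx) / np.hypot(cy, cx)  # 0 at center, 1 at corners
    gain = 1.0 + strength * r**2                       # assumed falloff model
    return img * gain[..., None] if img.ndim == 3 else img * gain
```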
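And auto white balance is guessing in its purest form: the camera estimates an illuminant it cannot measure. The classic gray-world heuristic below assumes the scene averages out to neutral gray; real cameras use fancier estimators, but they are all estimating.

```python
import numpy as np

def gray_world_awb(rgb):
    """Scale each channel so the channel means match (gray-world assumption)."""
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-9)  # avoid divide-by-zero
    return rgb * gains
```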