It could, but you'd have to start with a sensor that had both a high number of bits per sample and a super-high spatial sampling frequency. If you had such a sensor, there would be no point in decimating; you'd just use the samples it gave you directly. The point of decimating is to sacrifice some spatial resolution for an increase in bits per sample from a low-bit-depth (but high-frequency) sensor. This is done routinely for audio signals (oversampling converters shape the quantization noise toward high frequencies, then filter and decimate), but it would be a lot trickier with a two-dimensional signal. Not impossible, but it would require processing elements between the pixels to diffuse the quantization error properly without losing information.
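
Here's a minimal 1-D sketch of the audio-style trade, not an implementation of any particular sensor: a first-order sigma-delta modulator produces a 1-bit stream, and averaging blocks of those bits recovers higher effective resolution at a lower rate. The oversampling ratio, signal, and box-filter decimator are all illustrative assumptions of mine, not anything specified above.

```python
import numpy as np

osr = 64                                # oversampling ratio (illustrative choice)
frames = 256                            # output samples after decimation
n = frames * osr                        # oversampled 1-bit sample count
t = np.arange(n) / n
x = 0.5 * np.sin(2 * np.pi * 3 * t)     # slowly varying test signal in [-0.5, 0.5]

# First-order sigma-delta modulator: the output is only 1 bit per sample,
# but feeding the quantization error back pushes it toward high frequencies
# where the decimation filter can remove it.
y = np.empty(n)
acc = 0.0
fb = 0.0
for i in range(n):
    acc += x[i] - fb
    y[i] = 1.0 if acc >= 0.0 else -1.0
    fb = y[i]

# Decimate: average each block of `osr` one-bit samples. A box filter is the
# crudest possible decimation filter, but it's enough to show the gain in
# effective bits per sample.
decimated = y.reshape(frames, osr).mean(axis=1)
reference = x.reshape(frames, osr).mean(axis=1)

rms_err = np.sqrt(np.mean((decimated - reference) ** 2))
print(f"{frames} output samples from {n} one-bit samples, RMS error ~ {rms_err:.4f}")
```

In 1-D the error feedback only has to reach the next sample in time; the 2-D version would have to push the residual error to neighboring pixels as well, which is the "processing elements between the pixels" part.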