I can understand their choice of a tested technology, but I don't understand the bandwidth argument. Is the processor so underpowered that it couldn't crop the image when needed?
If you crop 2MP out of a 4MP image, you might as well have started out with a 2MP camera in the first place. Except that if your 4MP CCD is the same size as your 2MP CCD, the light-gathering area per pixel is significantly smaller -- you capture fewer photons, hence less information about what you're pointing the camera at! Raw pixel count is not a good measure of the imaging accuracy of a digital camera. In the case of Curiosity, they may only be 2MP CCDs, but they're the best 2MP CCDs that money can buy, and they're being fed by the best optics NASA could source. It's a far cry from your phone camera ...
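Back-of-the-envelope, with a made-up sensor size (these are not Curiosity's actual specs): the same silicon split into more pixels gives each pixel less area to collect light.

    # Hypothetical sensor dimensions, purely illustrative.
    sensor_area_mm2 = 8.8 * 6.6  # roughly a 2/3"-class sensor

    for pixels in (2e6, 4e6):
        area_per_pixel_um2 = sensor_area_mm2 / pixels * 1e6  # mm^2 -> um^2
        print(f"{pixels/1e6:.0f} MP -> {area_per_pixel_um2:.1f} um^2 per pixel")

    # ~29 um^2 per pixel at 2 MP vs ~14.5 um^2 at 4 MP:
    # roughly half the photons per pixel, all else being equal.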
That's why I stated "when needed". I thought I didn't have to explain it in more detail, but it seems I do. You obviously have an interest in these things and rightfully take issue with pure MP arguments. In this case you read something into my comment that wasn't there. That's exactly what I find frustrating, and one of the reasons I try to avoid commenting on technical posts.
Note that I also said I understood there were other technical reasons; my comment was limited to the question of bandwidth only.
My original question, which you didn't reply to, still stands: from a pure bandwidth point of view, they could crop the image whenever they need a high transfer rate, and still have a higher-resolution camera for when enough bandwidth is available.
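To sketch what I mean, with entirely made-up numbers for frame size, bit depth and link rate (none of this reflects the actual rover downlink):

    def frame_to_send(full_frame_mp, available_kbps, window_s, bits_per_pixel=12):
        """Decide whether a hypothetical full frame fits in the downlink window;
        if not, fall back to a cropped region that does (illustrative only)."""
        budget_bits = available_kbps * 1000 * window_s
        full_bits = full_frame_mp * 1e6 * bits_per_pixel
        if full_bits <= budget_bits:
            return f"send full {full_frame_mp} MP frame"
        crop_mp = budget_bits / (1e6 * bits_per_pixel)
        return f"send cropped region of about {crop_mp:.1f} MP"

    print(frame_to_send(4, available_kbps=128, window_s=600))  # roomy link: full frame
    print(frame_to_send(4, available_kbps=32, window_s=600))   # tight link: crop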
If they chose 2MP for other reasons, that's fair enough.
I don't think you quite understand the difference between cropping and compression. Cropping an image is the equivalent of choosing which part of the image Facebook displays as your profile picture. While cropping could reduce the transmitted image down to '2MP', there's really no purpose in taking photos that are essentially cut in half.
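To make the distinction concrete, a toy sketch in NumPy (nothing rover-specific): cropping throws away everything outside a window, while lossy compression keeps the whole scene at reduced fidelity.

    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.integers(0, 4096, size=(2000, 2000), dtype=np.uint16)  # pretend 4 MP frame

    # Cropping: keep only a sub-window; the rest of the scene is simply gone.
    crop = image[500:1500, 500:1500]  # 1 MP, but only a quarter of the view

    # Lossy compression: keep the full field of view at lower fidelity,
    # here crudely simulated by dropping the low bits of every pixel.
    compressed = (image >> 4).astype(np.uint8)  # whole scene, fewer bits per pixel

    print(image.nbytes, crop.nbytes, compressed.nbytes)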
Scientists have historically been very skeptical about automated region-of-interest cropping, or fancy novelty-detection methods, or even certain kinds of compression. They are always afraid that something of importance will be filtered out. It's a difficult argument to win.
Prioritized downlink is accepted (you still get everything, but there's some automation that finds the most interesting stuff and sends it first).
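Roughly the idea, sketched with an invented "interest score" (the real prioritization is of course far more sophisticated):

    import heapq

    # Hypothetical data products waiting for downlink: (name, interest score).
    products = [("drive_image_031", 0.2), ("spectra_007", 0.9),
                ("hazcam_112", 0.1), ("target_closeup_04", 0.7)]

    # Everything still gets sent; the automation only changes the order,
    # so the most interesting products arrive first.
    queue = [(-score, name) for name, score in products]
    heapq.heapify(queue)
    while queue:
        neg_score, name = heapq.heappop(queue)
        print(f"downlink {name} (interest {-neg_score:.1f})")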
There's even experimental acceptance of planning/machine vision systems that choose targets opportunistically while a rover is moving from point A to point B.
That is, points A and B were chosen by science planners. But while the robot is moving from A to B, it looks at stuff and stops en route to collect more data if it sees something interesting. You can sell this to scientists because they still get the data from points A and B (they're in control) but they also get more data from in-between, that might be interesting, and that they would not get otherwise.
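A cartoon of that behaviour, with placeholder waypoints and a random "is this interesting?" check standing in for the real machine vision:

    import random

    def drive(start, end, steps=5):
        """Toy stand-in for the rover driving between two planner waypoints."""
        for i in range(1, steps + 1):
            t = i / steps
            yield (start[0] + t * (end[0] - start[0]),
                   start[1] + t * (end[1] - start[1]))

    def looks_interesting(position):
        """Placeholder novelty check; the real system does machine vision."""
        return random.random() > 0.8

    def traverse(waypoints):
        """The planner's waypoints are always honoured; anything interesting
        spotted en route just adds an extra stop and an extra observation."""
        extra_observations = []
        for a, b in zip(waypoints, waypoints[1:]):
            for position in drive(a, b):
                if looks_interesting(position):
                    extra_observations.append(position)  # stop and collect bonus data
        return extra_observations

    print(traverse([(0, 0), (10, 0), (10, 10)]))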
This has been used on Opportunity and won the NASA Software of the Year Award last year (http://www.jpl.nasa.gov/news/news.cfm?release=2011-380). It's a harder problem than it sounds, because the robot has to re-plan its activities on the fly ("plan" in the sense of moving cameras, turning the robot, etc.).