>> Party A discovers very high probability evidence that Party B is committing crimes within the property ...
> This isn't accurate: the hashes were purposefully compared to a specific list. They didn't happen to notice it, they looked specifically for it.
1. I don't see how the text on the right side of the colon substantiates the claim on the left side of it... I said "discovers", without any mention of how it's discovered.
2. The specificity of the search cuts in exactly the opposite direction from the one you suggest; specificity makes the search far less invasive -- BUT, at the same time, the "everywhere and always" nature of the search makes it more invasive. The problem is the pervasiveness, not the specificity. See https://news.ycombinator.com/user?id=aiforecastthway
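For what it's worth, the kind of matching being debated can be sketched roughly like this -- a simplified illustration only, assuming a plain cryptographic-hash membership check. (Real pipelines reportedly use vetted databases and proprietary perceptual hashing such as PhotoDNA, which tolerates re-encoding; a cryptographic hash like the one below only matches byte-identical files.)

```python
import hashlib

# Hypothetical known-hash list for illustration; a real system would use a
# curated database, not an inline set. This entry is the SHA-256 of b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def matches_known_list(data: bytes) -> bool:
    """Return True iff this exact file's hash appears on the known list.

    Note the asymmetry being argued about: the check reveals nothing about
    files NOT on the list (high specificity), but it can be run against
    every file, everywhere, all the time (high pervasiveness).
    """
    digest = hashlib.sha256(data).hexdigest()
    return digest in KNOWN_HASHES
```

The point of the sketch is that the invasiveness question turns entirely on what goes into `KNOWN_HASHES` and how broadly the check is applied, not on what the membership test itself learns about any individual file.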
> And of course, what happens when it's a different list?
The fact that the search is targeted, that the search is highly specific, and that the conduct is plainly criminal are all, in fact, highly material. The decision here is not relevant to most of the "worst case scenarios", or even the "bad scenarios", in your head, because the prior assumptions would already have been violated before this moment in the legal evaluation.
But with respect to your actual argument here... it's really a moot point. If the executive branch starts compelling companies to help it discover political enemies on the basis of non-criminal activity, then the court's opinions will have exactly as much force as the army that court proves capable of raising, because such an executive would likely have no respect for the rule of law in any case...
It is reasonable for legislators to draft laws on a certain assumption of good faith, and for courts to interpret law on a certain assumption of good faith, because without that good faith the law is nothing more than a sequence of forceless ink blotches on paper anyway.
I don't think that changes anything. I think it's entirely reasonable for Party A to be actively watching the rented property to see if crimes are being committed, either by the renter (Party B) or by someone else.
The difference I do see, however, is that many places do have laws that restrict this sort of surveillance. If we're talking about an apartment building, a landlord can put cameras in common areas of the building, but cannot put cameras inside individual units. And with the exception of emergencies, many places require that a landlord give tenants some amount of notice before entering their unit.
So if Google is checking user images against known CSAM image hashes, are those user images sitting out in the common areas, or are they in an individual tenant's unit? I think it should be obvious that it's the latter, not the former.
Maybe this is more like a company that rents out storage units. Do storage companies generally have the right to enter their customers' storage units whenever they want, without notice or notification? Many storage companies allow customers to put their own locks on their units, so even if such companies technically have the right to enter whenever they want, in practice they certainly do not exercise it regularly.
But like all analogies, this one is going to have flaws. Even if we can't match it up with a real-world example, maybe there's still no inconsistency or problem here. Google's ToS says they can and will do this sort of scanning, users agree to it, and there's no law saying Google can't do that sort of thing. Google itself has no obligation to preserve users' 4th Amendment rights; they passed along evidence to the police. I do think the police should be required to obtain a warrant before gaining access to the underlying data; the judge agrees on this, but the police get away with it in the original case due to the bullshit "good faith exception".