I have spent a few minutes trying to think of an alternative to this. I understand that users getting tricked into installing malware is a big issue. I think we, as developers, are paying a price for the number of popups we used to put in front of people, which trained them not to read or understand anything on them (see: EULAs). Could you maybe comment on other paths you considered when making this change?
Could there be a way to just have the signing cert for an extension be provided to/known by Google, even for third parties, so that if reports happen, the extensions using that certificate could be remotely disabled? I'm seeing both sides of the argument; it just seems like this issue goes even beyond extensions.
Extension packages are already signed, but we can't track the signatures outside the Web Store. You can prevent key churning by centrally tracking keys, but that generally devolves into the ActiveX Authenticode model. So you've traded a basically free central distribution point for a high monetary cost of entry, because that cost is a big part of how you penalize bad behavior.
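To make the centrally-tracked-keys idea concrete, here's a minimal sketch (names and structure are hypothetical, not Chrome's actual implementation): the browser vendor pins each extension ID to a registered signing-key fingerprint and consults a revocation set before loading, so a churned key fails the registry check and a revoked key acts as a remote kill switch.

```python
import hashlib

def key_fingerprint(public_key_der: bytes) -> str:
    """SHA-256 fingerprint of a signing key, hex-encoded."""
    return hashlib.sha256(public_key_der).hexdigest()

# Hypothetical central registry: extension ID -> registered key fingerprint.
# In the proposal, the vendor would hold this mapping even for extensions
# distributed outside the Web Store.
REGISTERED = {"ext-example": key_fingerprint(b"example-public-key-der")}

# Fingerprints of keys remotely revoked after abuse reports.
REVOKED: set[str] = set()

def may_load(ext_id: str, public_key_der: bytes) -> bool:
    """Allow the extension only if its key matches the central registry
    and has not been revoked. A churned key fails the registry check."""
    fp = key_fingerprint(public_key_der)
    if REGISTERED.get(ext_id) != fp:
        return False          # unknown or churned signing key
    return fp not in REVOKED  # remote kill switch

# A revocation disables the extension everywhere it was installed:
assert may_load("ext-example", b"example-public-key-der")
REVOKED.add(key_fingerprint(b"example-public-key-der"))
assert not may_load("ext-example", b"example-public-key-der")
```

The cost-of-entry problem is exactly the registry itself: someone has to vet who gets a `REGISTERED` entry, and charging for that vetting is how Authenticode-style models deter throwaway identities.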
As to what we've done previously, it's involved SafeBrowsing integration and blocking known malicious extensions. However, that approach has serious limitations: responses are an immediate oracle; preemptive manual intervention isn't practical; users are vulnerable to MitM and/or DoS; latency means you'll almost always have victims before your analysis completes; and you may never see the full malicious extension files.
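To illustrate the "immediate oracle" point: a blocklist API that answers yes/no for an exact hash tells an attacker instantly whether their latest build is detected, so they can tweak and re-query until it isn't. Safe-Browsing-style lookups mitigate (but don't eliminate) this by querying only a short hash prefix and matching locally. This is a simplified sketch, not the actual protocol:

```python
import hashlib

def full_hash(extension_bytes: bytes) -> bytes:
    return hashlib.sha256(extension_bytes).digest()

# Server-side blocklist of known-malicious extension hashes.
BLOCKLIST = {full_hash(b"malicious-extension-v1")}

def naive_lookup(h: bytes) -> bool:
    # Exact-match API: an immediate oracle. The attacker re-queries
    # after every tweak until the answer flips to False.
    return h in BLOCKLIST

def prefix_lookup(prefix: bytes) -> list[bytes]:
    # Safe-Browsing style: the client sends only a short prefix and
    # receives every blocklisted full hash sharing it, then compares
    # locally. Each query leaks less to both sides.
    return [h for h in BLOCKLIST if h.startswith(prefix)]

h = full_hash(b"malicious-extension-v1")
assert naive_lookup(h)
assert h in prefix_lookup(h[:4])  # client confirms the match locally
```

Note the other limitations still apply either way: the check happens after distribution, so latency alone guarantees some victims before the blocklist entry exists.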
It's not surprising to me at all that you guys have spent a lot of time thinking about this. I think people with a legitimate use case for not hosting their plugins on the Web Store will be able to cope with the workaround.