I thought I explained the reasoning on the bug, but I can give it another go. Users are being tricked into installing malicious extensions, and thus getting compromised in significant numbers (Facebook "like" selling is a major target). We have continuously improving measures in the WebStore to catch malicious extensions, and the volume there is very low. However, we don't have a way of catching malicious off-store extensions. So, the best solution is to disallow other sources by default, and provide technically inclined users and enterprise administrators with the ability to add trusted sources.
This is basically the same model that every Linux distro uses, so I'm not sure what the complaints are about. I'm also surprised Firefox hasn't proposed something similar, since I'm told they're seeing the same trend.
I publish a number of Chrome extensions outside of the web store (thousands of users).
I avoid it intentionally. There is no reason Google needs to know everyone running my extensions, nor do I want or need Google sending me users.
The understanding was that Chrome extension development was an open environment. That's why a lot of developers use Chrome and develop for it.
I don't see how this is different from users being able to click a download link and run an executable file. Add an additional security warning if you like; there are already two (as many as for an executable), and they are pretty clear about what is taking place.
What you are proposing to do (or have already done) is a crap solution to a real problem.
The web store is no safer. I searched for 'youtube video downloader' the other day and the top result was malware. It had solid (it turns out, fake) reviews and was registered as having tens of thousands of users. I only found this out after installing it.
I see this as nothing more than a play to control the ecosystem surrounding Chrome. I completely regret investing so much time into developing Chrome extensions. Push issues like this one or two more times and you may lose more than the developer community.
As I've explained to others, this was a considered decision taken to protect our users. I don't claim that the WebStore is immune from malicious extensions, but all our data very clearly showed off-store extensions to be the major vector by a very large margin.
I don't think this marginal increase in friction for off-store installs is a bad compromise. However, if you have a better suggestion, I'd be happy to hear it. Also, I'd ask that you report the WebStore extension you believed to be malicious, in the interest of protecting other users.
In my case the .crx I want users to install is for an internal application and is a small "hosted app" in the terminology of Chrome. It can only access the domain of our internal app and that's the domain that serves the extension. I don't know how this got lumped in with extensions that access all user data. If it's just a wrapper around my own app and that's all I can access, I don't see how I can do anything malicious.
We just use it to manage permissions and in some cases make use of the full screen feature. There has to be a better way than to treat all .crx files as the same. I don't have a problem with the web store other than that the app is completely inappropriate for the web store (it's only useable to people who are using an internal app).
The point of this change was to add controls for managing trusted installation sources. So, you can use the enterprise policy support to push your installation source out to all your users, or install the extension as part of your Chrome deployment.
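To make that concrete, here's a minimal sketch of what pushing a trusted install source looks like, based on Chromium's documented ExtensionInstallSources policy (a list of URL match patterns allowed to install extensions). The domain is a placeholder; in a real deployment you'd point POLICY_DIR at Chrome's managed-policy directory (on Linux, /etc/opt/chrome/policies/managed, which needs root to write).

```shell
# Write a managed-policy JSON file whitelisting an install source.
# POLICY_DIR defaults to a temp dir here so the sketch runs anywhere;
# set it to /etc/opt/chrome/policies/managed for an actual rollout.
POLICY_DIR="${POLICY_DIR:-$(mktemp -d)}"
mkdir -p "$POLICY_DIR"
cat > "$POLICY_DIR/extension_sources.json" <<'EOF'
{
  "ExtensionInstallSources": [
    "https://intranet.example.com/*"
  ]
}
EOF
```

From there, distributing the file through your existing configuration management (global policy, Puppet, etc.) is all that's needed; Chrome picks up managed policies on its own.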
> So, the best solution is to disallow other sources by default
No, it's not. It's the best solution you can come up with at the time, and one that likely presents the least amount of effort. Doesn't mean it's not a piss-poor solution. It also doesn't mean it's not going to cause additional problems.
Now if we want to distribute a plugin, we effectively have to bow to the whim of Google. And when I read stories like this: http://news.ycombinator.com/item?id=4092080, I'm not comforted.
> I'm also surprised Firefox hasn't proposed something similar
Maybe because this isn't the only solution?
Let me put this in simple terms: this is the easy way out. The simple way out. This is the low hanging fruit. It's a solution that is, in effect, the worst possible solution. It's the lazy solution. If you can't admit to that much, you're blind.
I'm sorry you're unhappy with the change. We spent quite a while considering this, and it's the best solution we could come up with for the safety of our users. As has already been mentioned, it's still very easy to install extensions. It's just a few extra steps per install, or some additional configuration by the user or administrator. However, we're always open to better proposals.
As for how the WebStore is handled, the story you linked to seems to show a positive resolution. The extension was improperly flagged, but the developer appealed and his extension was quickly re-listed.
It is absolutely not "still very easy to install extensions" outside of the webstore. If it were then how would this be a solution at all? The reason you're calling it a solution is because you're making it so much harder that it will happen far less frequently.
I don't think anyone would claim it's hard to drag and drop a file into the extension manager window. Yes, it creates enough friction to stop the typical drive-by download attack, but it's not a difficult operation.
The change adds support for configuring off-store installs, in addition to changing the default configuration. So, an enterprise can add a list of trusted install sources and distribute it through global policy, Puppet, etc.
The complaints are about a major change occurring with little warning.
For context, I've spent the last 2 months working on an extension that asks users to install from our site during signup. Making them install through the Web Store will increase confusion (why am I no longer on CoolWebsite.com?) and hurt conversion rates.
That should address any user confusion, while still hosting the extension in the WebStore. It also saves you hosting bandwidth and guarantees that your extension updates over a pinned SSL channel, which protects your users in case your key is ever compromised.
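For reference, a minimal sketch of the inline-install flow using the chrome.webstore.install() API. The item ID below is a placeholder, and the page must also declare the Web Store item in its head, roughly: a link tag with rel="chrome-webstore-item" pointing at https://chrome.google.com/webstore/detail/YOUR_ITEM_ID.

```javascript
// Trigger the Web Store install dialog from your own site, so the
// user never leaves your signup flow. Assumes the page's <head>
// declares the chrome-webstore-item <link> described above.
function installFromSite(onSuccess, onFailure) {
  var ws = typeof window !== 'undefined' &&
           window.chrome && window.chrome.webstore;
  if (ws && typeof ws.install === 'function') {
    // Shows the same permission/install dialog the Web Store shows,
    // without navigating the user away from the current page.
    ws.install(undefined, onSuccess, onFailure);
  } else {
    onFailure('Inline install is not available in this browser');
  }
}
```

Wire installFromSite to your "Install extension" button's click handler; on success the extension is installed and the user is still on your page.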
>Making them install through the Web Store will increase confusion and hurt conversion rates
Will it really? Do you have any metrics or research that backs that claim up?
I would have hoped that users would be more trusting of an installer from a vetted source like the app store than they would be of an unfamiliar file extension from an unknown website.
I don't have the stats yet because I haven't yet made the change. With the inline installation that Justin mentions above (thanks, I hadn't seen that yet), I'd guess that it'd be a wash.
Still, it makes me nervous to be building a startup around a platform that can make such major changes so quickly.
You do know that after installing the downloaded deb it adds the google apt repository to your sources and adds the key so that all future updates are done the apt way? (You can also edit files to prevent this behaviour.)
I believe the parent knows that, and you're missing the point: "the apt way" is what Chrome did before this change -- you could simply click-install an extension from any source and future updates would automatically use its custom update_url (if any).
Now after this change it becomes analogous to "the App-Store-only way unless you are technically inclined enough to google up the workaround". Definitely not "the same model that every Linux distro uses".
The closest thing is Android, but Android has an obvious easily accessible checkbox "Allow installation of non-Market apps". Chrome didn't get such a checkbox in Options (nor a toggle in chrome://flags/).
Technically, there is a new command-line switch, but its current full name was never mentioned even in the related bug tracker issues (http://crbug.com/128748, http://crbug.com/55584). You have to look into the source code to find it: --enable-easy-off-store-extension-install
Indeed. I can simply double click a downloaded .deb file and it'll install. To get the equivalent of this new Chrome policy, it'd have to require me to save the .deb file to a specific location, open Synaptic Package Manager, and click-and-drag the .deb file into the package manager to install it.
The extension manifest has update information in it, so when you install a CRX you're inherently configuring an installation source. That's something you have to take into account here.
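Concretely, a self-hosted extension's manifest.json carries an update_url, so installing the CRX configures where future versions come from. A minimal sketch (names and URL are placeholders; update_url is the standard manifest key, and the XML file it points to is the autoupdate manifest):

```
{
  "name": "Internal App Wrapper",
  "version": "1.0",
  "update_url": "https://intranet.example.com/extension/updates.xml"
}
```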
The official Google Chrome .deb for Debian/Ubuntu downloaded from google.com/chrome sets up a software source as well, so you can't say that's not how .deb files work.
kerrick@psyduck:~$ ls /etc/apt/sources.list.d/
google-chrome.list
kerrick@psyduck:~$ cat /etc/apt/sources.list.d/google-chrome.list
### THIS FILE IS AUTOMATICALLY CONFIGURED ###
# You may comment out this entry, but any other modifications may be lost.
deb http://dl.google.com/linux/chrome/deb/ stable main
I didn't say DEBs can't work that way, but they sure aren't assumed to work that way. Whereas the act of installing a CRX is assumed to configure an update source.
That's the real point here. For the average user, disabling off-store installs by default is much safer, and it will dramatically reduce the number of compromises. For developers, it's a simple matter of passing a command-line switch.
Let's be honest: The average user doesn't expect anything at all to do with CRX files or extension updates. They just expect an extension to install when they click install, and to work from then on.
That's not really true. The average user has no clue what a file format even is, much less what a CRX is. And to be entirely honest, it's not fair to expect them to know which files are safe to download and run. So we made the decision to protect the average user at the cost of an extra command-line flag or a drag-and-drop operation for developers. I think that was a good trade-off, but you're certainly welcome to suggest something better.
The problem is that you're thinking of extensions as files, which most users don't understand. They want an extension; they don't care how it's delivered and they have no idea what an auto-update source is.
I have a button on my website that says "Install extension." I don't tell them it's a link to a .CRX file; that's an implementation detail. If you asked any of my users whether they had downloaded a file (let alone what its extension is), most couldn't tell you. Similarly, the Chrome Web Store has buttons that say "Add to Chrome," not "Download Trusted .CRX File."
I have spent a few minutes trying to think of an alternative to this. I understand that users getting tricked into installing malware is a big issue. I think we, as developers, are paying a bit of a price for all the popups we used to put in front of people, which trained them not to read or understand anything on them (see: EULAs). Could you comment on other paths you considered when making this change?
Could there be a way for the signing cert for an extension to be provided to or known by Google, even for third parties, so that if abuse reports come in, extensions signed with that certificate could be remotely disabled? I'm seeing both sides of the argument; it just seems like this issue goes beyond extensions.
Extension packages are already signed, but we can't track the signatures outside the WebStore. You can prevent key churning by centrally tracking keys, but that generally devolves into the ActiveX Authenticode model. So, you've traded a basically free central distribution point for a high monetary cost to entry because that's a big part of how you penalize bad behavior.
As to what we've done previously, it's involved SafeBrowsing integration and blocking known malicious extensions. However, that approach has serious limitations: responses are an immediate oracle; preemptive manual intervention isn't practical; users are vulnerable to MitM and/or DoS; latency means you'll almost always have victims before your analysis completes; you may never see the full malicious extension files.
It's not surprising to me at all that you guys have spent a lot of time thinking about this. I think people with a legitimate use case for not hosting their plugins on the webstore will be able to cope with the workaround.
And I think the concern is that a closed ecosystem might gradually replace the open one that exists now. I don't have a philosophical problem with protecting users from themselves (or even being protected from myself).
I don't want my extension on the Chrome webstore discoverable by the whole world. Extra steps will lower my conversion rate tremendously given how non-technical my users are.
If the reason you don't want your extension discoverable is that it's an internal extension for an enterprise, then you should be aware that you can add trusted sites via enterprise policy. So this won't be a problem for you.
I think you've parsed that sentence incorrectly. The issue is that some sites are selling "likes" on Facebook -- e.g., you pay them $X and a site of your choice gets some large number of FB likes driven its way. Many of these sites presumably operate via a bunch of Facebook users with malicious Chrome extensions installed.
Mozilla does have an extension "blocklist", but it's used very infrequently, mostly for confirmed malware. It also distinguishes between "soft" blocks, which the user can re-enable, and "hard" blocks, which the user cannot.
The solution is to take the same malware-detection method used in the store and build it into the browser -- unless you guys won't support open-source solutions because you think keeping detection closed leads to more security.
I realize why this seems like a good solution at first glance, but it's actually very impractical. First off, much of the detection you want to do is computationally expensive and just can't be bundled into a standalone application shipped to end users. Consider simulated/sandboxed execution, which is great for identifying malicious code but typically requires a whole supporting environment to work.
Second, even when you can push some subset of detection to the client, the fact is that doing so undermines its own effectiveness. Malicious code detection is an arms race, and once the bad guy knows exactly how your detection works he'll alter his signature accordingly. So, by keeping the detection private you force the bad guy to come to you if he wants to test it. And he needs to keep resubmitting his code until it passes, which is in-and-of-itself a useful collection of data for malicious extension detection.
Finally, malicious code detection is often a rapid-response situation. You're constantly getting more data, and evolving and experimenting with different methods. So, you don't want to be bound to a multi-week release cycle, or risk destabilizing Chrome by forcing more frequent releases.