Not a whole lot of new lessons to be learned from this, but basic reinforcement of old ones:
* It's easy to get users, even high-profile at-risk users, to install arbitrary applications. Since there's little to be gained from litigating this basic fact, we have to work around it. We recommend at-risk users stick to relatively recent iPhones, not because Android phones can't be made to be asymptotically as secure, but simply because it's more difficult (technically and logistically) to set up a deployment process that gets an application installed on an iPhone that can do as much as these backdoored Android apps can.
* The biggest threat facing users on general-purpose computers (Windows or Mac) is email attachments. The most profitable desktop infection vector here seems to have been Word macros. There's no point in litigating whether people should or shouldn't use Word documents; they're going to do that. So we have to work around that. Our recommendation is that users be trained not to view attachments on general-purpose computers by clicking on them. Two options: view attachments on iOS devices, where the viewers are less privileged and less full-featured, or always open them using Google's office tools.
To me, the big lesson of the past few years working with non-technical users targeted by attackers is: general purpose computers simply aren't secure, and can't (for normal users) be made secure. Get people out of computer apps and onto phone or web apps.
> To me, the big lesson of the past few years working with non-technical users targeted by attackers is: general purpose computers simply aren't secure, and can't (for normal users) be made secure.
Can't; and I'd even go so far as to argue, shouldn't be made secure. Because then it's not a general-purpose computer anymore, and IMHO freedom is already being sacrificed far too much in the name of security. The fact that you can install malware on a device you own is not a flaw; it's freedom to control what you own --- and of course, the definition of "malware" is subject to perversion.
> Get people out of computer apps and onto phone or web apps.
I read your comment before the article, thinking it was a summary, and thought the EFF was somehow now advocating for walled gardens. Thankfully not.
I’d say freedom to “control what you own” includes freedom to control what an app can do to the device you own - that is, to sandbox the app. The status quo on desktops is that you have to basically trust every single program you install not to be malicious, because they all have full access to everything that matters. That doesn’t feel like control to me at all.
Sure, it’s important that there be a way to override the sandbox, for certain use cases where you need to hook into existing programs and change their behavior, or do other things that can’t be neatly circumscribed in the notion of an “app”. But that should be rare. If you just want to run something that can be described as an app, then it shouldn’t require gambling with all your data.
And on that front, as it stands, mobile platforms and browsers are both significantly ahead of desktop operating systems.
If you can override the sandbox, or just give arbitrary permissions for some legitimate purposes, then untold numbers of "common users" can be tricked into doing this for malware which masquerades as useful software. They don't know or care what is "gambling with all their data" and what is "just installing the app and enabling it or whatever".
These fake secure messaging apps were probably mostly for surveillance of secure messaging. The only way iOS is better than Android in this specific case is that the iOS App Store has stricter review.
So we're back where we started. Super locked-down walled gardens, or general-purpose computing. I want the latter to continue to be a practical option, for me, my children, and the world.
iOS offers more than just app review when it comes to sandboxing applications.
From tptacek's comment: "[...] it's more difficult (technically and logistically) to set up a deployment process that gets an application installed on an iPhone that can do as much as these backdoored Android apps can."
> If you can override the sandbox, or just give arbitrary permissions for some legitimate purposes, then untold numbers of "common users" can be tricked into doing this for malware which masquerades as useful software.
As the old Gandhi quote goes, "Freedom is not worth having if it does not include the freedom to make mistakes."
> They don't know or care what is "gambling with all their data" and what is "just installing the app and enabling it or whatever".
If your sandbox override requires you to type in "I want to void my warranty, IT support contract, and want to let criminals open credit cards in my name" into a clipboard-disabled chatbox 10 times - I'm willing to bet it's going to at least have a small impact on the behavior of even the average user.
They might still not care - okay, fine, that's their decision as long as it's not on my network or hardware. But there's at least a little marginal value to be had there, maybe.
> So we're back where we started. Super locked-down walled gardens, or general-purpose computing. I want the latter to continue to be a practical option, for me, my children, and the world.
I'm greedy and want both options. I once dreamed this could be done on a single device with:
1) Thorough adoption of sane languages (e.g. not my day job of C++)
2) Proper sandboxing (e.g. the iOS model on steroids.)
2 hasn't happened to my satisfaction even on mobile, progress on 1 is even more glacial despite good options, and the likes of Meltdown, Spectre, Rowhammer, etc. have moved my goalposts to include a third requirement: secure hardware. To me this isn't "back to where we started" - it's reverse progress.
You - quite reasonably, I think - lament walled gardens eroding the ability to do general-purpose computing. There's a reason I don't have an iOS device!
I - reasonably as well, I hope - lament the insecure garbage that gets shipped daily eroding the will to do general-purpose computing. I'm coming to the conclusion I need at least 3 devices - one for play, one heavily restricted and firewalled for e-commerce, and finally an airgapped and epoxied machine running a toy microkernel in a safe language without so much as a display driver in need of auditing for financials - on a self-assembled monster 6502. Even if I were still willing to wear cargo pants, I still wouldn't have the pockets to handle that! I don't even have the desk space to handle that to my satisfaction!
And even all that wouldn't be to make me "unpwnable" - it still wouldn't, not even close - so much as it'd be just for the slightest evidence and reassurance that someone, somewhere, was trying to do the right thing security-wise when yet another company (or government agency) leaks my information, again. I could switch to carrier pigeons and still not solve the problem! It'd just get scanned by some "paperless office", printed out in triplicate for dumpster divers by the same, uploaded to five clouds, copied to a passwordless network share, and left in plain view of at least one botnet smart toaster alongside a sheet of passwords to all of the above (because we're dealing with the weakest link type of stuff here.)
> Get people out of computer apps and onto phone or web apps.
That's a funny line to take in a comment thread about the rising threat of compromised mobile devices. From the article: "Mobile is the future of spying, because phones are full of so much data about a person’s day-to-day life." Also "it doesn’t require a sophisticated or expensive exploit... Instead, all Dark Caracal needed was application permissions that users themselves granted when they downloaded the apps."
So actually, by pushing people to mobile, you're pushing them exactly where bad actors want them to go.
Yes, it's fairly difficult to pwn an iPhone, but we know that state actors can actually do it when they really want to (we know from Snowden that they stockpile exploits), and there is nothing you can do to actually increase your defenses - you cannot install firewalls, antiviruses or any other threat-detection software. Android (as demonstrated here) is trivial to compromise. On top of that, mobile apps and OSes simply cannot be airgapped. They rely on the cloud, which is another humongous attack surface.
A well-configured opensource-running personal computer where the user does not have the root password is as secure as an iPhone, if not more.
What does the root password have to do with anything? Linux-on-the-desktop-types have been dropping this line for years. What does a user want to protect on their computer? Ask a Unix sysadmin and you'll get answers like "the integrity of /sbin, /lib, and /etc". Ask a human and you'll get an answer like "my email and personal information". The humans are right: that's what the targeted attackers are after (as you'll see if you read this report, where the programs that get installed don't even try to privilege escalate).
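To see the point concretely, here's a quick shell sketch (the directory names are illustrative; which ones exist varies by OS): everything a human actually wants protected is readable by the user's own unprivileged processes, so an implant never needs root.

```shell
# Any process running as the logged-in user -- including a trojaned app --
# can read the data humans actually care about, without touching root.
for d in "$HOME/.ssh" "$HOME/Documents" "$HOME/.config"; do
    if [ -r "$d" ]; then
        echo "readable without root: $d"
    fi
done
```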
The root password has to do with installing apps. These apps go through a process designed to allow non-root users to install privileged apps - the privilege escalation is there, it’s simply allowed by design. And that’s Wrong. It’s almost as wrong as the standard Windows mode of running as a local administrator.
I’m not sure what the solution is, but it’s fallacious to say mobile security is so much better than desktops, because it’s not. Desktops can be secured by an “adult” sysadmin, in the mobile world this role is left entirely to Apple and Google - and as it has been proven over and over again, they fail at it, probably because they are too big. A local admin has to review a handful of apps, Google and Apple have to do it with bazillions apps and they will inevitably slip.
You don't need root privileges to install an application on a general-purpose operating system (or to install plugins and extensions to applications). Root credentials are a red herring --- except, ironically, on iOS.
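To make that concrete, a minimal sketch (the path and script name are hypothetical): on a typical desktop OS, "installing" an application needs nothing more than a user-writable directory.

```shell
# "Installing" an app with zero privileges: drop an executable anywhere
# the user can write, mark it executable, and run it. No root, no sudo.
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/demo-app" <<'EOF'
#!/bin/sh
echo "installed and running with ordinary user privileges"
EOF
chmod +x "$HOME/.local/bin/demo-app"
"$HOME/.local/bin/demo-app"
```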
Desktop OSes can be locked down pretty extensively, and that’s where the root password (a concept that includes sudo, policies, UAC etc) comes into play. You can have all sorts of locks that won’t open unless the admin says so. That is the point.
On mobile you get only what Apple and Google give you, and there is very little you can do about it.
I made this point [1] about the latest side-channel attacks: mixing various kinds of activities -- business and pleasure, familiar and unknown, critical and discretionary -- on the same computer is bad hygiene, at least with the sort of computers that are widespread today, and we ought to lift that conversation into the mainstream to point out the deficiencies of contemporary technology in the light of users' expectations and demands.
A robust defense against the likes of Meltdown and the likes of Word macros is to perform the work in an environment from which the context of execution cannot escape, and cannot cause the infiltration of data into itself by exfiltrating it from a sibling or a neighbor. Historically the most successful way of achieving this is to use separate hardware, but that's a burden most users won't accept.
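A very crude sketch of that idea in shell (this is not real isolation; a proper version needs a VM or container): hand the untrusted job a throwaway, empty HOME so it at least starts with no view of the user's real files.

```shell
# Sketch only: give an untrusted job a scratch HOME so it starts with no
# view of the user's real files. A real sandbox also has to cut off the
# network, the filesystem outside the scratch area, and IPC.
SCRATCH="$(mktemp -d)"
HOME="$SCRATCH" sh -c 'ls -A "$HOME"'   # the job sees an empty home directory
rm -rf "$SCRATCH"
```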
I think this is absolutely correct, which is why I believe the days when we can count on macOS having the ability to execute non-sandboxed applications are numbered. I think in hindsight apps that have already made the transition to the App Store (or were designed for it) will be at a tremendous advantage over the ones that either aren't adopting that model or have left it.
Google has an even greater advantage than Apple here because they became fully invested in sandboxing much earlier (the browser).
Chromium's sandbox's main innovation was shoehorning Windows' permission model into a sandbox without having to modify the operating system. It's a masterpiece, don't get me wrong, but it doesn't transfer well to establishing good sandboxes on other platforms. Chromium's macOS sandbox just uses the one built into the OS and the Linux one doesn't leverage the effort put into the Windows variant either (though there is no need to anyway).
That said, the fact that Apple's sandbox relies on inserting a full Lisp implementation into the kernel has always rubbed me the wrong way. I'm not sure if anybody is very good at this yet.
There's definitely room for both; for example, a developer mode that requires physical intervention, tripping a fuse, or complicated instructions that prevent 95-99% of users from ever looking into it.
I glanced at the report, but I'm not seeing much on how these devices get compromised.
Let's say I'm a targeted individual with an iPhone or an Android phone. I've already got Signal installed through the app store, which should be vetting the apps I download. How does my legitimate Signal app get replaced with an infected one?
iPhone users aren't targeted. It's difficult to install trojaned Signal on an iPhone, since you can't sideload them from random websites.
You're infected by being phished to a staging server that looks like a legitimate launcher/installer site for secure messengers; that site delivers Android Java applications for WhatsApp, Signal, &c, but those applications are backdoored to ask for all possible permissions and to quietly set up a C&C channel back to the operators.
It’s not really that difficult. Just create a clone app, pass the initial store review, then change the icon/description to be close to the official app in an update, and opt out of AppStore search, so that it can only be accessed through a link. It’s just that the payoff is much less impressive than on Android.. although, I guess, you can always try to sneak in a VPN entitlement, and hope the reviewer is extra incompetent that day..
I hate it when Apple is right. And I hate that their golden numerical prison has genuine advantages over the alternatives.
Assuming users cannot be fixed, what can? Users sideload apps from the web, right? Can't there be an easy way to distinguish trustworthy communications from potentially dangerous ones? Would such a way be systematically worked around by ever cleverer phishers?
Making side loading opt-in and warning users of the security risks seems like a better compromise to me than requiring that every app on my phone is vetted by a single gatekeeper.
"Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety." and all that.
> requiring that every app on my phone is vetted by a single gatekeeper.
What are you referring to here?
I just installed an SNES emulator on my iPhone without jailbreaking and without paying Apple for a developer account. I don't think Apple knows anything about the app.
The EFF report specifically said they don't ask for more permissions. They just chose apps with good permissions already. Most of the scary stuff in the report is from actual Windows desktop trojans. The mobile stuff is just plain repack.
The largest infection vector for Signal is people who don’t use the Play Store. (Well, it used to be.)
For a long time, Moxie refused to offer it on any other store, or official APK downloads, so users then just googled for it, and downloaded the first result.
Whenever you make it hard to do something securely, users will find a way around that. And that will usually break all your security precautions.
> It's difficult to install trojaned Signal on an iPhone, since you can't sideload them from random websites.
FWIW, you definitely can do this (you go to a webpage and it asks you to install a product) with an "enterprise" certificate, but the idea (true or not) is that Apple then knows who the developer is; so if someone notices and reports them, Apple is supposed to be able to do something about it (again, that's the theory).
Additionally, at least on the Play Store, you can have ads showing before search results.
Right now, if I search for Signal, WhatsApp, Telegram or Wire, the first result is the official app.
If I search for Messenger, an ad for Facebook comes at first and the second entry is something called "The Messenger" from a company I never heard of. There's where user confusion starts enabling these attacks.
I remember this being worse in the recent past though.
Thanks. I'm more scared than I was. I think I was naive enough to believe the automated store checks picked this stuff up.
I suspect the T&C were written by competent lawyers, but this feels like a situation where, if the barriers to entry in the store aren't high enough, then Google as the storekeeper has the liability, since they determine trustedness as the 'editor' of what is or is not allowed into the store.
When it comes to content, the government seems to take this view: if you publish the kind of material the government forbids in law, you either have a common carrier defence that you publish EVERYTHING, or you have the role of selecting what is published and acquire liability because of that moment of choice.
Automatic choice doesn't absolve you of the onus of checking.