Is there any way you can substantiate any of this? I wouldn't be shocked, but it seems borderline implausible that we'd be seeing all this interest in various ways to hack into iPhones physically if you could just dial a number. And I think it goes without saying that, whether or not Apple provides official backdoors, it has a strong interest in ensuring there aren't any unofficial ones, for many other reasons (preserving DRM, for one, if you need them to have a selfish motivation).
I'm fascinated by this idea of a deep web only accessible by the cognoscenti. Presumably if a link slips out then the deep web becomes a lot shallower?
Well, actually the relatively hard part is hosting a crawler of decent size. If you crawl in violation of robots.txt, it's pretty straightforward for the site operator to use iptables to ban you. Of course, you can then spend money hiring a botnet to mask your traffic footprint, except that on that same darknet there might be people who are friends of the botnet's owner.
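For the robots.txt side of that, here is a minimal sketch of the check a crawler is expected to do before fetching a path. Everything here is illustrative: the host and path are placeholders, and the parsing only handles the wildcard user-agent group with plain prefix rules; a real crawler would use a full parser and respect crawl delays too.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    public class RobotsCheck {
        // Returns true if the site's robots.txt disallows `path` for all crawlers ("*").
        static boolean isDisallowed(String host, String path) throws Exception {
            URL robots = new URL("https://" + host + "/robots.txt");
            try (BufferedReader in = new BufferedReader(new InputStreamReader(robots.openStream()))) {
                boolean inWildcardGroup = false;
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.split("#", 2)[0].trim();  // strip comments
                    String lower = line.toLowerCase();
                    if (lower.startsWith("user-agent:")) {
                        inWildcardGroup = line.substring(11).trim().equals("*");
                    } else if (inWildcardGroup && lower.startsWith("disallow:")) {
                        String rule = line.substring(9).trim();
                        if (!rule.isEmpty() && path.startsWith(rule)) return true;
                    }
                }
            }
            return false;
        }

        public static void main(String[] args) throws Exception {
            // example.com is just a placeholder target.
            System.out.println(isDisallowed("example.com", "/private/"));
        }
    }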
Does this not rather raise the question: is there a deep web, and how big (or small) is it? I can easily understand the desire of a coherent group of people to put up VPNs etc. to keep their world separate from others -- but that implies you join based on some other criteria, which sounds not very deep web but pretty secret-VPN-we-are-not-telling-you-about-unless-we-cross-your-AS-number-when-something-is-obvious.
It just has that "secret society" feel to it, and such societies tended just to reflect the informal power structures of the wider world anyway.
No, the "deep web" is real in the sense that there are billions of network addresses that contain content or services which are not accessible through the 'standard' discovery services (Google). In many ways things like Usenet are still part of it as there are netnews groups, and they get used, but there isn't a lot of indexing going on. Further there are at least two 'separated' NNTP type networks that are invitation only.
So it is a "collection" of secret societies, each with their own quirks. As a collection it constitutes a 'web', and perhaps the only commonality is the desire not to be part of the "public" web.
Can confirm this. I interviewed for a UK-based competitor that was scared NSO were better. The competitor's supposed capabilities were scary enough for me to bin my phone contract at the time because they had my contact details. The agent was less than honest about the job description as well. Arseholes all around.
Stipulate that somebody has an exploit for libjpeg, and that's probably enough to own a phone by texting them. That said, with a libjpeg exploit, there's a lot more fun one can have.
It's possible to do this in a staged way -- basically, give me 100k phone numbers and I'll do automated attacks and catch 25-50k of them (old unpatched OSes, for which I'd have 0-days ready if I had a $5-10mm budget; phishing; etc.).
Then use the early victims to reach further -- hopefully they're admin assistants, HR people, etc. Targeted attacks on the rest.
Black bag jobs on the remainder, using legal or extralegal means, based on the value of the target. It's not worth black-bagging someone you only care about as a path to the big boss if the big boss is otherwise exploitable.
The key is you don't need to have a single exploit which works on 100% of your targets; you can do multiple things.
"We're going to need verifiable sources for claims like that."
This entire parent+thread argument back and forth is completely absurd.
It doesn't matter whether he has sources. It doesn't matter whether that firm does or does not exist. It doesn't matter what you think of their tech or his explanation or who is who or what is what.
Your phone has two[1] completely independent, full-featured computers inside it, totally distinct from the computer you actually use as your phone. They are completely out of your control and, depending on the model, may have DMA access to your device.
Whisper Systems does not solve this. SecurePhoneBlahBlah does not solve this. Moxie Marlinspike does not solve this. If you have a smartphone, you are owned at a deeper level than you've ever been owned before, and there is nothing you can do about it other than removing your SIM card. Game over.
[1] The baseband processor and the SIM chip itself.
Great point -- that is 101 of any serious security equipment validation. It's not that this software package/app or that card and so on gets certified; the whole package, from the ground up (hardware components down to analog bits and EM emissions, up to the top-level application), has to be certified as secure.
I can't buy some mathematically proven secure software, install it on a Chinese tablet and claim it is secure and expect it to get approved.
This is a funny market as some domestic analog components are hard to find today. Micron, I think, makes some but heck most are sourced from China.
This makes 'secure' hardware ridiculously expensive. As in $50k+ for switches and routers and there is a whole market specializing in it.
Now, one can look at it another way -- some security is better than no security. I can see the argument on both sides. At least if NSA can record my phone calls maybe the local cops can't and so on...
Use separate devices: one with SIM/baseband, one without (wifi only).
Only encrypted traffic goes through the mobile device, e.g. a cheap Firefox OS phone. Decryption takes place on a wifi-only "media player" device in the form factor of a phone (rough sketch below).
This is still exposed to DMA attacks from the wifi device, but it's a smaller attack surface. The next level of protection is a hardware IOMMU (Cortex-A15 or x86 VT-d), plus a Type-1 hypervisor to isolate the wifi device.
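Very roughly, the split looks like this. The sketch assumes the two wifi-only endpoints already share a symmetric key (how it got there is out of scope), and the device roles and message are placeholders; the point is only that the SIM/baseband phone never handles plaintext.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;
    import java.nio.charset.StandardCharsets;
    import java.security.SecureRandom;

    public class RelaySketch {
        public static void main(String[] args) throws Exception {
            // Assumption: both wifi-only endpoints already hold this key.
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            SecretKey key = kg.generateKey();

            byte[] iv = new byte[12];
            new SecureRandom().nextBytes(iv);

            // Wifi-only device A encrypts before anything touches the phone.
            Cipher enc = Cipher.getInstance("AES/GCM/NoPadding");
            enc.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = enc.doFinal("call setup data".getBytes(StandardCharsets.UTF_8));

            // The SIM/baseband phone only ever sees and forwards iv + ciphertext.

            // Wifi-only device B decrypts on the far side.
            Cipher dec = Cipher.getInstance("AES/GCM/NoPadding");
            dec.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
        }
    }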
Keep in mind that even without a SIM, the GSM radio is still active[1]. From my GSM-layman perspective, it sounds safer than being in a "trusted" pairing with the network, yet since it's all closed source, you have to wonder if there are magic packets that can own your device just as badly as if you have a SIM in.
>> Whisper systems does not solve this. SecurePhoneBlahBlah does not solve this.
1. The SIM chip generally is not a full-featured computer, and I'm unsure that it would have DMA access. But yes, the baseband processor is indeed an issue.
2. Products like this prevent the kind of passive data-slurping that has been popular so far - i.e. install a box at the telco and record everything. That's a good start.
So yes, it does matter and it's a good start, and it pushes up costs for pervasive surveillance.
The SIM card is a full-featured computer. It has memory and a CPU, and your telco operator can upload Java applets to it which can interact with the baseband and the application processors (see the sketch at the end of this comment).
And that's the point ... right now the Stingrays and such simply act as IMSI catchers, etc., but if they can impersonate the carrier they can upload arbitrary Java applets to the SIM card which can undermine the call-encryption app you are using. It's an obvious next step which you aren't protected against.[1]
I don't know if any SIM cards get DMA access the way some baseband processors (not all) do ...
[1] You could get one of those little sim wrapper foils and enable encryption-only for your SIM (which it almost certainly does not have now) which I think would defeat a lot of the carrier-impersonation attacks ...
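To make the "full-featured computer" point concrete: SIM cards run Java Card, and this is roughly what the skeleton of an applet an operator could install over the air looks like. The class name and the (empty) instruction handling are illustrative only; a real SIM Toolkit applet would additionally use the STK/UICC APIs to register for events like SMS arrival or call setup.

    import javacard.framework.APDU;
    import javacard.framework.Applet;
    import javacard.framework.ISO7816;
    import javacard.framework.ISOException;

    // Bare Java Card applet skeleton; the card OS routes APDUs addressed to
    // this applet's AID into process().
    public class DemoApplet extends Applet {

        public static void install(byte[] bArray, short bOffset, byte bLength) {
            new DemoApplet().register();
        }

        public void process(APDU apdu) {
            if (selectingApplet()) {
                return; // the SELECT command itself needs no further handling
            }
            byte[] buffer = apdu.getBuffer();
            switch (buffer[ISO7816.OFFSET_INS]) {
                default:
                    ISOException.throwIt(ISO7816.SW_INS_NOT_SUPPORTED);
            }
        }
    }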
This is an important point. We waste a lot of breath accusing people of having deliberately planted backdoors, and moving to alternatives that we think are too trustworthy to have backdoors in them.
Whether or not the programmers behave ethically, they're still going to make mistakes and write vulnerable code like everyone else, and you'd better believe the security services (and their contractors) are looking for them.
To be fair, the stuff you are talking about is targeted malware. The odds of being actively targeted rather than passively surveilled differ by orders of magnitude.
Everyone is being passively watched at some level, even if it is just for billing purposes.
Signal makes it much harder to tap your phone and makes mass surveillance extremely difficult, both of which are still important. But you're right that people need to be informed of the risks they still face.
It seems realistic to me. Just send a phishing SMS ("Your bill of $103.54 is due TODAY: http://payments-comcast.net/83954583"), hope the user clicks it, have the webpage exploit one of the numerous iOS Safari vulnerabilities, and you are done. There are tons of vulnerabilities in smartphone browsers: iOS 7.1.2 alone fixed 28 unique vulnerabilities in WebKit (http://support.apple.com/kb/HT6297), and 7.1.2 was released merely 2 months after 7.1.1, so at least 3 WebKit vulnerabilities were being discovered and fixed every week.