This is at best imprecise. The HN privacy advocates have taken to calling all vulnerabilities backdoors on the (not insane) belief that manufacturers are out to get them, but there is a distinction.
Backdoors are a special case of vulnerabilities: ones inserted intentionally by an attacker into ostensibly secure designs to allow the attacker back in once the device has left his control. For example, if Apple configured iPhones to accept two PINs, one set by the user and the other set by Apple at manufacturing time, we could say the iPhone is backdoored. If some engineer went rogue and marked his own code signing key as trusted, that's a backdoor.
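A minimal sketch of what that first kind of backdoor might look like in code (purely illustrative; the constant and function names are made up):

    # Purely illustrative: a PIN check that quietly accepts a second,
    # factory-set PIN alongside the one the owner chose.
    FACTORY_PIN = "274163"  # hypothetical value burned in at manufacturing time

    def verify_pin(entered: str, user_pin: str) -> bool:
        if entered == user_pin:
            return True        # the owner's PIN works, as expected
        if entered == FACTORY_PIN:
            return True        # ...and so does the vendor's, silently
        return False

The design looks normal from the outside; the second accepted PIN is the undisclosed way back in.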
When you say something is a backdoor, you assign blame for its actively malicious and undisclosed insertion. Either the organization is evil or someone subverted the organization, and this person must be caught and punished.
Most vulnerabilities happen because their creators didn't know better, or judged the mitigation not worth the cost. Apple is somewhat extraordinary in including "itself, under legal coercion" in its threat model at all.
You might disguise a backdoor as an ordinary accidental vulnerability, and that's a reasonable accusation to make when someone who should have known better, and whose peers were doing better, chose an insecure design anyway; e.g., Juniper switching their design to use known-broken cryptography.
The signs here certainly point to a vulnerability that ought to be mitigated (and possibly has been, under the Secure Enclave), but not a deliberate flaw in iOS's design. Vulnerability, not backdoor.
Apple intentionally created a platform where they hold almost absolute control over devices.
They can install new software, inject code into auto-updated applications, MITM SSL by hooking local device APIs, MITM iMessage by using their CA to sign new certificates. Their control over end-user devices is astonishingly complete.
They can do all of this with no external validation (except when jailbreaks are found), as the platform itself prevents third parties from decrypting Apple's OS updates, viewing the operation of the device, introspecting Apple's code, or introspecting the encrypted application code shipped on the device.
They've intentionally done this, and justified this massive set of backdoors under the ridiculous assumption that they can defend device owners against every possible compromise of Apple's trust position, both from within (such as a change of management) and from without (governments, espionage, and outright compromise), not just now but for many decades to come, all without any checks or balances.
Suppose the manufacturer exerted no control over the device. No code signing, no sandboxing, no security hardware, etc. Then anyone, not just the manufacturer, would be able to replace the PIN verification routine.
If the manufacturer's ability to sign code is a backdoor, then so is absence of code signing.
You're assigning them moral blame for failing to perfectly secure the iPhone only because they tried to secure it at all. The device is subject to strictly fewer attackers; in my book, that's a good thing.
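To make the trade-off concrete, here is a rough sketch of an update-install gate under code signing. The key handling and names are invented for illustration (on a real device only the public key ships, baked into ROM, and the actual iOS boot chain is far more involved):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Stand-in for the vendor's signing key; in reality the private half never
    # leaves the vendor, and only the public half ships on the device.
    vendor_key = Ed25519PrivateKey.generate()
    vendor_pubkey = vendor_key.public_key()

    def install_update(image: bytes, signature: bytes) -> bool:
        """Accept new code only if it carries a valid vendor signature."""
        try:
            vendor_pubkey.verify(signature, image)
        except InvalidSignature:
            return False              # keeps out everyone without the key
        # (flashing the image would happen here)
        return True

    new_code = b"replacement PIN-verification routine"
    assert install_update(new_code, vendor_key.sign(new_code))   # vendor: allowed
    assert not install_update(new_code, b"\x00" * 64)            # anyone else: rejected

Remove the check and anyone with the phone in hand can push new code; keep it and only the key holder can. That's the "strictly fewer attackers" point.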
> If the manufacturer's ability to sign code is a backdoor, then so is absence of code signing.
In that universe, you can run any encryption software of your choosing, and verify the software written by others.
Nobody has privileges over the device that exceed those of its owner.
> You're assigning them moral blame for failing to perfectly secure the iPhone only because they tried to secure it at all.
Legally and ethically, it doesn't matter.
Legally, cryptography is protected expression, and you can use cryptography to encrypt whatever you like -- including information that would otherwise be subject to a lawful search under the fourth amendment.
However, the fourth amendment doesn't have a crypto "participation award" clause; if you left a backdoor in place, the government can compel you to use it.
Ethically, if they chose to leave themselves this backdoor (and all the others), the system is weak. If it's not the FBI today, it's the NSA with an NSL tomorrow, or foreign espionage, or a disgruntled employee, or new corporate management.
Indeed, I can run the encryption software of my choosing, but so can the FBI: they can load software of their choosing to bypass my rate limiting, which leaves the device exactly as weak as the iPhone.
With physical access, I can replace any security software you have in place; the hardware needs to enforce its own integrity.
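A toy illustration of that point, under the assumption that the retry limit lives only in software the attacker can modify (the class and numbers below are invented):

    from itertools import product

    class SoftwarePinGate:
        """Retry limiting enforced only in software: a counter the OS keeps."""
        MAX_ATTEMPTS = 10

        def __init__(self, pin: str):
            self._pin = pin
            self.attempts = 0

        def try_pin(self, guess: str) -> bool:
            if self.attempts >= self.MAX_ATTEMPTS:
                raise RuntimeError("device wiped")
            self.attempts += 1
            return guess == self._pin

    gate = SoftwarePinGate("4821")

    # An attacker who controls the software the counter lives in can simply
    # reset it between guesses; a 4-digit PIN falls in at most 10,000 tries.
    for digits in product("0123456789", repeat=4):
        gate.attempts = 0            # the "patch": the limit never triggers
        if gate.try_pin("".join(digits)):
            print("recovered PIN:", "".join(digits))
            break

Only a counter kept and enforced by tamper-resistant hardware survives an attacker who can swap out the software around it.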
iMessage is definitely "backdoor-friendly" due to how the CA server works. In past court cases (e.g. [1]), Apple has said they do not have the capability to wiretap iMessage and they have refused to create it, and the government has backed off. Maybe that's in part because in those cases there were active iCloud backups providing messages near enough to real time that the government wasn't in a good position to force Apple to hand over access to the backdoor. Or maybe because the targets of those particular investigations weren't unsympathetic enough. This case is the one the government has been "waiting for" to push the envelope and see if they can force Apple to provide a backdoor against their will.
Your reply does nothing to justify why "backdoor" and "vulnerability" should be considered synonyms. What you say is true of "vulnerability", but I've never heard someone call a buffer overflow vulnerability a "backdoor".
A window is a vulnerability into my house, but it is not designed for passage in the same way that a backdoor is. There is a subtle difference between a backdoor and a vulnerability that is worth respecting, in my opinion.