Hacker News
Answers to your questions about Apple and security (apple.com)
545 points by mrexroad on Feb 22, 2016 | 365 comments


This FAQ has more than 1000 words, but these are the words that matter:

  The only way to guarantee that such a powerful tool isn’t abused
  and doesn’t fall into the wrong hands is to never create it.
Can a court force a company or an individual to create something that does not exist?

China required Google to actively censor search results about sensitive topics, and Google quit China. (They may now be heading back [1].)

Bing stayed in China and silently replaced their organic results with government-approved propaganda [2].

The best way to prevent governments from oppressing their citizens is to refuse to create tools that enable oppression.

[1] http://www.theatlantic.com/technology/archive/2016/01/why-go...

[2] http://www.theguardian.com/technology/2014/feb/11/bing-censo...


> This FAQ has more than 1000 words, but these are the words that matter:

> > The only way to guarantee that such a powerful tool isn’t abused and doesn’t fall into the wrong hands is to never create it.

And of course, that's also wrong. The only way to guarantee that such a powerful tool isn't abused and doesn't fall into the wrong hands is to make it impossible for such a tool to exist, not to refuse to create it.

Right now, today, Apple has the ability to create such a tool. Some finite number of human beings at Apple have the ability to create such a tool on their own initiative, e.g. were they disgruntled.

It should be impossible for Apple, or disgruntled Apple employees, or any nation state, to create such a tool.

Otherwise, it will eventually be created, because if something is possible, then it is eventually probable.


Naturally "can't create" is better than "won't create", but this case is about the precedent and not this particular iPhone.

If a court can force a company to create new technology that didn't exist before, what other tools might they be forced to build?

1) Stream keystrokes, audio, video, location to law enforcement prior to encryption

2) Replace actual search results with government-approved results

3) Force Apple to create target lists of suspects based on their location, apps installed, sites visited, keywords typed

Bad things happen when the government can force private companies and individuals to create new tools for law enforcement.

There is no technology that prevents Apple from subverting Apple's security. If Apple loses, the new precedent could open Pandora's box.


"There is no technology that prevents Apple from subverting Apple's security."

That's just in this universe. If users truly owned their devices, with full knowledge of the hardware and control of the software + data, we would never have this discussion.

In this world, you have to jump through flaming hoops just to prevent Apple from automatically swapping out your OS!


For what values of "full knowledge"? Say you bootstrapped your compiler and verified every line of source code you've downloaded. But then, if you're up against nation-state actors, they can probably force your source code download to be compromised for you and only you - so those "many eyes" are seeing different code. You are, after all, checking that other people have the same hashes for the download that you use...

Then we're getting into compromised hardware: you verified that the chips actually are the same as the circuit diagrams, after all.

There is only so much tinfoil you can deploy. If an advanced nation state truly has you in its sights there isn't much you can do about it, and to be honest that's the way it should be. We sacrifice individual liberties for the advantages of being part of an organised society, and the limitation of coercive power to the state is a key part of that.

A well regulated state in which the right people get targeted, with an acceptably low false positive rate that scales based on the degree of privacy violation of the targeting, seems to be a perfectly sensible state of affairs. Apple's argument that they are being asked to do something that isn't justified fits just fine into that.


Just because you can never get full coverage doesn't mean it has to be easy. For example, here are some things that could be implemented to make firmware backdooring harder:

* Certificate Transparency-style public ledger for all firmwares that Apple has signed. Refuse an update if the firmware version has not been published to the ledger. This would make these backdooring attempts public, since we can assume people would watch the ledger and ask questions in case of an unknown version being published.

* For open source firmwares, reproducible builds are a great tool to make sure the distributed binaries actually match the publicly reviewable source code. Third parties could recompile the firmware and attest (through signing) that they get the same result.

I don't know of any publicly available implementation of either of these two systems (hint: Bazel does reproducible builds, and you can guess what is driving this requirement). Debian and Mozilla are experimenting with reproducible builds, but I don't know whether that work has reached any useful conclusion yet.
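
To make the first idea concrete, here's a minimal sketch of the client-side check, assuming the set of published digests has already been fetched (with its inclusion proofs verified) from such a ledger - which, to be clear, does not exist for iOS firmware today:

  import hashlib

  def should_install(firmware_image: bytes, published_digests: set) -> bool:
      # `published_digests` would come from a Certificate Transparency-style
      # append-only ledger; this is purely illustrative, not a real API.
      digest = hashlib.sha256(firmware_image).hexdigest()
      # An unlogged image is either a publishing mistake or a targeted build;
      # either way, installing it silently defeats the point of the ledger.
      return digest in published_digests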


> Certificate Transparency-style public ledger for all firmwares that Apple has signed. Refuse an update if the firmware version has not been published to the ledger. This would make these backdooring attempts public, since we can assume people would watch the ledger and ask questions in case of an unknown version being published.

That seems like a fantastic idea, but without open source firmware I don't see how it could be enforced.


E.g. Bitcoin has reproducible builds in the wild.

https://github.com/bitcoin/gitian.sigs


Reproducible builds, and a release process that incorporates digital signatures from multiple members of the Bitcoin development community.


That stuff (checking hashes, etc) is currently mostly theatre because, as you say, a nation-state can easily serve you different hashes to check.

But it's pretty easy to imagine a feature where, similar to cert-pinning, many clients report the hashes that they see on the website and if you see something different from what everyone else does, you know something is wrong. It wouldn't need to be a "techie" feature - it would simply show a package as broken and refuse to install if it wasn't verifying, etc.

And the goal isn't some impossible 100% security, the goal is enough security to end mass interception and make the government rely on warrants and targeted tactics again.
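
A rough sketch of that reporting/consensus idea, assuming a hypothetical pool of digests collected from other clients (the telemetry channel itself is hand-waved here):

  from collections import Counter

  def matches_consensus(my_digest, reported_digests, quorum=0.9):
      # `reported_digests` is hypothetical crowd telemetry; a targeted attack
      # shows up as my copy disagreeing with what everyone else is seeing.
      if not reported_digests:
          return False  # no corroboration: treat the package as unverified
      most_common, count = Counter(reported_digests).most_common(1)[0]
      return my_digest == most_common and count / len(reported_digests) >= quorum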


> That stuff (checking hashes, etc) is currently mostly theatre because, as you say, a nation-state can easily serve you different hashes to check.

This is why as well as serving those hashes over HTTPS, you sign the hashes with a key, the public part of which is distributed/retrievable/verifiable via some other means.
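
A minimal sketch of that layered check, using the cryptography package, assuming the public key was pinned via some out-of-band channel and that the signature covers the published digest (both assumptions, not anyone's actual release process):

  import hashlib

  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

  def verify_download(data: bytes, expected_sha256: str,
                      signature: bytes, pinned_key: Ed25519PublicKey) -> bool:
      # The hash alone proves nothing if the attacker controls the download
      # page; the signature ties it to a key obtained via some other means.
      if hashlib.sha256(data).hexdigest() != expected_sha256:
          return False
      try:
          pinned_key.verify(signature, expected_sha256.encode())
          return True
      except InvalidSignature:
          return False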


Right. But if I were attacking you and was in your site and knew that you signed your hashes I'd be targeting your build script to save your key and passphrase, or build a backdoored version at the same time. Something like Certificate Transparency for code-signing would be really solid though; tampering would stand out. https://www.certificate-transparency.org/log-proofs-work

I think there's value in the purely user-based solution though. If two people go to a site at the same time and aren't served the same binary they should be curious why, even if the company securely and auditably signed both images.

And, it helps in all the cases when companies don't implement great release-security.


tl;dr: "it's hard to verify code. therefore, we shouldn't even try!"


Not hard. Impossible. Have you seen obfuscated C? Is it possible for enough people to read it and understand it that it would make a difference? What about if you can read it but I can't? Do I trust you? How many people have to verify it before a non-expert can trust it? Theoretically, none. Everyone could be in on the conspiracy except me.

And how do I verify that the code I've just read is what shipped on my computer? If I install it myself, how do I verify the CD or ISO I have is what I read? If I compiled it myself, how do I verify the compiler didn't change anything? What about the system that compiled the code, do I trust that? How far back do I have to go before I have trusted everything?

And then what about hardware? Is there a chip that changes some of the code that's running? Can I verify every piece of hardware in my system, a 16-core CPU with 64GB of RAM and a high-end video card? How do I even do that? And what are the odds I am a world class C coder and a world-class hardware expert?

And then what about client code? Am I a world-class Javascript expert too, with full source access to the Django backend? And the system that is running that server code, do I have access to that hardware to make sure it's not going to compromise my security?

No. Fully trusting a computer is impossible. It doesn't mean we shouldn't try, we should at least make it as hard as possible for the bad guys to trick us. But what's better, IMO, is to create an environment where it's detrimental to companies to trick their users like that. Apple seems to have gotten a lot of great press for standing up in this situation. There's a market for it.

Can you trust Apple tech? Maybe not. But can you trust it more than Google or Microsoft or Samsung or Facebook or Amazon etc etc? It does seem that way. Without being a world-class expert in everything, eventually you have to trust someone.


Sure, you can't be 100% sure of anything. In the same vein, you can't even be sure that the world is real. The fact you can't have 100% assurances doesn't mean anything. What matters is how strong you make your assurances. A company having an existing system allowing for the installation of backdoored firmware is an example of something that shouldn't be possible. It should be possible to reasonably assure that someone cannot flash something onto a person's phone without either wiping the data or having the update signed by the user.


> No. Fully trusting a computer is impossible. It doesn't mean we shouldn't try, we should at least make it as hard as possible for the bad guys to trick us.

I completely agree. I liken it to something like world-peace: It's an ideal, and we'll probably never get there. Doesn't mean we shouldn't try to get there though.


We still need to have a realistic idea of what's possible though and not make claims that are stronger than what's likely.


A modern software stack involves in excess of 500 million lines of code. One can properly audit no more than 5000 lines of code per day. The math dictates that trust will be a given.
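
Taking those two figures at face value:

  lines_of_code = 500_000_000  # a modern software stack, per the claim above
  lines_per_day = 5_000        # generous ceiling for one careful auditor

  person_days = lines_of_code / lines_per_day
  print(person_days, person_days / 365)  # 100000.0 days, roughly 274 person-years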


I agree. It's an unfortunate inevitability. Doesn't mean we shouldn't try. We should be able to verify any subsection of it as well, whenever we need to.


> If users truly owned their devices, with full knowledge of the hardware and control of the software + data, we would never have this discussion.

We should have a series of standardized trusted execution modules which are open source hardware and open source software. There should be a public process by which these are vetted. Watchdog groups could disassemble samples of the chips and place them under electron microscopes and otherwise examine them for tampering.

The effect of this would be to place individuals and grassroots organizations on an even playing field with corporations and governments with regards to such tools. It's a disaster for large organizations to have access to individual private information, yet it's a societal boon for such organizations to be open and for individuals to know what they are doing. The same asymmetry applies to trusted execution. It's a disaster when big organizations use it against individuals, but it would be a societal good if individuals could revoke access to their information from corporations when they break goodwill.


They made it impossible for the new versions of the hardware, I think. (The Secure Enclave, in the phones that have it, would delete the decryption keys stored within it after receiving ten incorrect auth attempt requests from the CPU. The Secure Enclave cannot be updated—as far as we know—to be brute-force-able.)
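
For illustration only, a toy model of that wipe-after-ten behavior (the real Secure Enclave is hardware plus signed firmware; everything below is an assumption, not Apple's implementation):

  class ToySecureEnclave:
      MAX_ATTEMPTS = 10

      def __init__(self, passcode, key_material):
          self._passcode = passcode
          self._key = key_material
          self._failures = 0

      def try_unlock(self, guess):
          # Models the behavior described above: ten bad attempts relayed by
          # the application processor and the key material is gone for good.
          if self._key is None:
              raise RuntimeError("key material already erased")
          if guess == self._passcode:
              self._failures = 0
              return self._key
          self._failures += 1
          if self._failures >= self.MAX_ATTEMPTS:
              self._key = None  # irreversible erase; no later update undoes it
          return None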

It's just the old versions, before Apple started making privacy a political tentpole, that they could ever manage to backdoor in this particular way. That's still bad—but what Apple are mainly worrying about is that this precedent would force them to aid the FBI in other act-of-creation ways, not just in ways involving introducing backdoors.

The FBI might compel Apple to, for example, build a monitoring and clustering service into iTunes Genius/Apple Music to de-anonymize people through their music preferences; or they might request Apple build a facility to censor all messages travelling through APNS that mention the names of informants in active sting operations. Or they might compel Apple to build a secret CPU ring-0 elevation handshake into the A10 chip.

Even if the new generation of iOS devices was completely un-backdoor-able, and the usage of the old generation was effectively nil, that still wouldn't minimize the FBI's request. The FBI are likely much more concerned about all the orders they could give to, effectively, turn Apple's engineering talent into a domestic-surveillance consultancy.


Apple -- or any other tech company -- could probably prevail in court against a lot of the more outlandish FBI scenarios that you describe. Most appellate courts take privacy quite seriously. So does the Supreme Court, with both right- and left-leaning judges getting there just fine on their own. The Fourth Amendment is pretty clear on this.

The problem for Apple, or any other tech company, is that once you say "okay" once, you can't swat away the next one by saying "Impossible" and being done with it. You have to lawyer up. That takes time and costs money.

So Apple's real issue is that they don't want to spend many millions of dollars (billions?) on lawyers to get the FBI and other alphabet agencies to behave properly. That's a very legitimate business position. It's just a little more nuanced than what we were told on day one.


It will never be impossible for Apple to weaken the security of Apple's software.

For example, the government could require Apple to build future iPhone software to permit backdoor access. The question today is whether the government can do so with a court-issued warrant, or whether it will require legislation, as it did with CALEA.


It could be impossible for Apple of the future to break the security of a device sold by Apple in the present.

Apple can hopefully win this case based on the forced creation aspect. But they'd likely lose the next iteration where the FBI subpoenas technical documentation about the key derivation function, and builds their own hardware to talk to the hardened chip. Of course the FBI has little interest in doing this, because the whole affair is a psyop.

If you want to fully protect your customers, don't design backdoors into your products in the first place!


> And of course, that's also wrong. The only way to guarantee that such a powerful tool isn't abused and doesn't fall into the wrong hands is to make it impossible for such a tool to exist, not to refuse to create it.

And Apple has done that: the type of hack the FBI is asking for cannot work on newer iPhones.


Source? Current discussions surrounding your statement point to it being false - that Apple can update the firmware of the secure enclave without it wiping out its data.


I think the confusion stems from the iOS security guide that Apple published. Page 7 of the guide states that "The Secure Enclave is a coprocessor fabricated in the Apple A7 or later A-series processor. It utilizes its own secure boot and personalized software update separate from the application processor," which implies that somehow updating it is more secure, without saying exactly how much control Apple has over updating it, and whether or not the phone needs to be unlocked before it accepts new firmware. Given that they haven't come out and said that they can't override the firmware for locked phones, I'd say they can. Although, before Apple's recent statements I would have assumed that they couldn't, so the confusion is understandable. The guide is at https://www.apple.com/business/docs/iOS_Security_Guide.pdf


> Source? Current discussions surrounding your statement point to it being false - that Apple can update the firmware of the secure enclave without it wiping out its data.

Do you have the source for that, too? I was still under the impression that the Secure Enclave would erase the keys if it's updated.

It's a pretty huge flaw if the firmware can be updated and the keys preserved without first validating with the passcode.


AFAIK, the only sources on this matter are the white papers that Apple has produced. You can't even attempt to update the firmware without having Apple's signing keys.



Is that really true though? My understanding here is that the biggest blocker is the code signing process - the FBI or the NSA could probably build a backdoored OS, but they couldn't install it on the phone without Apple's signing cert. Depending on Apple's internal security, it may very well be impossible for a single person to sign anything other than an official iOS release, making it impossible for an aggressor nation or a disgruntled employee or anybody else other than Apple as a whole to get a backdoor onto an iOS device.


I believe the parent's point is that Apple having a signing key is no different in principle than them making an operating system and signing it. Apple possesses information that can compromise a phone, and if it's possible for a signed OS image to be compromised, then it's possible for the signing key to be compromised.


This is what I've been saying all along. If Apple has the ability to do this then the security is already compromised. The fact that it hasn't been physically created yet is a minor detail.

I also don't understand why they couldn't provide the FBI with an OS that refuses to run on anything other than the specific device in question. Isn't that the whole point of code signing?


Amen, brother. Free software, in my opinion, is the answer here. And of course actually working democracies.

Tim Cook today doesn't create it. Cook Tim tomorrow will.


The democracy is actually working: the small minority of tech industry people who think security and privacy are important are correctly overruled by the majority who don't.

If you think (and it's not crazy) that sometimes small, elite populations know better than the public at large, great, but please don't call that democracy.


You could argue that parliaments are a small elite population.

https://en.wikipedia.org/wiki/Representative_democracy

This is what we mean when we use the word "democracy" today, unless you live in ancient Greece or some small village.

So having experts on technology, security, and privacy overruled by an uneducated mob is madness, not democracy.


And this is exactly what the constitution is for. Changing the constitution requires much more than a simple majority.


The real answer is to do a Putin -- buy a typewriter.

If you're doing something that will attract negative political attention, mail a letter.


>The real answer is to do a Putin -- buy a typewriter.

Even that might not help:

http://arstechnica.com/security/2015/10/how-soviets-used-ibm...



So what?

It's pretty trivial to obscure that metadata, and frankly the content of your correspondence is more important than the metadata.

Your mail has fairly robust legal protection, and you can spend more money (i.e. Registered mail) to provide a higher level of tamper evidence and accountability in transit.


> Some finite number of human beings at Apple have the ability to create such a tool on their own initiative, e.g. were they disgruntled.

Is that true? I mean, wouldn't those employees need the root signing certificate... something I suspect is very tightly protected even within apple.


> something I suspect is very tightly protected even within apple.

By, say, breaking up its custodianship among a … finite group of human beings.


You just made an important distinction, however, between a singular employee, and a group.


It could be impossible for Apple to make iOS easier to brute force if they prevented installing updates unless the device is unlocked. However, being forced to make changes to iOS to grant access to the FBI opens the door to many ways in which security can be weakened or removed.


I'm all for Apple here but this is kind of silly. At some point someone has to design the product so someone has to do the creation in that sense. Unless you are proposing a jesusphone will somehow spring forth fully secure safe from even Apple's design team.

But that said, I'm definitely for the stance that technology be designed in such a way as to be unbreakable by Apple or anyone else once the device is sold.


Correct, thank you for summing this up so nicely. I would just like to add that, when possible, such a thing often already exists. Food for thought.


I love this line of thinking! What lines will we allow the government to cross?

Can it force you to buy goods and services? Sure, health care! Can it prosecute pre-crimes? Yep, drunk driving! Capture and log all human communications without a warrant for unspecified future uses? Yes, Prism. Get into the religion business? Why not? Marry people! Then there's using taxing authority as a weapon against political opponents (Tea Party thing.) And, taking land and property from citizens for profit (Kelo decision.) Control/erode the value of currency. Endless, shameless pork for political friends, bundlers, and donors. And on and on and on.

I won't be surprised when the US government compels Apple to sign a hacked copy of iOS. When Apple Computer is your last line of defense against tyranny, guess what, you're already screwed!


> Get into the religion business? Why not? Marry people!

Actually, you got that one round the wrong way. Governments have been in the marriage business before religion was. (Early Christians were married under Roman civil law and did not observe marriage as a sacrament.)


Drunk driving isn't a pre-crime. It's just a crime.


Ok, thought experiment.

You get liquored up beyond the set legal limit, you get in your car, put the key in the ignition, and start the engine. Is that a crime, or is it a pre-crime?

Next experiment. Same as above, but you put it into gear and keep your foot on the brakes. Is that a crime, or a pre-crime?

Last experiment. Same as above, but you take your foot off the break and drive home without incident, observing traffic signs and not harming anyone or their property. Is that a crime, or a pre-crime?

If you answered "it's all crimes because you can't drive drunk because there's laws" then you've accepted prosecuting made-up pre-crimes.

In my opinion, you have to actually do some damage to be a criminal. _Then_ you throw the book at 'em, no mercy.


> Last experiment. Same as above, but you take your foot off the break and drive home without incident, observing traffic signs and not harming anyone or their property. Is that a crime, or a pre-crime?

It's a crime, generally, as soon as you are driving on a public road above the legal limit.

(And, incidentally, it's "brake". "Break" is a word, but not the word you are looking for.)

> If you answered "it's all crimes because you can't drive drunk because there's laws" then you've accepted prosecuting made-up pre-crimes.

All crimes are "made-up", but it's not a pre-crime (which is a prediction of a future violation of a criminal law) - it's an actual, defined prohibited act.

> In my opinion, you have to actually do some damage to be a criminal.

You are entitled to your opinion of what should and should not be a crime, but it probably doesn't help discussing those to redefine the word "crime" to mean "thing that EdSharkey thinks should be a crime" and "pre-crime" to mean "actual violation of criminal law that EdSharkey doesn't think should be a crime".


As I predicted, you are okay with prosecuting victimless crimes (ouch, I used a liberal catch phrase, 10 demerits for me), which is your opinion.

In any case, we're on a slippery slope conditioning the masses for control. I marvel at it. I wonder what's next, jail time for prohibited speech and later prohibited thoughts?

> You are entitled to your opinion of what should and should not be a crime, but it probably doesn't help discussing those to redefine the word "crime" to mean "thing that EdSharkey thinks should be a crime" and "pre-crime" to mean "actual violation of criminal law that EdSharkey doesn't think should be a crime".

Am I really entitled to my opinion, though? It sounds like I probably shouldn't be entitled to an opinion due to my dangerous ideas.

(BTW, you had a run-on sentence there. You probably want to break that up so that others don't nitpick you parenthetically, but I took your meaning. I want you to know that I empathize and figure you were probably writing that on-the-go and didn't have a chance to fully wordsmith it. I don't think lesser of you even when what you write has flaws.)

Stepping away from my devil's advocate role for this thread for a bit ... IRL I have kids too, and I struggle with freedom issues like this. I have had crackups with distracted drivers who were messing with cellphones, kids, and eating/smoking behind the wheel, and I suspect they are way more dangerous than your average drunk who can hold his drink and regularly drives. I was rear-ended on the freeway going freeway speed (not braking) by a guy screwing around on his cellphone. (Gasp, I used 'braking' not 'breaking'! English is hard! ;)

What can you do? Life is so dangerous. How much freedom do you surrender before you feel secure?? Fear drives people into oppression.


> As I predicted, you are okay with prosecuting victimless crimes

I've described what is a crime, not what should be a crime.

> In any case, we're on a slippery slope conditioning the masses for control.

Maybe, but essentially regulatory, preventive offenses like drunk driving aren't new, so it's hard to describe a "slippery slope" based on their existence. I mean, that type of offense has been around in the systems from which the US systems descend longer than the US itself has been around; if it's a slippery slope, where is the slipping?

> I wonder what's next, jail time for prohibited speech and later prohibited thoughts?

While there have been some mixed results, I think the long-term trend in the time that regulatory, preventive offenses have been around has been toward greater protection for speech and conscience, not less. So I don't see those things as likely to be what's next -- though, certainly, they do get proposed from time to time, and have even been enforced. (The use of the Espionage Act and the Sedition Act during the Red Scare, for instance.)

> Am I really entitled to my opinion, though?

Yes.

> It sounds like I probably shouldn't be entitled to an opinion due to my dangerous ideas.

"Sounds like" based on...what actual thing that's been argued in this thread?

> (BTW, you had a run-on sentence there. You probably want to break that up so that others don't nitpick you parenthetically [...])

I don't mind parenthetical corrections, but I think if you go and diagram that sentence out you'll realize that it's just long, not a run-on.


Mechanically thinking, you are right. There are drunk driving laws and you probably had a long sentence, not a run-on one.

I wasn't mechanistic in my thinking though, you know what I'm saying? I was making a point, did you grok it?


So if I shoot at you and miss, it's all good, right? No harm, no foul!


> You get liquored up beyond the set legal limit, you get in your car, put the key in the ignition, and start the engine.

Not a crime.

> you put it into gear and keep your foot on the brakes.

A crime.

In the first case you are not driving. In the second case you absolutely, unambiguously are driving.

It's not a crime because you might kill someone, it's a crime because it has been defined as such in legislation. The legislation exists because you might kill someone.

Law is arbitrary because it has to be. And we have picked this arbitrary line because society broadly agrees that this line makes the most sense.

> In my opinion, you have to actually do some damage to be a criminal.

Then I'm very glad you don't have control over the law. I don't want drunk people to drive because I drive on the road, my family drives and travels on the road, my friends drive and travel on the road.


> Not a crime

Afaik most jurisdictions would consider that "in control" of the motor vehicle and thus operating it under the influence of alcohol, and thus a crime.

In Australia I don't think the engine even has to be running, I think keys in the ignition are enough to show "control" and thus charge you.


Correct. If you're drunk in the driver's seat on a public roadway (say, street parking in front of a bar), you're breaking the law. Doesn't matter if the car is parked, or if the engine is even on.

If you want to sleep it off, make sure you're in the passenger seat.


In many US states, being in the car and in possession of the keys while legally intoxicated is grounds for a DUI conviction.


How about these thought experiments:

A) A man walks in the street carrying a shotgun visibly

B) A man walks in the street waving around his shotgun

C) A group of men walk in the street waving around their shotguns and various rifles

D) A man points an unloaded gun at another man (a total stranger) in the street in broad daylight.

I would argue all these are crimes because you are terrorizing people, even if you never injured anyone nor had the intention of injuring anyone.


What about a person carrying a weapon do you find terrifying? Does it include police? Does it include other equally deadly things that aren't politicized such as a baseball bat?


Should people be allowed to walk down the street with a suitcase full of explosives? Serious question.


I believe A is legal in many states. Not sure about B, C, D - menacing behavior like that would be disturbing the peace or terrorism as you say.

What decent gun owner would walk around "waving his shotgun"? That sounds crazy, who would do that? Gun owners are people. Like, normal good people - neighbors I would probably trust. Does your view of gun owners differ?


>What decent gun owner.. >Gun owners are people.. >Does your view of gun owners differ?

There was no mention of the decency or otherwise of the hypothetical people, and it didn't read like they were intended as a moral illustration of gun owners in general. Sounds like you have a defensive agenda.

>..menacing behavior like that would be disturbing the peace or terrorism as you say.

So we can observe that some of the hypothetical people, who may or may not be gun owners, who may just be gun borrowers, are probably not, in this case, decent and probably are intending to menace.


Agreed that hypothetical people can be menacing with guns. Go on.


I would rather not discuss my view of gun laws in the US.

The thought experiment was meant to counter the arguments in the parent comment about whether drunk-driving is a crime or merely a "pre-crime".


But going back, do you think drunk driving should be classed as a pre-crime or a crime?


By this argument, police would have to stand there and watch a person get stabbed before doing anything, rather than grabbing the guy running and shouting with a knife.


What if you put a bullet in one chamber of a revolver (leaving the other 5 empty), spin the cylinder, point it at someone's head, and pull the trigger. If no bullet comes out, have you committed a crime?


Yes!


The question isn't whether the government can force Apple to compromise their own security--the answer is, it can.

The question is whether a court-issued warrant is sufficient to do so, or whether that power is only available via the Congress.


> using taxing authority as a weapon against political opponents (Tea Party thing.)

Can you elaborate on this one? I can't figure out what it's referencing.


Check out the controversies section on https://en.wikipedia.org/wiki/Internal_Revenue_Service#Contr...

Here's a little bit from it:

> In 2013, the IRS became embroiled in a political scandal in which it was discovered that the agency subjected conservative or conservative-sounding groups filing for tax-exempt status to extra scrutiny.[56]

> On September 5, 16 months after the scandal first erupted, a Senate Subcommittee released a report that confirmed that Internal Revenue Service used inappropriate criteria to target Tea Party groups, but found no evidence of political bias.[57] The chairman of the Senate Permanent Subcommittee on Investigations confirmed that while the actions were "inappropriate, intrusive, and burdensome," the Democrats have often experienced similar treatment.[58] Republicans noted that 83% of the groups being held up by the IRS were right-leaning; and the Subcommittee Minority staff, which did not join the Majority staff report, filed a dissenting report entitled, “IRS Targeting Tea Party Groups.” [59]

The shenanigans happened in the run-up to the 2012 elections. The "extra scrutiny" was basically stonewalling, delays, and creepy investigations into personal lives of individuals in the groups that were filing for tax-exempt status. If I recall correctly, when you had "Tea" or "Patriot" in the name of your organization, your application automatically got forwarded to a dedicated abuse squad at IRS. Tea parties had trouble organizing due to these tactics because they couldn't fundraise.


> Can a court force a company or an individual to create something that does not exist?

This is the most interesting part of the question here. They aren't forcing Apple to give up something they have, or divulge information they already know. They're basically commandeering the services of a private company for the purposes of the state. It makes me wonder where it would end - could the judge, judging it useful to help a court case, just pull a citizen off the street and tell them to go and get him/her a coffee? Is there any limit to what they can ask a private citizen to do, once they decide it is in some way useful for solving a crime? Could they even demand someone commit a crime and then prosecute them for the crime?


> Can a court force a company or an individual to create something that does not exist?

You know, all this discussion has reminded me of the fact that what we know as "corporations" today descend from the historical concept of "charter companies"—literally, corporations the government brings into being to serve the public good through private enterprise (in modern terms, to increase GDP), with an expected finite lifetime.

The "charter" of such companies was both a little law that individually granted the company a set of group rights, but also a statement of purpose and boundaries: both an enumeration of what the company should pursue (e.g. profit through importation of trapped furs), and what it shouldn't (e.g. profit through actually trapping those furs oneself; profit through trade of sugar-cane; a monopoly; colonial expansion; lobbying...)

It seems to me that the crux of the issue, here, is that we think that somehow the American corporation is no longer a thing that is brought into being by the state to serve the public good, but rather something private individuals have the right to bring into being for their own purposes, with the state having no say except insofar as to make certain corporate practices illegal.

In other words, we now do think of corporations as people—or more specifically, like the children of people. As with children, everyone seems to think they have the right, with no government interference, to create a corporation; everyone seems to think they have the right to guide and "raise" their own corporation however they wish; everyone seems to think it's perfectly okay if their corporation does things that benefit it at the expense of the state, as long as none of those things are explicitly declared illegal; etc.

In the historical model, none of this was true; corporations were the "children" of the state itself, raised to pursue its goals—with the CEO and directors taking a stewardship role, guessing what the state would prefer and executing on that. If they were ever wrong, they'd be punished—historically, by simply having the corporation disbanded at the whim of the king. (Also, probably, the board of directors would be punished as individuals; chartered companies had no concept of limited liability.)

Chartered companies still sort of exist:

• Many countries have "crown corporations"—like chartered companies, these are brought into existence and given a mission by the government—but they effectively have the government sitting directly on their board with voting rights, and often their cashflow happens through the state treasury. (And other modern corporations, although not explicitly set up this way, can be thought of as effectively crown corporations—China's airlines, for example, are explicitly owned by the Chinese government as majority shareholders, although in theory those shares could be sold.)

• More interestingly, in some countries (e.g. Britain, India, Canada) there are municipal corporations: cities, townships, etc. are brought into being using exactly the same sort of "charter" as historical charter companies, and run with exactly the same goal: to further the interests of the state that created them, with the municipal "government" (actually a CEO and board of directors—"mayor" is just a cute title that is meaningless to the law) acting in a stewardship capacity.

---

To get to the point: if the California State Legislature wanted to (at the behest of the FBI), they could just change Apple Inc.'s charter to make "doing everything possible to aid FBI investigations into data held on, or passed through, Apple-produced devices" an overriding concern above corporate profit.

Charters are a thing given by the state to the individual, just like, say, copyrights are. Like copyrights, they are brought into being to further the public good. The benefit to the private individuals holding a corporate charter (like the benefit of having a copyright) is a cost the state has decided is worth paying, to get the resultant benefit to the public good. But the state gets the final say: it can revoke or modify whatever it likes if it doesn't think the public good is being served.


That model came from a time when the state set somewhat more reasonable boundaries for itself, and also a time when continuous observation of all members of society was not only morally abhorrent but also impractical. If you really feel that the state in its current incarnation should have these sorts of unrestricted powers, I'm not sure how to have a productive conversation. They've shown themselves wholly irresponsible from my vantage.


You seem to believe that governments exist to grant permissions to citizens as they go about their lives. I sincerely hope you are never in a position of power.


My belief is that a "corporation" is a fiction brought into existence by the state—like copyright. Government gets to say what corporations do or don't do, because without government, corporations just don't exist as a "thing" in the first place—they just become groups of people who happen to be working together.

People have inalienable rights, because people existed before governments did. Corporations don't, because corporations exist as a contract between people-taken-as-a-group (i.e. the state) and people-taken-as-individuals.

This is a good and a bad thing: it means that precepts like "corporations exist to pursue profit at all costs" are not actually "hardcoded" into the laws of the universe, but are just part of the particular way the state has chosen to make corporations work. If we (people-taken-as-whole) stopped thinking that that was a good idea, we could just change the corporate "goal conditions"—and the world would suddenly be different.


> The best way to prevent governments from oppressing their citizens is to refuse to create tools that enable oppression.

Are search warrants oppressive? Can you distinguish this request from the FBI requesting a safe manufacturer modify the firmware on a single safe?

I doubt either of these can be reliably argued.


What about all the conversations the terrorist had with his fellow terrorists in person? Is it an awful, terrible, no good thing that the FBI can't retrieve those conversations? Should we put in place ubiquitous video and audio surveillance in every square foot of the country just in case the FBI ever wants to review something that happened?

It's not "going dark" that is an unprecedented change, it was the brief "going light" period that preceded it. Law enforcement has apparently lost the ability to do on the ground investigation work in favor of whiz bang-ery (and intelligence has lost the ability to do humint). Well, whiz bang-ery is a two way street. Now is the time to revive those old skills and deal with a _return_ to a world where you can't just outsource your job to a wire tap.


What is it the FBI is after? What data exists only on the phone and nowhere else?

Phone calls, texts, or data logs? Subpoena the cell company or the ISP.

Email? Subpoena the hosting provider.

Any communications used on that phone traveled over the Internet, and the logs are most likely preserved. They could get all of the logs of outgoing requests from the cell company and then go to each service provider and demand dumps of data from any accounts that phone logged into, right?


The FBI is after a legal precedent.

Surely they have enough evidence to convict the accused. The phone likely wouldn't give them anything essential to the case. It's already a wrap.

However, the legal precedent would be invaluable for future <s>rights violations</s>legal proceedings. They have 1. Nothing to lose in this particular case. 2. A maximally effective situation for getting this ruling, complete with irrational fear-based public support ("bcuz turrists!1!").

Seriously. If your goal is to create this legal precedent, go ahead and try to imagine a better scenario under which this could rule in your favor. I'll wait.


You have to understand the accused are dead, and destroyed their personal iPhones before the attacks. There is no criminal trial for dead people.

The only iPhone the FBI have is the work phone, which he might not have used to contact terrorists. If he had, he'd have destroyed it as well.

The problem is that judges and politicians are not tech savvy enough to understand why they shouldn't force Apple to make a tool to break passcodes on iPhones, or how that makes any iPhone insecure in the wrong hands.


There is no accused. They're both dead.


iMessage chats are only on the phone. Same for Telegram, which is what the FBI suspects.


> They could get all of the logs of outgoing requests from the cell company and then go to each service provider and demand dumps of data from any accounts that phone logged into, right?

The primary service provider being Apple, who operates a fully encrypted messaging system where the cleartext only ever exists on the device.

It's a child abuser's wet dream.


OK, but sometimes when I send iMessages, they actually send as SMS if my service is poor or the receiver's service is poor, someone is in the subway, on a flight, etc etc.

Chances are, this happened at least once over the lifetime of the terrorist's ownership of the phone - thus we can at least see who they are communicating with, right?

Wouldn't that be a good starting point?

And call records - they must have called someone - if only to talk about the weather.


You can turn off the SMS fallback in Settings.


> Chances are, this happened at least once over the lifetime of the terrorist's ownership of the phone - thus we can at least see who they are communicating with, right?

Is that really acceptable to you? Maybe they accidentally screwed up once so it's OK to keep the private affairs of a dead terrorist secret for no good reason?

> Wouldn't that be a good starting point?

I would have thought the best starting point was unlocking the phone, given the owner is dead and committed horrific crimes.


Apple do not have the ability to unlock the phone. They have the ability to create a tool that could unlock any iPhone. That is not a starting point.


Therefore they have the ability to unlock the phone. I mean I knew people defended Apple here but I didn't think they'd demand logic no longer works to satisfy that.


I would start with anything available, dig deep, and never give up. Including breaking down the doors of anyone they texted or called.


> Including breaking down the doors of anyone they texted or called.

But not the service provider intentionally denying access to this vital information? What a bizarre twist of logic!


> Are search warrants oppressive?

A search warrant doesn't require me to create a new capability that did not exist before.

From the FAQ:

  The digital world is very different from the physical world. In
  the physical world you can destroy something and it’s gone.
  But in the digital world, the technique, once created, could
  be used over and over again, on any number of devices...

  The only way to guarantee that such a powerful tool isn’t abused
  and doesn’t fall into the wrong hands is to never create it.


> A search warrant doesn't require me to create a new capability that did not exist before.

It might, depending on the analytical outcome of the three-factor test introduced in the _New York Telephone_ case. (This case was decided before compelled pen register assistance was prescribed by Congressional legislation.)

The case itself is illustrative and enlightening: https://supreme.justia.com/cases/federal/us/434/159/case.htm...

"The power conferred by the [All Writs] Act extends, under appropriate circumstances, to persons who, though not parties to the original action or engaged in wrongdoing, are in a position to frustrate the implementation of a court order or the proper administration of justice..."

The factors are:

(1) whether the third party is "so far removed as a third party from the underlying controversy that its assistance could not permissibly be compelled by the order of the court"

(2) Whether the burden placed on the third party is "unreasonable"

(3) Whether the assistance is "essential to the fulfillment of the purpose" of the warrant

It's unfortunate that more attorneys don't comment on Hacker News, and that most news articles don't link directly to the legal filings in these cases; much confusion and many false assumptions about the law could be cleared up.


Orin Kerr has written about the implications of that case specifically:

https://www.washingtonpost.com/news/volokh-conspiracy/wp/201...

A potentially important factor is that the phone companies used pen registers themselves all the time, whereas Apple has no desire at all to write the software that the FBI wants.

It's worth noting that despite the outcome of that case, Congress still had to pass CALEA, which seems to imply there are limits to what can be compelled under All Writs.


There are very few limits to the power of the All Writs Act -- the act, passed the same week as the fourth amendment, grants the courts (not the FBI) the authority to "issue all writs necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law".

However, you actually have to get to the courts before the All Writs Act can be applied.

CALEA was passed to create a preemptive requirement for standardized wiretap interfaces/equipment, processes, etc, prior to any actual judicial warrant or writ being written.

The fourth amendment grants us protection against "unreasonable" search and seizure, but doesn't grant the government an affirmative right to require that future "reasonable" searches be easy, or possible.

It also (unfortunately, in my view) doesn't prevent the government from requiring preemptive action to support future searches, and that's what CALEA does.


I think the FBI is acting like there was a pre-emptive requirement for Apple to build a system to permit access.

The reason I say that, is that now that it has been proven the phone does not permit access, the FBI is trying to use a warrant to force Apple to break into the phone. But if providing access was never a legal requirement in the first place, why is it Apple's problem now?

Yes, only Apple can do what the FBI wants to do. In my mind, that should not be sufficient, in the absence of a legislative requirement, to force Apple to break a software system against their will--even their own software system.

I mean, let's say the FBI wants to run a sting operation against a gangster. Can a court use All Writs to force some random person to participate in the sting? I would think not.

Let's say law enforcement needs to pull a hard drive from a 30th floor apartment, without alerting the doorman. Can a court use All Writs to force a rock climber to climb up the building and go in the window to get it? Again, I would think not. Even if there was only one rock climber in the entire U.S. who could do what the FBI needed, it doesn't seem likely to me that an All Writs warrant would succeed against that person.

So why should it succeed against Apple? I mean, Apple is the only company that can do what the FBI wants--true. And they did build the phone to prevent access. But there was no requirement to build it any other way, so why would that be relevant?


This is straightforward: Apple already has a backdoor into the key derivation firmware (technically, the OS).

If they didn't have it, they couldn't be required to use it.


The question isn't whether Apple has a capability, the question is why can they be forced to use that capability on a phone owned by a 3rd party?

Apple has many capabilities--they're a $500 billion technology company. Which capabilities are not available to the FBI via an All Writs warrant?


You're conflating very different scopes of "capability" and responsibility.

I'll quote the DoJ's legal brief on how Apple is not "far removed" from this phone owned by a 3rd party:

"... the government is seeking to use capabilities that Apple has purposefully retained in a situation where the former user of the phone is dead ..."

"... iPhones will only run software cryptographically signed by Apple ... Just because Apple has sold the phone to a customer and that customer has created a passcode does not mean that the close software connection ceases to exist; Apple has designed the phone and software updates so that Apple's continued involvement and connection is required."

"More generally, the burden associated with compliance with legal process is measured based on the direct costs of compliance, not on other more general considerations about reputations or the ramifications of compliance".


The DoJ is just re-iterating what everyone knows, which is that Apple can push software updates to a phone.

That does not in any way explain why Apple should be compelled to write new software, that they would not otherwise choose to write, before pushing it as an update.

Apple retained the ability to push updates to improve the performance and security of products, not to make it easier to hack them. There is a difference!


Apple can push software updates to a locked phone without the owner's permission.

Nobody else can.

That's a big difference.


Put * on each side of text you want italicized. More at https://news.ycombinator.com/formatdoc


Edited accordingly, thanks!


Thanks for that link. Always worthwhile to read Kerr's analysis.

ISTM the right outcome is for Congress to weigh in with specific legislation that can supersede AWA in cases like this.


The FBI or other agencies require things that cost money all the time. In this case, they offered to compensate Apple for the work. And it's disingenuous for Apple to say "create a whole new operating system". That's so absurd as to be pretty much a lie. What the FBI wants is not much more work than unlocking a door, just for more highly paid people.

And calling it "creating new technology" (not your words, but others'), is silly too. Doing almost anything with software is creating new software, but calling it new technology makes it sound like they have to go out and do a bunch of R&D and make some significant breakthroughs or something. In reality, it's a trivial thing.

To address the quote you posted, it sounds like Apple is probably lying there too. Especially given the vast exaggerations in other parts. I highly doubt they would have a hard time making a modified OS that was device locked.

But if it's really true that they can't make a device locked firmware, well, then of course I hope they win this fight.


> A search warrant doesn't require me to create a new capability that did not exist before.

I don't see how that's relevant. If I invent a new kind of lock you've never seen before, you'll have to come up with a way to break it.

I'm asking for why a search warrant would be oppressive, even if it involved creating new firmware for a safe.


> If I invent a new kind of lock you've never seen before, you'll have to come up with a way to break it.

Why?


That's the whole issue, isn't it? You can't create software that operates only on a single instance of a computer. It'll run on all of them.


That's not the case – if Apple produce a signed iOS version that allows the passcode retry limit to be bypassed on a single device, then in what meaningful sense does it operate on another device?


> if Apple produce a signed iOS version that allows the passcode retry limit to be bypassed on a single device, then in what meaningful sense does it operate on another device?

What prevents the iOS image from being loaded onto another device?

Apple isn't magic; the code they write to verify device identity isn't going to be the first perfect, unbuggy, unexploitable code written in human history. if(device_udid == terrorist_id) {...} might seem infallible to you, but the reality is that the device_udid is just SHA1(Wifi MAC + Bluetooth MAC + ECID + Serial). All of those are writable, some via the Baseband and some via physical access. Generating SHA1 collisions is completely feasible for ~$1,000,000 of computing time, which is chump change to nation states.

There is no infallible way for Apple to make an iOS version for one single device.

Edit: And stories like http://abcnews.go.com/Technology/york-da-access-175-iphones-... make it absolutely clear that this is not stopping at a single device.
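
To make this concrete, here is a sketch of the kind of device-lock check being discussed, using the derivation described above (treat that derivation as an assumption, not documented Apple behavior):

  import hashlib

  def device_udid(wifi_mac: bytes, bt_mac: bytes, ecid: bytes, serial: bytes) -> str:
      # SHA-1 over device identifiers, as described above; an assumption.
      return hashlib.sha1(wifi_mac + bt_mac + ecid + serial).hexdigest()

  TARGET_UDID = "..."  # hypothetical: the single device named in the order

  def firmware_should_run(wifi_mac, bt_mac, ecid, serial):
      # The check itself is trivial. The weakness argued above is that every
      # input feeding the hash is attacker-influenced (MACs and ECID can be
      # rewritten), so "runs on one device only" is policy, not cryptography.
      return device_udid(wifi_mac, bt_mac, ecid, serial) == TARGET_UDID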


I guess when the FBI compels them again. Or once they have a foot in the door, compels them to remove the signing requirement, or wants their own ability to sign.

This is not mysterious. Can the FBI require a software company to subvert their own security guarantees? Should they? Do bank vaults have back doors for the FBI?


> I guess when the FBI compels them again

That's a bit of a specious argument though – of course, if they produce different software in the future, it could be used again!

If the FBI want their own ability to sign Apple updates, then that's obviously a different situation. But the point you were making – that it's not possible to write software that will run on only one device – is not true in this case.

> This is not mysterious. Can the FBI require a software company to subvert their own security guarantees? Should they? Do bank vaults have back doors for the FBI?

I don't really know the right answer.


Apple could write code that checks the hardware ID supplied and sign only that version. It's been sufficient to prevent hacking so far and is the major protection relied upon until the latest generation, so I don't think this is an issue at all.


The issue is the precedent that is set. Once they create the special firmware, even if it is locked to one device, the door is open: Apple have stated that law enforcement have already told them about hundreds more devices they want unlocked. Law enforcement would have a much easier time going to court and compelling Apple to re-sign the existing special firmware to work on those other hundreds of devices. Apple could no longer argue they had to create something to fulfill the warrant, since it was already created.


From what I can tell, the UDID lock is not actually secure enough, and it was never designed to be a security-critical control on a path that bypasses the encryption.

An earlier comment with more technical detail: https://news.ycombinator.com/item?id=11141499

UDID lock is good enough for tying development builds to specific devices but is not an unbreakable guarantee the software cannot be run on another device.


That comment simply makes claims such as being able to alter data in transit inside the CPU's architecture.

Yes, if you can do that, then you can flash anything to the phone. When someone shows me that happening, then I'll believe in this technology.


No, I proposed 3 different ways of hacking the UDID in that comment. 1) disrupting the read of ECID in transit, 2) re-writing ECID by hacking the BPP, 3) a SHA1 hash collision. Any one of the 3 will do.

Even then you cannot "flash anything" to the phone. But you can flash a build signed by Apple which hard-codes a UDID check.


Wouldn't these mechanisms apply to key verification too? How exactly can you argue that you can modify the UDID-related data in real time inside the phone's architecture, but couldn't possibly modify the key-verification data the same way?


Only certain parts of the hardware responsible for key storage and verification are architected to be resilient against physical and electronic attack, but the entire phone is not.

For example, changing the WiFi or Bluetooth MAC is not locked down and affects the UDID. Because the UDID was never intended to be a way to bypass the device encryption, it was not designed with anywhere near the same level of care and sophistication as the handling of the encryption key itself.


> For example, changing the WiFi or Bluetooth MAC is not locked down and affects the UDID. Because the UDID was never intended to be a way to bypass the device encryption, it was not designed with anywhere near the same level of care and sophistication as the handling of the encryption key itself.

I never suggested the UDID. Please don't ascribe a claim to me that was never made. SHA512 the serial and MACs independently. Job done.


...and they give it to the FBI, who then changes that ID as they please? Remove signing altogether? It's the fear of a blank check that drives this issue.


No – iOS is signed, and changing the ID would invalidate the signature.
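
That tamper-evidence property is easy to sketch. The toy below uses an HMAC purely as a stand-in for Apple's real (asymmetric) code signature, and the image string and embedded ECID are made up; the point is only that editing the embedded ID invalidates the signature:

    import hmac, hashlib

    SIGNING_KEY = b"stand-in secret"  # placeholder; the real scheme is asymmetric

    def sign(image):
        # HMAC as a stand-in for a code signature over the whole image.
        return hmac.new(SIGNING_KEY, image, hashlib.sha256).digest()

    def verify(image, signature):
        return hmac.compare_digest(sign(image), signature)

    image = b"ios-build|locked-to-ecid:000012345678ABCD|<hypothetical code>"
    sig = sign(image)

    tampered = image.replace(b"000012345678ABCD", b"0000DEADBEEF0001")
    print(verify(image, sig))     # True
    print(verify(tampered, sig))  # False: changing the embedded ID breaks it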


Taking the private key from Apple with a warrant, national security letter, subpoena, or other methods is much easier than creating a custom version of iOS.

It is foolish to assume that the FBI would stop at this special version of iOS, especially given how they have been arguing and fighting to break encryption for over two decades.

edit, on icebraining's interpretation:

That's exactly right - once the difficult part about targeting an individual phone is finished, it's easy to take the iOS signing key and target any device you want.


> Taking the private key from Apple with a warrant, national security letter, subpoena, or other methods is much easier than creating a custom version of iOS.

Clearly it is not, otherwise they would have done this. They did not.


I think they meant taking the key used to sign that custom version of iOS. That is, if the FBI can force Apple to write the custom version, they can also coerce the key from them to sign their own copies, preventing your suggested blocking mechanism.

This is what I understood, at least, from the parent's post.


And can Apple guarantee that the ID can't be changed inside the device? If they can't, it makes a lot of sense for Apple to refuse to create such a tool.


The ID can be changed easily; a quick Google search will confirm this.

The question is whether you can change one device's ID to exactly match another's. This might require hacking the BPP or finding a SHA1 hash collision, both of which are absolutely possible but not necessarily trivial.


Pretty much, as far as I'm aware; the device ID cannot be changed. Bear in mind that this entire discussion is applicable only to older iPhones, in any case.


Until the latest generation, that guarantee was the only thing protecting the hardware. As far as I'm aware, it has never been altered or hacked. Doing so would require modifying the physical construction of the chips on the device.

Possible? Maybe, but if it were that simple the FBI could simply change the signing key. It's not, and they can't.


One thing that I don't think has been covered enough in this whole debate about forcing Apple to unlock the iPhone is that "Farook and his wife destroyed their personal iPhones, and the hard drive from their computer was removed and has never been found"[1]. The iPhone the FBI is after is one that was issued to him by his employer. It seems to me to be very unlikely that Farook and his wife would go to the trouble to destroy all their other electronics but somehow forget to destroy his work phone (assuming his work phone had incriminating information on it in the first place).

[1]: http://www.usatoday.com/story/opinion/2016/02/18/apple-court...


This is the piece of information that makes it clear to me that this case is not about this phone in particular. The FBI is trying to use an act of terrorism to acquire the tools to hack iPhones. They can't actually expect to find anything valuable on the work phone the attacker chose not to destroy.


"Never let a good crisis go to waste."


And moreover it seems incredibly unlikely that a man planning to commit an act hostile to the US government would discuss that act on a device issued to him by the government (his employer), especially when he also carried a separate personal device.


I agree, this has constantly been understated. The headlines make it seem that the whole case hinges on this phone, which is one of the least likely places for incriminating data to be found.


I am guessing they have the call records from the phone company and found something interesting enough to go through all this effort.

We don't know what they found in the lake.


What would they get? Best case, there is a contact name associated with a phone number they think is suspicious? Wouldn't that have been backed up on iCloud?

Also, what's the endgame? They want to convict the dead guy? They want to convict the neighbor who sold them guns, maybe?


You think so? It seems disingenuous for you to suggest that when all the evidence points to this phone having nothing useful on it, that this is just a thing the FBI wants to have done for precedent. The FBI has been crowing for backdoors into all our private communications long before this phone became interesting.


"Law enforcement agents around the country have already said they have hundreds of iPhones they want Apple to unlock if the FBI wins this case. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks. "

Actually - the master key, the backdoor, already exists. The master key is Apple's ability to sign a new version of iOS and update the software on a locked phone.

The Federal government isn't asking Apple to create a backdoor. They're asking Apple to use the backdoor that already exists.


> The Federal government isn't asking Apple to create a backdoor. They're asking Apple to use the backdoor that already exists.

Basically. Unfortunately, most of the reporting is focused on the payload Apple is being asked to create and doesn't draw enough attention to that "existing backdoor" that will allow such a payload to be successfully installed.

Eliminating that "existing backdoor" should be a priority. I see some, here, expressing the thought that Apple might be working on that. I think Apple needs to be pressed, hard, on that very subject.


Supposedly, the backdoor has already been fixed by the Secure Enclave, which was included in newer models of the iPhone.


Supposedly, and also supposedly the Secure Enclave doesn't defend against this, depending on who you ask and how they're speculating. I've yet to see anything authoritative on this from someone in a position to know for sure.

The good news is that if you have a good password, not just a simple numeric passcode, you should be safe against this sort of thing regardless, unless the authorities can coerce or trick you into revealing your password.
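
To put rough numbers on that: assume ~80 ms per attempt (the per-guess key-derivation cost Apple's iOS Security Guide cites; treat the exact figure as an assumption here) and assume the OS-enforced delays and wipe are exactly the protections being removed.

    SECONDS_PER_TRY = 0.08  # assumed key-derivation cost per guess

    def worst_case_years(keyspace):
        # Time to exhaust the whole space with no artificial delays or wipe.
        return keyspace * SECONDS_PER_TRY / (60 * 60 * 24 * 365)

    print(worst_case_years(10**4))   # 4-digit PIN: ~13 minutes
    print(worst_case_years(10**6))   # 6-digit PIN: under a day
    print(worst_case_years(62**10))  # 10-char alphanumeric: ~2 billion years

Even with the artificial limits stripped away, a long alphanumeric password leaves the derivation cost itself as the barrier.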


Apple is reframing it this way for the media, but it's not technically untrue.


Is this the same key that Apple uses to publish updates to iOS?


It's sloppy to call a vulnerability a backdoor. Yes, Apple failed to fully secure the 5c, and may even have failed to fully secure the 6s against this attack. So it is vulnerable to this attack, but the backdoor doesn't currently exist.

Also, I would expect if it is technically possible, that Apple will be pushing an iOS update which closes this particular vulnerability going forward. Obviously not an update the FBI will be installing on this particular phone!

I believe their intent is to make iOS secure even against an adversary that has physical possession of the phone and even against an adversary with the ability to sign custom builds.


In cryptography and computer security, a vulnerability (a backdoor) exists whether or not an exploit exists.


>vulnerability (a backdoor)

This is at best imprecise. The HN privacy advocates have taken to calling all vulnerabilities backdoors on the (not insane) belief that manufacturers are out to get them, but there is a distinction.

Backdoors are a special case of vulnerabilities inserted intentionally by an attacker into ostensibly secure designs to allow the attacker back in once the device has left his control. For example, if Apple configured iPhones to accept two PINs, one set by the user and the other set by Apple at manufacturing time, we could say the iPhone is backdoored. If some engineer went rogue and marked his own code signing key as trusted, that's a backdoor.

When you say something is a backdoor, you assign blame for its actively malicious and undisclosed insertion. Either the organization is evil or someone subverted the organization, and this person must be caught and punished.

Most vulnerabilities happen because their creators didn't know better, or found mitigation to be not worth the cost. Apple is a little bit extraordinary in considering "itself, under legal coercion" under its threat model at all.

You might disguise a backdoor as an ordinary accidental vulnerability, and this is a reasonable assertion to throw around when someone who should have known better, whose peers were doing better, chose an insecure design - e.g. Juniper switching their design to use known-broken cryptography.

The signs here certainly point to a vulnerability that ought to be mitigated (and possibly has been, under the Secure Enclave system), but not a deliberate flaw in iOS's design. Vulnerability, not backdoor.


Apple intentionally created a platform where they hold almost absolute control over devices.

They can install new software, inject code into auto-updated applications, MITM SSL by hooking local device APIs, MITM iMessage by using their CA to sign new certificates. Their control over end-user devices is astonishingly complete.

They can do all of this with no external validation (except when jailbreaks are found), as the platform itself prevents 3rd parties from decrypting Apple's OS updates, viewing the operation of the device, introspecting Apple's code, or introspecting the encrypted application code shipped on the device.

They've intentionally done this, and justified this massive set of backdoors under the ridiculous assumption that they can defend device owners against all possible compromises of Apple's trust position, both from within (such as changing business management) and without (governments, espionage, and compromise), not just now, but many decades to come, all without any checks or balances.

How is this not the very definition of backdoor?


Suppose the manufacturer exerted no control over the device. No code signing, no sandboxing, no security hardware, etc. Then anyone, not just the manufacturer, would be able to replace the PIN verification routine.

If the manufacturer's ability to sign code is a backdoor, then so is absence of code signing.

You're assigning them the moral failure to perfectly secure the iPhone only because they have tried to secure it at all. It's subject to strictly fewer attackers; in my book, that's a good thing.


> If the manufacturer's ability to sign code is a backdoor, then so is absence of code signing.

In that universe, you can run any encryption software of your choosing, and verify the software written by others.

Nobody has privileges over the device that exceed those of the device's owner.

> You're assigning them the moral failure to perfectly secure the iPhone only because they have tried to secure it at all.

Legally and ethically, it doesn't matter.

Legally, cryptography is protected expression, and you can use cryptography to encrypt whatever you like -- including information that would otherwise be subject to a lawful search under the fourth amendment.

However, the fourth amendment doesn't have a crypto "participation award" clause; if you left a backdoor in place, the government can compel you to use it.

Ethically, if they chose to leave themselves this backdoor (and all the others), the system is weak. If it's not the FBI today, it's the NSA with a NSL tomorrow, or foreign espionage, or a disgruntled employee, or new corporate management.


Indeed I can run the encryption software of my choosing, and the FBI can run whatever it needs to bypass that software's rate limiting, in which case it's exactly as weak as the iPhone.

With physical access I can replace any security software you have in place. The hardware needs to enforce its integrity.


As if your everyday person is capable of verifying or validating or even making these decisions if left to them.

I work in tech and I wouldn't feel comfortable making all those decisions.


iMessage is definitely "backdoor-friendly" due to how the CA server works. In past court cases (e.g. [1]) Apple has said they do not have the capability to wiretap iMessage and they have refused to create it, and the government has backed off. Maybe in part because in those cases there were active iCloud backups which provided messages in near-enough-to-realtime that the government wasn't in a good position to force Apple to hand over access to the backdoor. Or maybe because the targets of those particular investigations weren't unsympathetic enough. This case is the one the government has been "waiting for" to push the envelope and see if they can force Apple to provide a backdoor against their will.

[1] - http://www.nytimes.com/2015/09/08/us/politics/apple-and-othe...


Your reply does nothing to justify why "backdoor" and "vulnerability" should be considered synonyms. What you say is true of "vulnerability", I've never heard someone call a buffer overflow vulnerability a "backdoor".


A vulnerability is a backdoor if it's placed there intentionally.

That's what Apple has done.


A window is a vulnerability into my house, but it is not designed for passage in the same way that a backdoor is. There is a subtle difference between a backdoor and a vulnerability that is worth respecting, in my opinion.


The key analogy starts to break down at some point. The "master key", as you put it, is that Apple has the engineers, experience, and digital key that enable them to create a second key for any iPhone manufactured pre-Secure Enclave. The argument here is that once Apple creates that second key the effort required to steal it is significantly lowered. You can't steal something that doesn't exist yet.

It's more analogous to forcing everyone to use a shorter password composed of only numerical digits because it makes brute forcing easier.

The mere existence of this new, less secure iOS makes Apple even more of a target. The slightest chance that it could leak would render more than the hundreds of iPhones the FBI wants to unlock completely insecure.


> The argument here is that once Apple creates that second key the effort required to steal it is significantly lowered. You can't steal something that doesn't exist yet.

The key exists. If someone were to steal it, they could sign any version of iOS they want.

I think the resources required to successfully steal Apple's private key for signing updates, which is presumably locked down very robustly, are an order of magnitude greater than the resources required to modify iOS to remove the erase and timeout features. The latter is very difficult, especially without the source code, but it's hard for me to believe that it's harder than stealing Apple's key.


Just having the key doesn't automatically make it easier for you to create the new iOS. That task is significantly easier for Apple to do than for you to do. Much of security is raising the level of effort for an exploit to a high enough level that it's not worth the investment required to create it.

If Apple creates the new iOS for you, though, and all you have to do is get your hands on it, then the investment required just got smaller. Any reduction in the investment required for an exploit should be very carefully evaluated. Saying things like "This can't possibly be used by someone else" is either disingenuous, a sign that you don't adequately understand the issues, or a lie you're telling yourself. Whichever it is, it's in Apple's best interest, and frankly mine, that someone highlight the truth of the situation.

[EDIT]: The below is more hyperbolic than I intended after a second reading. I'm leaving it since I already hit submit but I wanted to apologize for going there. :-(

This is like saying

    Look... We'll create a nuclear bomb, but it's okay: we're
    the only ones who will have it, and nobody else is going
    to be able to get one.

Maybe you can guarantee that no one else gets enough information from you to build one. However, once you've built one, the chance that someone could get the information from someone involved in your project to accelerate their own creation of a nuclear bomb just increased by a non-trivial amount. And the potential risks are really high. Pretending otherwise would be foolish.


My claim is that the resources required to steal Apple's keys are significantly higher than the resources required to modify iOS in the way required. Since a successful exploit requires both the modified iOS and the keys, if I am correct then creating the modified version does not significantly decrease the resources required to exploit.

When I claim it can't be used by anyone else, I mean without Apple's signing keys. There are no shortage of jailbreak developers that would be happy to work for whoever pays them, and could build tweaks in to iOS with the signing keys.


For the government to make the necessary modified version of iOS, they need to have or acquire a team of skilled engineers that either already have detailed knowledge of iOS and the devices it runs on, or a way for their team to learn that knowledge. This would probably be somewhat expensive in both money and time.

To acquire Apple's keys, the government only needs someone who can replace the device key and re-sign a new iOS package, and a sympathetic judge that will sign a subpoena, warrant, or national security letter. This is practically free and shouldn't take more than a day or two (warrants are sometimes granted near-realtime).

I don't understand why you would think taking Apple's keys (which wouldn't require "stealing") somehow requires more resources. A warrant or NSL is much cheaper than a team of developers.


You make this sound hard: there are tons of qualified people who could do this in less than a week, including myself. We already have all of these tools just sitting around from the iPhone 4, and some of us have emulators for more recent devices: the only thing we don't have is Apple's key.


I was talking about getting someone to steal it.

I doubt that it's as easy as you think it is to get an NSL.


Tom Cross[0] in this tweet[1] was the first person I saw draw the distinction, calling the FBI request an exploit rather than a backdoor. This Vectra Networks blog post[2] explains the distinction in better technical detail.

[0] https://en.wikipedia.org/wiki/Tom_Cross_%28computer_security... [1] https://twitter.com/_decius_/status/699932604737417216 [2] http://blog.vectranetworks.com/blog/apple-vs-the-fbi-some-po...


Actually that's irrelevant because the government can always coerce companies or individuals to introduce backdoors if there aren't any. Software developers have the arrogance to think that technology can solve political issues.


> Actually that's irrelevant because the government can always coerce companies or individuals to introduce backdoors if there aren't any.

The FBI is trying to do just that and Apple is fighting it. That is why this case is so important.


Except they're not. If the FBI was trying to force Apple to keep this avenue of attack open in all future versions of iOS then I would be 100% behind Apple, but that's not what's going on. The FBI is asking Apple to exploit an already existing vulnerability.


Do you not think this is where it is headed? The FBI has been complaining about device security for a while. They finally have a 'terror' case to use in order to try and swing public opinion. To think that the FBI only wants a single phone unlocked (we later found out they have many), and only wants it to work until Apple closes the security hole, is naive.


The difference in this case is that a backdoor can be introduced after the fact. That is, it is possible to load a new version of iOS onto this locked phone with the backdoor in place. Ideally, one should not be able to update a locked phone in such a way as to introduce a backdoor to allow for brute force passcode attempts.


First of all, this isn't about brute-forcing passcode attempts; it is about bypassing the passcode altogether. If the phone is open and can be made to automatically download and install a software upgrade, then this is possible, even if the storage is encrypted. Also, given that Apple is the phone maker, they can of course be coerced to produce phones with this capability, whether they want it or not.

There is no "ideally" ;-)


Not that I agree with backdoors, but Congress should be able to. The courts shouldn't.


Are you supportive of the government's position in this?

There is clearly a vulnerability on top of which a backdoor could be built. But equating the two seems to be interpreting things in favor of the government.


Are you saying the Government has no legitimate interest in the contents of a suspect's or deceased's mobile phone?


I believe that police forces have an interest in using whatever resources are legally available to them to pursue their inquiries, and as such, the FBI is acting according to its mission.

My position is that the government as a whole - including the legislative and executive branches and the Supreme Court - has a duty to national security and civil liberties that would be severely compromised if the FBI prevails here.

I agree with the ex director of the NSA, that strong encryption and secure devices are good for national security, and protect us against cyberterrorism, and that this is far more important than this individual case, and so the FBI should not be allowed to prevail.


The question I'm most interested in is whether newer iPhones are still subject to this attack. Unless I missed it, no answer to that question is presented here.

It sure feels to me like Apple is dancing around that issue. I'm betting that newer iPhones are still vulnerable, and Apple is a bit embarrassed at dropping the ball there. (The Secure Enclave stuff doesn't necessarily protect against this attack, it depends on how it's implemented and the official documentation doesn't quite say.)

If nothing sooner, it's going to be interesting to see what happens in the fall when iOS 10 and the iPhone 7 presumably will ship, along with a new version of Apple's iOS security guide. Diffing that with the 2015 edition could prove quite educational.


Yes, it seems pretty certain that Apple is going to lose this in court, and will be required to break the security on the 5c. But newer hardware with the Secure Enclave could be rendered Apple-proof by preventing an update to the Secure Enclave firmware without first unlocking the phone (or by still allowing a locked upgrade, but wiping all user data when upgrading in that manner). The fact that they haven't said unambiguously that they can't get into newer hardware at all definitely makes it seem like they allow non-destructive updates to the SE on a locked device.


I am not a security expert, but my understanding is that the main problems for the FBI are: a) There is a timed delay for trying new passwords after enough unsuccessful attempts and b) The phone will wipe itself after 10 unsuccessful attempts.

It's pretty much only sufficient to have (a) since the delays will make it take years to guess the password by brute force.
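
Roughly, yes. A sketch, treating the escalating delay schedule (1 minute after 5 tries, 5 minutes after 6, 15 minutes after 7-8, 1 hour from 9 onward) as an assumption taken from Apple's security guide:

    def hours_to_exhaust(keyspace):
        # Cumulative OS-enforced delay to try every possible passcode.
        delays_min = [0, 0, 0, 0, 1, 5, 15, 15] + [60] * max(0, keyspace - 8)
        return sum(delays_min[:keyspace]) / 60

    print(hours_to_exhaust(10**4) / 24)        # 4-digit PIN: ~416 days
    print(hours_to_exhaust(10**6) / 24 / 365)  # 6-digit PIN: ~114 years

So even without the 10-try wipe, the delays alone push a full search of a 4-digit space past a year.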

I just ctrl-f'd for "delay" in the security guidelines[0], which claim that the secure enclave is the one that enforces the timed delay[1], so I guess the only attack vector would be if you could somehow backdoor code onto the chip. I can't find anything in the guide from a quick skim, but I'd suspect the code is on a ROM chip or is somehow prevented from an upgrade without an unlock?

[0] https://www.apple.com/business/docs/iOS_Security_Guide.pdf [1] Page 12


> I can't find anything in the guide from a quick skim, but I'd suspect the code is on a ROM chip or is somehow prevented from an upgrade without an unlock?

This is the big unanswered question. I suspect the same, but so far have not been able to find anything that actually says so. The code is not in ROM, as it can be updated, but it could wipe data if updated without an unlock. But I can't find anything saying whether it actually is. The mere fact that it's possible isn't enough, and their silence is a bit odd.


Seems like it'd be a pretty huge oversight if that vector were open, but I agree with you that it's not very confidence-inspiring if nothing about this is out there.


Depends on your threat model. It's possible that the Secure Enclave was just intended to be a defense in depth against malware and common criminals, not something that could keep Apple themselves out. If so, I'm sure they're re-evaluating that now.


The attack that the FBI is trying to facilitate by getting Apple to make a custom OS would only work on the iPhone 5C and earlier models. Any iPhone with Touch ID and the Secure Enclave would not be susceptible to this attack (it may be to other attacks). The Secure Enclave has a hardware-enforced back-off delay that would stop the brute-force attack. The older iPhone 5C does not have the Secure Enclave, so the brute-force protection is implemented in the operating system. As such, Apple can load a new OS onto the phone that does not have those protections.

Here is a great technical rundown: https://blog.trailofbits.com/2016/02/17/apple-can-comply-wit...


The Secure Enclave's back-off delay is not hardware enforced, it's enforced by the software running in the SE. The big question is whether the SE's software can be updated without first unlocking the device and without wiping the user's data. If such an update restriction is in place, then newer phones would not be vulnerable. If no such restriction is in place then newer phones are vulnerable, they just need the SE software to be modified instead of the main OS software. So far, I have not been able to find any concrete information about whether or not the Secure Enclave in current phones have such restrictions.


Correct, I should have clarified - I mean the back-off is enforced by a distinct piece of hardware (vs it being enforced by the OS). So theoretically they could burn in the firmware for the SE at the point of manufacturing and it could not be updated, but you could still update the phone OS like normal. I imagine they will consider doing this going forward so they cannot comply with these types of requests from the government.


Burning in the firmware would do it, but I'm not sure they want to give up the ability to update this software. I'd say a better plan would be to give the SE a small amount of internal storage. Two 256-bit fields would do. Place a master key in one field, place a secure hash of the current firmware version in the other. On boot, if the firmware hash doesn't match but the firmware signature is valid, rewrite the master key with a new random value. Then build the firmware to allow updating the hash, but only if the phone is unlocked. The normal update procedure would then be: install update, tell SE about the new hash, reboot. A "forgot passcode" update would be unable to do that second part, and would then effectively wipe the phone.

You might want a third field, so you can store both "old" and "new" firmware hashes to temporarily bridge the gap in case the procedure got interrupted at just the wrong moment. But the point being, you could allow updates without this vulnerability, and it shouldn't be particularly hard.
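
A toy model of that scheme, with all names and structure invented here to illustrate the idea rather than to describe any real Secure Enclave:

    import hashlib, os

    class SketchEnclave:
        def __init__(self, firmware):
            self.master_key = os.urandom(32)                       # 256-bit field #1
            self.trusted_hash = hashlib.sha256(firmware).digest()  # 256-bit field #2

        def boot(self, firmware, signature_valid):
            if not signature_valid:
                return  # unsigned firmware never runs at all
            if hashlib.sha256(firmware).digest() != self.trusted_hash:
                # Signed but unexpected firmware: discard the key the user's
                # data was encrypted under, which effectively wipes the phone.
                self.master_key = os.urandom(32)

        def approve_update(self, new_firmware, phone_unlocked):
            # Only an unlocked phone may bless a new firmware hash.
            if phone_unlocked:
                self.trusted_hash = hashlib.sha256(new_firmware).digest()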

Obviously, that doesn't mean they've actually done it yet. I imagine they will do something like your scheme or mine for the next hardware design.


> whether or not the Secure Enclave in current phones have such restrictions

They don't, see https://www.techdirt.com/articles/20160218/10371233643/yes-b...


It seems likely but I'm still not entirely convinced.

I can't entirely figure out how much the former Apple engineer quoted there was involved. He clearly knows how the SE firmware is loaded, but I'm not clear about the level of his knowledge from the SE side. The described mechanism of running a signed blob from RAM doesn't exclude some sort of wipe-on-update mechanism, but it would have to be very low level, part of the hardwired secure boot stuff.

The blog post that's quoted afterwards is total speculation and contains no information about this, even though it claims to.

So, probably, but still lacking a properly definitive statement.


Can someone ELI5 a typical opinion in support of the Government's case? I've read through various comments and I haven't seen a concise opinion in favor and am genuinely curious.

Does it boil down to (1) trust that the Government won't abuse the existence of the tool and (2) trust that the tool will never be leaked?

Or is it more fundamental - that the target data is so valuable that the ends justify the means?

I know it's more nuanced than that, but I think - in particular -someone's view on the All Writs component just follows their view on the above in most cases.

[edit]: I'm considering this a research sub-thread, not a debate sub-thread. Trying to understand, not convince. So forgive me for not responding one way or the other.


I'm staying with a friend who is a retired LEO, and he strongly believes Apple should/must comply. He wants big jail time if they don't.

He thinks it's a matter of safety because terrorists. 14 people were killed, therefore the ends justify the means, as you say.

We got into it the other night, and I think the case boils down to whether we should be allowed to have secrets from the government - secrets the government can never unlock, no matter what. He feels strongly that we should not be permitted to do so, because when a legal warrant is written, the government must have access to everything.

He also doesn't believe the Snowden leaks, and thinks that when it comes to the pursuit of justice, the government should be trusted with everything.


> He also doesn't believe the Snowden leaks, and thinks that when it comes to the pursuit of justice, the government should be trusted with everything.

Not believing the Snowden leaks is akin to denying the Holocaust IMO.

It's completely delusional to reject the validity of empirical evidence on the basis that it does not align with your preferred political narrative.


> Not believing the Snowden leaks is akin to denying the Holocaust IMO.

The parent's wording may have been off. My parsing of "He also doesn't believe the Snowden leaks" inferred a missing "were justified".

Then again, I could see a few different interpretations. We need a clarifying phrase after "doesn't believe the Snowden leaks", like "doesn't believe the Snowden leaks were justified" or "doesn't believe the Snowden leaks tell the whole story". One parsing is "doesn't believe the Snowden leaks are legitimate and authentic".


To clarify:

He doesn't believe that what Snowden leaked is true.

He doesn't believe the NSA is recording internet traffic / phone calls etc.

He doesn't believe multiple global intelligence agencies are working together to gather as much information as physically possible.

In short, he doesn't believe the NSA, etc. are doing the things Snowden said they're doing.

(Yes, I showed him the classified slides; he still doesn't believe it's true.)


I'm not sure where you can justify inferring that extra information. From what you've described, the context simply doesn't exist outside of your mind.


A case I heard recently: if an undercover CIA operative were captured in, say, China (with an iPhone), should Apple be compelled to break the security if demanded by the Chinese government?


Compelled by whom?

Setting aside patriotism, it seems like that's a choice between the lesser of two pains: a) the consequences of non-compliance, set by the Chinese government, vs. b) the cost of _compliance_, set by the US government.

EDIT: Also, why would the CIA ever entrust their communications and data to a format/device that could be cracked by a commercial entity that could be subject to the above scenario?


I think people would argue that since Apple is not a Chinese company they'd be ok with Apple ignoring the request.


I'm sure Apple could just set up its iPhone production lines somewhere else in no time at all, right?


Ask your friend how that's any better or safer than what East Germany did to its people under the Stasi. If he tries to dodge, then I'm gonna say he's not much of a friend. When it comes to protecting the rights of others, which includes their privacy, I would probably have cut that person off day one. I'm a bit of an extremist to be sure, but some things are more precious than a friend, and freedom, IMO, is one of them.


Thanks for writing -

The 'legal warrants' point you raised is interesting. I'm not sure I'd heard that thought clearly articulated before: the idea that when a legal warrant is issued, the Gov't should have access to everything. Something to consider.

OK - thanks for the input!


This is not really a new issue though. If a guy has a safe in his house and the cops get a warrant to search his house, they'll ask him what the code to the safe is. At that point he can tell them, or tell them he doesn't know and got it at a yard sale and was hoping to crack it one day. Of course, I guess they could just take a diamond cutter to it at that point. Can't really do that with software (barring quantum computers).


I don't believe the government should be able to conscript a company to create something that doesn't exist.

That said, I do think that people on the Apple side are being insanely hyperbolic. The x-tries feature is at best security through obscurity. In my mind, this really isn't different from having a feature on your desktop's operating system that deletes data after x-tries. It is not a "backdoor" or "breaking encryption" to remove the drive and plug it into a powerful computer that attempts to crack the password.

They aren't asking for encryption to be weakened or a master key to be created. If the key is bruteforcable and the only thing that prevents it an arbitrary limit from the OS, I don't really consider removing that arbitrary limit to be weakening the encryption.

The entire point of encryption is that you are protected by math and nothing else. The only time that protection can be undermined is BEFORE encryption happens. In my opinion, any changes to the algorithm or environment AFTER encryption are completely fair game.

If the government goes further and attempts to undermine that protection, I think that's cheating. If all the government is asking is for a change to the environment, I don't see how that qualifies for any of the words Apple is using ("backdoor", "master key", ...) - their security depends on obscurity and I guess the question becomes whether the government can compel them to shed light onto that obscurity.

Apple can't do anything about the fundamentals of math, it's still on the government to crack the password and for users to have strong passwords. A key principle of crypto is open algorithms, the only secret should be the key. As long as no effort is made to undermine those keys or the access to encryption, I think anything is fair game if you have physical, court ordered access.


If you want the hard sell, watch Charlie Rose's interview last Friday with Cyrus Vance, the Manhattan district attorney, and Jon Miller, the NYPD's head of counterterrorism [0].

You will hear a lot of ugly words and phrases like "terrorist," "rape," and "child pornography," as well as a lot of pleasant ones like "protection," "safety," "democracy," and "the American public." You will hear an argument that iOS 7's security model was great, and that Apple is challenging the U.S. government as a marketing pitch to foreigners.

You will not hear much that sounds like the sort of thing people say on Hacker News.

0. http://www.pbs.org/video/2365672280/


Ha - thanks. :) I'll watch it. But I'm specifically asking here because I'm trying to understand and learn from the rational arguments. There's never a challenge finding inflammatory or disingenuous ones, and there's not much to learn from them either (on either side of most topics - not unique to this one).


I do not agree with this line of reasoning, and personally find it ridiculous that the court can compel companies to subvert their own novel systems for government use (e.g. require access to suspects' OnStar), but:

This seems, to me, to be similar to CALEA[1] requirements that compel telco companies to implement infrastructure that allows wiretapping. I do not have firsthand experience, but on the face of it telcos are specifically forbidden under CALEA to implement devices or technology in their infrastructure that could prevent "lawful intercept" from occurring. This particular instance does not seem to have a CALEA justification, but that may be a temporary problem.

I think that they are setting themselves up for a win/win scenario and may not be concerned with the legal footing being ironclad. If they win, great. Otherwise, fighting for and losing this case is the stepping stone to going to congress to get a CALEA analog for US companies making communications devices.

[1]: https://en.wikipedia.org/wiki/Communications_Assistance_for_...


VERY interesting. Thanks for posting. I wasn't familiar with CALEA.

For clarity, a CALEA type approach wouldn't compel a company to ACTIVELY subvert their own systems (as in Apple creating new tools for the Gov't to achieve that end under All Writs).

Instead, CALEA would force companies to create LESS secure systems from the start which could then be subverted passively (no All Writs component) on request to allow for "lawful intercept". Is that a fair characterization?

But in sum, a legal argument based on CALEA. Makes sense. Thanks-


The whole point of NSLs is to get the same level of access as CALEA, but not publicly. Apple is requiring the FBI to lay down its cards and submit them to public examination, rather than act in secret, which is progress for democracy in the US.

I say US, because I don't see any mention of foreign countries about either the iPhone 5 or the Secure Enclave. Apple is probably legally required to spy on foreign nationals, isn't it?


For me, it boils down to the fact that I don't understand why Apple would not have the ability to restrict this exploited version of iOS to only run on the device in question. That's the entire point of code signing. They could have the OS refuse to execute on anything other than that specific device, and the FBI can't modify it without invalidating the code signature. The FBI could post the code on Facebook and it wouldn't make any difference, nobody would be able to use it.

With that in mind I find it ridiculous that Apple is refusing.

If the FBI was asking them to leave a vulnerability open in all versions of iOS that only the FBI could access, then I would have a huge problem with that. Apple seems to want everyone to think that's what is going on, but it's not. The FBI is asking Apple to exploit a vulnerability that already exists in an older operating system.

If Apple cared so much about user security then that vulnerability wouldn't have been there in the first place.


In all likelihood, the NSA has already created this hacked version of iOS that does exactly what the FBI is requesting. The government can probably already get the data off of the phone if that's all they're after. But it probably isn't. I imagine all of this is so the government can try to set precedent in order to publicly use that technology in courtrooms and investigations instead of having to keep it hidden from the world.


Ah, the 'parallel reconstruction' approach. That's an interesting consideration, but I don't think a domestic phone for a domestic entity (employer) would probably get scooped up in the collection. An interesting thought though.


Winner winner chicken dinner! The reason this is such a public matter is that it is going to be used as impetus to change laws so that data gained using such methods can be legally used as evidence in a court of law.


What would the NSA's motivation be for creating this? They are mainly SIGINT. This software is only useful when you are in physical possession of a phone.


Some of the Snowden leaks showed that the NSA was intercepting servers in transit and installing firmware with a backdoor.


>>The government says your objection appears to be based on concern for your business model and marketing strategy. Is that true?

> Absolutely not. Nothing could be further from the truth. This is and always has been about our customers.

so it is about marketing :-)

All joking aside though, I agree strongly with this document and I'm both a bit surprised and very happy about their detailed arguments and about the passion they put into this issue.

As a customer I'm happy to see that they are really fighting for me and not giving in, even to the point of refusing a comparatively reasonable request out of fear of setting a precedent.


>so it is about marketing

I read that line the same way. Underneath everything, their business isn't privacy; it's making money. They just happen to take a strong public stance on customer privacy. That seems really difficult to explain to readers without sounding patronizing.


> Is it technically possible to do what the government has ordered?

>>Yes, it is certainly possible to create an entirely new operating system to undermine our security features as the government wants.

So... something to fix in the next release. Apple could be doing this all along. Maybe they've already done it in the past via a FISA warrant.

My point is, if you rely on software for security, and that software can be "upgraded" at any time by the manufacturer, it's a problem. This is the definition of a back door. They could design their OS so that it has to be unlocked to "upgrade", but they didn't...


What you said. They do not address this in this letter.


This letter comes off very strongly. It's as if they treat the government as just another customer. The way they describe their process for assisting law enforcement almost reads like their process for providing developer support:

> We also provide guidelines on our website for law enforcement agencies so they know exactly what we are able to access and what legal authority we need to see before we can help them.

The concerning portion of this letter is:

> Yes, it is certainly possible to create an entirely new operating system to undermine our security features as the government wants.

I have a feeling Apple is currently working on preventing updates to the Secure Enclave ROM from happening while a phone is locked (or at least ensuring the keys are wiped if it does happen while a phone is locked).


"It's as if they treat the government as just another customer."

Shouldn't they? Albeit a customer with different 'needs' under the law. There's a process to follow to get assistance from Apple. Their products are in wide use and they can't reasonably be expected to have a dedicated support person for every request from every customer every minute of every day. "You want our help? Try these things first. If that fails, here's the process for queueing up to get help."

Further, there's a process to make sure law enforcement are behaving well and not, say, stalking some random member of the public. Get the courts involved, provide the right legal paperwork, their lawyers will review it, and they'll respond appropriately.

The idea (and I realize not included in your comment) that governments are above the law, and that government agencies will only use these powers in proper ways is ridiculous. They've proven repeatedly that they can't be trusted with the power they've been given.


I'm not saying that in a negative context. I was just impressed at how strongly worded the Apple letter was, directed at the US government. They almost seemed to be telling them off.

Most companies came out with some sort of half-public-support for Apple's letter after it was published. This follow up by Apple is just so strong in its principles that it was impressive.


Indeed, I agree. I realized long after I wrote my reply that it was toned more aggressively than I had intended. In my mind, I was adding to your comment rather than arguing.


> We also provide guidelines on our website for law enforcement agencies so they know exactly what we are able to access and what legal authority we need to see before we can help them.

Why is that sentence objectionable? Every tech company has guidelines like that on their website.


He didn't say it was objectionable. He just said it treats the government like a customer. I, for one, would like to live in a country where the government is a servant, not a master.


I didn't mean to imply that it sounded objectionable. I was impressed with that public statement by Apple.


For an international corporation, in a global economy, a government is just another customer. They can and will sue, like an upset customer can, but will anyone go to jail for this?

I'm actually not sure. The significant difference here, between the US government and any other government, is certainly a valid one. This is the government that oversees the country where their headquarters are located. It'll be interesting to see how this plays out.

I have no doubt we are witnessing an event that will have an enormous influence on legal systems across the globe. Either way, a significant precedent is being set here.


One thing I don't understand is, what prevents the FBI (or Apple or anyone else) from duplicating the contents of the iPhone in question to a virtual machine, then trying the 10,000 possible 4-digit unlock combinations on virtual machines until they find the correct one?

I assume that since this seems like a fairly easy solution that it's not possible, but what makes it not possible?


The user's passcode is entangled with a key unique to the hardware of the phone to generate the actual key used to encrypt the device. The hardware key is implemented such that it's impossible to read out via software.
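
Conceptually, it looks something like the sketch below. This is not Apple's actual construction (the real derivation runs through the hardware AES engine keyed with the UID, precisely so it can't be reproduced off-device); the UID value and iteration count here are placeholders.

    import hashlib

    DEVICE_UID = bytes.fromhex("00" * 32)  # unique per chip, never readable by software

    def derive_data_key(passcode):
        # Mix the passcode with the device-unique key; copying the flash into
        # a VM without the UID leaves you guessing this output key instead of
        # the short passcode.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 100_000)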


Not sure if 100% applies to the iPhone in question, but the secure enclave was designed to prevent this sort of thing. Here's an intro to it:

https://www.mikeash.com/pyblog/friday-qa-2016-02-19-what-is-...


This applies to later models of iPhone; the 5c doesn't have a secure enclave. All password attempt limiting and erasure of data is implemented by the operating system.


The 5c still has the secret UID baked into the chip.

The escalating artificial delays are implemented by the OS and can be circumvented, but the secret UID is designed to be impossible to extract.

Without that UID, you're brute-forcing a 256-bit AES key, not a 4-6 digit passcode. Practically, the brute forcing can only be done on the actual iPhone.
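
For a sense of scale (reusing the ~80 ms per on-device attempt figure as an assumption, and an absurdly generous billion guesses per second for the off-device case):

    YEAR = 60 * 60 * 24 * 365

    print(2**256 * 1e-9 / YEAR)  # off-device, raw 256-bit AES key: ~3.7e60 years
    print(10**6 * 0.08 / 3600)   # on-device, 6-digit passcode: ~22 hours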


The iPhone in question doesn't have the Secure Enclave, but it follows similar principles. Like others have said, the user's password is entangled with a key unique to the phone, which can't be read through software.

On iPhones with Secure Enclave, not only is the device-specific key stored there, but also the logic which keeps track of the number of and time between repeated failed attempts. On older iPhones, this logic is a part of iOS itself, hence why Apple is capable of overriding it with a modified iOS.


Just a guess: maybe the decryption needs to happen through means present on the device and not in any old VM. Maybe the decryption requires multiple keys: the 4-digit code, a per-device key, a per-session key generated on every boot-up, etc. And they could all be stored such that they never leave the hardware, making sniffing difficult.


Wouldn't your solution also duplicate the fact that the iPhone should erase its memory after 10 failed attempts?


Right, but my assumption is that you'd just be able to load up a new VM every 10 failed attempts.


When Apple says that they've never unlocked a device for the authorities, one should keep in mind that the authorities have, and use, the power to force entities to respond this way.


They have the power to force people to keep quiet, but not to lie.


But lies of omission are fairly easy to construct, and the government seems particularly adept at coming up with them.

In this case, I'd feel much better if there were some sort of ToS canary or something. Does Apple have anything like that?


One question for the FBI: why so much investigation over a crime committed by someone who is dead?

Based on the nature of the crime, I'm guessing there weren't many accomplices. The one guy who helped him get the guns is probably where they need to be focusing their investigation and interrogation. If ISIS helped him, how? He took a gun to work and killed people. That's not an elaborate scheme.

It does not benefit the FBI to lose goodwill with the American people over this case.


Something about law enforcement causes practitioners to stop seeing themselves as peers of the citizenry. It is probably similar to the paternalism physicians exhibit toward patients despite the fact that they recognize the problem and train themselves against it.


Also see: Stanford prison experiment.


What's very sad is that there are actually people out there who would read your comment and believe you to be a terrorist sympathizer. Maybe you're not even just a sympathizer... maybe you're a potential future risk! Once you say anything that could be interpreted as defending a terrorist, a pedophile, or similar evil being... you may as well be one of them.


Could Apple alter the icloud account in question such that a login succeeds with any password or session key? I wonder if there is some variant of that which would allow the remote backup strategy to work.


I imagine the issue is that the iPhone has non-backed up information on it. From what I've read Apple has already turned over the latest backup from ~6 weeks before the shooting. Getting a current backup would require the phone's hardware to cooperate.


I was surprised to learn that iOS 4 - 8 permits updating the firmware without first entering the PIN.

Device Firmware Update mode https://www.theiphonewiki.com/wiki/DFU_Mode

This technical factoid is relevant to the current discussion; before learning it, I did not understand how creating a custom firmware would be useful.

To further ground the discussion, I found this informative:

Legal Process Guidelines U.S. Law Enforcement

https://ssl.apple.com/privacy/docs/legal-process-guidelines-...

Whereas before I was firmly against Apple helping to crack the San Bernardino iPhone, I'm now merely mostly against it.

I don't understand how this action can help FBI. What additional, unique information could possibly be on this phone that they couldn't discover by alternate means?

I don't understand how an error by the FBI obligates Apple to clean up their mess.

My understanding is that iOS 9 changed things so that this kind of forensics backdoor is no longer possible, mooting this discussion.

FBI should take their lumps and learn from their mistakes.


> My understanding is that iOS 9 changed things so that this kind of forensics backdoor is no longer possible, mooting this discussion.

The question Apple would like to avoid is whether, in the future, creating such a backdoor-resistant OS will be illegal. iOS 9.2.1 (the latest pre-ruling release) is legal; however, any future releases of the OS with this capability may be considered a circumvention of or non-compliance with the ruling.


Aha.

I just reread the letter. I would like them to use your direct language.

The use of encryption is a human right (to privacy). I utterly oppose any measure to curtail my right to privacy.

Arguing about the technical whatnots, and who did what when, as I've done here, is a trap, to better obscure the fundamentals. And I fell for it.

So thank you. I know it feels like restating obvious. But I needed to be reminded what's at stake here.


> Yes, it is certainly possible to create an entirely new operating system to undermine our security features as the government wants. But it’s something we believe is too dangerous to do. The only way to guarantee that such a powerful tool isn’t abused and doesn’t fall into the wrong hands is to never create it.

Is this just security through obscurity, then?


No? Not if it requires a key to sign the software.


I think the argument here is that stealing a private key isn't fundamentally more difficult than stealing this iOS backdoor (were Apple to create it). Under this model, Apple's refusal to create a backdoor is in some sense an appeal to security through obscurity because it implies that engineering knowledge is the main obstacle to adversaries, whereas the true obstacle is the ability to steal protected information from Apple (the key or the backdoor). If the key and backdoor were equivalently useful, the existence of a backdoor wouldn't impact the safety of customers because the key already exists.

Of course, whether stealing a backdoor is actually as hard as stealing a key is a legitimate question, but (I thought) Apple had the option to unlock the phone in-house, which would at least keep the backdoor out of FBI hands (the legal precedent, however, could still pose a real threat to user security).


So then why couldn't Apple create this exploit so that it only runs on this specific device? That way the security of all iPhones is still reliant on the security of Apple's key, just like before.


Is GPG security through obscurity, since it only relies on you not knowing my private key?


Yes, if you use that private key to serve as an unauditable 3rd-party trust authority in what should be a single-party or two-party transaction.


They contradict themselves:

> Is it technically possible to do what the government has ordered? … Yes, it is certainly possible to create an entirely new operating system to undermine our security features as the government wants.

And yet:

> We have done everything that’s both within our power and within the law to help in this case.

If it's possible for them to do it, then it's within their power, and it's perfectly within the law for Apple to write a custom OS and deploy it onto a device with the device's owner's permission (in this case, the owner is the County of San Bernardino).

They don't want to do it. Heck, I don't want them to be able to do it. But they can, because they designed a system which they can backdoor.


> Heck, I don't want them to be able to do it.

Why? Having talked to a lot of people who hold this position, the only underlying thought I've been able to discern is that you can't conceive of any time someone else holding encrypted data could hurt you.

If this phone instead belonged to a living rapist or paedophile instead of a terrorist, and was the only evidence proving their guilt, would you feel the same?


Let's not pretend that this is a new issue for American society. We have almost 250 years of making these sorts of decisions.

For example, we know that bad guys use guns to commit crimes, yet we're very very reluctant to ban guns. Bad guys use cars, but we don't require cars to come with remote kill switches.

Bad guys use encryption. Shall we therefore ban it? Or destroy its effectiveness? Do any good guys use encryption? And should we take those uses into consideration as we think about public policy?

The President of the U.S. uses an iPad. Millions of federal employees, including FBI agents, use iPhones. Let's think about the implications of punching a hole in the security of those devices.

It's very easy to make decisions when one ignores the broader context and consequences. That doesn't mean it's the right way to make decisions.


(To play devil's advocate, coming from the perspective of my retired LEO friend)

Legal warrants can be written to seize guns, which can then be tested and information extracted from them.

Legal warrants can be written to search and seize cars, so they can be searched and extensively examined to extract information/evidence from them.

The argument goes that right now, warrants can be written to gain access to basically everything a criminal has/owns/has been in contact with so that it can all be gobbled up and analyzed.

People supporting the government in this believe the same is true for digital data - they don't care that it's on a phone or laptop or "online", they just think a warrant should let the government access it. If Apple can do it, then they must.

(Note I don't personally agree with that, but I understand it)


> If Apple can do it, then they must.

Which is why they ought to ensure that they cannot do it. And it's why we should resist any law mandating that they be forced to include a pre-emptive backdoor.


> Bad guys use cars, but we don't require cars to come with remote kill switches.

Nobody is asking for a remote kill switch. Searching a car is a common task, searching a phone should be the same.

> Let's think about the implications of punching a hole in the security of those devices.

Nobody is talking about uploading new software to every device. This is wildly misleading.


The needs of the many outweigh the needs of the few; if it's a choice between no encryption and a marginal increase in the conviction rate, or strong encryption and a marginal decrease in the conviction rate, I'm in favour of the latter.

But of course bad people can still use extra encryption, so lack of default encryption will mostly hurt good people. This increases the amount by which the detection and conviction rate needs to improve to justify the intrusion.

I think it comes down to how willing you are to let bad people get away with their crimes vs lose your own right to privacy in a world where more and more of your private life and thoughts are stored digitally.

I know I find myself increasingly self-censoring in case what I write gets taken out of context and used against me. I have to second-guess some third party reading my emails, my private notes, etc., and I really don't like that feeling. It doesn't take long for political winds to change, and we're not long past times where people were put to death for what they wrote or who they were.


> The needs of the many outweigh the needs of the few

No, they don't. We live in nations with individual rights. The US specifically is a nation without acceptable levels of healthcare. The needs of the many clearly do not outweigh the needs of a few.

> But of course bad people can still use extra encryption, so lack of default encryption will mostly hurt good people

This is a non sequitur driven by a false dichotomy. Nobody is suggesting the removal of encryption.

> I think it comes down to how willing you are to let bad people get away with their crimes vs lose your own right to privacy in a world where more and more of your private life and thoughts are stored digitally.

How many criminals are you willing to let get away with their crimes? Apple has currently built a system that allows child abusers and some rapists to be all but immune from conviction.

Are you happy with this? I am certainly not.


> How many criminals are you willing to let get away with their crimes? Apple has currently built a system that allows child abusers and some rapists to be all but immune from conviction.

Nonsense! Believe it or not, prior to the existence of smart phones it was possible to convict people of abuse and rape; in fact, it happened quite regularly. The evidence against such criminals need not be on a smartphone; after all, there are the victims; there are witnesses; there is forensic evidence.


> Nonsense! Believe it or not, prior to the existence of smart phones it was possible to convict people of abuse and rape; in fact, it happened quite regularly. The evidence against such criminals need not be on a smartphone; after all, there are the victims; there are witnesses; there is forensic evidence.

So this means it's OK to withhold access to evidence, even if you are simply a private company with no legal standing? No, of course it doesn't.

Please, tell me how a paedophile I know exists and uses an iPhone can be prosecuted for the contents of that device. I'm fairly sure he has taken illegal photos of children he may be abusing, but he lives in Cambodia.

What can be done?


>How many criminals are you willing to let get away with their crimes?

So you are saying Apple is only helping criminals with encryption?

And because weakening encryption can not only be used in special cases (you mention child abusers and rapists in an attempt to appeal to emotions), you don't fear encryption being used against the average person?

For example if the police would search your phone in a traffic stop, that would be okay?

The number of cases you could solve with a crypto backdoor seems very small in comparison to the risks for the average citizen. Privacy is an important right for everyone.


> So you are saying Apple is only helping criminals with encryptions?

No, I'm saying that, arrogantly, the people behind these decisions cannot imagine a scenario in which someone else could hold information about them that would be harmful just by its existence.

They can only imagine their secrets being revealed, such as Tim Cook being outed before he was satisfied.

> you don't fear encryption being used against the average person?

I'm confused as to what you're asking, that's exactly what I fear. A rapist taking photos of his victim, the evidence being insufficient for a conviction, Apple now protects that rapist and his access to his victim's photos at the cost of the victim's mental health.

> For example if the police would search your phone in a traffic stop, that would be okay?

If they had good reason, that's the basis of the legal system after all.

> The amount of cases you could solve with a crypto backdoor

Crypto backdoors are ineffective. Service provider accountability is. This is why Apple is fighting it.


Oh no, crypto backdoors are quite effective at their stated purpose - letting an authority break it when they desire. See also the Clipper chip, DUAL_EC, and so on.

The problem is that we're no longer talking about physical devices like a gun or a safe, we're talking about math. Breaking crypto is basically solving a math problem. If for any reason the problem is solved, it's forever solved, and expecting that solution to stay in a few trusted hands against nation-state level actors (or hell, even motivated security professionals) is absurd.

The issue here is that the FBI is demanding that Pandora's box be opened. There is no closing it again. Are you ready to sacrifice the safety of every iPhone everywhere based on a promise from the FBI?


I disagree with your healthcare analogy. It is in the interest of the many for the US to have widespread, cheap, available healthcare. It would give our citizens longer, healthier lives, which from a purely economic standpoint would allow them to work longer and increase GDP.

The US already spends significantly more per capita on healthcare than countries like the UK that have socialized it. Right now, the needs of the few (for insurance companies to make huge amounts of money) are outweighing the needs of the many. I suspect that will change over the coming decades, but there's a long political battle to fight before we see any real change.

Edit: one of many sources for info on healthcare spending http://www.politifact.com/truth-o-meter/statements/2015/dec/...


> I disagree with your healthcare analogy. It is in the interest of the many for the US to have widespread, cheap, available healthcare. It would give our citizens longer, healthier lives, which from a purely economic standpoint would allow them to work longer and increase GDP.

I agree it is, but my point is that despite this, it is not implemented in any way. The needs of the many in the US do not outweigh the needs of the few. Nor in most cases should they. Individual healthcare has no significant harm to it, but prohibiting the searching of communications and data on someone's primary computer has major harm implications.


> The needs of the many clearly do not outweigh the needs of a few.

> How many criminals are you willing to let get away with their crimes?

If the needs of the many don't outweigh the needs of the few, then why does the total number of criminals matter?

If I want to buy strong encryption with no backdoor, and I'm not a criminal, then why would the issue of crime matter at all?

You're not being consistent in your logic.


> If the needs of the many don't outweigh the needs of the few, then why does the total number of criminals matter?

Because the victims matter.

> If I want to buy strong encryption with no backdoor, and I'm not a criminal, then why would the issue of crime matter at all?

Being a criminal isn't an attribute of a person, it's the actions the person carries out. I don't give a shit about jaywalking, but I care about rape victims. This is not inconsistent.


The 4th amendment also helps pedophiles and rapists get away with crimes. It's obvious why police need some limits on their investigative powers.

Are you also in favor of DNA typing and fingerprinting everyone at birth into a national database? How about requiring all citizens to carry a GPS locator, camera and microphone which are always recording into a national database? We could always just limit access behind a warrant, right?

We create limits on police power because police abuse that power and it harms a free society. Backdoors into our personal electronic devices should not be allowed for the same reason.


> Are you also in favor of DNA typing and fingerprinting everyone at birth into a national database?

Yeah this has a lot of benefits and helps catch criminals extremely quickly. It has virtually no privacy implications. I'm fine with it and some nations do it.

> How about requiring all citizens to carry a GPS locator, camera and microphone which are always recording into a national database? We could always just limit access behind a warrant, right?

This is about the most disproportionate act I can imagine. If you have to reach this far, you don't have any point to defend.


Glad to see we found some common ground. That smartphone you're carrying... once companies can be compelled to backdoor them that's exactly the world we would be living in.


Well why don't we wait for that to be suggested to start getting outraged. In the case at hand, I don't believe Apple for a second that they would be decreasing the security for any other phone by complying with the FBI request. They're lying or exaggerating as far as I can tell.


It's a very bad idea to wait until after the precedent is set to try to ensure that this does not become routine. It's crucial to have the debate now, before the precedent is set, because of the way the US legal system works. If you can establish that under the AWA Apple must comply with this request for a backdoor, then future attempts to compel the creation of additional backdoors on any electronic device (think "Internet of Things") are approved as a matter of course.

While you might believe that Apple is capable of keeping the backdoor from being used unlawfully (although, why would you when they clearly failed to secure the 5c from this attack in the first place!) how about every other electronics manufacturer in the world?


> It's a very bad idea to wait until after the precedent is set to try to ensure that this does not become routine.

Well then why don't we start making this case about abortion-rights precedent, for how applicable it is? This has nothing to do with the general practice of making devices backdoorable. It is a straightforward search of a device with a warrant.

Apple failed to secure the 5c, but they don't have to fail to secure others. I have every reason to believe they could have made the phone refuse to install updates without a user password, which would make this backdoor impossible.


> It is a straightforward search of a device with a warrant.

No, it's not.

If it were a straightforward search of a device with a warrant, then -- the FBI having the device and the warrant -- no third party would have to be compelled to be involved.

That they are trying to compel Apple to write special software to enable what the FBI wants very clearly demonstrates that it is nothing like a straightforward search of a device with a warrant.


Hmm, that seems untrue. Is this the first case where an agency had to get some assistance from a third party to execute a warrant? Like, getting access to a storage unit, or getting access to documents? In the case of a locked door, don't you think it would be reasonable to require a landlord or such to unlock the door if they could? Especially if the walls were literally impenetrable.

I don't know. It seems like that is the way the law works now. If it's not what we want, maybe that should be the discussion. But this doesn't seem fundamentally different than any other normal search order. Superficially different, sure. Instead of simply unlocking a door, it's more like Apple has the blueprints for how to make a key and they're being asked to do so. Apple is implying (but strangely, not explicitly saying) that they can't do that without harming security for other people. I don't believe it, or they should come out and say that.

And lastly, I think it's splitting hairs to complain that they're being asked to write special software rather than just hand over documents or unlock a door. To a lay person that may seem like an excessive burden, but for subject-matter experts it should be pretty easy. If it isn't easy, again, that should be what Apple says, but they haven't. Because it's embarrassing for them.

The position Apple should be in is to say "we cannot comply with this request". That would be great. But they can, pretty easily I would guess, so they're beating around the bush instead.


> If this phone instead belonged to a living rapist or paedophile instead of a terrorist, and was the only evidence proving their guilt, would you feel the same?

Yes. There are many cases where the only evidence against someone lives inside his own head, and the State has no right to compel someone to give evidence against himself. Likewise, there are cases where someone has encrypted the only evidence against him, and the State has no right to compel him to decrypt that evidence.

How often is any crime perfect, anyway?


> Likewise, there are cases where someone has encrypted the only evidence against him, and the State has no right to compel him to decrypt that evidence.

My country does.

> How often is any crime perfect, anyway?

Significantly more often if Apple gets their way, as they provide guarantees backed by the richest company in the world. Taking a photo of your rape victim used to be idiotic and a way to get caught. Now it's a way to keep power over your victim as Apple will protect that photo.


They could have/should have circumvented all this by not allowing firmware to be forced onto a locked phone without it wiping its own key store.

Admittedly users could also solve the issue for themselves by using much longer passwords instead of short passcodes.


That would have eliminated their ability to fix bugs in the firmware, though. They literally did this last week with the "Error 53" patch.

There are tricks to allow this, but broadly no: what we're talking about here is Apple engineering a snoop-proof architecture that remains resistant when the attacker is Apple itself.

And that's just not going to happen in any practical way. Eventually, if the government wants to compel a backdoor in iOS encryption, there will be a backdoor to iOS encryption. Arguing otherwise is just fooling ourselves.

And it's a silly issue anyway. If you want snoop-proof encryption on your personal device, install Linux, select "encrypt my drive", and memorize a secure pass phrase. Done. Relying on a third party hardware vendor to do it for you won't ever work.


Well, it would have to wipe the keys to update the firmware. Alternatively the update process could ask for the password/passcode.

I agree with you, but even if you install something open source you're still trusting the hardware, so at the moment there's basically no practical method of not trusting any hardware vendor at all. Obviously when you get all your hardware and software from the same vendor then it makes a move on the government's behalf much more practical for them.


> even if you install something open source you're still trusting the hardware

Not for the encryption. That's done in software. A seized linux laptop with an encrypted partition using a strong key is effectively snoop proof by the definition we're using here.

It's true that hardware could have other attack vectors: a key logger to intercept the pass phrase would be an obvious one. But again, that's just my point: Apple is in no privileged position here. If they get compelled to backdoor the iPhone then no amount of security architecture along the lines you posit is going to help us, because they can just be compelled to defeat it.
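(A minimal sketch of what "done in software" means in practice: a key derived from a passphrase with a memory-hard KDF, then authenticated encryption, using the widely available Python "cryptography" package. No vendor or special hardware has to be trusted for the math to hold; the passphrase strength is the limit.)

  import os
  from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
  from cryptography.hazmat.primitives.ciphers.aead import AESGCM

  def encrypt_blob(passphrase: bytes, plaintext: bytes) -> bytes:
      salt = os.urandom(16)
      # Memory-hard derivation: brute force is limited by passphrase strength.
      key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
      nonce = os.urandom(12)
      return salt + nonce + AESGCM(key).encrypt(nonce, plaintext, None)

  def decrypt_blob(passphrase: bytes, blob: bytes) -> bytes:
      salt, nonce, ciphertext = blob[:16], blob[16:28], blob[28:]
      key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
      return AESGCM(key).decrypt(nonce, ciphertext, None)

  if __name__ == "__main__":
      blob = encrypt_blob(b"a long memorised pass phrase", b"private notes")
      assert decrypt_blob(b"a long memorised pass phrase", blob) == b"private notes"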


> Not for the encryption. That's done in software. A seized linux laptop with an encrypted partition using a strong key is effectively snoop proof by the definition we're using here.

You can't run the software without hardware, so it has to be trusted. Don't misunderstand me, this is obviously considerably more far-fetched than Apple attacking their own software/hardware combo. However, assuming (a big assumption) that we trust what Apple are telling us at the moment, a seized iPhone with a strong password would currently be just as snoop proof. In fact, this includes the phone that spawned this conversation.

They can be compelled to defeat their own security if you're accepting continued updates to your phone. Under the security architecture I've (loosely) described they can't attack it without the user accepting an update. Of course, you're totally correct in practice because you're most likely just going to have to trust Apple updates as they come out.


>The digital world is very different from the physical world. In the physical world you can destroy something and it’s gone. But in the digital world, the technique, once created, could be used over and over again, on any number of devices.

They leave out the fact that Apple would need to sign (literally, using their private key) every time it is used.

>Unfortunately, we learned that while the attacker’s iPhone was in FBI custody the Apple ID password associated with the phone was changed. Changing this password meant the phone could no longer access iCloud services.

Why can't Apple change the password back, reset the flag that says the password was changed, and have them turn it on again?


They don't really leave it out of the equation, though they don't explicitly state how it factors in. The issue is that a high profile case that people want to be prosecuted to the fullest extent is being used as a wedge to force a large undertaking (producing the unsecure firmware) to be made. Future requests from law enforcement on much smaller cases would then only require the smaller undertaking (signing the existing firmware), and resisting that court order would be significantly more difficult.


If the court decides that they're obligated to comply, then that should apply in all similar cases. Resisting that order would be more difficult because they already lost the battle, not because they already created the tool.


My point is that it would also apply in smaller cases for a different reason than simple precedent: it would require no significant work not already undertaken.

This is the crux.


But why would that be bad? It shouldn't require significant additional work, even in smaller cases. If the FBI is getting warrants to access things without actual probable cause, then we need to be outraged about that issue, not how they get access.


It really seems like you've argued against yourself with this.


You are assuming the password is the key to authenticating. It might be, but it also might be that there's a two way handshake that sets up an encryption key for transmission of backups in a somewhat similar manner to how PFS works with SSL.
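(To make the PFS analogy concrete, here is a toy forward-secret handshake with the Python "cryptography" package: both sides use fresh one-time key pairs, so knowing the account password later, or changing it, doesn't let you recompute old session keys. This is only the general pattern, not Apple's actual iCloud protocol.)

  from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
  from cryptography.hazmat.primitives.kdf.hkdf import HKDF
  from cryptography.hazmat.primitives import hashes

  def session_key(shared_secret: bytes) -> bytes:
      return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"backup-session").derive(shared_secret)

  # Each side generates an ephemeral key pair for this one session.
  phone, server = X25519PrivateKey.generate(), X25519PrivateKey.generate()

  # Each combines its own private key with the other's public key.
  k_phone = session_key(phone.exchange(server.public_key()))
  k_server = session_key(server.exchange(phone.public_key()))

  assert k_phone == k_server  # a shared key that was never transmitted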


Whatever it was, changing the Apple ID password changes something on Apple's servers, and if they have the old state they can change it back.

The only two obstacles I see are if:

1. Apple deletes info from their server on password reset

2. The phone was on, and had already received the message not to try to sync anymore because of the password change

Both seem unlikely but possible.


You missed the entire point of my post. If I was designing a protocol I intended to be secure, I would add code that would fail if the password was changed and then changed back. I'm not saying Apple did this, but it's entirely possible they did.


How would the phone know that the password was changed if it was off the whole time?

Either 1 or 2 must be true for my proposal to fail.


It hasn't been off, that's been established already.


Where?


Just speculating, but if they changed the password using a "forgot my password" email verification method, then there may no longer be any record of what the old password was.


But then the data was on Apple's servers, and could possibly be recovered with data recovery. Also, I doubt Apple deletes that when there's no need to.

I know Google stores previous passwords, and will alert you if you try logging in with a previous password.


[deleted]


Hashes or whatever. The point is they have the ability to tell if I typed in an old password, so they must store some form of it somewhere.


My primary concern here is that there are 14 people dead, and the powers that are asking for this obviously failed to protect them. To me, the arguments people are making for Apple to comply are like abstinence education. With all the facts pointing to the conclusion that, even with everything they ALREADY have on us, they are completely useless at "protecting the American public", why are they using that as an argument here? Obviously because people are still buying it, but at some point shouldn't we point back to the fact that they were unable to stop 14 people being massacred?


>"hundreds of iPhones they want Apple to unlock if the FBI wins this case."

Only hundreds?

It's not "unlocking" anyway, it's exposing the phones to an opportunity of cracking them open. A non-trivial time-consuming task, hardly the domain of opportunistic hackers with stolen iPhones.

>"government-ordered backdoor."

"Backdoor" is not what is being asked for, so they shouldn't use the word "backdoor".

Encryption is not under threat by this request. If raw computing power can break the encryption, then Apple should improve their encryption. Use more bits, more salt or whatever. Make it so a computer needs 50 years to crack a password, even with electronic brute force. Then it wouldn't matter whether the self-destruct kicked in or not.

And no mention of the compromise offer for Apple to keep the alternative OS on their premises and destroy it after.

Apple are trying just a bit too hard to "not put customers at risk". The risk is almost zero.

If Apple's security was as good as they claim, then not even Apple, no matter what they did to help, could crack the phone. That's where we want to be. At the point where it simply doesn't matter what the FBI asks for, the phone is uncrackable. Sounds to me like we're not there yet. Apple helping crack this phone will help us get there. And that's why I don't agree with Apple's position here. Let's see this phone cracked open, and then evolve the security to a point where a similar request would be impossible to achieve no matter what Apple or anyone did.


> It's not "unlocking" anyway, it's exposing the phones to an opportunity of cracking them open. A non-trivial time-consuming task, hardly the domain of opportunistic hackers with stolen iPhones.

Semantic nit picking aside, it takes a trivial amount of time for a computer electronically submitting passwords to crack a 4 digit code.

> "Backdoor" is not what is being asked for, so they shouldn't use the word "backdoor".

More semantic nit picking. They're asking for Apple to write software to disable the very security features that make the 4 digit passcode secure.

> Make it so a computer needs 50 years to crack a password, even with electronic brute force. Then it wouldn't matter whether the self-destruct kicked in or not.

Says the armchair security expert.

> Apple helping crack this phone will help us get there.

No, it won't. All it will do is open a new security hole where one didn't exist previously.


Are you suggesting a 4 digit passcode is secure? News flash: it's not.

Any 4 digit passcode is insecure by its own nature. Its security depends on another, unrelated system. This is an inherent weakness.

If you're stamping your foot demanding that your 4 digit passcode not be compromised by the FBI or anyone else, may I suggest a smarter option: choose a longer passcode.

Choose 11 digits and then it doesn't matter if someone at Apple or FBI or Mr Robot goes bananas and decides to write some software that compromises your phone's passcode retry limit. 11 digits is not trivial to crack via electronic means.

If you're relying on a combination of a 4 digit passcode and faith in your phone's manufacturer that the retry and self-destruct methods can't be hacked, well that's your own irresponsible position. Keep it if you like, but it sounds like an Apple passcode fanboy party waiting to be crashed.

As long as encryption remains solid without back doors, that's all that matters. The rest is a matter for Apple to choose whether to cooperate with an investigation into serious crime. They choose not to, citing "back doors" and "protecting users". No surprises there.
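(Rough arithmetic on the passcode-length point above, assuming guesses have to run on the device and the roughly 80 ms per attempt that Apple's key-derivation calibration is reported to impose; the exact figure is Apple's, not mine.)

  PER_GUESS_SECONDS = 0.08  # reported on-device key-derivation cost per attempt

  for digits in (4, 6, 11):
      worst_case = (10 ** digits) * PER_GUESS_SECONDS
      print(f"{digits}-digit passcode: ~{worst_case / 3600:,.1f} hours worst case"
            f" (~{worst_case / (3600 * 24 * 365):,.1f} years)")

  # 4 digits  -> minutes
  # 6 digits  -> about a day
  # 11 digits -> centuries (and that is before escalating delays or the wipe limit)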


Isn't the solution for Apple to have the phone also sign the firmware update, so that the user has to enter their passcode to accept the update and add a signature with their own key?

If the firmware isn't signed by both keys (the user's public key being stored in the Secure Enclave) then the phone should refuse to boot.

That way, even if Apple is compelled to sign a rogue firmware, the user must also be compelled to accept it.
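(A sketch of that two-signature idea, using the Python "cryptography" package: the device only accepts firmware carrying both the vendor's signature and one made with a key that is only usable after the user enters their passcode. Purely illustrative of the proposal above, not Apple's actual boot chain.)

  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
  from cryptography.exceptions import InvalidSignature

  vendor_key = Ed25519PrivateKey.generate()  # stands in for Apple's signing key
  user_key = Ed25519PrivateKey.generate()    # usable only after passcode entry

  firmware = b"new firmware image"
  vendor_sig = vendor_key.sign(firmware)
  user_sig = user_key.sign(firmware)         # i.e. the user consented to the update

  def boot_allowed(image: bytes, v_sig: bytes, u_sig: bytes) -> bool:
      try:
          vendor_key.public_key().verify(v_sig, image)
          user_key.public_key().verify(u_sig, image)
          return True
      except InvalidSignature:
          return False

  print(boot_allowed(firmware, vendor_sig, user_sig))    # True: both parties signed
  print(boot_allowed(firmware, vendor_sig, vendor_sig))  # False: the user never consented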


It would then be impossible to fix boot issues like the recent Error 53 problem.


In the specific case of the Error 53 problem I don't see a problem. In the case fingerprint sensor has been replaced with a non standard version I really want to the phone not to boot because there is a fair chance its been compromised.

(That said I think Apple's handling of the issue was terrible and it should have given a much more specific error and Apple should have been much less douchebaggy about replacing the sensor with an official version)


That error was a factory test error code -- it was never meant for end users. It was a bug, that's all. Any other kind of bug could exist that prevents a device from booting (like the recent Jan 1 1970 issue).


I would like to know the technical limitation law enforcement is facing when trying to decrypt data in iOS 8.

I assume the data is encrypted using a key derived from the user's passcode, and that that key is purged from device memory after an idle period. Brute force attempts to guess the passcode are throttled, and too many attempts cause the device to delete the encrypted data.

Can someone confirm I'm on track so far?

Then, law enforcement would be limited to trying to circumvent the passcode entering throttling logic on the device, which Apple has physically engineered to be a destructive operation, thus it's outside the capabilities of even the most sophisticated technology labs in the US government?

Am I still on track?
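(For reference, the throttle-and-wipe behaviour described above can be sketched as a small state machine. The delay schedule here is only illustrative; the real limits are enforced by the device itself, which is exactly what the FBI is asking Apple to remove so that guesses can be submitted electronically at full speed.)

  import itertools

  # Illustrative delays (seconds) after the Nth consecutive failure.
  DELAY_AFTER = {5: 60, 6: 300, 7: 900, 8: 900, 9: 3600}
  WIPE_AFTER = 10  # with "Erase Data" enabled, the keys are destroyed here

  def record_failure(failures: int):
      failures += 1
      if failures >= WIPE_AFTER:
          return failures, None  # None = keys wiped, data unrecoverable
      return failures, DELAY_AFTER.get(failures, 0)

  if __name__ == "__main__":
      failures = 0
      for attempt in itertools.count(1):
          failures, delay = record_failure(failures)
          if delay is None:
              print(f"attempt {attempt}: keys wiped, data gone")
              break
          print(f"attempt {attempt}: wrong passcode, next attempt allowed in {delay}s")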


Although focussed on the Secure Enclave, this may be of interest: https://www.mikeash.com/pyblog/friday-qa-2016-02-19-what-is-...


To me this post explained a lot of the legal side of the issue: http://www.zdziarski.com/blog/?p=5645 I think it may have been on HN earlier.


If corporations are people, would requiring a corporation to produce work product constitute a form of slavery? Or if writing code is a form of speech, would forcing them to write code violate their constitutional right to free speech?


Our forthright discussion on this issue on HN, while valuable, will not help the broader problem of educating the voting public. It's kind of like John Oliver's interview with Edward Snowden[1]. This issue needs to be translated into "pics of my junk" and become a sort of meme. Is the issue being framed wrong more of a problem than it not being framed at all?

1. https://m.youtube.com/watch?v=XEVlyP4_11M


Maybe I should start offering 1M USD for a leak of said software. OK, I am just kidding, but just imagine how much some organizations would be willing to offer to get their hands on it. Would you ever trust Apple again if it were known that this software had leaked? What if it were leaked to a corporate competitor? What if it were leaked to an enemy spy agency? Apple would be doomed.


That's a lot of money to pay for a piece of software that only works on one phone that you don't even have access to.


I want to believe, but I'm having a hard time.

According to the New York Times, for whatever they are still worth:

> Apple had asked the F.B.I. to issue its application for the tool under seal. But the government made it public, prompting Mr. Cook to go into bunker mode to draft a response, according to people privy to the discussions, who spoke on condition of anonymity.

http://www.nytimes.com/2016/02/19/technology/how-tim-cook-be...

Is the NYT reporting correct? I know they tend to side with natsec bullshit. I fear this being a really big PR stunt.

Also, the NSA siding with Apple is just an expression of rivalry, no?


The only two reactions to this news seem to be cynicism and praise. I think both voices in response to Apple's letters are valid and useful for improving user security in the future, and yet incomplete by themselves.

Sure, Apple should be praised for refusing to give government agencies the ability to unlock an iPhone, but a significant part of their motivation is not altruistic. It's in Apple's self-interest to make a stand in this case, but we can't always trust corporations to prioritize customer privacy over caving to government pressure.

Similarly, Apple has already admitted that a backdoor exists for all iPhones. In my opinion, this is an inexcusable security hole at best, and at worst an implication that Apple intended at some point to comply with government requests for encrypted information. However, the fact that the FBI has made this request in the first place, and that Apple is in a position to decline (at least initially) and make it public, is a good sign that the three-letter agencies may not be as all-knowing as some may fear.


> and at worst an implication that Apple intended at some point to comply with government requests for encrypted information

Apple's backdoor is for straightforward business reasons - they want to retain digital ownership of people's devices to take a cut from app distribution. Plus, keeping control is harder than giving up control. Keeping control allows total flexibility, like the flexibility to undermine security that's under discussion. Removing control requires careful planning to avoid later problems with no flexibility to fix them.


No, they could have made the device actually secure without losing any ability to take a cut from app distribution. Later iOS versions do it (although it seems to be unsettled whether a backdoor is still possible in those versions). The iOS version in question has a very obvious backdoor, which Apple is understandably getting really defensive about and doesn't want that fact made obvious.


I'm not sure I see what you're arguing. Implementing a more secure system is never convenient, but that obviously has nothing to do with whether it's justified.

As for my quoted statement, how can you really disagree? Having a system where "only the good guys" (i.e. Apple, right?) can break the security of any device is precisely what law enforcement has been asking for for years, and what HN users and the tech-savvy in general have been railing against. Now that Apple has completely admitted that this system exists, users are downvoted here for pointing it out?


I'm merely disagreeing on Apple's motives. Yes, the voyeurs want backdoors, and yes Apple has built a backdoor. But I don't think Apple has created this backdoor for law enforcement. I think it's simply due to them, like any other company, having a hard time giving up control and creating open platforms.

Don't sweat the downvotes. There seem to be a lot of Apple customers that don't have a technical clue about what guarantees other systems actually provide or what capabilities are theoretically possible. I surmise they view computing systems solely in terms of productized offerings from companies, and thus Apple is the leader of the pack for privacy and this case is key to preserving that privacy.


> Similarly, Apple has already admitted that a backdoor exists for all iPhones. In my opinion, this is an inexcusable security hole at best, and at worst an implication that Apple intended at some point to comply with government requests for encrypted information.

A backdoor does not exist and Apple is fighting the government to create one.


I think the implication is that Apple can push an iOS upgrade with security holes, and without an open-source system, we can't know if it has security issues or not. (At least that's the impression I've gleaned from the other comments)

I don't own an iPhone, so I don't know how the update process works.


The backdoor is the fact that Apple has the ability to push updates without a user's permission, as multiple other users have pointed out.


If it wasn't for Tim Cook...all the other tech giants are failing ethically.


Thing that bugs me most is the url `/customer-letter`. Is this the one and only "Customer Letter" they're ever going to write?


Certainly claims to be a different Apple than the one that released the iPhone3 that claimed it encrypted Exchange 2007 data (when it did not).


Can they not just clone the disk and bring it up in a vm? Is there anything that would prevent them from building some tooling for that?


I dislike Apple as a matter of principle, but I really love them when they are kicking and screaming for the right cause.


Why can't the FBI just create a modified operating system?

I haven't seen this question answered anywhere.


It needs to be signed by Apple in order for it to be loaded.


Why couldn't the FBI subpoena Apple's signing key?

That way the FBI isn't asking Apple to create any software, it's only asking for something that already exists.

Obviously that would be bad for Apple and computer security in general, but is there precedent that prevents it?


Disclaimer: kind of glad that Apple's making noise, but kind of frustrated that it's about this specific case.

The answers to these questions have some pretty deceitful phrasing...

>First, the government would have us write an entirely new operating system for their use.

Only "new" in the sense of not being exactly the same as the current one. Implies much more work than we know to be the case.

>Law enforcement agents around the country have already said they have hundreds of iPhones they want Apple to unlock if the FBI wins this case. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks.

The master key analogy falls apart because the order specifically calls for making a version that only works on a targeted phone. At best it would be the equivalent of Apple being asked to make many individual keys. Unless, of course, they want to make a version of iOS with the exploit that would work on any iPhone.

> Of course, Apple would do our best to protect that key, but in a world where all of our data is under constant threat, it would be relentlessly attacked by hackers and cybercriminals.

This is implying hackers could do anything with a version of iOS that is made to only work on one phone. You could absolutely release the update file that the FBI is asking for and have no risk of compromising anything because (again) this is for a specific phone.

>Has Apple unlocked iPhones for law enforcement in the past? >No.

(The answer then proceeds to say "Actually yes we have, just not past iOS 8.")

>For devices running the iPhone operating systems prior to iOS 8 and under a lawful court order, we have extracted data from an iPhone.

>We feel strongly that if we were to do what the government has asked of us — to create a backdoor to our products

Using a backdoor already existing in your product...

>One of the strongest suggestions we offered was that they pair the phone to a previously joined network, which would allow them to back up the phone and get the data they are now asking for. Unfortunately, we learned that while the attacker’s iPhone was in FBI custody the Apple ID password associated with the phone was changed. Changing this password meant the phone could no longer access iCloud services.

Seriously FBI?

I know this letter isn't for me. I want to be on Apple's side based on how they present the case. But if you look at the court order, at the fact that the FBI got a warrant for a specific device, at the fact that they're asking for an unlock of a specific phone, at the fact that it's technically feasible to do this without compromising all iPhones thanks to digital signing...

My impression is that Apple's position is that it's technically infeasible to make this exploit, which isn't really true.

There's the other "but with this, we'll have to do a bunch of phones" argument... is there a term for being overburdened with writs from the court? What's the constitutional protection against that? That feels like the only valid defence at this point for them (from a legal standpoint)


> My impression is that Apple's position is that it's technically infeasible to make this exploit, which isn't really true.

They've been perfectly clear that they could do it, and then explained why they feel it would be wrong to do so, nothing to do with it being technically unfeasible.


That is generally what is meant by new. People always object whenever any software is described as new, and it is ridiculous. It isn't just software where we don't start from scratch and yet still call things new. iPhones themselves are a good example, but the examples are everywhere.

If you think this request doesn't represent significant work then I question your ability to estimate such things. Just making a version of iOS that runs without being written to disk and is never written to disk, as specifically requested, would be a significant task. Being sufficiently satisfied that your code restricting it to a single device is correct and not exploitable is not something I would care to be responsible for ever. I can't believe people keep trivializing it. Defense in depth is generally considered necessary. Removing all the locks but Apple's signing key and calling that just as secure is foolish.

The master key analogy is sound if it is referring to the technique rather than the specific software. Does anyone believe this will only ever be done once, or that it will always be done correctly every time, including every other manufacturer? Surely we can agree there is some risk here.

They said no and explained what they did instead of unlocking. Seems so obvious that you must want to argue about the definition of unlock? Really?

I do not know how Apple could more clearly state that they could make this tool. How could you come away with an inaccurate impression?


> There's the other "but with this, we'll have to do a bunch of phones" argument... is there a term for being overburdened with writs from the court?

Yes, it is called "burdensome". It very well may be ruled too burdensome for them to unlock one specific phone, let alone all the others that have piled up. That is where the case was as of a couple of days ago, with Apple being asked by the court whether the request would be unreasonably burdensome for them to comply with.


Ultimately the point is that if we allow the government to compel companies to perform work, where's the line? If the AWA is upheld by all the courts, how do we prevent the government from just dictating to companies continuously the work they must perform on the government's behalf?


The government already does compel companies to perform work. Every other thing Apple already did to comply with the order was work. You think it was free? There are myriad examples of companies being compelled to do work to help an investigation.

If Apple could make the case that the change would be so burdensome that they can't feasibly do it without harming the company, they have a case to resist. I don't believe it, but they have to make that case.


I'm struggling to think of a subpoena that wouldn't involve being compelled to work. Subpoenaing unencrypted customer data still requires work on the company's part. When does a subpoena become a burdensome amount of work?

Furthermore, the government already forces companies to perform work. For example, paying taxes, providing insurance for employees, complying with industry regulations.


"When does a subpoena become a burdensome amount of work?" This is certainly an important part of the question. I don't have a well-formed opinion on how to define limits.

"Furthermore the government already forces companies to preform work. For example, paying taxes, providing insurance for employees, complying with industry regulations." These are well-defined items with known elements about how to implement them. At this point, everyone knows that when they start a company, taxes, insurance, and regulations are part of the game.

But when law enforcement shows up making random demands, or convinces a court to issue an order for random demands, there must be limits. If there are no limits, there will be no end to random government requests.


>My impression is that Apple's position is that it's technically infeasible to make this exploit, which isn't really true.

There's an entire section in TFA where they specifically say that it is not technically infeasible.


The whole article is exceedingly disingenuous though. Which makes me not really believe their objections.

For example, calling it "creating a whole new OS" is basically a lie. Saying it would weaken all devices is basically a lie. Unless what they mean to say is that they cannot fathom a way to make a firmware that only works on one specific phone. But if that's the case they should have said that.


> I, for one, would like to live in a country where the government is a servant, not a master.

Would you? Why not Somalia?


We detached this subthread from https://news.ycombinator.com/item?id=11151312 and marked it off-topic.


I don't see the point you are trying to make. I'm simply suggesting that "treating the government as no better than a customer" is probably a healthy attitude. In general, I think society is a lot better off when the government has to beg for data, fight for it in court, and loses more often than not. Government should only be "powerful" in the sense that it has great power to protect the liberty of the citizens who own it.

I recognize this as an idealistic attitude. I've always leaned libertarian.


> I don't see the point you are trying to make.

Somalia is a country where the government are beholden to the citizenry.

> I'm simply suggesting that "treating the government as no better than a customer" is probably a healthy attitude

I see no evidence for that. Being able to treat a search warrant as if it's a polite request would help pretty much nobody.

> I recognize this as an idealistic attitude. I've always leaned libertarian.

By typical libertarian logic, Apple would be justified in torturing and executing the FBI agents on their property, as property is private and the Government is subordinate. The FBI agents violated the Non Aggression Principle by attempting to coerce Apple's employees to act.

Where have I mistaken your logic there?


1 - To whom do you suggest the government should be accountable?

2 - I believe a search warrant is a completely valid and enforceable way to compel information, including in this case. But if Apple cannot comply without endangering every iPhone owner, then the other owners of iPhones have a stake in the outcome. The government's right to investigate this crime doesn't expand to the right to endanger millions of people's privacy and security.

3 - Torture would be a crime against the individual agents, and punishable as a crime unto itself. Libertarian philosophy doesn't suggest that you have a right to hurt other people, unless in self-defense.


> 1 - To whom do you suggest the government should be accountable?

Other governments and ultimately the citizenry as a whole.

> if Apple cannot comply without endangering every iPhone owner, then the other owners of iPhones have a stake in the outcome

They do not. No more than owners of doors when the FBI executes a warrant.

> The government's right to investigate this crime doesn't expand to the right to endanger millions of people's privacy and security.

No plausible mechanism has ever been put forward where this would be the case.

> Libertarian philosophy doesn't suggest that you have a right to hurt other people, unless in self-defense.

Right, they attempted to coerce your staff on your premises into acting against their own interests. This is a violation of the NAP and therefore any level of violence in self defence is justified.

This is one of the most core Libertarian beliefs.


I am reminding myself of this XKCD comic: https://xkcd.com/386/, but at the risk of wasting our time...

You said that in Somalia, the government is accountable to the citizenry. Yet you suggest that that's legitimate ("...ultimately to the citizenry as a whole..."). If I personally object to a law, do I have any recourse? What if my town does? My state? My region? At what point does it become immoral for the government to ignore change requests?

The door analogy is clever, but inaccurate. It is not possible to transmit door keys (or battering rams) via email, and use them to simultaneously bash open the doors of millions of people. Creating a method by which a phone can be cracked weakens the security of the phone for all users, including innocent users. So the FBI is asking to weaken the security of all phone users. The "plausible mechanism" would be "hacker pays off an apple dev for a copy of the hackable OS".

The last point doesn't make sense to me. Firstly, in a hypothetical libertarian legal system, corporations would not have a right to self-defense, or any other rights, as they are not humans. Even in individual cases, any level of violence in self defense is definitely not justified. When you said "...Apple would be justified in torturing and executing the FBI agents on their property...", that's saying that trespassing warrants murder, which I do not claim.


> Should we put in place ubiquitous video and audio surveillance in every square foot of the country just in case the FBI ever wants to review something that happened?

In public areas that are often visited? Absolutely yes. An unbiased source of evidence available to the public? Excellent.

> Law enforcement has apparently lost the ability to do on the ground investigation work in favor of whiz bang-ery

Apple has provided a child porn trading network protected by the very principles of mathematics and their refusal to cooperate with the FBI.

What they're doing is precisely what any law enforcement agency would.


Please stop taking HN discussions into generic flamewars. A comment like "No, I'm sorry. Kids being raped matters" (https://news.ycombinator.com/item?id=11153295) is an internet reductio ad absurdum.

We detached this subthread from https://news.ycombinator.com/item?id=11152327 and marked it off-topic.


Can we not drag this down to the media scare tactic that is child pornography?

For every sicko who's trading child porn there's dozens, if not hundreds, more who're trying to escape an abusive partner, leaving gangs, getting away from sexual slavery, or whistleblowing on government or companies breaching the law.

Additionally, should the millions of us engaged in perfectly legitimate activities have a dossier created on our lives just in case? Imagine what could happen to that information; you could be facing secret blacklists like those in the UK.

If Apple create this ability its use will extend past the immediate issue and affect people who genuinely need this security. In the digital world, as Apple said, you cannot destroy something or lock it away once it's used.


> Can we not drag this down to the media scare tactic that is child pornography?

You mean can we not use the crime Apple has essentially made impossible to secure a conviction on? No, I'm sorry. Kids being raped matters.

> For every sicko who's trading child porn there's dozens, if not hundreds, more who're trying to escape an abusive partner, leaving gangs, getting away from sexual slavery, or whistleblowing on government or companies breaching the law.

None of which is remotely related to phones being unbreakable by law enforcement.

> If Apple create this ability its use will extend past the immediate issue and affect people who genuinely need this security

A fallacious slippery slope. If you let the Police search my house, they will search everyone's house without a warrant and therefore it's bad to have any searching ever.


  > No, I'm sorry. Kids being raped matters.
And I'm sorry, no, it doesn't matter to this argument. Civil liberties are civil liberties, whether we are talking about investigating the theft of a stick of chewing gum, the theft of millions of dollars, a slap in the face, the breaking of an arm, the rape of a child, or the murder of thousands of civilians by terrorists.

Bringing up the most horrible of crimes to justify a particular argument for increasing the powers of law enforcement is an obvious attempt to appeal to emotion, rather than to experience.

If your argument is any good, it will be just as good when talking about why the police should be able to search your phone for evidence of tax evasion as it is for talking about why the police should be able to search your phone for evidence of child rape and terrorism.

Otherwise, we go to a place where we say, "Well, they shouldn't summarily execute people who steal cigars from convenience stores, but when it comes to terrorists, we shouldn't let laws get in the way of their need to do what's expedient."


> Bringing up the most horrible of crimes to justify a particular argument for increasing the powers of law enforcement is an obvious attempt to appeal to emotion, rather than to experience.

No, it's actually an appeal to both. The increase in powers here, if any exists whatsoever, is minimal. I'm advocating that Apple comply with them.

> If your argument is any good, it will be just as good when talking about why the police should be able to search your phone for evidence of tax evasion as it is for talking about why the police should be able to search your phone for evidence of child rape and terrorism.

If they can search your home for it, they should be permitted to search your phone for it. Both should have the exact same expectation of privacy and the exact same judicial oversight.

> Otherwise, we go to a place where we say, "Well, they shouldn't summarily execute people who steal cigars from convenience stores, but when it comes to terrorists, we shouldn't let laws get in the way of their need to do what's expedient."

There is no evidence that there is any legal protection for Apple here and strong evidence that indeed the FBI can compel them. Nobody is advocating breaking the law or even going around it.


  > If they can search your home for it, they should be permitted to search
  > your phone for it. Both should have the exact same expectation of
  > privacy and the exact same judicial oversight.
They are permitted to search your phone for it. The problem here is that they are saying:

We wish to search this home, as is our legal right. The home contains a safe that we claim we cannot open, and we wish to compel the manufacturer of the safe to assist us to search the safe. The manufacturer does not wish to do so, but we insist that they be forced to do so by threat of imprisonment.

Furthermore, we wish to do so by compelling the manufacturer of the safe to create technologies that could open all safes, without the knowledge of the safe owners. We claim we only want to open this one safe, but we have this long track record of opening as many safes as we can, using secret courts and hearings to obtain the right to search those safes without the owners of those safes having the opportunity to argue against us, which is a different level of judicial oversight than being applied to searching this one house.


> There is no evidence that there is any legal protection ...

Even if that were true, it would be irrelevant. We aren't stuck with our laws. If they're abusive or useless they need to be changed.

> If they can search your home for it, they should be permitted to search ...

Sure, I can see that you think that. But making false equivalences isn't a good argument even if it tends to be the usual limit of political discourse. Our servants are "permitted", with the right suspicion and warrants, to search almost anything (in the name of the people). Yes.

But this isn't a case of permission, it's about capability. They're incapable. You're implying that the law not only permits some searches, but necessarily compels unlimited help in making those searches possible?

That's a huge stretch. Especially when that help involves uttering falsehoods.


"Won't someone think of the children" is a battle cry used almost exclusively to subvert our liberties.


> No, I'm sorry. Kids being raped matters.

I never said it didn't matter, but the problem is bigger than one issue.

I have total disdain for people who do this, but let's be clear that an encrypted phone isn't going to stop these people getting caught. They're already aware of how despicable their acts are and use the traditional trust systems to hide it. All the busts are through social engineering - agents who go undercover to infiltrate these networks and find the perpetrators. This must be one of the worst jobs in the world.

> None of which is remotely related to phones being unbreakable by law enforcement.

And as has been said by experts in cryptography, breakable by one third party means breakable by _everybody_. This means the partner who said they'd harm you if you told someone what they did to you. This means the pimp who's holding you against your will. This means the company who's allowing toxic substances to leak in to drinking water.

This is _much bigger_ than one issue. In the digital world once something exists it can escape. Once it escapes you can't stop it, and it will escape.

This is nothing like a normal search warrant. It's a warrant with no address, no reason and no time. It can be used whenever, wherever, and with no oversight on who does it.


> The issue is the precedent that is made

That searching the effects of a dead terrorist is acceptable? I really don't find this objectionable in any way whatsoever.

> Apple could no longer argue they had to create something to fulfill the warrant, since it was already created.

Good, Apple's desires to not be subject to the laws of nations where they do business has historically hurt their customers. Even now people argue that Apple are doing this to protect their customers of which there's no evidence whatsoever.


We detached this subthread from https://news.ycombinator.com/item?id=11151598 and marked it off-topic.


Do you really not understand how precedent works or are you being deliberately obtuse?

The criminal act being investigated is irrelevant. They want data on a device in their possession that they have permission to search. That is all that matters. It could just as easily be a drug investigation or, as Comey said last week in front of congress, investigating a car accident.


> Do you really not understand how precedent works or are you being deliberately obtuse?

Please be civil, regardless of how wrong you think somebody is.


> Do you really not understand how precedent works or are you being deliberately obtuse?

Please do not insult people on HN. This false dichotomy is offensive to me.

> The criminal act being investigated is irrelevant. They want data on a device in their possession that they have permission to search. That is all that matters.

Then why are you concerned about precedent? They have met all of the requirements to search this device. Apple is impeding them for no good reason purely to advance their corporate interests.


You would prefer not to be queried as to the limits of your understanding, but we would prefer you stop commenting assertively on the basis of your ignorance.


There comes a time where the only competition for a huge corporation is the government. And you can't win that one (AT&T) -- even when you do (Microsoft).



