
No fucking way. This is disastrous PR stuff, second only to the Snowden revelations.

It should be clear by now that the NSA does not restrict themselves from anything... and should be disbanded.




I don't know how "disastrous" this really is.

NSA knows approximately 1 zillion vulnerabilities we don't know about and won't know about. They range from RCEs in Windows and Apache to flaws in cryptographic hash functions.

It's NSA's charter to stockpile these things, and, yeah, to use them against foreign adversaries.

It's bad though, because this one was so easily exploitable. It's the kind of thing a reasonable organization finds out about and wants fixed ASAP.


> It's NSA's charter to stockpile these things, and, yeah, to use them against foreign adversaries.

I don't see how leaving American companies vulnerable fulfills the NSA's charter.


American companies are vulnerable to literally hundreds of vulnerabilities NSA knows about; that's something that was widely known (public, in fact) almost a decade before Snowden.

I agree that this bug is different, but that might have been a subtle case to make inside the organization.


The worst problem with the NSA knowing about Heartbleed is the total lack of accountability.

If I were any US-based company CEO whose customers got hacked by Heartbleed exploits, I'd drag their corpses to the court if necessary.

Sidenote: People have asked "Why are you doing JS-based cryptography on passwords if you have HTTPS?" - here we have the ideal answer. Encrypting the passwords using public-key crypto in addition to HTTPS and doing the decryption in RoR/PHP/nodejs would at least have spared the users from the need to change their passwords.
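To sketch what I mean (toy textbook RSA with tiny, made-up primes, purely illustrative - a real client would use something like WebCrypto's RSA-OAEP, and these numbers are far too small for actual use):

```python
# Toy sketch: the client encrypts the password with the server's public
# key *before* it goes over TLS, so memory holding TLS-layer plaintext
# only ever contains ciphertext. Textbook RSA, hypothetical tiny primes.

p, q = 61, 53                 # toy primes (never this small in practice)
n = p * q                     # public modulus
e = 17                        # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def encrypt_byte(m: int) -> int:
    return pow(m, e, n)       # done client-side, in JS

def decrypt_byte(c: int) -> int:
    return pow(c, d, n)       # done server-side, in RoR/PHP/nodejs

password = b"hunter2"
ciphertext = [encrypt_byte(b) for b in password]
recovered = bytes(decrypt_byte(c) for c in ciphertext)
assert recovered == password
```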


There seems to be a lot of confusion about what the job of the NSA is, with this Heartbleed incident being only the most recent example.

The NSA's primary function is performing signals intelligence. To perform that function, they've spent the past ~60 years building up cryptanalytic capability (pretty much unmatched by any other single organization either governmental or private).

Because of this, they have a secondary function, which is to serve as subject-matter-experts for other government agencies. They provide advice, mainly in the form of influencing NIST standards (overtly by providing recommendations, and as we've come to learn, covertly by fucking with standards). This is a side-effect of their primary function however.

Asimov's first law of the NSA is to intercept and process signals intelligence. Any other function is secondary, and certainly will not take precedence over their first function.

What the Snowden revelations have shown is that there's a conflict of interest between their primary function and being tasked with providing advice. I think there's a reasonable argument to be had that they should get out of the business of providing guidance to other agencies, now that all that advice is tainted.

The security of "US-based companies" is so far down the list of priorities that I hesitate to suggest it exists at all. Reporting vulnerabilities to vendors is at best orthogonal to their primary function, and at worst, counter to it. If you want to argue that someone in the government should be responsible for helping companies fix security issues, that's also a good argument. But it certainly shouldn't be the NSA (and definitely not now that we know they have no compunction about misleading everyone).

I'm going to ignore your side-note about JS browser-based crypto. Unlike the people on here who diligently try to explain the fundamental issues with doing JS-crypto, I'm now of the opinion that you can't reason with these people.


It wasn't traffic that was revealed, it was server memory. Which could just as easily have contained the decrypted passwords as the encrypted ones.
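To make the mechanism concrete, here's a toy model of the over-read (the buffer contents and function name are made up; the real bug, CVE-2014-0160, is a missing bounds check around a memcpy in OpenSSL's C heartbeat handler):

```python
# Simulated process heap: the heartbeat payload sits next to whatever
# else the process has allocated -- keys, passwords, session cookies.
heap = bytearray(b"PING" + b"\x00" * 4 + b"secret_session_key")

def heartbeat_response(payload_offset: int, claimed_len: int) -> bytes:
    # Buggy behavior: trust the peer's *claimed* payload length instead
    # of checking it against the bytes the peer actually sent.
    return bytes(heap[payload_offset:payload_offset + claimed_len])

# Honest peer: asks to echo back the 4 bytes it really sent.
assert heartbeat_response(0, 4) == b"PING"

# Attacker: claims a large length and receives adjacent heap contents.
leaked = heartbeat_response(0, 64)
assert b"secret_session_key" in leaked
```

The point: the leak isn't scoped to "SSL traffic" - it's whatever happens to sit near the heartbeat buffer in that process's heap.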


Heartbleed leaks memory from whatever process links OpenSSL - the over-read returns adjacent heap contents, so program memory is exactly what's exposed.


Consider: what if the NSA's sensors are so extensive that they know the exact moment anyone other than them tries to exploit certain bugs?

That changes the risks/rewards of early-patching quite a bit. They can be confident it's their own trump card for quite a while, and learn about (or strategically mislead) any teams that arrive later to the same knowledge. When it's really "burnt", and in use by the NSA's enemies, then they can help US companies patch... and possibly even assure them exactly how much damage (if any) occurred.

(In the extreme, with say a big friend-of-NSA telecom or defense contractor, that could even be: "Hi, American BigCo. In the 48 hours between the beginning of enemy exploitation and your patching, we saw about 13,000 suspicious heartbeats directed at your servers. If you don't have raw traffic logs to do your own audit of exactly what server memory was lost, we can share our copy with you. It's a pleasure doing business with you.")

In fact, perhaps the reason for the synchronized reveal from US and non-US discoverers just now is that the first non-NSA probing (by either malicious actors or researchers) was just recently detected, starting the race to patch.


I don't see how that protects the people whose data was stolen.

"Here's the license plate number and home address of the guy who just ran over your grandma. Sorry for your loss."


It limits the corporate risks: they know exactly which passwords to change, accounts to lock, and other data loss to ameliorate.

And if the time window of exploitation is kept small, the exact same magnitude of data loss could have happened in a rapid-disclosure and patch scenario. (Two years ago, were practices for rapid response better or worse than now? Would the time window of public-knowledge-but-incomplete-protection have been any smaller - or maybe larger?)

So why not let it break later (and maybe never), rather than earlier? It's like any kind of "technical debt" analysis... oftentimes it makes sense to defer fixes, because by the time the issue becomes critical, it may have already been rendered moot, by larger changes.


That doesn't give me, as a user, much comfort. But I can see your point from a corporate standpoint.

This whole thing just sucks.


Eventually, the bad guys will find all of these bugs. And do massive amounts of damage (in this case, potentially billions of people who should change their password, and hundreds of thousands of administrators having to swap certificates).

And all the time, the NSA had the capability and knowledge to prevent this damage. What a great service they did to their country, indeed.


> Eventually, the bad guys will find all of these bugs

That's not really true. The NSA has incredible resources that other bad guys do not.

I think a critical step on this logic chain is "once we find all the bugs, we will be safe." Given that step, you would obviously want the NSA to tell the vendors about every single bug they find.

But it's really not the case that we will ever "find all the bugs." Even if the vast resources required were actually spent to find all the bugs in version 3.1415, there would be new bugs in 3.1416.

I can kinda-sorta buy the full-disclosure argument that "if an independent researcher can find X then so can the bad guys." That argument doesn't apply to the NSA. They have a much better reason to believe "we found this exploit, and it will take a long time for someone else to find it."


Problem is that "the bad guys" also includes Chinese and Israeli intelligence agencies. The Israelis are known to have massive cyber ops going on (Stuxnet is said to have had a large Israeli contribution), and the Chinese cyber-ops exposed (iirc) a year ago have only learned from being uncovered.

I would not be surprised at all if Israel and China also knew about Heartbleed.


Exactly, I don't mind them having the capability for this kind of thing. What bothers me is the lack of due process and rule of law!


There's plenty of both. NSA is wrapped with layers upon layers of process and oversight both, which is something re-confirmed in the wake of Snowden's revelations.

What people are shocked about is that they didn't understand what the law permitted, or how quickly mixing the law of induction with datacenters full of computers can lead to global-level surveillance.

With all that said, I would mind if it's true NSA knew of this bug and left it alone. It's a powerful weapon for SIGINT to be sure, but it's too easy to find by other state spy agencies doing code review; NSA would have had to assume other nations knew about it as well and were putting US government and private-sector comms at risk.


The DoJ decided to ignore the law. https://www.eff.org/foia/section-215-usa-patriot-act The NSA secretly collects the private data of people they know are innocent, which is the opposite of due process.


The FBI collects the private data of thousands of people they know are innocent every time they use a mass-intercept warrant on a particular cell phone tower.

An attorney general collects the business records of thousands of people they know are innocent of wrongdoing in the course of routine investigations into fraud by large businesses.

Both of those are still considered "due process" because in both cases they are following a set process. Due process doesn't mean the government won't look at you if you're innocent, which seems to be a big misconception. It essentially means that the government should handle cases with similar particulars in similar ways.

So in the case of the NSA, if that collection happened under the same type of legal authority, after going through the same FISC review (if needed), was held to the same standards of reasonable and articulable suspicion (or whatever standards were required to be met for §215) then the collection that would happen from there would have been given due process.

As for your link, while its accusations are damning enough, they don't appear to support your first point or your last one. It's hard to argue DoJ is "ignoring" the law when the EFF themselves make quite clear that "The language of Section 215 allows for secret court orders to collect 'tangible things' that could be relevant to a government investigation – a far lower threshold and more expansive reach than a warrant based on probable cause. The list of possible 'tangible things' the government can obtain is seemingly limitless, and could include everything from driver’s license records to Internet browsing patterns."

So don't take my word for it, take EFF's.


It is? It would be shocking if the NSA didn't use Heartbleed. This is basically the equivalent of doing a news story about an alcoholic who drinks the beer they have in their refrigerator. It's exactly what you'd expect them to do.



