American companies are exposed to literally hundreds of vulnerabilities the NSA knows about; that was widely known (public, in fact) almost a decade before Snowden.
I agree that this bug is different, but that might have been a subtle case to make inside the organization.
The worst problem with the NSA knowing about Heartbleed is the total lack of accountability.
If I were the CEO of any US-based company whose customers got hacked via Heartbleed exploits, I'd drag them into court if necessary.
Sidenote: people have asked "Why are you doing JS-based cryptography on passwords if you have HTTPS?" - here we have the ideal answer. Encrypting passwords with public-key crypto in addition to HTTPS, and doing the decryption in RoR/PHP/Node.js, would at least have spared users the need to change their passwords.
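A minimal sketch of that idea, using Node's built-in crypto module to stand in for both ends. In a real deployment the browser would do the RSA-OAEP encryption (e.g. via WebCrypto) against a pinned public key, and only the server would hold the private key; all names here are illustrative:

    import { generateKeyPairSync, publicEncrypt, privateDecrypt, constants } from "node:crypto";

    // Server-side: generate the keypair once; publish only the public key.
    const { publicKey, privateKey } = generateKeyPairSync("rsa", { modulusLength: 2048 });

    // "Client"-side: encrypt the password with the server's public key
    // *before* it enters the TLS stream. Even if the TLS layer is
    // compromised (as with Heartbleed), an attacker sees only ciphertext.
    const password = "correct horse battery staple";
    const ciphertext = publicEncrypt(
      { key: publicKey, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
      Buffer.from(password, "utf8"),
    );

    // Server-side: decrypt in the application layer (the RoR/PHP/Node part),
    // then hash-and-verify as usual.
    const recovered = privateDecrypt(
      { key: privateKey, padding: constants.RSA_PKCS1_OAEP_PADDING, oaepHash: "sha256" },
      ciphertext,
    ).toString("utf8");

    console.log(recovered === password); // true

The caveat (my assumption about the deployment, not something the above proves): this mainly helps when TLS terminates in a separate process from the application, since Heartbleed leaks the memory of whichever process links the vulnerable OpenSSL.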
There seems to be a lot of confusion about what the job of the NSA is, with Heartbleed being only the most recent example.
The NSA's primary function is performing signals intelligence. To perform that function, they've spent the past ~60 years building up cryptanalytic capability (pretty much unmatched by any other single organization either governmental or private).
Because of this, they have a secondary function, which is to serve as subject-matter experts for other government agencies. They provide advice, mainly in the form of influencing NIST standards (overtly by providing recommendations and, as we've come to learn, covertly by fucking with standards). This is a side effect of their primary function, however.
Asimov's first law of the NSA is to intercept and process signals intelligence. Any other function is secondary, and certainly will not take precedence over their first function.
What the Snowden revelations have shown is that there's a conflict of interest between their primary function and being tasked with providing advice. I think there's a reasonable argument to be had that they should probably get out of the business of providing guidance to other agencies, now that all of that advice is tainted.
The security of "US-based companies" is so far down the list of priorities that I hesitate to suggest it exists at all. Reporting vulnerabilities to vendors is at best orthogonal to their primary function, and at worst, counter to it. If you want to argue that someone in the government should be responsible for helping companies fix security issues, that's also a good argument. But it certainly shouldn't be the NSA (and definitely not now that we know they have no compulsion about misleading everyone).
I'm going to ignore your sidenote about JS browser-based crypto. Unlike the people on here who diligently try to explain the fundamental issues with JS crypto, I'm now of the opinion that you can't reason with its proponents.
Consider: what if the NSA's sensors are so extensive that they know the exact moment anyone other than them tries to exploit certain bugs?
That changes the risks/rewards of early patching quite a bit. They can be confident it's their own trump card for quite a while, and learn about (or strategically mislead) any teams that later arrive at the same knowledge. When it's really "burnt", and in use by the NSA's enemies, then they can help US companies patch... and possibly even assure them exactly how much damage (if any) occurred.
(In the extreme, with say a big friend-of-NSA telecom or defense contractor, that could even be: "Hi, American BigCo. In the 48 hours between the beginning of enemy exploitation and your patching, we saw about 13,000 suspicious heartbeats directed at your servers. If you don't have raw traffic logs to do your own audit of exactly what server memory was lost, we can share our copy with you. It's a pleasure doing business with you.")
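For what it's worth, this kind of sensor is technically plausible: exploitation attempts are visible on the wire, because the malicious heartbeat request declares a payload length larger than the record actually carries. A hypothetical sketch of the passive check (my speculation, not anything the NSA is known to deploy; field offsets follow RFC 6520, and since post-handshake heartbeats are encrypted, this would only catch the common plaintext-heartbeat exploit scripts):

    import { Buffer } from "node:buffer";

    const TLS_HEARTBEAT = 24;     // TLS record content type for heartbeat
    const HEARTBEAT_REQUEST = 1;  // heartbeat message type
    const MIN_PADDING = 16;       // RFC 6520 mandates >= 16 bytes of padding

    // `record` is one raw TLS record: type(1) version(2) length(2) fragment(...)
    function looksLikeHeartbleed(record: Buffer): boolean {
      if (record.length < 5 || record[0] !== TLS_HEARTBEAT) return false;
      const fragmentLen = record.readUInt16BE(3);
      const fragment = record.subarray(5, 5 + fragmentLen);
      if (fragment.length < 3 || fragment[0] !== HEARTBEAT_REQUEST) return false;
      const claimedPayloadLen = fragment.readUInt16BE(1);
      // Benign heartbeat: 1-byte type + 2-byte length + payload + padding all
      // fit inside the fragment. The exploit claims more payload than exists.
      return 3 + claimedPayloadLen + MIN_PADDING > fragment.length;
    }

    // Classic exploit probe: claims 0x4000 bytes of payload but sends none.
    const probe = Buffer.from([24, 3, 2, 0, 3, 1, 0x40, 0x00]);
    console.log(looksLikeHeartbleed(probe)); // true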
In fact, perhaps the reason for the synchronized reveal from US and non-US discoverers is that the first non-NSA probing (by either malicious actors or researchers) was only recently detected, starting the race to patch.
It limits the corporate risks: they know exactly which passwords to change, accounts to lock, and other data loss to ameliorate.
And if the time window of exploitation is kept small, the exact same magnitude of data loss could have happened in a rapid-disclosure and patch scenario. (Two years ago, were practices for rapid response better or worse than now? Would the time window of public-knowledge-but-incomplete-protection have been any smaller - or maybe larger?)
So why not let it break later (and maybe never), rather than earlier? It's like any kind of "technical debt" analysis... oftentimes it makes sense to defer fixes, because by the time the issue becomes critical, it may already have been rendered moot by larger changes.
I don't see how leaving American companies vulnerable fulfills the NSA's charter.