I'm so happy to see something like this in development. Every time there's a discussion about OpenSSL vulnerabilities, the topic of a future replacement written in Rust comes up, but no one was stepping up to the plate. Now we have some real progress towards a safer future.
There is also Thrussh, a Rust library for SSH. Rust is beginning to shine in the area it was designed for: security.
According to https://doc.rust-lang.org/book/ffi.html, it is possible to make callbacks from C code to Rust functions. This way, other languages could take advantage of Rust's safe libraries.
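For instance, a minimal sketch of the Rust side of such a callback (the function name is hypothetical):

    // A Rust function exported with the C ABI, so C code can store and
    // invoke it as an ordinary function pointer (a registered callback).
    // The C side would declare it as: int32_t rust_callback(int32_t);
    #[no_mangle]
    pub extern "C" fn rust_callback(value: i32) -> i32 {
        // The body is ordinary safe Rust; only the ABI boundary is special.
        value.checked_mul(2).unwrap_or(i32::MAX)
    }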
Yes, but I don't think progman was suggesting that. Rust doesn't prevent code from having logic errors, but it does protect you entirely from certain classes of errors (e.g. use-after-free memory violations) as long as you stick to safe Rust. These are some of the most common bugs in C programs, and they have resulted in highly publicized vulnerabilities, so Rust will take a program a very long way towards being safer than its C counterparts.
Ted comments on this post in the article that it critiques. I do know that Rust has important tools for writing secure programs, but using them is not really enforced (i.e. unsafe {}), so it's still possible to write an exploitable bug in it. I should note that I've never used it, though.
In the end, it's possible to produce an exploitable bug in any language (apart from maybe the Erlang VM?). The point is 1) how hard (how improbable) it is, 2) how popular the error-checking tools are (C static analyzers, Google's ThreadSanitizer, etc.), and 3) how fast the culprit code can be found and fixed.
In that regard, Rust has improved on both 1) and 3) by exposing dangerous features only inside unsafe {}, and greatly improved 2), since the compiler itself checks for those errors.
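For example, a minimal sketch of how the unsafe {} boundary works:

    fn main() {
        let x: i32 = 42;
        let p = &x as *const i32; // taking a raw pointer is allowed in safe code...
        let y = unsafe { *p };    // ...but dereferencing it only compiles inside `unsafe`
        println!("{}", y);
    }

The dangerous operation is confined to a block you can grep for during an audit.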
It does. There are distinct properties attached to Modernism and Modern. (I'm an architect, hence I'm very much attached to these terms, unlike software people, who didn't grow up with them.)
First of all, it's the opposite of Post-Modern: the Perl-style do-it-in-myriad-ways, everything-is-allowed, OpenSSL style.
With Modern, only the best API and implementation is allowed. APIs are well planned ahead of time, Stanford-style, not added ad hoc, New Jersey-style.
Changes to the API need a rename, not just a major version bump. Hand-waving, simplified development models are okay for a post-modern, everything-is-allowed world, whilst modernism aims for long-term goals, making things easier for the user, not the developer.
Modernism is based on "Form Follows Function", and not the other way round.
Reduce it, abstract it. Functionalism is everything, marketing is less important.
And to avoid popular misconceptions: in our current era, "Modern" doesn't mean "new" at all. Modern is more like old Stanford-style development. See the Unix-Haters Handbook, for example.
Rust is new, but newness alone doesn't justify calling it Modern. Rust is better because of its superior semantics and guarantees, whilst still adhering to the C ABI. It's a library for everybody, after all, and not comparable to the latest OCaml, Haskell, or Lisp TLS library, which adheres only to its own ABI and needs wrappers to be useful for projects in other languages.
You've given a whole bunch of definitions for what you interpret modern to mean, but none of those are inherent in the word, which is what (I think) the OP meant when complaining about its overuse.
So while I appreciate your attempt to redefine "modern" with all of the above, I'd rather people just used the more verbose explanation instead of the meaningless term.
Nothing is inherent in a word other than what people understand it to mean, and the fields where 'modern' is a term of jargon are larger than computing. It has fairly specific meanings in art, architecture, literature, and music.
Grandparent referred to the meaning of the jargon term in architecture. It's not his personal redefinition; it's an interpretation of a widespread understanding of the word mapped analogously to software.
Jargon is important. We'd have a hard time communicating if we could never create new words or imbue existing ones with new meanings by analogy. The more verbose explanation is mostly useful for laymen and beginners.
He's not redefining the term, he's giving the 'official' definition of modern from the humanities.
The word has spread in popular vernacular to mean 'the present' or 'contemporary' but for historians, philosophers, architects, art critics, critical theorists, etc. "modern" refers to a bunch of things, and the "modern era" is considered by many of them to be either over, extended, or exaggerated.
2. New and common; trite; commonplace. [Obs.]
[1913 Webster]
We have our philosophical persons, to make modern
and familiar, things supernatural and causeless.
--Shak.
Isn't that what we want from "modern" software? To "make modern and familiar" "things supernatural and causeless"? :-)
Another interpretation of Modernist Development is team size:
A modern SW project typically consists of 1-2, at most 3, devs;
a typical post-modern project, of 20-200. With such team sizes, consensus is rarely practical, political correctness and CoC discussions take over development, and ABIs and APIs are driven by design rather than by functionality and longevity.
I.e., modern development is small, functional, and not designed by committee.
> a typical post-modern project, of 20-200. With such team sizes, consensus is rarely practical, political correctness and CoC discussions take over development, and ABIs and APIs are driven by design rather than by functionality and longevity.
Is this something you feel to be true, or do you have any evidence to back up that claim?
Why are you conflating building architecture and software architecture? I'm pretty sure no software developer has ever thought of their project in terms of "post-modern."
New Jersey style? The Perl style? I've never heard of "New Jersey style." Are you joking? I'm almost positive that Perl didn't influence OpenSSL's development at all.
"Modern is more old Stanford-style development." I think you've read way too much into this. And sorry, citing The UNIX-HATERS Handbook doesn't do much for the credibility of your odd argument.
Look into "New Jersey vs. MIT" and "worse is better." The idea, more or less, is that AT&T (in New Jersey) cared more about software that shipped, whereas MIT cared more about good design. MIT produced Lisp and Lisp machines; Bell Labs produced C and Unix. Who won?
The mainframes and OpenVMS (through its clone, Windows). They were produced closer to the MIT and cathedral styles, with a practical focus. The mainframes still run the backends of our financial system, logistics, big retailers, and so on. Most stuff that people consume requires one of them. If a desktop is involved, it's usually Windows-based, with 90+% of the market share despite UNIX workstations and Linux desktops existing for a long time.
So, who won? Nobody. Both had success. Most successful, though, was combining a little bit of MIT method, cathedral, and ship fast from "worse is better." That's IBM and Microsoft's approach. A hybrid worthy of another Gabriel essay.
Bill robbing Apple of the GUI is well known. Less known is that he took a more robust and secure architecture from OpenVMS, which was the most rock-solid of OSes. Microsoft then managed to show what would happen if OpenVMS had no QA process during development. Yet recent efforts have gotten Windows Server reliable enough that the connection is more believable. :)
I take it to mean "uses current, best practices." For example:
- C++: uses C++11 features, avoids non-RAII resource handling, prefers standard libraries over older, platform-specific ones, and isn't written as "C with classes" (see the RAII sketch after this list).
- C: doesn't do weird stuff like bypassing malloc(), avoids undefined behavior.
- in general: has a test suite, probably uses continuous integration (e.g. Travis), and uses modern language features to achieve cleaner and more robust code.
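On the RAII point: a minimal Rust sketch of the same scope-based idea (Rust rather than C++ here, since Rust is the topic of the thread):

    use std::fs::File;
    use std::io::{self, Read};

    // RAII in Rust: `File` owns its OS handle and closes it in its
    // destructor (`Drop`), so there's no cleanup path to forget, even on
    // early returns via `?`.
    fn read_config(path: &str) -> io::Result<String> {
        let mut file = File::open(path)?; // handle acquired here
        let mut contents = String::new();
        file.read_to_string(&mut contents)?;
        Ok(contents)
    } // `file` dropped here; the handle is closed on every exit path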
> Speed is important for a malloc implementation because if malloc is not fast enough, application writers are inclined to write their own custom free lists on top of malloc. This can lead to extra complexity, and more memory usage unless the application writer is very careful to appropriately size the free lists and scavenge idle objects out of the free list.
An example of such an optimization is the arena allocator[1][2] employed by protobuf. Custom memory-management schemes are not uncommon in performance-critical code.
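For illustration, here's a toy bump-arena sketch in Rust (illustrative names only, not protobuf's actual allocator):

    // Allocations are pointer bumps within one big block; everything is
    // freed at once when the arena is dropped, so there's no per-object free.
    struct Arena {
        block: Vec<u8>,
        used: usize,
    }

    impl Arena {
        fn with_capacity(cap: usize) -> Self {
            Arena { block: vec![0; cap], used: 0 }
        }

        // Hand out `n` bytes by bumping an offset; O(1) per allocation.
        fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
            if self.used + n > self.block.len() {
                return None; // a real arena would chain a new block here
            }
            let start = self.used;
            self.used += n;
            Some(&mut self.block[start..start + n])
        }
    } // dropping the Arena releases every allocation at once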
> An example of such an optimization is the arena allocator[1][2] employed by protobuf.
I work on the protobuf team at Google, so I'm aware of this.
Two things about that:
1. The underlying blocks for the arena allocator still come from the system allocator.
2. Because the arena allocator inhibits the capabilities of standard malloc-debugging tools like ASAN and Valgrind, the protobuf arena allocator includes special ASAN-aware code to mitigate this.
However, that code is ASAN-specific. It won't help other tools like Valgrind. So yes, different allocators are sometimes warranted for specific patterns like arenas. But if all you want is plain malloc()/free(), you should call malloc()/free().
If you're writing a library, letting the user specify their own allocation callback is also great, since it lets the user do whatever custom bookkeeping/pooling/etc. they want to do. But by default just call malloc()/free() (IMHO).
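As a sketch of that callback pattern (hypothetical names, not any particular library's API), in Rust for consistency with the rest of the thread:

    use std::alloc::{alloc, dealloc, Layout};

    // Hook types for user-supplied allocation callbacks.
    type AllocFn = unsafe fn(usize) -> *mut u8;
    type FreeFn = unsafe fn(*mut u8, usize);

    // Defaults that just defer to the system allocator, per the advice above.
    unsafe fn default_alloc(size: usize) -> *mut u8 {
        unsafe { alloc(Layout::from_size_align(size, 8).unwrap()) }
    }
    unsafe fn default_free(ptr: *mut u8, size: usize) {
        unsafe { dealloc(ptr, Layout::from_size_align(size, 8).unwrap()) }
    }

    // The library records whichever hooks the caller provided.
    struct LibraryContext {
        alloc: AllocFn,
        free: FreeFn,
    }

    impl LibraryContext {
        fn new(hooks: Option<(AllocFn, FreeFn)>) -> Self {
            let (alloc, free) = hooks.unwrap_or((default_alloc, default_free));
            LibraryContext { alloc, free }
        }
    }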
It's a bit of a naive comment, because it wholly depends on the libraries you include and what those do.
If you use any of the debugging libraries that shim in their own alloc routines, you bypass malloc (or at least do a bunch of things that affect what gets malloc'd).
Sure, but those libraries exist for the explicit purpose of giving malloc() special behavior. I'm talking about general-purpose libraries that implement their own malloc(), thereby preventing the use of these malloc()-hooking libraries.
I'd love to see an explanation for not supporting client authentication. Also, completely ruling out discrete-log DH and requiring PFS is not feasible unless you want to rule out a lot of clients and servers, on top of not supporting TLS 1.1.
1. Client auth in TLS 1.2 and earlier is done at the wrong time in the handshake. As a result, the client's identity (which, unlike the server identity, usually identifies a user; see the sibling comment, which confirms this) is sent in the clear. That's a big privacy failure.
2. To work around (1), some implementations do an initial server-auth handshake, then immediately renegotiate up to mutual auth (renegotiations are encrypted). Renegotiation has quite a dismal history, and I definitely want to avoid it.
3. As a follow-on from (2), the standard never described what implementations are expected to do if client/server identities change during renegotiation. This (partially) resulted in https://mitls.org/pages/attacks/3SHAKE
All of these are fixed in TLS 1.3: client identities are encrypted, and renegotiation is dropped.
Yeah, trying to make TLS client identity private is a recipe for sadness (well, before 1.3).
At least in the environments where I use TLS, which is interservice datacenter communication, there are no privacy issues (especially since you can just look at which container a connection comes from, for the same amount of identity leakage).
Thank you, this is a very good explanation and now I'm convinced that this is the right thing to do.
But I do need mutual authentication, and want to avoid rolling my own crypto for it--so does Rustls expect to support client auth in TLS 1.3 when that spec is finalized and implemented?
Super nitpicky, but would it make sense in the README then to specify "client authentication per TLS 1.2 or earlier", as the non-feature, or something like that?
> I'd love to see an explanation for not supporting client authentication.
My guess? They don't use it in their applications, so they don't think anyone else uses it, either.
TLS client authentication is widely used in 802.1X (WiFi and wired) authentication. I've seen it used in a lot of other situations, e.g. web client access (client cert + user password), LDAP client access, etc.
Maybe there are security issues in client authentication which they're aware of. If so, they should share them. But simply labeling client authentication as "obsolete" shows a closed-minded attitude.
Where I work client auth is used for a good (and growing) number of internal services.
Client auth is simple to use - our internal services are given the username from the CN, which they use to perform authorization checks. For a lot of simple internal services that don't require two-factor auth it works great.
Am I missing something better?
(We already have the infrastructure in place to deal with client keys)
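For illustration, the CN-as-username step might look something like this toy sketch (hypothetical helpers; it assumes the TLS layer has already verified the client's certificate chain):

    // Pull the CN attribute out of a certificate subject string such as
    // "CN=alice,OU=Engineering,O=Example Corp".
    fn username_from_subject(subject: &str) -> Option<&str> {
        subject.split(',')
            .map(str::trim)
            .find_map(|kv| kv.strip_prefix("CN="))
    }

    // Authorization is then an ordinary ACL check on that username.
    fn authorize(subject: &str, allowed: &[&str]) -> bool {
        username_from_subject(subject)
            .map_or(false, |cn| allowed.contains(&cn))
    }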
If you've ever used OpenVPN, you've used TLS client authentication. That seems like a pretty huge hole to purposely put in your feature set. I understand not implementing it yet because it's more work, but I don't get drawing a line in the sand.
Obscure is not on the list in the GP's quote, and AFAIK client authentication is none of the things on that list. It's actually used a fair bit in applications of TLS outside the open web.
Coming from a US DoD background, TLS client authentication is mandated for nearly everything. Everyone carries keypairs around their neck on their ID card (PIV smartcard with x.509 certificates)
That comment is intended to say that Rustls doesn't support the things in the list below because the technologies in that list are broken, not because Rustls itself is broken. So the authors have chosen just not to implement anything in that list.
Likewise. "Modern" is a worthless word. In time, the flaws of any "modern" product will become apparent and it will be replaced by something more "modern".
> The following things are broken, obsolete, badly designed, underspecified, dangerous and/or insane. Rustls does not support:
> Client authentication.
> Kerberos.
From a complexity perspective, I understand why these things would not be the first choice to implement in a new library. What I don't understand is why they're on the "will never implement" list. Public-key client authentication is to this day one of the strongest methods of authentication available, and not having it in a library greatly limits the library's use in high-security applications, exactly the places where someone might want to stop using OpenSSL and its broken peers.
Kerberos... well... Kerberos is a cluster-fuck. I think everyone knows that. But there are specific applications where Kerberos (or something similar) is exceptionally useful and maybe even necessary. I will acknowledge, though, that missing Kerberos is not a world-ender, because you can implement your own token-based authentication on top of normal TLS without using Kerberos, which in some cases might even be easier, because Kerberos is a cluster-fuck. Despite this, if configured properly, Kerberos is still one of the best methods for doing secure authentication between multiple servers/services via a central authentication mechanism.
If the justification here is simply implementation complexity that might cloud the codebase or otherwise make it harder to audit for security, I do understand that reasoning. I'm just curious specifically in these two cases because they stood out from the list as things that I don't consider "insane", unlike most of the rest of the list.
As an aside, thanks for not implementing RC4. RC4 needs to die die die die. I don't know why anyone is still using it, but nonetheless I see it in the wild still sometimes :(
Odd to see "PSK" on the possible future feature list, with "client authentication" on the "never" list.
Other than that, looks like a nice and sane subset they've picked.
Real shame if they end up avoiding cert-based authentication, though. All other options for authentication are strictly worse from a security perspective, and leave more room for implementors to shoot themselves in the foot. For passwords intended for human users, for example, you really need some form of rate-limiting. Not to mention the problem of setting up a session first and only later binding it to a user (if authentication succeeds), rather than the simpler "only a valid client can connect."
Let's not forget the security kernel in Gutmann's cryptlib. It's like a lightweight variant of formal verification that just makes sure things interface correctly.
Which is interesting here, since F* can extract to OCaml. I wonder how hard it would be to wire up a test harness to compare the two with randomized tests.
OCaml has more runtime requirements. Can someone comment on how invasive those requirements are, and whether they would limit adoption of the OCaml version compared with C or Rust?
Rust is not a good language for low-level crypto implementation because it offers no facilities for side-channel-resistant algorithms. Ring uses the extensively reviewed implementations from BoringSSL and considerable expertise from the author. Ring has a goal of moving as much of the code that does not need side-channel resistance as possible to Rust.
By moving the protocol logic to Rust, the amount of code that needs to be reviewed for memory safety in a TLS library is drastically reduced.
What would Rust the language have to provide in order to achieve side-channel-resistant algorithms? That doesn't sound right to me. Are there primitives in other languages that are needed? Or does Rust not abstract at the right level?
Most high-level languages don't provide guaranteed-constant-time behavior at all. That's a big reason why ring uses lots of BoringSSL's/OpenSSL's assembly language code.
Also, one of my goals with the ring project is to identify exactly what constant-time utilities are needed for a crypto library, so that I can draft a proposal for improving the Rust language and libraries to provide such features.
> Most high-level languages don't provide guaranteed-constant-time behavior at all.
Does even C provide such guarantees? Isn't the compiler free to rewrite the code it's compiling in whatever way it wishes as long as the output is the same?
Ultimately, it's an optimizing compiler, and it's difficult/impossible to tell the compiler "make this code fast, but not too fast in these specific cases." The same problem affects basically every language that isn't assembly.
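For a concrete picture, here's the usual shape of an attempted constant-time comparison in Rust; the caveat from this thread is that nothing in the language guarantees the optimizer preserves the property, which is why real libraries drop to assembly or use barriers:

    // Examine every byte regardless of where the first mismatch occurs,
    // accumulating differences instead of returning early.
    fn ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut diff: u8 = 0;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }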
Hi, I'm the person who started the ring project. There is a lot of assembly language code in ring, and there's still some C code too. The thing to keep in mind is that we started from 100% C and assembly language code. Since August 2015, ring has supported a 100% Rust certificate validation library (webpki, it's open source on GitHub), and a 100% Rust TLS implementation (not Rustls, but one that wasn't open source). As we've improved upon the code we started with, we do generally replace it with Rust code, while keeping everything on top of it working. But, we don't replace C code with Rust code just for the sake of doing so. There's always some concrete benefit to each change we make, beyond language advocacy.
The assembly language code we inherited from BoringSSL (and OpenSSL) is really important for performance and for safety from side-channel attacks, like timing attacks. I believe that rewriting most of the assembly language code in Rust would be a net loss for security. I have very long-term ideas for how to avoid needing so much hand-coded assembly, but we have higher-priority things to do now. And, the assembly language code is really good. Really.
The C code is increasingly getting replaced (not rewritten or just transliterated) with Rust code, wherever it makes sense to do so. You can see some of the planned work of this type at https://github.com/briansmith/ring/labels/oxidation. To see the past work, review the commit log of ring.
However, we've done tons of work to make the C code safer too. For example, I've written dozens of patches to eliminate cases of undefined behavior and other unsafe coding patterns in the C code. Many of these changes have been integrated into BoringSSL.
Also, we've greatly reduced the usage of the heap. Already, you can use most of ring's functionality without a heap. Importantly, this means that we have solid evidence that, for almost every ring feature, there is zero chance of use-after-frees, double-frees, or memory leaks. It also means that the memory usage is very predictable, which makes it easier to use in constrained environments.
In addition, I've tried to design the ring API very carefully to limit the potential for things built on top of it to misuse the crypto. For example, the API enforces--statically, at compile time--that an ECDHE key can be used only once. Similarly, it enforces--statically, at compile time--that an AES/ChaCha20 encryption key is never used for decryption, and vice versa. Similarly, it ensures that encryption is always properly authenticated--there's no way to accidentally do "MAC before encrypt" and similar things. We even make sure that you don't use less-safe AES-GCM nonces that aren't exactly 96 bits.
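Concretely, the "use only once" guarantee falls out of Rust's move semantics. A minimal sketch (illustrative types, not ring's actual signatures):

    struct EphemeralPrivateKey { /* secret key material elided */ }

    impl EphemeralPrivateKey {
        // Taking `self` by value consumes the key, so a second use is a
        // compile-time error rather than a runtime check.
        fn agree(self, _peer_public_key: &[u8]) -> [u8; 32] {
            // ... the actual ECDH computation is elided in this sketch ...
            [0u8; 32]
        }
    }

    fn handshake(key: EphemeralPrivateKey, peer: &[u8]) -> [u8; 32] {
        let shared = key.agree(peer);
        // key.agree(peer); // error[E0382]: use of moved value: `key`
        shared
    }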
Finally, anything that uses ring gets all the advantages that come with Rust automatically, such as Rust's use-after-free protection and data race protection. (ring has already replaced all the C threading/synchronization stuff with safer Rust constructs.)
So, even though there is some C code, and even though there's a lot of assembly language code, things that use ring are still getting lots of Rust's advantages.
There are other alternatives that are "pure" or close to "pure" Rust, such as rust-crypto. But, those libraries are missing important things like RSA and ECDH and ECDSA over the NIST P-256 and P-384 curves. That's all needed for a practical TLS implementation.
My application needs RFC 6091, i.e. using OpenPGP keys instead of the usual X.509 certificates. (Why not X.509? Ask Peter Gutmann¹.) This feature is not listed as something they support, nor as something they won't support; likewise for DTLS (RFC 6347). These omissions are strange.
We use RFC 6091 because it’s the best fit for our problem, and allows us to avoid the needless complexity of X.509 certificates, which has caused many bugs in the past (in essentially all TLS libraries).
Yes, TLS 1.3 is still in draft[1]. I heard from a co-worker (@grittygrease) on Friday that draft 14 should be arriving very soon. We (CloudFlare) have implemented draft 13 in Go and are actively testing it -- try browsing https://tls13.cloudflare.com with Firefox Nightly[2].
I think BoringSSL is also working[3] on their implementation, and NSS (Firefox's SSL/TLS library) implemented[4] draft 11 in v3.23, but OpenSSL doesn't plan to start until after 1.1 ships[5]; I've heard 1.1 is expected to land about 6 months before the RFC is finalized, and OpenSSL isn't starting until the RFC is finalized.
I think draft 13 is only deployed internally (as we're dogfooding it). Draft 11 is what's live unless you're on the VPN. Nick (https://twitter.com/grittygrease) can confirm.
Rustls doesn't really have to deal with the timing attack issue, because those issues are handled by the underlying crypto library, ring. And, ring wasn't built from scratch. It builds upon all the constant-time crypto code in BoringSSL and OpenSSL. And, we've improved the constant-timedness of the code in several ways and contributed most of those improvements back to BoringSSL.
I hate this logic so much: it stops evolution. Humanity has enough programmers to write new libraries and patch existing code. Competition is better than stagnation.
It's also an argument against disruption in general. Great progress happens when people stop iterating and start rethinking stuff that hasn't been rethought for decades (and the "stuff" has seen only tiny improvements over the years following this strategy, too).
Don't get me wrong, I'm not against disruption at all. It's just that there are cases where it doesn't make sense. Write a new text editor, invent a new programming language, sure. But writing a new cryptography library? You are throwing away decades of effort that went into hardening and securing the existing libraries, and your gain is pretty small.
Making things even worse is that you can't really write a crypto library without using C/assembly. So why write a new crypto library?
"You are throwing away decades of effort that went into to harden and secure the existing library, and your gain is pretty small."
If that were true, then it would be a good counterpoint worthy of long thought. The reality of crypto libraries is that popular ones often had preventable errors due to bad coding and/or unsafe language that also took entirely too long to notice.
"you can't really write a crypto library without using C/assembly"
I'd argue you can't write a crypto library in C by default unless you're a really good coder. C is just an accident of two teams' bad hardware.
Languages like Modula-3, Free Pascal, and even Fortran were easier to analyze to ascertain a program's properties. It's why Modula-3 had the first standard library verified free of specific types of errors. It also took years to make a certified compiler for C, something that had already been done for other, simpler languages. Finally, the top performer in secure systems code is SPARK, as illustrated when re-implementing C crypto in it caught a problem.
> Writing a cryptography library from scratch because the old one has too many security holes? Your implementation is likely to have even more.
That's not a reason to never write another crypto library ever. It's a reason to get started on a new one as soon as possible so it can start the process of being vetted.
Probably not. With security sensitive things, the security of the logic itself needs to be good too.
Attacks like downgrade attacks, for example, are not memory unsafety issues.
Servo would need a library that is battle-tested and has a vulnerability policy. The Rust libraries could be in this category in the future, but not right now. Using something like NSS or BoringSSL, like other browsers do, would be our best bet.
> Attacks like downgrade attacks, for example, are not memory unsafety issues.
First, this library only implements TLS 1.2 with AEAD cipher suites (AES-128-GCM, AES-256-GCM, and ChaCha20-Poly1305) and perfect forward secrecy (using ECDHE with the X25519, P-256, and P-384 curves, and RSA and ECDSA signatures). Thus, downgrade to something worse than what NSS or OpenSSL or BoringSSL consider to be the very best crypto is avoided at a very fundamental level.
Second, I and other people have been advocating strongly against design decisions that force implementations to add downgrade vectors. In particular, there is a version number field in the TLS ClientHello that we know causes problems. People have proposed solutions to avoid this version number causing compatibility problems for TLS 1.3, but Mozilla's TLS people have not supported making this improvement. Thus, we're likely going to have downgrade issues by design in TLS 1.3 that would be completely avoidable, due in large part to the people making NSS and other C crypto libraries.
Despite that, Manishearth is correct that they should rely on a battle-tested library instead of just Rust. That's just one kind of issue. The main ones will be:
1. Abstraction gap issues where security analysis of algorithm/protocol didn't reflect a realistic implementation. Padding errors were an old example of that.
2. Issues with each individual component where it might have not been instantiated or removed correctly.
3. Interface errors where things were connected in a dangerous way. Examples include wrong ordering or meet-in-middle.
4. Parsing errors.
5. Memory safety issues.
6. Compiler optimizations introducing security issues or removing security checks.
7. Covert storage or timing channels. Mainstream INFOSEC has re-discovered them as "side channels." I'm not sure you can even do covert-channel analysis in Rust yet, as it requires a clear mapping from language to assembly. Most projects just ignore this requirement, although BoringSSL addresses it with hand-coded assembly, as you pointed out.
So, there's a lot of ground to cover in implementing even a straightforward crypto protocol to ensure it's secure. Most of that has nothing to do with the protections Rust offers. As a matter of fact, Rust might be a step back in many ways due to immature (or non-existent) tooling for specific requirements on this list. Side-by-side SPARK and MISRA C implementations with design-by-contract and some assembly are the strongest here for now.
Keep at your project, though. Your longer comment detailing the work was quite impressive. Great work. Even better that you're gradually tweaking one that already works in real-world deployments. Will avoid many issues that way.
This is good. However, I mentioned downgrade attacks as an example of the kind of bug Rust can't solve. Most likely, new TLS stacks will not be prone to downgrade attacks, but they might be prone to other logic errors. So are established TLS stacks, but those are more battle-tested. The Rust ones will be too, eventually, but not now.
As we mentioned in the recent Security meetings notes (https://github.com/servo/servo/wiki/London-Security ), there will be a blog post on blog.servo.org in the coming weeks from the folks on the team who are making these decisions.