It's worrying that in some cases you have to put a lot of effort into "obscuring" code and destroying readability to comply with the rules, just because C's semantics are not obvious and its type system is weak and dated.
A good language for cryptography engineering should at least somehow allow the programmer to annotate/type variables containing secret data and functions as constant-time so that the programmer can concentrate on implementing the cryptography logic while the compiler warns on correctness issues or security violations.
There were a few interesting issues we ran into when implementing the compiler that we did not anticipate when writing the paper you linked (e.g., there are cases where making code constant-time can inadvertently introduce memory errors). You can try out the language here (based on Compiler Explorer): https://fact.sysnet.ucsd.edu/
I admit I've only ever taken cursory glances at Cryptol, but it's never seemed like a replacement for the sort of low-level programming described in the article. If anything, it goes the opposite way - its Haskell base inherits one of Haskell's flaws, namely that it can be difficult to understand the runtime/performance implications of Haskell code without careful analysis.
Cryptol's README describes it as a sort of standardization/prototyping language.
That said, I don't know these things for sure, and I'd love to be corrected if I'm wrong.
It's a higher-level language that generates or compiles to the lower-level forms, including C and hardware languages, so you don't have to worry about that stuff. The expert can specify the algorithm, mathematically verify it, synthesize it to code, another tool like Galois' SAW can verify that, and they usually do a certifying compiler (esp CompCert) at the end. Rockwell-Collins and Galois both use the tools in high-security projects.
I can't remember if it addresses constant-time programming. So, I went looking around, found the other language, and maybe someone will merge them.
Well, it doesn't matter if Cryptol is compiled down to "safe" machine code, like FaCT, right? But that's not something I've seen explicitly stated in the Cryptol docs, so I would not make that assumption myself.
Are SAW or CompCert able to do formal verification around things like ensuring no branching on secrets, ensuring that memory is cleared after use, ...? Or does Cryptol make guarantees about the machine code it generates?
I see that SAW is able to do things like prove that hand-written, optimized code is equivalent to the Cryptol specification, which is certainly very neat - but that does not preclude potential side-channel vulnerabilities in the hand-written, optimized variant. After all, much of the scariness of side-channel attacks is that they're explicitly not targeting the (highly analyzed) mathematics of a construction, but rather its implementation.
"Well, it doesn't matter if Cryptol is compiled down to "safe" machine code, like FaCT, right?"
It does. It gets you a correct implementation of the crypto. Most failures are implementation errors. From there, before the buzz about constant-time, high-assurance security just mitigated that stuff by using fixed-size, fixed-timing transmission of messages. As in, attackers couldn't tell anything about the inside of the black box because all they saw was the same behavior on the outside. Designers sacrificed some performance and plenty of bandwidth to do that. Later, tightly-coupled designs with no covert channel analysis or mitigation showed up that leaked all sorts of stuff, with the fix being changes to coding style instead of to the bad architecture. That's when we need stuff like the OP article and FaCT.
"Are SAW or CompCert able to do formal verification" (you)
"I can't remember if it addresses constant-time programming. So, I went looking around, found the other language, and maybe someone will merge them." (me)
I already answered that. I collected papers like FaCT specifically in case anyone wants to integrate them with Cryptol to address that gap, if it exists. Also other analysis and testing for information-flow security, to make sure it is working on top of human review. This is a tiny part of my research, given that covert channel detection is mostly a solved problem with public, free methods, such as Kemmerer's Shared Resource Matrix and Wray's method for timing channels, that most people just don't apply. Most of my research is on stuff that blocks code injection.
Ah, sorry, I didn't mean to imply that Cryptol itself doesn't matter. It does seem very useful for its intended purpose (though again this is from an outsider's perspective).
Rather, I meant to say that you do still need to worry about what sort of code Cryptol generates if you want to address the points raised in the original article (e.g. clearing secret memory). That was my original point, anyway: Cryptol doesn't address low-level side-channel problems.
As far as integrating FaCT and Cryptol, couldn't you use Cryptol to build the reference version of a construction, then implement a fast and side-channel-resistant variant in FaCT, then use SAW to prove the two were equivalent? (Maybe not as ideal as combining the two languages, exactly, but workable.)
I really appreciate you taking the time to reply. This stuff isn't my area of expertise, obviously, but I still find it very interesting.
Ok, yeah, we're on the same page now. As far as Cryptol goes, I wanted to see if it could be extended to generate FaCT, which generated constant-time code. It would also generate C to check with SAW or other tools, for extra checking or for situations where side channels didn't matter. There could also be a tool that checked equivalence of the two at the source or object code level, so any checking of the C code applied to FaCT.
That should cover most of the problem areas. I still think C itself or SPARK Ada could be modified to do what FaCT does, some subset at least. Then one could eliminate the need to build FaCT-based tooling. So those were the ideas I was brainstorming, at least for what applies directly to those languages.
Cryptography is hard to implement correctly. What you consider to be obscure and destroying readability is not necessarily there because of semantics, but in order to avoid particular vulnerability categories, such as timing side channel attacks, due to using particular coding patterns or the compiler optimizing things in a certain way.
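A typical instance of those patterns is comparing secret data. A minimal sketch (mine, not from the article) of why the "obscure" version exists: a naive memcmp returns as soon as it finds a mismatch, so its running time leaks how many leading bytes matched.

```c
#include <stddef.h>
#include <stdint.h>

/* Branchless comparison: returns 0 if the buffers are equal, nonzero
 * otherwise. Unlike memcmp, it always touches every byte, so its
 * running time does not depend on where the first difference occurs.
 * Less readable than memcmp, but immune to the timing leak. */
int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];   /* accumulate differences, never branch */
    return diff;
}
```

The cost is exactly the kind of readability hit the parent is complaining about: the code no longer says "are these equal?" at a glance.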
The good language you talk about exists in crypto libraries that are meant to be used by us mere mortals.
This document discusses some of the problems and solutions that are important if you are building those very same crypto libraries instead.
On that note, veorq is an absolute authority in the field so I trust his writing on the matter.
A useful high-level language for cryptography would likely have to:
* Not compile to C, because you'd be left with the same doubts about whether the generated machine code is constant time
* Not use gcc / clang internally to generate machine code for the same reason..
* ..but still optimize as well as either
* ..while also implementing SIMD for all the useful platforms
* ..and knowing which approach is most efficient for the target platform (SIMD vs scalar vs SIMD+scalar, instruction selection, instruction ordering, data alignment, etc)
otherwise it's easier to just use C, or if you're a masochist, qhasm.
There is also not a readability issue with constant time code as long as it's in properly named functions (which will get inlined away if necessary).
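To illustrate the point about naming (my own sketch, names are hypothetical): the mask arithmetic is cryptic inline, but behind a well-chosen function name the call site reads fine.

```c
#include <stdint.h>

/* Constant-time select: returns x if flag is 1, y if flag is 0,
 * without branching on flag. The mask trick is opaque here, but
 * callers only ever see the descriptive name. */
static inline uint32_t ct_select_u32(uint32_t flag, uint32_t x, uint32_t y)
{
    uint32_t mask = (uint32_t)0 - flag; /* flag=1 -> all ones, flag=0 -> all zeros */
    return (x & mask) | (y & ~mask);
}
```

A call site like `key = ct_select_u32(is_valid, real_key, dummy_key);` is arguably as readable as the branchy version, and a decent compiler will inline the helper away.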
> Some compilers will optimize out operations they deem useless.
A compiler translates a certain language to machine code. If it is optimizing, it will use that language's semantics for the optimization. That is necessary; otherwise we would be writing almost everything in assembly.
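The classic example of a "useless" operation is zeroing a key buffer at the end of a function: under the language's semantics the stores are dead (the buffer is never read again), so the optimizer may legally delete them. One common workaround, sketched here (writing through a volatile-qualified pointer; C11's optional memset_s and glibc/BSD explicit_bzero are alternatives where available):

```c
#include <stddef.h>

/* A plain memset(p, 0, len) here could be elided as a dead store.
 * volatile-qualified accesses may not be optimized away, so each
 * byte is actually written back to memory. */
static void secure_zero(void *p, size_t len)
{
    volatile unsigned char *vp = (volatile unsigned char *)p;
    while (len--)
        *vp++ = 0;
}
```

Note this is exactly the parent's point: the compiler is correct by the spec, and the fix works by changing the stated semantics (volatile), not by hoping the optimizer is polite.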
> Know what optimizations your compiler can do, and carefully consider the effect of each one on security programming patterns. In particular, be careful of optimizations that can remove code or branches, and code that prevents errors which "should be impossible" if the rest of the program is correct.
A priority is to understand the language one is writing in.
> When possible, consider disabling compiler optimizations that can eliminate or weaken security checks.
A better solution is to use assembly language and thus exactly specify your program's semantics within the program itself; that compiler flag could be removed by whoever builds the program, or its effect undone by something like link-time optimization.
> A priority is to understand the language one is writing in.
That's all well and good, but you can't expect a developer to know all the optimisations different compilers will do, especially when it relates to undefined behaviour.
I think the point of the OP is that you shouldn't think about the optimizations a compiler might do, but what the language spec guarantees. If the language spec guarantees that your code behaves the way you want, no correct compiler will break it.
The problem arises when the language simply does not provide any guarantees about the sorts of side effects you are interested in, so you cannot simply rely on specified behavior.
Slightly tangential to the article but still important, it says
"Some compilers infer that they can eliminate checks based on erroneous code elsewhere in the program. For example, [obviously dodgy code snipped] some compilers will decide that ptr == NULL must always be false, since otherwise it would be incorrect to dereference it in call_fn()"
So why does the compiler not report this error, or better still reject the code outright? It seems to turn a compile-time-detected fatal error into a critically dangerous runtime error.
This is great for educational purposes, but if you're thinking of implementing your own primitives or cryptosystems please use a vetted out-of-the-box solution like libsodium instead.