Which aspect of elliptic curves would you like to understand better? The original paper for Curve25519 contains a dedicated subsection on attack models, for example, and leaves only marginal room for hidden backdoors thanks to its detailed reasoning about the choice of curve parameters. The implementations of ECDH (X25519) and EdDSA are specified in RFCs that are explicitly written to be "foolproof", as others already commented.
Compare the NIST curves to RSA. For RSA we know there cannot be any backdoors. If you generate good quality primes you are in business (assuming you don't make mistakes elsewhere).
For the NIST curves we cannot say anything about backdoors. We don't use those curves because we don't trust NIST, not because we have any proof they are bad.
So to avoid that, there is a parameter selection process that supposedly leaves no room for backdoors, though at some CCC congress DJB described how you could use a similar process to add backdoors.
So basically, EC is based on magic. We cannot prove it is bad, we just have to hope there is no hidden magic.
Note you say 'only marginal room'. Soon the whole world will use exactly one curve. With 'only marginal room for hidden backdoors'.
I feel way more comfortable knowing that with RSA, what you see is what you get.
Even Bernstein doesn't really believe the NIST p-curves are backdoored, and the Koblitz/Menezes paper makes a pretty decent case that they couldn't be, but if you want to tinfoil hat it, just do what every modern system does and use Curve25519.
If any of this is new to you, though, you shouldn't be designing cryptosystems. Most people shouldn't! I sure shouldn't! It's an extremely specialized skill, and the world doesn't need that many new ones. Just use NaCl.
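To make "just use NaCl" concrete, here's roughly what it looks like through PyNaCl (the Python binding to NaCl/libsodium). The names and the message are just illustration; the point is that the Box construction does Curve25519 key agreement plus authenticated encryption for you, so there are no curve points or nonce formats to get wrong:

    # Sketch using PyNaCl (Python bindings to NaCl/libsodium); Box combines
    # Curve25519 key agreement with authenticated encryption.
    from nacl.public import PrivateKey, Box

    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts to Bob; PyNaCl picks a random nonce and prepends it.
    ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"attack at dawn")

    # Bob decrypts and authenticates with his secret key and Alice's public key.
    assert Box(bob_key, alice_key.public_key).decrypt(ciphertext) == b"attack at dawn"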
> but if you want to tinfoil hat it, just do what every modern system does and use Curve25519.
What is your take on the NIST curves being "officially" blessed for government data via Suite B (or whatever they're calling it)?
If it's good enough for government work, would it be good enough for us in the private sector? What are the chances that the NSA knows weaknesses in Curve25519 or ChaCha, like they knew about differential cryptanalysis attacks on DES ahead of everyone else?
Frankly I think the kremlinology is a lot less interesting and useful than the engineering facts, which are that Curve25519 is more misuse-resistant, faster, and easier to implement in constant time. People shouldn't be using the P-curves anymore.
The 6-page summary in the 7th edition manual turned out to be comprehensive while still being approachable. I knew next to nothing about awk beyond some snippets found on the web; after reading through this document twice, awk feels like home.
I guess what jesse_m had in mind were the mini languages from the article. I'd say 'calc' looks like a good starting point, but I agree that picking an order is hard.
Yeah, I'm familiar with OCaml; I wasn't sure if there was a suggested progression of languages to go through. I was actually interested in using Menhir more, too.
I've been running my mail server for 6 years now. Mail to Microsoft's servers (live.com, etc.) keeps ending up in the spam folder, even though SPF, DKIM and DMARC are set up and my IP has been clean for the entire period. The bright side is that I only notice this in the rare case of "mail to all contacts", since no one is on outlook.com these days.
Yes, live.com is a problem for us too. It seems they silently blacklist IP addresses and make it very difficult to get off that blacklist. There's a thread here about it with some of us trying to figure out how to solve this problem:
But other than that, running my own mail server hasn't been much of an issue. Set up sendmail, use public blacklists for spam control, and it pretty much runs without any intervention.
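For anyone wondering what "public blacklists for spam control" looks like on the sendmail side, it's a couple of m4 lines. This is just a sketch of one common setup; the DNSBL here is only an example, use whichever list you actually trust:

    dnl In sendmail.mc: reject connections from hosts on a public DNSBL
    dnl and enable the access database for local policy decisions.
    FEATURE(`dnsbl', `zen.spamhaus.org')dnl
    FEATURE(`access_db')dnl

Rebuild sendmail.cf after editing, as usual.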
Outlook.com/Microsoft blacklists entire subnets because there are spammers in the same IP range. It's a ridiculously lazy practice. They should just block the IP addresses that are actually sending spam. It's not like that's a problem technically: you can fit all IPv4 addresses into a 512MB database.
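To put a number on that: one bit per IPv4 address is 2^32 / 8 bytes = 512 MiB, so even the dumbest possible per-address structure fits in RAM. A toy sketch in Python, purely to illustrate the arithmetic:

    # One bit per IPv4 address: 2**32 bits = 2**29 bytes = 512 MiB.
    import ipaddress

    blocklist = bytearray(2**32 // 8)   # really allocates 512 MiB

    def block(ip):
        n = int(ipaddress.IPv4Address(ip))
        blocklist[n // 8] |= 1 << (n % 8)

    def is_blocked(ip):
        n = int(ipaddress.IPv4Address(ip))
        return bool(blocklist[n // 8] & (1 << (n % 8)))

    block("203.0.113.7")
    print(is_blocked("203.0.113.7"), is_blocked("203.0.113.8"))   # True False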
I'm thinking of solving it by blacklisting the outlook.com domain, so that senders at least know that I can't respond to them. I can put a message in the error response, which will be reliably relayed to the sender by the sending system.
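Concretely, with sendmail's access_db that's roughly one line plus a makemap (a sketch; the exact error text is up to you):

    # /etc/mail/access -- reject mail from outlook.com with an explanation,
    # since replies from this server won't reach them anyway:
    From:outlook.com    ERROR:"550 Your provider rejects our replies; please write from another address"
    # then rebuild the map:
    #   makemap hash /etc/mail/access < /etc/mail/access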
Google's slightly better. Recently I did an experiment: I created a few gmail accounts and sent some ridiculously spammy messages, full of typical keywords, between those gmail accounts, and they were all successfully delivered. Always.
Then I sent an e-mail from a new gmail account to my own mail server and simply replied, and the reply went to spam. It's ridiculous that such a simple heuristic as someone responding to a message a gmail user sent doesn't get the message through the spam filter, even though the system can clearly determine that the message is legit based on many variables: the References field referencing the Message-ID of the original message (no one other than the recipient should know this), the reply coming from a correct source (DKIM/SPF), the message having normal-looking business content, etc.
There's way too heavy a weight on sending server IP range reputation.
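The References check in particular is cheap to compute. A rough sketch of the heuristic in Python (not claiming this is how gmail does it, just that the signal is right there in the headers; the stored Message-ID below is hypothetical):

    # Rough sketch: is this inbound message a reply to something one of our
    # users sent? Message-IDs we generated aren't guessable by outsiders.
    from email import message_from_bytes

    # Hypothetical store of Message-IDs of recently sent outgoing mail.
    recently_sent_ids = {"<20240101120000.GA1234@example.org>"}

    def looks_like_reply_to_us(raw_message: bytes) -> bool:
        msg = message_from_bytes(raw_message)
        referenced = (msg.get("In-Reply-To", "") + " " + msg.get("References", "")).split()
        return any(mid in recently_sent_ids for mid in referenced)

    # In practice you'd also require SPF/DKIM to pass, as noted above.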
With gmail you can at least request that they unblock you, and they will do that. With live.com and icloud.com you have to spend inordinate amounts of time bouncing between useless support people before you get anywhere. gmail in general seems to have the best spam filter (lowest false positives and negatives).
I had a problem with IPv6 and gmail myself, but the problem was that I just didn't have IPv6 fully set up on my server, so either SPF or reverse DNS wasn't working or something like that. I think I just configured sendmail to only use IPv4, and that solved the gmail issue.
My issue was that I wasn't very familiar with IPv6, and my ISP (OVH) apparently gave my server a range of about 256 IPv6 addresses, and I didn't really know how to properly set up reverse DNS and SPF for it. After spending a day or two getting nowhere, I just decided to turn off IPv6 completely for the server.
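For reference, the IPv4-only workaround was roughly these two lines in sendmail.mc (exact options may differ depending on how your sendmail was built, so treat this as a sketch):

    dnl Listen on and make outbound connections over IPv4 only.
    DAEMON_OPTIONS(`Port=smtp, Name=MTA, Family=inet')dnl
    CLIENT_OPTIONS(`Family=inet, Address=0.0.0.0')dnl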
No, more realistically it results in the device being thrown out for one that doesn't ask the user a million questions at startup.
Sane defaults should be used because they enhance the user experience tremendously. Nobody buys a gadget for its setup; they buy it to use it, and delaying the user from that end goal is not going to do anything but annoy them and ultimately harm the manufacturer's bottom line.
I suspect there are more people in the world who have built their own CPU and hardware from scratch than there are people who operate computers having made an informed decision about every parameter and every default setting in all the software they use.
Even if we limit ourselves to just security defaults: what Linux system doesn't have default ulimits? There are tradeoffs in every number, naturally, but I would assume even Linux From Scratch users don't need to explicitly set each and every one. If users prefer different tradeoffs they can opt in and make changes, but even an operating system that is designed to be built by hand by the user carries some defaults with it.
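To make the ulimit point concrete: every process is already running under limits somebody else picked, which you can read without ever having set them (quick Python illustration):

    # Read the open-files limit this process inherited -- a default chosen
    # by the distro/PAM/systemd, not by the user.
    import resource
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print(soft, hard)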
I thought of this quote while reading the article:
It can hardly be a coincidence that no language on Earth has ever produced the expression "As pretty as an airport." Airports are ugly. Some are very ugly. Some attain a degree of ugliness that can only be the result of a special effort. This ugliness arises because airports are full of people who are tired, cross, and have just discovered that their luggage has landed in Murmansk (Murmansk airport is the only exception of this otherwise infallible rule), and architects have on the whole tried to reflect this in their designs.