The UK. It had this huge navy, which, amusingly, is where its central bank comes from. The English Parliament wanted to buy the greatest navy the world had ever seen, but that's very expensive, so their cunning scheme was to license some business people to run an exclusive Bank of England, secured by the word of the British government. The income from this funded a navy, and the successors of that navy were still dominant into the 20th century.
The fact that the Bank of England was historically a private business is awkward when you're trying to explain to some modern country why it's not OK that their central bank is giving the leader's nephew $100M in unsecured loans. That sort of discomfort is part of why it was bought by the British government and gradually ceased operating as a private bank within my lifetime. When I was younger I knew people whose mortgage was issued by the country's central bank. Not celebrities or politicians or anything, just bureaucrats who got a good deal, sort of "mates' rates" but for a home loan.
The question is how best to send the modulus, which is a much larger integer. For the reasons below, I'd argue that base64 is better. And if you're sending the modulus in base64, you may as well use the same approach for the exponent sent along with it.
For RSA-4096, the modulus is 4096 bits = 512 bytes in binary, which (for my test key) is 684 characters in base64 or 1233 characters in decimal. So the base64 version is much smaller.
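If you want to sanity-check those numbers, here's a quick Python sketch; I'm using a random 4096-bit odd integer as a stand-in for a real modulus, since only the size matters for the comparison:

```
import base64
import secrets

# Stand-in for an RSA-4096 modulus: a random 4096-bit odd integer.
# (A real modulus comes from key generation, but only the size matters here.)
n = secrets.randbits(4096) | (1 << 4095) | 1

raw = n.to_bytes(512, "big")           # 4096 bits = 512 bytes
b64 = base64.b64encode(raw).decode()   # base64 text form
dec = str(n)                           # decimal text form

print(len(raw))  # 512
print(len(b64))  # 684  (ceil(512 / 3) * 4, padding included)
print(len(dec))  # 1233 or 1234 digits (4096 * log10(2) ≈ 1233.0)
```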
Base64 is also more efficient to deal with. An RSA implementation will typically work with the numbers in binary form, so for the base64 encoding you just need to convert the bytes, which is a simple O(n) transformation. Converting the number between binary and decimal, on the other hand, is O(n^2) if done naively, or O(some complicated expression bigger than n log n) if done optimally.
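To make the asymmetry concrete, this is roughly what the naive decimal conversion looks like (a sketch, not something you'd actually ship): one divmod per output digit, and each divmod scans the whole bignum, which is where the O(n^2) comes from. The base64 path never does bignum arithmetic at all.

```
def to_decimal_naive(n: int) -> str:
    # One divmod per output digit; each divmod walks the whole bignum,
    # so the total work is quadratic in the size of n.
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, d = divmod(n, 10)
        digits.append(chr(ord("0") + d))
    return "".join(reversed(digits))
```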
Besides computational complexity, there's also implementation complexity. Base conversion is an algorithm that you normally don't have to implement as part of an RSA implementation. You might argue that it's not hard to find some library to do base conversion for you. Some programming languages even have built-in bigint types. But you typically want to avoid using general-purpose bigint implementations for cryptography. You want to stick to cryptographic libraries, which typically aim to make all operations constant-time to avoid timing side channels. Indeed, the apparent ease of use of decimal would arguably be a bad thing, since it would encourage implementors to just use a standard bigint type to carry the values around.
You could argue that the same concern applies to base64, but it should be relatively safe to use a naive implementation of base64, since it's going to be a straightforward linear scan over the bytes with less room for timing side channels (though not none).
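For reference, a naive encoder is more or less this (a Python sketch): one linear pass, three bytes in, four characters out. The table lookup indexed by secret-derived values is the "though not none" part, since in principle it can leak through cache timing.

```
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64_naive(data: bytes) -> str:
    # Single pass over the input: every 3 bytes become 4 output characters.
    out = []
    for i in range(0, len(data), 3):
        chunk = data[i:i + 3]
        pad = 3 - len(chunk)
        block = int.from_bytes(chunk + b"\x00" * pad, "big")
        chars = [B64[(block >> s) & 0x3F] for s in (18, 12, 6, 0)]
        if pad:
            chars[4 - pad:] = "=" * pad  # replace unused positions with padding
        out.extend(chars)
    return "".join(out)

# b64_naive(b"Ma") == "TWE=", b64_naive(b"abc") == "YWJj"
```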
Converting large integers to decimal is nontrivial, especially when you don't trust languages to handle large numbers.
Why you wouldn't just use hexadecimal like everyone else seems to, I don't know. There seems to be a rather arbitrary cutoff where people prefer base64 to hexadecimal.
That sounds horrible if you want to transmit a base64 string where the encoded data's length is a multiple of 3 (so there's no padding) and for some cursed reason there are no letters or special characters involved. If "7777777777777777" is your encoded string because you're sending a string of periods encoded in BCD, you're going to have a fun time. Perhaps that's karma for doing something braindead in the first place, though.
Then just prefixing it with an underscore or any random letter would've been fine, but of course base64-encoding the binary representation makes you look so much smarter.
Personally, "0%" doesn't mean "hasn't started" to me, it means "not enough progress has happened to reach 1%". Assuming I'm not alone with that, the rounding becomes a simple truncation `roundedPercent = int(percent)`
An implementation as simple as the concept, which is a good sign in my experience
I will never understand how people think like that. Sure, there are things computers cannot compute, but that's because those things are _uncomputable_ in general
It's obvious to you because of people like Church and Turing. Until their work on the Entscheidungsproblem was published, it was not only common but mainstream to believe that nothing was inherently incomputable. They not only demonstrated that such things exist, they did so by proving that the set of everything computable is exactly the set of things their approaches could compute, and that certain problems lie outside that set.
Yeah, that's definitely fair. What I'm annoyed about is the "checkmate, computers" stance, pretending that the problems computers cannot solve could somehow be solved by other means.
I don't believe only Turing machines are capable of executing other Turing machines... Surely lambda calculus can do the same? I was under the impression that lambda calculus can indeed execute itself with even less code than a universal Turing machine needs.
There are several very dubious claims stated way too confidently like this in the article, such as "Yep, virus scanners are almost completely useless".
WebAssembly is not without tradeoffs either. I'm not an expert, but it's often heavier because the native language's stdlib gets bundled in; interop with the browser environment is annoying because that environment is still JavaScript-tailored; and it's just plain hard to write code that compiles to WASM in the first place (e.g. in Rust it may need to be `#[no_std]`).