This is very similar to a design Apple announced for iCloud Keychain several years ago at Black Hat.
iCloud Keychain synchronizes keychains across iOS devices, storing their contents encrypted under the user's passphrase on Apple's cloud servers. In theory, Apple has no access to this data, since they don't know the relevant passphrase. In practice, however, passphrases are weak, even under PBKDF2. An attacker that got access to Apple's cloud environment would simply dictionary attack the encrypted blobs, and would probably succeed a lot of the time.
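To make that concrete, here's a toy Python sketch of why a PBKDF2-wrapped record falls to an offline dictionary attack once an attacker can read it. The record format, iteration count, and "encryption" here are my own, purely for illustration; they're not Apple's actual design.

    # Toy sketch of an offline dictionary attack on a PBKDF2-wrapped record.
    # Format, iteration count, and "encryption" are illustrative only.
    import hashlib, hmac, os

    ITERATIONS = 10_000  # assumed for illustration; real parameters differ

    def wrap(passphrase, secret, salt):
        key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, ITERATIONS)
        blob = bytes(a ^ b for a, b in zip(secret, key))  # stand-in for a real AEAD
        tag = hmac.new(key, blob, "sha256").digest()
        return blob, tag

    def dictionary_attack(blob, tag, salt, guesses):
        # Nothing rate-limits this loop; each guess costs only CPU time.
        for guess in guesses:
            key = hashlib.pbkdf2_hmac("sha256", guess.encode(), salt, ITERATIONS)
            if hmac.compare_digest(hmac.new(key, blob, "sha256").digest(), tag):
                return guess, bytes(a ^ b for a, b in zip(blob, key))
        return None

    salt = os.urandom(16)
    blob, tag = wrap("hunter2", b"escrowed keychain key", salt)
    print(dictionary_attack(blob, tag, salt, ["letmein", "hunter2", "qwerty"]))

PBKDF2 only multiplies the per-guess cost by the iteration count, and human-chosen passphrases come from a small enough space that this usually isn't enough.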
So instead of the obvious naive design, Apple stores enough secret data in an HSM so that you can't attempt a decryption without the involvement of the HSM. At the same time, the HSM enforces an attempt counter, preventing brute force attacks. To scale the design, Apple partitions customers into "clubs" of HSMs, with the attempt counter synchronized among the HSMs of the club using a distributed commit algorithm.
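In other words, the HSM's job is to make that loop impossible to run offline: each check requires a secret that never leaves the device, and the device counts attempts. A toy model of that role (my own sketch, with an illustrative limit, not Apple's actual interface):

    # Toy model of the HSM's role: every guess needs hsm_secret, which never
    # leaves the device, and the device itself enforces the attempt counter.
    import hmac

    class EscrowHSM:
        MAX_ATTEMPTS = 10  # illustrative limit, not Apple's actual policy

        def __init__(self, hsm_secret):
            self._secret = hsm_secret   # generated inside the HSM, never exported
            self._records = {}

        def enroll(self, user, passphrase_hash, escrowed_key):
            tag = hmac.new(self._secret, passphrase_hash, "sha256").digest()
            self._records[user] = {"tag": tag, "key": escrowed_key, "attempts": 0}

        def try_recover(self, user, passphrase_hash):
            rec = self._records[user]
            if rec["attempts"] >= self.MAX_ATTEMPTS:
                raise PermissionError("record locked: attempt counter exhausted")
            rec["attempts"] += 1   # spent even on failure, so guessing is bounded
            tag = hmac.new(self._secret, passphrase_hash, "sha256").digest()
            if hmac.compare_digest(tag, rec["tag"]):
                rec["attempts"] = 0
                return rec["key"]
            return None

An attacker holding the stored blobs can't run the earlier loop against this, because every guess has to go through the HSM, and the HSM stops answering after a handful of failures.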
(Somewhat infamously, Ivan Krstic detailed how they protected the HSMs themselves from malicious attacks by putting their software update signing keys through a "physical hash function" called "Vitamix blender".)
What Signal is doing here is essentially what Apple did, but using SGX instead of an HSM, and Raft as the consensus algorithm to synchronize the counters. You might reasonably prefer the Apple approach to SGX, but at the same time, the data that Signal is storing is a lot less sensitive than the data Apple stores.
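The thing consensus buys you is that the attempt counter survives node failures and can't be quietly rolled back by restarting a node. The ordering is the important part: commit the incremented count to a quorum first, then spend the guess. A rough sketch, where ReplicatedLog is a hypothetical stand-in for a Raft-backed store rather than any real Signal or Raft API:

    # Commit the incremented counter to a quorum *before* evaluating the guess,
    # so a crashed or deliberately killed node can't roll the count back.
    # ReplicatedLog is a hypothetical stand-in, not a real Raft or Signal API.

    class GuessLimiter:
        def __init__(self, log, max_attempts=10):
            self.log = log                  # quorum-replicated counter store
            self.max_attempts = max_attempts

        def try_guess(self, user, check_guess):
            attempts = self.log.read(user)          # last committed count
            if attempts >= self.max_attempts:
                raise PermissionError("record locked")
            self.log.commit(user, attempts + 1)     # quorum-commit first...
            if check_guess():                       # ...then spend the guess
                self.log.commit(user, 0)            # success resets the counter
                return True
            return False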
Probably the biggest end-user takeaway from this announcement is that it's the start of a process where Signal is able to durably and securely store social graph information for its users (without revealing the social graphs directly to Signal itself, unlike virtually every other secure messaging system). Once they can do that, they'll have ended most of their dependence on phone numbers.
FWIW, there are a couple of things about Apple's and Google's systems that don't work for Signal:
1. There is no meaningful remote attestation. There's no way to verify that there are HSMs at the other side of the connection at all. The people who issued the certificates are the same people terminating the connections. (See the sketch after this list for the kind of check attestation makes possible.)
2. There's no real information about what these HSMs are or what they're running. Even if we trust that the admin cards have been put in a blender, we don't know what the other weak spots are.
3. The services themselves are not cross-platform, so cross-platform apps like Signal can't use them directly.
4. It's not clear how they do node/cluster replacement, and it seems possible that they require clients to retransmit secrets in that case, which is a potentially significant weakness if true. I could be wrong about this, but the fact that I have to speculate is kind of a problem in itself.
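On point 1, this is roughly the check that remote attestation makes possible in the first place. The report fields are the usual SGX ones, but the code is only an illustrative sketch, not any real SDK API; quote-signature verification is abstracted into a flag the caller supplies.

    # What remote attestation lets a client check before it sends secrets anywhere.
    # Illustrative sketch only; not a real SGX SDK API.
    from dataclasses import dataclass

    @dataclass
    class AttestationReport:
        mrenclave: bytes     # measurement of the enclave code at launch
        report_data: bytes   # data the enclave bound into the quote

    def should_trust(report, signature_valid, expected_mrenclave, tls_pubkey_hash):
        if not signature_valid:                         # must chain to the vendor's attestation root
            return False
        if report.mrenclave != expected_mrenclave:      # must be the code we actually audited
            return False
        if report.report_data[:32] != tls_pubkey_hash:  # must bind this specific connection
            return False
        return True

With an operator-run HSM service, the only analogous thing a client can check is a certificate issued by the same operator that terminates the connection.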
My impression is that you're suggesting the HSMs Apple uses are better than SGX in some way, but it's not clear that anyone could know one way or the other. I think all of the scrutiny SGX is receiving is ultimately a good thing: it helps shake out bugs and improve security. It's not clear to me that the HSMs Apple uses would actually fare better if scrutinized in the same way, which could be a missed security opportunity for them.
We didn't feel that it would be best for Signal to start with a system where we say "believe that we've set up some HSMs, believe this is the certificate for them, believe the data that is transmitted is stored in them." So we've started with something that we feel has somewhat more meaningful remote attestation, and hopefully now we can weave in other types of hardware security, or maybe even figure out some cross-platform way to weave in existing deployments like iCloud Keychain etc.
"My impression is that you're suggesting the HSMs Apple uses are better than SGX in some way, but it's not clear that anyone could know one way or the other. "
I predicted SGX would see more attacks simply because it's widely available and there's more incentive to attack it. Those attacks started showing up. The HSMs get an obfuscation benefit on top of whatever actual security they have.
The main benefit of a good HSM, though, is its tamper resistance. With a meeting of mutually-suspicious parties, you can know it was received intact, set up properly, and loaded with the right code, and that no secret updates happen outside those meetings. From there, there's probably a better chance that nobody has extracted secrets from it than from an Intel box exposed to whatever SGX attacks, side channels, etc. are going around.
My recommendation was to combine several of them (i.e., security via diversity) if one could afford it. The systems in front of them should also have strong endpoint security that carefully sanitizes and monitors the traffic. Think a security-focused design such as OpenBSD or INTEGRITY-178B instead of Linux, and a safe systems language for any new code. It's good that you're using some Rust.
Honestly, I'm just hedging against people who spend a lot of time thinking about SGX and have formed opinions about it. I don't have a strong opinion either way. My "take" here is just that the information you're protecting with SGX is information Wire "protects" with indexed plaintext in a database, and that SGX vs. HSM is not really a useful debate to have in this one case.
These days, however, you can do password resets manually with Apple. It's no longer as stringent as before, when losing your password without recovery methods enabled meant your account was as good as gone. The current Apple account system is a lot weaker.