Hacker News | jtl999's favorites

What a ridiculous article. Nothing comparing the actual feature sets or the different professional use cases.

If easydns is writing shallow blog fodder like this, should I expect the same thing from their actual product line?


I suspect what the new gTLD program has really done is to make com/org/net seem even more trustworthy and premium, because people can no longer keep up with the influx of new gTLDs (all 1,200 of them!).

FWIW, there are a couple of things about Apple's and Google's systems that don't work for Signal:

1. There is no meaningful remote attestation. There's no way to verify that there are HSMs at the other side of the connection at all. The people who issued the certificates are the same people terminating the connections.

2. There's no real information about what these HSMs are or what they're running. Even if we trust that the admin cards have been put in a blender, we don't know what the other weak spots are.

3. The services themselves are not cross-platform, so cross-platform apps like Signal can't use them directly.

4. It's not clear how they do node/cluster replacement, and it seems possible that they require clients to retransmit secrets in that case, which is a potentially significant weakness if true. I could be wrong about this, but the fact that I have to speculate is kind of a problem in itself.

My impression is that you're suggesting the HSMs Apple uses are better than SGX in some way, but it's not clear that anyone could know one way or the other. I think all of the scrutiny SGX is receiving is ultimately a good thing: it helps shake out bugs and improve security. It's not clear to me that the HSMs Apple uses would actually fare better if scrutinized in the same way, which could be a missed security opportunity for them.

We didn't feel that it would be best for Signal to start with a system where we say "believe that we've set up some HSMs, believe this is the certificate for them, believe the data that is transmitted is stored in them." So we've started with something that we feel has somewhat more meaningful remote attestation, and hopefully now we can weave in other types of hardware security, or maybe even figure out some cross-platform way to weave in existing deployments like iCloud Keychain etc.


Spoiler: the default SSH RSA key format uses straight MD5 to derive the AES key used to encrypt your RSA private key, which means it's lightning fast to crack (it's "salted", if you want to use that term, with a random IV).

The argument LVH makes here ("worse than plaintext") is that because you have to type that password regularly, it's apt to be one of those important passwords you keep in your brain's resident set and derive variants of for different applications. And SSH is basically doing something close to storing it in plaintext. His argument is that the password is probably more important than what it protects. Maybe that's not the case for you.

I just think it's batshit that OpenSSH's default is so bad. At the very least: you might as well just not use passwords if you're going to accept that default. If you use curve keys, you get a better (bcrypt) format.
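To make the "lightning fast" claim concrete, here's a toy Python sketch of the legacy derivation the traditional PEM key format uses (OpenSSL's EVP_BytesToKey with MD5 and a single iteration). The passphrase and IV below are made up for illustration; the point is that each password guess costs exactly one MD5:

```python
import hashlib

def evp_bytes_to_key(passphrase: bytes, iv: bytes, key_len: int = 16) -> bytes:
    """OpenSSL's legacy EVP_BytesToKey with MD5, one iteration, as used by
    traditional PEM-encrypted keys (e.g. AES-128-CBC). The 'salt' is just
    the first 8 bytes of the IV printed in the key file's header."""
    derived = b""
    block = b""
    while len(derived) < key_len:
        block = hashlib.md5(block + passphrase + iv[:8]).digest()
        derived += block
    return derived[:key_len]

# Hypothetical IV and passphrase; one MD5 per candidate password.
iv = bytes.fromhex("0123456789abcdef")
target_key = evp_bytes_to_key(b"hunter2", iv)

cracked = next(
    pw for pw in [b"password", b"letmein", b"hunter2", b"qwerty"]
    if evp_bytes_to_key(pw, iv) == target_key
)
print(cracked.decode())  # hunter2
```

Compare that to the newer OpenSSH key format, which runs the passphrase through bcrypt_pbkdf with a configurable round count.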

While I have you here...

Before you contemplate any elaborate new plan to improve the protection of your SSH keys, consider that long-lived SSH credentials are an anti-pattern. If you set up an SSH CA, you can issue time-limited short-term credentials that won't sit on your filesystems and backups for all time waiting to leak access to your servers.
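For illustration, a minimal SSH CA workflow looks something like this (the file names, identity, and principal below are placeholders, and the one-hour validity window is just an example):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# Create the CA key (done once, ideally kept offline).
ssh-keygen -t ed25519 -f ssh_ca -N '' -C 'example CA'

# Generate a user key and sign it with a certificate that expires in 1 hour.
ssh-keygen -t ed25519 -f id_user -N ''
ssh-keygen -s ssh_ca -I alice@example -n alice -V +1h id_user.pub

# Inspect the resulting certificate (principals, validity window, serial).
ssh-keygen -L -f id_user-cert.pub
```

Servers then trust the CA via `TrustedUserCAKeys` in `sshd_config`, so a leaked user key is only useful until its certificate expires.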


To be frank, lobste.rs seems like the same content as Hacker News + /r/programming, but with fewer people commenting.

I had the same question. So a quick run down of what the architecture actually does:

The core feature is Domain Tagging. The architecture tags data with 2 extra bits that define its 'domain' (code, code pointer, data, and data pointer). On its own this does nothing for security, but it allows the churn unit to implement the two moving-target defenses.

Pointer Displacement is the first moving-target defense. It obscures pointer values by adding a random, domain-dependent displacement to them. So the churn unit is able to find all code and data pointers and obfuscate them. It's basically the same as encrypted pointers, except they can be re-encrypted with a new key at run time.
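To make the mechanism concrete, here's a toy Python model of per-domain pointer displacement and churn. The names and representation are mine, not the paper's; real hardware would do this on tagged physical memory, not a Python list:

```python
import secrets

MASK = (1 << 64) - 1
PTR_DOMAINS = ("CODE_PTR", "DATA_PTR")  # domains the churn unit rewrites

class ChurnUnit:
    """Toy model: each pointer domain gets its own random displacement."""
    def __init__(self):
        self.disp = {d: secrets.randbits(64) for d in PTR_DOMAINS}

    def obfuscate(self, ptr, domain):
        return (ptr + self.disp[domain]) & MASK

    def deobfuscate(self, val, domain):
        return (val - self.disp[domain]) & MASK

    def churn(self, memory):
        """Pick fresh displacements and rewrite every tagged pointer in place."""
        new = {d: secrets.randbits(64) for d in PTR_DOMAINS}
        for i, (val, domain) in enumerate(memory):
            if domain in PTR_DOMAINS:
                raw = self.deobfuscate(val, domain)       # undo old displacement
                memory[i] = ((raw + new[domain]) & MASK, domain)
        self.disp = new

unit = ChurnUnit()
memory = [(unit.obfuscate(0x400000, "CODE_PTR"), "CODE_PTR"),
          (0xDEADBEEF, "DATA")]   # plain data is left alone
unit.churn(memory)                # stored bits change...
recovered = unit.deobfuscate(memory[0][0], "CODE_PTR")
assert recovered == 0x400000      # ...but the meaning doesn't
```

The key property this models: a leaked displaced pointer value becomes useless to an attacker the moment the churn unit runs again.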

Domain Encryption is the second moving-target defense. It encrypts each domain with its own key. Details on how the encryption works are sparse; there's just the following quote:

> the domain encryption defense randomizes the representation of code, code pointers, and data pointers in memory using a strong cipher. These assets are encrypted in memory under their own distinct domain keys.

Both of these are actions performed by the churn unit, which can be tuned to run at a desired frequency or when certain rules are violated.

On the topic of rules, there are two types of rules the architecture enforces: ABORT and CHURN. ABORT does the obvious and terminates the program; CHURN causes a re-randomization.

Aborts can be caused by:

- Attempting to execute anything not in the Code domain

- Using a Code-domain value (not a code pointer) as data in any instruction

- A jump target that isn't tagged as a Code Pointer

- A Load/Store address that isn't tagged as a Data Pointer

CHURNs can be caused by:

- Performing an inter-domain comparison (the tags must match)

- Any Code pointer arithmetic

- Any Data pointer arithmetic

- Any arithmetic overflow

- Invalid shift length (shifting by more bits than there are)
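Condensed into a toy tag check (my own simplification for illustration, not the paper's actual specification), the rules look roughly like:

```python
def check(op, tags):
    """Return the rule outcome for an operation over tagged operands.
    Toy condensation of the ABORT/CHURN rules; tags are domain names."""
    if op == "execute" and tags[0] != "CODE":
        return "ABORT"   # executing anything outside the Code domain
    if op == "jump" and tags[0] != "CODE_PTR":
        return "ABORT"   # jump target must be tagged Code Pointer
    if op in ("load", "store") and tags[0] != "DATA_PTR":
        return "ABORT"   # load/store address must be a Data Pointer
    if op == "compare" and tags[0] != tags[1]:
        return "CHURN"   # inter-domain comparison
    if op == "add" and ("CODE_PTR" in tags or "DATA_PTR" in tags):
        return "CHURN"   # any pointer arithmetic
    return "OK"

assert check("jump", ("DATA_PTR",)) == "ABORT"
assert check("compare", ("DATA", "DATA_PTR")) == "CHURN"
assert check("add", ("DATA", "DATA")) == "OK"
```

Note how the CHURN outcomes hit exactly the operations an exploit chain tends to need: comparing and doing arithmetic on leaked pointers.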

So, on to your question regarding ret2libc and friends. Basically, it's an extra layer. A lot of attacks require that you leak some memory address (assuming ASLR is enabled) that is then used in the attack to, for example, craft a ROP or JOP chain. With pointer encryption you also need to leak the encryption key in order to craft a valid pointer.

Then you need to do so without violating an ABORT rule (shouldn't be too hard) and without violating a CHURN rule. The CHURN rules are potentially problematic, since overflows (and pointer arithmetic in general) are a common trick for getting certain values, and both cause churn that could invalidate the data you just leaked.

As for the root question of how a valid application can look up the memory: it's the architecture that decrypts the pointers/data within its existing instructions (rather than adding new instructions, as some pointer-encryption implementations do). So the program just needs the right pointer at run time, and the churn unit will rewrite pointers whenever it runs and changes the keys.

Honestly, it seems like a pretty solid mitigation for the types of attacks it's designed to mitigate. An impossible barrier? I doubt it; without being hands-on and trying to exploit it I can't speak in absolutes, but it would certainly be a challenge.

Though as many others have said, there are plenty of attacks it does nothing against, but the paper is far more modest and accurate about its intent than the .edu post.


Oh, don't get me wrong. I think the project is interesting, and it's obvious open hardware is the way to go - especially if the design can be kept both simple and useful.

I don't really see anything wrong with having the cable be shorter (ie: an open platform smart card).

Let's just say that from a practical standpoint, I'd be much more interested in getting "most" people to use S/MIME and/or GPG with keys and encryption on a dedicated device -- no matter if that device is a re-purposed Android without a baseband chip, running some open Linux-based OS (a full distro or something like Replicant) -- or a smart card, or some kind of dedicated open hardware.

The cascading idea is interesting, but probably more useful in a more adversarial scenario than most people need. Good for running drugs, or a revolution (or insurgency) though ;-)

On a serious note, I do see some real overlap between this military-grade approach and "normal" use cases. Especially for people who find themselves at odds with their government. Be that the FBI targeting #occupy in Zuccotti Park, the German intelligence services/NSA spying on elected officials in Germany -- or people opposed to current policies in China, or advocating gay rights in Russia.

We live in a time where there's enough oppression to go around :-(


Device drivers don't usually need to be updated unless the driver interface changes (e.g. when you update the Linux kernel) or the driver needs to be updated to accommodate quirks of new software (e.g. graphics drivers and new video games).

They probably do want to be getting the latest security patches to the kernel and base OS.


Yes, me too. If you're interested, my current list is:

- A State of Trance with Armin Van Buuren (though recently he's started adding much more talking so I'm close to dropping it)

- Club Life with Tiësto

- Hardwell on Air

- Corsten's Countdown

- Afrojack: Jacked Radio (this one is really hit-or-miss for me. I skip maybe half the episodes)

As an aside:

- the "Song Exploder" podcast is a fascinating view into what goes into making music though that one falls into the "Talk" category

- I love the old "Timeless Mixes" by the (now defunct) DJ River, which are helpfully all available as a podcast; once in a while I'll mark a few of them as unplayed so my player will download them and add them to the playlist.


Yeah, it's a great service. Used to be $50/yr, and they recently bumped it to $70. Given that I listen to it anywhere from 5-15 hours a day, I'd say I'm still getting my money's worth :)

That subscription also gives you the same higher streaming quality for DI.fm's sister services for Jazz Radio, Rock Radio, and RadioTunes (which has a great assortment of channels ranging from jazz to decades music to world and ambient).

The app and site let you mark songs as favorites and review a list of songs you've favorited, but there's no obvious way to purchase those. If they added direct purchases, I'd happily go pick up a bunch of my favorite tracks through them.


If memory serves me right the CVS bug was originally discovered and exploited by a member of an infamous file sharing site. After descriptions(?) of that bug were leaked in underground circles, an east European hacker wrote up his own exploit for it. This second exploit was eventually traded for hatorihanzo.c, a kernel exploit, which was also a 0-day at the time.

The recipient of hatorihanzo.c then tried to backdoor the kernel after first owning the CVS server and subsequently getting root on it.

The hatorihanzo exploit was left on the kernel.org server, but encrypted with an (at the time) popular ELF encrypting tool. Ironically the author of that same tool was on the forensic team and managed to crack the password, which turned out to be a really lame throwaway password.

And that's the story of how two fine 0-days were killed in the blink of an eye.

(The other funny kernel.org story is when a Google security researcher found his own name in the master boot record of a misbehaving server.)

