Hacker News
The Security, Usability and Multi-Device-Problem of Messaging Apps (tazj.in)
57 points by tazjin on Dec 17, 2015 | 31 comments



Wait, have you actually used Signal multi-device? Your thesis is that it's not usable because you have to scan a QR code once, as opposed to Telegram where you have to receive an SMS on your device and manually type it in on your desktop?

The flow you're describing in your "possible solution" is basically identical to how Signal works now: you install Signal on a new device, approve it with your existing device, and are done. Anything involving the word "blockchain" here is fundamentally not going to work. It sounds like you want CONIKS, but there are some really substantial barriers to deploying that for marginal gains (and some clear losses).


Given that the OP didn't know about Signal-Desktop 4 hours ago when I told them (see https://news.ycombinator.com/item?id=10750637 ) and was bashing Signal then for not supporting it, I'd say tazjin has at most had a cursory look at it. I'm not sure how or why they feel qualified to blog about it.

I've been using Signal-Desktop since the first days of the Beta and so far it's been a really pleasant experience. Big thank you to all involved!


Signal having all three qualities the author is blogging about (security, multi-device sync, and usability) basically renders this post and the SMU theory moot. This looks like a case of "I have my theory, now let's bend the facts to fit it".


I'd argue that usability in Signal isn't that great for other reasons, particularly around the contact list.

I responded in another thread about this. [1]

[1] https://news.ycombinator.com/item?id=10753881


Since when do you need qualifications to blog about something?


To be fair, Signal desktop is really new, the author may not have known about it. But yes, the flow of adding devices in Signal is very seamless and simple.

The issue of not supporting SMS though makes usability go down the toilet, because it's not immediately obvious who you can message or receive messages from, but I understand the reason for not supporting it.


The author definitely knows about it, since the thesis of the article is that Signal would be near-perfect if the multi-device flow were usable but "..the process for setting it up is annoying."

It's fine if the author hasn't tried it, but don't write an article criticizing the usability of something you haven't even tried.


The author concludes by asking for feedback on the idea. It would be difficult to get that from anyone, let alone people with actual expertise in cryptography, without writing a blog post and sharing it in venues like Hacker News. Rolling their own cryptography and selling it on the app store would probably be worse, as a first public engagement.


> Rolling their own cryptography and selling it on the app store would probably be worse, as a first public engagement

Also not something I would do because I don't feel qualified to implement this beyond a simple POC :-)


> The author concludes by asking for feedback on the idea. It would be difficult to get that from anyone, let alone people with actual expertise in cryptography, without writing a blog post...

Wat? It's trivial: https://lists.riseup.net/www/info/whispersystems


Thanks for taking the time to give some feedback, appreciated!

I didn't say Signal isn't usable, but it doesn't fit into the requirements because there should not be a distinction between primary/secondary devices after onboarding.

It might be possible to change Signal's model to accommodate that, but why even have a model that involves copying private keys between devices if you can avoid it? What if you wanted to revoke privileges from a device, but it shares an identity key with your others?

> Anything involving the word "blockchain" here is fundamentally not going to work

Why? What if you're building something as a federated system and you need to share this key info in the network? Do you have an alternative suggestion?


What are the substantial barriers to deploying something like CONIKS? It doesn't seem to help the multi-device problem, but in general it does seem useful.


I would be curious about this as well. The only attack that I can think of that he might be referring to is the server publishing bad keys for a particular client while that client is offline. This equivocation would be caught fairly promptly though, and I think that eventual proof-of-equivocation is better than forcing users to manually verify keys.


Missing an important part here: Telegram and co. are cloud storage services, whereas WhatsApp, Signal and Threema are not. Think about this: your messages are stored on the server rather than delivered and then deleted. Telegram holds the encryption and decryption keys but claims they are kept in different countries/jurisdictions. So is this safer than deliver-and-delete? I think NO.

I wish somebody would analyze whether the WhatsApp protocol also uses encryption on iOS, for groups and for images, as this website claims: https://github.com/WHAnonymous/Chat-API/wiki/WhatsApp-incomi...


After reading this post, I'm just sitting here smiling contentedly that you didn't give Telegram credit for "Secure".

(I've been loudly trying to convince more people to never use Telegram.)


Do you really need a blockchain? A certificate chain from the root certificate of the primary device to any secondary device should be enough.


The point is to not have a primary device; every device added to the pool should be able to sign in new devices.
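For illustration, the pool idea can be sketched without any real cryptography; `DevicePool`, `admit` and the log layout here are all invented for this toy, and real signatures would replace the membership check:

```python
# Toy sketch, no real crypto: a device pool with no primary device.
# Any device already in the pool may admit a new one; trust is derived
# by replaying the append-only log, not from a root/primary device.

class DevicePool:
    def __init__(self, first_device):
        # The log starts with a self-admitted first device.
        self.log = [(first_device, first_device)]  # (sponsor, device)

    def admit(self, sponsor, new_device):
        if sponsor not in self.trusted():
            raise PermissionError(f"{sponsor} is not in the pool")
        self.log.append((sponsor, new_device))

    def trusted(self):
        # Replay the log: a device is trusted if its sponsor was
        # already trusted when it was admitted.
        members = {self.log[0][1]}
        for sponsor, device in self.log[1:]:
            if sponsor in members:
                members.add(device)
        return members

pool = DevicePool("phone")
pool.admit("phone", "laptop")
pool.admit("laptop", "tablet")    # a secondary device admits another
print(sorted(pool.trusted()))     # ['laptop', 'phone', 'tablet']
```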


I still don't think that a blockchain is really the answer though. It's an answer to Zooko's Triangle, but it also removes an element of privacy.

Do we need global names? I just want to talk to my brother, my friend, my colleague; I don't really care about talking to people I don't know. Given that, what's wrong with identifying people as 'Billy's wife's mother'?


> it also removes an element of privacy

What are you thinking of specifically? The chain wouldn't contain any network of contacts (like GPG signatures) and only hashed information about the user.

Worst case is that a user's number of devices and public keys can be looked up.


> The chain wouldn't contain any network of contacts (like GPG signatures) and only hashed information about the user.

If 'hashed user information' means a function of the user's identifier (e.g. the user's email address, or some other global user ID), then privacy is lost because one could hash any user's ID and see what his public keys are; many protocols include an identifier for the sender's public key in order to guide the receiver, and vice-versa. This means that one could see what messages that user has sent or received.

If it doesn't contain some function of the user's identifier, then what would be stored exactly, and what would its utility be? If I can't consult the global blockchain asking, 'what is the key for smith@example.invalid?' then why have a global blockchain? If it's simply a record where key X says key Y is the same as key X, then key Y says no it's not, then key X says yes it is again … what's the value?

Blockchains are an amazing innovation for achieving global agreement. Because of this, they can be used to bind easy-to-remember petnames (like 'bill' or 'Bill Gates') securely, such that the entire world knows that BLOCKCHAIN('Bill Gates') is the founder of Microsoft, not his father or some random fellow. If you're not using a blockchain for global agreement, what's the point?
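The lookup concern from above can be demonstrated in a few lines; the addresses and the `chain` mapping are invented toy data:

```python
# Sketch of the privacy issue: if the chain stores sha256(identifier),
# anyone can test candidate identifiers offline and confirm accounts
# without ever contacting the user.
import hashlib

def h(identifier: str) -> str:
    return hashlib.sha256(identifier.encode()).hexdigest()

# Public "blockchain" entries: hashed identifier -> public keys.
chain = {h("alice@example.invalid"): ["pubkey-A1", "pubkey-A2"]}

# An attacker hashes guesses and checks for membership:
for guess in ("alice@example.invalid", "bob@example.invalid"):
    if h(guess) in chain:
        print(guess, "has an account with keys", chain[h(guess)])
```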


Yes, it's supposed to be a hash of the user's identifier.

The goal is, of course, being able to look up the keys of individual users based on their publicly shared identifier. This system isn't trying to hide a user's keys in any way, so I think we're simply trying to solve different problems.


> Yes, it's supposed to be a hash of the user's identifier.

Which is why it's a privacy issue: given someone's public identifier, I can see what his key is and how it changes over time. I can see if he has a key, which is interesting information. As I noted, I can probably monitor traffic to find messages encrypted by or to his key.

Heck, I can pregenerate many potentially-valid identifiers (e.g. matching [0-9a-z]+@[a-z][0-9a-z]+\.(com|net|org|edu)) and look for them in the public blockchain, using that to confirm whether the accounts are valid (this could be used to see if an email address is valid without ever sending email to it). I could use this to verify addresses before sending out spam or malware.

If I can see two identifiers who communicate with one another, I could use this to, say, send malware purportedly from one to the other via unsigned email; the recipient is likely to trust it because it appears to come from someone he knows.

You see the issues, I hope. This kind of thing really is tough.


Alice sends a message to Bob; her device asks the messaging service for Bob's account's identity and public keys.

This is a classic trusted third party problem. We need to trust the service that the public key provided (and stored in the blockchain) actually belongs to Bob.

This boils down to Zooko's triangle: if the service allows associating the account with a new key, there is no way to prove the new key actually belongs to Bob. If there is no such way, Alice will have to somehow figure out Bob's undecipherable new ID.

The only benefit I can see of keeping the keys in a blockchain is that your own client can monitor the blockchain for new keys associated with your account, ringing alarm bells in such a case.
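That monitoring idea might look something like this toy sketch; the account names, keys and log layout are all invented:

```python
# Toy sketch of the monitoring benefit: a client watches the public log
# for keys bound to its own account and alarms on any entry it did not
# create itself.

my_account = "alice"
my_known_keys = {"key-1", "key-2"}       # keys this client generated

public_log = [
    ("alice", "key-1"),
    ("bob",   "key-9"),
    ("alice", "key-666"),                # bound by someone else!
]

def unexpected_keys(log, account, known):
    return [k for acct, k in log if acct == account and k not in known]

alarms = unexpected_keys(public_log, my_account, my_known_keys)
print(alarms)    # ['key-666'] -> time to ring the alarm bells
```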


It depends on what you are establishing trust in: Bob's account, Bob's first key, or Bob as a person? The service could write a hash of Bob's account name / phone number / whatever together with the account's identity to the chain. That way it cannot easily respond with a different set of keys for the same account.

One attack vector I can think of in terms of a malicious third-party is that they could take your initial account creation request and key, create their own key and use that as the initial one of your account in the chain.

In this model you have the ability to find out about that though.


There is one incredibly simple solution to the 'multi-device problem' with security:

1. Allow the user to copy their keys to another device themselves.

To further improve security, you'd want a transfer mechanism that cannot be copied as text / emailed / IM'd / whatever, just so the user is discouraged from sending it through their Gmail. One suitable candidate is QR codes, but without a camera, "type one letter at a time" would also work.
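As a rough sketch of the typing fallback (the chunk size and checksum length are arbitrary choices here, not any real app's format):

```python
# Render a key as base32 chunks with a short checksum so typos made
# while typing it in one letter at a time are caught on the new device.
import base64, hashlib

def key_to_words(key: bytes, chunk: int = 4) -> str:
    check = hashlib.sha256(key).hexdigest()[:4]        # short typo check
    text = base64.b32encode(key).decode().rstrip("=")
    chunks = [text[i:i + chunk] for i in range(0, len(text), chunk)]
    return " ".join(chunks) + " / " + check

def words_to_key(s: str) -> bytes:
    text, check = s.split(" / ")
    raw = text.replace(" ", "")
    raw += "=" * (-len(raw) % 8)                       # restore padding
    key = base64.b32decode(raw)
    if hashlib.sha256(key).hexdigest()[:4] != check:
        raise ValueError("checksum mismatch - retype the code")
    return key

key = b"0123456789abcdef"            # stand-in for a real private key
encoded = key_to_words(key)
assert words_to_key(encoded) == key  # round-trips correctly
```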


You never want to share keys if you can avoid it.

Your suggestion also doesn't solve the problem of lost, compromised or destroyed devices.

If I have one phone and one computer, no algorithmic solution can determine if the person holding my phone is me, and the person holding my laptop is a thief, or if the person holding my phone is a thief and the person holding my laptop is me.

Honestly, even majority vote is insufficient: is there any reason to trust the person holding my phone and laptop over the person holding my computer? What if my home just burned down?

What's needed is for each device to have its own keys, and for messages to 'you' to really be sent to each device. And when you lose a device, you have to let your friends know, and they have to manually verify which of your devices to no longer trust. This can be presented in a pleasing fashion, but you can't eliminate the need for human judgement.
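The per-device fan-out described above can be sketched like this (no real encryption, all names are placeholders):

```python
# Toy fan-out model: a message to "you" is really one copy per device
# key, so revoking a device means dropping it from the recipient's
# published device list.

devices = {                       # contact's published device keys
    "alice": {"phone": "pkA1", "laptop": "pkA2"},
}

def send(contact, plaintext):
    # One (placeholder-)encrypted copy per device key.
    return {dev: f"enc[{pk}]({plaintext})"
            for dev, pk in devices[contact].items()}

print(send("alice", "hi"))        # copies for both phone and laptop

# Alice loses her laptop; once her contacts verify and revoke it,
# future messages simply stop being encrypted for that key:
del devices["alice"]["laptop"]
print(send("alice", "hi again"))  # phone copy only
```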


I think this depends on your appetite for risk. Forgive the trivialisation, however if you

a). generate a strong symmetric key on the client

b). encrypt keys on the client using AES and the strong symmetric key, and

c). encrypt the AES symmetric key using the user's password, and

d). store that encrypted AES key and the encrypted user keys on a zero-knowledge server, keyed on a hash of the user name,

you might be able to log in from another device using your user name and password. Both encrypted blobs are pulled from the server based on the user name and decrypted by reversing steps (d) back to (a).
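A minimal, stdlib-only sketch of steps (a) to (d); the XOR keystream is only a stand-in for a real cipher such as AES-GCM so the example runs without third-party libraries, and must never be used for real data:

```python
# Password-wrapped key backup on a zero-knowledge server (toy sketch).
import hashlib, os

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher standing in for AES; same call en-/decrypts.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# (a) generate a strong symmetric key on the client
master_key = os.urandom(32)

# (b) encrypt the user's device keys with it
device_keys = b"...private key material..."
enc_device_keys = keystream_xor(master_key, device_keys)

# (c) wrap the master key with a key derived from the password
salt = os.urandom(16)
pw_key = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 200_000)
enc_master_key = keystream_xor(pw_key, master_key)

# (d) store it all server-side under a hash of the user name
record = {hashlib.sha256(b"alice").hexdigest():
          (salt, enc_master_key, enc_device_keys)}

# Logging in from a new device: fetch the record, then invert (c), (b).
salt2, emk, edk = record[hashlib.sha256(b"alice").hexdigest()]
pw_key2 = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt2, 200_000)
mk2 = keystream_xor(pw_key2, emk)              # unwrap the master key
assert keystream_xor(mk2, edk) == device_keys  # device keys recovered
```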


That's always an option, but it will never let you revoke a device again such that future messages won't be encrypted for it.


Am I the only one who does not see the dot before the TLD here?


Hey, I'm afraid I don't understand your comment. Can you explain?


This is what a link source looks like usually for me:

http://i.imgur.com/udKe02n.png

This is what this one looks like:

http://i.imgur.com/1JC34Dv.png



