Hacker News

I can see a lot of people trashing Matrix.org or the "hacker" themselves (the hacker opened a series of issues detailing how he managed to get in - https://github.com/matrix-org/matrix.org/issues/created_by/m...). However, everyone seems to be missing the point: Matrix seems like a pretty cool and open project, and someone taking over their infrastructure in such an open way is also great for the community. Even though it's a little dubious on the legal side of things, I believe it's great that it was approached with transparency and a dose of humor.

Some might argue that this is harmful to Matrix as a product and as a brand. But as long as there was no actual harm done and they react appropriately by taking infrastructure security seriously, it could play out well in the end for them. This whole ordeal could actually end up increasing trust in the project, if they take swift steps to ensure that something like this does not happen again.




On the first issue opened by the hacker:

> Complete compromise could have been avoided if developers were prohibited from using ForwardAgent yes or not using -A in their SSH commands. The flaws with agent forwarding are well documented.

I use agent forwarding daily and had no idea it contained well known security holes. If that's the case, why is the feature available by default?


SSH agent forwarding makes your ssh-agent available to (possibly some subset of) hosts you SSH into. This is its purpose. Unfortunately, it also makes your ssh-agent available to (possibly some subset of) hosts you SSH into.


I never quite understand why there’s not a confirm version. ForwardWithConfirmation or something. I’m active when I need forwarding - would be happy to simply be prompted before it’s allowed.


OpenSSH does have confirmation: use the '-c' switch to ssh-add.

https://man.openbsd.org/ssh-add


Or "AddKeysToAgent confirm" in ~/.ssh/config
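Putting the two suggestions above together (both are real OpenSSH options; the key path is illustrative):

```
# ~/.ssh/config — ask for confirmation every time a key the agent
# picked up automatically is used:
AddKeysToAgent confirm

# Or per key, when adding manually (prompts on every signature request):
#   ssh-add -c ~/.ssh/id_ed25519
```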


Waaaaaaat?! That could definitely be better known.


TIL :|


Hang in there.


This could be a sane default.


Hm, anything similar for gpg agent (both for gpg, and as a stand-in for ssh-agent)?

Edit: looks like I need to edit my sshcontrol file

https://www.gnupg.org/documentation/manuals/gnupg/Agent-Conf...


If you use a YubiKey with touch-to-use enabled, that's basically what you're asking for - each authentication will require touching the token.


I just enabled that after seeing this incident.

For people in the same boat, it can be done trivially using the YubiKey Manager CLI: https://developers.yubico.com/yubikey-manager/


Some ssh agent implementations do this, notably the one built into Android ConnectBot can be configured to request confirmation each time it is asked to authenticate. Unfortunately ssh-agent (from OpenSSH) does not as far as I know. It's happy to authenticate as many times as requested without any notification.


It can, and it's determined per key when added to the agent.

Look for -c here: https://man.openbsd.org/ssh-add


Indeed it is - I even checked the man page before posting the comment and completely missed that option.


Is there a secure alternative that achieves the same outcome?


Here are a few ideas that might help.

Use separate keyboard-interactive 2FA (I recommend google-authenticator) for production ssh access.

Use a key system which requires confirmation or a PIN to authenticate (such as a Yubikey). Use a persisting ssh connection with Ansible (ControlPersist) to avoid unnecessary multiple authentications.

Allow connections only from whitelisted IPs, use port knocking to open temporary holes in your firewall, or require connections to production infrastructure to go through a VPN.

Access production infrastructure from hardware dedicated for that purpose, never do anything else on it.

I wish there was a way in ssh to tag connections and only allow agent forwarding to keys with the same tag. That would prevent agent forwarding production keys from a dev host.
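The ControlPersist suggestion above looks roughly like this in ssh_config (host pattern and timeout are illustrative):

```
# ~/.ssh/config — one authenticated connection, reused by subsequent
# ssh/Ansible runs instead of re-authenticating each time:
Host prod-*
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```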



I'm not sure. A secure, backwards-compatible (with older servers) alternative, which only exposes keys you explicitly choose to expose, should be doable and might help.

Another option would be for an SSH client to present a full-screen "$HOST is trying to use your SSH PRIVATE keys. Press enter, then type "~A" to allow." prompt.


`ProxyJump`


Hadn't seen that before. This article explains it briefly: https://www.madboa.com/blog/2017/11/02/ssh-proxyjump/


This article is... weird. It mentions SOCKS5, DynamicForwarding and a "decent version of nc", while you don't need anything at all for forwarding the connection -- SOCKS is not involved in any way, and the initial 1995 release of nc would work just fine.

Here is a much better explanation (from [0]):

> ProxyJump was added in OpenSSH 7.3 but is nothing more than a shorthand for using ProxyCommand, as in: "ProxyCommand ssh proxy-host -W %h:%p"

So it's the same thing the top poster was talking about.

[0] https://superuser.com/questions/1253960/replace-proxyjump-in...
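In ~/.ssh/config terms, the shorthand relationship looks like this (host names illustrative):

```
# ~/.ssh/config — these two host blocks behave the same
# (ProxyJump needs OpenSSH 7.3 or newer):
Host target
    ProxyJump bastion

Host target-legacy
    ProxyCommand ssh bastion -W %h:%p
```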


Do you keep your keys on the proxy host, then? Otherwise, "ForwardAgent yes" and you're back to the same situation.


ProxyJump uses the keys from the original host, not the proxy host.


I know. That's why I asked. Chained agent forwarding will serve your keys just the same, so ProxyJump is not "a secure alternative that achieves the same outcome".


Are you disagreeing with the "secure alternative" or the "same outcome"? I thought the difference between ProxyJump and agent forwarding is the following:

Agent forwarding forwards the agent socket to the proxy server. Thus any ssh connection originating from the proxy server can reuse the agent, and with that has the same access to the agent as the originating host.

ProxyJump routes the ssh connection through the proxy host. The crypto takes place between originating host and target host, not between proxy host and target host. ssh connections originating from the proxy host can not access keys from the originating host.

But maybe my understanding of ProxyJump is incorrect?


I know exactly how agent forwarding and ProxyJump work, but I'm having a hard time understanding what you mean.

ProxyJump proxies your ssh connection, so connecting from A to B via proxy X the connections go A->X and X->B.

You can use AgentForwarding with ProxyJump, in which case agent connections go B->X->A.

I cannot see how ProxyJump would somehow be an alternative to AgentForwarding. You can use both independently.


> ProxyJump proxies your ssh connection, so connecting from A to B via proxy X the connections go A->X and X->B.

No, it rather works like this:

A -> B via X establishes A->X and then, through that connection tunnels a new ssh-connection from A->B.

A->X, then X->B would require forwarding the Agent from A to X, so that the connection from X->B can authenticate using that agent. Proxying the connection does not require X to ever authenticate to B, the authentication happens straight from A->B (1). Thus, no agent (forwarding) needed. You can also chain ProxyJumps: A->X->Y->B tunnels A->B in A->Y which is then tunneled through A->X. In that regard, ProxyJump and ProxyCommand can replace AgentForwarding in most use cases. There are some uses where AgentForwarding is the only solution, though.

(1) Added benefit: X never sees the actual traffic in unencrypted form and all port forwards A<->B work
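In config terms, the chained case described above (host names illustrative):

```
# ~/.ssh/config — A -> X -> Y -> B, end-to-end encrypted between A and B,
# no agent forwarded to X or Y:
Host B
    ProxyJump X,Y

# one-off equivalent: ssh -J X,Y B
```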


Hehe, I think I figured out the source of the confusion.

I was thinking that the threat is that a compromised B gives access to your keys via agent forwarding. Presumably if you make keys available on B, you need them there. There's nothing ProxyCommand does to help there.

But you're talking about using ProxyCommand as an alternative for connecting A->X and then X->B, so keys are not available on X. That's of course an improvement.



That project looks dead.


I think an issue here is we've been told for a long time "always restrict access to the environment through a bastion host" without much implementation detail discussed after that. Agent forwarding tends to show up as the most efficient way to implement this.


Basically you need to make sure the host you are SSH'ing into with an agent is secure. Otherwise the root user on that host can access your agent socket and connect to any other machines your agent can.

So if you SSH -A to a compromised Jenkins server, and you've got all your production keys loaded in your agent, the hacker can now authenticate to all those production machines as well.

So don't ever SSH -A into a machine unless you KNOW it's secure. The way I think about it is: unless I trust the machine enough to leave my private keys on it, I'm not going to SSH -A into it.
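One way to enforce that rule of thumb is to make no-forwarding the explicit default and whitelist only hosts you would trust with your keys anyway (host names illustrative; note that `ssh -A` on the command line still overrides the config):

```
# ~/.ssh/config — first matching value wins, so the specific host
# block must come before the catch-all:
Host trusted-bastion
    ForwardAgent yes

Host *
    ForwardAgent no
```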


> I use agent forwarding daily

That's why. It's useful, but you have to be mindful of the security risks involved in using it.


Or the hacker could have responsibly disclosed the issue to Matrix and then reported their findings like a professional. Besides, we're still defacing web sites? I thought that went out of style years ago. Did the hacker make sure to post that on MySpace too?


Responsible disclosure is about not enabling third parties to leverage the disclosure to gain access. In this case the hacker did not disclose the security holes to third parties before they were closed (i.e. the hacker could only still access the hosts because he had access to them in the past; new access was (hopefully) not possible anymore).

Which of course doesn't mean that the hacker should have just sent an email to the Matrix team.


> The matrix.org homeserver has been rebuilt and is running securely;

We should have more bounties. Let users donate and put wallets on servers. An attacker would be able to take these funds. It's a reasonable measure of infrastructure security.


To avoid perverse incentives, you should also build in some reward for the developers/operators. As in: If the server gets hacked, the money goes to the whitehat. If the server does not get hacked for $TIMEFRAME, the money goes to the people responsible for its security.


Also, there needs to be a requirement that the hacker actually publish the results of how they did it. Otherwise, you run the risk of the hacker just walking away with the funds or giving a bogus reason (after they've already spent the wallet).

Therefore, the wallets should be stored GPG encrypted in some published location. After the hacker has successfully penetrated and retrieved the file, they need to publish a "how I did it" document along with the hash of the GPG encrypted wallet.

Once devs have confirmed the vulnerabilities exist, they respond with the passphrase to decrypt the wallet.
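A minimal sketch of the commit step in Python (names and the wallet blob are illustrative; this is just the hash-commitment part, not the GPG encryption):

```python
# Hash-commitment sketch: publish sha256(encrypted_wallet) up front, so the
# later "how I did it" write-up can be checked against the exact blob that
# was on the server. The wallet bytes here are a stand-in.
import hashlib

def commitment(encrypted_wallet: bytes) -> str:
    """Hex digest published before the challenge starts."""
    return hashlib.sha256(encrypted_wallet).hexdigest()

def verify(encrypted_wallet: bytes, published_digest: str) -> bool:
    """Check a retrieved blob against the published commitment."""
    return commitment(encrypted_wallet) == published_digest

blob = b"-----BEGIN PGP MESSAGE----- ..."   # stand-in for the encrypted wallet
published = commitment(blob)
print(verify(blob, published))              # True: blob matches the commitment
print(verify(b"tampered", published))       # False: any change is detectable
```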


Unless I'm missing the joke, this is a bug bounty with extra steps.


My idea was to not require any explanations, so that a blackhat could grab the wallet too. It's just about being able to say "this server is $1k secure". I think it's fantastic that we have the technology to do that.

You still need some trust that the private keys to a given wallet are on the server, but apart from that, when you know there's $10,000 on the server for anybody who can access it, it says something about how secure the machine is.

Plus you get instant notification when the server is compromised. Not every hacker is kind enough to let you know.


How would a blackhat grab the wallet if it's GPG encrypted and needs the passphrase from the dev?


I like this idea!


Seems perverse to me as well. It might be a better idea to just fund Matrix enough to have at least someone on it full time. With $3,752 per month on Patreon right now, I can't imagine a lot is left after infrastructure costs and taxes. Certainly not enough to let Arathorn or someone go out of his way to get expensive security training.


Seems like asking for trouble. Basically you're putting a thousand dollars cash in your house, then telling the world: "I have a thousand dollars cash in my house; if you find a way to break in and take it, it's yours. I won't make a fuss, because you're doing me a favor by exposing a vulnerability."

Just please don't take the other valuables and ... oh yeah, please don't mess with any of my family members and maybe please let's try to keep it at no more than one hundred people trying at the same time b/c otherwise things might get out of hand.


You only get the 1000 Euros if you didn't do any of the harmful things you mentioned. It is not "cash" in the house.


There are quite a few so-called guides (opsec playbooks for crime) that I found specifically on Wall Street Market (a darknet marketplace like the now-defunct Silk Road), available for purchase.

Some of them go beyond mere instruction booklets: they promise access to their chat systems via invitation (upon purchase of the pdf) and offer some kind of limited coaching. It is essentially the recruiting mechanism to bring in lower-ranking soldiers who start out as mules or handlers, or move up from re-selling goods.

A couple of these guides point out how much Telegram sucks, etc., and that they have now moved to p2p-based systems. One praised Matrix heavily for its good security features.

The tech-savviness of many vendors has picked up considerably since I first started watching. There is a strong push to re-think and refactor both tools and their processes (yes, yes - this happens constantly, otherwise they get caught, but never as fast or aggressively as in these past months).

It's likely that this is just a (s)kiddy enjoying the attention. Though quite a lot of players have more than just an "academic" desire to ensure these (their) systems can withstand an attack by LE. When I browsed the Matrix issues on GitHub I couldn't help but immediately recall the strange emphasis on "we have switched to matrix". It's far-fetched, but I'd say somebody may have a strong interest in seeing these issues resolved (or has gotten genuinely fed up and wanted to do something, as opposed to this being just a skid who did it only for the attention).

For a good analysis of some of these tutorials and the philosophy behind them, see "Discovering credit card fraud methods in online tutorials":

https://www.researchgate.net/publication/303418684_Discoveri...


There are some weird people in those issue threads..


What did they say? The comments have been deleted.


checking in as an internet weird person here, any time you have a platform that synthesizes anonymity with collaboration/social interaction, weirdos like us are gonna pop out like clockwork because we find a safe haven for our, uh, weird stuff. a place to not be judged or whatever. i think the sjw type terminology for it is a safe space. and of course due to the human element being so easily corruptible, many people do also tend to use such things for illicit purposes.

hey anybody else remember the days of T-Philez?


The GitHub issue got closed or removed, it looks like. There is a new issue where people are complaining about the first one getting closed:

https://github.com/matrix-org/matrix.org/issues/367


The issues were getting a ton of spam messages, so they have been locked so that only collaborators can comment. They will be restored when the spam stops: https://github.com/matrix-org/matrix.org/issues/367#issuecom....


I would like the Matrix protocol and implementation to be better prepared for such cases.

While I didn't lose access to my encrypted messages, since I used the 'Encrypted Messages Recovery' function of Riot.im, I guess a lot of people have. Maybe allow storing more information on the client side?


I do not really like the fact that this feature can only backup keys server-side, so I did not enable it.

I do however have a key backup dating back some time, which will hopefully restore some of my encrypted messages. But basically, I understand that every encrypted message was at risk of being lost, so it's not that big of a deal.


The backed up keys are encrypted against a client-generated Curve25519 public key, with new session keys being added incrementally (so you don't need to provide the key after you set it up)[1]. Personally I don't see it as much more of a risk than trusting them to host the ciphertext of your messages.

People have different threat models. When chatting with my family, it's more important that we have a permanent history of our messages rather than the worry of them getting leaked. But if you're a whistleblower you have a different set of requirements.

[1]: https://github.com/uhoreg/matrix-doc/blob/e2e_backup/proposa...


You have always been able to export your keys manually to a file.


I agreed with your comment, until I followed the github link and found that all issues had been removed.


Matrix operational security is a joke, and the developers' understanding of security is a joke. This is 2019, not 1992.

Infrastructure with ssh access without hole punching for currently active authorized connections only? Decrypted signing keys accessible over the network? CI servers and developers having root access?

Though the "we had to revoke all the keys so you lost access to your encrypted messages unless you backed them up" takes the cake.


> "we had to revoke all the keys so you lost access to your encrypted messages unless you backed them up" takes the cake

This is just how it works. It's been well documented, and mobile clients got updates that back up the keys automatically. It's also effectively the same as WhatsApp and some other IMs (they just don't even save your encrypted messages). Either way - back up, or lose your history.


When it comes to criticism about backup I have no problems with things getting wiped from the server. I assume a good p2p design has a "little server" and as much client as possible in it.

Having clients enforce proper backups by default, or otherwise properly educating the user about what happens if they don't back up, would be as important as getting the code right. There is little difference to the user between losing data because they didn't understand they really had to do backups, and having their keys compromised and messages deleted by a malicious third party.

I do agree with all of GP's other points though.


I stand by the assertion that it indicates the Matrix people are clueless.

If this is a design constraint, then the security model needs to accommodate that the user keys are the pot of gold. That means there needs to be a service, provided by a dedicated server that is inaccessible in the course of normal operation via any means other than a well-defined, braindead-simple protocol (<keyid>:<command>:<message>), providing the message-manipulation/key-store functions only to other authorized production hosts that need to be able to access this functionality.

The server running the service should have a security policy that prevents running any software not supposed to be already present on the server (use an SELinux enforcement policy) to minimize the attack surface; it should have its own SSH keys that are not generally accessible during normal operation, be accessible only from specific IP addresses, and so on. If it is on AWS, it should probably be in a separate account.
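A hedged sketch of how small the request surface of such a service could be, assuming the <keyid>:<command>:<message> wire format the parent describes (all names here are made up for illustration):

```python
# Sketch of a minimal key-service request handler for the
# <keyid>:<command>:<message> protocol described above. The point is the
# tiny, whitelisted command surface; everything else is rejected outright.

ALLOWED_COMMANDS = {"sign", "decrypt"}  # hypothetical command whitelist

def handle_request(line: str) -> str:
    """Parse one request line; return an error marker instead of raising."""
    parts = line.split(":", 2)  # maxsplit=2: the message may contain colons
    if len(parts) != 3:
        return "ERR:malformed"
    keyid, command, message = parts
    if command not in ALLOWED_COMMANDS:
        return "ERR:unknown-command"
    # A real service would look up keyid in an isolated key store here and
    # perform the operation; this sketch just acknowledges the request.
    return f"OK:{keyid}:{command}:{len(message)}"

print(handle_request("key1:sign:deadbeef"))   # accepted
print(handle_request("key1:exec:payload"))    # rejected: not whitelisted
```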


I think you misunderstand why the keys were deleted. The keys get deleted on the client when you log out. This is sensible, because if you log out on a device, you probably don't want to keep the keys around in your browser storage. When the user's session is destroyed on the server, existing clients get a 403 error and are told that their session is logged out. When that happens, they go through the normal logout routine, which involves deleting the keys on disk.

Deleting the keys isn't something the matrix.org folks explicitly had to do because of the compromise; it's simply how the riot.im client reacts when you terminate its session.


If user sessions are that important, then there's no way Matrix should be killing them and instead that behavior has to become a design and operations constraint.

Imagine if this was facebook. Or whatsapp. Or signal and this was the result. They would be crucified ( justifiably ). But for some reason we are giving Matrix a pass.


There is currently a bug open in Riot to allow users to save their keys if there was a forced logout by the server (right now, if you try to log out and don't have key backups set up Riot will warn you and ask you to set up key backups).

But your comparison with other messaging apps isn't really fair (other than "they are messaging apps"). The reason they don't have these issues is that they don't provide features that Matrix does -- and those features make it harder for Matrix to implement something as simply as others might. For example, Signal stores all your messages locally and doesn't provide a way for new devices to get your history -- Matrix doesn't store messages locally long-term, and all your devices have access to your history. In addition, there is no "log out" with Signal unless you unlink your device.

The reason why Matrix doesn't have e2e by default yet is because they want to ensure issues like this don't happen to every user.


Maybe I'm not clear -- destroying any user's data without the user's explicit authorization is unacceptable for any non-joke of a system.

If users' keys are linked to the session key then the system has to be designed in a way that the centralized session key store is protected like a pot of gold. That's a design constraint and dictates operational constraints.

> Matrix doesn't store messages locally long-term and all your devices have access to your history. In addition, there is no "log out" with Signal unless you unlink your device.

If one designs this kind of a system, one accepts the security constraints this system has. That's basic competence, or in this case a lack of it.


If Riot kept around your session keys even if you were logged out I guarantee that a similar complaint would be made about it being insecure since it leaks keys.

I would also like to point out that e2e is still not enabled by default because of issues like this. If you enable it you should know to enable key backups.

Riot has supported automatic key backups for the past few months, and if you'd used that you wouldn't have had a problem (yes it should've existed earlier but there are a lot of things for the underfunded Matrix team to deal with). And the reason it's not default is because making such a system opt-out would also make people start screaming about how Matrix is insecure because "it stores your keys on the server".

I think in many respects, the people working on Matrix are going to get criticised like this no matter what they do. I note you haven't actually suggested a specific proposal for how to fix this -- you're just going on about design constraints and how Matrix is therefore a joke system. To me that seems to be more snark than useful advice.


> Riot has supported automatic key backups for the past few months, and if you'd used that you wouldn't have had a problem (yes it should've existed earlier but there are a lot of things for the underfunded Matrix team to deal with). And the reason it's not default is because making such a system opt-out would also make people start screaming about how Matrix is insecure because "it stores your keys on the server".

Encrypt the bloody backup keys with a key derived from a passphrase selected by a user.

> I think in many respects, the people working on Matrix are going to get criticised like this no matter what they do. I note you haven't actually suggested a specific proposal for how to fix this -- you're just going on about design constraints and how Matrix is therefore a joke system. To me that seems to be more snark than useful advice.

The snark would be to say "Use Matrix. Who cares about the system not being built to deal with the design constraints"

No one should defend Matrix after this. It was not a mess up. It was an Equifax level fuckup that was totally preventable.
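The passphrase-derivation step suggested in this subthread can be sketched with just the Python standard library (parameters are illustrative, not Matrix's actual scheme; a real implementation would feed the derived key into an AEAD cipher):

```python
# Sketch of deriving a backup-encryption key from a user passphrase with
# PBKDF2 (stdlib only). Salt and iteration count are illustrative values.
import hashlib
import os

def derive_backup_key(passphrase: str, salt: bytes,
                      iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key; the same passphrase+salt always yields it."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"),
                               salt, iterations, dklen=32)

salt = os.urandom(16)  # stored in the clear next to the encrypted backup
key = derive_backup_key("correct horse battery staple", salt)
# `key` would then be fed to an AEAD cipher (e.g. AES-GCM) to encrypt the
# session keys before uploading them to the homeserver.
print(len(key))  # 32
```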


> Encrypt the bloody backup keys with a key derived from a passphrase selected by a user.

Actually the system they have is better than that. You generate a random Curve25519 private key and the public part is stored. This allows your client to upload backups of session keys without needing to constantly ask the user for their recovery password.

You can then set a password which will be used to encrypt the private key and upload it to the homeserver (but you can just save the private key yourself).

So, not only do they have a system like you proposed, it's better than your proposal.

> It was an Equifax level fuckup that was totally preventable.

I agree with you that their opsec was awful on several levels, but you're not arguing about that -- you're arguing that their protocol doesn't fit their design constraints (by which you mean that they clear keys on forced logout without prompting to enable backups if you don't have them enabled yet -- as I mentioned, there is an open bug about that, but that's basically a UI bug).

All of that said, it's ridiculous that they don't have all their internal services on an internal network which you need a VPN to access.


Yes - on their operational security.

Not the implementation of the encryption code.

Which we'll continue to use from people like www.modular.im, and whoever else springs up. As well as self-hosted servers.

Don't trust them? Host it yourself, and it's easier every day.

That's what drew me to the platform, and what will keep those serious about security, and decentralization/federation.


One of the swift steps should be to address https://github.com/matrix-org/matrix-doc/issues/1194 and https://github.com/matrix-org/matrix-doc/pull/1915 and https://github.com/matrix-org/synapse/issues/4540 properly, so other servers cannot be impacted in any way.



