Lock Up Your Customer Accounts, Give Away the Key (technologyadvice.github.io)
65 points by TomFrost on Jan 8, 2016 | 25 comments



There's a reasonable case for including internal actors in one's threat model for larger companies or ones working in extraordinarily sensitive product domains. Most startups probably don't need to prevent the team from being able to read credentials, because that's theatre when they have 15 different ways to get to any secret the company has.

We use Ansible's vault feature to decrypt a few centralized secret files onto machines at deploy time. This lets us commit the encrypted text of the files. (The source of truth for the key is in Trello, IIRC, but it could be anywhere you have to auth in as an employee to view.)
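
Concretely, the deploy-time step boils down to something like this -- a minimal sketch in Python, assuming ansible-vault is on the PATH; the filenames are hypothetical:

    import subprocess

    def read_vaulted_secrets(path="secrets.yml", password_file=".vault_pass"):
        """Decrypt an ansible-vault-encrypted file and return its plaintext.

        .vault_pass holds the key an employee fetched manually (e.g. from
        Trello); only the encrypted secrets.yml is ever committed.
        """
        result = subprocess.run(
            ["ansible-vault", "view", "--vault-password-file", password_file, path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout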

It's modestly annoying (operations like "check what changed in the secret configuration file as a result of a particular commit" are impossible) but seems like a reasonable compromise to ensure that e.g. nobody can insta-create an admin session if they happen to have a copy of our codebase and a working Internet connection.

Secrets are communicated to processes which need them in boring Linux-y ways like "file I/O" and "stuff it into an environment variable that the process has access to." If you're capable of doing file I/O or reading arbitrary memory, we're in trouble. Of course, if you can do either of those on our production infrastructure and also connect to our database, we've already lost, so I don't see too much additional gain in locking down our database password.

If you're starting from the position "I have a Rails app which has passwords in cleartext in database.yml," this is an easy thing to roll out incrementally: move the password from database.yml to ENV['RAILS_DB_PASSWORD'], spend ~5 minutes getting your deployment infrastructure to populate that from an encrypted file (details depend on your deployment infrastructure -- I am liking Ansible, a lot, for this), verify it works, then change passwords. Voila: GitHub no longer knows your database password, and your continuous integration system no longer knows your production credentials. One threat down, zero coordination required with any other system you use or any other team at the company. You can standardize on this across your entire deployment or not, your call, and it's exactly as easy to back out of as it was to get started.
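
On the app side, the read is trivial; a sketch in Python for illustration (the comment describes a Rails app, and the host and user below are hypothetical):

    import os

    def database_config():
        # Populated by the deploy tooling from the encrypted secrets file;
        # the password is never present in the repo or in CI.
        password = os.environ.get("RAILS_DB_PASSWORD")
        if password is None:
            raise RuntimeError("RAILS_DB_PASSWORD not set; check deploy config")
        return {"host": "db.internal", "user": "app", "password": password}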


> or ones working in extraordinarily sensitive product domains.

A side project I'm working on comes under that domain (medical data). By any chance, do you have any recommendations for books on this kind of stuff? I do all the usual OWASP/best-practice stuff, but most of my day job is LoB work, and while security is important there, it's not quite the same as potentially losing thousands of people's medical data.


This article confuses me. The author tears down a strawman argument about running centralized key services ("The expensive solution"), then recommends exactly such a solution in Amazon KMS.

The only plausible way this can make sense to me is if he had said, "Running your own key service is a pain; use Amazon KMS." But that's a simple service question, and it probably wouldn't have taken up as much space.


Not only that, but everything here except for the rogue-engineer question can also be solved by simply hosting these things yourself.

You don't need third-party code hosting on GitHub; just use GitLab or JIRA. You don't need some external CI service; run your own Jenkins node. Chat and email should also be internal (we use XMPP; a local Mattermost instance would be an alternative) and SSL-only.

You can do all of this with basically one Docker command per install, on your own dedicated hardware, even on a fairly underpowered machine.

And this prevents the leaking of all sorts of information, not just production database passwords. If you don't trust your engineers, you have bigger problems: as another poster pointed out, if they can modify your software to simply report the password back to them, or just log in to production and decrypt it, you're dead in the water.


I'd argue security starts with being paranoid. It's not that I don't trust the people I work with, myself included, but I can leak emails, or my computer can get hacked. Shit happens. So I start with the worst case and ask myself how to defend against any leak.

An external service's SLA can be a joke, and it's always an afterthought. Damage control falls on the customer, because the customer is the one who has to invalidate and rotate leaked credentials. So the first step for me is to have a process to invalidate credentials as often as possible.


KMS is not a centralized secret database -- it's a hosted Hardware Security Module. There is no way to store your service's secrets in it for later retrieval, unlike the solutions listed in the article. I suppose an argument could be made that it still provides a single point of failure; however, the risk KMS carries, given the SLA it provides, is far lower than what one might take on by maintaining one's own server cluster.


AWS KMS does not use HSMs. Amazon says it runs on HSAs ("hardened security appliances"), but they don't provide a lot of info on what that means. I would presume that the only thing keeping a limited number of Amazon employees from accessing your keys is policy.

I agree that using AWS KMS is the same architecture as using some other KMS that you run yourself; you just garner the benefit of their software and operational capacity, and you buy it as a service. This is the same as any other PaaS offering from Amazon or another vendor.

What's the value of Cryptex, though? Why not just store KMS-wrapped secrets in your config file and have Amazon unwrap them? Then you wouldn't be dependent on any local crypto implementation and you could use other KMS features, such as AEAD.
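
A minimal sketch of that pattern with boto3 (the key alias and context values are hypothetical); the EncryptionContext is the AEAD-style associated data KMS supports, so decryption fails unless the same context is presented:

    import base64
    import boto3

    kms = boto3.client("kms")
    CONTEXT = {"app": "my-app", "field": "db_password"}  # hypothetical

    def wrap_secret(plaintext: bytes) -> str:
        # Done once, offline; the base64 ciphertext goes in the config file.
        resp = kms.encrypt(KeyId="alias/my-app", Plaintext=plaintext,
                           EncryptionContext=CONTEXT)
        return base64.b64encode(resp["CiphertextBlob"]).decode()

    def unwrap_secret(wrapped: str) -> bytes:
        # Done at process start; KMS performs the decryption server-side,
        # so no local crypto implementation is involved.
        resp = kms.decrypt(CiphertextBlob=base64.b64decode(wrapped),
                           EncryptionContext=CONTEXT)
        return resp["Plaintext"]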


It literally says AWS KMS uses HSMs in the introductory paragraph.

https://aws.amazon.com/kms/


I'm relying on https://d0.awsstatic.com/whitepapers/KMS-Cryptographic-Detai.... There are HSMs, I guess, but they're only used to back up the keys when they're stored offline for durable backups. I hadn't seen the claim on the main page, but I'd consider it misleading, presuming that the cryptographic details whitepaper didn't totally misstate the design.


An interesting article. I'm working on a side project / long-term project that will hold medical data. The data will be self-selected (i.e. people entering their own data rather than a gov dept, etc.); however, security is #1 on my list, since frankly the idea of leaking someone's medical data (even if they opted in and agreed to the license) scares the living shit out of me.

All my side reading recently has been on writing higher-security systems across the entire stack (higher relative to my other work, where I already follow best practices). It still frightens me, but I see a real need for the side project, so I'm going to do everything I can to make it as secure as possible and take a shot.


Depending on the data you're storing, you may be responsible for HIPAA compliance. Such a thing is possible on AWS[0], but is not provided out-of-the-box.

[0]: https://aws.amazon.com/compliance/hipaa-compliance/


I'm not in the US (though I've looked at the HIPAA guidelines anyway in the course of my research); I'm in the UK and will only be storing UK data, at least initially. (I suspect there is strong demand for the idea elsewhere, but (a) I'm not planning on making huge amounts of money, and (b) supporting other countries is hard since the laws on medical data are so varied.) I spoke to friends in local government, who put me in touch with the people who deal with storing medical data for them. As long as I follow best practices, make sure that users are aware of the license terms of using the system, and behave ethically, that appears to be all that is required -- apart, of course, from obeying the rules on the DPA/PII (Data Protection Act, Personally Identifiable Information). As I'm not a public organisation, their rules don't apply, though I'm still going to follow all their guidelines anyway.

I'm still going to speak to the company solicitor, though, just for belt and braces.

Oh, and on the hosting: I won't be using any cloud services. I'll have a physical server in a state-of-the-art DC a few miles up the road that is certified to UK government standards as a provider. They tick pretty much every box I'd ask for, and though they're not cheap, I can get an insanely powerful machine, and they have a superb reputation. I'm looking at approx. 75 quid ($110) per month for a dual-core i3-4160 with 8GB RAM and 1TB of RAID storage, or £145 ($210) a month for a Xeon 1231 with 32GB RAM and 2TB of RAID storage (that one has a dual power supply, n/c), which, if it's used, isn't that expensive at all.


The recommended solution is still vulnerable to employee compromise: if they can push software that runs as a trusted role, they can steal any secrets that software has access to.


This is certainly the case; however, for an organization implementing best practices for code deployment, such a change would have to be peer-reviewed in the best case, or pushed directly to master with an obvious paper trail in the worst. It wasn't my intention to imply that employing well-designed envelope encryption shuts the door on any possibility of an engineer gaining access to secrets; clearly there's a lot more involved in making that happen. However, it goes a long way toward allowing the source of any leak to be traced, should one occur.
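
For reference, the shape of the technique -- not Cryptex's actual implementation, just a generic sketch of envelope encryption over AWS KMS using boto3 and the cryptography package, with a hypothetical key alias:

    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")

    def envelope_encrypt(secret: bytes, key_id: str = "alias/my-app"):
        # KMS issues a fresh data key; the plaintext copy is used once
        # locally and discarded, and only the KMS-wrapped copy is stored.
        dk = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
        nonce = os.urandom(12)
        ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, secret, None)
        return dk["CiphertextBlob"], nonce, ciphertext

    def envelope_decrypt(wrapped_key, nonce, ciphertext):
        # Unwrapping the data key requires a KMS call, which is logged to
        # CloudTrail -- that's where the audit trail comes from.
        key = kms.decrypt(CiphertextBlob=wrapped_key)["Plaintext"]
        return AESGCM(key).decrypt(nonce, ciphertext, None)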


Presumably, your rogue employee won't follow best practices, and in most setups there is no quality audit trail for such abuse. I think we're in agreement that this is a hard problem. In your article, that part of the problem statement is a red herring, as Cryptex doesn't solve it.


One solution we came up with was to encrypt data before it is submitted and let the user keep the private key. The private key is never transferred to our servers. (It is generated in the browser, kept by the user, and used in the browser.) http://www.jotform.com/encrypted-forms/
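
The pattern, sketched in Python for illustration (the live product does this in browser JavaScript; this just shows the shape of it, using the cryptography package):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Generated client-side; the private key never leaves the user.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"form submission", oaep)  # stored by the server
    plaintext = private_key.decrypt(ciphertext, oaep)          # only the user can do this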


I really like this solution, but it is still quite vulnerable to an inside job. To wit, if someone at JotForm wanted to, they could poison the page and recover the private key (or the data directly).

To address that, you need process isolation between the storage of the ciphertext and the manipulation and use of the cleartext. This rules out the browser, since for all intents and purposes it is not an isolated process. (You could still use the browser, but provide your tools as an extension that would, presumably, be inspected by users when it updated.)

That said, your solution takes care of a lot of other threat models, but it doesn't really protect users from you.


It does make exploitation harder (an attacker would need to change the JS), so it's good enough. There are no other usable options anyway.


Every time I see someone tout AES as the reason their encryption is secure, I want to ask: in what mode? CBC, CFB, CTR, or (the best) GCM? How is the IV generated? Are there any potential padding oracles? If they don't even understand these questions, it's obvious that AES cannot save them at all.
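
For contrast, here is what a defensible answer looks like: a sketch using AES-256-GCM from Python's cryptography package, with a random 96-bit nonce. GCM authenticates the ciphertext, so padding-oracle attacks don't apply:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # never reuse a nonce under the same key
    aesgcm = AESGCM(key)
    ct = aesgcm.encrypt(nonce, b"secret payload", b"associated data")
    assert aesgcm.decrypt(nonce, ct, b"associated data") == b"secret payload"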


I was surprised when the author all of a sudden started talking about AWS, and clicking some kind of button that creates a key.

(Besides, one would assume it has been backdoored by Amazon staffers anyway.)


Is there a basis for such an assumption?

For an organization requiring the highest available security, the ideal solution would be a privately operated hardware security module kept off the DMZ. However, that -- as well as the idea of self-hosting (and maintaining) the entire dev, test, deploy, and prod stack suggested by another commenter -- isn't always within reach of a small, agile team looking to focus on their core competencies.

One could argue that it's possible for Amazon to have falsified the description of KMS as an HSM, or the certifications[0] they were granted for it, but I'd retort that an organization in a position to seriously question those claims shouldn't be using a remote solution anyway.

So, making the more rational assumption that such claims by Amazon can be trusted, their offering is quite secure: the HSM does not allow the export of any key, and exposes only the ability to load encrypted data into the device and have it produce the decrypted result over a secure channel, and vice versa.

[0]: https://aws.amazon.com/kms/details/#compliance


I said it above, but I'll reiterate here that Amazon KMS does not use HSMs, and they don't provide a lot of detail to help you reason about what that implies for key security. (I agree that there's no reason to believe they're lying or that it's backdoored.) There's also not much discussion of where the authorization checks happen, and key operations are only as secure as the entity to whom those checks are delegated.


Re: your first line, yes: the existence of https://aws.amazon.com/govcloud-us/pricing/ -- and we know how the US Gov feels about computers.


I had the same reaction.

Looking at the docs, it looks like the master key source is pluggable, so you don't have to use Amazon's KMS... but none of the other options inspire confidence (local file, fetch from URL, plaintext password, or no password).

At the very least, I'd like to see a plugin for using a key stored on a local TPM chip -- which almost any modern bare-metal server would be equipped with.


Could this be used for online Bitcoin wallets?



