
For the "broken access controls", "cryptographic failures", and "bad design" categories, I've been working on an open source project to help mitigate those.

It's still early and I haven't released it yet, but I have the docs[0] deployed now. If anybody feels like helping us test this early, I'd love some feedback. We're going to be pushing the code live in a week or so. (We've been building it for a while now.)

I've been thinking about these problems for a while now (as a security engineer) and it's cool to see that my intuition is roughly in line with what OWASP is seeing these days. It's always hard to know if the problems you see people struggling with are representative of the industry as a whole, or if you just have tunnel vision.

Note: We're building this as a company so that we can actually afford to continue doing this full time. I'm still learning how to find the line between open source and a viable business model. Any thoughts would be appreciated[1]!

0: https://www.lunasec.io/docs/

1: email me: free at lunasec dot io



> "To understand how data is encrypted at rest with the LunaSec Tokenizer, you can check out How LunaSec Tokens are encrypted."

It seems to me that all you're doing is providing encryption-at-rest-as-a-service. Why shouldn't your clients simply skip the middle-man and encrypt the data at rest themselves (entirely avoiding the traffic and costs incurred with using your services)?

Moreover, why should clients trust you with their sensitive customer content, encryption notwithstanding? What are your encryption-at-rest practices, and how can you guarantee they are future-proof?

And finally - your API is going to be a major single-point-of-failure for your clients. If you're down, they're down. How do you intend to mitigate that?

The whole thing is full of really strange and dubious promises, like this one:

> "In the LunaSec Token crypto system, information for looking up a ciphertext and encryption key given a token is deterministically generated using the token itself. A signed S3 URL configured to use AWS's Server Side Encryption is used for when uploading and downloading the ciphertext from S3."

What if an attacker figures out how the decryption key is "deterministically" derived? This attack vector would be devastating for you actually - since you can't just change the derivation algorithm on a whim: you would need to re-encrypt the original customer content AND somehow fix the mappings between the old tokens your client keeps in their database, and the new ones you'd have to generate post changing the algorithm. This is an attack that brings down your whole concept.

Then there are issues like idempotency. Imagine a user accessing a control panel where they can set their "Display Name" to whatever they like. With your current design, it looks like you'll be generating a new record for each such change. Isn't that wasteful? What happens to the old data?

Also, what happens if your clients lose their tokens somehow? Does the data stay in your possession forever?

Lots of big holes in this plot. I suggest you get a serious security audit done as early as possible (by a reputable company) before proceeding with building this product. Some of this just reads like nonsense at the moment. CISOs (your main customers) can smell this stuff from miles away.

Good luck.


Thanks for taking the time to read through this and write some feedback for me. I sincerely appreciate it!

I wrote this post late last night, so pardon the delay with responding. Sleep happens.

> It seems to me that all you're doing is providing encryption-at-rest-as-a-service. Why shouldn't your clients simply skip the middle-man and encrypt the data at rest themselves (entirely avoiding the traffic and costs incurred with using your services)?

There is nothing stopping clients from making that call for themselves. At my previous employers, I built similar systems multiple times. In those cases, though, we always checked first for any open source solutions. At the time, none of them fit the bill, so we ended up building it in house.

Which leads into your second point about "avoiding traffic and costs". We're making this open source and something that clients can self-host themselves precisely for that reason. Other players in the "Tokenization" market aren't open source or even generally self-hostable. That's one of the key differentiators of what we're building.

> Moreover, why should clients trust you with their sensitive customer content, encryption not withstanding?

Well, they don't have to. It's open source. They can check out the code themselves. And, with the way we've designed the system, there is no "single point of failure" that results in leaking all of the data.

> What are your encryption-at-rest practices and how can you guarantee they are future-proof?

The encryption-at-rest uses AES-256-GCM which is implemented by Amazon S3. So, that part of the puzzle is well solved.

The rest of our system uses off-the-shelf crypto hashing (SHA-3). For the key derivation algorithms, we've implemented NIST SP 800-108 [0]. The key derivation is basically a cryptographically secure random number generator seeded with the output of the SHA-3 hash. We use it to generate multiple random values. I'll expand on this in the docs soon (and you'll be able to read the source code).

We're intentionally not attempting to do anything novel with actual crypto math. We're just using existing, basic primitives and chaining them together (again, in accordance with the NIST paper I linked).
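For the curious, SP 800-108's counter-mode construction is simple to sketch. This is my illustrative reading of the spec, not the actual LunaSec implementation; the labels and PRF choice (HMAC-SHA3-256) here are assumptions:

```python
import hmac
import hashlib

def kdf_counter_mode(secret: bytes, label: bytes, context: bytes, out_len: int) -> bytes:
    """NIST SP 800-108 KDF in counter mode, with HMAC-SHA3-256 as the PRF.

    Derives out_len bytes of key material from secret; label and context
    keep the different derived values independent of one another.
    """
    out_bits = out_len * 8
    blocks = []
    for i in range(1, -(-out_len // 32) + 1):  # ceil(out_len / PRF output size)
        # Fixed input per SP 800-108: [i]_4 || Label || 0x00 || Context || [L]_4
        data = (i.to_bytes(4, "big") + label + b"\x00" + context
                + out_bits.to_bytes(4, "big"))
        blocks.append(hmac.new(secret, data, hashlib.sha3_256).digest())
    return b"".join(blocks)[:out_len]

# Deriving two independent values from one secret (labels are hypothetical):
secret = b"tokenizer-secret"
token = b"lunasec-token-example"
enc_key = kdf_counter_mode(secret, b"encryption-key", token, 32)
lookup = kdf_counter_mode(secret, b"lookup-value", token, 32)
assert enc_key != lookup
```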

> And finally - your API is going to be a major single-point-of-failure for your clients. If you're down, they're down. How do you intend to mitigate that?

Well, it's open source and self-hosted. That's one of the primary goals of the system, precisely to _avoid_ this scenario. At my previous employers, when we evaluated vendor solutions, both of those were blockers to our adoption. Being beholden to a questionable vendor is a crappy situation to be in when you have 5+ 9s to maintain.

A common approach to adding "Tokenization" to apps (used by companies like VeryGoodSecurity) is to introduce an HTTP proxy with request rewriting. They rewrite requests to perform the tokenization/detokenization for you. It's simple to onboard with, but it has a ton of caveats (like them going down and tanking your app).

We've also designed this to "gracefully degrade". The "Secure Components" that live in the browser are individual fields. If LunaSec goes down, then only those inputs break. It's possible that breaks sign-ups and is also crappy, but at least not _everything_ will break all-at-once.

Finally, we've also designed the backend "Tokenizer" service to be effectively stateless. The only "upstream" service it depends on is Amazon S3, and the same is true of the front-end components. By default, Amazon S3 has 99.99% availability. We have plans to add geo-replication support that would push that to 6+ 9s of availability.
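The replication math works out if you assume the regions fail independently (an idealization, but it shows where the extra 9s come from):

```python
def combined_availability(single: float, replicas: int) -> float:
    """Availability of N independent replicas: the system is down only
    when every replica is down at once (independence assumed)."""
    return 1 - (1 - single) ** replicas

one_region = combined_availability(0.9999, 1)   # S3's stated 99.99%
two_regions = combined_availability(0.9999, 2)  # a geo-replicated pair
assert abs(two_regions - 0.99999999) < 1e-12    # eight 9s, comfortably past six
```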

> What if an attacker figures out how the decryption key is "deterministically" derived?

This is a real attack scenario, and something we've designed around. I'll make sure to write some docs to elaborate on this soon.

TL;DR though: If an attacker is able to leak the "Tokenizer Secret" that is used to "deterministically derive" the encryption key + lookup values, then they will _also_ need to have a copy of every "Token" in order for that to be valuable. And, in addition, they also need access to read the encrypted data too. By itself, being able to derive keys is not enough. You still need the other two pieces (the token and the ciphertext).
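To make the three-piece requirement concrete, here's a toy model. This is not the real scheme; the derivation inputs and the dict standing in for S3 are invented for illustration:

```python
import hashlib
import secrets

def derive(tokenizer_secret: bytes, token: bytes) -> tuple[str, bytes]:
    """Both inputs feed the derivation; neither alone yields anything useful."""
    lookup = hashlib.sha3_256(b"lookup|" + tokenizer_secret + token).hexdigest()
    key = hashlib.sha3_256(b"key|" + tokenizer_secret + token).digest()
    return lookup, key

ciphertext_store = {}  # stand-in for the S3 bucket of ciphertexts
tokenizer_secret = b"server-side tokenizer secret"
token = secrets.token_hex(16).encode()  # the opaque value the client app stores

lookup, key = derive(tokenizer_secret, token)
ciphertext_store[lookup] = b"<data encrypted under key>"

# An attacker who leaks only the tokenizer secret cannot even *find* a
# record, let alone decrypt it: the lookup address depends on the token.
```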

> You would need to re-encrypt the original customer content AND somehow fix the mappings between the old tokens your client keeps in their database, and the new ones you'd have to generate post changing the algorithm. This is an attack that brings down your whole concept.

You're right that this is a painful part of the design. The only way to perform a full rotation with a new "key derivation algorithm" is to decrypt with the old key and re-encrypt everything with the new key.

That's the nature of security. There is always going to be some form of tradeoff made.

Fortunately, there is a way to mitigate this: We can use public-key cryptography to one-way encrypt a copy of the token (or the encryption keys, or all of the above). In the event of a "full system compromise", you can use the private key to decrypt all of the data (and then re-encrypt it without rotating the tokens in upstream applications).

For that case, you would need to ensure that the private key is held in a safe place. In reality, you'd probably want to use something like Shamir's Secret Sharing to require multiple parties to collaborate in order to regenerate the key. And you'd want to keep it in a safe deposit box, probably.
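Splitting a key so that multiple parties must collaborate to reconstruct it is the textbook use of Shamir's Secret Sharing. A minimal sketch (toy field size, no side-channel hardening; a real deployment would use a vetted library):

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def split_secret(secret: int, threshold: int, shares: int):
    """Shamir's Secret Sharing: pick a random polynomial of degree
    threshold-1 with the secret as its constant term; each share is one
    point on that polynomial."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover_secret(points):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

key = secrets.randbelow(PRIME)
shares = split_secret(key, threshold=3, shares=5)
assert recover_secret(shares[:3]) == key   # any 3 of the 5 shares suffice
assert recover_secret(shares[2:5]) == key
```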

> Then, there's issues like idempotency. Imagine a user accessing a control panel where they can set their "Display Name" to whatever they like. With your current design, it looks like you'll be generating new records for each such change. Isn't that wasteful? What happens to the old data?

We intentionally chose to make this immutable because allowing mutable values opens up an entirely separate can of worms. Distributing the system becomes a much harder problem, for example, because of possible race conditions and dirty reads. Forcing the system to be immutable creates "waste", but it enables scalability. Pick your poison!

For old data, the approach we're using is to "mark" records for deletion and to later run a "garbage collection" job that actually performs the delete. If a customer updated their "Display Name", for example, the flow would be to generate a new token and then mark the old one for deletion. (We use a "write-ahead log" to ensure that the process is fault-tolerant.)
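A toy sketch of that mark-then-collect flow, using in-memory stand-ins (the real service would persist the records and the write-ahead log durably):

```python
import time

class Tokenizer:
    """Illustrative only: updates never overwrite. The old token is merely
    marked, and a later GC pass performs the actual delete."""
    def __init__(self):
        self.records = {}   # token -> ciphertext
        self.marked = {}    # token -> time it was marked for deletion
        self.wal = []       # append-only log, so a crash mid-flow is recoverable

    def update(self, old_token, new_token, ciphertext):
        self.wal.append(("update", old_token, new_token))
        self.records[new_token] = ciphertext     # immutable: write a new record
        self.marked[old_token] = time.time()     # old record is only marked

    def collect_garbage(self, grace_seconds=0.0):
        now = time.time()
        for token, marked_at in list(self.marked.items()):
            if now - marked_at >= grace_seconds:
                self.wal.append(("delete", token))
                self.records.pop(token, None)
                del self.marked[token]

t = Tokenizer()
t.records["tok-old"] = b"Display Name v1"
t.update("tok-old", "tok-new", b"Display Name v2")
assert "tok-old" in t.records          # still readable until GC runs
t.collect_garbage()
assert "tok-old" not in t.records
assert t.records["tok-new"] == b"Display Name v2"
```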

> Also, what happens if your clients lose their tokens somehow?

This is again another tradeoff of security. By removing the Tokens from the Tokenizer entirely, you gain security at the expense of additional complexity (or reduced usability). You make it harder for an attacker to steal your data by also requiring them to get their hands on tokens, but you also force yourself to not lose access to your tokens in order to read data. It becomes very important to take backups of your databases and to ensure that those backups can't easily be deleted by an attacker.

This is mitigated with the "token backup vault using public-key" strategy I outlined above. But if you somehow lost those keys, then you'd be in a bad spot. That's the tradeoff of security.

> Does the data stay in your possession forever?

It's self-hosted by default. (Well, technically Amazon S3 stores the data.)

We may eventually have a "SaaS" version of the software, but not right away. When we do get there, we'll likely continue relying on S3 for data storage (and we can easily configure that to be a client-owned S3 bucket).

> I suggest you guys to get a serious security audit done as early as possible (by a reputable company) before proceeding with building this product.

It's on the roadmap to get an independent security review. At this point in time, we're relying on our shared expertise as Security Engineers to make design decisions. We spent many months arguing about the exact way to build a secure system before we even started writing code. Of course, we can still make mistakes.

We have some docs on "Vulnerabilities and Mitigations" in the current docs[1]. We need to do a better job of explaining this, though. That's where feedback like yours really helps us -- it's impossible for us to improve otherwise!

> Some of this just reads like nonsense at the moment.

That's on me to get better at. Writing docs is hard!

Thanks again for taking the time to read the docs and for the very in-depth feedback. I hope this comment helps answer some of the questions.

We've spent a ton of time trying to address possible problems with the system. The hardest part for us is conveying that properly in the docs and building trust with users like you. But that's just going to take time and effort. There is no magic bullet except to keep iterating. :)

Cheers!

0: https://csrc.nist.gov/publications/detail/sp/800-108/final

1: https://www.lunasec.io/docs/pages/overview/security/vulns-an...


Can you explain how LunaSec limits the creation of read grants?


We have a few different strategies for this. You can read through the "levels" here[0]. (We need to expand on this doc still)

Level 1: There are no grants.

Level 2: Access requires a "shared secret" in order to authenticate to the Tokenizer. If you have the secret, API access to the Tokenizer, and a copy of a token, then you can create a grant. In order to use the grant, you also need a valid session for the front-end, but if you have RCE on the back-end then you can get that pretty easily.

Level 3: Creating grants also requires presenting a JWT that's signed by an upstream "Auth Provider" that also proxies traffic. This JWT is only able to create grants that are scoped to a specific session (which is identified using a "session id" inside of the JWT).

You can still create a grant for every token you have access to, but you need a valid session to do so. In this design, the proxy strips the cookies from the request and only forwards the JWT, which adds another step to the attack (you have to be able to log in from a browser).

This requires that you put your "Root of Trust" into your authentication provider, so you would want to "split" out your authentication/session creation into another service. We have an example app + tutorial explaining this that we'll publish soon.

Level 4: You write a separate function,called a "Secure Authorizer", that accepts a session JWT and a Token in order to "authorize" that a grant can be created for a given user.

This function is deployed in a hardened container and is difficult to attack (a network restricted Lambda).

By adding this layer, an attacker now has to be able to generate sessions for any user they want to leak data from, or they have to attack the "Secure Authorizer" itself. It's a much more painful attack to pull off once you've integrated all of these layers.
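To illustrate the Level 3 session-scoping check, here's a toy sketch. The HMAC-signed "JWT", key name, and claim layout are invented stand-ins for a real JWT library and the auth provider's actual signing key:

```python
import hmac
import hashlib
import json
import base64

AUTH_PROVIDER_KEY = b"auth-provider-signing-key"  # hypothetical shared key

def sign_session_jwt(session_id: str) -> str:
    """Stand-in for the auth provider: an HMAC-signed claim set. (A real
    deployment would use a proper JWT library and likely asymmetric keys.)"""
    payload = base64.urlsafe_b64encode(json.dumps({"sid": session_id}).encode())
    sig = hmac.new(AUTH_PROVIDER_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def create_grant(jwt: str, token: str, session_id: str):
    """Level-3-style check: a grant is only created if the JWT verifies
    AND its session id matches the session requesting the grant."""
    payload, sig = jwt.rsplit(".", 1)
    expected = hmac.new(AUTH_PROVIDER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["sid"] != session_id:
        raise PermissionError("grant not scoped to this session")
    return {"token": token, "session": session_id}

jwt = sign_session_jwt("session-123")
grant = create_grant(jwt, "lunasec-token-abc", "session-123")

# A stolen JWT cannot mint grants for a different session:
try:
    create_grant(jwt, "lunasec-token-abc", "session-456")
except PermissionError:
    pass
```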

Does that answer your question? I'll make sure to add this explanation to that levels page.

Oh, and thanks for reading the docs! :)

0: https://www.lunasec.io/docs/pages/overview/security/levels/



