
We successfully used a relationship-based authorization system based on the Zanzibar paper at my last job, building a B2B SaaS that leaned heavily on cross-company integration.

The flexibility in defining rules through tuples helped us iterate rapidly on new product features. We used self-hosted Ory Keto [0] instances as the implementation, though we would have preferred a managed solution. We were checking out Auth0 Fine Grained Authorization [1] but it was still in Alpha back then.

[0]: https://www.ory.sh/keto/
[1]: https://auth0.com/developers/lab/fine-grained-authorization
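
To make the tuple model concrete, here is a minimal in-memory sketch in Go of Zanzibar-style relation tuples and a check. This is illustrative only, not Ory Keto's actual API; the Store type and the object/subject strings are made up, and a real implementation would also guard against cyclic tuples:

    package main

    import (
        "fmt"
        "strings"
    )

    // A Zanzibar-style relation tuple: object#relation@subject.
    type Tuple struct {
        Object   string // e.g. "doc:readme"
        Relation string // e.g. "editor"
        Subject  string // "user:alice", or a userset like "group:eng#member"
    }

    type Store struct{ tuples []Tuple }

    func (s *Store) Write(t Tuple) { s.tuples = append(s.tuples, t) }

    // Check answers "does subject have relation on object?", following
    // indirect userset subjects one hop at a time (no cycle protection).
    func (s *Store) Check(object, relation, subject string) bool {
        for _, t := range s.tuples {
            if t.Object != object || t.Relation != relation {
                continue
            }
            if t.Subject == subject {
                return true
            }
            if obj, rel, ok := strings.Cut(t.Subject, "#"); ok && s.Check(obj, rel, subject) {
                return true
            }
        }
        return false
    }

    func main() {
        s := &Store{}
        s.Write(Tuple{"doc:readme", "editor", "group:eng#member"})
        s.Write(Tuple{"group:eng", "member", "user:alice"})
        fmt.Println(s.Check("doc:readme", "editor", "user:alice")) // true
    }

The nice property, and presumably what made iteration fast, is that a new product feature usually just means writing different tuples, not changing the check logic.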




> though we would have preferred a managed solution

We completely agree here, which is why we initially started out with our managed cloud offering, Warrant Cloud[1]. While Zanzibar is powerful, operating it with solid latency/availability can be quite challenging.

[1] https://warrant.dev/


So how do you manage filtering of a billion records?


Can anybody explain to me why there seems to be so much focus on scalability in this context? We have 8 billion people; if the whole planet registered, a home PC could handle it, and it partitions beautifully if necessary in the authentication case. So what am I missing?


Forget about 8B people in this context. If you have 1,000 microservices in the company and each serves 100 rps, you are looking at roughly 100k rps to a Zanzibar-style system to authorize every request (authorize, not authenticate a user).


Why does it need to be checked on a per-request level?

I'd expect you to be able to give short-lived capability tokens to clients that each machine can verify down the stack without making new rpcs. This would avoid the fan-out of all the internal services.

Is it just to prevent abuse?
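
To make the idea concrete, here is a hedged sketch of such a short-lived capability token in Go, using ed25519 so services down the stack can verify locally with just the public key. The Capability fields and wire format are invented for illustration; a real deployment would more likely use JWTs or macaroons:

    package main

    import (
        "crypto/ed25519"
        "encoding/json"
        "errors"
        "fmt"
        "time"
    )

    // Capability is an illustrative grant: subject may perform action
    // on resource until exp. Keeping exp short bounds revocation lag.
    type Capability struct {
        Subject  string `json:"sub"` // e.g. "user:1"
        Resource string `json:"res"` // e.g. "doc:42"
        Action   string `json:"act"` // e.g. "edit"
        Expires  int64  `json:"exp"` // unix seconds
    }

    // mint runs once at the edge, e.g. after one central authz check.
    func mint(priv ed25519.PrivateKey, c Capability) (payload, sig []byte) {
        payload, _ = json.Marshal(c)
        return payload, ed25519.Sign(priv, payload)
    }

    // verify runs locally at every hop; no RPC to a central authorizer.
    func verify(pub ed25519.PublicKey, payload, sig []byte) (*Capability, error) {
        if !ed25519.Verify(pub, payload, sig) {
            return nil, errors.New("bad signature")
        }
        var c Capability
        if err := json.Unmarshal(payload, &c); err != nil {
            return nil, err
        }
        if time.Now().Unix() > c.Expires {
            return nil, errors.New("expired")
        }
        return &c, nil
    }

    func main() {
        pub, priv, _ := ed25519.GenerateKey(nil) // nil reader = crypto/rand
        payload, sig := mint(priv, Capability{
            Subject: "user:1", Resource: "doc:42", Action: "edit",
            Expires: time.Now().Add(2 * time.Minute).Unix(),
        })
        c, err := verify(pub, payload, sig)
        fmt.Println(c, err)
    }

The trade-off, as the reply below notes, is revocation: a minted token stays valid until it expires, so the expiry window is your revocation latency.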


You can encode capabilities/permissions as scopes in distributed tokens (e.g. OAuth) but this can start to break down if you have very granular, fine-grained permissions (e.g. user:1 has 'editor' access to 1000s of documents/objects). This is similar to the problem that Carta ran into while building out their permissions[1].

In addition, yes: validating permissions on each request means you can revoke privileges with immediate effect, without needing a token to be invalidated first.

[1] https://medium.com/building-carta/authz-cartas-highly-scalab...
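
A quick back-of-the-envelope in Go shows where per-object scopes hit the wall (the "doc:N#editor" scope syntax here is made up for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    // If user:1 is an editor on N documents and each grant becomes a
    // scope string, the token payload grows linearly with N.
    func main() {
        var scopes []string
        for i := 0; i < 5000; i++ {
            scopes = append(scopes, fmt.Sprintf("doc:%d#editor", i))
        }
        payload := strings.Join(scopes, " ")
        fmt.Printf("%d scopes -> %d bytes before signing/base64\n",
            len(scopes), len(payload))
        // Tens of KB here, versus typical ~8 KB header limits; a central
        // check(object, relation, subject) stays constant-size instead.
    }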


I think it's best to refer to the Zanzibar paper: https://www.usenix.org/system/files/atc19-pang.pdf


... or the annotated one from the Authzed folks https://authzed.com/zanzibar


Wow, I'm also impressed by the tech behind it! https://github.com/authzed/zanzibar-annotated


That's neat! All papers should allow discussion like this.

BTW, didn't Google release something like this earlier, too?


Does the token identify every resource you have access to? I think this is for multi-tenant applications with fine-grained access control.


"Oh, you just [insert complex solution here]"

You need one capability token per principal and resource (and perhaps per access right).


This isn't meant to invalidate what you're saying, but this whole thread reads like a parody to me. 1000 services all making requests to Zanzibar, and this oreo keto thing.

Makes me think of this: https://m.youtube.com/watch?v=y8OnoxKotPQ


You could say the same about S3, RDS, BigTable, Spanner, Firestore, etc. I feel like the engineering orthodoxies (monoliths vs. microservices, "no god services") tend to break down for a lot of these important, high-scale stateful facilities; every monolith I've seen accesses a remote DB, every microservice tends to access a DB, and those DBs are themselves monoliths.


Pure gold.


Thanks.


Glad to see that you used Ory Keto! :)

Ory now has a managed service offering for Ory Keto as well!



