Isn't the "problem" with this that you can't get arbitrary websites to talk to your OAuth2 server? For example, even though, say, GitLab supports OAuth2 login with GitHub, I can't get it to authenticate with stavros.io.
This is the problem Portier (https://portier.github.io/) and OIDC aim to solve, ie to be able to auth on any website with an auth instance you run.
I love this idea because it's much easier to secure one thing whose sole purpose is authentication than to secure every thing you want to authenticate on.
I'm not sure if I understood you correctly. Delegation of authentication usually implies trust between the two parties. GitLab (the hosted version) probably does not trust stavros.io enough to allow people to log in through there.
Portier does indeed look very nice; maybe I'll write a tutorial on how to get those two working together for full authentication (Portier) + authorization (Hydra) using only open-source technology.
Why does GitLab have to trust anyone? It's the user that has to trust stavros.io not to tell GitLab that other people are authorized.
It's no different from a regular email/password setup (with password recovery): if I register with user@stavros.io, then that email server becomes empowered to give access to the GitLab account to anyone it wants. But that's not GitLab's problem.
Exactly, and OpenID Connect adds an authentication layer over OAuth2 for this exact purpose. If we manage to build that future, it will be very useful and quite exciting, at least to me. There won't be compromised passwords scattered across every site any longer, just the one password you can easily change.
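To make that concrete, here's a rough sketch of what that authentication layer looks like from a client app's side, using the coreos/go-oidc library; the issuer URL, client ID and raw ID token are placeholders:

    package main

    import (
        "context"
        "fmt"
        "log"

        oidc "github.com/coreos/go-oidc"
    )

    func main() {
        ctx := context.Background()

        // Discover the provider's endpoints and signing keys from its
        // /.well-known/openid-configuration document.
        provider, err := oidc.NewProvider(ctx, "https://auth.example.com")
        if err != nil {
            log.Fatal(err)
        }
        verifier := provider.Verifier(&oidc.Config{ClientID: "my-client-id"})

        // The raw ID token comes back in the "id_token" field of a normal
        // OAuth2 code exchange; "eyJ..." is a placeholder.
        idToken, err := verifier.Verify(ctx, "eyJ...")
        if err != nil {
            log.Fatal(err) // bad signature, issuer, audience, or expiry
        }

        var claims struct {
            Email string `json:"email"`
        }
        if err := idToken.Claims(&claims); err != nil {
            log.Fatal(err)
        }
        fmt.Println("authenticated:", idToken.Subject, claims.Email)
    }

The point is that the ID token is a signed assertion about who authenticated, which a plain OAuth2 access token deliberately is not.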
You are ten years too late. The original OpenID did exactly this, and quite a few sites (especially tech-focussed ones) let you sign in with it. Except then along came Google and Facebook with their proprietary login systems, and everyone jumped ship to those, as they offered access to profiles rather than just a domain and possibly an email address.
We first worked on this problem at Netscape just after the AOL acquisition in 1998. It turns out to be impossible for one reason: show me the money. Something we figured out within a few weeks back then.
Please elaborate on how DNS has failed? It seems to me that everyone uses DNS all the time and that it is an essential component of the Internet as we know it, but you and I may have differing notions of failure.
That's not going to stop people from using those exact values. How many breaches have we seen due to the lack of sane defaults? Tutorial code (especially when written by the people putting out the software) is a default, and it's likely to result in plenty of people running this exact code in production.
That is a valid concern. However, once a password is published (especially in docs or tutorials) it is insecure whether it was a random value or not, simply because it is public and clearly linked to the product you're running.
That's why I chose to make it explicit, so that it's more likely to be caught in review if it does get copied.
Yes, it just creates an array of the decimal values for ASCII A-Za-z0-9. By default Get-Random just returns a random unsigned int, but if you pass in an array of objects it will select a random object from the array.
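For the non-PowerShell folks, a rough Go equivalent of that idea, though drawing from crypto/rand, since secrets should come from a cryptographic RNG rather than a general-purpose one:

    package main

    import (
        "crypto/rand"
        "fmt"
        "math/big"
    )

    // randomSecret draws n characters uniformly from A-Za-z0-9.
    func randomSecret(n int) (string, error) {
        const charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
        out := make([]byte, n)
        for i := range out {
            idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
            if err != nil {
                return "", err
            }
            out[i] = charset[idx.Int64()]
        }
        return string(out), nil
    }

    func main() {
        s, err := randomSecret(32)
        if err != nil {
            panic(err)
        }
        fmt.Println(s)
    }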
One solution could be for the app/service to have some dummy passwords built in which you can use in tutorials, but which it will refuse to accept if actually supplied.
Good point. Hydra does this too for things like missing TLS encryption, but not yet for secrets (it only rejects secrets that are too short). I've tracked this here: https://github.com/ory/hydra/issues/573
It seems like the simplest thing, then, would be to use a secret that is too short in your documentation and make note of both the fact that it's too short and what the requirements are for a good secret.
Of course, this runs the risk that a user will simply "salt" the sample you provide up to the necessary length, which makes the effective length of their secret only the difference between the minimum length and the sample length.
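As a sketch of that combined check; the documented sample value and the minimum length here are made up for illustration, not Hydra's actual rules:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    // Hypothetical sample secret published in the docs; real software would
    // ship the exact strings its tutorials use.
    var documentedSecrets = []string{"some-consent-app-secret"}

    const minSecretLength = 16

    // validateSecret rejects secrets that are too short, and secrets that
    // contain a documented sample (whose effective entropy is only whatever
    // padding was added around the sample).
    func validateSecret(secret string) error {
        if len(secret) < minSecretLength {
            return fmt.Errorf("secret must be at least %d characters", minSecretLength)
        }
        for _, s := range documentedSecrets {
            if strings.Contains(secret, s) {
                return errors.New("secret contains a value published in the documentation")
            }
        }
        return nil
    }

    func main() {
        fmt.Println(validateSecret("short"))                       // too short
        fmt.Println(validateSecret("some-consent-app-secretXXXX")) // padded doc sample
    }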
1) admin accounts are generated on startup with random passwords. These are printed on the console on the first startup only.
2) in a configuration format I use, there is a special keyword for using a random string as a value which is different each time the config is parsed.
I think any secure value should be either generated for you or random on each startup unless specified otherwise. It would greatly improve the security of some projects out there.
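A minimal sketch of pattern 1); the marker-file path is hypothetical, and a real service would store a hash of the password rather than only printing it:

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "fmt"
        "os"
    )

    func main() {
        const marker = "/var/lib/myapp/.initialized" // hypothetical path

        // Not the first startup: stay silent.
        if _, err := os.Stat(marker); err == nil {
            return
        }

        // First startup: generate and print a random admin password once.
        buf := make([]byte, 16)
        if _, err := rand.Read(buf); err != nil {
            panic(err)
        }
        fmt.Println("admin password (shown only on first startup):", hex.EncodeToString(buf))

        // ...store a hash of the password, then drop the marker file...
        if err := os.WriteFile(marker, nil, 0o600); err != nil {
            panic(err)
        }
    }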
Is "don't run Postgres in Docker in production" still a thing? What kind of problems are people running into? I love having everything managed using the same deployment techniques, and I was on a team that ran Postgres in Docker in production for a couple of years and never ran into trouble. Of course, one anecdote doesn't mean anything, so I am curious.
How did you handle upgrading between major Postgres versions with Docker, given that the container only has one version (you need both the old and new binaries for a major version upgrade)?
It's advice born from people who don't quite understand Docker. Postgres in containers just needs a data volume on a dedicated drive for speed, and you need to remember that Docker's default stop behaviour hard-kills after 10s, which is not long enough for a clean shutdown (send SIGTERM yourself with docker kill and wait, or raise the timeout with docker stop -t).
I have to imagine it's about how badly a crash would affect you. If you can make hourly backups and the data loss is not a huge deal, I don't see why you wouldn't; but for more important stuff, just go the extra mile and get some VMs.
Why would you have data loss? Why would crashes be worse than when Postgres is not run in a Docker container?
In our project we ran a master/slave Postgres setup and hourly backups. I don't fully remember the details, but as far as I know that's as safe as you can get regardless of Docker.
It might seem obvious to use volumes and maybe even a cron job to back up your data, but a large number of the instructions you find on Docker Hub don't mention it. In any case, if you blindly copy and paste those instructions, you can end up with data loss.
> Why would crashes be worse than when Postgres is not run in a Docker container?
Because if you use Docker wrong, the state of your system will live inside the container and will be flushed if you reboot your machine.
If you don't use Docker, the state of your system will always remain on your disk until the disk crashes.
> In our project we ran a master/slave Postgres setup and hourly backups. I don't fully remember the details, but as far as I know that's as safe as you can get regardless of Docker.
It's working great for me too, but the point is that if you do it wrong, you'll burn yourself. I guess that's the same for any piece of tech, but behind its apparent simplicity it's quite easy to make a mistake with Docker, and if you do, say goodbye to your data.
> ...state of your system will live inside the container and will be flushed if you reboot your machine.
Nope, the state of the container will survive until you remove the container. Reboots DO NOT refresh a container (you can have the docker daemon automatically bring your containers back up on reboot).
Worth adding to this that WordPress has been working on an official OAuth 1.0a server plugin for a while, alongside the development of the new REST API. It works well; I've built an iOS app on the back of it. (1.0a was chosen due to WP not wanting to enforce HTTPS.) https://en-gb.wordpress.org/plugins/rest-api-oauth1/
I've also seen discussion of an official OAuth2 server plugin, what with the rapid increase of HTTPS sites thanks to the likes of Let's Encrypt.
SAML doesn't get enough love. Really big companies want to use your software, but they need to use their existing SAML-compatible identity provider. When you don't support it, they'll move on to someone who does.
Conversely, if your organization has a SAML-compatible IdP you get to work with a vast sea of compatible software without really having to think about the integration. Exchange metadata URLs, maybe some URL templates, and you're done.
Indeed, it's excellent. I'm using it with an LDAP backend for my company's internal infra. Unfortunately it doesn't support U2F as a second factor yet, just TOTP codes. That's the only critique of Keycloak I can think of, so it's pretty damn good.
Last time I looked at Gluu it seemed massive, requiring a beefy server dedicated to running it. It came with its own LDAP server etc., meaning I seemingly couldn't use my own. I'll have to revisit it, but last I checked it was way too much.
Bookmarked, thanks. I'll be sure to try Gluu again and give it its own server if U2F becomes a requirement, or if I need any of the other features it offers.
I've been jumping between Hydra and Dex for the last couple of weeks. On the one hand, I like the tight focus Hydra has, with the exception of the warden API. On the other hand, it is really involved to simply set up a working environment with Hydra ready to go. It would be nice to do all the token, client and policy setup with a simple docker-compose up.
Dex, for example, has a dev mode that does that for you. The downside of Dex is that you cannot use your own backend without forking the project, writing your own login page and creating a custom connector for your existing login system.
Thank you for the valuable feedback! The dev mode is indeed a very good idea - I'll probably spin up another docker-compose example with all the default things set up. Would that make it easier?
Yes, definitely. Although you could build a new image based off the original one, add a bash script that sets it all up for you, and overwrite the entrypoint, I never liked that solution. It should be something the software supports out of the box, as it plays into one of Docker's strengths: easily spinning up and tearing down instances.
It's basically a login with username and password.
If you want a fully fledged identity provider on top of OAuth2 (so create/update user account, password reset), I have a sample project which extends the oauth2 server repository and builds a full identity provider on top of it: https://github.com/RichardKnop/example-api
About dependencies: only two are required, etcd/consul and Postgres. There are no other requirements.
Originally I developed this project while deploying to a CoreOS cluster, so etcd was a natural choice for storing app configuration in a distributed key store. Consul support was added later in the form of a contribution, as an alternative to etcd.
I also want to remove the dependency on etcd/consul completely and allow simple configuration via environment variables, to make the project more portable.
No, it doesn't have things like user registration, password reset flow, etc. I wanted to keep the project as a straight OAuth2 server based on the spec, nothing more.
I have another project which I sometimes use as a boilerplate when working on ideas and I need a simple API for prototyping. It contains all those things such as registration, password reset flow, etc.: the example-api linked above.
The go-oauth2-server contains simple web forms (which you can style to match your UI) to handle the full authorization code and implicit flows of OAuth2. You would connect to the oauth2 server from your app, log in, and be redirected back to the app with an authorization code; the app can then obtain access and refresh tokens from the oauth2 server via an API call.
This is the normal authorization flow people are used to from Facebook/GitHub/LinkedIn; it works the same way. See the README for images of how the forms look out of the box, without any customization.
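From the client application's side, that flow looks roughly like this with golang.org/x/oauth2; the endpoint paths, client credentials, scope and redirect URL are placeholders for whatever your deployment exposes:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/oauth2"
    )

    func main() {
        conf := &oauth2.Config{
            ClientID:     "test_client_1",
            ClientSecret: "test_secret",
            RedirectURL:  "https://myapp.example.com/callback",
            Scopes:       []string{"read_write"},
            Endpoint: oauth2.Endpoint{
                AuthURL:  "https://oauth.example.com/web/authorize",
                TokenURL: "https://oauth.example.com/v1/oauth/tokens",
            },
        }

        // Step 1: send the user to the server's login/consent form.
        fmt.Println("visit:", conf.AuthCodeURL("random-state-string"))

        // Step 2: after login the user is redirected back with ?code=...;
        // exchange it for access and refresh tokens ("code-from-redirect"
        // is a placeholder).
        token, err := conf.Exchange(context.Background(), "code-from-redirect")
        if err != nil {
            panic(err)
        }
        fmt.Println("access token:", token.AccessToken)
        fmt.Println("refresh token:", token.RefreshToken)
    }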
If you want an in-app login system, the usual way I have implemented this before is to have a separate frontend layer. It works something like this (see the sketch after this list):
1) Frontend (mobile/web app) displays login form
2) Enter username and password
3) Use the resource owner password credentials grant to obtain an access token via an API call
4) Now you can make authenticated API calls with the access token (and use refresh token in the background to renew your access token)
In the case of a web application frontend (say, a NodeJS app), the app would store the client ID and secret server-side, so you would proxy all requests from the client app through the Node proxy, because you don't want to keep the client ID and secret in public JS.
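A hedged sketch of steps 2-4 with golang.org/x/oauth2; the token URL and client credentials are placeholders again, and this code belongs server-side (e.g. in that Node-style proxy), never in public JS:

    package main

    import (
        "context"
        "fmt"

        "golang.org/x/oauth2"
    )

    func main() {
        ctx := context.Background()
        conf := &oauth2.Config{
            ClientID:     "test_client_1",
            ClientSecret: "test_secret",
            Endpoint: oauth2.Endpoint{
                TokenURL: "https://oauth.example.com/v1/oauth/tokens",
            },
        }

        // Steps 2-3: exchange the username/password from the login form for
        // tokens via the resource owner password credentials grant.
        token, err := conf.PasswordCredentialsToken(ctx, "user@example.com", "hunter2")
        if err != nil {
            panic(err)
        }

        // Step 4: make authenticated API calls; the TokenSource transparently
        // uses the refresh token to renew the access token when it expires.
        client := oauth2.NewClient(ctx, conf.TokenSource(ctx, token))
        if _, err := client.Get("https://api.example.com/v1/me"); err != nil {
            panic(err)
        }
        fmt.Println("access token:", token.AccessToken)
    }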
Just in addition to my answer above: yes, there is a way to log in in my project. See the README, which showcases the built-in web forms.
The database contains a simple table storing usernames and passwords for the resource owner password credentials grant.
There is no API for registering a new user account though which is what I meant.
You can do that manually by running a SQL statement to insert a new username and password, or by using the CLI and loading it from fixtures.
How you handle registering user accounts, updating user data and resetting passwords I wanted to leave open to implementation, as there are various ways this can be done and different people might prefer one over another, so I didn't want to prescribe a specific approach.
I offer my preferred implementation using JSON HAL in my extending project I mentioned above. If anybody is interested, they can still fork my example-api and customize that.
I'm no security expert, but to my understanding the ROPC grant makes sense for highly privileged applications, i.e. first-party clients (e.g. the main app website, the main native iOS client), as explained by http://oauthlib.readthedocs.io/en/latest/oauth2/grants/passw...
I've been looking around in this space for OAuth and out-of-the-box auth alternatives. I've tried Kong's OAuth2 plugin (https://getkong.org/plugins/oauth2-authentication/), but after trying to integrate it, it felt like I had to write more code than necessary. I also had to configure a lot of APIs, and it felt clunky to manage them that way.
I have also tried playing with http://anvil.io, but the authors are busy with another project (https://solid.mit.edu), so Anvil is taking a back seat. Even the Getting Started guide currently has known unfixed issues.
I am heavily investigating http://www.keycloak.org/, and so far I am really impressed. To deploy it, however, you will need to delve into Wildfly/Java configuration. And of course, a minimum of 512 MB of memory to run any Java app on a node.
Dex is also advertised as a solution, but it looks like the documentation could do with more information and improvements (https://github.com/coreos/dex). It doesn't seem easy to just take and run.
Thanks to comments here, I might try looking at these next:
PSA is a collection of authentication clients for authenticating with third-party auth providers (e.g. Google, Facebook, Microsoft). If you want to run your own auth provider server, you will need another library. We use Django OAuth Toolkit (https://github.com/evonove/django-oauth-toolkit) at edX.
If your sole purpose is authentication without authorization, you could use securelogin.pw, which does not depend on an identity provider. And btw, the OAuth2 spec is insecure by design; it's a known fact.
> btw, the OAuth2 spec is insecure by design; it's a known fact.
OAuth2 is only "insecure" in that it relies on TLS for its security, the same as HTTP, IMAP or SMTP. You should never run OAuth2 over a non-HTTPS (i.e. plain HTTP) connection. The same is true for any other login system.
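As an illustration, a minimal Go sketch of enforcing that rule: serve only over TLS and refuse any request that somehow arrives without it, which matters when the process also listens on plain HTTP or sits behind a misconfigured proxy. The paths, port and certificate files are placeholders:

    package main

    import (
        "log"
        "net/http"
    )

    // requireTLS refuses requests that did not arrive over TLS instead of
    // silently serving tokens on a plaintext connection.
    func requireTLS(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.TLS == nil {
                http.Error(w, "OAuth2 endpoints require HTTPS", http.StatusUpgradeRequired)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/oauth2/token", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("token endpoint\n")) // placeholder handler
        })
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", requireTLS(mux)))
    }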
That is a really bad specification with no examples, no formalization, and zero references.
However, none of the server-side attack scenarios listed there are possible with Hydra. Some of them also boil down to misusing OAuth2 for authentication, which is exactly why we have OpenID Connect.