To combine the other comments and add some more depth:
HTTPS runs over port 443, and port 80 is normally for HTTP only.
Most people set up their web servers to serve HTTPS over 443 and redirect port 80 to 443, so anyone who sends something to `http://example.com` automatically gets forwarded to `https://example.com` (and the browser can then be told, via HSTS, to use ONLY the HTTPS version from that point forward).
This is a generally "acceptable" tradeoff between usability and security.
But when you control the API and the client, you can know that you will never send credentials over HTTP, only HTTPS. So you should have nothing coming over port 80.
Then, with a simple script, you could set it up to accept everything on port 80; if a request includes a token of some kind from your app, it logs the request and marks the token in your database as "expired" so it can no longer be used anywhere (even on port 443).
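A minimal sketch of what such a script could look like, assuming a hypothetical `expire_token()` helper that marks the token as expired in your database (the names and handler here are illustrative, not a production design):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def extract_bearer_token(auth_header):
    """Pull the token out of an 'Authorization: Bearer <token>' header."""
    if auth_header and auth_header.startswith("Bearer "):
        return auth_header[len("Bearer "):]
    return None

class Port80Honeypot(BaseHTTPRequestHandler):
    """Accepts everything on port 80, logs it, and kills any leaked token."""
    def do_GET(self):
        token = extract_bearer_token(self.headers.get("Authorization"))
        if token:
            self.log_message("plaintext token seen on port 80: %s", token)
            expire_token(token)  # hypothetical: mark token expired in the DB
        self.send_response(403)  # never serve anything real over plain HTTP
        self.end_headers()

# HTTPServer(("", 80), Port80Honeypot).serve_forever()
```

Anything that hits this listener is, by definition, either a mistake or an attack, which is exactly why logging it is so valuable.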
There's a kind of attack sometimes called "SSLStrip" which can block a victim's requests to the HTTPS version of a site and, in many cases, trick the client into falling back to the HTTP version when HTTPS fails. This setup would not only stop that attack, but also log it and instantly expire any tokens sent that way, so the attacker (who saw the tokens when the client tried to send them) can't use them.
There are other reasons too. It will notify you of any developer mistakes that send credentials over HTTP (it's a one-letter difference, and despite best efforts these things slip into a codebase as typos or a dev just not thinking). It also helps you tell "scanning" traffic from "normal" traffic: since nothing in your app will ever talk to port 80, everything on port 80 is "bad" and can give you clues about what attackers are trying on your network.
And the best part is that it's easy! I could probably throw something together for our API servers that does this in a day or so start-to-finish. That's a REALLY good ratio between time spent and security benefit that you normally don't get!
Requests sent to port 80 will (usually) be unencrypted HTTP traffic, revealing the secret token to anybody listening between the client and the server. Someone may accidentally send an HTTP request either by typo or out of lack of knowledge.
Isn't JWT also a type of bearer token? Could you please provide some more detailed arguments about why JWT shouldn't be used other than linking its wikipedia article?
JWT is fine if "revoke" isn't in your vocabulary for the service. If you do need to revoke tokens, JWT becomes a racy contraption that requires synchronizing and looking up state on every request, the avoidance of which was the main reason to use JWT in the first place.
If you use a realtime transport like WebSockets, you could keep automatically re-issuing a fresh JWT with a very short (e.g. one minute) expiry every 50 seconds; just push them at an interval to authenticated clients. That way your banning mechanism would only have a one minute a delay. No need to revoke tokens; just let them expire.
In such a system, a user would be logged out within one minute of closing the connection... Probably good enough for online banking. In a way, this is safer than standard sessionId-based auth because once you've issued the token, you don't need to worry about scenarios where the user suddenly goes offline.
There are very few systems that need banning with down-to-the-millisecond accuracy.
What you’ve described is a workflow that will technically work. But I’m weakly confident it’s still suboptimal for most use cases.
How will you deal with the usability problem of expiring all sessions for users who are offline (closed tab, spotty internet connection, etc.) for at least 50 seconds? You really do need to decide how to interpret a user being temporarily offline.
You could just accept this as working as intended, but you don’t need to accept it if you just use normal session IDs. Is your application really so latency sensitive that it can’t tolerate a DB lookup? What is an example workflow in which users on a web or mobile application cannot be tolerably authenticated with standard DB lookups?
You can also use a refresh token, but that brings you back to the revocation problem, just with longer-lived tokens. Likewise, there is a material difference between millisecond revocation and sub-minute revocation, and there are good reasons to care about sub-minute revocation.
You don't need JWT in this case. You can use a normal token with short expiry and some mechanism to keep it fresh as long as the user doesn't exit the application.
JWT is simpler to implement and more scalable than the sessionId approach, so why would you use the more complex solution to get an inferior result?
With JWT, you only need to do a single database lookup when the user logs in with their password at the beginning... You don't need to do any other lookup afterwards to reissue the token; just having the old (still valid but soon-to-expire) JWT in memory is enough of a basis to issue a fresh token with an updated expiry.
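To make the verify-then-reissue step concrete, here is a hand-rolled HS256 sketch, stdlib only so the example is self-contained; in practice you'd use a vetted JWT library, and the secret and TTL values here are purely illustrative:

```python
import base64, hashlib, hmac, json, time

SECRET = b"server-side-signing-key"  # illustrative; keep real keys out of code

def _b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64url(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def mint_jwt(claims, ttl=60):
    """Sign a short-lived token carrying the claims plus an expiry."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(dict(claims, exp=int(time.time()) + ttl)).encode())
    sig = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(),
                           hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token):
    """Return the claims if the signature is valid and unexpired, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    expected = _b64url(hmac.new(SECRET, f"{header}.{payload}".encode(),
                                hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # bad signature: rejected without any DB lookup
    claims = json.loads(_unb64url(payload))
    return claims if claims["exp"] >= time.time() else None

def reissue(old_token):
    """Issue a fresh short-lived token purely from a still-valid one."""
    claims = verify_jwt(old_token)
    if claims is None:
        return None  # expired or forged: client must log in again
    del claims["exp"]
    return mint_jwt(claims)  # no database involved at any step
```

The point being argued above is visible in `reissue`: the only inputs are the old token and the signing key, so no shared session store is consulted.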
It scales better because if you have multiple servers, they don't need to share a database/datastore to manage sessionIds.
I don't know what stack you're working with that makes you say re-issuing JWT every 50 seconds over WebSockets is simpler to implement than the session ID approach people have been using for 20+ years :)
Simpler != cheaper WRT resource consumption. Not having to hit a DB means not having to replicate the DB to respond quickly, and having one fewer point of failure.
If you can live with quickly expiring, quickly reissued crypto tokens like JWT, it's a boon.
But JWTs definitely don't work for web auth. They can be used as CSRF tokens, though.
This is not accounting for reconnect scenarios. With sessionId, if the user loses the connection and reconnects, there will need to be another DB lookup to verify that the sessionId is valid. This is not the case with JWT. The validity of the JWT can be checked without any DB lookups.
Also, this is a security consideration, because a malicious user could intentionally send fake sessionIds to make us DoS our own database. With JWT, signature verification only uses CPU on the workers, which are easier to scale.
Same goes for any signed-token scheme. You can still revoke JWTs if you give them an ID and keep a revoke list somewhere. Though, as you said, most use them to avoid datastore lookups. It's a trade-off: either use time-limited signed tokens that can't be revoked, with the benefit of no lookups, or implement revocation.
> You can still revoke JWTs if you give them an ID and keep a revoke list somewhere.
You don't need the ID. You can simply store the token's signature. In fact, some implementations store the whole JWT to avoid roundtrips to the auth service, and revoking the token is just a matter of flipping an attribute in the database.
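A sketch of what keying the revoke list by signature looks like (a process-local set stands in for the database or shared cache here):

```python
revoked_signatures = set()  # stand-in for a DB table or shared cache

def revoke(token):
    # The signature is the third dot-separated segment of a JWT; for a given
    # key it is deterministic per (header, payload), so it works as an ID.
    revoked_signatures.add(token.rsplit(".", 1)[-1])

def is_revoked(token):
    return token.rsplit(".", 1)[-1] in revoked_signatures
```

The check still runs before any claims are trusted, but only revocation (not routine validation) ever touches the store.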
This kind of comment might make one wonder why not just use a sessionId to begin with, but JWT is still useful here in a microservice architecture: a token that has been revoked by one high-security microservice may still be valid for lower-security microservices. It gives each microservice the option of deciding what level of security it needs to provide... It may not need any revocation list; maybe the JWT on its own is sufficient, and it just keeps accepting the token until it expires naturally.
The token expiry determines the baseline accuracy of banning across all services.
> This kind of comment might make one wonder why not just use a sessionId to begin with
JWTs and sessionIds are totally different beasts. JWTs are used per request, are designed to expire and be refreshed, are specific to each individual endpoint, and store authorization info in a specialized third-party service.
"No revocation" is a dangerous constraint to have in an authentication session. What happens if a user's token is compromised? You have to either wait for the token to expire (if you implemented expiry) or log out every single user.
That's exactly the trade-off. I'm not going to say it's a big enough negative to dismiss using the stateless signed token scheme because it depends on the needs of the application.
But either way, if you really can't afford a database or cache-layer lookup to see if a token is still valid, then by using a bearer token validated by signature alone you accept that a user's session could be hijacked with no possibility of revocation.
The usual way this is mitigated is a short expiry time (I've commonly seen <=5 min) plus a revocable refresh token. This still gives a hijacker a possible 5 minutes (assuming a 5-minute expiry) even after the user revokes the refresh token, but it limits the damage while still reducing DB lookups, since you only do a lookup on token refresh. Hope that clears things up. Again, your application's needs should drive these decisions.
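Sketched in code, the shape of that mitigation looks something like this (the dict stands in for your datastore, and the access token is a plain dict rather than a real signed token, just to keep the example small):

```python
import secrets, time

refresh_tokens = {}  # refresh_token -> user_id; the only server-side state

def issue_access_token(user_id, ttl=300):  # 5-minute expiry
    # Stand-in for signing a stateless token; it carries its own expiry.
    return {"user_id": user_id, "exp": time.time() + ttl}

def login(user_id):
    rt = secrets.token_urlsafe(32)
    refresh_tokens[rt] = user_id
    return rt, issue_access_token(user_id)

def refresh(rt):
    user_id = refresh_tokens.get(rt)  # the one DB lookup, per refresh only
    if user_id is None:
        return None  # refresh token was revoked: hijack window ends here
    return issue_access_token(user_id)

def revoke_refresh_token(rt):
    refresh_tokens.pop(rt, None)  # immediate, active revocation
```

Every request between refreshes is validated statelessly; the store is only consulted once per expiry window.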
Indeed. However, this is just a building block and not a library solution.
Combined with a revocation list you are good. And use something like OpenPolicyAgent to implement it, adding a lot of other possibilities as well.
That's dandy, but it's a solution which is neither standardized nor native to JWT. It's also a weak, passive form of revocation instead of a robust, active form. How do you revoke a token prior to timestamp expiry?
In 2018 it is fully possible to use authentication libraries which natively support granular control for things like revocation using strong, turnkey cryptography. I would argue most people who think they should be using stateless and signed sessions for e.g. performance are heavily discounting the revocation liability and neglecting to optimize their lookups sufficiently (such as by caching).
Revoking a bearer token is trivial, and in all likelihood revoking tokens is a very infrequent event. In most cases it is so rare that you can usually commit your blacklist to source code.
If not, a service to validate tokens against a blacklist is again trivial and will scale for all but the top 0.1% of organizations. And a token only needs to be in the blacklist until it expires.
Yes, jwt is not ideal. But this talk that you should never ever use them and your service will be immediately hacked etc is silly internet bandwagoning.
For a huge percentage of services jwts are just fine. Anyone reading this, please do not over think this advice and just ship with jwts if that is what you have.
> Yes, jwt is not ideal. But this talk that you should never ever use them and your service will be immediately hacked etc is silly internet bandwagoning.
I never said you should never ever use JWT or that your service will be hacked if you do so. In fact, if you kindly reread what I wrote you'll see that I explicitly mentioned there are legitimate use cases for JWT. I am specifically refuting the use of JWT as an authenticated session management system.
> Anyone reading this, please do not over think this advice and just ship with jwts if that is what you have.
This is poor advice.
1) Authentication is sufficiently solved for most workflows and applications that you can use turnkey solutions for more secure and more performant authentication than JWT.
2) What exactly is the scenario you envision in which JWT is all someone has? Do you mean they're forced to use stateless session management, or that JWT is literally all they can do for authentication because nothing else is available?
> What exactly is the scenario you envision in which JWT is all someone has? Do you mean they're forced to use stateless session management, or that JWT is literally all they can do for authentication because nothing else is available?
Good luck using session cookies with Cordova on iOS, for example [1]. In cases like these JWT is perhaps your only option.
> That's dandy, but it's a solution which is neither standardized nor native to JWT.
That statement is false.
JWTs were specifically designed to store a JSON payload whose many standardized fields include the token's expiry time, and JWTs were specifically designed with a workflow that includes not only client-side token refreshing but also server-side token rejection that triggers client-side refreshes.
In fact, JWT token refreshes and token rejections feature in any basic intro tutorial to JWT, including the design principle that tokens should be discarded and refreshed by the client as soon as possible and also the use of nonces.
No, it's not false. Tutorial "best practice" guidance does not constitute a standard. JWT does not provide native revocation. Neither refreshes nor expiry constitute revocation. Revocation is an active state change, not a dead man's switch.
Again, expiry is not revocation. This is an uncontroversial fact - if you disagree, please advise me as to how you'd revoke a token prior to its timestamp-mandated expiration without augmenting it further.
And the jti field is not intended for what you think it is. Anti-replay is not at all the same as revocation. Those are different things entirely.
I certainly believe (and have seen) the jti field used in the manner you describe. But no, that workflow is not intended for revocation. Which makes sense given the design intentions of JWT, because anti-replay can be accomplished as a stateless process, while revocation cannot.
"It's a common practice to add expiry timestamp for such tokens so each token will expire after certain interval."
With this:
"That's dandy, but it's a solution which is neither standardized nor native to JWT."
People are providing evidence that token expiration is native to JWT to refute that statement, while you are arguing in parallel that "expiry is not revocation" which is related but separate.
Issue and expiration timestamps are used along with nonces to enforce single use tokens. Once a token is used then the client is expected to discard and refresh the token.
Implementations are also free to keep track of issued tokens and that does not pose any problem in the real world.
> And the jti field is not intended for what you think it is. Anti-replay is not at all the same as revocation.
Why are you expecting to revoke a token in a scenario where the token is supposed to be used once?
Either the token is deemed valid and accepted or it's invalidated and rejected, which triggers clients to refresh the token and retry the request.
> Why are you expecting to revoke a token in a scenario where the token is supposed to be used once?
An attacker was able to somehow issue a bunch of tokens for himself. Now you want to invalidate them even though they're not used yet.
> Either the token is deemed valid and accepted or it's invalidated and rejected, which triggers clients to refresh the token and retry the request.
The other point here is that you are probably (not always, not in every possible case, but in most common cases) better off using just a bearer token (refresh it on every use if need be). There's no performance benefit in using stateless tokens when they can be used only once, and handling bearer tokens is much easier from a gun-to-shoot-your-feet-with perspective.
> An attacker was able to somehow issue a bunch of tokens for himself
I'm no expert in JWT and just jumping in here, but wouldn't that imply total compromise of the PKI if this ever happens?
I'm saying: if this scenario comes to pass, with basically any old authentication system, isn't it time to roll the master keys and invalidate _every previously issued_ token/session the old-fashioned way, by disavowing the prior signing key, bouncing every user, and requiring them to re-auth and establish brand-new sessions under the totally new PKI?
I assume this is always still possible even with JWT from what I've read so far, but I'm happy to be educated if either of you don't mind sharing.
> I'm no expert in JWT and just jumping in here, but wouldn't that imply total compromise of the PKI if this ever happens?
Not necessarily. Let's say I steal your password and use it against the auth endpoint to get 10 one-time tokens for your account. Re-rolling the master key is a solution, but a very radical one if I can just invalidate all your tokens don't you think? ;)
> Not necessarily. Let's say I steal your password and use it against the auth endpoint to get 10 one-time tokens for your account.
The tokens are valid, thus there is no objective reason to reject them other than there was an unrelated security failure elsewhere in the system.
Additionally, tokens are generated per request and are short-lived, with an expiration timestamp that is just enough to send a request to the server.
When the token is passed to the server, the nonce is added to the server's scratchpad memory to revoke the token and thus avoid replay attacks. If anyone for some reason wants to revoke a token, they only need to add the token's nonce to the revoked list. If the nonce is present in the list then the server rejects the token and triggers a token refresh.
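A sketch of that scratchpad, assuming the token has already passed signature and expiry checks (a set stands in for the short-lived server-side storage):

```python
seen_nonces = set()  # short-lived server-side scratchpad

def accept_token(claims):
    """Accept a verified token only if its nonce hasn't been seen/revoked."""
    nonce = claims["nonce"]
    if nonce in seen_nonces:
        return False  # replayed or revoked: client must refresh and retry
    seen_nonces.add(nonce)  # burn the nonce so the token is single-use
    return True

def revoke_nonce(nonce):
    seen_nonces.add(nonce)  # future presentations will be rejected
```

Revocation and replay protection share one mechanism: pre-seeding the set is exactly what "marking a token used" does.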
I'd argue that people who think they should be using caching are heavily discounting the consistency issues they will encounter (no doubt at the least convenient time), and may well end up reintroducing the same problem they're trying to solve. If you have revocable tokens accessed via an authentication lookup cache with a 5-minute expiry then you've spent a lot of time and engineering effort to have exactly the same problem as if you had non-revocable JWTs with 5-minute expiry.
He suggests KISS: you can probably get away with plain old server-side auth, and if you really need client-side tokens, use something simple that just encrypts and signs them: https://news.ycombinator.com/item?id=13612941#13615634
I can elaborate a bit. It's been mostly a rocky road in library development, along with some confusion in the JWT specification itself. Basically, the JWT spec is poorly designed for lay-programmer use: some folks implement the spec wrongly, and others configure systems that use properly-implemented libraries in dangerous ways. For instance, you need to choose the algorithm carefully and then be careful not to accept any other algorithm the token specifies, as that can enable some interesting attacks (a token specifying a symmetric algorithm when it was meant for asymmetric ones can be "validly" signed using the public key, if the system allows it). Also, technically a user can specify a "none" algorithm that skips payload verification entirely; honestly, all backends SHOULD drop tokens specifying it.
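To illustrate the defense, here's a sketch of pinning the algorithm server-side before any verification happens (the parsing is simplified; real libraries expose this as an allow-list parameter, e.g. PyJWT's `algorithms=`):

```python
import base64, json

EXPECTED_ALG = "HS256"  # illustrative: whatever your server actually uses

def alg_is_acceptable(token):
    """Reject any token whose header 'alg' isn't the pinned one, incl. 'none'."""
    header_b64 = token.split(".", 1)[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore base64 padding
    try:
        header = json.loads(base64.urlsafe_b64decode(header_b64))
    except Exception:
        return False  # unparseable header: drop it
    # Pin the algorithm server-side; never let the token choose it.
    return header.get("alg") == EXPECTED_ALG
```

The key idea is that the allowed algorithm is a server-side constant, never data read from the (attacker-controlled) token.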
JWTs as bearer tokens aren't bad in their own right, but if you aren't careful you can screw yourself and therefore many security experts avoid them for use in securing systems. Plus a lot of people mistake it for an encrypted token which it isn't. You can imagine how bad that can get.
Tbh I'm with the parent commenter. I avoid them, but if you avoid common pitfalls they should work for your system no problem.
I'm on mobile and can't be arsed to gather sources, but you can search the claims I made and you'll see several articles about these problems. There's even a defcon talk about a new proposed standard (called Paseto I think) that starts by highlighting the major issues with JOSE and JWT specifically.
Also (separate post for separate replies): why not use CORS? This is the first I'm hearing about this. SPA websites often use things like JWT and CORS (ours included).
The author hasn’t clarified yet, but I suspect what they’re referring to is the fact that CORS does not support granular access control. If you make something public under CORS, any client can retrieve the resource if no other authorization or authentication check is in place. It’s not a system of authentication, it’s a system of authorization - specifically, for authorizing hosts to request resources which they normally wouldn’t be permitted to request under the same-origin policy.
As a concrete example: people occasionally misuse the Origin header, thinking that they can use it as a form of client authentication. The idea is that any client request from a non-whitelisted origin will fail. But any user can spoof their own Origin header, and the Origin header is primarily intended to protect users from making CORS requests they didn’t intend (because in most cases an attacker cannot coerce a browser to forge a header).
Anyway, HTTP/2 should hopefully address that (through header compression), and things like zero-RTT TLS and keep-alive further minimize the overhead of an additional request.
Plus doesn't CORS only make preflight requests periodically, not for every request?
I write about and research HTTP/2 a lot, and even have a small tool for it (https://http2.pro).
Among the many things you get from HTTP/2, eliminating round-trip time isn't one of them. Sure, you can keep a connection alive, but that's possible with HTTP/1.1 too.
Header compression is HPACK. If the header changes even the slightest bit, it's not cached. Dynamic URLs and headers can easily bust HPACK compression.
Preflights are cached, but because CORS is per-URL caching can be of limited value. If your API uses `/info` and `/edit`, a preflight request has to be made for both (assuming a preflight is necessary). If your application has dynamic URLs (e.g. `/widget/1`, `/widget/2`, etc.) the problem is exacerbated even further.
No, there are several common arguments against JWT for session tokens. The major one intrinsic to JWT is that it has no system of revocation. Thus instead of using a turnkey solution you need to add an additional layer of state logic to your authentication code if you want to be able to revoke tokens.
It is also correct that JWT 1) supports far more cryptography than is necessary; and 2) supports weak cryptography. You can do better than JWT for session management security and performance merely by generating pseudorandom tokens, associating them to sessions and performing lookups.
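A minimal sketch of that alternative, with a dict standing in for the session store (add expiry and rotation as needed):

```python
import secrets

sessions = {}  # in production: your DB or cache, not a process-local dict

def create_session(user_id):
    token = secrets.token_urlsafe(32)  # 256 bits of CSPRNG output
    sessions[token] = {"user_id": user_id}
    return token

def lookup_session(token):
    return sessions.get(token)  # None means invalid, expired, or revoked

def revoke_session(token):
    sessions.pop(token, None)  # immediate, active revocation
```

The token itself carries no information, so there is nothing to misconfigure cryptographically, and revocation is just deleting a row.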
More generally speaking: signed, stateless tokens are attractive for a variety of technical reasons. They have legitimate uses. But it's typically a poor security decision to choose them in lieu of revocation, for reasons which are mostly uncontroversial among those who work in security.
> No, there are several common arguments against JWT for session tokens. The major one intrinsic to JWT is that it has no system of revocation.
That's technically false. JWT features multiple systems of revocation, including the use of nonces. Token revocation also features prominently in JWT's basic workflow.
The key aspect is that there is no turnkey implementation, and thus projects need to roll their own, which is frowned upon by some developers.
By "intrinsic", I meant precisely that there is no JWT standard which admits native revocation. It naturally follows that no JWT implementation provides a turnkey solution for revocation, because it's not intended to.
JWT is stateless. Revocation is stateful. This is a fundamental tension in both cryptography and access control. Yes, you can retrofit your stateless authentication system with a stateful revocation system. But at that point you're back to square one and the architect working on this should consider why they're undoing the legitimate benefits JWT provides.
Nonce based revocation is an active process. Timestamp expiry is not actually revocation, it's expiry. If your token is compromised prior to expiry, you're out of luck.
> By "intrinsic", I meant precisely that there is no JWT standard which admits native revocation.
That's patently false.
JWT's basic workflow features token refreshes, issue and expiration timestamps, and even nonces, and the backend workflow also supports arbitrary token rejections to trigger token refreshes.
The only aspect of JWT's workflow that is left as an implementation detail is tracking revoked tokens.
> JWT is stateless. Revocation is stateful. This is a fundamental tension in both cryptography and access control.
This sort of argument is ivory-tower nitpicking stated disingenuously. JWTs include issue and expiration timestamps, which already renders the workflow stateless. The only stateful aspect, which is silly nitpicking and technically irrelevant, is keeping track of nonces and arbitrarily revoked tokens, which requires keeping a database to track revocations.
We’ve been going back and forth like this for quite a while, so at this point I doubt I’ll be able to convince you with further explanation. I’m shocked you think it’s “ivory tower nitpicking stated disingenuously” to call stateful tracking of nonces what it is - “stateful.”
I’ll recuse myself from further “nitpicking” I suppose, because this isn’t going anywhere. If you’re interested in actually following why your suggestions are a poor fit for session authentication, I’ll direct you to this flowchart: http://cryto.net/%7Ejoepie91/blog/2016/06/19/stop-using-jwt-....
That's just an argument ignoring the realities of scale. In any reasonable system the number of tokens that need to be held in blacklist until seen will be tiny in comparison to active sessions.
How does it matter how many tokens are in the blacklist? You're looking them up in a DB where the lookup time is lg(n) anyway. To give you an idea of how little it matters, say a small blacklist is 10k tokens while a list of all tokens is 10M: log2(10k) ≈ 13.3 while log2(10M) ≈ 23.3. It's only marginally more, because the main latency of the DB request is the network round-trip time.
The actual issue here is that a lookup needs to be performed at all. For every request, you need to pay the latency of one DB round-trip as well as maintaining code that does this lookup. And if you're going to do that anyway, why bother with this complexity of "stateless" tokens?
How does it matter how many tokens are in the blacklist?
If you're authenticating something internal to a company, like the link between the website and the order-status backend, there may be literally one user with one token.
In this case, the list of revoked tokens will take little space, and update very rarely!
If you're authenticating users logging into your website and you decide user logouts should be implemented by token revocation, you're going to have a great many revoked tokens - perhaps within an order of magnitude of the number of active users you have.
I suspect a lot of the disagreement here is between people who are thinking of different situations.
What you’re describing - a microservice architecture - is actually a legitimate use case for JWT. I would say that’s an example of sound authentication, but it’s not session authentication, which is what’s being talked about here. Microservices authenticating and communicating with one another don’t utilize the concept of sessions in the sense that clients (users) and servers do.
For that reason I don’t know that it’s fair to say the disagreement throughout this thread is due to people talking about different things. Microservice authentication notwithstanding, session management is not optimally handled by JWT.
I think the answer is supposed to be that you've done your architecture wrong if you ever allow a revoke list to grow as high as 10k or beyond. You should not have to grant very many long-lived JWT tokens to begin with, so for most revocations it should always be enough to simply let them expire.
If the token blacklist is budgeted and never allowed to grow to a size of more than say 10-200, then it can probably be safely maintained over the lifetime of the project in a way that doesn't require a round-trip, in the source code for the service or otherwise gated behind a release barrier.
I don't know if I agree with that (I've never implemented JWT) but at least I think I've heard of the idea that's how the architecture is supposed to be planned for JWTs.
> If you have to track revoked tokens you might as well track active sessions via a session ID.
No. Tracking revoked tokens is only necessary if for some reason a server wants to reject a valid token, and that's only required until the token expires.
The use of nonces to avoid replay attacks is also a widely established practice, thus we're not talking about extra infrastructure.
Tracking revoked tokens also takes up hardly any resources, as tokens are designed to be short-lived.
JWT in particular uses cryptography to sign a timestamp and some other claims. This way, you can check whether a token is valid just by verifying its signature, without hitting a DB.
Correct me if I'm wrong, but if your backend and front end run on different ports and you're developing locally in Chrome, you have to use CORS to make any non-GET requests.
SameSite cookies can eliminate threats from cross-domain requests. Strict mode is good enough to block even regular cross-domain GET requests.
However, I wouldn't throw other anti-CSRF measures away, because if the attacker can use a stored XSS vuln, they can still make their way to a CSRF as well. Besides that, not all browsers support the SameSite flag yet.
If you have an API, you can program your web client like an API client, using bearer tokens for authentication (put them in local storage). It's probably better than cookies.
Why do you think so? I would guess it's a tradeoff about what you think is more likely to happen. XSS or CSRF.
Local storage (and session storage) is vulnerable to XSS. Use a strict content security policy and escape (htmlspecialchars in php and similar functions in other languages) output to combat that.
Cookies are vulnerable to CSRF but can't be read from JS if they are HttpOnly (no XSS). To combat CSRF, most frameworks already have built-in CSRF token support. In the case of an API, use a double-submit cookie. Frameworks like AngularJS/Angular support that out of the box. Also use the Secure flag, SameSite, and the __Host- prefix [0][1].
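For illustration, the server side of a double-submit check reduces to a constant-time comparison of the cookie copy against the header/form copy; frameworks like Angular do the client half (copying the cookie into a header) automatically:

```python
import hmac, secrets

def issue_csrf_token():
    # Set this value as a cookie the page's own JS is allowed to read
    # (i.e. NOT HttpOnly), so it can be echoed back in a header.
    return secrets.token_urlsafe(32)

def csrf_check(cookie_value, header_value):
    """Pass only if both copies are present and identical."""
    if not cookie_value or not header_value:
        return False
    # compare_digest avoids leaking the token through timing differences
    return hmac.compare_digest(cookie_value, header_value)
```

A cross-origin attacker can force the cookie to be *sent*, but can't *read* it to copy it into the header, which is what the check exploits.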
If you mean that HttpOnly for cookies protects against XSS, you are mistaken. The attacker will simply generate requests to the secure endpoints rather than steal the token and use it from somewhere else. HttpOnly does not really protect you against XSS at all.
With "no XSS" I meant a XSS exploit doesn't allow access to the data stored in the cookie. I didn't mean it would protect against XSS. Poor/lazy wording on my part, sorry.
It's true that an attacker can simply generate requests from the XSS'ed browser; my understanding was that the session/token is more valuable to an attacker than only an XSS exploit.
However, it seems that someone in the past had the same understanding as me, and tptacek disagreed [0]. Oh well. Also, reading the linked article [1] (are you the author, since you use the same wording?) and its linked articles, it seems both cookies and web storage are not ideal solutions, but local storage might be preferable since CSRF is not a problem, so that's one less thing to worry about.
How is that better than a cookie though? Cookies already provide automatic storage and expiry mechanism. Bonus feature is that they are not accessible by JS code at all, if set httponly flag.
Browsers automatically attach cookies to HTTP requests, opening the door to attacks like CSRF.
The security impact of automatic client-side expiry is tiny, since token expiration must be done server-side anyway.
The HttpOnly flag is almost useless as an XSS mitigation; competent attackers will simply run their code from the victim's browser and session, so HttpOnly doesn't really help you there at all. To protect against XSS, you should be setting a CSP that prevents inline and third-party scripts by default, and whitelist what you must.
Overall, cookies may seem like they have a lot of security features, but in reality they are just patches over poor original design. IMHO, using local storage is probably better, because there's less room to get it wrong.
If you use cookies as a storage mechanism and ignore the cookie header on your backend, you close the door to CSRF attacks.
Here's one glaring problem with local storage: literally any script on your page can access it (for example, vendor scripts). Cookies can only be accessed by scripts from the same domain from which they're created.
That's true, but if you run untrusted scripts on your site it's pretty much game over, anyway.
Why should those scripts limit themselves to stealing tokens when they can send authenticated requests from the browser? To put it another way, why would you care about knowing the root password when you have a way to run a root shell at will?
It's interesting that every time this comes up people talk as though the only vector for a malicious script running on your site is you serving it yourself. A reminder that browsers have a ridiculously lax permissions/security model for extensions which extension developers have been shown again and again to abuse (see the Stylish incident for instance).
The scenario is XSS, where the attacker manages to run their JS code on your page, and get all the same privileges as your own code on the page. Whatever mechanism your own JS code uses to perform authenticated requests, the attacker can do the same.
That is not the scenario you described (running untrusted scripts on your site). Cookies are not protected from XSS, but they are protected against malicious or compromised vendor/CDN scripts and browser extensions. Local storage, however, is vulnerable to all of the above.
It seems that you are saying that cookies are more protected from third party code than your own code. That is incorrect.
Let's get specific: let's say you have a page on mysite.com. When a user signs in, the server sets an HttpOnly session cookie to authenticate later requests from the user.
Now let's assume your page loads evilsite.com/tracker.js. The code in tracker.js can now send requests to mysite.com, and your HttpOnly session cookie will be sent. There is no extra protection for cookies that would check if the JS code doing the sending came from mysite.com.
Obviously tracker.js cannot read the value of your session cookie (and, indeed, neither can your own code), but mysite.com is more or less totally compromised.
You're describing CSRF, but again: this vulnerability doesn't exist in the scenario I'm describing.
If you don't set HttpOnly on your cookies and ignore the cookie header on your backend (i.e. only use cookies for storage, not for transport), cookies are strictly better than local storage, since the only difference between the two is now local storage's lax access policy.
The scenario you're describing can also be solved by using a CSRF token retrieved from the backend. Meanwhile, there is literally no way to secure secrets kept in local storage from third party scripts.
No, I'm describing XSS. You know, where an attacker injects scripts on your pages, and the attacker's requests come from the same origin as your own requests. In CSRF, the attack is hosted outside the target site.
I don't believe a situation exists where using cookies for client-side storage is more secure than local storage. Could you please explain this in more detail?
Your page is mysite.com. When a user signs in, you save some sort of session token to a non-HttpOnly cookie. Your backend ignores the cookie header, and you send the session token as a different header with every request. (Basically, the same way you'd authenticate if you were to use local storage).
Now assume your page loads evilsite.com/tracker.js. They can send requests to your backend, and the cookie will be included, but since your backend ignores the cookie header it doesn't matter. The malicious script, however, cannot access the cookie directly, since the script's origin is not the same.
That's why I say cookies used this way are strictly more secure than local storage: the fact that they're included with every request is irrelevant, and they're protected from direct access by third party scripts. Local storage is not. Even if you use JWT for auth, you should still store it in a cookie.
You are misunderstanding how the same-origin policy applies to scripts. If a page on mysite.com loads evilsite.com/tracker.js, then tracker.js runs with the same "origin" as the rest of the page that loaded it. The script has all the same access, including document.cookie, as scripts loaded from mysite.com. Try it.
The same-origin policy only limits access between windows and frames. All scripts loaded on a page will have the same "origin".
> competent attackers will simply run their code from the victim's browser and session
What do you mean? JS even on the same page can't read HTTPOnly cookies. If you are assuming that the browser has been hacked then it is pretty much game over regardless of what you use.
We are talking about XSS, where an attacker can run their JS code on your page. If the attacker can run JS on your page, they can already do whatever your signed-in user can do. No need to read the cookie to make authenticated requests, just like your own code doesn’t need to read the cookie.
No, they've described a Bearer Token workflow. JWT is a specific method that also (most times) uses Bearer tokens, but it wasn't the first, nor does it have a monopoly on Bearer tokens.
I remember building a service when I was experimenting with web development that used randomly generated tokens in a custom HTTP header, and that is closer to Bearer Token (the standard) than Bearer Token is to JWT.
You're trying to be disingenuously pedantic. It's irrelevant if the workflow is specific to JWT or is shared by other bearer token schemes. The point is that JWT, which is a bearer token scheme, follows that workflow, thus it makes no sense to present that workflow as an alternative to the JWT workflow, as it's precisely the same.
If you believe JWT is "precisely" the same as mere presentation of a token, then you're woefully ignorant of JWT.
> ... it makes no sense to present that workflow as an alternative to the JWT workflow ...
But that's not what happened, is it? In fact, it's the opposite. As I read it, [1] suggests a bearer token workflow, to which [2] replies that the suggestion is "an awful lot like JWT", whereupon [3] clarifies that the original suggestion is just a normal bearer token scheme, which, I claim, shares nothing with "JWT" except the "T".
> ... JWT, which is a bearer token scheme ...
The "T" in "JWT" is the least interesting bit of JWT, and merely a necessity.
> It's irrelevant if the workflow is specific to JWT or is shared by other bearer token schemes
When not talking about any specific bearer token scheme, it is absolutely relevant. Only the generic point was under discussion, until JWT was introduced. JWT is not just another bearer token scheme. It comes with its own additional obligations, restrictions, and extra steps, not to mention the purpose-defeating pitfalls.
> JWT is not just another bearer token scheme. It comes with its own additional obligations, restrictions, and extra steps, not to mention the purpose-defeating pitfalls.
The big objection to JWT is that it's a bearer token with no revocation support. If you're going to implement a bearer token with no revocation support, or a custom revocation implementation, anyway, then the criticisms of JWT apply just as much to the system you're building and you might as well just use JWT.
> The big objection to JWT is that it's a bearer token with no revocation support.
That statement is not true. JWTs do support revocation. In short, servers are free to reject any token, which triggers a token refresh on the client-side. Token revocation is even an intrinsic aspect of JWT, as they support issue and expiry timestamps, along with a nonce to avoid replay attacks.
It seems some users have an axe to grind regarding the idea of having to keep track of some tokens that were poorly designed (i.e., absurdly long expiry dates without a nonce), but the solution quite obviously is to not misuse the technology. At the very least, if a developer feels compelled to use a broken bearer token scheme that does not expire tokens based on issue date, then quite obviously they need to keep a scratchpad database of blacklisted tokens to compensate for that design mistake.
> In short, servers are free to reject any token, which triggers a token refresh on the client-side.
Servers can of course implement whatever custom behaviour they desire, but the protocol itself (and common implementing libraries) does not have any direct support for revocation.
Furthermore, any revocation implementation will inherently have to compromise the statelessness that is JWT's most prominent selling point.
> Token revocation is even an intrinsic aspect of JWT, as they support issue and expiry timestamps, along with a nonce to avoid replay attacks.
JWT does indeed support expiry and nonces. But these are not the same thing as revocation.
> It seems some users have an axe to grind regarding the idea of having to keep track of some tokens that were poorly designed (i.e., absurdly long expiry dates without a nonce), but the solution quite obviously is to not misuse the technology. At the very least, if a developer feels compelled to use a broken bearer token scheme that does not expire tokens based on issue date, then quite obviously they need to keep a scratchpad database of blacklisted tokens to compensate for that design mistake.
Insults and "obviously"s are not a good way to convince people of your point of view.
> Servers can of course implement whatever custom behaviour they desire, but the protocol itself (and common implementing libraries) does not have any direct support for revocation.
That's patently false. The protocol does support revocation. In fact, its basic usage specifically states that servers are free to force the client to refresh its tokens by simply rejecting them. If a JWT is expected to be ephemeral and servers are free to trigger token reissues, what led you to believe that JWT didn't support one of its basic use cases?
> Furthermore, any revocation implementation will inherently have to compromise the statelessness that is JWT's most prominent selling point.
That's false as well, for a number of reasons, including the fact that JWTs use nonces to avoid replay attacks. Additionally, JWT's main selling point is that it's a bearer token scheme that's actually standardised, extensible, available as a third-party service, and usable by both web and mobile apps.
> JWT does indeed support expiry and nonces. But these are not the same thing as revocation.
Expiration timestamps and nonces automatically invalidate tokens, which are supposed to be ephemeral, and nonces are a specific strategy to revoke single-use tokens. As it's easy to understand, a bearer token implementation that supports revoking single-use tokens is also an implementation that supports revoking tokens, don't you agree?
> Insults and "obviously"s are not a good way to convince people of your point of view.
Perhaps educating yourself on the issues you're discussing is a more fruitful approach, particularly if you don't feel comfortable with some basic aspects of the technology and some obvious properties.
Also, people make a big stink when authentication cookies aren’t marked as HTTPONLY. Storing tokens in localstorage (even sessionstorage) is just as bad but for some reason more accepted.
Stealing tokens from localstorage or cookies means the attacker can run code in the user's security context. Why would they limit themselves to stealing tokens? Using them outside of the browser would be stupid, anyway, as it would risk tripping reauthentication, IPS, or whatever.
HttpOnly is a joke, and people should stop claiming it helps with XSS. It does not help; its security benefit is at most neutral. In fact, people often seem to think that it prevents XSS, and get lulled into a false sense of security. For that reason, HttpOnly seems to be worse than neutral.
Persistent access via an authentication token is a hell of a lot more reliable than relying on the user not navigating from/refreshing a specific page where XSS is present.
An oversimplified version of the arguments against JWT for session management (as well as the JOSE specification for signing and encryption) ...
1. The specification has points of ambiguity that have led to a number of flawed implementations.
2. JWT is saddled with unnecessary complexity which also contributes to recurring implementation flaws.
3. JWT increases the complexity of session revocation in contrast to a simple, stateless session ID.
The arguments and counter-arguments are a bit more involved, but be aware that by the time you account for the downsides, you may have negated the value you hoped to gain from stateless web tokens.
If you can use a simple session id, use it. If you need JWT to support external authentication providers, use a short expiration and swap the (fully verified) token for a session id.
- For web, user/pass login exchanged for plain session cookies. Should be marked httpOnly/Secure, and bonus points for SameSite and __Host prefix [1]
- For web, deploy a preloaded Strict-Transport-Security header [2]
- For api clients, use a bearer token. Enforce TLS (either don't listen on port 80, or, if someone makes a request over port 80, revoke that token).
- If you go with OpenID/Oauth for client sign-ins then require https callbacks and provide scoped permissions.
- Don't use JWT [3]. Don't use CORS [4].
Again these are broad strokes - if you gave more information you'd get a better response.
[1]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Se...
[2]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/St...
[3]: https://en.wikipedia.org/wiki/JSON_Web_Token
[4]: https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS