dimonomid's comments

Thanks. What do you mean by "possibly" though?

> and possibly the invalidation of the lost key at first login

Do you mean that some service might disregard the counter value (the fact that Google and Github respect it doesn't mean everyone does the same), or something else?


Yes :) Personally, I would just start replacing credentials upon loss in descending order of importance.


Yeah, sure. I mentioned in the article that the purpose of the backup is to enroll a new token and revoke the old one. It would be a bad idea to keep using the backup for a long time anyway.


I just realized that there IS a reliable solution to this issue with the counter. According to the ATECC508A datasheet, its counters can only count up to 2097151, but the full range of a U2F counter is 0xffffffff (which is more than 4 000 000 000). So the counter boost should be set to a value larger than 2097151; then the primary token would never be able to return a counter that large. So once the backup token is used, the primary one is invalidated for good.

Ok cool, I’ll update the article with that important detail.


Wouldn't that also invalidate your backup token after first use?


No, of course not. On the backup, we basically use this as the counter: `hardware_counter_value + 2000000000`. We don't care that `hardware_counter_value` cannot be larger than `2097151`; the value we use for calculations is 32-bit, so effectively, for the backup token, the values start at `2000000000` and the maximum possible value is `2002097151`.

But the primary token uses just `hardware_counter_value`, so its range is from `0` to `2097151`.

The important part is that the ranges of primary and backup tokens don't intersect.
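In code, the scheme is roughly this (a sketch with illustrative names, not the actual u2f-zero source):

```python
# Sketch of the counter-boost scheme (illustrative names, not the actual
# u2f-zero source).

ATECC_COUNTER_MAX = 2097151   # hardware counter limit of the ATECC508A
BACKUP_BOOST = 2000000000     # boost applied only on the backup token

def reported_counter(hardware_counter_value, is_backup):
    """32-bit counter value the token reports in the U2F response."""
    if is_backup:
        return hardware_counter_value + BACKUP_BOOST
    return hardware_counter_value

# Primary token: 0 .. 2097151.
assert reported_counter(ATECC_COUNTER_MAX, is_backup=False) == 2097151
# Backup token: 2000000000 .. 2002097151 -- the two ranges never intersect,
# so once a site has seen a backup counter, every value the primary token
# could ever report looks non-increasing and is rejected.
assert reported_counter(0, is_backup=True) == 2000000000
assert reported_counter(ATECC_COUNTER_MAX, is_backup=True) == 2002097151
```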


> No site is doing anything useful with the counter.

What do you mean? In the article I mentioned that at least Google and Github refuse to authenticate if the counter is less than the last seen value. So using the backup token does invalidate the primary one.

> If they steal a primary yubikey token, no. The counter is stored and managed only in the secure element part of the device. If they steal a primary u2fzero token, which of course the proposal depends on, the counter is not protected in any meaningful way.

Where did you get the idea that the counter on u2f-zero is not protected in any meaningful way? The counter is maintained by the ATECC508A chip and is incremented on each authentication. Also, see my adjacent comment about reliably preventing the primary token from ever returning a counter as large as the backup token's.


> The counter is maintained by the ATECC508A chip

In the u2f-zero implementation, the counter is not used internally by the ATECC508A in the signature generation. It's merely used as stable storage.

It's used much like unix advisory file locking. As long as you are not using it adversarially, it will work "correctly".

Once you attack the device, it's absolutely trivial to use any counter value you care to, not at all connected to the (yes, secure-enough) counter internally stored in the ATECC508A.

Apologies for my incorrect statement about sites' usage of the counter. I was mistakenly thinking about sites allowing the counter to increase by any increment.

Still, this is a weakness of the U2F spec. In fact, there is no spec for counter usage on the RP (relying party) side, just an implementation consideration:

> Relying parties should implement their own remediation strategies if they suspect token cloning due to non-increasing counter values.

So you, the conscientious user, would need to verify with each site that they don't allow the counter to reset. Well, you would need to if the counter were implemented correctly with u2fzero.
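A minimal RP-side counter check, per the implementation consideration quoted above (a sketch, not any particular site's actual code):

```python
def check_counter(stored_counter, reported_counter):
    """Accept an authentication only if the token's counter increased.

    Returns the new value to store; raises if cloning is suspected.
    (Sketch of the RP-side "implementation consideration", not real code.)
    """
    if reported_counter <= stored_counter:
        # Non-increasing counter: possibly a cloned token. The U2F spec
        # leaves the remediation strategy up to the relying party.
        raise ValueError("possible cloned token: counter did not increase")
    return reported_counter

assert check_counter(41, 42) == 42
```

A site that skips this check, or that resets its stored value, silently loses the clone detection the counter exists for.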


> Once you attack the device, it's absolutely trivial to use any counter value you care to, not at all connected to the (yes, secure-enough) counter internally stored in the ATECC508A.

Could you elaborate more on that? How exactly could I use any counter value?


> Having a duplicate of the U2F key is a bigger problem for security than those outlined here.

So, what security problems come to mind if we consider a duplicate token buried a meter deep somewhere in the forest (where nobody really knows it might be)?


My key is stolen. I have to revoke it, but I need my duplicate to log in to do it. The attacker stole my key and phone, because they were in the same place. I've only got a few minutes to go out to the remote forest and dig a meter down to find my key before the attacker uses it with the phone back at bad guy headquarters to launch the nukes. Did I bury it next to this tree or that one? Damn it, they all look the same now. Dig fast, but careful not to smash the key with the shovel. Gosh, the water table is higher than when I buried it. I hope it still works. Christ, it's all tangled in roots! Pull!! Got it! Now log in, revoke it, reflash the key, re-enroll. Phew! Humanity is saved.

With PKI, I contact my spouse. Honey, can you revoke the key I'm carrying? It's been stolen. Thanks sweetie! See you tonight.

I'll admit the duplicate approach makes for a much better movie. The PKI solution is positively boring. Maybe we could throw in a spouse kidnapping to keep it interesting?


Ah, ok, I see what you mean. So can we use the PKI solution today as a second factor for, say, Google?


Firefox-based browsers support PKCS#11 for smart cards like the YubiKey. The right way can be built today. Popular services never built it. Maybe when security keys are more popular, they will. I believe trying to make the wrong way work at any cost will only entrench the wrong way. I'd rather point out that there is a better way in hopes that others will adopt it.

Your article makes a valid point about the catch 22 of U2F keys. I simply disagree with your conclusions that it is the user who should try harder to make U2F work. It seems like you are pointing out that U2F is fundamentally broken, but you haven't accepted that yet.


Not that I'm aware of. I wrote to Yubico and surepassid, both said it's not possible.


There's no easy way to increment the counter, so one would have to invent some automation for sending authentication challenges to the token and pressing the physical button every time. The time it'd take should be enough for me to get another pair of tokens, enroll them to my accounts, and revoke the old ones.

Also, let's make it not 1 000 000 but, say, 4 000 000 000, which still leaves plenty of headroom in a 32-bit value.
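Back-of-envelope, even a generous automation rate makes catching up a 4-billion boost impractical (the rate here is a made-up assumption for illustration):

```python
# Time needed to brute-force a token's counter past a 4 000 000 000 boost,
# assuming an (optimistic, made-up) automated rate of 10 authentications
# per second, button press included.
boost = 4000000000
rate_per_second = 10
years = boost / rate_per_second / (365 * 24 * 3600)
print(f"{years:.1f} years")   # prints "12.7 years"
```

More than a decade of continuous hammering, versus the days it takes to enroll replacement tokens and revoke the lost one.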


I don't know what makes you miss the point, but ok, let me repeat. A regular second U2F device has to be easily accessible (e.g. at home in my bedroom). With the proposed solution, it doesn't have to be easily accessible.

> it still recommends storing the 2nd token at home

In the article I mention that I can either bury it somewhere in the forest or brick it up into a wall. Even if we ignore the forest part, do you consider "bricking it up into a wall" and just keeping it in the bedroom the same thing, even if it's my home's wall? If you do, then ok, sorry for wasting your time; we're not going to agree. I believe that making the token really hard to access (like having to partially disassemble an actual wall) does make it a lot more secure.

Another important point: a regular backup token has to be added to every new service manually, which can be non-trivial if I'm traveling far from home. And I can just forget, etc.


If someone is in my home looking for my U2F key, they've bridged the gap between the threat model I'm concerned with and the model where they're clearly motivated enough that putting the key in brick isn't going to stop them. Furthermore, I don't really like the prospect of having to take a sledgehammer to my wall when I drop my U2F key during a trip.

Personally, I have a couple U2F tokens and OTP on my phone, with recovery codes. If I register for a new site when I'm on the road, it gets my travel U2F and phone OTP. When I get home, I pair up the 2nd U2F and drop the recovery codes in the box with the rest of them.


First, if the token is easily accessible, they can steal it even without actually looking for it; steal it "accidentally", so to speak. Second, obscurity is an important part of the security here: you don't have to tell anyone that you have a token in the wall. So even if they came specifically for the token, they would surely search your home (because most people keep the backup token just at home), but it's a lot less likely that they'd start breaking walls (unless they specifically know it's there). So here, the level of security is up to the token owner.

As to having to take a sledgehammer: well, losing a token is an emergency, and I just try hard not to lose it. But if I do, I still have a way out, and breaking a wall a little is still a lot better than losing access to my accounts.


In which way can u2f-zero be trivially hacked?


The ATECC508A is not used correctly. It is used as an RNG and for its crypto functions, but not for key storage. So any site's key is trivially extracted with physical possession of the device. The wrapping key is also extractable, so you only need one-time offline possession of the device.

The source code is horrible so I'm not going to do a full analysis but that's the gist.

A recent generation phone is far, far more secure.


> The ATECC508A is not used correctly. It is used as an RNG and for its crypto functions, but not for key storage.

Where did you get that idea from? Preparing a device consists of the following steps:

- Flash temporary configuration firmware

- Send keys to the device, which are stored on ATECC508A

- Flash actual u2f-zero firmware

So on the second step (configuring the device), the host writes keys to the device, and those keys are stored on the ATECC508A. It happens here: https://github.com/conorpp/u2f-zero/blob/master/firmware/src...

I do agree that the code is bad though, but alas. I did consider reimplementing it properly, also on a more powerful chip, but I don't think I'll be able to find time for that.

Anyway, dirty code doesn't mean that the device is insecure. From what I see, among other things, ATECC508A is used properly.


> From what I see, among other things, ATECC508A is used properly.

You have completely misread/misanalyzed it. It's used only as a "hardware library" to avoid implementing ECDSA. Or perhaps you don't understand U2F well enough?

I can't understand why you are downvoted. It's a valid (albeit wrong) contributing comment.


I'd actually appreciate if you could elaborate on this. Which features of ATECC508A are not used, but should be used to make the whole thing more secure?


First, note that on your own site, the description of key generation is wrong. That's not how "U2F works", that's how a particular implementation derives keys. U2F tokens are free to create keys any way they like; it's opaque to the RP.

Most critically for u2f-zero, the MCU sees the enrollment key. On enrollment, a new key is derived, and the nonce used in the derivation is sent to the site as the key handle. On use, the key is re-derived from the returned nonce, given to the MCU, then loaded into the ATECC508A, and then the "secure counter" is read and a signature is generated. I haven't looked in detail at RMASK vs WMASK, but it smells like they are not useful.

In the yubikey, the entirety of this code is implemented inside the secure element. In the u2fzero, except for the actual signature itself, the important parts of this code are executed outside of the secure area. The security of the key derivation is good because it prevents one-time access to the device from compromising future enrollments, but for the use case you propose (lost token) it isn't secure since the strongly-generated already-enrolled key is revealed.

A simplistic correction would be to limit the privacy-preserving aspect of U2F to 16 sites (the key-slot limit of the ATECC508A). This is a fine compromise, since there are only 11 sites that accept U2F, per https://www.dongleauth.info/dongles/ . On the 17th registration, u2f-zero can refuse to register, cycle back to the 1st key, keep reusing the 16th key, regenerate (and thus unenroll) a key, or make some other choice.

Rather than a wrapped key, or nonce, or other data for key regeneration, the handle returned to the site is just a key slot index. In this way, the key is generated by and never leaves the ATECC508A.

So, if you're following along, you now realize that this defeats your backup methodology, since the keys would not be wrapped and passed back and forth between site and token. You could pre-generate all the 16 keys outside of the device and program 2 of them identically.

Or you could use the ATECC508A internal DeriveKey command. This can use just the appid as nonce and doesn't need an actual random nonce. This doesn't limit the number of sites that can be registered and also allows duplicates (by programming the same master key into all equivalent devices, along with a starting counter value). The handle in this case can just be a static token-id. If you wanted multiple users to actually register different keys using the same token, then you should add a random nonce. This prevents a site from identifying 2 users sharing a token, and learning something about their relationship. Or a site from identifying 1 user with multiple accounts. But the nonce isn't a security aspect per se, just privacy.
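A host-side model of that DeriveKey-style scheme, with HMAC-SHA256 standing in for the ATECC508A's internal derivation (an illustration of the idea, not the chip's actual algorithm):

```python
import hashlib
import hmac

def derive_site_key(master_key: bytes, appid: bytes) -> bytes:
    # Each site key is a deterministic function of the per-token master
    # key and the appid, so no per-site state ever needs to leave the chip.
    return hmac.new(master_key, appid, hashlib.sha256).digest()

# Two tokens programmed with the same master key (a matched primary/backup
# pair) derive identical keys for every site:
master = bytes(32)
assert derive_site_key(master, b"https://example.com") == \
       derive_site_key(master, b"https://example.com")
# ...while different sites still get unrelated keys:
assert derive_site_key(master, b"https://example.com") != \
       derive_site_key(master, b"https://other.example")
```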

Regardless, your desired use case cannot be met "securely" with the ATECC508A, because the counter is not secure. Anyone with access to the lost primary token can just set whatever counter they like. This is mitigated by the token being a 2nd factor. I would further mitigate it by burning the JTAG fuse and potting the ATECC508A. Then the MCU can't be reflashed or read out, and the ATECC508A can't simply be popped off trivially and put on a new custom board. If you mix partial static data from the MCU, along with a random nonce, as part of key derivation, then the MCU and ATECC508A are married; an attacker has to de-pot both of them, and now the attack is hard enough (beyond amateur hour) that you should have plenty of time to enroll a new token.

Let me just also laugh at the u2fzero site: "It is implemented securely."


First, I agree that there certainly are ways to do this more securely, like generating keys on-chip so that nobody knows them. But as I mentioned, together with security we have to think about a good recovery plan; otherwise one has to come up with some back way which compromises this (good) security. And I still believe that having a second U2F token easily accessible at my home is way less secure than even the current implementation of u2f-zero with the backups as explained. It's all about tradeoffs.

Second, it looks like you're assuming that it's easy to just pick up the MCU and read out the code programmed into it. From my past embedded experience I know that e.g. MCUs by Microchip have read-protection bits in their config, so if a device is programmed with those bits set, one can't just read out the hex. Not saying it's totally impossible, but it takes a considerable amount of time. This is the part of the datasheet for e.g. the PIC32:

> 27.2 Device Code Protection
>
> The PIC32MX Family features a single device code protection bit, CP that when programmed = 0, protects boot Flash and program Flash from being read or modified by an external programming device. When code protection is enabled, only the Device ID registers is available to be read by an external programmer. Boot Flash and program Flash memory are not protected from self-programming during program execution when code protection is enabled.

HOWEVER, that said, after briefly looking at the datasheet of the u2f-zero MCU (https://www.silabs.com/documents/public/data-sheets/efm8ub1-...), I failed to find any mention of read protection, which does seem strange to me. I need to figure out more before I can say for sure. But nevertheless: suppose I get some time and reimplement something like u2f-zero on an MCU which does have code protection, e.g. a PIC32. Does that address your concerns?


> way less secure

First you argue that the device is secure. When I point out that it is not, you switch tack and argue that, taking other factors into account, it's still "better" than doing it some other way, so overall security is good.

You also want to conveniently forget aspects of your original argument at your blog, such as the defect with OTP that a phone can be stolen. A phone is much, much, much more secure than u2fzero.

You're also insisting that your particular use case for needing a backup is the only and best use case. If you want to design a system for your individual desire and needs, that's great, but this doesn't generalize. I'm guessing most people will not want their device baseline to be insecure so that they can have a 2nd insecure backup at hand.

As I hinted, the best solution is likely a cloud-based token. This can be simulated with u2f-zero by using a method similar to what @ecesena proposed. With either knowledge of an authorized external transport key, or by sharing a transport key, the ATECC508A can share key material securely. I won't work out the details here, but basically you just need a single initialization-time secret that you store in the cloud. With that, you can take an arbitrary fresh unprogrammed u2f-zero and initialize it to look like the one you lost. It has to be done this way (under the user's direct control), because if you buy u2f-zeros pre-programmed, all security is lost.

The secret blob can be stored on your [secure] phone and backed up in a safe on a piece of paper, or any other storage scheme which a particular user may require to meet their needs. Personally, I would store it in icloud.

Or, if you used public keys, closer to @ecesena's proposal, you wouldn't need to keep a secret (at the expense of having to prepare the backup devices ahead of time).


I apologize for being inconsistent in my argument; indeed, I "switch tack". That's because it's actually hard for me to constantly keep the full picture in my head, and sometimes I start thinking "omg, what have I done, it's so insecure", but then realize that I already thought about it and still decided to use that strategy because of this, this, and this.

In fact I really appreciate you explaining all these details, and to be honest I need to find a decent amount of time to fully wrap my head around it; hopefully on the weekend I'll be able to. I doubt it will change my reasoning about my personal use case, but learning something new about security is never bad. Thanks for that! Hopefully I'll get back to you after learning more.

Also, I wanted to make 100% sure: are you arguing just that u2f-zero is insecurely implemented, or that the whole concept of a backup with a cloned and securely-stored token is also bad? If we imagine that e.g. Yubico started producing matched pairs of tokens (I doubt it would, but still, let's imagine), a primary and a backup with the boosted counter, implemented securely enough, etc., would it still be bad in your opinion?


Diagrams help me. They do wonders to organize the thought process down to the most core ideas and cement it in your mind. I have planned to try "mind mapper" software to help with it but haven't done so yet. If you produce some kind of diagram I would be happy to review it with you.

> are you arguing just about the u2f-zero being insecurely implemented,

yes. the concept of a pre-cloned token is a good one. my main point of contention with you is that in your threat model, a lost phone is a security problem, yet a lost u2f is not. you need to revise your threat model or revise your solution.


> my main point of contention with you is that in your threat model, a lost phone is a security problem, yet a lost u2f is not

I realized that I could have explained my concerns better in the article. My biggest issue with the phone is not that it could get hacked, but that I could just lose it together with my U2F token (because, you know, I always carry both the phone and the token), and thus get locked out of my accounts. So it's not about somebody attacking me specifically to get my 2FA data, but just about some bad luck resulting in losing both the U2F token and the phone which was its backup.

Instead of Google Authenticator we could use Authy, which synchronizes its database with the server, but an Authy account can be recovered via SMS, which is anything but secure. I actually updated the article just now with that point.

I really want a backup which is rock-solid secure and reliable, you know, more reliable than any other 2nd factor I have. So having a token bricked into a wall or something like that would work.

> you need to revise your threat model or revise your solution

When I have a chance to reimplement the same backup concept on something more secure than u2f-zero, then yeah, I surely will revise the solution.


> MCU which does have code protection

http://www.break-ic.com/

in short, MCUs are not secure devices. the additional protection is great for surface level protection, and certainly sufficient for most threat models. but in this case, we are talking about an authentication token that is obligated to keep secrets. one cannot claim "secure implementation" without secure key storage.

even the atecc508a doesn't really meet the definition of secure. eg, there is no eal certification. no reputable vendor would use it in a commercial security device. it's great for its target market though.

unfortunately, there is no way for a hobbyist to acquire actually secure chips. something like the atecc508a will have to do.


although, for this implementation, you don't need to read out the chip anyway. you can just sniff the i2c bus to learn the keys.


> in short, MCUs are not secure devices

I doubt there is a 100% secure device; you know, it's all about tradeoffs. Given enough time and resources, nearly anything can be hacked.

By the way, if an attacker has gained physical access to the primary token (and also to another factor, a password, since the token on its own is not too helpful), they don't need to hack it in any way: they can just use it to log into the account, add some other token, and revoke the existing one.


> I doubt there is 100% secure device,

you might want to read up on EAL certification. Yes, there is no 100% secure device, but "secure" is about resistance to attack and an actual secure element is very resistant, as well as tamper evident.

u2f-zero could be hacked in 10s flat if you prep ahead of time.

> if an attacker has gained physical access to the primary token ... they don't need to hack it in any way

but they need long term access. with this device, short term access is enough to learn the key and then i can get access at a time of my choosing. i can also learn the counter value and you won't notice that i have gained access.


Yes, using a normal MCU for U2F is a bit of a compromise, since EAL chips are unobtainium. So flash read protection is the main barrier to physical cloning methods.

I'm not aware of any methods to bypass the read protection on normal MCUs in a 10s "drive-by" attack. AFAIK, the specialty companies that provide flash readout (http://www.break-ic.com/) do so by decapping the chip and using involved imaging techniques. I suspect they get good at identifying various flash technologies, many of which are common to many chips. But I don't think it's feasible as a drive-by.

The I2C eavesdropping shouldn't be an issue because the ATECC508A does apply a mask.


> you won't notice that i have gained access.

If I lose my primary token, how can I not notice that? I look at it a few times per day, and I use it fairly often too, so I can't see how I would fail to notice losing it (even if it were replaced by a similar-looking device).


> you can just sniff the i2c bus to learn the keys

RMASK and WMASK, which smelled not useful to you, are there exactly to prevent this from happening.


lol no. this is almost not even worth a response.

the key managed here

https://github.com/conorpp/u2f-zero/blob/master/firmware/src...

and here

https://github.com/conorpp/u2f-zero/blob/master/firmware/src...

just means that the "actual key" (unmasked) only lives in MCU memory for a short time: from when the mask is applied until return to the caller and the memory is cleared, in the enrollment case I linked. In the authenticate case, it lives quite a bit longer, because the stack space used for key storage isn't zeroed.

The ATECC508A doesn't use or know how to use the mask. The actual key used for the encryption is passed in the clear over I2C.

(note that the key derivation you suggest is wrong because of the extra xor masking.)
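The masking argument can be shown with a toy sketch (values are made up; whether the chip applies its own further masking is a separate question):

```python
# Toy illustration: an XOR mask protects the key only while it sits in MCU
# memory; whatever is finally handed to the secure element is unmasked.
real_key = bytes(range(32))
mask = bytes([0x5A] * 32)                               # made-up mask value

stored = bytes(a ^ b for a, b in zip(real_key, mask))   # masked at rest
unmasked = bytes(a ^ b for a, b in zip(stored, mask))   # unmasked before use

assert unmasked == real_key
# If 'unmasked' is what crosses the I2C bus, a bus eavesdropper learns the
# key regardless of the mask.
```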


This is a bit late, but the ATECC508A does apply a random mask; see the PrivWrite command in the datasheet.

http://ww1.microchip.com/downloads/en/DeviceDoc/20005927A.pd...


Oh, thanks for the note about Twitter! I'll update the article.


SEEKING WORK, Remote

Hi, my name is Dmitry. I'm a passionate software engineer with a strong background in low-level development (MCU real-time kernels, C, assembler), and experienced in higher-level technologies as well: Go, C++, JavaScript, and many others. I'm the author of TNeo, a well-structured and carefully tested real-time kernel for 16- and 32-bit MCUs: https://github.com/dimonomid/tneo , which is now used by several companies.

Apart from professional activities, I'm a hobbyist in Lisp, Ruby, Node.js, and Angular.js. I'm also learning the internals of the Linux kernel, since that's something I'm truly excited about.

One of my hobby projects is a geeky bookmarking service written in Go: https://github.com/dimonomid/geekmarks

Technologies: Go, C, C++, Assembler, Low-level, Embedded, RTOS, JavaScript, SQL, PostgreSQL, Java, Linux, Git, Bash, Docker, Ansible

Some of my articles:

- How I ended up writing a new real-time kernel: https://dmitryfrank.com/articles/how_i_ended_up_writing_my_o...

- Here's why I love Go: https://dmitryfrank.com/articles/i_love_go

- How do JavaScript closures work under the hood: https://dmitryfrank.com/articles/js_closures

- Unit-testing (embedded) C applications with Ceedling: https://dmitryfrank.com/articles/unit_testing_embedded_c_app...

- Object-oriented techniques in C: https://dmitryfrank.com/articles/oop_in_c

Résumé/CV: https://dmitryfrank.com/dmitry_frank_resume.pdf

Email: mail@dmitryfrank.com



