Hacker News

I can certainly recommend reading up on SSH certificates via the ssh-keygen manpage. No extra tools required.

I sign SSH certificates for all the keypairs on my client devices; the principal is set to my unix username, and the expiry is a few weeks or months.

The servers trust my CA via TrustedUserCAKeys in sshd_config (see the manpages). SSH into root is forbidden by default; I SSH into an account matching my principal name and then sudo or doas.

My gain in all of this: I have n clients and m servers. Instead of having to maintain all keys for all clients on all servers, I now only need to maintain the certificate on each client individually. If I lose or forget about a client, its certificate expires and becomes invalid.
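The workflow described above can be sketched roughly like this (key names, the principal, and the validity period are illustrative):

```shell
# One-time: create the CA keypair (keep the private half offline/secure)
ssh-keygen -t ed25519 -f user_ca -C "user CA"

# Per client: sign its public key. The principal (-n) is the unix
# username, the validity (-V) is a few weeks.
ssh-keygen -s user_ca -I my-laptop -n myuser -V +4w ~/.ssh/id_ed25519.pub

# Inspect the resulting certificate
ssh-keygen -L -f ~/.ssh/id_ed25519-cert.pub

# On each server, /etc/ssh/sshd_config gains one line:
#   TrustedUserCAKeys /etc/ssh/user_ca.pub
```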




Expiries are not protection against compromise.

Compromises happen in seconds, even milliseconds, and once they do, the attacker will establish persistence. Expiry systems do not and have never been protection against compromise. They're an auxiliary to revocation systems that lets you keep revocation lists manageable.

If you don't have revocation lists, or your number of changes is small, you should go ahead and just set your credential expiries to whatever you want - infinity, 100 years, whatever - it won't make the slightest bit of difference.

Particularly when they're protecting sudo user credentials, they're no defense at all.


Yeah the lack of mentioning a CRL at all really stood out when reading this. I actually didn't know about SSH certificates until I saw this article (I always assumed that SSH did not support this), but do run my own CA and authentication for internal web services, EAP-TLS, and VPN. The CRL is your first line of defense in the sense that it blocks the use of that credential instantly when it is revoked.

I will argue though that the use of a short expiry produces slightly better protection than no expiry at all. If an employee leaves the company (with no CRL in place) and their certs expire in 16 hours, then unless their credentials are stolen in that timeframe your systems are still safe.

Likewise, if a CRL is in place and credentials are stolen without you being aware of it, the expiry still provides a buffer if the stolen credentials only end up being used after the cert expires. In that case the expiry would trigger before you realised the credentials were stolen and updated the CRL. Now yes, compromises can happen in seconds, but that's not the case every single time.

That being said, I definitely agree that expiry is not a substitute for a CRL, and any certificate system should have revocation in place. In the end you really should have both a CRL and an expiry date if possible.


Rookie mistake: SSH has no CRL; it has a KRL.

And it's actually a separate thing, since it operates largely independently of the CA.

I have one in place. Used it once to terminate access for someone.


Rookie mistake: SSH's KRL is also a CRL. See KEY REVOCATION LISTS in ssh-keygen(1). You can revoke plain keys with it, but also certs (both by serial number and by identity).

The infrastructure I built for access control using SSH certs used it. I know it works because I tested for it specifically.
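For reference, a minimal KRL workflow looks something like this (file names and the serial number are illustrative):

```shell
# Create a KRL revoking a specific public key
ssh-keygen -k -f revoked.krl compromised_key.pub

# Revoke certificates issued by our CA by serial number, via a
# KRL specification file, updating (-u) the existing KRL
echo "serial: 42" > revoke.spec
ssh-keygen -k -u -f revoked.krl -s user_ca.pub revoke.spec

# Test whether a given key or cert appears in the KRL
ssh-keygen -Q -f revoked.krl some_key.pub

# sshd_config then points at it:
#   RevokedKeys /etc/ssh/revoked.krl
```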


It sounds like you could be making the rookie mistake instead by not reading what he/she actually wrote.

> Yeah the lack of mentioning a CRL at all really stood out when reading this. I actually didn't know about SSH certificates until I saw this article (I always assumed that SSH did not support this), but do run my own CA and authentication for internal web services, EAP-TLS, and VPN. The CRL is your first line of defense in the sense that it blocks the use of that credential instantly when it is revoked.

This sounds like he/she is running an x509 CA. He/she is generating certs for various use-cases.

It is possible to use x509 certs with SSH of course, and so he/she could leverage his/her pre-existing CA for that function.

Given the above context, CRL is completely accurate, and KRL is not.


No, SSH CA certificates are in no way like OpenSSL (x509) CA-issued ones.

The certificate formats of the two ecosystems diverged about a decade ago.


If you didn’t know about SSH certs, you shouldn’t be giving advice. You should study the fundamentals.


I think you may also have missed the context that he/she used, as they described running an x509 CA first.

In an organizational context, many organizations are not going to jump to creating a novel CA type (SSH CA) when in fact regular x509 CAs are well known and the basis for much security, and many in regulated industries are using them already.

Additionally, given that he/she is running an x509 CA, telling someone with that experience to study the fundamentals is not very polite. It assumes the author of the comment is not educated, yet the very description of his/her use-cases shows they are not simplistic ones.

Engineering is all about tradeoffs after all.


That’s a fantastic point. Mea culpa


Your pronoun thing makes your text painful to read.


... it genuinely pains you to read "he/she"?


That’s why I use “one”.


I just didn't want to assume gender, and didn't want to go through comment history in order to find it.


You would seem to have a very low pain threshold.


I'm not familiar with SSH certificates, but I do know the fundamentals of certificate-based authentication. If you don't have a way to revoke the cert, then the server will assume that your properly signed, unexpired certificate is valid. You need some way to let the server know that a previously issued cert is no longer valid.

This is how this type of authentication works, and the article did not address the important case of wanting to revoke a user's credentials.


To connect back to my rant: isn't the disparity of thought around security best practices amazing? How does someone who knows next to nothing become a reliable security professional if even the security professionals disagree on fundamentals?


The fundamentals are that your keys need to exceed roughly 80 bits of complexity (a brute-force cost above 2^80 operations). Adding some padding to that is a good idea because some algorithms can be weakened in theory (for instance, AES-128 has already been reduced to roughly ~118 bits of effective strength through known cryptanalysis).

This is for symmetric encryption; for asymmetric, the rough equivalent is ~1024 bits, so padding up to 2048 bits is generally the "minimum" for RSA, and since some of that math is advancing too, bumping it to 4096 bits isn't a bad idea. If you want to be quantum-proof, RSA will be broken, so you have to move to something else (note that classical EC falls to Shor's algorithm as well, so that means post-quantum schemes). AES's effective strength would be halved by Grover's O(sqrt(N)) search, so AES-128 becomes the equivalent of AES-64; if you want to be quantum-proof there, you need to jump up to AES-256 (unless you are using XTS/tweak mode, in which case AES-512). Keep in mind that quantum attacks are not exactly practical to accomplish short-term at the moment.

You can use whatever technology to accomplish that complexity, be it passwords, SSH keys, or SSH certs. Anything else is just technology architecture noise. Passwords absolutely can clear the ~80-bit threshold. It's just about bytes, and how you store them.

Nobody is going to be brute-forcing a sufficiently complex password over the network anytime soon, unless it isn't actually random but some default password that merely looks random.
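As a back-of-the-envelope check of that threshold, here is a small sketch (Python; the 94-character printable-ASCII charset is just an example):

```python
import math
import secrets
import string

def entropy_bits(charset_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random string."""
    return length * math.log2(charset_size)

# 94 printable ASCII characters (letters, digits, punctuation; no space)
charset = string.ascii_letters + string.digits + string.punctuation
print(len(charset))              # 94

# How long must a uniformly random password be to clear ~80 bits?
print(entropy_bits(94, 12))      # ≈ 78.7 bits -- just short
print(entropy_bits(94, 13))      # ≈ 85.2 bits -- 13 chars suffice

# Generating one with a CSPRNG:
pw = "".join(secrets.choice(charset) for _ in range(13))
```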

Just look at the title of this post: "If you're not using SSH certificates you're doing SSH wrong". It's just completely devoid of environment issues, user issues, datacenter issues, and reeks of elitism. There is no "one true way" despite people's insistence that they are the arbiters of truth. I keep reading here about "you should just use serial over network instead of SSH!" but fail to read about how those serial over network connections are usually less secure than SSH itself.

Best practices guides have gone off the rails. They are generally good guidelines, but you have to make sure you are taking into account your own environment and user needs and take them with a grain of salt. Learn for yourself, and read raw facts from real cryptographers and people in the field. Don't take best practices guides as absolute truth, but learn from them.

How does one become a security professional? Maybe not with one of those "become a security professional in 30 minutes" packages, then starting a blog about how everyone isn't conforming to their tiny worldview. No matter what, it'll take >10 years of actual experience, just like any profession. One has to start from the bottom and make their way up. Most environments are too complicated for any "one size fits all" solution:

https://xkcd.com/927/

EDIT: Further discussion on this here is interesting. The top comments go all in on SSH certificates, then down the line people start questioning why passwords are bad in the same ways. A lot of the "SSL certificate" push theorized here from their perspective seems to come from VPN providers that need it from lesser skilled clients/users (think, people who bought VPNs off YouTube video recommendations):

https://arstechnica.com/information-technology/2022/02/after...


> You can use whatever technology to accomplish that complexity, be it passwords, SSH keys, or SSH certs. Anything else is just technology architecture noise. Passwords absolutely can clear the ~80-bit threshold. It's just about bytes, and how you store them.

I always try to assume breach in my thought processes, but I recognize that this can lead to overengineered solutions, because sometimes the mitigation is not worth the cost.

> Just look at the title of this post: "If you're not using SSH certificates you're doing SSH wrong". It's just completely devoid of environment issues, user issues, datacenter issues, and reeks of elitism.

I think this is an excellent point you make. There are a few different ways to use SSH securely and I probably lean a little towards the x509 and other alternatives, given the established base of x509 within my industry.

I don't use SSH certificates at work because they really don't make sense for me when I am using a strong credential already (HSMs).

> There is no "one true way" despite people's insistence that they are the arbiters of truth. I keep reading here about "you should just use serial over network instead of SSH!" but fail to read about how those serial over network connections are usually less secure than SSH itself. Best practices guides have gone off the rails. They are generally good guidelines, but you have to make sure you are taking into account your own environment and user needs and take them with a grain of salt. Learn for yourself, and read raw facts from real cryptographers and people in the field. Don't take best practices guides as absolute truth, but learn from them.

These are some other seasoned points you make.

I like to think about "Security Objectives". In most cases I am concerned with whether something is secure from a confidentiality or integrity perspective. But since I also deal with an ICS/SCADA community, their context is completely driven by "Availability as Paramount", with defined performance within an acceptable range being next; only after that do the other objectives come into play.

However, given the varying use-cases of machine, mobile, app, and connectivity basis or lack thereof (internet, transient, air-gap, etc) and the limitations of each, sometimes a smorgasbord of solutions is needed to satisfy the constraints.

> How does one become a security professional? Maybe not with one of those "become a security professional in 30 minutes" packages, then starting a blog about how everyone isn't conforming to their tiny worldview. No matter what, it'll take >10 years of actual experience, just like any profession. One has to start from the bottom and make their way up. Most environments are too complicated for any "one size fits all" solution:

Appreciate the words of wisdom.

I view security as having much in common with other rapidly evolving fields of expertise. Generalists became specialists, who are now splitting into sub-specialties, adding fellowships, etc. When I was a young force-sensitive, I had the good fortune to fall in with the right community in which to collaborate.

My opinion is that many of the security communities are among the most welcoming, diverse, and inviting folks around.


> I always try to assume breach in my thought processes, but I recognize that this can lead to overengineered solutions, because sometimes the mitigation is not worth the cost.

I agree with this mindset; I do the same. But at the same time, yes, you do have to realize that sometimes it's not worth it. For instance, there are two types of attack you might encounter: a strong nation-state, and a drive-by botnet using known exploits and weak passwords to grab the low-hanging fruit. If you are patched and using strong passwords, you aren't going to be affected by the drive-by botnet. If you are patched and using MFA and whatever strong credentials, a zero-day sat on by a nation-state is going to plow through anyway. Then they have gotten into that outer ring as a user, and you are trying to protect against privilege escalation. Most protections against that which are actually going to work are strong process control or integrity checking (Windows), Mandatory Access Control systems (SELinux), or just basic user silo-ing and not running things as privileged accounts (either one). Most of that is going to be in the OS design itself or the architecture of the process.

So we go to privilege escalation exploits. Take this year: at the time of writing it is March 2022, and I have been patching nothing but privilege escalation flaws on Linux machines (I don't admin Windows, so I don't know that landscape) all year. It's only been three months. There's no short supply of them being discovered, and many of them are mildly, moderately, or entirely mitigated by just using SELinux. Some of them go all the way past it, though, so sometimes it can be futile.

So the nation-state threat in almost any case will likely be able to jump right past to root level with a zero-day. So what about in between? Well, on the attack side, if you are stockpiling or developing zero-days, those tend to add up quickly, or you get locked out entirely because they get patched. Your skills also ramp up pretty quickly as an exploit hunter. So you either develop a strong foothold or you fall out of the criminal world entirely. I'm sure it's probably the most paranoia-driven and stressful "job" to have while you are striving not to completely fall apart and get locked out due to defense ramping up, or locked up (not that trying not to get hacked isn't paranoia-driven enough).

I also want to emphasize: you REALLY don't want to get compromised AT ALL at this point. Patching is probably the best way to avoid that, and the most important step. The reason being, you can't necessarily prove that you have kicked the attacker out after you think you have unless you completely wipe the machine, and even then you have no idea if they got as far as a firmware exploit (in the case of a nation-state), which is the more terrifying kind of exploit being discovered and sought after.

But regardless, if you find out that you've been compromised and you're using a random password, you're going to change that password anyway if you are doing things right.

> I don't use SSH certificates at work because they really don't make sense for me when I am using a strong credential already (HSMs)

And that's a great point, too. HSMs are a great way to secure SSH as it is, and use the same or similar cryptography as SSH certs as long as they are well developed.

What comes to mind for me is a complicated environment where SSH certs don't help: inter-organizational setups where you have to make a connection work over multiple crazy hops. For instance, an end-user's laptop has to connect to Citrix from home, then RDP into a local machine in organization A, then over an existing IPSEC tunnel use OpenVPN software to VPN into organization B, then SSH into a server in organization B. Organization B just did things using OpenVPN and then SSH, but the rest had to be tacked on due to the client's environment. Real-world example. So the best option in this case was for organization B to use Yubikeys in OTP mode, typing the AES-signed secrets as keyboard input through the multiple connections. Organization B had no control over organization A's infrastructure, nor the ability to tell them to stop doing anything the way they were doing it, but had to consider the security implications of the way those systems were set up anyway, because the "client" was working in this environment. Then there was the issue of training the users; explaining SSH certs OR keys to them would have been impossible. Telling them to hit a button was hard enough.

I've heard much crazier stories from the military involving piping encrypted sessions over satellite and jumping it over cable connections, etc (including patching live Super Bowl feeds over serial connections for officers which are always fun stories, especially when dealing with legal copyright issues involving the government in the 80s and fudging reasoning), but there are just some things when you are involved with multiple organizations or multiple connections or inter-organization or international things that you just can't control every single detail of. This is going to get more and more complicated as remote-work gets adopted more as well, so these old stories of network insanity are extremely useful for application level connectivity for sysadmins now.

Long story short, sometimes that thing you think is engineered terribly has a reason for it. Usually it involves stupid logistical nightmares, weird requirements, or bureaucratic/legal hopping. It's only going to get worse, too.


I'm not sure I understand the point here: are you saying that a CRL is an effective protection against compromise? If so, how exactly does that work?


If I'm not using a device for a long time, it ceases to be an authorized client. This is what I want.


`ssh-keygen` #Certificates: https://man7.org/linux/man-pages/man1/ssh-keygen.1.html#CERT...

"DevSec SSH Baseline" ssh_spec.rb, sshd_spec.rb https://github.com/dev-sec/ssh-baseline/blob/master/controls...

"SLIP-0039: Shamir's Secret-Sharing for Mnemonic Codes" https://github.com/satoshilabs/slips/blob/master/slip-0039.m...

> Shamir's secret-sharing provides a better mechanism for backing up secrets by distributing custodianship among a number of trusted parties in a manner that can prevent loss even if one or a few of those parties become compromised.

> However, the lack of SSS standardization to date presents a risk of being unable to perform secret recovery in the future should the tooling change. Therefore, we propose standardizing SSS so that SLIP-0039 compatible implementations will be interoperable.
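The idea can be sketched in a few lines: a toy k-of-n Shamir split over a prime field. This is illustrative only and deliberately not SLIP-0039 compatible; real deployments should use an audited implementation.

```python
import secrets

P = 2**127 - 1  # Mersenne prime used as the field modulus (secret < P)

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random degree-(k-1) polynomial with the secret as constant term
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x: int) -> int:  # Horner evaluation mod P
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, k=2, n=3)   # 2-of-3: any two shares suffice
assert combine(shares[:2]) == 123456789
```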


Now you have a centralized single point of failure. While the ease-of-use benefit of the implementation is obvious, if/when it does fail you will have to fall back to public key or password auth anyway.


Centralized single points of control are a basic goal of corpsec. They trade availability for security. The alternative model of individual SSH keys is theoretically more highly available, but has many single points of security failure.


Please enlighten me on the ‘many single points of security failure.’


Which failure mode do you mean? The CA is accessible via offline means. I can walk to it and sign me a new keypair.


What happens when the building the CA is in burns down?


The CA is in a gpg-encrypted secrets store (pass) and has a password on itself, so it can be backed up like normal data to an off-site location.


Scan printed QR codes of your private key that you had backed up off-site.


Ideally, k-of-n key shards, stored in safety deposit boxes.


That's actually pretty brilliant.


Provided you keep said papers away from prying cameras in a verifiable way, that is.

For more inspiration, check out the Glacier Protocol.

https://glacierprotocol.org/


Thanks for the heads up!

I wish I'd thought about this when playing with bitcoin a few months after launch and amassing an integer value larger than zero. That wallet died with the hard drive.


Please tell me you still have the hard drive. There’s a chance for recovery, and I have some experience in this area if you want some tips. Step 0 is always keep your drives for future recovery attempts.


It was dumped many, many years ago, while BTC was still a novelty and pizzas went for thousands of BTC apiece. I went to see if I still had a backup of the wallet during a USD:BTC spike a few years back, and it was gone.

Life goes on, even when sad things happen :(


Think of it this way: by starving the supply of that one bitcoin, you have contributed in some small way to the eventual loss of all bitcoins through similar events - speeding up the rate at which the world can move on from this silly fad.


ddrescue may be of interest if you still have the disk.

That's `dd` for broken disks. It keeps a map of the data it couldn't read and can keep trying to read it indefinitely; it even saves its state, so it can resume trying again later.

I've recovered filesystems from several failed disks using it. It's not fast though!
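A typical two-pass invocation looks something like this (the device and file names are illustrative; this is a recipe sketch, not something to run blindly):

```shell
# Pass 1: copy everything readable quickly, skipping bad areas (-n),
# recording progress in a mapfile so the run can be resumed later.
ddrescue -n /dev/sdX disk.img rescue.map

# Pass 2: go back and retry only the bad areas, up to 3 times (-r3),
# resuming from the same mapfile.
ddrescue -r3 /dev/sdX disk.img rescue.map
```

The mapfile is what makes interrupted runs resumable; keep it alongside the image.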


The extreme version of this is using an HSM, and putting one in a safe deposit box.


It's not so extreme; you still have to trust the HSM manufacturer.

Try generating randomness using casino-grade dice, and xor-ing it with the HSM. Maybe then.
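The XOR trick works because combining two independent entropy sources by XOR yields output at least as unpredictable as the stronger source. A toy sketch (the dice rolls and the "HSM output" below are simulated placeholders):

```python
import secrets

def dice_to_bytes(rolls: list[int], n_bytes: int) -> bytes:
    """Pack base-6 dice rolls (values 1-6) into bytes (naive packing)."""
    value = 0
    for r in rolls:
        value = value * 6 + (r - 1)
    return value.to_bytes(n_bytes, "big")

def combine(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

hsm_output = secrets.token_bytes(4)  # stand-in for HSM-supplied randomness
# 12 rolls give 6^12 ≈ 2.2e9 outcomes, which fits in 4 bytes
dice = dice_to_bytes([3, 6, 1, 5, 2, 4, 4, 1, 6, 2, 3, 5], 4)
seed = combine(hsm_output, dice)
```

Even if the HSM output were secretly biased, the dice contribution keeps the combined seed unpredictable (and vice versa).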


Now I'm wondering who's managed to pull off supply chain attacks on dice, since I'm sure it's happened already.


Also, this doesn’t apply to most real scenarios (especially not “how I run my personal stuff” type scenarios), but is a fun one to contemplate: what happens when your customer has requirements that specify all keys (including root signing keys) to be rotated at a certain point in the future? Having a process for this is an interesting challenge.


The CA is a key, not a network service.


Sign with two or three CAs, and have sshd accept any of them.
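One way to sketch this (paths are illustrative): sshd accepts a certificate signed by any CA whose public key is listed in the TrustedUserCAKeys file, so a backup CA is just another line.

```
# /etc/ssh/sshd_config
TrustedUserCAKeys /etc/ssh/user_cas.pub

# /etc/ssh/user_cas.pub -- one CA public key per line:
#   ssh-ed25519 AAAA... primary-ca
#   ssh-ed25519 AAAA... backup-ca
```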



