
Looking at my case, I wouldn't call this pointless, since it removed > 95% of login attempts.

Sure, IP address block fragmentation will remove some value, but you'd still kill a lot of juicy compromisable home IP addresses in those countries.

Regarding cloud providers - well, I hope they have some automated systems to clamp down, since it's against their own interest to serve as vectors.



> since it removed > 95% of login attempts

If you use certificate authentication and disable password authentication, then you've killed off 100% of unauthorised attempts right off the bat. There really isn't much need to do more than that; it's the world's easiest security fix.
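For anyone wanting to do this, a minimal sshd_config sketch (option names are stock OpenSSH; the CA path is a placeholder):

```
# /etc/ssh/sshd_config -- disable password auth, allow only keys/certificates
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
# For SSH certificate auth specifically, trust a CA instead of individual keys:
# TrustedUserCAKeys /etc/ssh/user_ca.pub
```

Reload sshd afterwards, and keep an existing session open while you verify key login still works.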


Sure, I don't think we actually disagree too much.

Leaving password auth on is simply negligent.

That said, blocking all countries that you don't expect to talk to is the world's second easiest security fix, and protects other processes you might have running, and other unknown vectors that might be worse, such as heartbleed.
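A sketch of what that looks like with nftables (the CIDR ranges below are placeholders from the documentation blocks; a real setup would load per-country ranges from an RIR/GeoIP feed):

```
# /etc/nftables.conf fragment -- drop inbound traffic from blocklisted ranges
table inet filter {
    set country_block {
        type ipv4_addr
        flags interval
        elements = { 192.0.2.0/24, 198.51.100.0/24 }
    }
    chain input {
        type filter hook input priority 0; policy accept;
        ip saddr @country_block drop
    }
}
```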


> That said, blocking all countries that you don't expect to talk to

All fun and games until you find yourself travelling in Asia, need to connect back home, and realise you forgot you blocked half the world.

As for "other processes", we're talking about SSH in this thread. If you've got other processes, that falls into the on-host/upstream firewall filtering area of security. Regarding "unknown vectors", as I said: patching. No amount of IP blocking will help you with that.

If you really want to talk about "world's second easiest security fix" for SSH, that would be running a super-hardened bastion host and using SSH ProxyJump.
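For reference, the client side of that setup is a few lines of ssh_config (hostnames and usernames here are hypothetical):

```
# ~/.ssh/config -- everything behind the bastion is reached via ProxyJump
Host bastion
    HostName bastion.example.com
    User jumpuser
    IdentityFile ~/.ssh/id_ed25519

Host internal-*
    ProxyJump bastion
```

Then `ssh internal-db01` tunnels through the bastion transparently.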


> All fun and games until you find yourself on travelling in Asia

Leave a cheap droplet running somewhere, whitelist its IP, and SSH through it. I'm in Asia and I do exactly that with my U.S.-based servers. Actually, I've blocked all of the rest of the world as well, except the proxy droplet. It's like a global bastion host. There's no legitimate reason for anyone else on this planet to try and SSH into those boxes.
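Server-side, that allowlist can be just a couple of firewall rules (iptables sketch; 203.0.113.10 stands in for the droplet's IP):

```
# Accept SSH only from the jump droplet; drop everything else
iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j DROP
```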


>Leaving password auth on is simply negligent

I know this is an oft-repeated trope, but I disagree. If you are whitelisting users for ssh and use secure passwords, you're really quite safe. Whereas if you lose access to your device with ssh keys, you're locked out with no hope of getting back in.

In what sense is it "negligent"? I feel like this is just an example of people constantly repeating popular advice without really considering it, like happened with bad password expiration policies. Like, I get that an ssh key probably has more entropy, but there is such a thing as good enough. My ssh password for my server is mixed case with numbers, and 20+ characters. Good luck cracking it.


I’ve seen a compromised machine whose sshd had been replaced with a trojan that saved all passwords entered into it. Some investigation revealed that sshd modification to be part of a standard script kiddie toolkit.

Are you 100% sure that every machine you log into remotely with your very strong password isn’t logging that password?

That’s one advantage of keys.


Perhaps I'm missing something? I don't understand the issue. If the machine you're logging into (running sshd) is already compromised, it's... already compromised. I don't recycle passwords, so what is the risk?

And if the device on which you're running your ssh client is already compromised, it doesn't matter whether you use a key or password, it's the same thing.

Can you explain the threat a bit more?


> I don't recycle passwords

That’s good. I think you’re probably in the minority though.

There are other ways unique passwords can be compromised. I see passwords accidentally entered into IRC windows about once a month. And even if you have perfect discipline at using unique passwords, that’s not something you can enforce on anyone else logging into your machine.

Maybe someone will chime in with strategies to run ssh from a wrapper that loads a unique password from a password manager with no risk of reuse or entry into the wrong window, or something. But at that point, your complaint about being locked out without the necessary files would apply—might as well use a key, which is simpler and provides strong security with no rigamarole.


> And if the device on which you're running your ssh client is already compromised, it doesn't matter whether you use a key or password, its the same thing.

Whoa there sunshine.

Put your SSH keys on a USB HSM (Yubikey or Nitrokey) and nobody is ever going to be able to extract the private key.

Added bonus, put it on a USB HSM with touch auth (e.g. Yubikey) and nobody will ever be able to use the key without you knowing it (because you have to physically touch it).
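For the curious, OpenSSH (8.2+) supports these tokens natively via the FIDO key types; a sketch, illustrative only since it needs the hardware key plugged in:

```
# Generate a key whose private half lives on the token;
# each use requires a physical touch by default
ssh-keygen -t ed25519-sk -f ~/.ssh/id_ed25519_sk
```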


>Put your SSH keys on a USB HSM (Yubikey or Nitrokey) and nobody is ever going to be able to extract the private key.

Except you. To run through a compromised machine... Perhaps I don't quite understand how it works, but I don't see how this setup negates that issue. Once you plug it into the compromised machine and allow access to it with whatever touch-authentication or w/e, I can't imagine you could keep it secret from the attacker on the compromised machine. But maybe it's encrypting the key on the device?


> Whereas if you lose access to your device with ssh keys, you're locked out with no hope of getting back in.

And if you lose access to your password manager, you're equally locked out. If you're not using a password manager, you're either 1. Dealing with a tiny number of servers (possible and legitimate), 2. Reusing passwords, or 3. Using insecure passwords. ... Okay, fine, 4. Or a world class memory/mnemonic device expert. Just back up your keys.


> you're locked out with no hope of getting back in.

Fortunately, most VPS and dedicated server hosts have a side channel that allows you to regain access when needed. It might be an automated dashboard feature to reset the root password, or you could open a support ticket. With colo, you can actually drive to the DC and reboot into single-user mode. In any case, you won't be locked out permanently. :)

Attackers, of course, can also social-engineer those side channels to gain access if they really tried. Much easier than cracking long passwords or 2048+ bit private keys.


> Fortunately, most VPS and dedicated server hosts have a side channel that allows you to regain access when needed.

Fortunately, but of course it means you now need to consider this side channel as well. Maybe you have strong ssh keys all across, but your cloud service has a web admin UI that can bypass them, and someone has an 8-character password on it.


Yeah, the hosting company is usually the weakest link. I use 2FA on any web admin UI that supports it, but who knows how well it will hold up against a determined social engineering attack on the CS department?


> >Leaving password auth on is simply negligent

> I know this is an oft-repeated trope, but I disagree.

Agreed. Sometimes in these discussions it is forgotten that passwords and keys are both instances of an N-bit secret.

Now, yes, passwords tend to be shorter and have less entropy per byte if a human generated them, and keys don't have these limitations. So in general it is nearly always wise to remove access via passwords. Certainly wherever general users might be creating those passwords, since it is guaranteed some will be weak.

But any threat modeling exercise needs to consider availability as well. Using the STRIDE model, the D is for Denial of service. One case of that is not being able to access something important.

For my infrastructure there is (only) one ssh entry point which can be accessed via password. Limited only to very few select userids and the passwords have >=128 bits of entropy. Nobody will be brute-forcing those in the lifetime of the universe. It's a bit of a pain to memorize them, but it is possible. It has saved me a few times when I'm traveling and have access to nothing other than myself and my memory and need to get in.

On the downside, definitely need to be careful about operational security. If you are traveling, where are you entering this password? Can it be captured? Be wise. But there is a use case.
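To put numbers on that: with the 62-symbol [A-Za-z0-9] alphabet, each character contributes log2(62) ≈ 5.95 bits, so 22 random characters clear 128 bits. A quick sketch of the math, plus one way to mint such a secret:

```shell
# Bits of entropy per character of a uniformly random [A-Za-z0-9] password,
# and the length needed to reach 128 bits:
awk 'BEGIN {
    bpc = log(62) / log(2)
    printf "bits per char: %.2f\n", bpc          # -> 5.95
    printf "chars for 128 bits: %d\n", 1 + int(128 / bpc)   # -> 22
}'
# Generating a 128-bit secret directly (16 random bytes, base64-encoded):
openssl rand -base64 16
```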


Private keys aren’t shared though. You never have to worry about leaking a secret when you authenticate with a key because the private key never leaves your machine.


> if you lose access to your device with ssh keys, you're locked out with no hope of getting back in.

I'd rather keep backups than risk password brute-forcing on servers.


When this topic (always) comes up, somebody points out that moving the ports isn't security...but it kinda is, because if you suddenly see an uptick in logging, you know someone cares enough to FIND your port, and then POINT SOMETHING AT the port, and in the meantime it reduces heat and power and disk wear.


Moving the port reduces security. Port 22 is a privileged port. Standard users can’t listen on port 22. If you move the port to 2222 or wherever, then if an attacker with local access can get sshd to crash, they can run their own sshd instead. If you left the port as the default, they wouldn’t be able to do that without chaining it with a privilege escalation attack. But because you changed the port, you disabled this security feature and it could become a privilege escalation attack.


Only if they already have local access, and sshd isn't currently listening on the port (while it's bound, you can't just bind it yourself).

And they still can’t show the host certificate, so your client would tell you the certificate changed.

Unless, of course, they have a local root exploit — but then port 22 is just as unsafe.

I fail to see how changing a port makes anything less safe.


It's pretty trivial to disable the ability of users to listen on certain ports higher than 1024 with a firewall config.


A firewall config can block listening? What would that firewall config be? The firewall can block packets by owner uid, but I'm not sure who the owner is in the legitimate sshd case: root, or the user logged in?

SELinux can do it, probably other LSMs, too.
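On SELinux systems (with policycoreutils installed), the usual approach is to label the nonstandard port so that only sshd's domain may bind it; sketch:

```
# Allow sshd (and only domains permitted to bind ssh_port_t) to use port 2222
semanage port -a -t ssh_port_t -p tcp 2222
```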


You can't block the actual port bind, but you can block any packets from reaching it, so the difference is mostly semantic.


Yes, but that port needs to remain "open" for legitimate sshd traffic. Can the firewall see a difference in ownership between sshd and some user daemon? Sshd partially drops root when login succeeds.


that has nothing to do with sshd's listening socket, which remains owned by root


Sure the listening one remains owned by root. But the connected one? If you limit packets from/to e.g. 2222 to uid 0, will legitimate ssh traffic work? I don't say it won't, genuinely unsure. Haven't tried and today is a holiday. Maybe tomorrow :)
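If anyone wants to try it, the iptables `owner` match (OUTPUT chain only, since socket ownership is only known for locally generated packets) would look something like this; whether legitimate traffic survives depends on the accepted socket staying root-owned, which I'd expect but haven't verified:

```
# Drop outbound packets sourced from port 2222 unless the owning socket is uid 0
iptables -A OUTPUT -p tcp --sport 2222 -m owner ! --uid-owner 0 -j DROP
```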


In practice, people won’t do that. Case in point: the article doesn’t mention this mitigation at all. It introduces an additional attack vector and tells you it’s safer.


I'm not sure it reduces security. You have a valid point: it adds an attack vector. However, only if the attacker is already in your system.

Having all the log spam from failed attempts on 22 might make the admin negligent about carefully following their logs at all. Having it pretty silent on a higher port usually makes you notice occasional scanning. At least I have noticed it, and it made me tighten the firewall a bit.

There is no perfect security.


Your point about privileged ports is valid. So maybe change to a different but still privileged port?


> and in the meantime it reduces heat and power and disk wear.

If drive-by SSH attempts - even a large number of them - are enough to have a noticeable impact on heat/power/wear, then you should probably consider, you know, not putting hardware from 1997 on the Internet. Really doesn't take that much energy on hardware built during the current century to reject an authentication attempt and log it.


Call it a feng shui thing.


> it reduces heat and power and disk wear.

Measurably?


When will people realise that removing login attempts is meaningless security theatre? It's like when some CIO or politician says "we've been attacked 29 million times today".

This is why companies pay for https://www.greynoise.io/ so they don't need to worry about meaningless stuff.


It's not about stopping brainless botnets from actually logging in with root:toor 9 million times, it's about removing clutter from your logs so you can more easily tell when something actually dangerous is happening.
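As a toy illustration of that signal-to-noise point, assuming the typical sshd syslog line format (the excerpt below is fabricated):

```shell
# Fabricated auth.log excerpt, modeled on typical sshd syslog lines
cat > /tmp/auth_sample.log <<'EOF'
Oct 12 03:14:01 host sshd[123]: Failed password for root from 203.0.113.5 port 51234 ssh2
Oct 12 03:14:03 host sshd[124]: Failed password for invalid user admin from 203.0.113.5 port 51235 ssh2
Oct 12 03:15:00 host sshd[125]: Accepted publickey for alice from 198.51.100.7 port 40000 ssh2
EOF
# Count the noise you'd otherwise be wading through
grep -c 'Failed password' /tmp/auth_sample.log   # -> 2
```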


> it removed > 95% of login attempts.

This is the wrong number to look at. The relevant number is the login attempts that would have otherwise been successful. And if you’ve disabled passwords, that will be 0.


If failed login attempts go from 0 to 1000 in a single day, that’s very interesting and likely means you are being specifically targeted. You’d notice that if your ssh port is, say, 7731.

With exactly the same targeting on port 22, you will see a rise from 100,000 to 101,000, which you will likely not notice, despite it being just as dangerous.

Changing the port does not make a targeted attack any more or less likely, or any more or less successful. But it does make it much more visible - and that’s useful for security as well.

A fault in your reasoning is that it assumes you know exactly what the attacker is doing - a credential attack; indeed, this is the most common. However, maybe they are trying to exploit a zero-day timing attack, requiring multiple attempts? Or some reconnaissance that lets them figure out valid usernames?

The different port won’t stop these of course, but will make the attempts stand out - which may allow you to stop them if noticed in time, or at least understand them in retrospect.



