
Following random guides on the internet doesn't necessarily have to be harmful if you don't simply copy-paste, but instead make an effort to understand what advice is being given, why it is being given, and form an opinion about it. For instance, assuming you start from the very beginning, if a guide suggests disabling root login and you do your own research to understand what root accounts are, what they can do, and why you would want to disable them, and after doing your homework you happen to agree it is a good idea, then it's probably okay to follow that specific advice.
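For what it's worth, on OpenSSH that change usually comes down to a couple of sshd_config directives. A rough sketch (assuming the default /etc/ssh/sshd_config location; make sure key-based login for your own user works before you lock the door):

    # /etc/ssh/sshd_config
    # refuse direct root logins over ssh; use a normal account plus sudo instead
    PermitRootLogin no
    # optional but commonly paired with it: keys only, no password guessing
    PasswordAuthentication no

Then reload sshd from a session you keep open and confirm you can still log in before closing it.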

That doesn't mean you are going to learn everything there is to know about server administration by following a random guide on the internet, but for a small startup it's not necessarily a bad way to get started, since at the end of the day you are going to learn more from experience than by reading 50 books.



There are some surprises if you go down that rabbit hole.

The best argument I could find for "disable root login" was "the attacker has to guess the username, too", which doesn't align with Kerckhoffs's principle and isn't the way security should be done, imo.

Also, fail2ban is a protection against bruteforce. If bruteforce is an issue for you, you're doing something wrong.

Please correct me if you know more.


Disabling root login is part of the principle of making all access accountable to individuals, not to role accounts. Imagine how much more challenging things are forensically if you see a bunch of actions in the logs taken by "root" vs. by "joeg", the sysadmin who was fired a week later.

fail2ban helps with a lot of things. It keeps spam out of the logs. Some systems have a high CPU cost per login attempt (bcrypt), so tools like this can help keep brute force attempts from turning into (or being deliberately used as) a DoS.
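A minimal jail is enough for that. A sketch of /etc/fail2ban/jail.local (jail names and defaults vary between fail2ban versions and distributions, so treat this as illustrative):

    [sshd]
    # ban a source after 5 failed logins within 10 minutes, for one hour
    enabled  = true
    maxretry = 5
    findtime = 600
    bantime  = 3600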


Some of the brute force attempts against servers are so relentless now that they can consume a significant amount of server resources just causing the server to say, "no. no. no. no. no. no. no. no. no...." They also fill up your log files, needlessly consuming disk space and making it a pain to crawl through logs later on to troubleshoot legitimate issues. Plus, you can hook Fail2Ban so that other services can use it to buff up their filters. For instance, if someone's spamming your mail server, your mail server can trigger Fail2Ban and then Fail2Ban can tell your web server to also block the IP (or network) to help reduce common sources of WordPress spam.
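As a rough illustration of that kind of hookup, a jail can use the stock iptables-allports action so a ban triggered by the mail server blocks the source on every port, including the ones the web server listens on. The filter name and log path here are assumptions; adjust them for your setup:

    [postfix]
    enabled  = true
    filter   = postfix
    # assumed mail log location; varies by distro
    logpath  = /var/log/mail.log
    # ban on all ports, not just SMTP, so the web server is covered too
    action   = iptables-allports[name=postfix]
    maxretry = 5
    bantime  = 86400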

There are good reasons to use Fail2Ban, and the counterarguments that it doesn't actually improve security miss all the other benefits it brings.

And, I've read all of Theo de Raadt's arguments against these approaches. I understand and mostly agree with them. I get that with ssh key-only authentication, sane service configuration, and so on, people can hammer away at your server all day and never accomplish anything. But that still doesn't mean I want to provide a test bed for every dumb script kiddie on the internet (and there are many).


Use the simplest tool possible. Fail2ban relies on log parsing, which is a possible attack vector.

The thing is that you can achieve pretty much the same effect with a smaller attack surface and better efficiency using rate limiting in your packet filter.

E.g. in iptables the 'recent' module can do this; see man iptables-extensions and search for 'recent'. You can set up a rule so that any IP address making more than 5 connection attempts to port 22 in one minute gets put on a list that is DROPped, as in the sketch below.
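Roughly like this (the list name and thresholds are arbitrary; these rules would go before your ACCEPT rule for port 22, and this is a sketch rather than a drop-in ruleset):

    # record every new connection to port 22 against the source IP
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --name ssh_probe --set
    # drop once a source has made more than 5 new connections within 60 seconds
    iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
        -m recent --name ssh_probe --update --seconds 60 --hitcount 6 -j DROP

The --update in the second rule also refreshes the timestamp, so the block persists for as long as the attempts keep coming.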

Edit: BTW, if you think the fail2ban attack vector is purely theoretical, you might want to check the CVEs:

http://www.cvedetails.com/vulnerability-list/vendor_id-5567/...


iptables rate limiting still doesn't solve the problem of identifying attacks against one service so that they can be preemptively blocked by other services on other servers.



