An interesting thing I've noticed is that some attackers watch the Certificate Transparency logs for newly issued certificates to find new targets.
I've had several instances of a new server being up on a new IP address for over a week, with only a few random probing hits in access logs, but then, maybe an hour after I got a certificate from Let's Encrypt, it suddenly started getting hundreds of hits just like those listed in the article. After a few hours, it always dies down somewhat.
The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.
> The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.
Honestly it feels like you'll need at least something like basicauth in front of your stuff from the first minutes it's publicly exposed. Well, either that, or run on your own CA and use self-signed certs (with mTLS) before switching over.
For example, some software still has initial install/setup screens where you create the admin user, connect to the DB and so on, as opposed to letting you specify everything up front in environment variables, config files, or a more specialized secrets-management solution.
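For illustration, a minimal nginx sketch of that kind of stopgap basic auth in front of a not-yet-configured app (hostname, paths and upstream port are placeholders, not anything from the thread):

    # /etc/nginx/conf.d/setup-guard.conf
    server {
        listen 443 ssl;
        server_name newthing.example.com;                 # placeholder hostname
        ssl_certificate     /etc/nginx/tls/fullchain.pem; # or an internal-CA cert pre-launch
        ssl_certificate_key /etc/nginx/tls/privkey.pem;

        auth_basic           "setup";
        auth_basic_user_file /etc/nginx/htpasswd;         # created with: htpasswd -c /etc/nginx/htpasswd admin

        location / {
            proxy_pass http://127.0.0.1:8080;             # the still-unconfigured app
        }
    }

Drop the auth_basic lines once the app's own admin user and auth are actually set up.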
Yes, if you follow the advice of "don't expose anything until you've deployed the security for it", then of course you block password auth before exposing SSH to the internet.
Not everyone is following that advice. Just last week I taught a friend about using tmux for long-running sessions on their lab's GPU server, and during the conversation it transpired that everyone was always sshing in using the root password. Of course plugging that hole will require everyone from the CTO downward to learn about SSH keys and start using them, so I doubt anything will change without a serious incident.
Are we just speculating? SSH scanners are not sources of DDoS. Large companies have SSH bastions on the internet and do not worry about SSH DDoS. It's not really a thing that happens.
You don't need to freak out if you see a bunch of failed SSH auth attempts in your logs. Just turn off password-based authentication and rest easy.
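A minimal sshd_config sketch of that (reload sshd afterwards; the PermitRootLogin value is just one reasonable choice):

    # /etc/ssh/sshd_config
    PasswordAuthentication        no
    KbdInteractiveAuthentication  no
    PermitRootLogin               prohibit-password   # key-only for root, or "no" outright

Make sure key-based login works in a second session before closing the one you used to change this.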
You want to keep these things behind multiple locked doors, not just one.
For the servers themselves, you shouldn't be able to get to sshd unless you're coming from one of the approved bastion servers.
You shouldn't be able to get to one of the approved bastion servers unless you're coming from one of the approved trusted sources, on the approved user access list, and using your short-lived sshd certificate that was signed through the use of a hardware key.
And all those approved sources should be managed by your corporate IT department, and appropriately locked down by the corporate MDM process.
And you might want to think about whether you should also be required to be on the corporate VPN. Or, to be using comparable technologies to access those approved sources.
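A rough sketch of the short-lived certificate piece (CA filename, principal and lifetime are made up; in a real setup the signing step sits behind the SSO/hardware-key flow rather than a bare ssh-keygen call):

    # on the CA side: sign the user's public key, valid for 8 hours
    ssh-keygen -s user_ca -I alice@corp -n alice -V +8h -z 1001 id_ed25519.pub
    # -> produces id_ed25519-cert.pub next to the public key

    # in the bastion's sshd_config: trust certs signed by that CA
    TrustedUserCAKeys /etc/ssh/user_ca.pub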
Agreed. Another thing you can do to drastically reduce the amount of bots hitting your sshd is to listen on a port that is not 22. In my experience, this reduces ~90% of the clutter in my logs. (Disclaimer: this may not be the case for you or anyone else)
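For instance (any high port works; remember to update firewall rules and your ~/.ssh/config to match):

    # /etc/ssh/sshd_config
    Port 2222   # instead of the default 22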
Just to reduce the crap in the log, and also because I can, I have my SSH servers (not saying what their IPs are) using a very effective measure: traffic is dropped from the entire world except for the CIDR blocks (kept in ipsets) of the five ISPs, across three countries, that I could reasonably be on when I need to access the SSH servers.
And if I'm really in, say, China or Russia and really need to access one of my servers through SSH, I can use a jump host in one of the three countries that I allow.
So effectively: DROPping traffic from 98% of the planet.
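A minimal sketch of that setup with ipset + iptables (set name and CIDR blocks are placeholders, using RFC 5737 example ranges):

    ipset create ssh_allow hash:net
    ipset add ssh_allow 203.0.113.0/24     # ISP 1
    ipset add ssh_allow 198.51.100.0/24    # ISP 2, etc.

    iptables -A INPUT -p tcp --dport 22 -m set --match-set ssh_allow src -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP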
This is the way: outbound-only connections, so you can stop all external unauthenticated attacks. I wrote a blog post two years back comparing zero-trust networking using Harry Potter analogies... what we are describing is making our resources 'invisible' to silly muggles - https://netfoundry.io/demystifying-the-magic-of-zero-trust-w...
I used to have an iptables config that just dropped everything by default on the SSH port, and ran a DNS server that, when queried with a magic string, would allow my IP to connect to SSH. It helped that the DNS server was actually used to manage a domain and was seeing real traffic, so you couldn't isolate my magic queries that easily.
Yes, better to make your bastion 'dark' without being tied to an IP address. This is how we do it at my company with the open source tech we have developed - https://netfoundry.io/bastion-dark-mode/
Until a junior from another project enables password-based root logins because the Juniper team that was on site to help them install a beta version of some software they collaborated on asked them to.
This was a few days after they had asked to redirect an entire subnet to their rack.
And yes, you still need to remember to close password logins, or at least pick serious passwords if you need them. It helps to have no root login over SSH and normal users that aren't the defaults for some distro...
I am not blaming this on SSH (also, no longer in that org for many years).
I am just pointing out (as I have in a few other, off-site discussions) that one should not even think of exposing a port before finishing locking it down.
Because sometimes people forget, even experienced people (including myself), and sometimes that's enough (I think someone a few weeks ago submitted a story that involved getting pwned through an accidentally exposed Postgres instance?).
And there are enough people who get it wrong, for various reasons, that the lowest of low script kiddies can profit by buying ready-made extortion kits on Chinese forums, getting a single Windows VM to run them, and extorting money from gambling/game-server sites. Not to mention all the fun stuff you find if you search for open VNC/RDP.
Your security is only as good as the people running your system. Unfortunately not everyone has teams of the best of the best. Sometimes you get the junior dev assigned to things. They do not know any better and just do as they are told. It is the deputized-sheriff problem.
In that case it wasn't even the junior's fault - they were following experts from Juniper who were supposed to be past masters at installing that specific piece of crap (as someone who later accidentally became a developer of that piece of crap for a time, I feel I have a basis for the claim).
And those people told him the install system didn't support SSH keys (hindsight: it did) and got him to make root logins possible with passwords. Passwords that weren't particularly hard to guess, because their only expected and planned use was for the other team to log in for the first time, via the BMC, and set their own before the machines were exposed to the internet.
I wish. I use basicauth to protect all my personal servers, the problem is Safari doesn't appear to store the password! I always have to re-authenticate when I open the page. Sometimes even three seconds later.
Was looking into Certificate Transparency logs recently. Are there any convenient tools/methods for querying CT logs? E.g., search for domains within a timeframe.
Cloudflare’s Merkle Town[0] is useful for getting overviews, but I haven’t found an easy way to query CT logs. ct-woodpecker[1] seems promising, too
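crt.sh also has a simple query page whose JSON output is easy to script against, though it's not an official API and gets slow on big domains:

    # all logged names under a domain (the %25 is a URL-encoded % wildcard)
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u

The JSON entries also carry not_before/not_after timestamps, so filtering by timeframe is a jq select away.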
It seems like the principle of least power applies here. There's value in restricting capability to no more than what's strictly necessary. Consider the risk of a compromised some-small-obscure-system.corporate.com when it is served with the same wildcard cert that also covers mission-critical-system.corporate.com.
Wildcard certs are indeed a valuable tool, but there is no free lunch.
You'd usually put a reverse proxy exposing the services and terminating TLS with the wildcard cert.
The individual services can still have individual non-wildcard internal-only certs signed by an internal CA. These don't need to touch an external CA or appear in CT logs - only the reverse proxy/proxies should ever hit these, and can be configured to trust the internal CA (only) explicitly.
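In nginx terms that's roughly (all names and paths are placeholders):

    server {
        listen 443 ssl;
        server_name svc1.example.com;
        ssl_certificate     /etc/ssl/wildcard-example-com.pem;  # the public wildcard cert
        ssl_certificate_key /etc/ssl/wildcard-example-com.key;

        location / {
            proxy_pass https://svc1.internal:8443;
            proxy_ssl_verify               on;
            proxy_ssl_trusted_certificate  /etc/ssl/internal-ca.pem;  # trust only the internal CA
            proxy_ssl_name                 svc1.internal;
        }
    }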
A compromised wildcard certificate has a much higher potential for abuse. The strong preference in IT security is a single-host or UCC (SAN) certificate.
Renewing a wildcard is also unfun when you have services which require a manual import.
Using them like that never occurred to me. I was thinking of multiple sites on one host or vanity hostnames: dfc.example.com / nullindividual.example.com, etc.
Unless you're running some sort of automated system to churn out vanity hostnames (like Azure or AWS do to provide you an out-of-the-box URI), a UCC/SAN cert is a better choice.
More restrictive is better than less restrictive when it comes to certificates.
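With certbot, for example, a single SAN cert scoped to exactly the names you need is just (hostnames reused from the example above):

    certbot certonly --nginx -d dfc.example.com -d nullindividual.example.com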
"I've had several instances of a new server being up on a new IP address for over a week, with only a few random probing hits in access logs, but then, maybe an hour after I got a certificate from Let's Encrypt, it suddenly started getting hundreds of hits"
I host so many services, but I gave up totally on exposing them to the internet. Modern VPNs are just too good. It lets me sleep at night. Some of my stuff is, for example, photo hosting and backup. Just nope all the way.
If you're the only one accessing those services, then why use a VPN instead of port mapping those services to localhost of the server, and then forwarding that localhost port to your client machine's localhost port via SSH?
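i.e. something like (port numbers are arbitrary):

    # app binds only to 127.0.0.1:8080 on the server;
    # this makes it reachable as http://localhost:8080 on the client
    ssh -N -L 8080:127.0.0.1:8080 user@server.example.com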
I am in the same situation as the grandparent. I don't even expose the SSH port to the outside. The only open port is WireGuard's UDP port, which only accepts packets authenticated with the correct key. Everything works perfectly, no issues with NAT, and I even give my mobile devices an IPv6 address that my ISP allocates.
Tunneling through SSH is significantly worse because you encapsulate a TCP connection inside a TCP connection, and it runs in userspace.
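For reference, the server side of that WireGuard setup is only a handful of lines (keys and addresses are placeholders); packets that don't authenticate against a known peer key are silently dropped, so the port doesn't even show up as open to scanners:

    # /etc/wireguard/wg0.conf -- the only exposed port is 51820/udp
    [Interface]
    Address    = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server private key>

    [Peer]
    # laptop/phone
    PublicKey  = <client public key>
    AllowedIPs = 10.8.0.2/32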
I have also set up WireGuard, but I changed my model and only use it to troubleshoot.
The reason is privacy. I use a VPN to obfuscate my IP, which means I would have to route my entire network through the VPN. Unfortunately this has proven surprisingly difficult to do properly, meaning with appropriate performance (MTU), IPv6, no blocking (exit-IP reputation), etc.
Hence I switched to Argo/Cloudflare Tunnels for pretty much everything.
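The cloudflared side of that is roughly (tunnel name and hostnames are placeholders):

    cloudflared tunnel create home
    cloudflared tunnel route dns home photos.example.com

    # ~/.cloudflared/config.yml
    tunnel: home
    credentials-file: /home/me/.cloudflared/<tunnel-id>.json
    ingress:
      - hostname: photos.example.com
        service: http://localhost:8080
      - service: http_status:404

    cloudflared tunnel run home

Only outbound connections from cloudflared to Cloudflare's edge; nothing listens on the box itself.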
I work as a security engineer and, yes, the CT logs are extremely useful not only for identifying new targets the moment you get a certificate but also for identifying patterns in naming your infra (e.g., dev-* etc.).
A good starting point for hardening your servers is the CIS Hardening Guides and the relevant scripts.
Fun anecdote - I wrote a new load balancer for our services to direct traffic to an ECS cluster. The services are exposed by domain name (e.g. api-tools.mycompany.com), and the load balancer was designed to provision certificates via Let's Encrypt for any hostname that came in.
I had planned to make the move over the next day, but I moved a single service over first to make sure everything was working. The next day, as I'm testing moving traffic over, I find that I've been rate-limited by Let's Encrypt for a week. I check the database and I had provisioned dozens of certificates for vpn.api-tools.mycompany.com, phpmyadmin.api-tools.mycompany.com, and so on down the list of anything you can think of.
There was no security issue, but it was very annoying that I had to delay the rollout by a week and add a whitelist feature.
On censys.io you can search by domain, for example. Some internet-facing appliances generate certificates automatically with Let's Encrypt but use a central DNS server, meaning every one of these appliances is on the same domain, using random subdomains.
Once you figure out what the domain is, you can easily build a list of IPs out of the cert transparency log, and if there is ever an exploit for this specific type of appliance, attackers now have a bespoke list of IPs to hack - a dream come true.
I don't see a solution for this particular use case; I would argue self-signed certs would be more secure here.
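E.g. a long-lived self-signed cert per appliance keeps the hostname out of CT entirely, at the cost of having to pin/trust it on the client side (hostname is a placeholder):

    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
      -keyout appliance.key -out appliance.crt \
      -subj '/CN=appliance.internal.example' \
      -addext 'subjectAltName=DNS:appliance.internal.example'   # -addext needs OpenSSL 1.1.1+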
Same! As soon as a new cert is registered for a new subdomain, I get a small burst of traffic. It threw me off at first; I assumed I had some tool running that was scanning it.
> The take-away is, secure your new stuff as early as possible, ideally even before the service is exposed to the Internet.
What? Ideally... before? Seriously? It is 2024, and this was true even decades ago - absolutely mandatory.
(Still remembering that dev who discovered file sharing in his exposed Mongo instance (yes, that!! :D), with no password set, only hours after putting it up... "but how could they know the host, it is secret!!" :D).