So a modem on a pine phone can survive the HN hug of death (and the number one post no less)? Am I missing something here? Because there's no way I believe that!
Since we're here, I did a quick grep through today's access log. It appears that your link got me 573 visitors to that article and 56 people clicking through to the other article linked therein.
My mind is blown that anyone writes software for stacks that max out at a few hundred requests per second.
Everything I've seen comparing "high level" languages says programmers are about equally productive whatever the language. So why isn't everything done in Go, C#, and Java, where 100,000 requests/second is trivial?
Even if you don't have much load, isn't the possibility of a cat DDoS'ing you by sleeping on someone's F5 key at least slightly concerning?
I swear, for half these companies claiming their site is "being DDoS'ed!", it's just somebody running HTTrack to archive an article they like.
A static site is really easy to host, and HN's hug of death isn't actually that intense; the site itself only gets something like six million views a day, only a fraction of those readers click any given link, and no syndicated page has much reach.
People badly overestimate what it takes to serve a website nowadays. I don't know whether it's just because we're all used to 10+ second website loads as our browser groans under the weight of all the trackers, or if we all have too much experience with web pages that run multiple 10ms+ unoptimized queries against databases that pull way too much data back, etc. and so we all think web pages are some sort of challenge, but webpages themselves really aren't that hard. It's all the stuff you're trying to put in to them that is the challenge.
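To put "not that hard" in perspective: a complete static-file server is a few lines of standard-library code. A minimal Python sketch, purely to illustrate how little a web page inherently requires (any real deployment would of course use nginx or a static host instead):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve(port: int = 8000) -> HTTPServer:
    # SimpleHTTPRequestHandler maps URL paths to files in the
    # current directory and sets Content-Type from the extension.
    # Port 0 asks the OS to pick a free port.
    return HTTPServer(("", port), SimpleHTTPRequestHandler)

if __name__ == "__main__":
    serve().serve_forever()
```

Everything past this point (databases, trackers, template rendering) is stuff you chose to add, not something the web itself demands.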
I hit Ctrl-Shift-R to watch the page reload under the network debugger so I could get a snapshot of the total size, which looks to be about 100 KB, just eyeballing the sums. So at 10 MB/s it should be able to serve around 100 requests/second, which is enough for most websites, even ones getting hugged.
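The back-of-envelope arithmetic, written out (the 100 KB and 10 MB/s are the eyeballed figures above):

```python
page_bytes = 100 * 1000       # ~100 KB per full page load (eyeballed)
link_bytes_per_sec = 10 * 1000 * 1000  # 10 MB/s of bandwidth

# Requests per second the link can sustain if bandwidth is the bottleneck.
reqs_per_sec = link_bytes_per_sec / page_bytes
print(reqs_per_sec)  # 100.0
```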
Reminds me of when my university switched from "CCnet" (all courseware/grading running on custom code written by an engineering professor, hosted on a tower under his desk) to WebCT/Blackboard.
The former worked fast and without any issues, but the latter just groaned and groaned and crashed.
During the transition, a prof asked the class which one to use and all 250 people said in unison “CCnet”.
“Its main virtues were the stark, no-nonsense interface and the incredibly efficient (although complex) database system that made CCNet ideal for running large numbers of courses with very little by way of server computing resources.”
I recently migrated a personal blog running on a hugo static site to digital ocean with automated github deployment, SSL certificates, CDN, and a custom domain. It took all of 15 minutes including updating the DNS on my custom domain registrar.
The same but with AWS S3 instead of a VPS is even simpler and avoids any maintenance / software updates. The bandwidth is more expensive but for a small personal site it shouldn't make much of a difference because you also save the price of the VPS.
I speak as a software engineer whose blog has hit the HN frontpage multiple times, and who has optimized it to be static & lightweight, just like the author. In my experience, a front-page HN post (in the top 3) leads to 30-40 page hits per second. The page at https://nns.ee/blog/2021/04/01/modem-blog.html weighs 44.4 kB transferred (HTML, CSS, images, fonts, etc., all compressed over HTTPS). So this is 11-15 Mbit/s of peak bandwidth, below the maximum throughput the author measured on the Pine phone modem (20.7 Mbit/s).
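For anyone checking the arithmetic, the peak figure follows directly from those two measurements:

```python
page_bits = 44.4 * 1000 * 8           # 44.4 kB per page load, in bits
low = 30 * page_bits / 1e6            # Mbit/s at 30 hits/s
high = 40 * page_bits / 1e6           # Mbit/s at 40 hits/s
print(round(low, 1), round(high, 1))  # 10.7 14.2 -> roughly "11-15 Mbit/s"
```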
And the bottleneck is going to be purely network. Not CPU. Not disk. Just serving a small set of files cached in RAM by the pagecache.
It's above the max, no? The author measured 20 Mbps between the phone's main OS and the rest of the world, but 10 Mbps between the phone and the modem (over the adb bridge).
My website has also hit the HN frontpage multiple times, and there was never any lag, even on a very cheap VPS. The websites that get the HN hug of death are often the ones requiring database access (e.g. WordPress).
But also, serving static content is more or less a matter of network connection speed. You can cache everything in RAM and write data to a socket very quickly. HTTPS makes things a little wonky because you can’t just sendfile() files directly into a socket but the overhead is still pretty minimal.
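A sketch of that plain-HTTP fast path using Python's socket.sendfile(), which delegates to os.sendfile() where the platform supports zero-copy. With TLS terminated in userspace, the data has to be read and encrypted before it hits the socket, so this shortcut doesn't apply (kernel TLS offload aside):

```python
import socket

def send_file_plain(conn: socket.socket, path: str) -> None:
    """Send a file over a plain (non-TLS) socket.

    socket.sendfile() uses os.sendfile() where available, so the
    kernel copies pagecache straight to the socket without the
    bytes ever passing through userspace; it silently falls back
    to a read/send loop on platforms without it.
    """
    with open(path, "rb") as f:
        conn.sendfile(f)
```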