
So a modem on a pine phone can survive the HN hug of death (and the number one post no less)? Am I missing something here? Because there's no way I believe that!


HN traffic isn't all that demanding[1]. Websites falling under the load are a testament to the poor quality of the software stack used to host them.

[1]: <https://xyrillian.de/thoughts/posts/latency-matters-aftermat...>


Hey look, it's my blog. It's a small world. :)

Since we're here, I did a quick grep through today's access log. It appears that your link got me 573 visitors to that article and 56 people clicking through to the other article linked therein.


What can I say. That's a good blog. :)


I was thinking of much larger numbers here.


One of my favorite bike shedding topics.

My mind is blown that anyone writes software for stacks that max out at a few hundred requests per second.

Everything I've seen comparing "high level" languages suggests programmers are roughly equally productive whatever the language. So why isn't everything done in Go, C#, and Java, where 100,000 requests/second is trivial?

Even if you don't have much load, isn't the possibility of a cat DDoS'ing you by sleeping on someone's F5 key at least slightly concerning?

I swear, for half the companies claiming their site is "being DDoS'ed!", it's just somebody running HTTrack to archive an article they like.


A static site is really easy to host and HN's hug of death isn't actually that intense; the site itself only gets like six million views a day, a fraction of those click on every link, and no syndicated page has much reach.


People badly overestimate what it takes to serve a website nowadays. Maybe it's because we're all used to 10+ second loads as our browsers groan under the weight of all the trackers, or because we all have too much experience with pages that run multiple 10ms+ unoptimized queries against databases and pull way too much data back. Either way, we've come to think web pages are some sort of challenge. They really aren't that hard; it's all the stuff you're trying to put into them that is the challenge.

I hit SHIFT-CTRL-r to watch the page reload under the network debugger so I could get a snapshot of the total size. Eyeballing the sums, it looks to be about 100KB, so at 10MB/s it should be able to serve around 100 requests/second, which is enough for most websites, even ones getting hugged.


Reminds me of when my university changed from "CCnet", which ran all courseware/grading on custom code written by an engineering professor, hosted on a tower under his desk, to WebCT/Blackboard.

The former worked fast and without any issues, but the latter just groaned and groaned and crashed.

During the transition, a prof asked the class which one to use and all 250 people said in unison “CCnet”.

“Its main virtues were the stark, no-nonsense interface and the incredibly efficient (although complex) database system that made CCNet ideal for running large numbers of courses with very little by way of server computing resources.”

https://www.utsc.utoronto.ca/technology/what-learning-manage...


We have a Rails app that struggles at 50 req/sec. It has an embedded Angular app whose back end easily handles 10,000.

One of the reasons we're trying to migrate is that the Rails servers cost us almost an engineer's salary, while our other backend is ~$200/month.


That's 10 mega bits per second, not bytes.


I recently migrated a personal blog running on a hugo static site to digital ocean with automated github deployment, SSL certificates, CDN, and a custom domain. It took all of 15 minutes including updating the DNS on my custom domain registrar.


The same but with AWS S3 instead of a VPS is even simpler and avoids any maintenance / software updates. The bandwidth is more expensive but for a small personal site it shouldn't make much of a difference because you also save the price of the VPS.


I'm using DO's free tier. The static site hosting may be a new feature. There is no VPS needed for it.

My AWS S3 skills weren't good enough to figure out the CDN + https. S3 static hosting without those was super simple though.


I find GitHub Pages even easier for that scenario.


Yes it can survive the HN hug of death.

I speak as a software engineer whose blog has hit the HN frontpage multiple times, and who has optimized my blog to be static & lightweight, just like the author. In my experience, a front-page HN post (in the top 3) leads to 30-40 page hits per second. The page at https://nns.ee/blog/2021/04/01/modem-blog.html weighs 44.4 kB transferred (HTML, CSS, images, fonts, etc, all compressed over HTTPS). So this is 11-15 Mbit/s of peak bandwidth, below the maximum throughput the author measured on the Pine phone modem (20.7 Mbit/s).

And the bottleneck is going to be purely network. Not CPU. Not disk. Just serving a small set of files cached in RAM by the pagecache.


It's above the max, no? The author measured 20Mbps between the phone's main OS and the rest of the world, but 10Mbps between the phone and the modem (over the adb bridge).


My website has also hit the HN frontpage multiple times and there was never any lag, even on a very cheap VPS. The websites that get the HN hug of death are often the ones requiring database access (e.g. WordPress).


Something in front of it, maybe? It reports nginx.

  $ curl --head 'https://nns.ee/blog/2021/04/01/modem-blog.html'
  HTTP/2 200 
  server: nginx
  date: Fri, 02 Apr 2021 13:28:26 GMT
  content-type: text/html
  content-length: 12937
  last-modified: Fri, 02 Apr 2021 09:40:40 GMT
  etag: "6066e698-3289"
  accept-ranges: bytes


The op mentions in another comment that he has nginx in front: https://news.ycombinator.com/item?id=26670500

No idea if he's caching there, though.


Could be. The statuspage from the device shows darkhttpd as the server, not nginx: https://nns.ee/blog/status.html


Looks like darkhttpd's website: https://unix4lyfe.org/darkhttpd/ is NOT hosted on a GPS/LTE modem and has been hugged to death.


Oh my. Sorry to who hosts that site!

(FWIW, I think the darkhttpd's website uses... Apache)


And if it turns out that Apache's website uses lighttpd?


I think I saw it mentioned (possibly also linked) in a comment on another article here today. So it's not just us, it's also us!


I see nginx there too.

  % curl -I https://nns.ee/blog/status.html
  HTTP/2 200 
  server: nginx
  date: Fri, 02 Apr 2021 14:48:48 GMT
  content-type: text/html
  content-length: 8754
  last-modified: Fri, 02 Apr 2021 14:48:31 GMT
  etag: "60672ebf-2232"
  accept-ranges: bytes


I meant the content of the page, which shows running processes on the system.


Currently can’t load the site.

But also, serving static content is more or less a matter of network connection speed. You can cache everything in RAM and write data to a socket very quickly. HTTPS makes things a little wonky because you can’t just sendfile() files directly into a socket but the overhead is still pretty minimal.


I think you can now: the kernel can handle TLS for you (kTLS)

  setsockopt(sock, SOL_TCP, TCP_ULP, "tls", sizeof("tls"));


HN itself runs on one core.


It's uncached database hits that crash sites.

You pretty much never need that, but many/most sites do it somewhere anyway.

A static site running on a modem should be fine, but there is nothing saying they can't/aren't using a CDN too, which would be even better.


See the first two words in the post: "No, really."

It was just used to catch attention.

But with a CDN in front it would likely work out.




