
2400 rps on this hardware for a hello-world application - isn't that kinda bad?

And we're trading performance for what, exactly? The code certainly didn't become any simpler.



It's only bad if you need to get more than 2000 rps

Which is only a small proportion of sites out there.


Yes, but it's running on pretty powerful hardware. Try this with 1 vCPU, 512MB RAM and a website that makes a lot of requests for a single page visit. Until recently I used to maintain some legacy B2B software, where each customer got their own container with very strict resource limits. A single page visit could cause 20-50 requests. Removing CGI was a significant performance win, even with a single user loading just one page.
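
For a rough sense of why that adds up, here is a back-of-envelope sketch in Python (all numbers are illustrative assumptions, not measurements from that system):

    # With CGI, every subrequest pays full process startup on a 1 vCPU box.
    subrequests_per_page = 30   # "20-50 requests" per page visit
    startup_ms = 40.0           # assumed fork/exec + interpreter startup per request
    handler_ms = 5.0            # assumed actual work per request

    cgi_cpu = subrequests_per_page * (startup_ms + handler_ms)   # ~1350 ms of CPU per page view
    persistent_cpu = subrequests_per_page * handler_ms           # ~150 ms with a long-lived process

    print(f"CGI: ~{cgi_cpu:.0f} ms CPU per page view; persistent process: ~{persistent_cpu:.0f} ms")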


If there are no other advantages, it is just bad.


It's boring technology that's supported everywhere, and starting the program from the beginning for every request eliminates a lot of the places where corrupted state can persist from one request to the next.
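
If it helps picture it, the model being described looks roughly like this (a minimal sketch of a classic CGI-style handler; the Python and the hello-world response are just for illustration):

    #!/usr/bin/env python3
    # The web server runs this script once per request, so each request
    # starts in a fresh process and no in-memory state can leak from one
    # request to the next.
    import os
    import sys

    def main():
        # Request data arrives via environment variables (and stdin for POST bodies).
        method = os.environ.get("REQUEST_METHOD", "GET")
        path = os.environ.get("PATH_INFO", "/")

        body = f"hello from pid {os.getpid()}: {method} {path}\n"
        sys.stdout.write("Content-Type: text/plain\r\n")
        sys.stdout.write(f"Content-Length: {len(body)}\r\n\r\n")
        sys.stdout.write(body)
        # The process exits here; whatever state it built up dies with it.

    if __name__ == "__main__":
        main()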


I'd rather not pay for 8 cores / 16 threads, though...


Depends on where you are shopping. I pay €211 every month for 96 threads and 384 GB of RAM (clustered) -- disks are small (around 1 TB each), but I'm still nowhere near 50% utilization there.


Yeah, I pay $400/month to not be bothered with any installation or upgrade drama, ever. The problem is that I only get 16 threads.


Drama is pretty much non-existent; but when it happens, it can be a day or two where things are in a not-great state. Backups help a lot here, to easily get back into a known working state -- practicing restoring from those backups is also a good exercise, so I don't mind it too much. There's nothing like discovering your backup was missing some component, and it's much better to learn that while the stakes are low.

I think the worst drama ever was a partial disk failure. Things kinda hobbled along for a while before actually starting to fail, and at that point things were getting corrupted. That poofed a weekend out of my life. Now I have better monitoring and alerting.


Cool, I could see doing this for some projects. Thanks for going into detail a bit!


I have installed Arch Linux on many servers, and they have been online for decades. There is no fuss. The only downtime was when I issued a reboot, and even then it was back up in 5 seconds, if not less.

So you really do not have to be bothered by installation or anything along those lines. You install once and you are fine. You should check out the Arch Linux Wiki, for example; it is pretty straightforward. As for upgrades, Arch Linux has NEVER broken. Not on my servers, and not on my desktop.

That said, to each their own.


Arch on servers is completely insane, unless you never update or are a maniac and update every few days


Why would you think it is insane? It works well and has had no issues for decades. Do you have any personal experience that suggests otherwise? I would love to hear it.

I will give you the benefit of the doubt that you are not just regurgitating what other people have been saying (IMO wrongly), which is: "Arch Linux for servers? Eww. Bleeding edge. Not suitable for servers." All that said, please do share. It will not negate those decades of no issues, however.

As I said, I have maintained quite a lot of Arch Linux servers running loads of services without any issues, for decades.


I used Arch for a few years on desktop (granted that was over ten years ago), and if I didn't update frequently enough, updates would routinely break. I would never use it on a server because of that. RedHat and Debian exist for a reason.


Interesting. I have never run into that issue. Sometimes I do not upgrade for months, yet everything works after I do. Are you sure it is not a matter of archlinux-keyring? You have to update it first. Plus, you have to check out their website because once in a blue moon, breaking changes happen.


For the record, I also ran Arch Linux on a desktop many years ago, and the only issue I ran into was related to not having updated archlinux-keyring before installing other updates. That is most likely solved, though, because it is no longer an issue: I sometimes see a new version of it, and upgrades seem to work without updating it first. To be honest, I still update archlinux-keyring first, just in case. Old habit.

Software is so advanced these days that tech SMBs can probably run Windows XP in production.


I paid about that amount once for an ex-lease server which has been in use since 2017. A DL380 G7 (24 threads, 128 GB) gives me all the freedom I want. A large solar array on a barn roof gives us negative energy bills, so power use is a non-issue. If you have the space for the hardware and can offset power use with solar or just dirt-cheap electricity, this might be a solution for you as well. There's plenty of off-lease hardware on the market which can run for many years without problems - in the intervening 8 years I have replaced one power supply (€20), that's it.


Yes. But you're pretty much limited to residential speeds. It's pretty hard to get a 10+ Gbps symmetrical connection in a residential area. Not impossible, but unlikely.

> It's only bad if you need to get more than 2000 rps

Or if you don't want to pay for an 8-core/16-thread box for the sort of throughput you can get on a VPS with half a core.


I'd argue it's bad even if you get more than 1000 Bq of requests. You never want to approach 100 % utilisation, and I'd aim to stay clear of 50 %.
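
That is the usual queueing argument: in a simple M/M/1 model, mean response time is the service time divided by (1 - utilisation), so latency blows up as you approach saturation. A small sketch with made-up numbers (the 5 ms service time is an assumption):

    # M/M/1: mean response time W = S / (1 - rho); latency explodes as rho -> 1.
    service_time_ms = 5.0   # assumed mean time to handle one request

    for rho in (0.3, 0.5, 0.8, 0.95, 0.99):
        mean_response_ms = service_time_ms / (1 - rho)
        print(f"utilisation {rho:.0%}: mean response ~{mean_response_ms:.1f} ms")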


It's not great, but it is enough for many use cases. It should even handle an HN hug of death.


The margin for error becomes tiny, though... A performance regression making some requests slow? Suddenly you don't handle even those 2k requests. And a denial-of-service attack doesn't even have to try hard.


I'm not convinced that run-time performance always comes out of the same pool as start-up overhead. If the regression is wait-based, because of resource contention or whatnot (very common), then it won't make the OS start up new processes more slowly, for example.

Sure, there are regressions that will make start-up overhead worse, but I mean, there will be pathological regressions in any configuration.


The problem is that 2400 rps is such a tiny number. You can accidentally DDoS yourself with a few browsers and a bug that retries a request over and over, and the whole service will melt down in fun ways before you can isolate it.

The thing limiting you to that number also isn't just the startup cost. If it were, you could just run more things in parallel. The startup cost kills your minimum latency, but the rps limit comes from some other resource running out: CPU, memory, context switching, waiting for other services that are themselves limited, etc. If it's CPU, which is very likely for Python, any little performance regression can melt down the service.
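
Roughly, when CPU is the bottleneck the ceiling is cores divided by CPU time per request (the numbers below are assumptions picked only to land near the figure above, not measurements):

    # Throughput ceiling when CPU-bound: cores / CPU-time-per-request.
    cores = 8
    cpu_ms_per_request = 3.3    # assumed, including per-request process startup

    ceiling = cores / (cpu_ms_per_request / 1000)            # ~2400 rps
    regressed = cores / (cpu_ms_per_request * 1.5 / 1000)    # ~1600 rps after a 1.5x CPU regression

    print(f"ceiling ~{ceiling:.0f} rps; after a 1.5x CPU regression ~{regressed:.0f} rps")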

Life is so much easier if you just have a somewhat performant base to build on. You can get away with being less clever, and mistakes show up as a tolerable bump in resource usage or response times rather than a fail-whale.


But why? What advantages are we getting?


Hypothetically, strong modularisation, ease of deployment and maintenance, testability, compatibility with virtually any programming language.

In practice I'm not convinced -- but I would love to be. Reverse proxying a library-specific server, or fiddling with FastCGI and its alternatives, always feels unnecessarily difficult to me.



