My cheap $20/month VPS serves tens of thousands of users per day without breaking much of a sweat, using the good old LAMP stack (Linux, Apache, MariaDB, PHP).
I don't know how many requests per second it can handle.
Trying a guess via curl:
time curl --insecure --header 'Host: www.mysite.com' https://127.0.0.1 > test
This gives me 0.03s
So it could handle about 30 requests per second? Or 30x the number of CPUs? What do you guys think?
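To see where that 0.03s actually goes, curl's `-w` flag can break the total into phases. A sketch — it uses a throwaway `file://` URL so it runs anywhere; against the real site you would point the same `-w` string at `https://127.0.0.1` as above:

```shell
# Break a request's wall time into phases with curl's -w timing variables.
# file:// is used here only so the sketch runs without a live server.
tmp=$(mktemp)
echo 'hello' > "$tmp"
curl -s -o /dev/null \
  -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' \
  "file://$tmp"
rm -f "$tmp"
```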
You need to do load testing to determine this. A request's wall-clock time includes many delays that are unrelated to the work the server does, so it's not as simple as 1/0.03: of that 0.03s, the actual server time might be 0.0001s, or it might be 0.025s. You also have to consider whether multiple cores are working, whether non-linear algorithms are running, or who knows what else.
Best way to figure it out is to use an application like Apache Bench from a powerful computer with a good internet connection, throw a lot of concurrent connections at the site, and see what happens.
I think it makes sense to test from the server itself, because otherwise I would be testing the network infrastructure too. While that is interesting as well, I first want to figure out what the server (VM) itself can handle.
Concurrency Level: 100
Time taken for tests: 1.447 seconds
Complete requests: 1000
Failed requests: 0
Requests per second: 691.19 [#/sec] (mean)
Time per request: 144.679 [ms] (mean)
Time per request: 1.447 [ms] (mean, across all concurrent requests)
Wow, that is fast. Around 700 requests per second!
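For reference, output like that comes from an invocation along these lines (`-n` total requests, `-c` concurrency; the host and header are the ones from the curl test above), and ab's reported figures are internally consistent:

```shell
# The ab run above corresponds roughly to (needs apache2-utils; shown as a comment):
#   ab -n 1000 -c 100 -H 'Host: www.mysite.com' https://127.0.0.1/
# Sanity-check the pasted figures with awk (floating-point math):
awk 'BEGIN {
  rps = 1000 / 1.447              # Complete requests / Time taken  -> ~691 req/s
  per_req_ms = 100 / rps * 1000   # Concurrency / req/s, in ms      -> ~144.7 ms
  printf "%.2f req/s, %.1f ms per request (mean)\n", rps, per_req_ms
}'
```

The two "Time per request" lines differ by exactly the concurrency factor: 144.7ms per request from one client's point of view, 1.447ms per request across all 100 concurrent clients.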
I guess you basically run a load test over a randomized or usage-weighted list of API endpoints for an increasing number of synthetic users and see when things start breaking. Many free tools can run these tests, even from your laptop.
A day is 16 * 60 * 60 = 57,600 seconds (night time subtracted). So tens of thousands of users per day is like 1-2 req/s, maybe 50 at peak time.
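Spelling that estimate out — assuming, say, 50,000 users making ~2 requests each (illustrative numbers, not from the post):

```shell
# Average request rate for "tens of thousands of users per day".
# 50000 users and 2 requests each are assumed figures for illustration.
awk 'BEGIN {
  secs  = 16 * 60 * 60            # waking seconds in a day = 57600
  users = 50000; reqs = 2
  printf "%d seconds, %.2f req/s average\n", secs, users * reqs / secs
}'
```

Traffic is bursty, of course, which is why the peak figure matters far more than the average when provisioning.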
What is more important is what kind of requests your server has to serve. Nginx can easily serve 50-80k req/s of static content, and into the 100k+ range if tuned properly.
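For the static-content case, "tuned properly" is mostly a handful of directives. A sketch with illustrative values, not a benchmark-verified config:

```nginx
worker_processes auto;               # one worker per CPU core
events { worker_connections 4096; }  # per-worker connection cap
http {
    sendfile    on;                  # copy file -> socket in kernel space
    tcp_nopush  on;                  # send full packets for sendfile responses
    keepalive_timeout 15;
    open_file_cache max=10000 inactive=30s;  # cache descriptors for hot files
    server {
        listen 80;
        root /var/www/static;        # illustrative path
    }
}
```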
> In practice, we currently see peaks of above 2000 requests every single second during prime time. That is multiple billions of requests per month, or more than 10 million unique monthly visitors. And all of this before actually serving images.
If I am reading that correctly, 2000 req/s does not include images, and it is unclear whether the $1500/month does.
I'm pretty sure that includes images; that's why people visit the site. Prime time happens when a very popular manga gets released, at around the same time every week.
Hosting static files isn't really that hard. I used to host a website that at its peak served around 1000 GB of video content in 24 hours. It wasn't the fastest without a CDN, of course, but it cost just 25 €/month.
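1000 GB over 24 hours sounds big but is a fairly modest sustained bitrate, which is why a cheap box copes:

```shell
# Convert 1000 GB/day into a sustained bitrate (decimal units).
awk 'BEGIN {
  gb   = 1000
  mbit = gb * 8 * 1000 / 86400     # GB -> Gbit -> Mbit, per second
  printf "%.0f Mbit/s sustained\n", mbit
}'
```

Well within a single gigabit uplink on average, though peaks will sit a lot higher than the mean.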
I wanted to start a discussion about how to estimate the number of requests a given server can handle per second. So when I read "x requests/s" I can put that into perspective.
But it seems you think I wanted to start a dick measuring contest?
If your question is genuine: I would serve images via a CDN. The above timing is for assembling a page by doing an auth check, a bunch of database queries and templating the result.