
60/hr is not the same as 1/min, unless you're trying to continually make as many requests as possible, like a crawler. And if that is your use case, then your traffic is probably exactly what they're trying to block.
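
To make the distinction concrete, here is a minimal sketch (illustrative only, not any particular site's actual implementation) contrasting a rolling 60-per-hour window, which tolerates bursts, with a strict 1-per-minute spacing, which doesn't:

    import time
    from collections import deque

    class WindowLimiter:
        """Allow at most `limit` requests within any rolling `window` seconds."""
        def __init__(self, limit, window):
            self.limit, self.window = limit, window
            self.hits = deque()

        def allow(self, now=None):
            now = time.monotonic() if now is None else now
            while self.hits and now - self.hits[0] >= self.window:
                self.hits.popleft()          # drop requests that fell out of the window
            if len(self.hits) < self.limit:
                self.hits.append(now)
                return True
            return False

    class SpacingLimiter:
        """Allow a request only if `interval` seconds have passed since the last one."""
        def __init__(self, interval):
            self.interval = interval
            self.last = None

        def allow(self, now=None):
            now = time.monotonic() if now is None else now
            if self.last is None or now - self.last >= self.interval:
                self.last = now
                return True
            return False

    # 60 requests arriving one second apart: the hourly window lets the whole
    # burst through, while the 1/min spacing admits only the first request.
    hourly = WindowLimiter(limit=60, window=3600)
    spaced = SpacingLimiter(interval=60)
    print(sum(hourly.allow(now=float(i)) for i in range(60)))  # 60
    print(sum(spaced.allow(now=float(i)) for i in range(60)))  # 1

A human reading in bursts fits comfortably under the windowed budget; steadily consuming it at 1/min for the whole hour is the crawler-shaped pattern such a limit is meant to catch.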





60/h is obviously well within normal human usage of an app and not bot traffic...

A normal rate limit to separate humans from bots would be something like 60 per minute, so a 60-per-hour limit is more than an order of magnitude too low.


Use case: crawling possibly related files based on string search hints in a repo you know nothing about...

Something on the order of 6 seconds a page doesn't sound TOO far out of human viewing range, depending on how quickly things load and how fast rejects are identified.

I could see ~10 pages/min, which is 600 pages/hour. I could also see the argument that a human would get tired at that rate and that something closer to 200-300/hr is reasonable.


All of that assumes they're limiting based on human-initiated requests, not the 100x requests actually generated when you click a link.
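
A back-of-envelope version of that caveat (the 100x fan-out per click is the hypothetical from the comment above, and 60/hr is the limit under discussion):

    # If the limiter counts every HTTP request rather than every human-initiated
    # page view, subresources (assets, API calls) consume the budget immediately.
    clicks_per_hour = 10 * 60        # ~10 pages/min, the estimate from upthread
    requests_per_click = 100         # hypothetical fan-out per click
    limit_per_hour = 60              # the limit being debated

    print(clicks_per_hour * requests_per_click)   # 60000 requests/hour reach the limiter
    print(limit_per_hour / requests_per_click)    # 0.6 -- fewer than one real page view/hour fits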


