How the hell is the scraper dev supposed to anticipate how poorly-written these particular views are with no backend knowledge? If not an automated scraper, a thundering herd from content gone viral would trigger the same result.
Scraping is not an intended purpose for most websites. Unless the website specifically states that scraping is an intended function, it is not reasonable to assume so. In fact, it may violate the terms and conditions of the given website.
If the law assumed that only intended functions are permissible, innovation would be a crime. By definition, innovation is finding new and unforeseen uses for resources.
You both make good points. If you make the law too strict, you punish reasonable uses of the website, like scraping a few publicly available pages to help users. If you make it too lenient, you permit denial-of-service (DoS) attacks.
It’s not easy to craft a law that will punish bad behaviour without blocking innovation.
Oh come on, you're trying to scrape the data out of a black box. You have no idea what their infrastructure is like, and for your purposes, you don't really care.
Of course, some common sense is more than welcome, but if my scraper, making one request every 2 seconds, knocks down your server, that's your fault, not mine.
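For what it's worth, the polite behaviour being argued about here is cheap to implement on the scraper side. A minimal sketch of a request throttle (the class name and 2-second interval are illustrative, not from any particular library):

```python
import time

# Minimal throttle sketch: enforce a minimum interval between requests.
# The default of 2.0 seconds matches the rate discussed above; the actual
# HTTP call is left to the caller.
class Throttle:
    def __init__(self, min_interval=2.0):
        self.min_interval = min_interval
        self._last = None  # monotonic timestamp of the previous request

    def wait(self):
        """Block until at least min_interval has passed since the last call."""
        now = time.monotonic()
        if self._last is not None:
            remaining = self.min_interval - (now - self._last)
            if remaining > 0:
                time.sleep(remaining)
        self._last = time.monotonic()

# Usage: call throttle.wait() immediately before each request.
# throttle = Throttle()
# for url in urls:
#     throttle.wait()
#     fetch(url)  # fetch() is whatever HTTP client you use
```

Using a monotonic clock avoids surprises if the system clock is adjusted mid-run; checking robots.txt before fetching is the other half of "some common sense", but that's a separate argument.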