You'd be surprised how few of the attacks I've personally seen vary that much. But yes, it happens. Good applications put their identifiers in their paths and ignore the querystrings, and most CDN/security providers let you configure their layer to ignore querystrings entirely.
Of course, this is precisely the attack that works on a search page, hence the advice above to be ready to put a captcha in front of search if you haven't already.
Cache anything GETable; for everything else you need to think about how to validate the good traffic (trivially computable CSRF tokens help) and captcha the rest.
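To make "trivially computable" concrete: one option is a token that is just an HMAC of the visitor's session id, so the edge can recompute and check it without touching the application or a database. The sketch below is Varnish VCL and assumes the third-party libvmod-digest module, a session cookie called "sid", the token being sent in an X-CSRF-Token header, and a placeholder secret — all of those are illustrative choices, not the only way to do it.

    vcl 4.0;
    import digest;  # third-party libvmod-digest (assumed installed)

    sub vcl_recv {
        if (req.method == "POST") {
            # The app issues token = HMAC(secret, sid); the edge recomputes it,
            # so validating it costs no application or database work.
            if (!req.http.X-CSRF-Token ||
                req.http.X-CSRF-Token != digest.hmac_sha256("change-this-secret",
                    regsub(req.http.Cookie, "^(.*; )?sid=([^;]*).*$", "\2"))) {
                # Unverified POSTs get bounced towards the captcha flow,
                # handled in vcl_synth (omitted here).
                return (synth(302, "To captcha"));
            }
        }
    }

The application has to mint the token the same way (same secret, same hex encoding), and you rotate the secret like any other credential.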
404s, 401s, etc. should cost the underlying server as little resource as possible, and their results should be cached at the applicable layer (404s at the edge, 401s internally, 403s at the edge if possible, and so on).
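With Varnish in front, for example, forcing a short TTL onto those negative responses is only a few lines; a minimal sketch (the 60 second TTL and the status list are arbitrary choices):

    vcl 4.0;

    sub vcl_backend_response {
        # Cache negative responses briefly so repeated probes for missing or
        # forbidden URLs are absorbed at the edge instead of hitting the backend.
        if (beresp.status == 404 || beresp.status == 403) {
            set beresp.ttl = 60s;
            return (deliver);
        }
    }

Returning deliver early also skips the built-in logic that would otherwise mark an uncacheable backend response as a hit-for-miss.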
Varnish is actually great here: you normalise the requests and retain only the querystring parameters that are valid for your application, filtering out (removing) all those that are not.
The key thing is that you know your application: you know what the valid keys are and the valid value ranges. If you can express that in your HTTP server and discard everything else, it can be done very cheaply.
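A minimal sketch of that normalisation in VCL, assuming the only parameters the application actually cares about are "page" and "thread" (substitute your own whitelist):

    vcl 4.0;
    import std;

    sub vcl_recv {
        # Sort parameters so ?a=1&b=2 and ?b=2&a=1 share one cache entry.
        set req.url = std.querysort(req.url);

        # Drop every parameter that is not on the whitelist, then tidy up
        # any separators left behind.
        if (req.url ~ "\?") {
            set req.url = regsuball(req.url, "&(?!(page|thread)=)[^&]+", "");
            set req.url = regsuball(req.url, "\?(?!(page|thread)=)[^&]+&?", "?");
            set req.url = regsub(req.url, "\?&", "?");
            set req.url = regsub(req.url, "\?$", "");
        }
    }

Whatever junk the attacker appends collapses back onto the handful of URLs you actually serve, so the cache hit rate stays where it should be.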
A forum really doesn't have that many distinct URL patterns, so this is easily done. It would be harder on a much more complex application, but the original question related to these smaller side-project applications.