That was one of my first thoughts, but I would assume that GAE spam-bot abusers would be smarter than that. If someone really were doing this kind of thing, surely Google would just block that particular PayPal account (along with banning the GAE user) - AFAIK valid PayPal accounts aren't easy to generate in large quantities. And I can't imagine it's a reaction to a GAE-based DDoS attack, as spotting that kind of pattern ought to be really easy and resorting to blocking URLs wouldn't be necessary.
To be honest, I suspect this is a bug rather than something deliberate; otherwise you'd have thought they'd have notified people.
Locally, this code reports 200 OK for all of those except the last, which reports a 302 redirect (to the https version, I presume). On Google, as you can see, they all fail with an internal exception.
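For what it's worth, the local check is essentially something like this (the PayPal URLs below are placeholders, not the exact list I tested):

    import http.client
    from urllib.parse import urlsplit

    # Placeholder URLs -- not the exact list referred to above.
    URLS = [
        "http://www.paypal.com/",
        "https://www.paypal.com/us/home",
    ]

    def raw_status(url):
        # A single plain request with no redirect following,
        # so a 302 shows up as a 302 rather than being chased.
        parts = urlsplit(url)
        conn_class = (http.client.HTTPSConnection if parts.scheme == "https"
                      else http.client.HTTPConnection)
        conn = conn_class(parts.netloc)
        conn.request("GET", parts.path or "/")
        status = conn.getresponse().status
        conn.close()
        return status

    for url in URLS:
        print(url, raw_status(url))

    # On App Engine, the equivalent urlfetch.fetch(url) call is what fails
    # with an internal exception instead of returning a status code.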
PayPal does have a universal Disallow in its robots.txt, so I thought I'd set up why.gd to do the same and see whether that was the trigger. But it still works fine.
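For reference, the universal Disallow I mean is just the standard two-liner:

    User-agent: *
    Disallow: /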
Not sure what can be done about this, but the dupe-detection URL matching misses a lot. I posted this five hours ago, with a one-character difference in the URL:
In that particular case, stripping off everything after the # would have helped. In general, though, it's practically impossible to defend against dupes, because anyone can frivolously add query strings: two URLs that differ only by a meaningless query string aren't treated as dupes, even though they link to the exact same page. There really isn't anything you can do about that case except error-prone heuristics.
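To illustrate the asymmetry with made-up URLs: the fragment never reaches the server, so it can always be dropped, while the query string is part of the request and can't be stripped blindly.

    from urllib.parse import urldefrag, urlparse

    # Made-up URLs for illustration only.
    a = "http://example.com/story?ref=rss#comments"
    b = "http://example.com/story"

    # The fragment never reaches the server, so dropping it is always safe.
    print(urldefrag(a).url)       # http://example.com/story?ref=rss

    # The query string *is* sent to the server, so stripping it risks
    # collapsing genuinely different pages into one.
    print(urlparse(a).query)      # ref=rss
    print(urldefrag(a).url == b)  # False: still not caught as a dupe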
Yeah, it's a difficult problem. In this case, it was a single character that I didn't notice. I'm not even clear on how it got there (or why it wasn't on the second submission URL).
Perhaps the "solution" is to ignore query strings... how many sites still use them to distinguish content? Alternatively, compare the content of the <head> tag on the linked page? That wouldn't be a perfect solution, but it would probably go a long way.
The hash is of course for linking to sections within the page, so I suspect the culprit is a link you clicked on the page itself before submitting the story. There are a couple of links with href="#" there, although they all have JavaScript event handlers that cancel the default action.
Do you have JavaScript disabled by any chance, or do you use some obscure browser?
In any case, I think HN should strip the hash and everything after it for dupe-detection purposes, but keep it in the submitted link in case someone actually wants to point to a specific spot in the page.
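A rough sketch of what I mean, assuming the dupe check just compares normalized keys:

    from urllib.parse import urldefrag

    def dupe_key(url):
        # Key used only for duplicate detection; the URL as submitted
        # (fragment and all) is what gets stored and shown as the link.
        return urldefrag(url).url

    # These two would now collide for dupe purposes:
    print(dupe_key("http://example.com/article#comments"))  # http://example.com/article
    print(dupe_key("http://example.com/article"))           # http://example.com/article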
To answer your question: query strings ("?foo=bar&a=b&c") are still widely used. Among other places, HN itself uses them. :) You also get one whenever you submit a form with GET.
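For example, any GET form ends up on a query-string URL; in Python terms (with made-up field names):

    from urllib.parse import urlencode

    # A GET form with fields q and page lands on a query-string URL.
    print("http://example.com/search?" + urlencode({"q": "hn", "page": 2}))
    # http://example.com/search?q=hn&page=2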
I honestly couldn't tell you how I got to the page (or what I clicked on once I got there), but it probably involved clicking a number of links.
I had forgotten that HN uses query strings to reference articles... d'oh. By now I figured everyone had adopted the URL-mapping approach. Anyway, detecting collisions based on the head tag still seems like a possibility.
You mean matching the title tag if the pre-query-string portion of the URL matches? That could certainly work. Maybe this is something to test with xirium's latest content scrape.
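Something like this, maybe - a rough sketch, with the regex-based title extraction standing in for a real HTML parser:

    import re
    from urllib.parse import urlsplit, urlunsplit
    from urllib.request import urlopen

    def base_url(url):
        # Pre-query-string portion: scheme + host + path only.
        parts = urlsplit(url)
        return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

    def page_title(url):
        # Crude <title> extraction; good enough for a heuristic.
        html = urlopen(url).read(65536).decode("utf-8", "replace")
        match = re.search(r"<title[^>]*>(.*?)</title>", html, re.I | re.S)
        return match.group(1).strip() if match else None

    def looks_like_dupe(new_url, existing_url):
        # Same base URL and same title => probably the same story.
        return (base_url(new_url) == base_url(existing_url)
                and page_title(new_url) == page_title(existing_url))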