The problem isn't bot traffic in general. I run an e-commerce site, and without a CAPTCHA scammers run Python scripts to test thousands of cards per hour. I hate it, my customers hate it, scammers hate it, but it's the only thing that keeps my merchant account running. Any advice is welcome!
Login forms are a whole other issue. "Lock out the account" is just a DoS vector. People are quick to talk about systems that can defeat a CAPTCHA, but if it slows a brute force from 50 passwords/sec to one password per 10 sec, that's mission accomplished.
Not easily: if it's enforced client-side it may as well not exist, and if it's enforced server-side you let anyone lock anyone else out of their account by running a constant brute-force attack against it (a DoS vulnerability). It also does nothing against attackers who try a giant list of accounts with only one or two passwords each.
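One way to capture the distinction in that comment: throttle the *source* of the attempts rather than locking the account. This is a minimal sketch, not anyone's production design; the class name, thresholds, and in-memory dict are all illustrative assumptions.

```python
from collections import defaultdict
import time

class PerSourceLimiter:
    """Budget login attempts per source IP within a sliding window.

    Unlike per-account lockout, a flood of bad guesses against one
    victim's account never locks the victim out, and an attacker
    spraying one password across thousands of accounts still burns
    through their own per-source budget.
    """

    def __init__(self, max_attempts=10, window=60.0, now=time.monotonic):
        self.max_attempts = max_attempts
        self.window = window
        self.now = now                       # injectable clock for testing
        self.attempts = defaultdict(list)    # source ip -> attempt timestamps

    def allow(self, source_ip: str) -> bool:
        cutoff = self.now() - self.window
        recent = [t for t in self.attempts[source_ip] if t > cutoff]
        self.attempts[source_ip] = recent
        if len(recent) >= self.max_attempts:
            return False                     # throttle the source, not the account
        recent.append(self.now())
        return True
```

Real attackers rotate IPs, so this alone isn't sufficient, but it shows why the two failure modes in the comment (lockout-as-DoS, and spraying many accounts) call for limits keyed to the attacker rather than the account.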
I worked on Google's system for solving this. It's a pretty sophisticated analysis that decides whether to demand CAPTCHAs or not based on a lot of factors. The CAPTCHA was needed (at that time) because it's the only way to slow down the attacker without bothering the real user, who won't be shown one.
Server-side, of course, but I would think the loading bar can be per connection rather than per account, right? A connection is attempted, starting the loading bar, and only 5s later do you allow that connection to continue the load. I do non-web dev work, so maybe I'm missing something, but it sounds like it should be easy enough.
Presumably the point is that the user (or bot) wants to access the content, so a connection would have to complete the load successfully to do what it came for. If it just drops instantly, that's a bot turned away.
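The per-connection delay described above can be sketched in a few lines. This is a toy illustration, not the commenters' actual implementation; the 5-second delay and the injectable `sleep` parameter are assumptions for demonstration.

```python
import time

DELAY = 5.0  # illustrative; tune to taste

def throttled_check(check_credentials, user, password,
                    delay=DELAY, sleep=time.sleep):
    """Attach the delay to this connection's attempt, not to the account.

    Every guess pays the delay before the credentials are even
    evaluated, so a brute force drops to one guess per `delay`
    seconds per connection, while the victim's account is never
    locked out.
    """
    sleep(delay)
    return check_credentials(user, password)
```

The obvious counter is opening many connections in parallel, so in practice you'd pair this with a cap on concurrent attempts per source.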
I'm helping a neighbor run a small e-commerce website with reviews. The review forms are being spammed by bots that get through even the CAPTCHAs, and the owner has to clean them up constantly. Without CAPTCHAs it would be unsustainable.
They don't get a lot of bots trying stolen credit cards, but mostly because they are pretty niche.
I can believe the study's results on user interaction, but its "security analysis" section (6.2) is deeply flawed: it only looks at the best bots, not the average ones. Meanwhile, as many other people in this thread can attest, (1) most bots are not very sophisticated and do get stopped by a CAPTCHA, and (2) the defense does not have to be 100% effective; as long as form spam goes from 100/day to 1/day, things are OK.
Of course, the authors really wanted to write their conclusion, so they just ignored all the practical considerations. It's really a shame on the part of the paper's reviewers.
My thinking is that these days, unsophisticated bots will be stopped by literally any effort, like a hidden form field that causes the form to be rejected if it's filled in. Almost nothing will stop sophisticated bots, and nothing will stop a boiler room of human solvers. That doesn't really leave much of a place for more sophisticated CAPTCHAs.
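The hidden-field trick mentioned above (often called a "honeypot field") is about as simple as anti-spam measures get. A sketch of the server-side half, assuming a form field named `website` that's hidden from humans with CSS, so only an autofilling bot ever populates it; the names here are arbitrary examples:

```python
def honeypot_triggered(form: dict) -> bool:
    """A real user never sees the hidden field, so any non-empty
    value means the form was filled in by a naive bot."""
    return bool(form.get("website", "").strip())

def handle_review_submission(form: dict) -> str:
    if honeypot_triggered(form):
        return "rejected"   # silently drop; don't tell the bot why
    return "accepted"
```

That's the whole mechanism, which is exactly the point: trivial to implement, stops unsophisticated bots cold, and does nothing against anything smarter.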
Sounds a heck of a lot like the bots are killing off these websites. We treat gross overuse of automated scraping as a fact of life, yet find individual misuse intolerable. What if I told you they were the same thing?
Regulate against inaccessible and privacy-invasive CAPTCHAs (done and done in the EU), and regulate against disruptive bot traffic (what's the hold-up there? Oh, I see, it requires actually competent law enforcement... that's a hard one). I'm only half-joking while standing on my high EU horse.
Thank you for that, though I'd rather put it as "the police will not put up with it," plus either "the politicians are too incompetent" or "they underestimate how much money there is in it."
Anonymous CAPTCHAs are fine so long as they're accessible to people with disabilities. I wouldn't venture to say I know of such a thing offered as a service, though...
Yeah, that's great until you're a European citizen who needs to access a government service while travelling in the US, or living in French Guiana, or any number of other exceptions to your clever idea.
If you travel to places unwilling to enforce basic rules of civility, you should be willing to suffer additional consequences, rather than having the whole of Europe continue to suffer because we're unwilling to do the right thing, which is to put an end to the wild west of the internet.
Countries unwilling to sign regulations that would have them lock up scammers, DDoS botters, and other toxic e-criminals, or unwilling to enforce their own laws where they exist, should not be allowed to keep polluting the internet at large. Block them until they learn their lesson.
We enforce the rules on our own citizens, so why are we tolerating this level of criminal traffic from China, India, and, worst of all, Russia, the country with which we are very much fighting a proxy war right now by funding Ukraine?
We can send missiles to kill Russians, but we can't cut them off from the internet at large? Really?
Internet access is not a human right, just as driving on the roads is not a human right, and terrible drivers get their licenses revoked.
Once regulations against disruptive bot traffic are effective enough that you don't need CAPTCHAs, by all means ban CAPTCHAs. But I don't see why you would do it the other way around. (Unless you secretly know that effective regulation of bot traffic is impossible.)
Oh, I do believe they're possible. It's just high time the units fighting organised crime expanded their scope to illicit online activities that cause real financial harm to entities smaller than the movie/music/publishing industries.
Large sites like Amazon or CNN can afford to eat the bot traffic. Smaller sites can't.