Naive question: how can clicking on the motorbike or traffic light image help to train an ML algorithm if they already know which images contain a motorbike? Otherwise the captcha would not make sense.
Maybe they show three images that already score >0.90 and one that scores only 0.40?
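That guess could be sketched roughly like this (purely speculative: `build_challenge`, the score thresholds, and the 3-plus-1 split are all made up for illustration, not reCAPTCHA's actual logic):

```python
import random

def build_challenge(scored_images):
    """Compose a hypothetical 4-tile challenge from (image_id, model_score) pairs.

    Three tiles the model is already confident about (score > 0.90) act as
    the actual test; one low-confidence tile (score < 0.50) is the image
    the system wants a human label for.
    """
    known = [img for img, score in scored_images if score > 0.90]
    uncertain = [img for img, score in scored_images if score < 0.50]
    grid = random.sample(known, 3) + random.sample(uncertain, 1)
    random.shuffle(grid)  # don't reveal which tile is the unknown one
    return grid
```

Under this scheme the user is graded only on the three known tiles, and their click (or non-click) on the fourth becomes a fresh training label.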
> Naive question: how can clicking on the motorbike or traffic light image help to train an ML algorithm if they already know which images contain a motorbike? Otherwise the captcha would not make sense.
It's more than just your answers that get fed into the ML, and more than just what others have already said: there's also the way your browser functions and the way you interact with it. Your IP address, browser, OS, screen size, input type, timezone and current time of day, how fast you select different images, etc. All of this gets fed into ML algorithms, and your answers to the obvious images are used to support or deny that ancillary information.
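The kind of ancillary signal vector described above might look something like this (field names and values are illustrative guesses, not Google's actual telemetry schema):

```python
from dataclasses import dataclass, asdict

@dataclass
class ChallengeSignals:
    """Hypothetical per-challenge side-channel features, as speculated above."""
    ip_address: str
    user_agent: str
    screen_width: int
    screen_height: int
    input_type: str                      # e.g. "mouse" or "touch"
    timezone_offset_min: int
    local_hour: int
    selection_latencies_ms: list         # time taken to click each tile

signals = ChallengeSignals(
    ip_address="203.0.113.7",
    user_agent="Mozilla/5.0",
    screen_width=1920,
    screen_height=1080,
    input_type="mouse",
    timezone_offset_min=-300,
    local_hour=14,
    selection_latencies_ms=[412.0, 230.5, 198.2],
)

# Flatten to a plain dict, as one might before handing it to a model
feature_row = asdict(signals)
```

The point is that the image answers are only one column among many; the behavioral features let the model judge how human the session looks overall.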
Hypothetically speaking, if they've got a 97% good ML model, they could implement a captcha where if you disagree with their model you have to do a second image, and a third image and so on. Then they could show each image to several different humans, and only if a bunch of people disagree with the model do they take a closer look.
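That escalation idea could be sketched as a tiny disagreement check (the threshold and quorum values here are invented for the sake of the example):

```python
DISAGREEMENT_QUORUM = 3  # hypothetical: how many humans must contradict the model

def needs_review(model_label, human_votes, quorum=DISAGREEMENT_QUORUM):
    """Flag an image for closer inspection when enough humans disagree with the model.

    model_label: the model's boolean answer (e.g. "contains a bus").
    human_votes: boolean answers collected from several different humans.
    """
    disagreements = sum(1 for vote in human_votes if vote != model_label)
    return disagreements >= quorum
```

A single user disagreeing just costs them another image; only a pile-up of disagreements across users would trigger a closer look at the label.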
Frankly a lot of the images I get are... kinda easy? This isn't the classic book-reading recaptcha where you could see why the text had confused the OCR.
I’m not sure. If I don’t click on one that is a bus it won’t let me go forward. It’s not like I click an “Ok, I’m done” button. I guess we could all delay clicking, and maybe it would give up and assume the unknown bus wasn’t really a bus after all?