
The whole aim of the AI was to make decisions like the recruiters did -- that is explicitly what the project set out to do. It might be worth reading the article, as it addresses your two ideas (the aim of the project, and the fact that the training set was indeed heavily male).


Hey. I did read the article. It doesn’t support the conclusion OP is drawing. The aim of the AI is to “mechanize the search for talent”. It doesn’t try to, and has no means to, make decisions “like the recruiters did”. Obviously machines don’t make decisions the way humans do. They’re trying to reverse-engineer an alternate decision-making process from the previous outcomes.


> The aim of the AI is to “mechanize the search for talent”. It doesn’t try to, and has no means to, make decisions “like the recruiters did”.

This is why AI is so confusing. All "AI" does is accelerate human decisions by taking humans out of the loop, so that speed and consistency are guaranteed. These systems are not replacements for human decision-making; they are replacements for human decision-making at scale.

If we can't figure out how to do unbiased interviews at the individual level, then AI will never solve this problem. Anyone who tells you otherwise is selling you snake oil.


> If we can't figure out how to do unbiased interviews at the individual level, then AI will never solve this problem. Anyone who tells you otherwise is selling you snake oil.

I wonder to what extent people actually want to solve it and, perhaps more importantly, whether it can be solved at all...


This is all happening before the interview, even. The AI, as far as I can see from the article, was just sorting resumes into accept/reject piles, based on the kinds of resumes that led to hire/pass results in the hands of humans.


So the recruiters may or may not have been biased, but if the previous outcomes were biased (because of the candidate pool), then the AI is sure to have been "taught" that bias.

Unless Amazon is willing either a) to train on a different pool of data or b) to accept that the data will yield bias and apply a correction, the AI is almost guaranteed to learn that bias.
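
A minimal sketch of that mechanism (hypothetical data, scikit-learn assumed; this is not Amazon's actual system): train a text classifier on past hire/pass labels and it will reward or penalize resume tokens purely by their correlation with past hiring, so a token like "womens" picks up a negative weight whenever the hired pool skewed male.

    # Toy classifier over synthetic resumes; labels are historical outcomes.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        "java backend captain mens chess club",    # hired
        "python infra lead mens rugby team",       # hired
        "java backend captain womens chess club",  # passed on
        "python infra lead womens rugby team",     # passed on
    ]
    labels = [1, 1, 0, 0]  # 1 = hire, 0 = pass: the skewed history

    vec = CountVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(resumes), labels)

    # The model has no concept of fairness; it just weights tokens by how
    # they correlated with past hires, so "womens" comes out negative.
    for token, w in zip(vec.get_feature_names_out(), clf.coef_[0]):
        print(f"{token:8s} {w:+.2f}")

Nothing in the training objective pushes the other way, so any correction has to be an explicit intervention: a different training pool, dropped proxy features, or a post-hoc adjustment.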


Yep, I agree: a skewed dataset is a poor basis for correcting an unequal distribution, and is likely to maintain or even amplify it.


Aren't the "previous outcomes" past hiring decisions though?


Yes, but you have to know what pool you started with. As an overly simplistic example, if a bank used historical mortgage-approval records from primarily German neighbourhoods to train an AI, it might become racist against non-Germans, even though that’s just an artifact of the demographics of the time. I think it just shows how not ready for prime time AI is.
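
To make that concrete, here is a toy simulation (made-up numbers, scikit-learn assumed, not any real bank's data): if the German neighbourhoods in the historical records happened to be wealthier, and the model cannot observe income directly, "neighbourhood" becomes a proxy for income and two otherwise identical applicants get different scores.

    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(0)
    X, y = [], []
    for _ in range(5000):
        german = 1 if random.random() < 0.9 else 0  # mostly German records
        # Demographic artifact of the era: German neighbourhoods in this
        # synthetic dataset are wealthier on average.
        income = random.gauss(55, 10) if german else random.gauss(45, 10)
        y.append(1 if income > 50 else 0)  # the bank's real rule: income only
        # The model never sees income itself, only neighbourhood plus a
        # noisy credit score, so neighbourhood carries residual signal.
        X.append([german, income + random.gauss(0, 8)])

    clf = LogisticRegression().fit(X, y)
    print("weight on neighbourhood:", round(clf.coef_[0][0], 2))  # positive

    # Two applicants identical except for neighbourhood:
    print(clf.predict_proba([[1, 50.0], [0, 50.0]]))
    # The non-German applicant scores lower, purely as an artifact of
    # the training demographics.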



