
Here's an example: mortgages (in the USA) used to be approved or denied by humans, but there were certain neighborhoods where only white people were allowed.

Now, there's a law against that.

In the future, there will be an AI system to approve or deny mortgages, based on historical training data. Since that data includes the redlining era, the AI will learn to make racist decisions.

Most people do not understand how it is possible for a computer to be racist. (Other than against all humans like in Terminator 2.) This is why it's "through the back door", because it's not obvious how it's possible or where it's coming from.
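The "back door" here is proxy discrimination: a model never shown race can still reproduce redlining, because features like zip code correlate with it. A minimal sketch, using entirely synthetic data and a deliberately naive toy "model" (not any real underwriting system), just to show the mechanism:

```python
# Toy illustration with synthetic data: a model that never sees race can
# still reproduce redlining if it learns from redlining-era decisions,
# because zip code acts as a proxy.
import random

random.seed(0)

# Synthetic "historical" applications. In this made-up setup, applications
# from zip 10001 were redlined (denied regardless of income), while zip
# 20002 applications were judged on income alone.
historical = []
for _ in range(1000):
    zip_code = random.choice(["10001", "20002"])
    income = random.gauss(60, 15)  # income in $1000s
    if zip_code == "10001":
        approved = False           # redlined: denied no matter what
    else:
        approved = income > 50     # judged on income
    historical.append((zip_code, income, approved))

def approval_rate(zip_code):
    """Historical approval rate for a given zip code."""
    outcomes = [a for z, _, a in historical if z == zip_code]
    return sum(outcomes) / len(outcomes)

def model(zip_code, income):
    """Naive 'learned' rule: trust the historical base rate per zip.
    Note there is no race feature anywhere in the inputs."""
    return approval_rate(zip_code) > 0.5 and income > 50

# Two applicants with identical income get different outcomes,
# purely because of where they live: the model inherited the
# redlining pattern from its training data.
print(model("10001", 70))  # False
print(model("20002", 70))  # True
```

This is of course a caricature; real models are subtler, but the failure mode is the same: any feature correlated with a historically biased decision can smuggle that bias into the model's predictions.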




"Since that data includes the redlining era, the AI will learn to make racist decisions."

This is a crude assumption.

AI researchers are well aware of these potential problems, and you (or the government) would have to provide evidence that these systems are racist before banning them.

The basic premise you're making is: "The world is unfair -> AI uses data from the real world -> the AI is racist".

Insurance actuaries already use an enormous amount of 'training data' in their work, and we don't have hugely material problems there.



