> An AI program called COMPAS has been used by a Wisconsin court to predict the likelihood that convicts will reoffend. An investigative piece by ProPublica last year found that this risk assessment system was biased against black prisoners, incorrectly flagging them as being more likely to reoffend than white prisoners (45% to 24% respectively). These predictions have led to defendants being handed longer sentences, as in the case of Wisconsin v. Loomis.
Also, I'd dispute your injection of the "more racist than humans" framing (which is also moving the goalposts a bit). The problem with racist algorithms isn't necessarily their "degree of racism" but the fact that they mask very real racism behind a veneer of false computerized "objectivity."
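For context on what "incorrectly flagging" means there: the 45% vs. 24% figures are group-wise false positive rates of the "high risk" label. Here's a minimal sketch of how that kind of comparison is computed (toy synthetic numbers, not the actual COMPAS data; the column names are hypothetical):

    # Minimal sketch (synthetic numbers, not the actual COMPAS data) of the kind
    # of comparison ProPublica made: the false positive rate of a "high risk"
    # label, broken out by group.
    import pandas as pd

    df = pd.DataFrame({
        "group":             ["black"] * 4 + ["white"] * 4,
        "flagged_high_risk": [1, 1, 0, 0, 1, 0, 0, 0],
        "reoffended":        [0, 1, 0, 1, 0, 1, 0, 0],
    })

    # False positive rate = share flagged high risk among those who did NOT reoffend.
    did_not_reoffend = df[df["reoffended"] == 0]
    fpr_by_group = did_not_reoffend.groupby("group")["flagged_high_risk"].mean()
    print(fpr_by_group)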
One example I can think of off the top of my head (statistics, not AI, though AI would allow the same thing) is actuarial pricing for home/car insurance: the quotes rely on risk data broken down by zip code, education level, income, and every other socioeconomic variable short of protected class itself, variables that often correlate with (or group by) protected class but are also reliable indicators of risk.
Depending on who you talk to, these algorithms either are or are not discriminating against protected classes "through the back door".
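To make the "back door" mechanism concrete, here's a minimal sketch (entirely synthetic data, hypothetical variable names): a pricing model that is never shown the protected attribute still ends up quoting one group higher, simply because zip-code-level risk correlates with that attribute.

    # Minimal sketch: a model trained only on facially neutral features still
    # reproduces the protected attribute's effect through a correlated proxy.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected class (never given to the model).
    protected = rng.integers(0, 2, n)

    # Zip-code risk score correlates strongly with the protected class
    # (e.g. via historical segregation), plus noise; so does income.
    zip_risk = 0.8 * protected + rng.normal(0, 0.3, n)
    income = rng.normal(50 - 10 * protected, 10, n)

    # "True" claim risk is driven by the same socioeconomic variables.
    p_claim = 1 / (1 + np.exp(-(1.5 * zip_risk - 0.02 * income)))
    claimed = rng.random(n) < p_claim

    # Train only on the facially neutral features.
    X = np.column_stack([zip_risk, income])
    model = LogisticRegression().fit(X, claimed)
    quotes = model.predict_proba(X)[:, 1]

    # Average predicted risk (i.e. quote) diverges by protected class anyway.
    print("mean quote, class 0:", quotes[protected == 0].mean())
    print("mean quote, class 1:", quotes[protected == 1].mean())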
Sure, but my point is that, while you could argue that decisions about some topics are discriminatory by definition, that has nothing to do with AI (and saying that AI is at fault is pure anti-AI FUD).
Parent mentioned that AI is used to sneak in discrimination through the back door, implying that discrimination wouldn’t be there (or there would be less) without AI.
Here's an example: mortgages (in the USA) used to be approved or denied by humans, and in certain neighborhoods they were only granted to white applicants.
Now, there's a law against that.
In the future, there will be an AI system that approves or denies mortgages based on historical training data. Since that data includes the redlining era, the AI will learn to make racist decisions.
Most people do not understand how it is even possible for a computer to be racist (other than against all humans, as in Terminator 2). That's why it's "through the back door": it's not obvious how it's possible or where the bias is coming from.
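A rough sketch of that failure mode (entirely synthetic data and hypothetical feature names, not any real lender's system): a model trained on redlining-era decisions denies a strong applicant purely because of the neighborhood flag.

    # Minimal sketch: a model trained on historical, redlining-style decisions
    # reproduces those denials even for a creditworthy applicant.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    n = 5_000

    credit_score = rng.normal(650, 80, n)
    redlined_neighborhood = rng.integers(0, 2, n)  # 1 = historically redlined

    # Historical rule: deny everyone in a redlined neighborhood, otherwise
    # approve mostly on credit score.
    approved = (redlined_neighborhood == 0) & (credit_score + rng.normal(0, 30, n) > 620)

    X = np.column_stack([credit_score, redlined_neighborhood])
    model = DecisionTreeClassifier(max_depth=3).fit(X, approved)

    # A strong applicant is denied purely because of the neighborhood flag.
    strong_applicant = [[780, 1]]
    print("approve?", model.predict(strong_applicant)[0])  # -> False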
"Since that data includes the redlining era, the AI will learn to make racist decisions."
This is a crude assumption.
AI researchers are well aware of these potential failure modes, and you (or the government) would have to provide evidence that these systems are racist before banning them.
The basic premise of your argument is: "The world is unfair -> AI uses data from the real world -> the AI is racist".
Insurance actuaries already use an enormous amount of 'training data' in their work, and we don't have hugely material problems there.
The OP mentioned "AI that makes opaque life-changing decisions". In that context, "through the back door" was more likely meant in the sense of "without anyone noticing".
It doesn't really matter whether there is "less" discrimination without AI. As long as AI isn't there, there is no discrimination from AI. If there is some after introducing AI, then it's a problem with AI.
Do you have any concrete examples of this, in particular cases where the use of statistics or AI enables more discrimination than human decision-making does?