One example I can think of off the top of my head (statistics rather than AI, although AI would enable the same thing) is that actuarial calculations for home/car insurance quotes rely on risk data broken down by zip code, education level, income, and any number of other socioeconomic variables. None of these is a protected class, but they often correlate with or group by protected class, and they are also reliable indicators of risk.
Depending on who you talk to these algorithms either are or are not discriminating against protected classes "through the back door".
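The "back door" effect is easy to demonstrate with a toy simulation. This is a hedged sketch with made-up numbers, not real actuarial data: `group` stands in for a protected class the pricing model never sees, and `zip_risk` for a zip-code-level risk score that happens to correlate with it.

```python
import random

random.seed(0)

# Hypothetical setup: the protected attribute `group` is never given to the
# pricing function, but the zip-level risk score correlates with it.
def make_applicant():
    group = random.choice(["A", "B"])
    # The proxy effect: zip-level risk differs by group on average.
    zip_risk = random.gauss(0.3 if group == "A" else 0.7, 0.1)
    return {"group": group, "zip_risk": zip_risk}

applicants = [make_applicant() for _ in range(10_000)]

def quote(applicant, base=500, loading=400):
    # Uses only zip_risk, never group -- "neutral" on its face.
    return base + loading * applicant["zip_risk"]

def mean_quote(group):
    quotes = [quote(a) for a in applicants if a["group"] == group]
    return sum(quotes) / len(quotes)

# Even though `group` never enters the calculation, average quotes
# differ substantially by group, because zip_risk proxies for it.
print(round(mean_quote("A")), round(mean_quote("B")))
```

Whether that gap counts as discrimination is exactly the disagreement: the model is facially neutral, yet its outputs sort by protected class through the correlated variable.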
Sure, but my point is that while you could argue that decisions on some topics are discriminatory by definition, that has nothing to do with AI (and blaming AI for it is pure anti-AI FUD).