The listed examples seem reasonably good.

    * systems which establish priority 
      in the dispatching of emergency services
    * systems determining access to or 
      assigning people to educational institutes
    * recruitment algorithms 
    * those that evaluate credit worthiness
    * those for making individual risk assessments
    * crime-predicting algorithms
While I'd also like to see autonomous military devices banned, banning AI that makes opaque life-changing decisions about individuals seems reasonable. We already say that these decisions shouldn't discriminate, and we've seen ways AI can allow discrimination through the back door.



I think the tradeoff is that at least the AI discrimination is systemized, and there's one place you can manipulate to reduce that discrimination, while with pre-AI human discrimination, it's not at one place, so it's harder to eliminate.

As an example, it's the difference between being rejected by a central agent for a loan, versus going to your local branch, and being rejected by a random employee at the local branch. It's obviously much easier to change the central agent than it is to change every distributed employee.

Now, whether this is actually the case in practice, and whether this is a good or bad thing is open to interpretation.


"I think the tradeoff is that at least the AI discrimination is systemized"

What does systematized mean in this context? The specific problem is that modern deep learning systems are unsystematic - they heuristically determine a result-procedure based on some goodness measure and this result-procedure is a black box.

You already have criteria-based algorithms for things like loans - the individual employees aren't making arbitrary decisions or just pen-and-paper calculations. You have a central algorithm now in a given bank, one that can be looked at and understood. The question is whether to go from that to an opaque, "trained" algorithm whose criteria can't be analyzed directly.
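To make that distinction concrete, here's a minimal, hypothetical sketch (made-up feature names and thresholds, not any bank's actual policy; assumes numpy and scikit-learn are available): an explicit rule whose criteria can be read directly, versus a fitted model whose decision procedure lives in learned weights.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def rule_based_decision(income, debt, missed_payments):
        # Explicit criteria: every threshold is written down and auditable.
        if missed_payments > 2:
            return "deny"
        return "approve" if debt / income < 0.4 else "deny"

    # A "trained" decision procedure: the criteria are implicit in fitted
    # weights and cannot be read off as rules the way the function above can.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))                  # stand-in application features
    y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)   # stand-in past outcomes
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
    print(model.predict(X[:1]))                     # a decision, but no stated criteria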


As far as I can tell the law does not prohibit algorithm-assisted decision making. So as long as there is a human rendering the final decision, we are good. Which seems to be a reasonable balance IMO.


*It's obviously much easier to change the random employee than it is to change the central agent.


Good? This list is basically a rehash of popular sci-fi dystopias. I bet setting it up did not involve much thought beyond "Oh yeah, I remember this was the premise behind Minority Report. Let's ban it"


> we've seen ways AI can allow discrimination through the back door.

Do you have any concrete examples of these, in particular where the use of statistics or AI enables more discrimination than using human decisions?


>> we've seen ways AI can allow discrimination through the back door.

> Do you have any concrete examples of these, in particular where the use of statistics or AI enables more discrimination than using human decisions?

Here's a concrete example:

https://towardsdatascience.com/racist-data-human-bias-is-inf...

> An AI program called COMPAS has been used by a Wisconsin court to predict the likelihood that convicts will reoffend. An investigative piece by ProPublica last year found that this risk assessment system was biased against black prisoners, incorrectly flagging them as being more likely to reoffend than white prisoners (45% to 24% respectively). These predictions have led to defendants being handed longer sentences, as in the case of Wisconsin v. Loomis.

Also, I'd dispute your injection of the "more racist than humans" framing (which is also moving the goalposts a bit). The problem with racist algorithms isn't necessarily their "degree of racism" but the fact that they mask very real racism behind a veneer of false computerized "objectivity."


One example I can think of off the top of my head (statistics, not AI, although AI would also allow it): the actuarial calculations for home/car insurance quotes rely on risk data by zip code, education level, income, and all sorts of other socioeconomic variables that exclude protected class, but that often correlate/group by protected class and are also reliable indicators of risk.

Depending on who you talk to these algorithms either are or are not discriminating against protected classes "through the back door".
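A toy illustration of that "back door" argument, with entirely made-up numbers and a hypothetical quote() formula: the pricing never sees group membership, but because a rated variable (zip-code risk) is assumed to correlate with it, average quotes still differ by group.

    import random

    random.seed(0)

    def quote(zip_risk, prior_claims):
        # Pricing uses only "neutral" inputs; protected class is never an argument.
        return 500 + 300 * zip_risk + 150 * prior_claims

    # Assumption for this sketch: group membership correlates with zip-code risk.
    people = []
    for _ in range(10000):
        group = random.choice(["A", "B"])
        zip_risk = random.betavariate(2, 5) if group == "A" else random.betavariate(5, 2)
        people.append((group, quote(zip_risk, prior_claims=0)))

    for g in ("A", "B"):
        prices = [p for grp, p in people if grp == g]
        print(g, round(sum(prices) / len(prices), 2))   # average quotes differ by group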


Sure but my point is that, while you could argue that decisions about some topics could be discriminatory by definition, that has nothing to do with AI (and saying that AI is at fault is pure anti-AI FUD).


Yes - and you've just proven the folly of this entire exercise -> those algorithms have nothing to do with AI!

If the government believes that credit risk systems cannot use 'race' as a factor, then they ought to reaffirm that.

They shouldn't be restricting broad use of a technology.


It's not that it enables more, it's that the AI is harder to fight and easier to excuse.

Just like when a clerk tells you "The computer won't let me."



Why does it have to be "more"?


Parent mentioned that AI is used to sneak in discrimination through the back door, implying that discrimination wouldn’t be there (or there would be less) without AI.


Here's an example: mortgages (in the USA) used to be approved or denied by humans, but there were certain neighborhoods where only white people were allowed.

Now, there's a law against that.

In the future, there will be an AI system to approve or deny mortgages, based off of historical training data. Since that data includes the redlining era, the AI will learn to make racist decisions.

Most people do not understand how it is possible for a computer to be racist. (Other than against all humans like in Terminator 2.) This is why it's "through the back door", because it's not obvious how it's possible or where it's coming from.
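Here's a hypothetical sketch of that mechanism on synthetic data (made-up features, assumes numpy and scikit-learn): the historical approvals encode redlining via a neighborhood flag, so a model fit on them learns to deny by neighborhood without race ever appearing as a feature.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    neighborhood = rng.integers(0, 2, n)    # 1 = historically redlined area (synthetic)
    income = rng.normal(50, 10, n)

    # Historical outcomes: redlined applicants were denied regardless of income.
    approved = ((income > 45) & (neighborhood == 0)).astype(int)

    model = LogisticRegression().fit(np.column_stack([neighborhood, income]), approved)

    # Two otherwise identical applicants who differ only by neighborhood:
    print(model.predict_proba([[0, 60], [1, 60]])[:, 1])   # approval probabilities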


"Since that data includes the redlining era, the AI will learn to make racist decisions."

This is a crude assumption.

AI researchers are well aware of these potentialities, and you (or the government) would have to provide evidence that these systems are racist before banning them.

The basic premise you're making is: "The world is unfair -> AI uses data from the real world -> the AI is racist".

Insurance actuaries already use an incredible amount of 'training data' in their work, and we don't have hugely material problems there.


The OP mentioned "AI that makes opaque life-changing decisions". In that context, "through the back door" was more likely meant in the sense of "without anyone noticing".

It doesn't really matter whether there is "less" discrimination without AI. Before AI is introduced, there is no discrimination from AI; if there is some after introducing it, then that is a problem with AI.


Presumably politicians fear an independent AI that would make technically correct but politically incorrect decisions in all of those categories.



