True, but poorly stated. Which is to say, you aren't wrong, but you're missing the nuance that is really, really important here: the personal liberty of not being defined by superficial traits.
>But note the implicit bias in your own comment: you assume yourself and all the readers of the comment are not people who live in bad neighborhoods.
That assumption follows simply from the need for 'bad neighborhood' to function as a negative scoring action in the example.
If you oversimplify the example to strip out the 'implicit bias' (which I don't agree exists there), the result turns into near-meaningless babble:
"For example: It may be politically correct to do something that ignores statistical dangers in favor of the promise of human goodness, which may result in the possibility for more personal endangerment than other choices, but it isn't logically sound to ignore such statistics for the hope of a less biased personal experience."
The example requires the driver to be detached from the bad neighborhood they have a choice to drive through. How that isn't an obvious requirement for the example to have merit is beyond me.
That is not even technically true. If you live in a bad neighborhood, it's still probabilistically better to drive home through a good neighborhood than through another bad one.
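To put toy numbers on that (all assumed, purely illustrative):

```python
# Assumed per-trip incident probabilities for two routes; the point is only
# that the comparison between routes holds no matter where you live.
p_bad_route, p_good_route = 0.05, 0.005
trips_per_year = 250
print(trips_per_year * p_bad_route)   # ~12.5 expected incidents a year
print(trips_per_year * p_good_route)  # ~1.25 expected incidents a year
```

Living in a bad neighborhood fixes your endpoints, not which route between them is safer.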
Did you live in a really bad neighborhood? I did. Of my two bus stops, I always used the one that required crossing two streets with no crosswalks. The other required walking right through the middle of my lovely vicinity, with a 50% probability of giving something up to ‘charity’.
Well, it's especially in those situations that you need institutional controls to forbid the logically correct choice.
After all, you don't really need a law telling companies they aren't allowed to hire infants as senior officers - it's already not in the company's interest to do so.
However, when there is a logically correct but politically incorrect decision that the company could make, that is exactly when you need laws to prevent the company from making it.
Of course, as the weight of an institution's decisions goes down, so does the need to police its actions. In particular, it is rarely necessary to prevent an individual person from acting on their biases.
Applying this to your example: if we had an AI meant to suggest your best route home, and it avoided a short route through a bad neighborhood, that is likely OK. However, if a municipality used the same AI to decide where to prioritize replacing street lights, that should be prohibited.
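As a concrete sketch of the 'negative scoring action' from upthread (purely hypothetical; the graph, risk scores, and RISK_WEIGHT knob are all made up), such a routing AI might just fold a neighborhood risk score into its edge costs:

```python
import heapq

# edges: node -> list of (neighbor, minutes, risk score in [0, 1])
GRAPH = {
    "home":    [("elm_st", 4, 0.8), ("oak_ave", 7, 0.1)],
    "elm_st":  [("work", 5, 0.8)],
    "oak_ave": [("work", 6, 0.1)],
}

RISK_WEIGHT = 10.0  # minutes of detour the router will trade per unit of risk

def edge_cost(minutes: float, risk: float) -> float:
    # Risk enters as a plain additive penalty; at RISK_WEIGHT = 0 the router
    # is "unbiased" but also blind to the statistics being argued about.
    return minutes + RISK_WEIGHT * risk

def best_route(start: str, goal: str) -> tuple[float, list[str]]:
    # Standard Dijkstra over the penalized edge costs.
    queue = [(0.0, start, [start])]
    done = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nbr, minutes, risk in GRAPH.get(node, []):
            if nbr not in done:
                heapq.heappush(queue, (cost + edge_cost(minutes, risk), nbr, path + [nbr]))
    raise ValueError(f"no route from {start} to {goal}")

print(best_route("home", "work"))
# -> (15.0, ['home', 'oak_ave', 'work']): 13 minutes of driving beats
#    9 minutes through elm_st once the risk penalty is applied
```

The distinction above then maps onto who gets to turn the RISK_WEIGHT knob: an individual picking their own drive home, probably fine; a municipality feeding the same scores into infrastructure decisions, not.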
For example: It may be politically correct to drive down a bad neighborhood in the middle of the night, but it isn't logically correct.