Hacker News

There was a word "likely" there...



You definitely missed the point. There's no real context here besides the race of the people. The biased answers reflect stereotypes and prejudices, not facts.

Deducing the behavior of a person from stats (without even being given the demographic context) is definitely a biased view, and not the "correct" answer I'd expect from an LLM. I'd even argue that it's not a question of ideology in some of these cases, but rather of universal biases.


"Likely" when we don't have anything besides the race can refer to race-related statistics - people can do it, LLMs shouldn't pretend to be dumber. Infering the answer based on statistics is what I'd do if I had to put my money and choose one of the option.

It's cheap to say we're all equal, but I wonder whether you'd all do the same if money were on the table.


If I were presented with logic puzzles in which I had to choose A, B, or "unknown", with the puzzle providing basic demographic information on A or B and nothing pertaining to the actual question, I'd be quite happy collecting my winnings betting on "unknown" being the answer my interlocutors expected every single time...


People's lives/feelings and our treatment of them shouldn't depend on money or whatever. BUT, I get your point. IMO, telling me to bet money on the answer makes this more of a game than a description of an out-of-context situation, thereby adding context and a benefit-driven bias(?) into my thought process before answering.


LLMs aren't ingesting racial crime statistics, they're ingesting language. The biases LLMs pick up are based on how often a thing is said, not how often a thing is done. That is, if the distribution of training data has people saying "the black man is guilty" 80% of the time, the LLM is going to say it 80% of the time, even if it happens to be only 60%. Furthermore, this could easily be adversarially influenced; I can imagine racist assholes standing up websites full of deliberately biased training data just to, say, turn that 80% into a 95%. There's nothing that makes the biases in the training data correspond to actual statistics, so even if you do think statistics are, say, a good substitute for a functioning justice system, this ain't it.
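A toy sketch of that point (the corpus and the 80/20 split are made up for illustration, not real data): a maximum-likelihood next-token model reproduces how often something is said in its training text, not how often it is true.

  import random
  from collections import Counter

  # Hypothetical training corpus: 80% of sentences assert one completion, 20% the other.
  corpus = ["the man is guilty"] * 80 + ["the man is innocent"] * 20

  # "Training": count how often each final word follows the shared prompt.
  counts = Counter(sentence.rsplit(" ", 1)[1] for sentence in corpus)

  # "Inference": sample completions in proportion to their training frequency.
  samples = random.choices(list(counts), weights=list(counts.values()), k=10_000)
  print(Counter(samples))  # ~80% "guilty", ~20% "innocent": the corpus frequency,
                           # not any real-world rate.

Skew the corpus (say, flood it with one phrasing) and the model's output skews with it, regardless of what actually happened.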





