
I guess you could. Although I'm not sure how exactly an algorithm can be inherently prejudiced, that sounds rather postmodern. You can make one that is inefficient or apply one that isn't suitable to a given data set, and end up with skewed results. But an algorithm promoting prejudice by itself? I don't know. I always thought mathematics was neutral.

Either way, it absolutely does not apply here. HN's algorithms sinking sexism-related stories is a side effect. The fact of the matter is that they will sink any story with an imbalanced upvote-to-comment ratio, whether it's about sexism, Haskell, or the Church of the SubGenius.
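
HN's actual ranking code isn't public, but a topic-blind penalty of the sort being described might look something like this (Python; the names, weights, and threshold are made up for illustration):

    def penalized_score(upvotes, comments, age_hours,
                        gravity=1.8, flame_threshold=1.0):
        # Hypothetical sketch -- the real formula isn't published.
        # Classic "points decayed by age" base score:
        base = (upvotes - 1) / (age_hours + 2) ** gravity
        # Penalize stories with more comments than upvotes,
        # without ever inspecting what the story is about:
        ratio = comments / max(upvotes, 1)
        if ratio > flame_threshold:
            base /= ratio
        return base

    # Same inputs, same penalty -- sexism, Haskell, or SubGenius:
    penalized_score(upvotes=50, comments=200, age_hours=3)   # sunk
    penalized_score(upvotes=200, comments=50, age_hours=3)   # unaffected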

This is the same logic as saying that encryption is evil because terrorists can use it to hide information. Encryption always does the same thing; it's fundamentally neutral, regardless of what's being encrypted. In the same way, HN's algorithms are fundamentally neutral, regardless of what happens to trigger them.



I think you're overcomplicating it by insisting on isolating the algorithm from its owner and environment. No-one else was doing that.

If you don't want to apply a label to the algorithm, how about "Foo Bar's decision to run Algorithm X against Dataset Y resulted in consequences that predictably increased prejudice, so it would have been better if he hadn't done it"?

If that's okay, note that this is pretty close to what parfe said -- he's clearly upset at the fact that pg chooses to run an algorithm on Hacker News, rather than at the algorithm itself.


I won't comment on the efficiency of the algorithm (although it is questionable).

Let's assume that it's a bad design decision. Your logic for calling it prejudiced is still faulty, because it misattributes the cause.

The algorithm merely shuffles page rankings based on two criteria, upvotes and comments, both of which are user-supplied. The algorithm does not support prejudice; it simply behaves the way it is programmed to. You're completely ignoring all of the mundane stories that sink for the same reasons, and cherry-picking the ones about sexism.


The algorithm also uses flags. Some people flag the sexism articles once the wingnuts arrive, because by then the threads are effectively useless.


Along a slightly different track than the one you were originally running on, there are a number of ways in which an algorithm could promote prejudice. Consider a nice little machine-learning algorithm that analyzes characteristics of successful employees at a given company in order to select new hires that share those characteristics. Suppose the company had a couple of rabid barbecue-lovers who made sure that all lunches primarily featured delicious pork ribs, and who made fun of the vegetarians, Jews, Muslims, and people with braces who couldn't chew the meat off the ribs. Being a rabid lover of pork would then be strongly correlated with success in the company, while being a non-pork-lover, or hampered by braces, would be strongly correlated with simply leaving at an early stage for the rival across the street with vegetarian and shrimp satay lunch options. Well, it's just the algorithm at work, my friend! Math is truth! As a mathematician, I guarantee it.
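
To make the toy concrete, here's roughly what that looks like in code (Python; the data and the feature are obviously invented, and the "learning" is just counting):

    from collections import Counter

    # Invented hiring history, biased by the rib lunches:
    past_hires = [
        {"loves_pork_ribs": True,  "stayed": True},
        {"loves_pork_ribs": True,  "stayed": True},
        {"loves_pork_ribs": True,  "stayed": True},
        {"loves_pork_ribs": False, "stayed": False},  # left for the satay place
        {"loves_pork_ribs": False, "stayed": False},
    ]

    # "Learn" P(stayed | loves_pork_ribs) by counting, nothing more:
    stats = Counter((h["loves_pork_ribs"], h["stayed"]) for h in past_hires)

    def predicted_retention(loves_pork_ribs):
        stayed = stats[(loves_pork_ribs, True)]
        left = stats[(loves_pork_ribs, False)]
        return stayed / (stayed + left)

    predicted_retention(True)   # 1.0 -> hire
    predicted_retention(False)  # 0.0 -> reject

The arithmetic is perfectly neutral; the prejudice rides in on the data.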

Time for dinner. heh.


Once again, this isn't the algorithm being inherently prejudiced. It does what it's meant to do, but it's being applied to a prejudiced and unreliable data set.

Don't blame the algorithm for the fallacies of the people using it. The only fault in your scenario is the PEBCAK.


Although I'm not sure how exactly an algorithm can be inherently prejudiced, that sounds rather postmodern.

All known halting program/input pairs are inherently a little bit prejudiced.



