At the risk of being censured for a "shallow dismissal", I read here a very long-winded and partial call to prioritise, within a technical field, the suggestions of graduates of "the university of life" over those of people with technical expertise ("book learning").
What it implies - as these things often do - is that those with a rationalist approach leapt somehow fully-formed into adult life without experiencing anything negative that might have helped build character and "wisdom".
Ironically, the studies it draws on that reveal bias are not "afro-feminist" folk-wisdom but people doing actual academic research.
That work may well, in some cases, be motivated by the researchers being from particular backgrounds - e.g. marginalised communities - and it will clearly be informed by the lived experience of people from diverse backgrounds. But surely, in a respectable field, its worth will always be measured by what it brings to the discussion that is actually new and verifiable, not by whatever moral-authority-by-demographic-disadvantage the researcher claims?
Its worth is measured in the degree to which it transfers money, resources, and power from people like you to people like the author, and words arranged like those that appear in this document have proven to be an effective tool for this.
I can get halfway to agreement with this abstract.
One concrete example of algorithms run amok is music. Autotune and gridding externalize human creativity and crush it. Yes.
But deciding that algorithms and the Western culture that begat them, as such, are the problem seems a tad peevish.
One wonders at the direction of any of this thinking. It seems incapable of any positive outcome. The desire for 'positive' outcomes, itself, seems likely to come under attack for employing the English language, the Roman alphabet, and daring to insinuate there is anything amiss with what my strange new overl(ord)s are dictating.
That is interesting. I did not read the article as a plea to avoid algorithms or 'Western culture' in general. I interpreted it more as suggesting that ideas like "bias", "data" and "rationality" do not exist outside of context, politics and history (data cleaning is typically the phase where this becomes obvious to many researchers), and that this should be considered even when a framing like "X is the problem and Y is the solution" makes an algorithm, or its debiasing, look like a straightforwardly positive outcome.
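To make the data-cleaning point concrete with a small invented example (the field names, records, and cleaning rule below are mine, not from the article): even a routine, seemingly neutral step like "drop rows with missing values" quietly changes whose records remain in the dataset.

```python
# Sketch only: a "neutral" cleaning step -- dropping rows with a missing
# income field -- silently shifts the demographic makeup of the dataset.
# All records and field names here are invented for illustration.
records = [
    {"group": "A", "income": 52_000},
    {"group": "A", "income": 61_000},
    {"group": "A", "income": None},   # missing value
    {"group": "B", "income": None},   # missing value
    {"group": "B", "income": None},   # missing value
    {"group": "B", "income": 38_000},
]

def group_shares(rows):
    """Fraction of rows belonging to each group."""
    counts = {}
    for r in rows:
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    total = len(rows)
    return {g: n / total for g, n in counts.items()}

# The cleaning rule itself, applied mechanically.
cleaned = [r for r in records if r["income"] is not None]

print(group_shares(records))  # {'A': 0.5, 'B': 0.5}
print(group_shares(cleaned))  # A: ~0.67, B: ~0.33 -- the rule was not neutral
```

Nothing in that snippet is exotic; the point is only that the choice of rule, not the arithmetic, is where the judgement lives.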
I feel obligated to acknowledge that I'm sympathetic to the author's concerns here.
But I think she's neglecting the important reason why what she calls a "rational" approach is so popular: it's the only kind of approach that generalizes across different cultures. When you identify an object detection model that tends to perform poorly on people with dark skin, researchers in Dublin, Tokyo, Beijing, or Lagos can understand and agree with what you've found. If you want to find ways to reduce this bias, they'll all be able to help you look, and you can help them with their challenges in turn.
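To make that concrete with a toy sketch (the groups, labels, and numbers are invented, not taken from any of the studies the article cites): a per-group error rate is just counting, which is exactly why a lab anywhere can recompute it and agree on the gap.

```python
# Sketch only: per-group error rates for a binary classifier, on invented data.
from collections import defaultdict

def per_group_error(y_true, y_pred, groups):
    """Return the error rate of predictions within each group."""
    errors, counts = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        errors[g] += int(t != p)
    return {g: errors[g] / counts[g] for g in counts}

# Toy data in which predictions are noticeably worse for group "B".
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(per_group_error(y_true, y_pred, groups))
# {'A': 0.25, 'B': 0.75} -- a gap anyone with the data can recompute
```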
When you move to her "relational" approach, and start talking about "personhood, data, justice, and everything in between", those researchers are going to have wildly divergent and often irreconcilable views. Even two researchers in the same city might struggle to work together on these terms, if they come from different subcultures with different views about how the world should be. The possibility of cross-cultural collaboration is a big thing to give up, and to most people (including myself) it's really a core requirement for any scientific research practice.
> it's the only kind of approach that
> generalizes across different cultures.
I agree that the "rational" approach's mobility is very interesting and powerful.
The general point from science and technology studies would be that, to move research results around and get this agreement, there needs to be a vast infrastructure that standardizes what rationality depends on, and even then scientific replication, for example, can be quite difficult (Harry Collins's account of the attempts to replicate the building of an early TEA laser is a case in point).
The point specific to the paper is that determining what you assume to be bias and what you assume to be unbiased is already a political and ethical decision that cannot be 'rationally' decided from within the research.
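A small invented illustration of that point (the two fairness criteria and the toy numbers are my choice, not the paper's): the same classifier can pass one standard fairness check and fail another, and nothing inside the computation tells you which check defines "the bias".

```python
# Sketch only: the same per-group confusion counts look "fair" under
# demographic parity and "unfair" under equal opportunity. Toy numbers.

def selection_rate(tp, fp, fn, tn):
    # Share of the group that receives the positive decision (demographic parity).
    return (tp + fp) / (tp + fp + fn + tn)

def true_positive_rate(tp, fn):
    # Share of qualified members who are correctly accepted (equal opportunity).
    return tp / (tp + fn)

# Per-group confusion counts: (TP, FP, FN, TN)
group_a = (8, 2, 2, 8)
group_b = (9, 1, 6, 4)

print(selection_rate(*group_a), selection_rate(*group_b))   # 0.5 vs 0.5 -> "fair"
print(true_positive_rate(group_a[0], group_a[2]),
      true_positive_rate(group_b[0], group_b[2]))            # 0.8 vs 0.6 -> "unfair"
```

Deciding whether demographic parity or equal opportunity is the relevant notion here is exactly the kind of judgement that sits outside the 'rational' machinery.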