> The little-known group was thrust into the spotlight this week by reports about a 2012 experiment in which the news feeds of nearly 700,000 Facebook users were manipulated to show more positive or negative posts. The study found that users who saw more positive content were more likely to write positive posts, and vice versa.
Ok, this seems like it should be about as controversial as a supermarket that plays downbeat music and tests to see if sales decrease.
That's not to say that the study wasn't manipulative, involuntary, and barely consented to, but hey, that's Facebook in a nutshell.
The tone of this article is "these studies happened with no oversight!" but it never suggests who would oversee Facebook, except Facebook, which doesn't really seem like oversight at all.
Quite a few conferences now require that experiments involving human subjects be reviewed by an ethics committee of some kind before being carried out. Universities have such committees, and they do oversee themselves pretty effectively.
So no, I do not think that Facebook overseeing its own researchers is so far-fetched. The ethical review board would probably consist of a combination of lawyers, PR people, and people with research backgrounds. The lawyers and PR folks would not have seen this experiment as a "great opportunity for research that was previously impossible" so much as "lawsuit bait" and a "PR nightmare."
That's a good point. It is possible for a panel of professionals to qualify and limit the kinds of work that are ethical. That's what ethics are! Accountants, lawyers, and doctors are all good at that.
The critical difference, in my mind, is that academic research takes place more or less in the public sphere. Even when it's not, an academic can always be counted on to challenge and discredit a fellow academic.
An in-house council on research practice could exist, but it would take a group of professors with reputations on the line. And, to return to my main point, Facebook's central business practices revolve around manipulating users in ways that are even _less_ concerned with the user's well-being.
Sometimes there's no obvious solution to a problem and all you can do is call attention to the problem. This is definitely one of those cases: The issue isn't that there's some obvious source of oversight that was disregarded - the issue is that dangerous and potentially harmful experimentation is carried out without oversight. Maybe sufficient oversight is impossible in these scenarios, but if so it should be an informed decision made after long, thoughtful consideration, probably at something approaching an industry level.
First, the article mentions that Facebook does have multiple levels of internal oversight. I really don't think this kind of research can be externally regulated, and attempting to do so would just discourage publication of the findings. Businesses do controlled experiments on their customers all the time, with the most relevant example being A/B testing. The danger is extremely limited, and it seems like overreaching to have some sort of international IRB for all experiments that online businesses may need to conduct to optimize the content displayed to their visitors. Publishing research is a good thing.
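For the unfamiliar: an A/B test is usually just deterministic bucketing plus a metrics comparison afterwards. A minimal sketch in Python (the `assign_bucket` helper and the experiment name are hypothetical illustrations, not anything Facebook actually runs):

```python
import hashlib

def assign_bucket(user_id: str, experiment: str,
                  buckets=("control", "variant")) -> str:
    # Hash the experiment name together with the user id so the same
    # user always lands in the same bucket, with no stored state.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# Users in "variant" see the alternate content; "control" sees the
# site as-is, and the two groups' metrics are compared afterwards.
print(assign_bucket("user-42", "homepage-copy-test"))
```

That's the machinery behind the "controlled experiments" I mentioned: the emotion study differs in what was varied, not in how.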
Quotes in the article from research team employees suggested the opposite: that experiments were often run without anyone else on the team even knowing about it. That doesn't sound like oversight to me.
I'm not a big fan of Facebook, but they are absolutely in the clear on this one IMO. Your music analogy is spot on. WSJ needed an article with the keyword "Facebook" in it so that they could get more visitors to their site.
I doubt that anyone outside of the authors of these articles is actually upset over this, but if there are such individuals, they are just the normal cadre of people who are upset because they like being upset and have nothing better to do. Anyone legitimately bothered by this should go join a parents group and start writing letters to TV execs about how they and their children were scarred for life by the last wardrobe malfunction they saw on live TV.
My grandma died a few months ago, so my feed was pretty depressing, with talk of funeral arrangements and things like that. The fact that they might have made my feed even more depressing, when I was already hurting, just to run an experiment without my consent is pretty fucking infuriating.
The key here is that the majority of the content on your Facebook feed is what your friends and family post. Your friends' and family's posts have massive meaning in your life, and you will take on board their sentiment far more than, say, an ad's.
The meaning behind the music in a supermarket is negligible - you don't give any weight to songs played in a supermarket other than "I like this song" or "this song is annoying".
In a sense this whole fiasco is a good thing. The feed, for me, has now become less attractive as a "news source" for what my mates are up to. And that can only be a good thing. I dislike how much everyone (including myself) has taken to Facebook as a means of friends/family communication.
And this is the key issue - Facebook is not some buff actor telling you that you're inadequate if you don't buy a certain aftershave. Facebook is the lens through which people see a significant and important portion of their world. Friends, family, etc.
Artificially adjusting what is seen through that lens with the express intent of modifying someone's mood is horrifying by itself. It's even less acceptable when it is done without consent.
I'm very surprised that so many HN commenters seem unable to see, or unwilling to accept, this crucial difference.
Your analogy doesn't make sense because Verizon doesn't select which calls you receive. A phone company that drops any calls (positive or negative) is just defective. Facebook always curates what they show on your newsfeed. The neutral case where the experiment isn't being run still involves Facebook picking and choosing what to show you.
If you want an analogy relating to socializing, how about a dating site experimenting with showing you more or less attractive people? Is that terrible and unethical?
I'm glad you said this, because the kneejerk reaction is always "how dare they!" when this type of invasive data mining and experience A/B testing has happened in myriad industries for decades.
As you say, what "oversight" is required of a corporation doing passive demographic or behavioral studies of its users/customers? Should we go nuts if we see Google Analytics or Omniture tags on any given Web site?
Suppose they did. Would that be unethical? That's essentially their business model. If optimizing their business model is unethical then their business model must be unethical at its core.