Informed consent is a bedrock of ethical psych research. If a lowly undergrad is expected to get consent before administering any sort of dinky survey, why is Facebook exempt? They created a study which purposely manipulated the emotional state of their customers in order to publish results in a scientific journal. So Facebook doesn't have to give a shit about ethical research because they're Facebook? I don't understand how people here can't grasp why these protocols are in place, or why others see a problem when they are ignored. It was completely unethical.
Eh, so someone made up a bunch of rules and stamped "ethics!" on them, and now, completely independent of the original justification, it's somehow the law of the land. But, at least as applied here, it's pretty bogus.
Think of it this way: there's some sort of algorithm that controls what goes on your Facebook feed. There has to be, basically, or else they'd just have to show you everything. The algorithm takes sentiment analysis into account as part of making those decisions, which seems perfectly reasonable. So, just by building something that interacts with users and affects their emotions, FB is already running a completely uncontrolled experiment. If they show negative stories vs positive stories, what happens? So far, they just don't know, which seems really bad. Well, to find out the consequences of what they actually do, they have to dial some knobs up and down and see the effects. If they can't try things and find out what happens, how do you expect them to do anything? Just guess? Even worse, if they just guess, they're still experimenting, but now with way less information.
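To make the "knobs" concrete, here's a toy sketch of what a sentiment-weighted ranking step might look like. The stories, scores, and the single positivity_weight knob are all made up for illustration; nobody outside FB knows what the real ranking does:

    # Illustrative only: a toy ranking function with one "knob" (positivity_weight).
    # The stories and sentiment scores are invented; this is not Facebook's algorithm.
    def rank_feed(stories, positivity_weight=1.0):
        # Each story is (text, base_score, sentiment), with sentiment in [-1, 1].
        def score(story):
            text, base_score, sentiment = story
            return base_score + positivity_weight * sentiment
        return sorted(stories, key=score, reverse=True)

    stories = [
        ("friend's vacation photos", 2.0, +0.8),
        ("angry political rant", 2.5, -0.7),
        ("neutral news article", 1.5, 0.0),
    ]

    # "Dialing the knob": the same stories, ranked under two different settings.
    print(rank_feed(stories, positivity_weight=0.0))  # baseline ordering
    print(rank_feed(stories, positivity_weight=2.0))  # favor positive items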
This is completely normal, and every business does it. They have to answer questions about what effect the things they do actually have, or get stuck not doing anything. A huge part of what businesses do is interact with people, so a huge part of that effect is the impact on their customers' emotions. What if we make the colors on this display brighter? What if we play different music in our store? Psychology experiment! You can call it "manipulation" if you want, but you're really just going for cheap connotation points. Almost any editorial decision you could make, on any subject, for any audience, is manipulation. Being systematic about it doesn't make it more so. Do you think you should be asked for consent before being subjected to an A/B test on an ecommerce site? Because you're definitely being manipulated in the relevant sense.
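And here's roughly what the A/B bucketing itself tends to look like on an ecommerce site: users get assigned to a group deterministically and are never asked. The hashing scheme and names below are just my own illustration, not any particular site's implementation:

    import hashlib

    # Illustrative only: deterministic assignment of users to an A/B test,
    # the kind of thing that happens on most sites without explicit consent.
    def assign_variant(user_id, experiment_name="brighter_colors", treatment_fraction=0.5):
        digest = hashlib.sha256(f"{experiment_name}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return "treatment" if bucket < treatment_fraction * 100 else "control"

    print(assign_variant("user_12345"))  # the same user always lands in the same group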
It sounds to me like the objection is not so much that FB ran an experiment as that they published a paper about it. You could argue that that really is the dispositive difference, but, well, nobody is making that argument. And it seems like if they're going to do some research anyway, we should actually encourage them to share their results.
When Facebook tweaks users' news feeds to track engagement based on different algorithms, it's an experiment. When a psychological researcher purposefully emotionally manipulates their test subjects without informed consent, it's unethical.
The controversy is about how the issue is framed, and that the published paper specifically describes tracking engagement based on the manipulation of users' emotional state. If it was real research from the outset, then it required ethical review and informed consent, even under the guise of "anonymous news feed research which may affect what you do or don't see".
Right, but they published the paper based on tweaking that Facebook would ordinarily have done anyway.
One could just look at it as zoologists studying lions (Facebook) interacting with gazelles (the population) in their natural habitat.
Labeling this research as unethical will just stop it from being published, not stop it from happening, since without publication it's just an ordinary business process.
I believe you completely missed the mark on this one...
Facebook themselves understood they needed consent... which is why they added a blurb in their TOS. The only problem... it was 4 months after the "experiment".
This was far beyond simple A/B testing. The complete disregard of ethics, and the downright Facebook fanboy-ism that is going on in this thread is bewildering.
So, to demonstrate that, you need to make an argument that:
1. Describes the limits of what experimentation is ethical without consent (i.e. the relevant way in which A/B testing is different than what FB did) in a consistently applicable way.
2. Justifies those limits from more basic principles.
Perhaps you can make that argument, and I wouldn't mind being convinced (seriously, I have no horse in this race). But as it stands, the handwaving about "manipulation" and the breathless denunciation of an advertising company for being data-driven is pretty weak. You can yell "unethical" as loud as you please and brand people taking the opposite view as "fanboys", but it's not at all convincing.
1) Facebook is not an advertising company, although they are trying to be (and so far have been unsuccessful).
2) Facebook was not just manipulating adverts, but your entire feed, which, up until recently, had been largely organic (what your friends posted, you saw... all of it).
3) Facebook has never discussed the possibility of "testing" on users, and the general assumption that they do does not excuse the practice.
4) A/B testing to see which color button makes people click more often is far different from only displaying emotionally charged posts/images and seeing how users react. This is reminiscent of psychological warfare tactics employed (and now largely banned) by the Vietnam-era CIA.
5) Facebook acknowledged the need for prior consent via implicit TOS agreement... but only after they had already concluded the "experiment" (actually, 4 months after).
6) If another company, say Google, manipulated your inbox without your consent or knowledge, you would likely feel strongly about it. Both Gmail and Facebook are operated by companies that can do whatever they wish -- but this does not make it right to do whatever they wish.
> 1) Facebook is not an advertising company, although they are trying to be (and so far have been unsuccessful).
You seem to be accusing Facebook of having no business model. I believe virtually all of their revenue comes from ads, so it's difficult to say what they are if not an advertising company.
1) That is where a significant portion of their revenue comes from.
2) The News Feed has been algorithmic, showing more than just the newest posts, for a long time. This has been well publicized.
3) Every company with ongoing user engagement tests on its users. If they didn't, they wouldn't get the interaction with users they're after.
4) Much like how the music-in-stores example is reminiscent of the CIA's use of music as torture at Guantanamo. Or maybe, just maybe, there is some nuance to discuss?
5) This is the most valid of your points, but it's not uncommon for legal documents to be updated for more general coverage, and that doesn't necessarily imply the activity was unethical.
Gmail's spam filtering is certainly not run without consent or knowledge, unless you think I should be reviewing each piece of spam individually. It operates without the consent of spammers, but that hardly matters.
Changes to Gmail's spam filtering algorithms, however, do happen without your consent or knowledge. Similarly, changes to the algorithm governing FB's News Feed happen without your consent or knowledge, but you know that the algorithm exists. And like your spam folder, you can switch to "Most Recent" to see what you are missing.
Forget Facebook; maybe you'd like to describe the limits of what experimentation is ethical without consent for scientific study. That's what I'd like to know.
Extreme example (IANAL): a person who uses Facebook heavily and is part of the "more negative stories" group commits suicide. Is Facebook exposed to any legal risk because of this?
IMO, it would have been pretty easy to ask some random users if they wanted to participate; from asking around my group of friends, we all would have opted in because we find it interesting, especially if the findings were going to be published.
Exactly. Most people would be delighted to be asked if they wanted to be part of Facebook's 'Insider' test group or some such, and they don't have to know what they're being tested on ahead of time. The FB researchers certainly knew what standard practice in this area was; any psychology/social science/economics paper that depends on tests or surveys spells out the participation invite first thing in the methodology section.
> Extreme example (IANAL): a person who uses Facebook heavily and is part of the "more negative stories" group commits suicide. Is Facebook exposed to any legal risk because of this?
IANAL either, but a) I'm guessing not, and b) that's not really the ethical question here either way (e.g. it might be legal but unethical, or vice versa, or both, or neither). So I'll focus instead on the ethical question raised by the suicide: I guess I don't see how this is any different from some editor in a newsroom saying, "let's publish more gory crime stories and see what happens". Someone reads a bunch of it and commits suicide. Tragic, obviously, but not really the victim of an unethical experiment. Worse, the paper doesn't even know, since it can't monitor the response. Or: let's say FB never runs this experiment, they never find out that their current algorithm is especially depressing, and all sorts of people kill themselves. What are the ethics of that?
I love how, no matter how slimy or nefarious or shady a company acts, there's always going to be someone here on HN defending their actions. Unbelievable.
You may not understand it because you are evaluating it by the standards of "psychological research", and others are evaluating it by the standards of "advertising industry research".
"Ethics" aren't a universal Kantian imperative, they're contingent. It's ethical, more or less, for you to talk about about your conversation with your acquaintance, but not if you're an attorney and they're your client, unless they've put it on the record before publicly, etc.
Ethical rules usually exist for a normative purpose - e.g., to allow people to converse freely with their lawyers. The psychological research community doesn't care about their work actually working or proving something valuable nearly as much as they care about maintaining their status. Conversely, the advertising industry cares very much about proving its capabilities to its clients. So, different ethical regimes.
Why did Facebook conduct this expressly as psychological research, publishing it in a psychological research journal?
It seems to me that the appropriate ethical framework to apply is that of the research context it was conducted and presented in, not the looser one outsiders wish to see it in. That disparity/conflict of interests is a symptom of the problem here.
I'm setting aside the question of how this experiment even furthers Facebook's business interests. My guess is, it doesn't really.
> The psychological research community doesn't care about their work actually working or proving something valuable nearly as much as they care about maintaining their status.
I was nodding my head and prepared to take you seriously until this sentence. Turns out you're just another HN reader angry at the academic community because you once had a lazy professor or something.
That's a very lazy inference on your part. It's uncontroversial, even within academia, that there is a large amount of statusmongering going on that is orthogonal to "real work". It doesn't imply they don't care at all about the underlying research, but academia is an industry like any other with its own priorities and incentives, and the people who rise to the top are the sort of people who are good at sussing those out.
In particular you can view IRBs and the like as a form of entry-restriction by entrenched actors that tries to keep disruptive research from competing.
The fact that there is a great deal of statusmongering says nothing about the relative importance of the "real work" and the "statusmongering" in the eyes of the people involved. So no, it isn't a lazy inference; the original comment made a mean-spirited and totally unsubstantiated claim, and I pointed that out.
"If a lowly undergrad is expected to get consent before administering any sort of dinky survey, why is Facebook exempt?"
The researchers had IRB approval. And this whole 'scandal' is really just a media experiment to manipulate the emotions of dumb people. If these media reports were phrased like 'Facebook conducts research into affect correlations of user-generated content', which is probably more or less what the original paper actually said, I doubt anyone would care.
"Facebook said that since the study on emotions, it has implemented stricter guidelines on Data Science team research. Since at least the beginning of this year, research beyond routine product testing is reviewed by a panel drawn from a group of 50 internal experts in fields such as privacy and data security."
In fact, that claim is directly contradicted by the article.