Facebook experiments had few limits (wsj.com)
49 points by anigbrowl on July 3, 2014 | 66 comments



Informed consent is a bedrock of ethical psych research. If a lowly undergrad is expected to get consent before administering any sort of dinky survey, why is Facebook exempt? They created a study which purposely manipulated the emotional state of their customers in order to publish results in a scientific journal. So Facebook doesn't have to give a shit about ethical research because they're Facebook? I don't understand how people here can't understand why these protocols are in place and why others see a problem when they are ignored. It was completely unethical.


Eh, so someone made up a bunch of rules and stamped "ethics!" on them, and now, completely independent of the original justification, it's somehow the law of the land. But, at least as applied here, it's pretty bogus.

Think of it this way: there's some sort of algorithm that controls what goes on your Facebook feed. There has to be, basically, or else they just have to show you everything. The algorithm takes in sentiment analysis as part of making those decisions, which seems perfectly reasonable. So, just by building something that interacts with users and affects their emotions, FB has a completely uncontrolled experiment. If they show negative stories vs positive stories, what happens? Right now, they just don't know, which seems really bad. Well, to find out the consequences of what they actually do, they have to dial some knobs up and down and see the effects. If they can't try things and find out what happens, how do you expect them to do anything? Just guess? Even worse, if they just guess, they're still experimenting, but now with way less information.
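
To make the "knob" idea concrete, here's a minimal, purely hypothetical sketch (this is not Facebook's actual ranking code, just an illustration of the mechanism being described): a single weight on a post's sentiment score inside an otherwise ordinary feed-ranking function.

    # Hypothetical sketch of sentiment-weighted feed ranking -- an illustration only,
    # not Facebook's actual algorithm. sentiment_weight is the "knob":
    # > 0 favors positive posts, < 0 favors negative ones, 0 is the neutral default.
    def rank_feed(posts, sentiment_weight=0.0):
        def score(post):
            # base_score: relevance from other signals (recency, affinity, ...)
            # sentiment:  -1.0 (very negative) .. +1.0 (very positive)
            return post["base_score"] + sentiment_weight * post["sentiment"]
        return sorted(posts, key=score, reverse=True)

    posts = [
        {"id": 1, "base_score": 0.9, "sentiment": -0.8},
        {"id": 2, "base_score": 0.8, "sentiment": 0.7},
        {"id": 3, "base_score": 0.7, "sentiment": 0.0},
    ]
    control_feed = rank_feed(posts)                         # weight 0: the status quo
    test_feed    = rank_feed(posts, sentiment_weight=-0.5)  # dial the knob toward negative

"Dialing the knob up and down" for different user buckets and comparing what those users subsequently post is, in essence, the experiment under discussion.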

This is completely normal, and every business does it. They have to answer questions about what effect the things they do have, or get stuck not doing anything. A huge part of what businesses do is interact with people, so a huge part of that effect is actually the impact on their customers' emotions. What if we make the colors on this display brighter? What if we play different music in our store? Psychology experiment! You can call it "manipulation" if you want, but you're really just going for cheap connotation points. Almost any editorial decision you could make on any subject with any audience is manipulation. Being systematic about it doesn't make it more so. Do you think you should be asked for consent before being subjected to an A/B test on an ecommerce site? Because you're definitely being manipulated in the relevant sense.

It sounds to me like the objections are not so much that FB ran an experiment as that they published a paper about it. You could argue that that really is the dispositive difference, but, well, nobody is making that argument. And it seems that if they're going to do some research, we should actually encourage them to share their results.


When Facebook tweaks users' news feeds to track engagement based on different algorithms, it's an experiment. When a psychological researcher purposefully emotionally manipulates their test subjects without informed consent, it's unethical.

The controversy is over the framing of the issue, and over the fact that the published paper specifically mentions tracking engagement based on the manipulation of users' emotional state. If it was real research from the outset, it required ethical review and informed consent, even under the guise of "anonymous news feed research which may affect what you do or don't see".


Right, but they published the paper based on tweaking that Facebook would ordinarily have done anyway.

One could just look at it as zoologists studying lions (Facebook) interacting with gazelles (the population) in their natural habitat.

Branding this research as unethical will just stop it from being published, not stop it from happening, since without publication it is just an ordinary business process.


I believe you completely missed the mark on this one...

Facebook themselves understood they needed consent... which is why they added a blurb in their TOS. The only problem... it was 4 months after the "experiment".

This was far beyond simple A/B testing. The complete disregard for ethics, and the downright Facebook fanboyism going on in this thread, is bewildering.


So, to demonstrate that, you need to make an argument that:

1. Describes the limits of what experimentation is ethical without consent (i.e. the relevant way in which A/B testing is different than what FB did) in a consistently applicable way.

2. Justifies those limits from more basic principles.

Perhaps you can make that argument, and I wouldn't mind being convinced (seriously, I have no horse in this race). But as it stands, the handwaving about "manipulation" and breathless denunciation of an advertising company for being data-driven is pretty weak. You can yell "unethical" as loud as you please and brand people taking the opposite view as "fanboys", but it's not at all convincing.


Well, just some thoughts:

1) Facebook is not an advertising company, although they are trying to be (and so far have been unsuccessful).

2) Facebook was not just manipulating adverts, but instead your entire feed, which up until recently, had been largely organic (what your friends posted, you saw... all of it).

3) Facebook has never discussed the possibility of "testing" on users, and the general assumption that they do does not excuse the practice.

4) A/B testing for which color button makes people click more often is far different from only displaying emotionally charged posts/images and seeing how users react. This is reminiscent of psychological warfare tactics employed (and now largely banned) by the Vietnam-era CIA.

5) Facebook acknowledged the need for prior consent via implicit TOS agreement... but only after they had already concluded the "experiment" (4 months after, in fact).

6) If another company, say Google, manipulated your inbox without your consent or knowledge, you would likely feel strongly about it. Both Gmail and Facebook are operated by companies that can do whatever they wish -- but this does not make it right for them to do whatever they wish.


> 1) Facebook is not an advertising company, although they are trying to be (and so far have been unsuccessful).

You seem to be accusing Facebook of having no business model. I believe virtually all of their revenue comes from ads, so it's difficult to say what they are if not an advertising company.


1) That is where a significant portion of their revenue comes from.

2) The News Feed has been around for a long time in a form that goes beyond simply showing what's newest. This has been well publicized.

3) Every company with ongoing user engagement runs tests on its users. If they don't, they don't get the interaction with users they desire.

4) Much like how the music-in-stores example is reminiscent of the CIA's use of music as torture at Guantanamo. Or maybe, just maybe, there is some nuance to discuss?

5) This is the most valid of your points, but it's not uncommon for legal documents to be updated for more general coverage, and doing so doesn't necessarily imply the activity is unethical.

6) Gmail does. It's called spam filtering.


Gmail's spam filtering is certainly not run without consent or knowledge, unless you think I should be reviewing each piece of spam individually. It operates with the consent of spammers, but that hardly matters.


Changes to Gmail's spam filtering algorithms, however, do happen without your consent or knowledge. Similar to how changes to the algorithm governing FB's News Feed happen without your consent or knowledge, but you know that it exists. Like your spam folder, you can tab to "Most Recent" to see what you are missing.


Forget Facebook; maybe you'd like to describe the limits of what experimentation is ethical without consent for scientific study. That's what I'd like to know.


Extreme example (IANAL): a person who uses Facebook heavily and is part of the "more negative stories" group commits suicide. Is Facebook exposed to any legal risk because of this?

IMO, it would have been pretty easy to ask some random users if they wanted to participate; from asking around my group of friends, we all would have opted in because we find it interesting, especially if the findings would be published.


Exactly. Most people would be delighted to be asked if they wanted to be part of Facebook's 'Insider' test group or somesuch, and they don't have to know what they're being tested on ahead of time. The FB researchers certainly knew what standard practice in this area was: any psychology/social science/economics paper that depends on tests or surveys spells out the participation invite first thing in the methodology section.


> Extreme example (IANAL): a person who uses Facebook heavily and is part of the "more negative stories" group commits suicide. Is Facebook exposed to any legal risk because of this?

IANAL either, but a) I'm guessing not, but b) not really the ethical question here either way (e.g. it might be legal but unethical or vice versa, or both, or neither). So I'll focus instead on the ethical question raised by the suicide: I guess I don't see how this is any different from some editor in a newsroom saying, "let's publish more gory crime stories and see what happens". Someone reads a bunch of it and commits suicide. Tragic, obviously, but not really the victim of an unethical experiment. Worse, the paper doesn't even know, since it can't monitor the response. Or: let's say FB never runs this experiment and they never find out that their current algorithm is especially depressing and all sorts of people kill themselves. What are the ethics of that?


I love how, no matter how slimy or nefarious or shady a company acts, there's always going to be someone here on HN defending their actions. Unbelievable.


"How do you expect them to do anything?"

By getting informed consent before experimenting on their users? You act like this is some outrageous bar they have to cross: it isn't.


You may not understand it because you are evaluating it by the standards of "psychological research", and others are evaluating it by the standards of "advertising industry research".

"Ethics" aren't a universal Kantian imperative, they're contingent. It's ethical, more or less, for you to talk about about your conversation with your acquaintance, but not if you're an attorney and they're your client, unless they've put it on the record before publicly, etc.

Ethical rules usually exist for a normative purpose - e.g., to allow people to converse freely with their lawyers. The psychological research community doesn't care about their work actually working or proving something valuable nearly as much as they care about maintaining their status. Conversely, the advertising industry cares very much about proving their capabilities to their clients. So, different ethical regimes.


Why did Facebook conduct this expressly as psychological research, publishing it in a psychological research journal?

It seems to me that the appropriate ethical framework to apply is that of the research context it was conducted and presented in, not a looser one outsiders wish to see it in. That disparity/conflict of interests is a symptom of the problem here.

I'm setting aside the question of how this experiment even furthers Facebook's business interests. My guess is, it doesn't really.


> The psychological research community doesn't care about their work actually working or proving something valuable nearly as much as they care about maintaining their status.

I was nodding my head and prepared to take you seriously until this sentence. Turns out you're just another HN reader angry at the academic community because you once had a lazy professor or something.


That's a very lazy inference on your part. It's uncontroversial, even within academia, that there is a large amount of statusmongering going on that is orthogonal to "real work". It doesn't imply they don't care at all about the underlying research, but academia is an industry like any other with its own priorities and incentives, and the people who rise to the top are the sort of people who are good at sussing those out.

In particular you can view IRBs and the like as a form of entry-restriction by entrenched actors that tries to keep disruptive research from competing.


The fact that there is a great deal of statusmongering says nothing about the relative importance of the "real work" and the "statusmongering" in the eyes of the people involved. So no, it isn't a lazy inference; the original comment made a mean-spirited and totally unsubstantiated claim and I pointed that out.


The field receives a pretty scathing review in The Cult of Statistical Significance. The summaries offered there are pretty damning: http://www.amazon.com/The-Cult-Statistical-Significance-Econ...


"If a lowly undergrad is expected to get consent before administering any sort of dinky survey, why is Facebook exempt?"

The researchers had IRB approval. And this whole 'scandal' is really just a media experiment to manipulate the emotions of dumb people. If these media reports were phrased like 'Facebook conducts research into affect correlations of user-generated content', which is probably more or less what the original paper actually said, I doubt anyone would care.


This statement is not supported by the article:

"Facebook said that since the study on emotions, it has implemented stricter guidelines on Data Science team research. Since at least the beginning of this year, research beyond routine product testing is reviewed by a panel drawn from a group of 50 internal experts in fields such as privacy and data security."

In fact, it is directly contradicted by the article.


I thought the researcher in this case was Jeff Hancock, who had approval from the Cornell IRB?


The title of the published research paper is: "Experimental evidence of massive-scale emotional contagion through social networks."


> The little-known group was thrust into the spotlight this week by reports about a 2012 experiment in which the news feeds of nearly 700,000 Facebook users were manipulated to show more positive or negative posts. The study found that users who saw more positive content were more likely to write positive posts, and vice versa.

Ok, this seems like it should be about as controversial as a supermarket that plays downbeat music and tests to see if sales decrease.

That's not to say that the study wasn't manipulative, involuntary, and barely consented to, but hey, that's Facebook in a nutshell.

The tone of this article is "these studies happened with no oversight!" but it never suggests who would oversee Facebook, except Facebook, which doesn't really seem like oversight at all.


Quite a few conferences now require that experiments involving human subjects be reviewed by an ethics committee of some kind before being carried out. Universities have such committees, and they oversee themselves pretty effectively.

So no, I do not think that Facebook overseeing its own researchers is so far fetched. The ethical review board would probably consist of a combination of lawyers, PR people, and people with research backgrounds. The lawyers and PR folks would not have seen this experiment as a "great opportunity for research that was previously impossible" so much as "lawsuit bait" and "PR nightmare."


That's a good point. It is possible for a panel of professionals to qualify and limit the kinds of work that are ethical. That's what ethics are! Accountants, lawyers, and doctors are all good at that.

The critical difference, in my mind, is that academic research takes place more or less in the public sphere. Even when it's not, an academic can always be counted on to challenge and discredit a fellow academic.

An in-house council of research practice could exist, but it would take a group of professors with reputations on the line. And then again, to my main point, Facebook's central business practices revolve around manipulating users in ways that are even _less_ concerned about the user's well-being.


Sometimes there's no obvious solution to a problem and all you can do is call attention to the problem. This is definitely one of those cases: The issue isn't that there's some obvious source of oversight that was disregarded - the issue is that dangerous and potentially harmful experimentation is carried out without oversight. Maybe sufficient oversight is impossible in these scenarios, but if so it should be an informed decision made after long, thoughtful consideration, probably at something approaching an industry level.


First, the article mentions that Facebook does have multiple levels of internal oversight. I really don't think this kind of research can be externally regulated, and attempting to do so would just discourage publication of the findings. Businesses do controlled experiments on their customers all the time, with the most relevant example being A/B testing. The danger is extremely limited, and it seems like overreaching to have some sort of international IRB for all experiments that online businesses may need to conduct to optimize the content displayed to their visitors. Publishing research is a good thing.


Quotes in the article from research team employees suggested the opposite: that experiments were often run without anyone else on the team even knowing about it. That doesn't sound like oversight to me.


I'm not a big fan of Facebook, but they are absolutely in the clear on this one IMO. Your music analogy is spot on. WSJ needed an article with the keyword "Facebook" in it so that they could get more visitors to their site.

I doubt that anyone outside of the authors of these articles is actually upset over this, but if there are such individuals, they are just the normal cadre of people that are upset because they like being upset and have nothing better to do. Anyone legitimately bothered by this should go join a parents group and start writing letters to TV execs about how they and their children were scarred for life by the last wardrobe malfunction they saw on live TV.


My grandma died a few months ago. So my feed was pretty depressing with talk of funeral arrangements and things like that. The fact that they might have made my feed even more depressing when I was already hurting, to perform an experiment without my consent, is pretty fucking infuriating.


Would you feel the same way if a grocery store played a sad song when you were depressed?


The key here is that the majority of the content on your Facebook feed is what your friends and family post. Your friends' and family's posts have massive meaning to your life, and you will take on board their sentiment more than, say, an ad's.

The meaning behind the music in a supermarket is negligible - you don't give any weight to songs played in a supermarket other than "I like this song" or "this song is annoying".

In a sense this whole fiasco is a good thing. The feed, for me, has now become less attractive as a "news source" for what my mates are up to. And that can only be a good thing. I dislike how much everyone (including myself) has taken to Facebook as a means of friends/family communication.


And this is the key issue - Facebook is not some buff actor telling you that you're inadequate if you don't buy a certain aftershave. Facebook is the lens through which people see a significant and important portion of their world. Friends, family, etc.

Artificially adjusting what is seen through that lens with the express intent of modifying someone's mood is horrifying by itself. It's even less acceptable when it is done without consent.

I'm very surprised that so many HN commenters seem to be unable to see, or unwilling to accept, this crucial and key difference.


What a fucking terrible analogy. Do you meet up with all your family and friends to discuss things at the grocery store?

Here's a non-shit analogy: what if Verizon only let you receive depressing phone calls?


Your analogy doesn't make sense because Verizon doesn't select which calls you receive. A phone company that drops any calls (positive or negative) is just defective. Facebook always curates what they show on your newsfeed. The neutral case where the experiment isn't being run still involves Facebook picking and choosing what to show you.

If you want an analogy relating to socializing, how about a dating site experimenting with showing you more or less attractive people? Is that terrible and unethical?


Depends on their motivation. Facebook wanted to make me depressed if I was part of the selected group.


I'm glad you said this, because the kneejerk reaction is always "how dare they!" when this type of invasive data mining and experience A/B testing has happened in myriad industries for decades.

As you say, what "oversight" is required of a corporation doing passive demographic or behavioral studies of its users/customers? Should we go nuts if we see Google Analytics or Omniture tags on any given Web site?


IMO, A/B testing can easily be unethical, in much the same way that Skinner-box "free"-to-play games are generally considered unethical.

Suppose a gym did A/B testing specifically designed around reducing the number of active members and maximizing the number of inactive members.


Suppose they did. Would that be unethical? That's essentially their business model. If optimizing their business model is unethical then their business model must be unethical at its core.


I'm genuinely scared by the number of people who do not understand what is wrong with Facebook doing this without consent. I guess that was just a classic case of Projection on my part, thinking that other people felt the same way I do about this.

One of the questions this raises for me, though, is whether these people who don't see this as an issue feel that way because they've already submitted to the oppression that is Facebook. Is it the case that Facebook has already "broken" them, as you would a steed?


What a brilliant and effective tool for propaganda and other forms of social control.

We should be sure to only show people patriotic words on July 4th so they can think patriotic thoughts and suffer none of that negative criticism stuff about the government they are so abused with normally. Maybe we should do that for all National holidays. Or maybe we should just do that all the time.

Good thing it wasn't a military branch providing funding for this. That would be scary.


I've honestly been ignoring all of the articles on this subject up to now. Finally decided to read one and see what all of the fuss is, and I still don't see the point. Are we supposed to be upset that they manipulated which things show up on the news feed in what order or something? At the end of the day, it's still just Facebook.


I was a bit surprised by the opener:

> Thousands of [Facebook] users received an unsettling message two years ago: They were being locked out of the social network because Facebook believed they were robots or using fake names. To get back in, the users had to prove they were real.

I vaguely remember the fuss about this at the time on HN, and was a bit disturbed to discover that it was staged. I haven't used Facebook for more than an hour or two since 2010, but for many other people it seems to be their 'home on the internet' and the notion that they would be arbitrarily locked out of their accounts under false pretenses is a troubling one.


I cannot recall in which book I read this, but there was a similar experiment conducted before. (Possibly an NLP book.)

A University professor asked students to solve an anagram. After they were done, they had to walk into the professor's office to tell him the word. In his office he'd be pretending to have a conversation with a visitor.

The words the students had to solve were either positive, negative or neutral.

The result was that students who had a negative word were far more likely to quickly interrupt the professor's conversation with the visitor than students who had a positive or neutral word.

After the students told the professor the word, he asked the students if they believed the word had an impact on their emotion. All of the students said no, but the data showed the negative words had an impact.


I wish this had all come out in time to have been a part of this week's "This American Life." The episode was called "The Human Spectacle"[0]. There are so many interesting parallels with several of the pieces presented (most notably, Acts 1 and 2)

It's staggering to think that out of all the Facebook users, 700,000 is still only a small portion of the total available users. I wonder what Facebook's ongoing role will be as a source of anthropological data?

0: http://www.thisamericanlife.org/radio-archives/episode/529/h...


> "They're always trying to alter peoples' behavior."

Just like every customer of the Wall Street Journal that's not a subscriber. All of the sophisticated advertisers for the Journal, online and in print, are running similar tests.


This will go down as a case study on how not to publish a study for the mainstream. The reality is that companies run experiments similar to FB's all the time in order to optimize revenue. But most companies don't publish their results like FB chose to do.


I generally refrain from posting a comment, but the current debate is just getting ridiculous. It's more about people hating Facebook than about any real ethical issues. The worst part is how IRBs are invoked as if they were filled with ethical gods. A similar situation has happened before, as explained in this NYTimes article by Atul Gawande: http://www.nytimes.com/2007/12/30/opinion/30gawande.html?_r=...

The key question is at what point a certain method is a quality control decision vs. a research experiment. What Facebook did was an internal quality investigation. What are they supposed to do, show posts at random? For this to qualify as an "experiment" we need a baseline, and in reality there is no such baseline.


> What Facebook did was an internal quality investigation.

If it was an internal quality investigation, why was the experiment designed by Cornell researchers, and published in PNAS?


Yeah, I seem to be in the minority in that I find it hard to get upset here. In fact, the lesson I take away is that if you, as a company (especially a highly visible one), do any sort of A/B testing which might be remotely seen as manipulating behavior by anyone (which covers broad ground), you probably shouldn't make it public.

I'm not sure I see how this is all that different from running a bunch of ads with different messages and/or emotional content and seeing how they differentially perform.


No...

You should tell your users and include it in your TOS. (not do it, then add it to your TOS after the fact).


Yup, that's the big issue in my opinion. Their TOS never had research as a condition that you accept. When Reddit decided to use their data-set for research and make it available, they had a public blog post explicitly calling it out and had an option to exclude your data from research. Naturally I opted out. I was informed of the change and I acted on it. Those users on Facebook never agreed to the research component of the ToS (because it didn't exist yet), did not know they were involved, and were included in a PNAS study that had Cornell researchers involved.


What's next? Facebook studies showing that users who were shown more baby pictures in the news feed became more likely to have babies within the next 2 years?


This reminds me of the short story titled "The Wave."



:D


I'm just thinking, let's say McDonald's was putting something in our food to alter our emotional state and measuring the outcomes.

Would food industry people be coming out in support of them the way it seems half the people here are fine with what Facebook is doing...? "Hey, it says Happy Meal on the box, and they're delivering!"


It's really incredible; the number of people with full-blown Stockholm syndrome in this thread is staggering.

Why do so many people here think this kind of thing is okay when Facebook does it? Is it simply because they feel kinship to a tech-related company?


> Why do so many people here think this kind of thing is okay when Facebook does it? Is it simply because they feel kinship to a tech-related company?

Yup. It's like asking in a wolf forum if having the sheep for dinner is considered ethical.


What exactly is it that you think chemicals do when consumed by humans?


I think the "something" he is referring to is more akin to Valium and not sugar.





