Extreme example (IANAL): a person who uses Facebook heavily and is part of the "more negative stories" group commits suicide. Is Facebook exposed to any legal risk because of this?
IMO, it would have been pretty easy to ask some random users if they wanted to participate; from asking around my group of friends, we all would have opted in because we'd find it interesting, especially if the findings were going to be published.
Exactly. Most people would be delighted to be asked to be part of Facebook's 'Insider' test group or some such, and they wouldn't have to know what they were being tested on ahead of time. The FB researchers certainly knew what standard practice in this area was; any psychology/social science/economics paper that depends on tests or surveys spells out the participation invite first thing in the methodology section.
> Extreme example (IANAL): a person who uses Facebook heavily and is part of the "more negative stories" group commits suicide. Is Facebook exposed to any legal risk because of this?
IANAL either, but a) I'm guessing not, and b) that's not really the ethical question here either way (it could be legal but unethical, or illegal but ethical, or both legal and ethical, or neither). So I'll focus instead on the ethical question raised by the suicide: I don't see how this is any different from some editor in a newsroom saying, "let's publish more gory crime stories and see what happens." Someone reads a bunch of it and commits suicide. Tragic, obviously, but not really the victim of an unethical experiment. Worse, the paper doesn't even know, since it can't monitor the response. Or: say FB never runs this experiment, never finds out that its current algorithm is especially depressing, and all sorts of people kill themselves. What are the ethics of that?