Ah, great question. The pattern I'm pointing to is a little subtle. I think at this point it pays to be extremely sceptical of Zuckerberg.
(Quoting from Zuckerberg's original post)
> "And if social media were as responsible for polarizing society as some people claim, then why are we seeing polarization increase in the US while it stays flat or declines in many countries with just as heavy use of social media around the world?"
This certainly needs a citation. Ethnic conflict in Ethiopia, division in Britain around Brexit, and US polarization would seem to be obvious counterexamples. It should also be said that correlation is not causation. And note the subtle bait and switch: the claim being rebutted is not 'social media causes polarization everywhere in the world' but rather "Facebook causes polarization in certain areas," or "Facebook's lack of robust, well-staffed safety mechanisms allows it to be exploited to cause polarization in certain areas."
> "If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space -- even ones larger than us?"
There is more than one way to care about a research program: the absolute amount spent on X is not the same thing as its relative budget priority. For a company with tens of billions of dollars in annual profit, it would be more surprising if it had no research program at all. Zuckerberg presents no specifics here -- what percentage of revenue goes to the research program? How many people are employed to fight harmful content, and how does that compare to the number employed to drive growth? And what's the point of research if the results are not acted upon? The whistleblower was pretty clear that the research doesn't matter: suggested fixes are shelved even when they would cost less than a one-percentage-point hit to the core engagement metrics. Does Zuckerberg have any specific facts about how often civic-integrity or safety suggestions were prioritized over the core metrics, beyond the one example he cites (Meaningful Social Interactions)?
Speaking of Meaningful Social Interactions (MSI), the whistleblower specifically said that there is a foundational problem with how MSI is defined, because it counts the number of comments a post receives. Even without anyone intending it, controversial posts attract more comments and therefore score higher. Zuckerberg cites no evidence about the relative share of comments that are angry versus other emotions, or how that share has changed.
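To make the structural problem concrete, here is a toy sketch in Python. The weights and field names below are invented for illustration; Facebook's actual MSI formula is not public. The point is only structural: any score that credits comments will, all else being equal, rank posts that provoke replies above posts that merely please.

```python
# Toy engagement-weighted ranking score. The weights and fields
# are invented for illustration; the real MSI formula is not public.
def engagement_score(post):
    return (1.0 * post["likes"]
            + 5.0 * post["comments"]    # comments counted toward the score
            + 10.0 * post["reshares"])

calm_post = {"likes": 200, "comments": 5, "reshares": 2}     # score: 245
angry_post = {"likes": 40, "comments": 120, "reshares": 30}  # score: 940

# The controversial post wins the ranking despite far fewer likes,
# and nothing in the score distinguishes angry comments from kind ones.
assert engagement_score(angry_post) > engagement_score(calm_post)
```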
> "That said, I'm worried about the incentives that are being set here. We have an industry-leading research program so that we can identify important issues and work on them. It's disheartening to see that work taken out of context and used to construct a false narrative that we don't care. If we attack organizations making an effort to study their impact on the world, we're effectively sending the message that it's safer not to look at all, in case you find something that could be held against you."
Zuckerberg is complaining about incentive problems? The whistleblower has said that Facebook's very policies make it "not care" even if individuals do. This is also what comes across in the WSJ articles. In other words: the narrative isn't false, and it has been documented. His point about the specific incentive problem created by leaked research is interesting, but it pits an abstract concern (for other companies' research programs) against the very real, well-documented harm that Facebook a) is doing now and b) is, per the whistleblower, unequipped to solve alone.
Also, at one point in Zuckerberg's missive, he shifts the locus of responsibility from Facebook to Congress: "..at some level the right body to assess tradeoffs between social equities is our democratically elected Congress. For example, what is the right age for teens to be able to use internet services?" Deciding what the "right" age is can take several forms. A panel of seasoned jurists, child psychologists, policy experts, etc., can spend a long time debating what the "right" age is in the universal sense. Or Facebook could take a stand, err on the side of caution, say that 17 is a better age than 13, and detail why it thinks so.
I'm British. I don't think anyone in the UK has tried to argue that disagreements over Brexit are caused by Facebook. Actually the whole idea would sound kind of absurd. People disagreed over Brexit because:
1. Some people disagree fundamentally over the nature of government and how power should function.
2. Some people were afraid of various kinds of "punishment" or instability that they were told leaving would cause, even if they would have supported it in the abstract.
Neither of these has anything to do with messaging apps or social media. As for ethnic conflict in Ethiopia - ?!?! - seriously?! That part of Africa has been a hotbed of bloody tribal conflict for my entire life. It's driven by the local culture; I seriously doubt anyone there gives the tiniest shit what people post on Facebook.
This is Mark's point. It's not a bait and switch to point out fundamental inconsistencies between other people's theories and the wider world. The idea that Facebook is some unique social evil that causes people to disagree just looks very odd from outside the USA, looking in. It's being made a scapegoat for US social problems. Everywhere else when people fight, they are well aware what they're fighting about and why.
Re: research. You seem to be arguing that yes, they spend a lot of money on this issue but it's not enough, whilst also admitting you don't know how much they spend. You're just convinced it's too low. But this is meaningless: research programs have natural costs and you can't simply double a budget and get ... what? Conclusions that are twice as "good"? Same conclusions twice as fast? It doesn't work like that.
Nor is research guaranteed to result in actionable outcomes. Look at their conclusions around Instagram. Some teenage girls said it made them feel worse, but more said it made them feel better. What's the actionable outcome here? Unless there was an incredibly specific kind of thing the girls who felt worse were seeing, there probably isn't any plausible action; and if there were some specific kind of content that made people feel bad, removing it would just be used as further proof of their guilt: they're manipulating the feed to increase engagement!
The rest of this thread is all like that. You start with a take that is itself controversial and extreme, like "people talking about controversial topics is inherently bad and Facebook should suppress it". Then when Facebook pushes back and points out that actually, lots of people like talking to each other, including about politics, it is cast as the villain.
This has all the trappings of a purity spiral. No matter how much effort Facebook makes, it's never considered enough. Activists who aren't quite sure what they're trying to fight or why insist on ever more moderation, in the hope that somehow this will cause other Americans to all start agreeing with them. The result is stuff like XCheck: an unstable downward spiral in which ever more aggressive moderation policies force ever more people to be exempted from them, lest the incoherence become too obvious.
Thanks for your comment. You make some good points. Zuckerberg's comment about the incentives created by leaking research is certainly worthy of consideration. And while I don't have first-hand experience with Brexit, I don't mean to claim that the disagreements were caused by FB, only that FB may have had a role in making people more entrenched in their positions.
One of the points I'm making is that Zuckerberg's statement lacks specifics in the form of numbers and data. I think it'd be interesting to read a point-by-point rhetorical analysis of his statement.
Also, because of this, yes, I don't know how much Facebook spends on research. I agree that though money and research quality are quite likely correlated, it's very hard to say by how much. That being said, I care a whole lot more about the values of the company. Haugen's testimony paints a textbook picture of a values problem. The whistleblower has repeatedly said, under oath, that Facebook understaffs its security and safety teams, and that they turned off the safety and integrity protections after the election, and more.
It's also true that civic divisions in the US -- not to mention other social problems -- run much deeper than Facebook. One mechanism people like me are concerned about is how users are recommended accounts to follow, or content to view, in ways that either deepen division or lead them toward more extreme versions of their existing views. In her testimony, Haugen gave the example of how indicating an interest in healthier eating on IG can lead to recommendations of anorexia / eating-disorder content. Saying that Facebook's engagement-based ranking has nothing to do with promoting civil divisions seems to me like saying that the YouTube recommendation algorithm a few years back had nothing to do with the rise of the modern flat-earth movement. Researchers have evidence that it did [0].
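As a minimal sketch of that drift mechanism (the topic graph and engagement numbers below are invented, not any real platform's data or algorithm): a recommender that greedily surfaces the highest-engagement topic adjacent to a user's current interest can walk, step by step, from a benign interest to an extreme one whenever extremity and engagement correlate.

```python
# Hypothetical topic graph: each topic's "neighbors" are topics the
# system considers related. All names and numbers are invented.
NEIGHBORS = {
    "healthy recipes":  ["meal prep", "calorie counting"],
    "calorie counting": ["meal prep", "extreme dieting"],
    "extreme dieting":  ["calorie counting", "pro-ana content"],
}
ENGAGEMENT = {  # average engagement per topic (invented)
    "meal prep": 1.0,
    "calorie counting": 2.0,
    "extreme dieting": 3.5,
    "pro-ana content": 5.0,
}

def recommend(topic: str) -> str:
    # Greedy step: among related topics, surface the one users
    # engage with most. No step ever considers user wellbeing.
    return max(NEIGHBORS[topic], key=ENGAGEMENT.get)

topic = "healthy recipes"
for _ in range(3):
    topic = recommend(topic)
    print(topic)  # calorie counting, then extreme dieting, then pro-ana content
```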
As for ethnic conflict in Ethiopia, I only bring it up because of Haugen's testimony. As this Guardian article puts it, "Haugen warned that Facebook was 'literally fanning ethnic violence' in places such as Ethiopia because it was not policing its service adequately outside the US." [1]. Your comment does make me wonder how many people in Ethiopia have access to the internet though.
This is a slight tangent, but it's also worth mentioning, re: IG and mental health, that we don't know about other research, such as any further attempts at a causal study -- most of what's been cited is correlational and comes from small-sample interviews. So it would be nice to see larger and more rigorous studies. I don't believe the research should stop with the question "Is Instagram harmful?" Of course that will have a mixed answer when dealing with large masses of people. "Who is susceptible to being harmed?", "By what mechanisms is IG harmful to some people?", etc. are the questions that need answers.
I also disagree that people are so biased against FB/IG that anything they do will be seen in a bad light. Were they to tweak the IG recommendation algorithm so that an interest in healthier eating did not lead to anorexia content, people like myself would applaud. And though I am not an activist, I'm generally interested in (the enabling of) wholesome discussions and interactions, i.e. things that promote a feeling of being in a community / society rather than feeling apart from it.
I think part of the disagreement here is that you see a whistleblower, but I see an activist. One whom, frankly, if I were Zuck, I would have fired or simply never hired in the first place.
Arguing that Facebook causes tribal conflict in Ethiopia by not "policing aggressively enough" or "understaffing" teams is not, to me, the argument of a whistleblower. It's the argument of someone who has totally lost perspective, of a totalitarian who believes that any and all of humanity's ills can be fixed by manipulating communication platforms. It's no different to saying "if the phone company cuts off any phone call in which people are arguing, there will be no more arguments and everyone will be happy". When phrased in terms of slightly older-gen tech, it is obviously absurd.
"Were they to tweak the IG recommendation algorithm so that an interest in healthier eating did not lead to anorexia content, people like myself would applaud"
Good on you for being consistent then! Sadly that seems to be very rare. Look at Zuck's post. He points out that Facebook did in fact make changes to prioritize stories from friends and family, even though that reduced their income and reduced the amount people used the site. In other words, a lot of users don't actually care much about their cousin's cat pictures, but do care a lot about civics or, phrased another way, "divisive politics".
Yet it doesn't seem to have done them any good. For people like Haugen and a depressing number of HN posters it's not enough to re-rank nice safe family stories about new babies. For them Facebook also has to solve teenage depression, war in Africa and probably world hunger whilst they're at it. And if they aren't it's because they're "under-staffing" or refusing to "adequately police" things.
My perception is that people aren't expecting Facebook to solve teenage depression, but to stop contributing to it if it is. FB's research has been criticized by scientists as being of poor quality [0], and Zuckerberg claims the findings were cherry-picked. This is actually good news for FB if true. Should they partner with neutral, third-party university research teams, as well as commit to a transparent investigation, they'll be able to clear things up. Not everyone would agree, but I believe that many people are capable of changing their minds when presented with new evidence.
The metaphor of a phone company cutting off an argument is an interesting one. I agree that people arguing is a fact of society / nature, and I also agree that cutting off a phone call seems like an absurd way to try and solve the larger problem. But at the same time, I don't think the metaphor fully applies, for the following reasons:
First, a phone call is one-to-one communication, while Facebook is one-to-many. It's rare, if not unheard of, for strangers to call each other to say what they think about, for example, a NYT article. Second, a phone network has no recommendation system pushing "engaging" subjects, where engagement can be defined in terms of how controversial a subject is. Third, only 9% of FB users speak English, and Haugen testified that the safety features, ranking-algorithm tweaks, and tooling are not as good (potentially drastically worse?) in non-English languages.
Most people would argue that phone companies have some responsibility to prevent spam calls, similar to how email services prevent or flag spam. These are network-level actions, and a lot of Haugen's testimony was about how FB was being irresponsible in this regard.